Beginner's Guide to SEO [Search Engine Optimization]

At its core, search engine optimization (SEO) is about increasing your website's visibility in the organic search results of the major search engines.

To get that visibility, you need to understand three basic components:

What kind of content people want or need.

How search engines work.

How to properly promote and optimize your website.

While search engines and technology are always evolving, there are some underlying fundamentals that have remained unchanged since the early days of SEO.

That's why, in collaboration with some of the leading authorities and experts in the field, we created this in-depth overview and tutorial to define SEO for aspiring SEO professionals and explain how search engine optimization actually works today.

SEO basics for beginners

Have you heard of Maslow's hierarchy of needs? It's a theory of psychology that prioritizes the most fundamental human needs (such as air, water, and physical safety) over more advanced needs (such as esteem and social belonging). The theory is that you can't fulfill the needs at the top without first ensuring the more fundamental needs are met. Love doesn't matter if you don't have food.

Our founder, Rand Fishkin, created a similar pyramid to explain how people should act on SEO, and we've affectionately called it "Mozlow's SEO hierarchy of needs."

This is what it looks like:

[Figure: Mozlow's SEO hierarchy of needs]

As you can see, the foundation of good SEO starts with ensuring crawl accessibility and works from there.

With this beginner's guide, we can follow these seven steps to successful SEO:

Crawl accessibility so engines can read your website

Compelling content that answers the searcher's query

Keyword optimized to attract searchers and engines

Great user experience, including fast load speed and compelling UX

Share-worthy content that earns links, citations, and amplification

Title, URL, and description to draw high CTR in the rankings

Snippet/schema markup to stand out in SERPs

We will spend time on each of these areas throughout this guide, but we wanted to introduce it here because it offers a glimpse into how we structure the guide as a whole.

What is SEO?

SEO stands for "search engine optimization". It is the practice of increasing both the quality and quantity of website traffic, as well as exposure to your brand, through unpaid (also known as "organic") search engine results.

Despite the acronym, SEO is as much about people as it is about the search engines themselves. It's about understanding what people are searching for online, the answers they're looking for, the words they're using, and the type of content they're wanting to consume. Knowing the answers to these questions will allow you to connect with people searching online for the solutions you offer.

If knowing your audience's intent is one side of the SEO coin, delivering it in a way search engine crawlers can find and understand is the other. In this guide, you'll learn how to do both.

What does that word mean?

If you're having trouble with any of the definitions in this chapter, be sure to open our SEO glossary for reference.

Search engine basics

Search engines are answering machines. They examine billions of pieces of content and evaluate thousands of factors to determine which content is most likely to answer your query.

Search engines do all of this by discovering and cataloging all the content available on the Internet (web pages, PDFs, images, videos, etc.) through a process known as "crawling and indexing," and then ordering it by how well it matches the query in a process we refer to as "ranking." We'll cover crawling, indexing, and ranking in more detail in Chapter 2.

What search results are "organic"?

As we said above, organic search results are those that are earned through effective SEO, not paid for (that is, not advertising). These used to be easy to spot: the ads were clearly labeled as such, and the remaining results generally took the form of "10 blue links" listed below them. But with the way search has changed, how can we spot organic results today?

Today's search engine results pages, often referred to as "SERPs," are packed with more advertising and more dynamic organic result formats (called "SERP features") than ever before. Some examples of SERP features are Featured Snippets (or Response Boxes), People Also Ask Boxes, Image Carousels, etc. New SERP features keep popping up, driven largely by what people search for.

For example, if you search for "Denver weather," you will see a weather forecast for the city of Denver directly in the SERP instead of a link to a site that might have that forecast. And if you search for "pizza Denver" you will see a "local package" result consisting of pizzerias in Denver. Convenient, right?

It is important to remember that search engines make money from advertising. Their goal is to better solve searchers' queries (within SERPs), to keep searchers coming back, and to keep them on the SERPs longer.

Some SERP features on Google are organic and can be influenced by SEO. These include featured snippets (a promoted organic result that displays an answer within a box) and related questions (also known as "People Also Ask" boxes).

It's worth noting that there are many other search features that, even though they aren't paid advertising, typically can't be influenced by SEO. These features often pull data from proprietary sources, such as Wikipedia, WebMD, and IMDb.

Why SEO is important

While paid advertising, social media, and other online platforms can drive traffic to websites, the majority of online traffic comes from search engines.

Organic search results cover more digital real estate, appear more credible to savvy searchers, and receive far more clicks than paid ads. For example, of all US searches, only ~2.8% of people click on paid advertisements.

Bottom line: SEO has ~20 times more traffic opportunity than PPC on both mobile and desktop.

SEO is also one of the only online marketing channels that, when configured correctly, can continue to pay dividends over time. If you provide a solid piece of content that deserves to rank for the right keywords, your traffic can increase over time, while advertising needs ongoing funding to drive traffic to your site.

Search engines are getting smarter, but they still need our help.

Optimizing your site will help you provide better information to search engines so that your content can be indexed and displayed correctly in search results.

Should I hire an SEO professional, consultant or agency?

Depending on your bandwidth, willingness to learn, and the complexity of your website(s), you could do some basic SEO yourself. Or you may find that you prefer the help of an expert. Either way is fine!

If you end up seeking the help of an expert, it is important to know that many agencies and consultants "provide SEO services", but their quality can vary widely. Knowing how to choose a good SEO company can save you a lot of time and money, as wrong SEO techniques can harm your site more than they will help.

White hat vs black hat SEO

"White hat SEO" refers to SEO techniques, best practices, and strategies that abide by search engine rules; its primary focus is to provide more value to people.

"Black hat SEO" refers to techniques and strategies that attempt to spam/fool search engines. While black hat SEO can work, it puts websites at great risk of being penalized and/or de-indexed (removed from search results) and has ethical implications.

Penalized websites have ruined businesses. It's just another reason to be very careful when choosing an SEO expert or agency.

Search engines share similar goals with the SEO industry

Search engines want to help you be successful. In fact, Google even has a Search Engine Optimization Starter Guide, much like this Beginner's Guide. They are also quite supportive of the efforts of the SEO community. Digital marketing conferences, such as Unbounce, MNsearch, SearchLove, and Moz's own MozCon, regularly attract engineers and representatives from major search engines.

Google helps webmasters and SEOs through their Webmaster Central Help Forum and by hosting live office-hours hangouts. (Bing, unfortunately, shut down their Webmaster Forums in 2014.)

While the webmaster guidelines vary from search engine to search engine, the underlying principles are the same: don't try to fool the search engines. Instead, give your visitors a great online experience. To do so, follow the search engine guidelines and comply with the user's intent.

Google Webmaster Guidelines

Basic principles:

  • Make pages primarily for users, not for search engines.
  • Don't deceive your users.
  • Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you'd feel comfortable explaining what you've done to a website to a Google employee. Another useful test is to ask, "Does this help my users? Would I do this if search engines didn't exist?"
  • Think about what makes your website unique, valuable, or engaging.

Things to avoid:

  • Automatically generated content
  • Participating in link schemes
  • Creating pages with little or no original content (i.e. copied from elsewhere)
  • Hidden text and links
  • Cloaking - the practice of showing search engine crawlers different content than visitors
  • Doorway pages - pages created to rank well for specific searches in order to funnel traffic to your website

Bing Webmaster Guidelines

Basic principles:

  • Provide clear, deep, engaging, and easy-to-find content on your site.
  • Keep page titles clear and relevant.
  • Links are regarded as a signal of popularity, and Bing rewards links that have grown organically.
  • Social influence and social shares are positive signals and can have an impact on your organic ranking over the long term.
  • Page speed is important, along with a positive, useful user experience.
  • Use alt attributes to describe images, so that Bing can better understand the content.

Things to avoid:

  • Thin content, pages showing mostly ads or affiliate links, or pages that otherwise redirect visitors away to other sites will not rank well.
  • Abusive link tactics that aim to inflate the number and nature of inbound links, such as buying links or participating in link schemes, can lead to de-indexing.
  • Messy URL structures: dynamic parameters can clutter your URLs and cause duplicate content problems, so keep URLs clean, concise, descriptive, and keyword-rich where possible, and avoid non-letter characters.
  • Burying links in JavaScript/Flash/Silverlight; keep content out of these as well.
  • Duplicate content
  • Keyword stuffing
  • Cloaking - the practice of showing search engine crawlers different content than visitors

Guidelines for representing your local business on Google

If the company you do SEO work for operates locally, either out of a storefront or by traveling to customer locations to perform services, it qualifies for a Google My Business listing. For local businesses like these, Google has guidelines that govern what you should and shouldn't do when creating and managing these listings.

Basic principles:

Make sure you're eligible for inclusion in the Google My Business index; you must have a physical address, even if it's your home address, and you must serve customers face-to-face, either at your location (like a retail store) or at theirs (like a plumber).

Honestly and accurately represent all aspects of your local business data, including your name, address, phone number, website address, business categories, hours of operation, and other characteristics.

Things to avoid:

Creating Google My Business listings for entities that are not eligible

Misrepresenting key business information, including "stuffing" your business name with geographic or service keywords, or creating listings for fake addresses

Using PO boxes or virtual offices instead of authentic street addresses

Abusing the review portion of your Google My Business listing, via fake positive reviews of your own business or fake negative reviews of your competitors

Costly Beginner Mistakes From Not Reading The Fine Details Of Google Guidelines

Fulfill user intent

Instead of violating these guidelines in an attempt to trick search engines into ranking you higher, focus on understanding and fulfilling user intent. When a person searches for something, they have a desired outcome. Whether it's an answer, concert tickets, or a cat photo, that desired content is their "user intent."

If a person searches for "bands," is their intent to find musical bands, wedding bands, band saws, or something else?

Your job as an SEO is to quickly provide users with the content they want in the format they want it in.

Common user intent types:

Informational: searching for information. Example: "What is the best type of laptop for photography?"

Navigational: searching for a specific website. Example: "Apple"

Transactional: looking to buy something. Example: "good deals on MacBook Pros"

You can get a glimpse of user intent by Googling the desired keywords and evaluating the current SERP. For example, if there is a photo carousel, people searching for that keyword are most likely looking for photos.

Also evaluate what content your top ranked competitors are offering that you are not currently providing. How can you provide 10 times the value of your website?

Providing high-quality, relevant content on your website will help you rank higher in search results, and more importantly, establish credibility and trust with your online audience.

Before doing any of that, you must first understand your website goals to execute a strategic SEO plan.

Know the goals of your website / client

Every website is different, so take the time to really understand the business goals of a specific site. Not only will this help you determine what areas of SEO to focus on, where to track conversions, and how to set benchmarks, but it will also help you create talking points for negotiating SEO projects with clients, bosses, etc.

What will your KPIs (key performance indicators) be to measure ROI in SEO? More simply, what is your barometer for measuring the success of your organic search efforts? You'll want to have it documented, even if it's that simple:

For the ____________ website, my top SEO KPI is ____________.

Here are some common KPIs to get you started:

  • Sales
  • Downloads
  • Email signups
  • Contact form submissions
  • Phone calls

And if your business has a local component, you'll want to define KPIs for your Google My Business listings as well. These may include:

Clicks to call

Clicks to the website

Clicks to get driving directions

You may have noticed that things like "ranking" and "traffic" weren't on the KPI list, and that's intentional.

"But wait a minute!" you say. "I came here to learn about SEO because I heard it could help me rank and get traffic, and you're telling me those aren't important goals?"

Not exactly! You heard correctly: SEO can help your website rank higher in search results and consequently drive more traffic to your website; it's just that ranking and traffic are a means to an end. There's little use in ranking if no one is clicking through to your site, and there's little use in increasing your traffic if that traffic isn't accomplishing a larger business goal.

For example, if you run a lead generation site, would you rather have:

1,000 monthly visitors and 3 people fill out a contact form? OR...

300 monthly visitors and 40 people fill out a contact form?

If you use SEO to drive traffic to your site for conversions, we hope you choose the latter. Before embarking on SEO, make sure you've established your business goals, then use SEO to help you achieve them, not the other way around.

SEO accomplishes much more than vanity metrics. When done right, it helps real companies achieve real goals for their success.


How do search engines work?

Search engines have three main functions:

Crawl: Scour the internet for content, looking over the code and content of each URL they find.

Index: Store and organize the content found during the crawling process. Once a page is in the index, it's in the running to be displayed as a result for relevant queries.

Rank: Provide the pieces of content that will best answer a searcher's query, which means results are ordered from most relevant to least relevant.

What is search engine crawling?

Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. The content can vary: it could be a web page, an image, a video, a PDF, etc., but regardless of the format, content is discovered by links.

Googlebot starts off by fetching a few web pages and then follows the links on those web pages to find new URLs. By hopping along this path of links, the crawler is able to find new content and add it to its index, called Caffeine (a massive database of discovered URLs), to later be retrieved when a searcher is seeking information that the content on that URL is a good match for.

What is a search engine index?

Search engines process and store the information they find in an index, a huge database of all the content they have discovered and deemed good enough to serve up to searchers.

Search engine ranking

When someone performs a search, search engines scour their index for highly relevant content and then order that content in hopes of solving the searcher's query. This ordering of search results by relevance is known as ranking. In general, you can assume that the higher a website is ranked, the more relevant the search engine believes the site is to the query.

You can block search engine crawlers from part or all of your site, or instruct search engines to avoid storing certain pages in their index. While there can be reasons for doing this, if you want your content to be found by searchers, you have to first make sure it's accessible to crawlers and is indexable. Otherwise, it's as good as invisible.

By the end of this chapter, you will have the context you need to work with the search engine, rather than against it!

Crawling: Can Search Engines Find Your Pages?

As you just learned, making sure your site gets crawled and indexed is a prerequisite to showing up in the SERPs. If you already have a website, it might be a good idea to start off by seeing how many of your pages are in the index. This will yield some great insights into whether Google is crawling and finding all the pages you want it to, and none that you don't.

One way to check your indexed pages is "site:yourdomain.com", an advanced search operator. Head to Google and type "site:yourdomain.com" into the search bar. This will return the results Google has in its index for the site specified:

The number of results Google displays (see "About XX results" above) isn't exact, but it does give you a solid idea of which pages are indexed on your site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in Google Search Console. You can sign up for a free Google Search Console account if you don't have one. With this tool, you can submit sitemaps for your site and control how many submitted pages have actually been added to Google's index, among other things.

If you're not showing up anywhere in the search results, there are a few possible reasons why:

Your site is new and has not been crawled yet.

Your site isn't linked to from any external websites.

Your site's navigation makes it hard for a robot to crawl it effectively.

Your site contains some basic code called crawler directives that is blocking search engines.

Google has penalized your site for spam tactics.

Most people think about making sure Google can find their important pages, but it's easy to forget that there are likely pages that you don't want Googlebot to find. These can include things like old URLs that have thin content, duplicate URLs (like sorting and filtering parameters for e-commerce), special promo code pages, test or trial pages, etc.

To divert Googlebot from certain pages and sections of your site, use robots.txt.


Robots.txt files live in the root directory of websites (e.g., example.com/robots.txt) and suggest which parts of your site search engines should and shouldn't crawl, as well as the speed at which they crawl your site, via specific robots.txt directives.
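For illustration, a minimal robots.txt file might look like the following; the disallowed paths are hypothetical placeholders:

```
# Applies to all crawlers
User-agent: *

# Suggest that crawlers skip these (hypothetical) sections
Disallow: /promo-codes/
Disallow: /test-pages/

# Point crawlers to the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Remember these are suggestions, not locks: well-behaved crawlers honor them, but robots.txt is not a security mechanism.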

How Googlebot handles robots.txt files

If Googlebot cannot find a robots.txt file for a site, it proceeds to crawl the site.

If Googlebot finds a robots.txt file for a site, it will generally follow the suggestions and proceed to crawl the site.

If Googlebot encounters an error trying to access a site's robots.txt file and cannot determine whether it exists or not, it will not crawl the site.

Defining URL parameters in GSC

Some sites (most common with e-commerce) make the same content available at multiple different URLs by appending certain parameters to the URLs. If you've ever shopped online, you've likely narrowed down your search via filters. For example, you may search for "shoes" on Amazon, and then refine your search by size, color, and style. Each time you refine, the URL changes slightly, ending in a query string like ?category=32&highlight=green+dress&cat_id=1&sessionid=123&affid=43.

How does Google know which version of the URL to serve to searchers? Google does a pretty good job of determining the representative URL on its own, but you can use the URL Parameters feature in Google Search Console to tell Google exactly how you want your pages treated. If you use this feature to tell Googlebot "crawl no URLs with the ____ parameter," then you're essentially asking it to hide this content from Googlebot, which could result in the removal of those pages from search results. That's what you want if those parameters create duplicate pages, but it's not ideal if you want those pages to be indexed.

Can crawlers find all your important content?

Now that you know some tactics to ensure that search engine crawlers stay away from your unimportant content, let's learn about optimizations that can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by crawling, but other pages or sections can be hidden for one reason or another. It's important to make sure that search engines can discover all of the content you want to index, and not just your home page.

Ask yourself this: Can bots crawl through your website, and not just to it?

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing certain content, search engines won't see those protected pages. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some people believe that if they place a search box on their site, search engines will be able to find everything their visitors search for; they can't.

Is text hidden within non-text content?

Non-text media forms (images, video, GIFs, etc.) should not be used to display text that you want indexed. While search engines are getting better at recognizing images, there's no guarantee they'll be able to read and understand it just yet. It's always best to add text within the <HTML> markup of your web page.
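As an illustration, rather than baking important text into an image, keep the text in the markup and describe the image with an alt attribute (the product and filenames below are hypothetical):

```html
<!-- Text that search engines can read and index -->
<h1>Hand-Stitched Leather Boots</h1>
<p>Full-grain leather with resoleable welt construction.</p>

<!-- The image carries a descriptive alt attribute instead of the text itself -->
<img src="boots.jpg" alt="Pair of brown hand-stitched leather boots">
```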

Can Search Engines Follow Your Site Navigation?

Just as a crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page. If you've got a page you want search engines to find, but it isn't linked to from any other page, it's as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get listed in search results.

Common navigation errors that can prevent crawlers from seeing your entire site:

Have a mobile navigation that shows different results than your desktop navigation

Any type of navigation where the menu items are not in the HTML, such as JavaScript-enabled navigations. Google has improved a lot in crawling and understanding JavaScript, but it is not yet a perfect process. The safest way to make sure Google finds, understands, and indexes something is to put it in HTML.

Personalization, or showing unique navigation to a specific type of visitor versus others, could appear to be cloaking to a search engine crawler

Forgetting to link to a primary page on your website through your navigation; remember, links are the paths crawlers follow to new pages
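To avoid the JavaScript pitfall listed above, the safest pattern is plain HTML anchor links in your navigation; here's a minimal sketch with hypothetical paths:

```html
<!-- Navigation kept in plain HTML so crawlers can find and follow every link -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/products/">Products</a></li>
    <li><a href="/blog/">Blog</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>
</nav>
```

A crawler can parse these <a href> links directly, with no script execution required.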

This is why it is essential that your website has clear navigation and useful URL folder structures.

Do you have a clean information architecture?

Information architecture is the practice of organizing and tagging the content of a website to improve the efficiency and searchability of users. The best information architecture is intuitive, which means that users shouldn't have to think hard to navigate your website or find something.

Are you using sitemaps?

A sitemap is just what it sounds like: a list of URLs on your site that crawlers can use to discover and index your content. One of the easiest ways to ensure Google is finding your highest-priority pages is to create a sitemap file that meets Google's standards and submit it through Google Search Console. While submitting a sitemap doesn't replace the need for good site navigation, it can certainly help crawlers follow a path to all of your important pages.
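A minimal XML sitemap, using hypothetical URLs and dates, looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>
```

Each <url> entry needs only a <loc>; fields like <lastmod> are optional hints.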

If your site doesn't have any other sites linking to it, you still might be able to get it indexed by submitting your XML sitemap in Google Search Console. There's no guarantee they'll include a submitted URL in their index, but it's worth a try!

Do crawlers get errors when they try to access your URLs?

In the process of crawling the URLs on your site, a crawler may encounter errors. You can go to the "Crawl Errors" report in Google Search Console to detect URLs on which this might be happening. This report will show you server errors and not-found errors. Server log files can also show you this, along with a treasure trove of other information such as crawl frequency, but because accessing and analyzing server log files is a more advanced tactic, we won't discuss it at length in the Beginner's Guide, although you can learn more about it here.

Before you can do anything meaningful with your crawl error report, it is important to understand server errors and "not found" errors.

4xx codes: When search engine crawlers can't access your content due to a client error

4xx errors are client errors, which means the requested URL contains incorrect syntax or cannot be met. One of the most common 4xx errors is the "404 - not found" error. These can occur due to a URL typo, a deleted page, or a broken redirect, just to name a few examples. When search engines hit a 404, they can't access the URL. When users hit a 404, they can get frustrated and walk away.

5xx codes: When search engine crawlers can't access your content due to a server error

5xx errors are server errors, meaning the server the web page is located on failed to fulfill the searcher's or search engine's request to access the page. In Google Search Console's "Crawl Error" report, there is a tab dedicated to these errors. These typically happen because the request for the URL timed out, so Googlebot abandoned the request. View Google's documentation to learn more about fixing server connectivity issues.

Thankfully, there is a way to tell both searchers and search engines that your page has moved: the 301 (permanent) redirect.

The 301 status code itself means the page has permanently moved to a new location, so avoid redirecting URLs to irrelevant pages, i.e. URLs where the old URL's content doesn't actually live. If a page is ranking for a query and you 301 it to a URL with different content, it could drop in rank position because the content that made it relevant to that particular query isn't there anymore. 301s are powerful; move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be reserved for temporary moves and for cases where passing link equity isn't as big of a concern. 302s are kind of like a road detour: you're temporarily siphoning traffic through a certain route, but it won't be like that forever.
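On an Apache server, for example, both kinds of redirect can be set up in an .htaccess file; the paths and domain below are hypothetical:

```apache
# Permanent move: signals engines to pass ranking signals to the new URL
Redirect 301 /old-page/ https://www.example.com/new-page/

# Temporary detour: the original URL is expected to return
Redirect 302 /sale/ https://www.example.com/holiday-sale/
```

Other servers (nginx, IIS) and most CMS platforms offer equivalent redirect settings.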

Once you've made sure your site is optimized for crawlers, the next order of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store your pages?

Once you've made sure your site has been crawled, the next order of business is to make sure it can be indexed. That's right, just because a search engine can discover and crawl your site doesn't necessarily mean it will be stored in their index. In the previous section on crawling, we discussed how search engines discover your web pages. The index is where your discovered pages are stored. After a crawler finds a page, the search engine renders it just like a browser would. In the process of doing so, the search engine analyzes that page's contents. All of that information is stored in its index.

You can also view the text-only version of your site to determine if your important content is effectively crawled and cached.

Are pages ever removed from the index?

Yes, pages can be removed from the index. Some of the main reasons a URL can be removed include:

The URL is returning a "not found" error (4XX) or server error (5XX). This could be accidental (the page was moved and a 301 redirect was not set up) or intentional (the page was deleted and 404ed in order to get it removed from the index)

The URL had a noindex meta tag added - this tag can be added by site owners to instruct the search engine to omit the page from its index.

The URL was manually penalized for violating the search engine's Webmaster Guidelines and was removed from the index as a result.

The URL has been blocked from crawling by the addition of a password required before visitors can access the page.

If you believe that a page on your website that was previously in Google's index is no longer showing up, you can use the URL Inspection tool to learn the status of the page, or use Fetch as Google, which has a "Request indexing" feature to submit individual URLs to the index. (Bonus: GSC's "fetch" tool also has a "render" option that allows you to see if there are any issues with how Google is interpreting your page.)

Tell search engines how to index your site

Robots meta directives

Meta directives (or "meta tags") are instructions that you can give to search engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like "do not index this page in search results" or "don't pass any link equity to any on-page links." These instructions are executed via Robots Meta Tags in the <head> of your HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP header.

Robots meta tag

The robots meta tag can be used within the HTML <head> of your web page. You can exclude all or specific search engines. The following are the most common meta-directives, along with the situations in which you may apply them.

index/noindex tells the engines whether the page should be crawled and kept in a search engine's index for retrieval. If you opt to use "noindex," you're communicating to crawlers that you want the page excluded from search results. By default, search engines assume they can index all pages, so using the "index" value is unnecessary.

When to use: You might opt to mark a page as "noindex" if you're trying to trim thin pages from Google's index of your site (for example, user-generated profile pages) but you still want them to be accessible to visitors.

follow / nofollow tells search engines whether the links on the page should be followed or not. "Follow" results in bots following the links on your page and passing link equity through to those URLs. Or, if you choose to use "nofollow," search engines will not follow or pass any link equity through to the links on the page. By default, all pages are assumed to have the "follow" attribute.

When to use: nofollow is often used in conjunction with noindex when trying to prevent a page from being indexed, as well as preventing the crawler from following links on the page.
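For instance, a thin user-profile page that should stay out of the index, and whose links shouldn't be followed, could include a tag like the following in its <head> (the surrounding markup here is purely illustrative):

```html
<head>
  <title>User Profile</title>
  <!-- Keep this page out of the index and don't follow its links -->
  <meta name="robots" content="noindex, nofollow">
</head>
```
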

noarchive is used to prevent search engines from saving a cached copy of the page. By default, the engines will maintain visible copies of all the pages they have indexed, accessible to searchers through the cached link in the search results.

When to use: If you run an ecommerce site and your prices change regularly, you might consider the noarchive tag to prevent searchers from seeing outdated pricing.

The X-Robots-Tag

The x-robots tag is used within the HTTP header of your URL, providing more flexibility and functionality than meta tags if you want to block search engines at scale, because you can use regular expressions, block non-HTML files, and apply sitewide noindex tags.
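For example, to keep every PDF on a site out of the index, a server can attach the header to matching responses. The Apache configuration below is one common way to do this (the file pattern and directives are illustrative only):

```apache
# Apache example: send an X-Robots-Tag header with every PDF response,
# telling crawlers not to index or cache those files.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```
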

Directives used in a robots meta tag can also be used in an X-Robots-Tag.

To learn more about robots meta tags, explore Google's robots meta tag specifications.

To determine relevance, search engines use algorithms: processes or formulas by which stored information is retrieved and ordered in meaningful ways. These algorithms have undergone many changes over the years to improve the quality of search results. Google, for example, makes algorithm adjustments every day - some of these updates are minor quality tweaks, while others are core/broad algorithm updates deployed to tackle a specific issue, like Penguin to tackle link spam. Check out our Google Algorithm Change History for a list of both confirmed and unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us on our toes? While Google doesn't always reveal specifics as to why they do what they do, we do know that Google's aim when making algorithm adjustments is to improve overall search quality. That's why, in response to questions about algorithm updates, Google will answer with something like: "We're making quality updates all the time." This indicates that, if your site suffered after an algorithm adjustment, you should compare it against Google's Quality Guidelines or Search Quality Rater Guidelines - both are very telling in terms of what search engines want.

What do search engines want?

Search engines have always wanted the same thing: to provide useful answers to searchers' questions in the most helpful formats. If that's true, why does SEO seem different now than in years past?

Think of it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary: "See Spot Run." Over time, their understanding starts to deepen, and they learn semantics - the meaning behind language and the relationships between words and phrases. Eventually, with enough practice, the student knows the language well enough to understand nuance, and is able to provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was much easier to game the system using tricks and tactics that actually go against quality guidelines. Take keyword stuffing, for example. If you wanted to rank for a particular keyword like "funny jokes," you might add the words "funny jokes" a bunch of times onto your page and make them bold, in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny jokes are funny and crazy. Your funny joke awaits you. Sit back and read funny jokes because funny jokes can make you happier and more fun. Some favorite funny jokes.

This tactic made for terrible user experiences, and instead of laughing at funny jokes, people were bombarded by annoying, hard-to-read text. It may have worked in the past, but this is never what search engines wanted.

The role of links in SEO

When we talk about links, we could mean one of two things. Backlinks or "inbound links" are links from other websites that point to your website, while internal links are links on your own site that point to your other pages (on the same site).

Links have historically played an important role in SEO. From the beginning, search engines needed help determining which URLs were more trustworthy than others to help them determine how to rank search results. Calculating the number of links pointing to a certain site helped them do this.

Backlinks work very similarly to real-life word-of-mouth (WoM) referrals. Let's take a hypothetical coffee shop, Jenny's Coffee, as an example:

References from others = good sign of authority

Example: Many different people have told you that Jenny's Coffee is the best in town.

References from yourself = biased, so not a good sign of authority

Example: Jenny claims that Jenny's Coffee is the best in town.

Low quality or irrelevant source references = not a good sign of authority and might even mark you as spam

Example: Jenny paid people who have never visited her coffee shop to tell others how good it is.

No references = unclear authority

Example: Jenny's Coffee may be good, but you haven't been able to find anyone who has an opinion, so you can't be sure.

That is why PageRank was created. PageRank (part of Google's core algorithm) is a link analysis algorithm named after one of Google's founders, Larry Page. PageRank estimates the importance of a web page by measuring the quality and quantity of links pointing to it. The assumption is that the more relevant, important, and trustworthy a web page is, the more links it will have earned.
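As a rough illustration (this is not Google's actual implementation - the toy link graph below and the damping factor are assumptions for the sketch), the core PageRank idea can be expressed as a simple iterative calculation where each page splits its score among the pages it links to:

```python
# Toy PageRank sketch: each page distributes its current score evenly
# across its outgoing links on every iteration. The damping factor 0.85
# is the value from the original PageRank paper; the graph is made up.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# "home" is linked to by both other pages, so it earns the highest score.
graph = {
    "home": ["about"],
    "about": ["home"],
    "blog": ["home"],
}
scores = pagerank(graph)
print(max(scores, key=scores.get))  # prints "home"
```

The takeaway matches the prose above: a page's score depends not just on how many links point at it, but on the scores of the pages doing the linking.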

The more natural backlinks you have from high authority (trustworthy) websites, the better your chances of ranking higher in search results.

The role content plays in SEO

Links would be of no use if they didn't direct searchers to something. That something is content! Content is more than just words; it's anything meant to be consumed by searchers - there's video content, image content, and of course, text. If search engines are answer machines, content is the means by which the engines deliver those answers.

Every time someone performs a search, there are thousands of possible results, so how do search engines decide which pages the searcher will find valuable? A big part of determining where your page will rank for a given query is how well the content on your page matches the query's intent. In other words, does this page match the words that were searched, and does it help fulfill the task the searcher was trying to accomplish?

Due to this focus on user satisfaction and task accomplishment, there are no strict benchmarks on how long your content should be, how many times it should contain a keyword, or what you should include in heading tags. All of these can influence a page's performance in search, but the focus should be on the users who will read the content.

Today, with hundreds or even thousands of ranking signals, the top three have remained fairly consistent: links to your website (which serve as third-party credibility signals), on-page content (quality content that fulfills a searcher's intent), and RankBrain.

What is RankBrain?

RankBrain is the machine learning component of Google's core algorithm. Machine learning is a computer program that continues to improve its predictions over time through new observations and training data. In other words, it's always learning, and because it's always learning, search results should be constantly improving.

For example, if RankBrain notices a lower-ranking URL providing a better result to users than the higher-ranking URLs, you can bet that RankBrain will adjust those results, moving the more relevant result higher and demoting the less relevant pages as a by-product.

Like most things with search engines, we don't know exactly what comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?

Because Google will continue to leverage RankBrain to promote the most relevant, helpful content, we need to focus on fulfilling searcher intent more than ever before. Provide the best possible information and experience for the searchers who might land on your page, and you've taken a big first step toward performing well in a RankBrain world.

Engagement metrics: correlation, causality, or both?

With Google rankings, engagement metrics are most likely part correlation and part causation.

When we say engagement metrics, we mean data that represents how searchers interact with your site from search results. This includes things like:

Clicks (visits from search)

Time on page (amount of time the visitor spent on a page before leaving it)

Bounce rate (the percentage of all website sessions where users viewed only one page)

Pogo-sticking (clicking on an organic result and then quickly returning to the SERP to choose another result)
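As a back-of-the-envelope illustration of two of these metrics (the session data below is entirely made up), they can be computed from simple analytics records:

```python
# Toy engagement-metric calculations over made-up session data.
# Each session is a list of (page, seconds_on_page) tuples.
sessions = [
    [("/blog/seo-basics", 95), ("/guides/keywords", 40)],
    [("/blog/seo-basics", 4)],                      # a one-page "bounce"
    [("/blog/seo-basics", 210), ("/contact", 15)],
    [("/guides/keywords", 7)],                      # another bounce
]

# Bounce rate: percentage of sessions where only one page was viewed.
bounces = sum(1 for s in sessions if len(s) == 1)
bounce_rate = 100 * bounces / len(sessions)

# Average time on page for one URL.
times = [secs for s in sessions for page, secs in s if page == "/blog/seo-basics"]
avg_time = sum(times) / len(times)

print(f"Bounce rate: {bounce_rate:.0f}%")               # Bounce rate: 50%
print(f"Avg. time on /blog/seo-basics: {avg_time:.0f}s")
```

Real analytics platforms compute these from far richer data, but the definitions are the same.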

Many tests, including Moz's own ranking factors survey, have indicated that engagement metrics correlate with higher rankings, but causality has been hotly debated. Are good engagement metrics only indicative of highly ranked sites? Or do sites rank high because they have good engagement metrics?

What Google has said

While they've never used the term "direct ranking signal," Google has been clear that they absolutely use click data to modify the SERP for particular queries.

According to Google's former head of search quality, Udi Manber:

“Ranking itself is affected by click data. If we discover that, for a particular query, 80% of people click on No. 2 and only 10% click on No. 1, after a while we figure out that probably No. 2 is the one people want, so we'll switch it.”

Another comment from former Google engineer Edmond Lau corroborates this:

“It's pretty clear that any reasonable search engine would use click data on their own results to feed back into ranking and improve the quality of search results. The actual mechanics of how click data is used is often proprietary, but Google makes it obvious that it uses click data with its patents on systems such as rank-adjusted content items.”

Because Google needs to maintain and improve search quality, it seems inevitable that engagement metrics are more than correlation, but it appears that Google stops short of calling engagement metrics a "ranking signal" because those metrics are used to improve search quality, and the rank of individual URLs is just a by-product of that.

What tests have confirmed

Various tests have confirmed that Google will adjust SERP order in response to searcher engagement:

Rand Fishkin's 2014 test resulted in a No. 7 result moving up to the No. 1 spot after roughly 200 people clicked on the URL from the SERP. Interestingly, the ranking improvement seemed to be isolated to the location of the people who visited the link. The ranking position skyrocketed in the US, where many of the participants were located, while it remained lower on the page in Google Canada, Google Australia, etc.

Larry Kim's comparison of top pages and their average dwell time before and after RankBrain seemed to indicate that the machine learning component of Google's algorithm degrades the ranking position of pages that people don't spend as much time on.

Darren Shaw's tests have also demonstrated the impact of user behavior on local search and map package results.

Since user engagement metrics are clearly used to adjust SERPs for quality, with rank position changes as a by-product, it's safe to say that SEOs should optimize for engagement. Engagement doesn't change the objective quality of your web page, but rather its value to searchers relative to the other results for that query. That's why, even with no changes to your page or its backlinks, your page could decline in rankings if searchers' behavior indicates they like other pages better.

In terms of ranking web pages, engagement metrics act as a fact checker. Objective factors like links and content rank the page first, then engagement metrics help Google make adjustments if they didn't get it right.

The evolution of search results

When search engines lacked the sophistication they have today, the term "10 blue links" was coined to describe the flat structure of SERPs. Every time a search was done, Google would show a page with 10 organic results, each in the same format.

In this search landscape, occupying the No. 1 position was the holy grail of SEO. But then something happened. Google began adding results in new formats on its search results pages, called SERP features. Some of these SERP features include:

Paid Ads

Featured snippets

People Also Ask boxes

Local (map) pack

Knowledge panel

Sitelinks

And Google is adding new ones all the time. They even experimented with "zero-result SERPs," a phenomenon where only one Knowledge Graph result was displayed on the SERP with no results below it, except for an option to "see more results."

The addition of these features caused some initial panic for two main reasons. For one, many of these features pushed organic results further down the SERP. Another by-product is that fewer searchers are clicking on organic results, since more queries are being answered on the SERP itself.

So why would Google do this? It all goes back to the search experience. User behavior indicates that some queries are better satisfied by different content formats. Notice how the different types of SERP features match the different types of query intents.

Localized search

A search engine like Google has its own proprietary index of local business listings, from which it builds local search results.

If you're doing local SEO work for a business that has a physical location customers can visit (e.g. a dentist) or for a business that travels to visit its customers (e.g. a plumber), make sure that you claim, verify, and optimize a free Google My Business listing.

When it comes to localized search results, Google uses three main factors to determine ranking:

  1. Relevance
  2. Distance
  3. Prominence


Relevance is how well a local business matches what the searcher is looking for. To ensure the business is doing everything it can to be relevant to searchers, make sure the business information is thoroughly and accurately filled out.


Google uses your geographic location to better serve you local results. Local search results are extremely sensitive to proximity, which refers to the location of the searcher and/or the location specified in the query (if the searcher included one).

Organic search results are sensitive to a searcher's location, though seldom as pronounced as in local pack results.


With prominence as a factor, Google seeks to reward businesses that are well known in the real world. In addition to a business's offline prominence, Google also looks at some online factors to determine local ranking, such as:


The number of Google reviews a local business receives, and the sentiment of those reviews, have a notable impact on its ability to rank in local results.


A "business citation" or "business listing" is a web-based reference to a local business's "NAP" (name, address, phone number) on a localized platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local business citations. Google pulls data from a wide variety of sources to continually build its local business index. When Google finds multiple consistent references to a business name, location, and phone number, it strengthens Google's "confidence" in the validity of that data. This leads to Google being able to display the business with a higher degree of confidence. Google also uses information from other sources on the web, such as links and articles.
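To see why consistency matters, here's a toy sketch (the listings and the normalization rules are hypothetical, not how Google actually does it) of how superficially different citations can be checked for agreement:

```python
import re

# Hypothetical citations for one business, pulled from different directories.
citations = [
    {"name": "Jenny's Coffee", "phone": "(555) 123-4567"},
    {"name": "Jennys Coffee",  "phone": "555-123-4567"},
    {"name": "Jenny's Coffee", "phone": "555.123.4567"},
]

def normalize(citation):
    """Strip punctuation so cosmetically different listings can be compared."""
    name = re.sub(r"[^a-z0-9 ]", "", citation["name"].lower())
    phone = re.sub(r"\D", "", citation["phone"])
    return (name, phone)

normalized = {normalize(c) for c in citations}
# All three citations collapse to the same (name, phone) pair:
# they are consistent despite the formatting differences.
print(len(normalized))  # prints 1
```

A listing whose name or phone number genuinely differed would produce a second entry in the set, the kind of inconsistency that can erode Google's confidence in the data.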

Organic ranking

SEO best practices also apply to local SEO, as Google also considers a website's position in organic search results when determining local ranking.

In the next chapter, you'll learn on-page best practices that will help Google and users better understand your content.

[Bonus!] Local engagement

Although Google has not listed it as a local ranking factor, the role of engagement is only going to increase as time goes on. Google continues to enrich local results by incorporating real-world data such as popular times to visit and average length of visits...

Undoubtedly, now more than ever, local results are being influenced by real-world data. This interactivity is how searchers interact with and respond to local businesses, rather than purely static (and game-able) information like links and citations.

Since Google wants to deliver the best, most relevant local businesses to searchers, it makes perfect sense for them to use real-time engagement metrics to determine quality and relevance.

You don't need to know the ins and outs of Google's algorithm (that remains a mystery!), but by now you should have a solid baseline understanding of how search engines find, interpret, store, and rank content.
