Have you ever wanted to prevent Google from indexing a specific URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will probably come when you need to know how to do this.
The three approaches most commonly used to prevent the indexing of a URL by Google are as follows:
Using the rel="nofollow" attribute on all anchor elements used to link to the page, to prevent the links from being followed by the crawler.
Using a disallow directive in the site's robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content="noindex" attribute to prevent the page from being indexed.
Although the differences between the three approaches appear subtle at first glance, their effectiveness can vary substantially depending on which one you choose.
Using rel="nofollow" to prevent Google indexing
Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel="nofollow" attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
Including a rel="nofollow" attribute on a link prevents Google's crawler from following the link which, in turn, prevents it from discovering, crawling, and indexing the target page. While this method may work as a short-term fix, it is not a viable long-term solution.
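For example, a nofollow link looks like this (a minimal sketch; the URL and anchor text are hypothetical placeholders):

    <!-- rel="nofollow" asks crawlers not to follow this link -->
    <a href="https://example.com/private-page.html" rel="nofollow">Private page</a>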
The flaw in this approach is that it assumes all inbound links to the URL will include a rel="nofollow" attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link. So the odds that the URL will eventually be crawled and indexed using this method are quite high.
Using robots.txt to prevent Google indexing
Another common method used to prevent the indexing of a URL by Google is the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google's crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
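For example, a robots.txt entry blocking a single page might look like this (a minimal sketch; the path /private-page.html is a hypothetical placeholder):

    # Applies to all crawlers
    User-agent: *
    # Block crawling of this hypothetical page
    Disallow: /private-page.html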
Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web pages link to the URL, Google can often infer the topic of the page from the link text of those inbound links, and as a result they will show the URL in the SERPs for related searches. So while a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.
Using the meta robots tag to prevent Google indexing
If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective approach is to use a meta robots tag with a content="noindex" attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag, they need to first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
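For example, the tag is placed in the head of the page like this (a minimal sketch; the title is a hypothetical placeholder):

    <head>
      <!-- Tells crawlers not to index this page -->
      <meta name="robots" content="noindex">
      <title>Private page</title>
    </head>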