Google’s John Mueller On Blocking Robots.txt Files From Being Indexed



Google’s John Mueller recently offered some advice on how to block robots.txt and sitemap files from being indexed in search results.

The advice was prompted by a tweet from Google’s Gary Illyes, who randomly pointed out that robots.txt can technically be indexed like any other URL. While it provides special directives for crawling, there’s nothing to stop it from being indexed.

Here’s the full tweet from Illyes:

“Triggered by an internal question: robots.txt from indexing standpoint is just a URL whose content can be indexed. It can become canonical or it can be deduped, just like any other URL.
It only has special meaning for crawling, but there its index status doesn’t matter at all.”

In response to his fellow Googler, Mueller said the x-robots-tag HTTP header can be used to block indexing of robots.txt or sitemap files. That wasn’t all he had to say on the matter, however, as this was arguably the key takeaway:

“Also, if your robots.txt or sitemap file is ranking for normal queries (not site:), that’s usually a sign that your site is really bad off and should be improved instead.”

So if you’re running into the problem of your robots.txt file ranking in search results, blocking it with the x-robots-tag HTTP header is a good short-term solution. But if that’s happening, there are likely much bigger issues to take care of in the long term, as Mueller suggests.
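For illustration, here’s a minimal sketch of what that suggestion looks like at the HTTP level, using Python’s built-in http.server module. The handler, port, and robots.txt contents below are placeholder assumptions rather than anything from Mueller or Google; in practice you would set the same header in your web server or CDN configuration.

# Minimal sketch (assumed example): serve robots.txt with an
# X-Robots-Tag header so crawlers can still read it but Google
# is asked not to index the URL itself.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROBOTS_TXT = b"User-agent: *\nDisallow:\n"  # placeholder directives

class RobotsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            # The header Mueller refers to: blocks indexing of this URL
            # without changing how its crawling directives are read.
            self.send_header("X-Robots-Tag", "noindex")
            self.end_headers()
            self.wfile.write(ROBOTS_TXT)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RobotsHandler).serve_forever()

The only line that matters for indexing is the X-Robots-Tag: noindex response header; the rest is scaffolding to serve the file.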


