Google's John Mueller recently offered some advice on how to block robots.txt and sitemap files from being indexed in search results.
This advice was prompted by a tweet from Google's Gary Illyes, who offhandedly pointed out that robots.txt can technically be indexed like any other URL. While it provides special directions for crawling, there's nothing to stop it from being indexed.
Here's the full tweet from Illyes:
"Triggered by an internal question: robots.txt from an indexing standpoint is just a URL whose content can be indexed. It can become canonical or it can be deduped, just like any other URL.
It only has special meaning for crawling, but there its index status doesn't matter at all."
In response to his fellow Googler, Mueller said the x-robots-tag HTTP header can be used to block indexing of robots.txt or sitemap files. That wasn't all he had to say on the matter, however, as this was arguably the key takeaway:
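As a rough sketch of what Mueller's suggestion looks like in practice: the `X-Robots-Tag` response header can be attached to these files at the server level. The directive name is standard, but the Apache configuration below (including the sitemap filename pattern) is an assumed setup for illustration, not something taken from the article.

```apacheconf
# Assumed Apache setup (.htaccess or vhost config); requires mod_headers.
# Sends a noindex X-Robots-Tag with robots.txt and sitemap files, so
# search engines can still fetch them but won't show them in results.
<Files "robots.txt">
  Header set X-Robots-Tag "noindex"
</Files>

# Hypothetical pattern — adjust to match your actual sitemap filenames.
<FilesMatch "^sitemap.*\.xml$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```

You can confirm the header is being sent with `curl -I https://example.com/robots.txt` and checking for `X-Robots-Tag: noindex` in the response.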
“Also, if your robots.txt or sitemap file is ranking for normal queries (not site:), that’s usually a sign that your site is really bad off and should be improved instead.”
So if you're running into the problem of your robots.txt file ranking in search results, blocking it with the x-robots-tag HTTP header is a good short-term solution. But if that's happening, there are likely much larger issues to take care of in the long term, as Mueller suggests.