Understanding Googlebot and the AJAX (hashbang) crawling scheme
I have gone through the Google documentation and countless blog posts on the subject, and depending on the date and the source, there seems to be some conflicting information. Share your wisdom with this humble peasant and all will be well.
My client routes use hashbang URLs like the links shown below.
My first question is: does Googlebot translate a hashbang URL such as /#!/page into /?_escaped_fragment_=/page before requesting it?
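For reference, under Google's (now-deprecated) AJAX crawling scheme, the crawler rewrites everything after the #! into a _escaped_fragment_ query parameter. A minimal sketch of that mapping (the function name is mine, not part of any API; note the sketch percent-encodes the whole fragment, a superset of the characters the scheme escapes, which is harmless because the server URL-decodes the value anyway):

```javascript
// Map a hashbang URL to the _escaped_fragment_ form a crawler would request:
// the text after "#!" becomes a percent-encoded _escaped_fragment_ query parameter.
function escapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url;            // no hashbang: nothing to translate
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);   // everything after "#!"
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(escapedFragmentUrl('http://example.com/#!/page'));
// -> http://example.com/?_escaped_fragment_=%2Fpage
```

The important point is that the escaped form arrives as a query parameter, not as a path segment.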
I made a simple server-side router in PHP that builds the requested pages for Googlebot, and my plan was to redirect _escaped_fragment_ requests to it.
But when using Google's "Fetch as Googlebot" (for the first time, I might add), it doesn't seem to recognize any links on the page. It just returns "Success" and shows me the HTML of the homepage. (Update: when I tell Fetch as Googlebot to fetch it, it just returns the content of the homepage without the _escaped_fragment_ magic.) This brings me to my second question:
Do I need to follow a specific syntax for my hashbang links so that Googlebot can crawl them?
My links look like this:
<a href="#!/page">Page Headline</a>
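On the client side, links like that are typically handled by reading location.hash when it changes. A minimal sketch of extracting the route (routeFromHash and render are placeholder names of mine, not from the question):

```javascript
// Extract the client-side route from a hashbang fragment,
// e.g. "#!/page" -> "/page"; anything else falls back to the root route.
function routeFromHash(hash) {
  return hash.startsWith('#!') ? hash.slice(2) : '/';
}

// In a browser this would be wired up roughly like:
//   window.addEventListener('hashchange',
//     () => render(routeFromHash(window.location.hash)));

console.log(routeFromHash('#!/page')); // -> /page
```

Since everything after the # never reaches the server, this client-side handling is exactly why the _escaped_fragment_ translation exists for crawlers.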
Update 1: When I ask Fetch as Googlebot to fetch the page, this shows up in the access log:
"GET /_escaped_fragment_/page HTTP/1.1" 301 502 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
But Googlebot doesn't seem to follow the 301 redirect and displays the master page instead. This is the rule I'm using:
RedirectMatch 301 /_escaped_fragment_/(.*) /router/$1
Should I just change the 301 in that rule to a 302?
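One thing worth double-checking before worrying about 301 vs. 302: under Google's scheme the crawler normally sends _escaped_fragment_ as a query parameter (GET /?_escaped_fragment_=/page), and Apache's RedirectMatch matches only the URL path, never the query string. If your requests arrive in the query form, a mod_rewrite sketch (assuming the /router/ path from your rule) would look like:

```apache
# _escaped_fragment_ arrives in the query string, which RedirectMatch
# cannot match; use mod_rewrite and inspect QUERY_STRING instead.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule ^/?$ /router/%1? [R=302,L]
```

The trailing ? in the substitution drops the original query string so /router/ doesn't receive the _escaped_fragment_ parameter a second time.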