
Issue: Google crawler cannot crawl my pages sitewide; error connecting to server

palletdeal

New Pleskian
Server operating system version: Ubuntu 20.04 x64
Plesk version and microupdate number: 18.0.48
Hi all,
I've been trying for a few months to get my sitemap updated in Search Console; however, Google can't crawl my pages due to a server connection error.

Does somebody maybe know how to resolve this issue?

Kind regards,

Marc
 
Hi Peter,
Thank you for your reply. I have solved the issue by disabling IP address banning.
I don't know if it's necessary to have it activated; otherwise I would have to add all of Google's IP addresses.
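Rather than disabling IP banning altogether, Fail2Ban can be told to skip Google's crawler addresses. A minimal sketch of a jail override, assuming a standard Fail2Ban layout (the range below is one block commonly associated with Googlebot, not an exhaustive or guaranteed list -- verify against Google's published crawler IP ranges):

```ini
; /etc/fail2ban/jail.local -- illustration only.
; 66.249.64.0/19 is a range commonly associated with Googlebot;
; check Google's published crawler IP list before relying on it.
[DEFAULT]
ignoreip = 127.0.0.1/8 66.249.64.0/19
```

After editing, reload Fail2Ban (e.g. `fail2ban-client reload`) so the change takes effect.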
 
The solution may not be easy in this case and will at least require some work. If Google is blocked, it is very likely because a directory or file on one of your websites, one known to Google or used by the website, is password-protected or protected by an .htaccess rule. When the bot hits that file, an "access denied" entry is created in the error log. Fail2Ban sees this entry and treats the visit as if a real intruder had tried to access protected directories or files.

You will need to check the website logs for "access denied" entries to find the root cause.
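A hedged sketch of that check: on a Plesk server the per-domain Apache error log usually lives somewhere like `/var/www/vhosts/<domain>/logs/error_log` (path is an assumption; adjust for your setup). A small sample log stands in for it here so the commands are self-contained:

```shell
# Hypothetical example: locate the "access denied" entries that Fail2Ban
# reacts to. A sample log stands in for the real per-domain error log.
cat > /tmp/sample_error_log <<'EOF'
[Mon May 01 10:00:00 2023] [error] [client 66.249.66.1] client denied by server configuration: /var/www/vhosts/example.com/httpdocs/private
[Mon May 01 10:00:05 2023] [error] [client 203.0.113.9] File does not exist: /var/www/vhosts/example.com/httpdocs/favicon.ico
EOF

# Show only the denial entries, including the client IPs that triggered them.
grep -i "denied" /tmp/sample_error_log
```

If the denied paths turn out to be resources the crawler legitimately needs (robots.txt, sitemap files), either make them accessible or exclude the crawler's addresses from the Fail2Ban jail instead of disabling banning entirely.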
 