Frequently Asked Questions

I just downloaded a site, but only the home page works

You are probably viewing the website on your own computer, or you forgot to upload the .htaccess file to your server. Use our step-by-step installation guide to see a working website.

We use a file called “.htaccess” to rewrite URLs. This means that a URL such as example.com/page/ or example.com/page.php is actually served from an HTML file behind the scenes. You won't notice this in your browser: the content remains accessible on the “pretty URL”. This is good for SEO, because incoming backlinks still resolve to the original URLs.
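As a rough sketch (hypothetical rules, not the exact file we ship), such an .htaccess file could look like this:

```apache
# Hypothetical mod_rewrite rules: serve the archived HTML file
# when the original "pretty" or .php URL is requested.
RewriteEngine On

# example.com/page.php -> page.html, if that HTML file exists
RewriteCond %{DOCUMENT_ROOT}/$1.html -f
RewriteRule ^(.+)\.php$ $1.html [L]

# example.com/page/ -> page/index.html, if that file exists
RewriteCond %{DOCUMENT_ROOT}/$1/index.html -f
RewriteRule ^(.+)/$ $1/index.html [L]
```

The [L] flag stops processing after a rule matches, and the RewriteCond lines make sure a rewrite only happens when a matching HTML file actually exists on disk.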

There is also a surprising number of people who mix up the demo version and the paid version. If you ordered both, make sure to use the files from the paid version. And yes, that seems really obvious.

Note that an .htaccess file only works on Apache servers. For a small additional fee, we can translate this file to work on other servers, such as Microsoft IIS or Nginx. More than 95% of our customers have a hosting account that runs on Apache, so this is rarely needed.

Can you recover the original PHP/CFM/ASPX files?

PHP is a server-side scripting language that normally generates HTML files.

This means you need access to the backend to download PHP files. The Internet Archive never had access to the server/backend of a website, so it does not have these PHP files.

The only thing the archive has is the HTML output that was generated by the PHP file. We can restore this under the old .php URL, but it is technically an HTML file.

WordPress is also written in PHP, so for the WordPress integration we reverse-engineer the PHP pages. These pages will not always function exactly like in the original website, but they will look the same.

WordPress also uses a MySQL database, which we also reverse-engineer in case you order the WordPress conversion.

Why can large websites take several hours to scrape?

There are a few reasons why large websites are slow:

  • The Internet Archive is slow.
  • We triple-check every broken link to make sure that it is indeed a broken link and not a broken archive.org server. This means that websites with many broken links in particular tend to be slow.
  • The archive blocks our IP if we scrape too fast. We are in direct contact with the Internet Archive, and they asked us to use a custom user agent so they can track our behavior. They have the legal right to shut us down, so we have to be nice to them.

If you want to help the Internet Archive, support them with a financial contribution, so they can invest in faster infrastructure. They are good guys who rely on open-source techniques, which is unfortunately one of the reasons their speed is limited. We give a free recovery if you send us a screenshot of a donation of $25 or more.

What about copyrights?

We don't know the laws of every country. It's probably illegal in some of them.

However, on most websites the copyright in the footer is credited to the domain, such as "Copyright © example.com". So if you own the domain, you could make a case for owning the copyright.

It's a grey area for sure, and there is probably nobody who can tell you with certainty, since there is not much legal precedent.

For the expired content, it's unlikely anybody will ever find out or care. We make sure the content is not published elsewhere on the internet, so it will be difficult for the other party to provide evidence of financial damages.

In reality, most content on the Internet is duplicated elsewhere, so it's unlikely that these types of borderline cases will cause you any problems. The worst-case scenario is receiving an official DMCA takedown request. You then have to remove the content before a certain date to avoid legal action.

We have never heard from a customer who actually got into legal trouble because of using our services.

Do you offer an unlimited monthly plan for WordPress site restorations?

We do not offer an unlimited subscription for WordPress conversions, because each domain takes our developer 1-2 hours. The process is only partially automated, which is why the price is higher than for the regular HTML solution.

Please note that the HTML scraper can still recover websites that were originally made with WordPress. They will look exactly the same as the original website. You just won't be able to use the WordPress dashboard (unless you opt for the WordPress conversion).

If I send the request and pay today, when can you deliver a recovered website with WordPress integration?

The time of delivery depends on the number of pages of the website. A small website is scraped in less than an hour, while a large website might take up to a few days. After the scraping is done, our developer usually delivers the WordPress conversion within 24-48 hours.

My company email only allows 10 MB per email. How can I receive the web archive?

We send you a download link instead of an email attachment, so this is not a problem.

Is there any way to restore a page from the Internet Archive so that links work directly to online pages rather than to archived pages?

Our script removes the archive prefix automatically. We restore all links exactly as they were when the website was still online. No traces of the Wayback Machine are left behind.
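Our scraper handles this internally, but the idea can be sketched in a few lines of Python (a simplified illustration, not our actual code):

```python
import re

# Wayback Machine URLs embed the original URL after a 14-digit timestamp,
# e.g. https://web.archive.org/web/20190101000000/http://example.com/page
# An optional suffix such as "id_" or "im_" after the timestamp marks
# raw or resource snapshots.
WAYBACK_PREFIX = re.compile(
    r"https?://web\.archive\.org/web/\d{14}(?:[a-z]{2}_)?/"
)

def strip_wayback_prefix(url: str) -> str:
    """Return the original URL with any Wayback Machine prefix removed."""
    return WAYBACK_PREFIX.sub("", url)
```

Running this on an archived link such as https://web.archive.org/web/20190101000000/http://example.com/page yields the original http://example.com/page, and links without a prefix pass through unchanged.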

What do the colors of the archive.org circles around the dates mean?

  • A blue circle means a status code of 2xx, such as 200. This is the normal status code for a regular web page on the Wayback Machine. A blue circle is usually a safe choice.
  • A green circle signifies a 3xx status code, which means a redirect. Try to avoid the green circles when picking a date to scrape. It's better to get the target URL that the redirect leads to.
  • An orange circle means a client-side error with a 4xx status code.
  • A red circle around the date means a server-side error, which carries a 5xx status code.
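In code, this color scheme boils down to the first digit of the status code (a sketch of the convention, not code from the Wayback Machine itself):

```python
def snapshot_color(status_code: int) -> str:
    """Map an HTTP status code to its Wayback Machine calendar color."""
    colors = {2: "blue", 3: "green", 4: "orange", 5: "red"}
    # Integer division by 100 extracts the status class (2xx, 3xx, ...).
    return colors.get(status_code // 100, "unknown")
```

For example, a 200 snapshot shows as blue and a 301 redirect as green.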

When I go to the scraping result page on waybackmachinedownloader.com, I see many links to waybackmachinedownloader.com. Why do you create these links?

These are links to pages that were not available on archive.org. Search engines do not like broken links, so our software automatically redirects broken links to the front page of the website (= the root domain).

If you preview the scraping results on our website, they will look like links to waybackmachinedownloader.com, because that is the front page of our website. However, after uploading the files to your domain, those links will point to yourdomain.com.

I have a question about the sitemap on my website.

Apart from any sitemap that might have been included with the old site, we create our own new sitemap at domain.com/sitemap.xml. In the robots.txt file we tell search engines to use this sitemap. If the original website also had a sitemap, you will now have two sitemaps. This is not a problem for search engines; the old sitemap will simply be treated as a normal page.
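A minimal robots.txt that points search engines at the new sitemap (with domain.com as a placeholder for your own domain) looks like this:

```
User-agent: *
Allow: /

Sitemap: https://domain.com/sitemap.xml
```

The Sitemap line must be an absolute URL, so replace domain.com with your actual domain.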