Frequently Asked Questions
You are probably viewing the website on your own computer, or you forgot to upload the .htaccess file to your server. Use our step-by-step installation guide to see a working website.
We use a file called “.htaccess” to rewrite URLs. This means that a URL such as example.com/page/ or example.com/page.php will actually serve an HTML file. You won't be able to see this in your browser: the rewrite is invisible, and the content remains accessible at the “pretty URL”. This technique is good for SEO, because incoming backlinks keep pointing to the original URLs.
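As a rough illustration, a rewrite rule of this kind could look like the sketch below (the page names are hypothetical examples, and the actual rules depend on the recovered site):

```apache
# Internally serve the archived HTML file when the old URL is requested.
# "page" and "page.html" are made-up example names.
RewriteEngine On
RewriteRule ^page/?$ page.html [L]
```

Because the rewrite is internal (no external redirect), visitors and search engines keep seeing the original URL in the address bar.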
There is also a surprising number of people who mix up the demo version and the paid version. If you ordered both, make sure to use the files from the paid version. And yes, we know that seems obvious.
Note that an .htaccess file only works on Apache servers. For a small additional fee, we can translate this file to work on other servers, such as Microsoft IIS or Nginx. More than 95% of our customers have a hosting account that runs on Apache, so this is rarely necessary.
PHP is a server-side scripting language that normally generates HTML files.
This means you need access to the backend to download PHP files. The Internet Archive never had access to the server/backend of a website, so it does not have these PHP files.
The only thing the archive has is the HTML output that was generated by the PHP files. We can restore this under the old .php URL, but it is technically an HTML file.
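One way to serve such restored .php URLs as static pages on Apache is to tell the server to treat .php files as plain HTML instead of executing them. This is a sketch of the idea, not necessarily the exact configuration we deliver:

```apache
# Serve .php files as static HTML rather than passing them to a PHP interpreter.
# Only appropriate when the restored site contains no real PHP code.
RemoveHandler .php
AddType text/html .php
```

With this in place, a request for example.com/page.php simply returns the archived HTML content of that file.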
WordPress is also written in PHP, so – for the WordPress integration – we reverse engineer the PHP pages. These pages will not always function exactly like the original website, but they will look the same.
WordPress also uses a MySQL database, which we also reverse engineer if you order the WordPress conversion.
There are a few reasons why large websites are slow:
If you want to help the Internet Archive, support them with a financial donation so they can invest in faster infrastructure. They are good guys who rely on open-source techniques, which is unfortunately one of the reasons their speed is limited. We give a free recovery if you send us a screenshot of a donation of $25 or more.
We don't know the laws of every country. It's probably illegal in some countries.
However, for most websites the copyright is credited in the footer to the domain, such as "Copyright © example.com". So if you own the domain, you could make a case for owning the copyrights.
It's a grey area for sure, and there is probably nobody who can tell you with certainty, since there is not much legal precedent.
For the expired content, it's unlikely anybody will ever find out or care. We make sure the content is not published elsewhere on the internet, so it will be difficult for the other party to provide evidence of financial damages.
In reality: most content on the internet is duplicated elsewhere, so it's unlikely that this type of borderline case will cause you any problems. The worst-case scenario is receiving an official DMCA takedown request. You then have to remove the content before a certain date to avoid legal action.
We have never heard of a customer who actually got into legal trouble because of using our services.
We do not offer an unlimited subscription for the WordPress conversion, because it takes our developer 1-2 hours per domain. The process is only partially automatic, which is why the price is higher than the regular HTML solution.
Please note that the HTML scraper can still recover websites that were originally made with WordPress. They will look exactly the same as the original website. You just won't be able to use the WordPress dashboard (unless you opt for the WordPress conversion).
The time of delivery depends on the number of pages of the website. A small website is scraped in less than an hour, and a large website might take up to a few days. After the scraping is done, our developer usually delivers the WordPress conversion within 24-48 hours.
We give you a download link, so this is not a problem.
Our script removes the archive prefix automatically. We restore all links to how they were when the website was still online. No traces of the Wayback Machine will be left.
These are links to pages that were not available on archive.org. Search engines do not like broken links, so our software automatically redirects broken links to the front page of the website (= the root domain).
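Our software handles this at download time, but if you later add pages yourself and want any remaining dead link to fall back to the front page, a server-side directive like the following would achieve a similar effect (a hedged example, not part of the files we deliver):

```apache
# Send any request for a missing file to the front page instead of a 404 page.
ErrorDocument 404 /index.html
```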
If you preview the scraping results on our website, they will look like links to waybackmachinedownloader.com, because that is the front page of our website. However, after uploading the files to your domain, those links will point to yourdomain.com
I have a question about the sitemap on my website.
Apart from any sitemap that might have been included with the old site, we create our own new sitemap on domain.com/sitemap.xml. In the robots.txt file we tell search engines to use this sitemap. If the original website also had a sitemap, then you will now have two sitemaps. This is not a problem for search engines. The old sitemap will just be regarded as a normal page.
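The pointer in robots.txt is a single line. A minimal example, with "domain.com" standing in for your own domain, looks like this:

```text
# robots.txt
User-agent: *
Sitemap: https://domain.com/sitemap.xml
```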
We download entire websites - not just one page.
If you can browse to a page by starting from the front page of a certain date, then we download that page. Our software works like a human user who clicks on all links on the front page. Then it visits those links and again clicks on all the links it can find on those pages. It continues like this until it has found all pages.
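The crawling strategy described above is essentially a breadth-first traversal of the site's link graph. The sketch below illustrates the idea on a small in-memory site map; a real scraper would fetch and parse each page over HTTP, and the page names here are made up:

```python
from collections import deque

# Hypothetical site: each page maps to the links found on that page.
site = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
    "/blog/post-1": ["/blog"],
    "/blog/post-2": ["/", "/blog"],
}

def crawl(start="/"):
    """Visit every page reachable by following links from the front page."""
    found = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in found:
                found.add(link)
                queue.append(link)
    return found

print(sorted(crawl()))
```

Because the traversal only follows links, a page that no other page links to would never be discovered this way; that is the gap the improved backend mentioned below addresses.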
For more information, see this blog post: Improved backend now also recovers pages without internal link path