Frequently Asked Questions

The "Force restart from beginning" checkbox will tell Project Lazarus to delete any saved output data from previous crawls using the seedlist file you have specified.  

This is useful if you want to use a seedlist with the same name as a previous one but do not want to continue the previous crawl.

When you do NOT have this box checked, Project Lazarus will attempt to continue your previous crawl session for the specified seedlist file name.
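Conceptually, the saved progress is keyed to the seedlist file name. Here is a rough Python sketch of that resume-or-restart logic; the state directory, file layout, and function name are illustrative assumptions, not the actual Lazarus internals:

import os

# Illustrative only: saved progress keyed to the seedlist file name.
def open_crawl_state(seedlist_path, force_restart, state_dir="state"):
    name = os.path.splitext(os.path.basename(seedlist_path))[0]
    state_file = os.path.join(state_dir, name + ".progress")
    if force_restart and os.path.exists(state_file):
        os.remove(state_file)        # "Force restart": discard saved output
    if os.path.exists(state_file):
        with open(state_file) as f:  # box unchecked: resume the old crawl
            return set(f.read().split())
    return set()                     # nothing saved: start fresh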

This is useful if you have a list of domains that you simply want to check for expiry: the crawler will NOT crawl the domains looking for links; it will only test each domain on the list to see whether it is expired.
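As a rough illustration of what a check-only pass looks like, here is a minimal Python sketch that uses DNS resolution as a cheap first filter. A real expiry check has to consult WHOIS or registrar data (presumably why Lazarus needs your Namecheap API, per the guarantee below), so treat this purely as a sketch of the idea:

import socket

# Heuristic only: a domain with no DNS answer is merely a *candidate*
# for being expired; the authoritative answer comes from WHOIS data.
def maybe_expired(domain):
    try:
        socket.gethostbyname(domain)
        return False                 # resolves, so treat as registered
    except socket.gaierror:
        return True                  # no DNS answer: worth a WHOIS lookup

for d in ("example.com", "almost-certainly-not-registered-xyz123.com"):
    print(d, "->", "check further" if maybe_expired(d) else "registered")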

To stop the crawler simply press CTRL+C on your keyboard.

You can then resume the crawl later by using the same seedlist and starting the crawl WITHOUT checking the "Force restart" checkbox.
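Behind the scenes, clean CTRL+C handling generally means catching the interrupt and flushing progress to disk before exiting, so the next un-forced run can resume. A tiny Python sketch of that pattern (the file name and variable are illustrative, not Lazarus internals):

import signal, sys

crawled = []                         # URLs finished so far (demo data)

def save_and_exit(signum, frame):
    # Flush progress so a later run without "Force restart" can resume.
    with open("demo.progress", "w") as f:
        f.write("\n".join(crawled))
    sys.exit(0)

signal.signal(signal.SIGINT, save_and_exit)   # CTRL+C triggers the save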

Version 6 of the crawler also ships with a new and improved crawl-filter file. This is actually very powerful when used properly, and I'll tell you why. Lazarus has a feature called "Wild" mode, which basically means it will not stay on the seedlist sites; it will crawl every link it finds. This can be good or bad: good because the crawler will have a never-ending queue of links to crawl, bad because some of the sites it wanders onto could be junk. However, you can put constraints on the crawler in wild mode by editing the crawl-filter to control what type of sites it is allowed to visit.
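In other words, wild mode changes which discovered links get queued. A simple Python sketch of that decision (all names here are illustrative, and the exact queuing rule is an assumption):

from urllib.parse import urlparse

# Sketch: wild=False queues only links on the seed hosts; wild=True
# queues any link that the crawl-filter allows.
def should_queue(url, seed_hosts, wild, passes_filter=lambda u: True):
    if not wild:
        return urlparse(url).hostname in seed_hosts
    return passes_filter(url)

print(should_queue("https://random.example/", {"seed.edu"}, wild=False))  # False
print(should_queue("https://random.example/", {"seed.edu"}, wild=True))   # True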

For example, say you are interested in crawling only .edu and .gov websites, because any expired domain found on those sites will obviously have a backlink from a .edu or .gov page. You would add *.edu* and *.gov* to the crawl-filter under the [include] section, like this:

[include]

*.edu*
*.gov*

This tells the crawler that it is okay to crawl any URL that includes ".edu" or ".gov". Then you can add a .edu or .gov site to your seedlist, set it to wild crawl, and it will keep finding other .edu and .gov websites for as long as there are links to them while it is crawling.
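If you are wondering how a pattern like *.edu* is matched against a whole URL, it is ordinary wildcard (glob) matching. A quick Python illustration using fnmatch, as a sketch of the idea rather than Lazarus's exact matcher:

from fnmatch import fnmatch

patterns = ["*.edu*", "*.gov*"]      # the [include] entries from above
for url in ("https://www.mit.edu/research/",
            "https://www.usa.gov/",
            "https://example.com/"):
    allowed = any(fnmatch(url, p) for p in patterns)
    print(url, "->", "crawl" if allowed else "skip")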

You can also use the crawl-filter to tell the crawler which URLs not to crawl by adding strings under the [exclude] section, like this:

[exclude]

*facebook.com*
*youtube.com*
*twitter.com*

That exclude section tells the crawler never to crawl URLs that contain those strings.
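Putting the two sections together, here is a short Python sketch of how a crawl-filter style file could be parsed and applied. The precedence rule here (an exclude match always wins) is my assumption for illustration, not a statement about the actual Lazarus behavior:

from fnmatch import fnmatch

def parse_filter(text):
    # Collect patterns listed under [include] and [exclude] headers.
    sections, current = {"include": [], "exclude": []}, None
    for line in text.splitlines():
        line = line.strip()
        if line in ("[include]", "[exclude]"):
            current = line[1:-1]
        elif line and current:
            sections[current].append(line)
    return sections

def passes_filter(url, sections):
    if any(fnmatch(url, p) for p in sections["exclude"]):
        return False                 # assumed rule: exclude always wins
    inc = sections["include"]
    return not inc or any(fnmatch(url, p) for p in inc)

filters = parse_filter("""
[include]
*.edu*
*.gov*
[exclude]
*facebook.com*
*youtube.com*
*twitter.com*
""")
print(passes_filter("https://news.stanford.edu/", filters))    # True
print(passes_filter("https://twitter.com/some.edu", filters))  # False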

Go to this URL to upgrade your old license to a new version 6 API Token:

https://license.lazaruscrawler.com:9951/convert/

If you run into any problems, please submit a support ticket.

In Lazarus version 6 the domain-filter has also been improved with [include] and [exclude] sections.

Guarantee

Our promise to you is that we will help you get the Project Lazarus expired domain scraper set up and running correctly on your machine, provided it meets the minimum system requirements: Windows XP or newer, 2 GB of RAM or more, and a 2 GHz CPU or faster. If for some reason we are unable to get Project Lazarus completely functional after you have properly configured your Namecheap and Moz APIs and given us access to your machine via RDP or TeamViewer, we will provide a full refund. We want you to be successful and to get value out of this software. We make no claims or guarantees about the quality or quantity of expired domains you will find, your ability to sell domains, or your ability to rank your own websites using the found domains.