The repository that stores crawled pages is similar to any other system that stores data, such as a modern database.
For more information on using the dtSearch Spider to index dynamically-generated content, see "How to use dtSearch Web with dynamically-generated content".
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent.
For example, including a robots.txt file can request that bots index only parts of a website, or nothing at all.
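As a sketch of how a crawler can honor such a file, Python's standard `urllib.robotparser` can parse the rules and answer per-URL questions. The rules, bot name, and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: ask all bots to skip /private/.
rules = """
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("MyBot", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("MyBot", "https://example.com/private/page.html"))  # False
```

Note that robots.txt is advisory: it signals the site's wishes, and well-behaved crawlers check it before every fetch.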
For example, entering a crawl depth of 4 tells the Spider to follow links up to four levels deep into the site.
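A depth limit like this can be sketched as a breadth-first traversal that stops following links past the configured depth. The toy link graph below stands in for fetching and parsing real pages:

```python
from collections import deque

def crawl(start, get_links, max_depth):
    """Breadth-first crawl up to max_depth levels below the start page.

    get_links(url) returns a page's outgoing links; here it is a
    stand-in for downloading and parsing a real page.
    """
    seen = {start}
    queue = deque([(start, 0)])
    order = []
    while queue:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # do not follow links past the depth limit
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order

# Toy link graph standing in for a website.
site = {
    "/": ["/a", "/b"],
    "/a": ["/a/1"],
    "/b": [],
    "/a/1": ["/a/1/x"],
    "/a/1/x": [],
}
print(crawl("/", site.get, 2))  # → ['/', '/a', '/b', '/a/1']
```

With a depth of 2 the crawler visits the start page and two levels of links below it, but never fetches `/a/1/x` at level three.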
For more information on web site indexing options, see the dtSearch documentation. After a search, dtSearch Spider will display retrieved HTML or PDF files with hit highlighting, and with all links and images intact.
Web search engines and some other sites use Web crawling or spidering software to update their own web content or their indices of other sites' web content.
Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.
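The indexing step can be sketched as building an inverted index that maps each term to the pages containing it, assuming the pages have already been downloaded into a URL-to-text map (the pages below are invented, and a real engine would also normalize, stem, and score terms):

```python
def build_index(pages):
    """Build an inverted index: term -> set of page URLs containing it."""
    index = {}
    for url, text in pages.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(url)
    return index

def search(index, term):
    """Return the URLs containing term, in sorted order."""
    return sorted(index.get(term.lower(), set()))

# Invented downloaded pages: URL -> page text.
pages = {
    "/a": "web crawlers copy pages",
    "/b": "search engines index pages",
}
index = build_index(pages)
print(search(index, "pages"))     # → ['/a', '/b']
print(search(index, "crawlers"))  # → ['/a']
```

Looking a term up in the index is a dictionary access rather than a scan of every downloaded page, which is what makes searching efficient.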
If the crawler is archiving websites, it copies and saves the information as it goes.
The archives are usually stored in such a way that they can be viewed, read, and navigated as they were on the live web, but are preserved as "snapshots".
Crawlers consume resources on visited systems and often visit sites without approval.
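One common way crawlers limit that load is a politeness delay: a fixed pause between successive requests to the same host. A minimal sketch (`polite_fetch` and the fixed delay are illustrative, not any particular crawler's API):

```python
import time

def polite_fetch(urls, fetch, delay=1.0):
    """Fetch each URL with a fixed pause between requests so the
    crawler does not overload the host (a simple politeness policy)."""
    results = {}
    for i, url in enumerate(urls):
        if i:
            time.sleep(delay)  # wait before every request after the first
        results[url] = fetch(url)
    return results

# Demo with a stand-in fetch function instead of real HTTP requests.
print(polite_fetch(["/a", "/b"], str.upper, delay=0))  # → {'/a': '/A', '/b': '/B'}
```

Production crawlers typically go further, keeping a per-host queue and honoring any crawl-delay the site requests, but the idea is the same: spread requests out over time.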