Give us a starting URL and a domain. The crawler follows links, discovers pages, and takes a screenshot of each one — up to 1,000 pages per crawl.
What it solves
Unlike batch screenshots, where you provide a URL list manually, a crawl discovers pages automatically. Point it at a seed URL and the service follows links within the domain to find and capture every page — ideal for site audits, documentation reviews, and full-site visual inventories, with no need to build a sitemap first.
Implementation notes
• Automatic link discovery via BFS crawl.
• Domain-restricted to prevent unbounded crawling.
• Cancel anytime — completed screenshots are preserved.
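The discovery strategy in the notes above can be sketched as a breadth-first search over links, restricted to the seed's domain and capped at the 1,000-page limit. This is an illustrative model, not the service's actual implementation; the `links` mapping stands in for live link extraction.

```python
from collections import deque
from urllib.parse import urlparse

def crawl_order(seed, links, max_pages=1000):
    """BFS over a link graph, restricted to the seed's domain.

    `links` maps each URL to the URLs found on that page — a
    hypothetical stand-in for fetching and parsing live pages."""
    domain = urlparse(seed).netloc
    seen, order = {seed}, []
    queue = deque([seed])
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)  # a real crawler would screenshot the page here
        for nxt in links.get(url, []):
            # Domain restriction: drop links that leave the seed's host.
            if urlparse(nxt).netloc == domain and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

links = {
    "https://example.com/": ["https://example.com/docs", "https://other.com/"],
    "https://example.com/docs": ["https://example.com/"],
}
print(crawl_order("https://example.com/", links))
# → ['https://example.com/', 'https://example.com/docs']
```

Note that the off-domain link to `other.com` is never enqueued, which is what keeps the crawl bounded.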
Crawl API endpoints
Base URL: https://api.screenshotcenter.com/api/v1
POST
/crawl/create — Start a crawl job
Submit a seed URL to begin crawling. The service discovers pages automatically via BFS link following, restricted to the same domain. Returns a crawl ID immediately.
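A minimal request sketch using only the Python standard library. The body fields (`url`, `max_pages`) and the bearer-token auth header are assumptions for illustration, not confirmed parameter names; only the base URL and path come from this page.

```python
import json
import urllib.request

API_BASE = "https://api.screenshotcenter.com/api/v1"

def build_create_request(seed_url, api_key, max_pages=1000):
    # `url` and `max_pages` are illustrative field names, not
    # confirmed API parameters.
    body = json.dumps({"url": seed_url, "max_pages": max_pages}).encode()
    return urllib.request.Request(
        f"{API_BASE}/crawl/create",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_create_request("https://example.com/", "YOUR_API_KEY")
print(req.full_url, req.get_method())
# Send with urllib.request.urlopen(req); per the docs above, the
# response returns a crawl ID immediately.
```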
GET
/crawl/info — Get crawl status
Poll for real-time progress: discovered URLs, pages captured, failures, and estimated completion time.
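A polling loop for this endpoint might look like the sketch below. The status callable is injected so the loop is easy to test; the response field names (`status`, `pages_captured`) and terminal states are assumptions.

```python
import time

def wait_for_crawl(get_status, poll_interval=2.0, sleep=time.sleep):
    """Poll crawl status until a terminal state is reached.

    `get_status` is any callable returning the /crawl/info payload;
    the field names used here are illustrative assumptions."""
    while True:
        info = get_status()
        if info["status"] in ("finished", "cancelled", "failed"):
            return info
        sleep(poll_interval)

# Fake status source standing in for real GET /crawl/info calls:
responses = iter([
    {"status": "running", "pages_captured": 10},
    {"status": "finished", "pages_captured": 42},
])
final = wait_for_crawl(lambda: next(responses), sleep=lambda s: None)
print(final["pages_captured"])  # → 42
```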
GET
/crawl/list — List recent crawls
Retrieve all recent crawl jobs for the authenticated account with pagination and status filtering.
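Since the listing is paginated, a client typically walks pages until the results are exhausted. A sketch under assumed response fields (`crawls`, `has_more`) and an assumed page-number scheme; `fetch_page` stands in for the real GET request with a status filter:

```python
def list_all_crawls(fetch_page, status=None):
    """Collect every crawl across pages of /crawl/list.

    `fetch_page(page, status)` is a stand-in for the real HTTP call;
    `crawls` and `has_more` are illustrative field names."""
    crawls, page = [], 1
    while True:
        resp = fetch_page(page, status)
        crawls.extend(resp["crawls"])
        if not resp["has_more"]:
            return crawls
        page += 1

# Fake two-page response standing in for the live endpoint:
pages = {
    1: {"crawls": [{"id": "c1"}, {"id": "c2"}], "has_more": True},
    2: {"crawls": [{"id": "c3"}], "has_more": False},
}
print(list_all_crawls(lambda page, status: pages[page]))
# → [{'id': 'c1'}, {'id': 'c2'}, {'id': 'c3'}]
```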
POST
/crawl/cancel — Cancel crawl job
Best-effort cancellation of a running crawl. Pages already dispatched to browser clients will complete; queued discoveries are dropped.
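The best-effort semantics described above can be modeled in a few lines: work already dispatched to browser clients runs to completion (and those screenshots are preserved), while queued discoveries are simply discarded. This is a conceptual model of the behavior, not service code.

```python
def cancel_crawl(in_flight, queued):
    """Model best-effort cancellation: dispatched pages complete,
    queued discoveries are dropped."""
    completed = list(in_flight)   # already dispatched — still finish
    dropped = list(queued)        # never dispatched — discarded
    return {"completed": completed, "dropped": dropped}

result = cancel_crawl(
    in_flight=["https://example.com/a"],
    queued=["https://example.com/b", "https://example.com/c"],
)
print(len(result["completed"]), len(result["dropped"]))  # → 1 2
```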