Screenshot every page on a schedule and store the results in your cloud storage of choice.
Recover from hacks, validate deploys, prove compliance, and track how your site changes over time —
all automatically, with no manual browser sessions.
Version control and database backups protect your code and content — but they can't tell you
what your site looked like to a visitor at a specific point in time. CSS regressions,
injected spam, broken templates, and layout changes are invisible in a diff.
A visual backup captures the rendered output of every page — exactly as a real browser sees it.
That record becomes invaluable when something goes wrong: a hack, a botched deploy, a CMS upgrade
that silently breaks a template, or a legal dispute requiring proof of what was published on a given date.
✓
Timestamped records for legal and compliance purposes
✓
Full-page captures including below-the-fold content
✓
Optional PDF output for archival-grade fidelity
✓
Delivered straight to your existing storage infrastructure
✓
What visual backups protect
→Injected spam or malicious content after a hack
→Template regressions from a CMS or plugin update
→Layout shifts after a CSS framework upgrade
→Accidental deletion of images or hero content
→Brand or content changes made without approval
ℹ️
What code backups can't show you
→How the page actually rendered in a real browser
→Third-party widget or ad content visible to visitors
→Dynamic content loaded via JavaScript after page load
→Country- or user-specific content variations
How it works
Four steps to a fully automated visual backup
1
Export your URL list
Pull URLs from your sitemap, CMS export, or crawl tool. One URL per line in a text file, or two columns (URL, filename) in a CSV.
2
Configure the batch
Set your capture options once: full-page or viewport, country, format (PNG/PDF), viewport size, ad blocking, and your storage integration.
3
Submit and monitor
POST the file to /batch/create. Poll /batch/info for real-time progress — count, processed, and failed update as the job runs.
4
Screenshots land in storage
Each screenshot is pushed to your connected storage destination as it completes. No waiting for the full batch — files appear in real time.
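The URL list in step 1 is easy to generate programmatically. A minimal Python sketch that extracts URLs from a standard sitemaps.org XML sitemap into the one-URL-per-line text file format (the namespace URI is the standard sitemap protocol namespace):

```python
import xml.etree.ElementTree as ET

# Standard sitemaps.org namespace used by virtually all XML sitemaps.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_to_url_list(sitemap_xml: str) -> list[str]:
    """Extract every <loc> URL from a sitemaps.org XML sitemap."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
</urlset>"""

urls = sitemap_to_url_list(sample)
print("\n".join(urls))  # one URL per line, ready to save as sitemap-urls.txt
```

From here, writing the result to a text file and attaching it to the batch request is a two-liner.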
API Example
Full-site visual backup in one API call
Upload your URL list, set your storage integration, and every screenshot lands in your cloud
storage as it completes — no polling required for the files themselves.
import io.screenshotcenter.ScreenshotCenterClient;
import io.screenshotcenter.models.*;
import java.nio.file.*;
import java.time.LocalDate;
import java.util.List;

public class Main {
    public static void main(String[] args) throws Exception {
        var client = new ScreenshotCenterClient("YOUR_API_KEY");
        var urls = Files.readAllLines(Path.of("sitemap-urls.txt"));
        var batch = client.batch().create(urls, new BatchParams()
            .name("Weekly backup — " + LocalDate.now())
            .size("page").pdf(true).hideAds(true).hidePopups(true)
            .apps(List.of(42))); // 42 = your S3 app config ID
        System.out.println("Batch " + batch.getId() + " started");
    }
}
using ScreenshotCenter;

var client = new ScreenshotCenterClient("YOUR_API_KEY");
var urls = await File.ReadAllLinesAsync("sitemap-urls.txt");
var batch = await client.Batch.CreateAsync(urls, new BatchParams
{
    Name = $"Weekly backup — {DateTime.Today:yyyy-MM-dd}",
    Size = "page",
    Pdf = true,
    HideAds = true,
    HidePopups = true,
    Apps = new[] { 42 }, // 42 = your S3 app config ID
});
Console.WriteLine($"Batch {batch.Id} started");
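Once the batch is submitted, monitoring is a short loop against /batch/info. A Python sketch of just the loop logic: `fetch_info` is a hypothetical stand-in for the actual HTTP GET, and the count, processed, and failed fields are the ones the endpoint reports.

```python
import time

def wait_for_batch(fetch_info, poll_interval: float = 5.0) -> dict:
    """Poll batch info until every URL is either processed or failed.

    fetch_info: a callable returning a dict with 'count', 'processed',
    and 'failed' keys -- in practice, a GET request to /batch/info.
    """
    while True:
        info = fetch_info()
        done = info["processed"] + info["failed"]
        print(f"{done}/{info['count']} captured ({info['failed']} failed)")
        if done >= info["count"]:
            return info
        time.sleep(poll_interval)

# Demo with a stubbed fetch_info simulating three successive polls:
responses = iter([
    {"count": 3, "processed": 1, "failed": 0},
    {"count": 3, "processed": 2, "failed": 0},
    {"count": 3, "processed": 2, "failed": 1},
])
final = wait_for_batch(lambda: next(responses), poll_interval=0)
```

Because screenshots are pushed to storage as they complete, the loop is only needed if you want to know when the whole batch has finished, not to retrieve files.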
Automate on a schedule — cron or GitHub Actions
backup-site.sh + crontab
# crontab — run every Sunday at 02:00
0 2 * * 0 /usr/local/bin/backup-site.sh

# backup-site.sh
#!/bin/bash
DATE=$(date +%Y-%m-%d)
curl -X POST \
  https://api.screenshotcenter.com/v1/batch/create \
  -H "X-API-KEY: $SCREENSHOTCENTER_API_KEY" \
  -F "file=@/var/www/sitemap-urls.txt" \
  -F "name=Backup $DATE" \
  -F "size=page" \
  -F "apps[]=$APP_ID"
You can also trigger backups from GitHub Actions, GitLab CI, Zapier, Make, or n8n —
any system that can make an HTTP request on a schedule.
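For example, the same weekly schedule expressed as a GitHub Actions workflow. This is a sketch only: the secret names and the URL-list path are placeholders to adapt to your repository.

```yaml
# .github/workflows/visual-backup.yml
name: Weekly visual backup
on:
  schedule:
    - cron: "0 2 * * 0"   # every Sunday at 02:00 UTC
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Submit batch
        run: |
          curl -X POST https://api.screenshotcenter.com/v1/batch/create \
            -H "X-API-KEY: ${{ secrets.SCREENSHOTCENTER_API_KEY }}" \
            -F "file=@sitemap-urls.txt" \
            -F "name=Backup $(date +%Y-%m-%d)" \
            -F "size=page" \
            -F "apps[]=${{ secrets.APP_ID }}"
```

Keeping the URL list in the repository means each backup run captures exactly the pages the workflow version controls.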
Use Cases
Who needs a visual website backup
Visual backups aren't just for disaster recovery. Here are the workflows teams run once they have
a reliable screenshot history of their site.
🛡️
Hack and defacement recovery
Keep a rolling visual record of every page so you can immediately identify what changed after a breach or defacement. Compare before and after screenshots to pinpoint the exact pages and elements that were altered.
→Detect injected spam content or malicious redirects visually
→Document the scope of a defacement for insurance and legal purposes
→Restore visual reference for reconstruction after an attack
🔄
Pre-deployment snapshots
Capture the current state of every page before a major deploy, CMS update, or plugin change. If something breaks, you have an exact visual record of what the site looked like before — without relying on version control.
→Full-site backup before a WordPress core or plugin update
→Page-level snapshots before a CMS or platform migration
→Visual baseline before running A/B tests on key pages
📅
Scheduled periodic archival
Run automated batch captures on a daily, weekly, or monthly schedule to build a timestamped visual history of your site. Track how pages evolve over time without manual effort.
→Daily homepage capture to track campaign and content changes
→Weekly full-site archive for compliance and governance records
→Monthly competitive benchmarks alongside your own site snapshots
⚖️
Legal and regulatory evidence
Produce timestamped, immutable screenshot evidence of your site's content at a specific date — for contract disputes, copyright claims, regulatory audits, or advertiser proof-of-placement.
→Advertising proof-of-placement for clients and agencies
→Copyright dispute evidence showing first-publication date
→Regulatory filings requiring a record of published disclosures
📊
Brand consistency monitoring
Archive screenshots across all pages and regions after a brand refresh. Catch inconsistencies — wrong colours, old logos, broken fonts — before customers do, by comparing visual snapshots across your full site.
→Post-rebrand audit across hundreds of product and landing pages
→Validate brand consistency across multiple country or language subfolders
→Catch unauthorised template changes on franchisee or partner sites
🏢
Multi-site and multi-property management
Agencies and enterprises managing dozens of client or brand sites can run a single batch job to archive all properties simultaneously. Output goes directly to organised cloud storage folders, one per site.
→Digital agencies archiving client sites before handoffs
→Enterprise teams maintaining visual records across regional subdomains
→Franchise networks capturing store locator or landing pages per location
Storage Integrations
Deliver backups directly to your storage
Connect your preferred storage destination in the dashboard. Screenshots are pushed as soon as
each capture finishes — no ZIP to download, no manual file handling.
🖼️
PNG
Lossless full-fidelity screenshots. Best for visual diffing and quality-sensitive archives.
📷
JPEG
Smaller file sizes with minimal quality loss. Better when archiving thousands of pages with storage constraints.
⚡
WebP
Best compression-to-quality ratio. Ideal for modern pipelines that support the format.
📄
PDF
Archival-grade output that preserves text layer, links, and full-page fidelity. Required for legal and compliance archives.
Set pdf=true to generate both a screenshot and a PDF for every URL in the same batch run.
Start your first visual backup today
Get 500 free captures to test the backup workflow. No credit card required. Connect your
cloud storage, upload your URL list, and your first backup runs in minutes.
FAQ
Can I run backups on an automatic schedule?
The batch API has no built-in scheduler, but you can trigger a batch job on any schedule using a cron job, a CI/CD pipeline (GitHub Actions, GitLab CI), or a workflow tool like Zapier, Make, or n8n. Each run submits a fresh URL list and pushes results to your storage integration.
Can I store backups directly in my cloud storage without downloading a ZIP?
Yes. Connect a storage app (S3, Google Drive, Dropbox, OneDrive, SharePoint, Azure Blob, or FTP) in the dashboard, then pass its ID in the apps parameter when creating a batch. Screenshots are delivered to your storage destination as each one finishes.
Can I capture PDF versions alongside screenshots for archival?
Yes. Set pdf=true in the batch request and every URL in the batch will produce both a screenshot and a PDF. PDFs are included in the ZIP download or pushed to your storage integration alongside the image files.
How do I capture pages that require a login?
Use the steps parameter (JSON-encoded) to automate the login flow — navigate to the login page, type credentials, submit the form, then navigate to the target page. Alternatively, inject session cookies directly using the cookie parameter.
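To illustrate the shape of a JSON-encoded login flow, here is a Python sketch. The action names (goto/type/click), field names, and selectors below are illustrative assumptions, not the documented steps schema; check the steps parameter reference for the supported vocabulary.

```python
import json

# Hypothetical step actions shown for illustration only -- the real
# steps schema may use different action names and field names.
login_steps = [
    {"action": "goto",  "url": "https://example.com/login"},
    {"action": "type",  "selector": "#email",    "text": "backup-bot@example.com"},
    {"action": "type",  "selector": "#password", "text": "${LOGIN_PASSWORD}"},
    {"action": "click", "selector": "button[type=submit]"},
    {"action": "goto",  "url": "https://example.com/account/dashboard"},
]

# The batch request expects the steps parameter as a JSON string:
steps_param = json.dumps(login_steps)
print(steps_param)
```

Keep credentials out of the script itself; inject them from an environment variable or secret store at request time.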
Can I route backup captures through a specific country?
Yes. Set the country parameter to capture pages as they appear to visitors in that country — including localised pricing, language, and regulatory content. Useful for multi-region sites that serve different content per locale.
What happens if some URLs fail to load?
Failed URLs are automatically retried up to five times with exponential backoff. The final ZIP and CSV index list the status of every URL — finished or error — so you can identify and re-run any that could not be captured.
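The CSV index makes re-runs scriptable. A Python sketch that pulls the failed URLs into a fresh list for a follow-up batch; the column names url and status are assumptions about the index layout, so check the actual header row of your batch's CSV.

```python
import csv
import io

def failed_urls(index_csv: str) -> list[str]:
    """Return the URLs whose status column reads 'error'.

    Column names 'url' and 'status' are assumed -- verify them
    against the header row of the real CSV index.
    """
    reader = csv.DictReader(io.StringIO(index_csv))
    return [row["url"] for row in reader if row["status"] == "error"]

index = """url,status
https://example.com/,finished
https://example.com/pricing,error
https://example.com/about,finished
"""
retry_list = failed_urls(index)
print(retry_list)  # ['https://example.com/pricing']
```

Writing retry_list back out one URL per line gives you a ready-made input file for a retry batch.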