## Per-plan limits
These numbers are indicative. The authoritative values live in the Console under your current plan.
| Plan | /scrape, /search, /map, /extract (req/sec) | /crawl (req/sec) | Concurrent crawls |
|---|---|---|---|
| Free | 5 | 2 | 2 |
| Starter | 50 | 10 | 10 |
| Pro | 200 | 30 | 20 |
/v1/crawl has a tighter per-second budget because each call can dispatch
hundreds of background page fetches. Concurrent crawls cap the number of
/v1/crawl jobs in-flight for a single key; completed or cancelled
crawls don’t count.
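Because only in-flight jobs count against the cap, you can keep a key saturated by gating submissions behind a semaphore. A minimal sketch, assuming hypothetical `start_crawl` / `wait_for_completion` helpers that wrap your ZapFetch client (they are not part of any official SDK):

```python
import threading

MAX_CONCURRENT_CRAWLS = 2  # Free plan cap from the table above

# Each acquired slot represents one in-flight /v1/crawl job.
crawl_slots = threading.BoundedSemaphore(MAX_CONCURRENT_CRAWLS)

def run_crawl(start_crawl, wait_for_completion, url):
    """Submit a crawl only when a concurrency slot is free.

    start_crawl(url) would POST /v1/crawl and return a job id;
    wait_for_completion(job_id) would poll until the crawl finishes.
    Both are placeholders for your own client code.
    """
    with crawl_slots:  # blocks until a slot frees up
        job_id = start_crawl(url)
        return wait_for_completion(job_id)
```

Run `run_crawl` from a thread per URL and the semaphore ensures you never have more than `MAX_CONCURRENT_CRAWLS` jobs in flight at once.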
## 429 response shape
When you exceed a limit, ZapFetch returns `429 Too Many Requests` with a
`Retry-After` header giving the number of seconds to wait.
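An illustrative response follows; only the status line and the `Retry-After` header are described above, and the JSON body shape shown here is an assumption, not a documented contract:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 7
Content-Type: application/json

{"error": "rate_limited"}
```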
## Recommended client-side behavior
- Honor `Retry-After` literally: don't retry before the header says you can.
- On repeated 429s, fall back to exponential backoff with jitter: `sleep(min(cap, base * 2^attempt) + random_jitter)`.
- Batch work where possible: `/v1/crawl` accepts a `limit` and `maxDepth` so you do not have to orchestrate per-page `/v1/scrape` calls yourself.
- Pool connections per-process; a burst of "one-shot" calls hits rate limits faster than a steady stream with keep-alive.
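The first two recommendations can be sketched as a small retry wrapper. This assumes a hypothetical `do_request` callable returning `(status, headers, body)`; honor `Retry-After` when present, otherwise back off exponentially with jitter:

```python
import random
import time

BASE = 0.5   # initial backoff in seconds (tune for your workload)
CAP = 30.0   # ceiling on a single exponential wait

def backoff_delay(attempt: int, base: float = BASE, cap: float = CAP) -> float:
    """sleep(min(cap, base * 2^attempt) + random_jitter), as recommended above."""
    return min(cap, base * (2 ** attempt)) + random.uniform(0, base)

def fetch_with_retry(do_request, max_attempts: int = 5):
    """Call do_request() until it succeeds or the attempt budget is spent.

    do_request is a placeholder for your own HTTP call; it should return
    a (status, headers, body) tuple.
    """
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, headers, body
        # Honor Retry-After literally if present; otherwise back off.
        retry_after = headers.get("Retry-After")
        delay = float(retry_after) if retry_after is not None else backoff_delay(attempt)
        time.sleep(delay)
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```

The jitter term desynchronizes clients that hit the limit at the same moment, so they do not all retry in lockstep.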
## Requesting higher limits
Pro customers with bursty workloads can request a higher per-minute ceiling by emailing support@zapfetch.com; requests are reviewed case-by-case. Rate limits never exceed what your credit balance can actually support; they are a safety net, not a bottleneck for normal use.