Per-plan limits
| Plan | /scrape, /search, /map, /extract (req/sec) | /crawl (req/sec) | Concurrent crawls |
|---|---|---|---|
| Free | 5 | 2 | 2 |
| Starter | 50 | 10 | 10 |
| Pro | 200 | 30 | 20 |
| Scale | 500 | 75 | 50 |
| Business | 1,000 | 150 | 100 |
| Enterprise | custom | custom | custom |
These numbers are indicative. The authoritative values for your account are shown in the Console under your current plan.
Why /v1/crawl has a tighter budget
/v1/crawl carries a lower per-second limit than other endpoints because each call can dispatch hundreds of background page fetches. The concurrent crawls column caps the number of /v1/crawl jobs in-flight for a single API key — completed or cancelled crawls do not count against the limit.
429 response shape
When you exceed a limit, ZapFetch returns `429 Too Many Requests` with a `Retry-After` header indicating how many seconds to wait:
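The exact payload is not reproduced here; as an illustration only (the body's field names and wording below are assumptions, not the documented schema), a 429 might look like:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 12
Content-Type: application/json

{
  "success": false,
  "error": "Rate limit exceeded. Retry after 12 seconds."
}
```

Whatever the body contains, the `Retry-After` header is the value your client should act on.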
Recommended client-side behavior
- Honor `Retry-After` literally: do not retry before the header says you can.
- Use exponential backoff with jitter on repeated 429s: `sleep(min(cap, base * 2^attempt) + random_jitter)`.
- Batch work where possible: `/v1/crawl` accepts a `limit` and `maxDepth`, so you do not need to orchestrate per-page `/v1/scrape` calls yourself.
- Pool connections per process: a burst of one-shot calls hits rate limits faster than a steady stream with keep-alive connections.
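The first two recommendations can be sketched as a small helper. This is a minimal sketch, not ZapFetch client code; the function name, `base`, and `cap` defaults are assumptions chosen for illustration:

```python
import random


def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-indexed).

    If the server sent a Retry-After value, honor it literally and wait
    exactly that long. Otherwise fall back to capped exponential backoff
    with full jitter: min(cap, base * 2^attempt) plus a random offset,
    so many throttled clients do not all retry at the same instant.
    """
    if retry_after is not None:
        # Never retry earlier than the server asked.
        return float(retry_after)
    return min(cap, base * 2 ** attempt) + random.uniform(0.0, 1.0)
```

A caller would `time.sleep(backoff_delay(attempt, retry_after))` after each 429, passing the parsed `Retry-After` header when present.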
Requesting higher limits
Pro customers with bursty workloads can request a higher rate-limit ceiling by emailing support@zapfetch.com. Requests are reviewed case by case. Note that rate limits never exceed what your credit balance can actually support: the limits are a safety net, not a bottleneck for normal usage.