Scraping converts any public URL into clean, usable content. Send a URL to POST /v1/scrape and ZapFetch fetches the page, strips the noise, and returns exactly the formats you ask for — markdown for LLM pipelines, HTML for custom parsing, a screenshot for visual checks, or structured JSON via LLM extraction. All of this costs 1 credit per successful call, regardless of how many formats you request in the same call. Failed requests — timeouts, 4xx/5xx responses, DNS errors — are free.
## Request a scrape
Pass your URL and one or more `formats` values in the request body. The available formats are:

- `markdown` — clean prose, stripped of nav and boilerplate
- `html` — raw HTML of the rendered page
- `screenshot` — a PNG of the page at desktop viewport
- `extract` — structured JSON from LLM inference (see the Extract guide)
```bash
curl -X POST https://api.zapfetch.com/v1/scrape \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "formats": ["markdown"]
  }'
```
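The same call can be made from Python. Below is a minimal sketch using the `requests` library; the `build_scrape_request` and `scrape` helpers are hypothetical names introduced here only to show the request shape, not part of any official SDK:

```python
import os
import requests

API_URL = "https://api.zapfetch.com/v1/scrape"

def build_scrape_request(url, formats):
    """Assemble the headers and JSON body for a scrape call,
    mirroring the curl example above."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('ZAPFETCH_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {"url": url, "formats": list(formats)}
    return headers, payload

def scrape(url, formats=("markdown",)):
    """POST the request and return the parsed JSON response.
    Each requested format comes back under its own key in the body."""
    headers, payload = build_scrape_request(url, formats)
    resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()
```

A `timeout` is worth setting explicitly: a timed-out request costs nothing, so failing fast is cheaper than waiting on a slow page.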
You can request several formats simultaneously and still pay only 1 credit. The response object will contain a key for each format you requested.
```bash
curl -X POST https://api.zapfetch.com/v1/scrape \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "formats": ["markdown", "html", "screenshot"]
  }'
```
Combining markdown and screenshot in one call is useful for building visual QA checks on top of your text pipeline — you get both artifacts for the cost of a single request.
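One way to wire that into a pipeline is sketched below. The `save_artifacts` helper is hypothetical, and it assumes the screenshot arrives base64-encoded (JSON cannot carry raw PNG bytes); verify the actual encoding against the API reference:

```python
import base64
from pathlib import Path

def save_artifacts(response, out_dir="artifacts"):
    """Split a multi-format scrape response into text and image artifacts.

    Assumes 'markdown' is a string and 'screenshot' is a base64-encoded
    PNG -- an assumption about the response shape, not confirmed here.
    Returns the markdown so the text pipeline can continue with it.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "page.md").write_text(response["markdown"], encoding="utf-8")
    (out / "page.png").write_bytes(base64.b64decode(response["screenshot"]))
    return response["markdown"]
```

Both artifacts come from one response object, so the text pipeline and the visual QA check stay in sync for a single credit.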
## Credit cost
| Outcome | Cost |
|---|---|
| Successful scrape (HTTP 2xx) | 1 credit |
| Failed scrape (timeout, 4xx, 5xx, DNS error, robots.txt denial) | 0 credits |
Every response body includes `creditsUsed` and `remainingCredits` so you can track consumption in real time.

`remainingCredits` is cached for approximately 10 minutes to keep hot paths fast, so a burst of requests may show the count ticking down in batches rather than strictly one-by-one.
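Because `remainingCredits` can lag behind reality, an exact running total is easier to keep client-side by summing `creditsUsed`, which is reported per call. A minimal sketch (the `CreditTracker` class is a hypothetical helper, not part of the API):

```python
class CreditTracker:
    """Accumulate exact credit spend from per-call creditsUsed values,
    since remainingCredits may be up to ~10 minutes stale."""

    def __init__(self, starting_balance):
        self.starting_balance = starting_balance
        self.spent = 0

    def record(self, response_body):
        # Failed scrapes report 0 credits used, so summing is always safe.
        self.spent += response_body.get("creditsUsed", 0)

    @property
    def estimated_remaining(self):
        return self.starting_balance - self.spent
```

Call `record()` on every response body; the local total stays exact even while the server-reported balance ticks down in batches.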
## Next steps
- Scrape many pages at once with Crawl.
- Pull structured fields out of a page with Extract.
- Search the web and scrape results in one call with Search.