ZapFetch exposes five endpoints that cover the full web-data lifecycle: scrape a single page, crawl an entire site, search the web, map all reachable URLs on a domain, and extract structured data with an LLM. All requests go to https://api.zapfetch.com and authenticate with your UUID-style API key. The examples below show each endpoint end-to-end so you can copy, paste, and run them immediately.
1. Export your API key

Store your key in an environment variable so you don’t have to repeat it in every command.
export ZAPFETCH_KEY="YOUR_ZAPFETCH_API_KEY"
You can find your API key in the ZapFetch Console. The Free tier gives you 1,500 credits per month — no credit card required.
2. Scrape a single URL

Fetch one page and receive clean markdown or structured content in the response body.
curl -X POST https://api.zapfetch.com/v1/scrape \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "formats": ["markdown"]
  }'
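The same request can be assembled in Python. This is a minimal sketch using only the standard library; the endpoint, headers, and JSON body mirror the curl command above, and the actual send is shown as a comment so the snippet runs without network access.

```python
import json
import os

API_BASE = "https://api.zapfetch.com"

def build_scrape_request(url, formats=None):
    """Build the endpoint, headers, and JSON body for a /v1/scrape call."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('ZAPFETCH_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": url, "formats": formats or ["markdown"]}).encode()
    return f"{API_BASE}/v1/scrape", headers, body

endpoint, headers, body = build_scrape_request("https://example.com")
# To actually send it:
#   import urllib.request
#   req = urllib.request.Request(endpoint, data=body, headers=headers)
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```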
3. Crawl a whole site

Crawling is asynchronous. The first request starts the job and returns a job ID. Poll the status endpoint until the crawl finishes and all pages are available.
# Start the crawl — note the job ID in the response.
curl -X POST https://api.zapfetch.com/v1/crawl \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.example.com",
    "limit": 50
  }'

# Check status and retrieve the crawled pages.
curl https://api.zapfetch.com/v1/crawl/JOB_ID \
  -H "Authorization: Bearer $ZAPFETCH_KEY"
Crawls are billed per page fetched. Set limit to cap the maximum number of pages so a large site can't burn through your credit budget.
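The start-then-poll pattern above can be wrapped in a small helper. This sketch takes any callable that fetches the job's status, so it runs here against a stub; the "status" field and its "completed"/"failed" values are assumptions about the response shape, not confirmed field names.

```python
import time

def wait_for_crawl(fetch_status, poll_interval=2.0, timeout=300.0):
    """Poll a crawl job until it reports a terminal status.

    fetch_status: callable returning the job's status dict (e.g. the
    parsed body of GET /v1/crawl/JOB_ID). The 'status' field and the
    'completed'/'failed' values are assumed response-shape details.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_interval)
    raise TimeoutError("crawl did not finish within the timeout")

# Stubbed example: the job reports 'scraping' twice, then finishes.
states = iter([
    {"status": "scraping"},
    {"status": "scraping"},
    {"status": "completed", "data": []},
])
result = wait_for_crawl(lambda: next(states), poll_interval=0.0)
```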
4. Search the web

Run a live web search and optionally scrape each result in the same call. Use scrapeOptions to control the format returned for each result page.
curl -X POST https://api.zapfetch.com/v1/search \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "best vector databases 2026",
    "limit": 5,
    "scrapeOptions": { "formats": ["markdown"] }
  }'
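Once the search response comes back, you typically want the result list in a compact form. This sketch pulls (title, url) pairs out of a response dict; the "data", "title", and "url" field names are assumptions about the response shape, so adjust them to match what your account actually returns. It runs here against a sample dict, not a live call.

```python
def summarize_search_results(response):
    """Pull (title, url) pairs out of a search response.

    Assumes results live under a 'data' key with 'title' and 'url'
    fields per item — adjust to the real response shape.
    """
    return [
        (item.get("title", ""), item.get("url", ""))
        for item in response.get("data", [])
    ]

# Illustrative sample response, not real API output.
sample = {"data": [
    {"title": "Vector DB roundup", "url": "https://example.com/a"},
    {"title": "Benchmarks", "url": "https://example.com/b"},
]}
pairs = summarize_search_results(sample)
```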
5. Map a site

Discover every reachable URL on a domain without fetching page content. This is useful for planning a crawl or auditing site structure before you commit credits.
curl -X POST https://api.zapfetch.com/v1/map \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.example.com",
    "limit": 500
  }'
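A common follow-up to mapping is narrowing the discovered URLs to one section of the site before spending crawl credits. This sketch filters a list of mapped links by path prefix; the sample URLs are illustrative, not real map output.

```python
from urllib.parse import urlparse

def filter_links(links, path_prefix):
    """Keep only URLs whose path starts with path_prefix — useful for
    turning a full site map into a targeted crawl list."""
    return [u for u in links if urlparse(u).path.startswith(path_prefix)]

# Illustrative output of a map call.
links = [
    "https://docs.example.com/api/scrape",
    "https://docs.example.com/api/crawl",
    "https://docs.example.com/blog/launch",
]
api_links = filter_links(links, "/api/")
```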
6. Extract structured data

Pass a JSON Schema and a plain-language prompt. ZapFetch fetches the pages, runs inference, and returns typed fields — no parsing code required.
curl -X POST https://api.zapfetch.com/v1/extract \
  -H "Authorization: Bearer $ZAPFETCH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://news.ycombinator.com"],
    "prompt": "Extract the top 5 story titles with their points and author.",
    "schema": {
      "type": "object",
      "properties": {
        "stories": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "title":  { "type": "string" },
              "points": { "type": "integer" },
              "author": { "type": "string" }
            }
          }
        }
      }
    }
  }'
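Because the schema above is permissive (no required fields), it can pay to sanity-check the extracted output before using it downstream. This sketch applies the same typing the schema declares — string title, integer points, string author — to a sample payload; the payload is illustrative, not a real API response.

```python
def check_story(story):
    """Lightweight check that an extracted story matches the schema above:
    string title, integer points, string author."""
    return (isinstance(story.get("title"), str)
            and isinstance(story.get("points"), int)
            and isinstance(story.get("author"), str))

# Illustrative extracted payload, not real API output.
extracted = {"stories": [
    {"title": "Show HN: A tiny crawler", "points": 128, "author": "pg"},
]}
ok = all(check_story(s) for s in extracted["stories"])
```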

Next steps

Now that you’ve confirmed all five endpoints work, automate your workflows using one of the SDK quickstarts:
  • Use ZapFetch with Python — install firecrawl-py and run scrape, crawl, and extract with three lines of Python each.
  • Use ZapFetch with Node.js — install @mendable/firecrawl-js and get TypeScript-typed responses with built-in rate-limit handling.
  • Check your current plan and usage in the Console.