https://api.zapfetch.com and authenticate with your UUID-style API key. The examples below show each endpoint end-to-end so you can copy, paste, and run them immediately.
Export your API key
Store your key in an environment variable so you don’t have to repeat it in every command.
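As a minimal sketch, you can read the key from the environment in Python. The variable name `ZAPFETCH_API_KEY` and the bearer-token auth scheme are illustrative assumptions, not confirmed by this page:

```python
import os

# Set the key once in your shell (variable name is an assumption):
#   export ZAPFETCH_API_KEY="your-uuid-api-key"

# Read the key from the environment instead of hard-coding it.
API_KEY = os.environ.get("ZAPFETCH_API_KEY", "")

# Assumed auth scheme: a bearer token in the Authorization header.
AUTH_HEADER = {"Authorization": f"Bearer {API_KEY}"}
```

Every request sketch below reuses this header.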
Scrape a single URL
Fetch one page and receive clean markdown or structured content in the response body.
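A single-page scrape might be sketched as follows using only the standard library. The /v1/scrape path, the formats field, and the response shape are assumptions based on the description above; check the API reference for the exact schema.

```python
import json
import urllib.request

API_BASE = "https://api.zapfetch.com"

def build_scrape_request(url: str, api_key: str, formats=("markdown",)):
    """Build the POST request for a single-URL scrape.

    The /v1/scrape path and body fields are assumed for illustration.
    """
    body = json.dumps({"url": url, "formats": list(formats)}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/scrape",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("https://example.com", "your-api-key")
# Sending it (commented out so the sketch has no side effects):
#   with urllib.request.urlopen(req) as resp:
#       page = json.load(resp)  # markdown or structured content
```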
Crawl a whole site
Crawling is asynchronous. The first request starts the job and returns a job ID. Poll the status endpoint until the crawl finishes and all pages are available.
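The start-then-poll flow described above might look like the sketch below. The /v1/crawl paths, the id field, and the "completed" status value are assumptions about the response shape; the requests are built here but not sent.

```python
import json
import urllib.request

API_BASE = "https://api.zapfetch.com"

def build_start_request(root_url: str, api_key: str, limit: int = 50):
    """POST that starts a crawl job; the response is assumed to carry a job id."""
    body = json.dumps({"url": root_url, "limit": limit}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/crawl",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def build_status_request(job_id: str, api_key: str):
    """GET that checks on a running crawl job."""
    return urllib.request.Request(
        f"{API_BASE}/v1/crawl/{job_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

start = build_start_request("https://example.com", "your-api-key")
# Poll loop sketch (not executed here):
#   job_id = json.load(urllib.request.urlopen(start))["id"]
#   while True:
#       status = json.load(urllib.request.urlopen(
#           build_status_request(job_id, "your-api-key")))
#       if status.get("status") == "completed":
#           break          # all pages are now available
#       time.sleep(5)
```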
Crawls are billed per page fetched. Set limit to control the maximum number of pages so you stay within your budget.
Search the web
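Assuming a /v1/search endpoint whose body takes a query plus a scrapeOptions object (field names mirrored from the scrape sketch, not confirmed here), a search request might be built like this:

```python
import json
import urllib.request

API_BASE = "https://api.zapfetch.com"

def build_search_request(query: str, api_key: str, formats=("markdown",)):
    """Build a search call that also scrapes each result.

    The /v1/search path and the scrapeOptions schema are assumptions.
    """
    body = json.dumps({
        "query": query,
        # scrapeOptions controls the format of each scraped result page.
        "scrapeOptions": {"formats": list(formats)},
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/search",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("latest web scraping APIs", "your-api-key")
```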
Run a live web search and optionally scrape each result in the same call. Use scrapeOptions to control the format returned for each result page.
Map a site
Discover every reachable URL on a domain without fetching page content. This is useful for planning a crawl or auditing site structure before you commit credits.
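A map call could be sketched the same way; the /v1/map path and a links array in the response are assumptions for illustration:

```python
import json
import urllib.request

API_BASE = "https://api.zapfetch.com"

def build_map_request(root_url: str, api_key: str):
    """Build the POST that lists reachable URLs without fetching page content."""
    body = json.dumps({"url": root_url}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/map",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_map_request("https://example.com", "your-api-key")
# The response is assumed to be JSON with a list of URLs, e.g.:
#   with urllib.request.urlopen(req) as resp:
#       links = json.load(resp).get("links", [])
```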
Next steps
Now that you’ve confirmed all five endpoints work, automate your workflows using one of the SDK quickstarts:
- Use ZapFetch with Python: install firecrawl-py and run scrape, crawl, and extract with three lines of Python each.
- Use ZapFetch with Node.js: install @mendable/firecrawl-js and get TypeScript-typed responses with built-in rate-limit handling.
- Check your current plan and usage in the Console.