ZapFetch is fully compatible with the official Firecrawl Python SDK (firecrawl-py). You only need to make two changes to any existing Firecrawl code: swap the api_url to https://api.zapfetch.com and use your ZapFetch API key. Everything else — method names, parameters, and response shapes — stays exactly the same.
Already using firecrawl-py against Firecrawl? Your code works against ZapFetch without any other modifications.
1. Install the SDK

Install firecrawl-py from PyPI using pip.
pip install firecrawl-py
2. Initialize the client

Import FirecrawlApp and pass your ZapFetch API key along with the ZapFetch base URL. Store your key in an environment variable rather than hard-coding it in source files.
from firecrawl import FirecrawlApp

app = FirecrawlApp(
    api_key="YOUR_ZAPFETCH_API_KEY",
    api_url="https://api.zapfetch.com",
)
Load your key from the environment with os.environ["ZAPFETCH_KEY"] to keep credentials out of your codebase.
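As a sketch of that tip, a small helper can read the key from the ZAPFETCH_KEY variable and fail fast when it is unset. The helper name zapfetch_kwargs is hypothetical, not part of the SDK:

```python
import os

def zapfetch_kwargs() -> dict:
    # Hypothetical helper: builds the keyword arguments for FirecrawlApp
    # from the ZAPFETCH_KEY environment variable (name from the tip above).
    key = os.environ.get("ZAPFETCH_KEY")
    if not key:
        raise RuntimeError("Set the ZAPFETCH_KEY environment variable first.")
    return {"api_key": key, "api_url": "https://api.zapfetch.com"}

# Usage: app = FirecrawlApp(**zapfetch_kwargs())
```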
3. Scrape a single page

Call scrape_url with the target URL and the list of formats you want back. The method returns a dict whose keys match the requested formats.
result = app.scrape_url(
    "https://example.com",
    params={"formats": ["markdown"]},
)
print(result["markdown"])
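Because the response dict is keyed by the formats you requested, a small guard makes a missing-format mistake obvious instead of surfacing as a bare KeyError. This is a sketch against a stubbed response, not a live call:

```python
def get_format(result: dict, fmt: str) -> str:
    # The response is keyed by requested format; a missing key usually means
    # the format was not included in the "formats" list of the request.
    if fmt not in result:
        raise KeyError(f"{fmt!r} not in response; add it to the requested formats.")
    return result[fmt]

# Stubbed response for illustration only:
stub = {"markdown": "# Example Domain"}
print(get_format(stub, "markdown"))
```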
4. Crawl a site

Use crawl_url with wait_until_done=True to block until the crawl finishes and receive all pages in a single response. Each page in job["data"] includes metadata such as sourceURL.
# Blocking helper — waits for the crawl to finish and returns all pages.
job = app.crawl_url(
    "https://docs.example.com",
    params={"limit": 50},
    wait_until_done=True,
)

for page in job["data"]:
    print(page["metadata"]["sourceURL"])
Crawls are billed per page fetched. Use the limit parameter to cap the number of pages and control your credit spend.
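For post-processing a finished crawl, the loop above can be turned into a helper that collects every sourceURL. It is sketched here against a stubbed job object with the same data/metadata shape the step describes:

```python
def crawled_urls(job: dict) -> list[str]:
    # Each page in job["data"] carries metadata.sourceURL (see the loop above).
    return [page["metadata"]["sourceURL"] for page in job["data"]]

# Stubbed job object for illustration:
stub_job = {"data": [
    {"metadata": {"sourceURL": "https://docs.example.com/intro"}},
    {"metadata": {"sourceURL": "https://docs.example.com/api"}},
]}
print(crawled_urls(stub_job))
```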
5. Extract structured data

Define a JSON Schema describing the fields you want, write a plain-language prompt, and call app.extract. ZapFetch fetches the pages and runs LLM inference to return typed data — no HTML parsing required.
schema = {
    "type": "object",
    "properties": {
        "stories": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "title":  {"type": "string"},
                    "points": {"type": "integer"},
                    "author": {"type": "string"},
                },
            },
        },
    },
}

data = app.extract(
    urls=["https://news.ycombinator.com"],
    params={
        "prompt": "Top 5 stories with points and author.",
        "schema": schema,
    },
)
print(data)
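Assuming the typed result mirrors the schema's top-level keys (the exact envelope around the extracted data is an assumption here, so verify against a real response), pulling out the story titles looks like this, shown against a stubbed result:

```python
def story_titles(extracted: dict) -> list[str]:
    # Assumes the typed result mirrors the schema: a top-level "stories" array.
    return [story.get("title", "") for story in extracted.get("stories", [])]

# Stubbed result matching the schema above, for illustration only:
stub = {"stories": [{"title": "Show HN: ZapFetch", "points": 128, "author": "alice"}]}
print(story_titles(stub))
```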

Credit usage

Every response includes a usage object in the metadata — the same structure the Firecrawl SDK already surfaces. Any existing budget-tracking or alerting logic you’ve built against Firecrawl continues to work unchanged against ZapFetch.
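A minimal budget-tracking sketch against that usage object follows; the field names below (a credits count inside usage) are assumptions, so check them against your actual responses before wiring up alerts:

```python
def credits_used(response: dict) -> int:
    # Hypothetical field names: response["metadata"]["usage"]["credits"].
    # Verify against a real response before relying on this.
    return response.get("metadata", {}).get("usage", {}).get("credits", 0)

def over_budget(responses: list[dict], budget: int) -> bool:
    # True once the summed credit spend across responses exceeds the budget.
    return sum(credits_used(r) for r in responses) > budget

# Stubbed responses for illustration:
stubs = [{"metadata": {"usage": {"credits": 3}}},
         {"metadata": {"usage": {"credits": 2}}}]
print(over_budget(stubs, 4))  # total of 5 credits exceeds a budget of 4
```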

Next steps