```python
# Blocking helper — waits for the crawl to finish and returns all pages.
job = app.crawl_url(
    "https://docs.example.com",
    params={"limit": 50},
    wait_until_done=True,
)
for page in job["data"]:
    print(page["metadata"]["sourceURL"])
```
Every response includes a usage chunk in the metadata — the same object
the Firecrawl SDK already surfaces — so existing budget and alerting logic
keeps working unchanged.
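As a minimal sketch of what such budget logic might look like: the `usage` field name and its keys (`credits_used`, `credits_limit`) are illustrative assumptions here, not confirmed Firecrawl SDK fields.

```python
# Sketch of budget/alerting logic over a usage chunk in response metadata.
# NOTE: "usage", "credits_used", and "credits_limit" are assumed field
# names for illustration, not confirmed Firecrawl SDK fields.

def over_budget(response_metadata: dict, threshold: float = 0.9) -> bool:
    """Return True when consumed credits reach `threshold` of the limit."""
    usage = response_metadata.get("usage", {})
    used = usage.get("credits_used", 0)
    limit = usage.get("credits_limit")
    if not limit:
        return False  # no limit reported; nothing to alert on
    return used / limit >= threshold

# Usage with a mocked metadata payload:
metadata = {"usage": {"credits_used": 95, "credits_limit": 100}}
print(over_budget(metadata))  # 95/100 >= 0.9, so this prints True
```

Because the check only reads the metadata dict, it works the same whether the chunk comes from a blocking `crawl_url` call or any other response that carries the object.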