Fact-check any content in real time with one API call.
from factivelabs import FactiveLabs

client = FactiveLabs(api_key="YOUR_API_KEY")

result = client.verify(
    content="The Earth is 4.5 billion years old. The Great Wall is visible from space.",
    mode="text",
)

for claim in result.claims:
    print(f"{claim.verdict}: {claim.text}")
Every claim passes through seven independent stages before a verdict is returned.
Each claim is analyzed by parallel classifiers that identify its knowledge domain (science, politics, history, etc.) and detect framing flags like negation, misconception, or attribution — so downstream stages know exactly what they’re dealing with.
Based on classification, each claim is directed to the appropriate verification path: general-knowledge claims go to one model, time-sensitive claims get live search, weather claims hit a weather API, and idioms or opinions are skipped entirely.
A structured query is built from the claim and its context, then run against live web sources. Results are retrieved from authoritative sources — not a static corpus — ensuring evidence is current and relevant.
The claim is cross-referenced against retrieved evidence. A verdict (confirmed, disputed, or inconclusive) is assigned along with a human-readable explanation citing the specific sources used.
When initial evidence is ambiguous or verdicts conflict, independent recheck rounds run with fresh queries and different models. Split decisions are resolved through majority consensus across rounds.
A specialized review layer scans every result for eight known failure patterns: verdict/explanation mismatches, nitpicking, absence-as-disproof, circular confirmation, and more. Flagged results are routed to corrective actions.
Clean results are delivered as structured JSON: verdict, confidence, explanation, sources, and metadata. Flagged results are re-verified, relabeled, or suppressed before delivery — so only quality-checked results reach your users.
Content is decomposed into precise, atomic claims before verification. Our extraction engine balances focus and coverage so every meaningful claim is captured at the right weight, fast.
Every claim returns one of three verdicts: confirmed, disputed, or inconclusive. Machine-readable, with no string parsing and no ambiguity. Use it directly in conditional logic.
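Because every verdict is one of three fixed strings, claims can be routed with plain dictionary lookups. A minimal sketch, assuming the claim objects carry the verdict and text fields shown in the SDK example above:

```python
def triage(claims):
    """Split claims into buckets by verdict for downstream handling."""
    buckets = {"confirmed": [], "disputed": [], "inconclusive": []}
    for claim in claims:
        buckets[claim["verdict"]].append(claim["text"])
    return buckets

claims = [
    {"verdict": "confirmed", "text": "The Earth is 4.5 billion years old."},
    {"verdict": "disputed", "text": "The Great Wall is visible from space."},
]
buckets = triage(claims)
if buckets["disputed"]:
    print(f"{len(buckets['disputed'])} claim(s) need review")  # prints "1 claim(s) need review"
```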
Each verdict includes the URLs and titles of the sources used to make the determination. Built-in transparency your users can trust.
Get the exact passage from each source that supports or contradicts the claim — the evidence itself, not just a link.
A one-liner for compact UIs, plus a full reasoning breakdown for detail views. Choose which to display — both are included in every response.
Every highlight region includes character offsets and the matched text itself — so you can pinpoint claims by position or simple string search. Build inline highlighting, annotations, or underlines with zero guesswork.
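With character offsets in hand, inline highlighting is a string splice. A sketch assuming each region carries "start", "end", and "text" fields (field names are illustrative, not the exact response schema); applying regions right-to-left keeps earlier offsets valid:

```python
def highlight(content, regions):
    """Wrap each highlight region in <mark> tags."""
    out = content
    for r in sorted(regions, key=lambda r: r["start"], reverse=True):
        # Offsets should match the region's own text exactly.
        assert out[r["start"]:r["end"]] == r["text"]
        out = out[:r["start"]] + "<mark>" + r["text"] + "</mark>" + out[r["end"]:]
    return out

doc = "The Earth is 4.5 billion years old."
regions = [{"start": 13, "end": 34, "text": "4.5 billion years old"}]
print(highlight(doc, regions))  # The Earth is <mark>4.5 billion years old</mark>.
```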
When a claim is disputed, get back corrected text ready to swap in. Go beyond flagging errors — deliver the fix, not just the flag.
Long documents come back with a full section map — name, word count, character offsets, and a skip recommendation for non-verifiable sections like references or appendices. Use it to build a section picker or automatically exclude boilerplate before verification.
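The section map makes pre-filtering a one-liner. A sketch assuming fields like "name", "word_count", "start", "end", and a boolean "skip" recommendation (names are illustrative, not the exact response schema):

```python
def verifiable_text(content, sections):
    """Keep only the sections not flagged for skipping."""
    keep = [s for s in sections if not s["skip"]]
    return "".join(content[s["start"]:s["end"]] for s in keep)

content = "Intro text. References: [1] ..."
sections = [
    {"name": "Body", "word_count": 2, "start": 0, "end": 11, "skip": False},
    {"name": "References", "word_count": 3, "start": 11, "end": 31, "skip": True},
]
print(verifiable_text(content, sections))  # prints "Intro text."
```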
Send it all at once, stream it in as it's generated, or submit in bulk.
Send the full content in a single request. Accepts any format — text, URLs, YouTube, TikTok, PDFs, audio, images. Best for articles, documents, and finished content.
POST /api/v1/verify
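Calling the endpoint without the SDK is a single POST. A standard-library sketch: the base URL is a placeholder, and the bearer-token header and JSON field names are assumptions mirroring the SDK example above.

```python
import json
import urllib.request

BASE_URL = "https://api.factivelabs.example"  # placeholder host

def build_verify_request(content, api_key, mode="text"):
    """Assemble the POST request for /api/v1/verify."""
    payload = json.dumps({"content": content, "mode": mode}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/verify",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_verify_request("The Earth is 4.5 billion years old.", "YOUR_API_KEY")
# result = json.load(urllib.request.urlopen(req))  # send when ready
```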
Push content chunk by chunk as it's generated. We detect paragraphs and start fact-checking before the content is finished. Built for LLM output streams, live transcription, and real-time captions.
POST /api/v1/verify/stream
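On the client side, consuming the stream means parsing Server-Sent Events. A sketch of the parsing step only, assuming one verified claim per `data:` event (the payload shape is an assumption based on the description above):

```python
import json

def parse_sse(lines):
    """Yield the JSON body of each data event in an SSE stream."""
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:  # a blank line terminates an event
            yield json.loads("\n".join(buffer))
            buffer = []

stream = [
    'data: {"verdict": "confirmed", "text": "The Earth is 4.5 billion years old."}',
    "",
]
for event in parse_sse(stream):
    print(event["verdict"], "-", event["text"])
```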
Submit up to 100 items in a single request. Each is processed independently and concurrently. Best for content pipelines, CMS ingestion, and bulk verification jobs.
POST /api/v1/verify/batch
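For pipelines with more than 100 items, the job splits into multiple batch requests. A sketch that chunks at the documented per-request limit; the payload field name "items" is an assumption.

```python
MAX_BATCH = 100  # documented per-request limit

def batch_payloads(items):
    """Yield one /api/v1/verify/batch payload per 100 items."""
    for i in range(0, len(items), MAX_BATCH):
        yield {"items": items[i:i + MAX_BATCH]}

payloads = list(batch_payloads([f"doc-{n}" for n in range(250)]))
print(len(payloads))  # prints 3 (100 + 100 + 50)
```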
All at once, streamed as they come, or picked up later in batch.
Send a request, get the full result when processing completes. Best for short content and simple integrations.
"stream": false, "async": false
Receive claim-by-claim results via Server-Sent Events as they're extracted and verified. Ideal for real-time UIs.
"stream": true
Submit and receive a job ID immediately. Poll the status endpoint to check progress. Best for large documents and batch processing.
"async": true
Three steps from zero to your first verified claim.
Create a free account and generate your key. 500 free claims per month included.
Use our Python SDK for the fastest integration, or call the REST API directly.
Send any text to the verify endpoint and get back claims with verdicts, sources, and explanations.
Integrate fact-checking into any workflow. Fact-check a single claim at a time or submit content with hundreds or even thousands of claims.
Add a verification layer to any LLM pipeline. Submit model output and get back structured evidence for every claim — before it reaches your users.
Give your AI agent the ability to verify its own output before presenting it to users. Catch hallucinations in RAG pipelines, chatbots, and copilots.
Flag disputed claims in user-generated content before it goes live. Run at ingestion time or as a batch job across your entire content library.
Verify sources and claims at scale. Process entire articles or documents and get claim-by-claim breakdowns with source citations.
Fact-check articles, press releases, and reports before publication. What takes hours manually takes seconds through the API.
Verify claims in filings, marketing materials, and regulated documents. Reduce liability by catching factual errors before they ship.
Toadstool is a live app built entirely on our API. Check it out!
A claim is a single verifiable statement extracted from your content. For example, "The Earth is 4.5 billion years old" is one claim. A paragraph might contain 3-5 claims. The API automatically extracts and counts individual claims.
Yes. The Playground lets you test the API without creating an account. For programmatic access, create a free account to get your API key — no credit card required.
On paid plans, you’ll be billed $0.01 per additional claim automatically. On the Free plan, requests will return a 402 error — upgrade to continue.
You’ll receive a 429 response with a Retry-After header. Upgrade to a higher plan for increased limits.
The Toadstool app is our consumer product — a fact-checking app built on the same pipeline. The API gives developers and AI agents programmatic access. Same engine, different interfaces.