Submit content and get back individual claims with verdicts, explanations, sources, and more — in seconds.
from factivelabs import FactiveLabs

client = FactiveLabs(api_key="YOUR_API_KEY")

result = client.verify(
    content="The Earth is 4.5 billion years old. The Great Wall is visible from space.",
    mode="text",
)

for claim in result.claims:
    print(f"{claim.verdict}: {claim.text}")
Submit text, URLs, documents, or other media formats. We extract claims and fact-check in real time.
Content is decomposed into precise, atomic claims before verification. Our extraction engine balances focus and coverage so every meaningful claim is captured at the right weight, fast.
Every claim returns one of three verdicts: confirmed, disputed, or inconclusive. Machine-readable, with no string parsing and no ambiguity. Use verdicts directly in conditional logic.
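Because verdicts are fixed strings, they slot straight into routing logic. A minimal sketch, using a stand-in dataclass in place of the SDK's claim object (the real object's fields may differ):

```python
from dataclasses import dataclass

# Stand-in for the SDK's claim object; verdict is one of
# "confirmed", "disputed", or "inconclusive" per the docs above.
@dataclass
class Claim:
    text: str
    verdict: str

def triage(claims):
    """Route claims by verdict: publish confirmed, block disputed, queue the rest."""
    publish, block, review = [], [], []
    for c in claims:
        if c.verdict == "confirmed":
            publish.append(c)
        elif c.verdict == "disputed":
            block.append(c)
        else:  # "inconclusive"
            review.append(c)
    return publish, block, review

claims = [
    Claim("The Earth is 4.5 billion years old.", "confirmed"),
    Claim("The Great Wall is visible from space.", "disputed"),
]
publish, block, review = triage(claims)
```

The same three-way branch works in a moderation queue, a CI gate, or a UI badge renderer.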
Each verdict includes the URLs and titles of the sources used to make the determination. Built-in transparency your users can trust.
Get the exact passage from each source that supports or contradicts the claim. The evidence, not just a link.
A one-liner for compact UIs, plus a full reasoning breakdown for detail views. Choose which to display — both are included in every response.
Character offsets (start/end) map every claim back to the original text. Build inline highlighting, annotations, or underlines with zero guesswork.
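A minimal sketch of inline highlighting built from those offsets. The claim dicts with `start`/`end` keys here are illustrative stand-ins for the API's response objects:

```python
def highlight(text, claims):
    """Wrap each claim span in <mark> tags using its start/end offsets.
    Spans are applied right-to-left so earlier offsets stay valid."""
    for c in sorted(claims, key=lambda c: c["start"], reverse=True):
        text = (
            text[:c["start"]]
            + "<mark>" + text[c["start"]:c["end"]] + "</mark>"
            + text[c["end"]:]
        )
    return text

text = "The Earth is 4.5 billion years old."
claims = [{"start": 0, "end": 35}]  # offsets cover the full sentence
html = highlight(text, claims)
```

Applying spans from the end of the string backward means no offset arithmetic is needed as the text grows.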
When a claim is disputed, get back corrected text ready to swap in. Deliver the fix, not just the flag.
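A sketch of swapping corrections in, assuming each disputed claim carries a correction string alongside its character offsets (the exact field name in the real response may differ):

```python
def apply_corrections(text, claims):
    """Replace each disputed span with its suggested correction,
    working right-to-left so earlier offsets remain valid."""
    disputed = [c for c in claims if c["verdict"] == "disputed" and c.get("correction")]
    for c in sorted(disputed, key=lambda c: c["start"], reverse=True):
        text = text[:c["start"]] + c["correction"] + text[c["end"]:]
    return text

text = "The Great Wall is visible from space."
claims = [{
    "verdict": "disputed",
    "start": 0,
    "end": 37,
    "correction": "The Great Wall is not visible to the naked eye from orbit.",
}]
fixed = apply_corrections(text, claims)
```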
All at once, streamed as they come, or picked up later in batch.
Send a request, get the full result when processing completes. Best for short content and simple integrations.
"stream": false, "async": false
Receive claim-by-claim results via Server-Sent Events as they're extracted and verified. Ideal for real-time UIs.
"stream": true
Submit and receive a job ID immediately. Poll the status endpoint to check progress. Best for large documents and batch processing.
"async": true
Every claim is extracted, verified against multiple sources, and returned with structured evidence.
Send any format — text, URL, audio, YouTube video, TikTok, PDF, DOCX, AI chat share links, images, and more. We extract the raw text automatically.
Our AI model decomposes the text into individual granular claims — each one a single verifiable statement. A single paragraph can produce dozens of claims, a long research paper can produce over 1,000.
Each claim is independently verified against multiple sources. Depending on the knowledge domain for the claim, different models and verification approaches are applied.
You get back structured results for every claim: a verdict, short and detailed plain-English explanation, source citations with relevant snippets, character offsets mapping each claim to its original position, and corrective text when a claim is disputed.
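Put together, a single claim in the response might look like the dict below. The field names and values are illustrative, chosen to match the fields described above, not an authoritative schema:

```python
# Illustrative claim payload; exact field names in the real API may differ.
claim = {
    "text": "The Great Wall is visible from space.",
    "verdict": "disputed",
    "summary": "The wall is too narrow to see unaided from orbit.",
    "explanation": "Astronaut accounts and imagery analyses agree the wall "
                   "cannot be distinguished by the naked eye from low Earth orbit.",
    "sources": [
        {"url": "https://example.com/article", "title": "Can you see the Great Wall from space?"},
    ],
    "start": 36,
    "end": 73,
    "correction": "The Great Wall is not visible to the naked eye from orbit.",
}

def one_liner(claim):
    """Compact rendering for a list row: verdict badge plus claim text."""
    return f'[{claim["verdict"].upper()}] {claim["text"]}'

line = one_liner(claim)
```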
Three steps from zero to your first verified claim.
Create a free account and generate your key. 500 free claims per month included.
Use our Python SDK for the fastest integration, or call the REST API directly.
Send any text to the verify endpoint and get back claims with verdicts, sources, and explanations.
Integrate fact-checking into any workflow. Fact-check a single claim at a time or submit content with hundreds or even thousands of claims.
Add a verification layer to any LLM pipeline. Submit model output and get back structured evidence for every claim — before it reaches your users.
Give your AI agent the ability to verify its own output before presenting it to users. Catch hallucinations in RAG pipelines, chatbots, and copilots. MCP server coming soon.
Flag disputed claims in user-generated content before it goes live. Run at ingestion time or as a batch job across your entire content library.
Verify sources and claims at scale. Process entire articles or documents and get claim-by-claim breakdowns with source citations.
Fact-check articles, press releases, and reports before publication. What takes hours manually takes seconds through the API.
Verify claims in filings, marketing materials, and regulated documents. Reduce liability by catching factual errors before they ship.
Plain text, URLs (web pages, news articles), YouTube videos, TikTok videos, Instagram posts, Reddit threads, PDFs, Word documents (DOCX), images, and audio files. The API automatically detects the format and extracts the text needed for our fact-checking pipeline.
Claims are verified in parallel, not sequentially — so a document with 100 claims takes roughly the same 10–15 seconds as one with 5. Short text can return in under 5 seconds. For longer content, use streaming mode to start receiving results immediately as each claim is verified.
A claim is a single verifiable statement extracted from your content. For example, "The Earth is 4.5 billion years old" is one claim. A typical news article might produce 30–100 claims depending on its length. You're billed per claim verified, not per API call — so a request that produces 20 claims counts as 20.
On our internal benchmarks, we typically see over 90% on both sensitivity and specificity. Performance will vary by domain.
Our pipeline cross-references each claim against multiple independent live web sources and weighs the evidence before rendering a verdict. It's designed to be conservative — when evidence is mixed or insufficient, you'll get an "inconclusive" verdict rather than a false positive. That said, no fact-checking system is perfect. We recommend using verdicts as a strong signal, not a final authority.
Yes. The API is built for production workloads. It supports concurrent requests, handles documents with dozens of claims in parallel, and offers three response modes (sync, streaming, batch) so you can integrate it into real-time UIs, background pipelines, or moderation queues.
Claims that lack sufficient evidence or are by nature subjective are returned with an "inconclusive" verdict. The response still includes the sources that were checked and an explanation of why a determination couldn't be made — so your users aren't left guessing.
Free accounts get 500 claims per month at 10 requests per minute. Pro ($49/mo) unlocks unlimited claims at 60 requests per minute with up to 50 items per batch. Enterprise plans go up to 300 requests per minute with 100 items per batch and a custom SLA. See the pricing page for full details.
We do not store the content you submit. Request payloads are processed in memory and discarded after the response is delivered. We retain basic usage metadata (timestamps, claim counts) for billing and rate limiting, but never the content itself.
Sync returns the full result in a single response — best for short content where you can wait a few seconds. Streaming sends results as Server-Sent Events as each claim is verified — ideal for real-time UIs. Batch lets you submit a request and poll for results later — useful for long documents or background processing.