
Fact-check anything in real time with one API call.

Submit content and get back individual claims with verdicts, explanations, sources, and more — in seconds.

curl -X POST https://api.factivelabs.com/api/v1/verify \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "The Earth is approximately 4.5 billion years old.",
    "mode": "text"
  }'

Fact-check any source

Send text, a document, or a URL and we'll extract the content and fact-check it in real time.

Text
Web / URLs
PDF
Word
Voice / Audio
Images
YouTube
Instagram
TikTok
X
Reddit
ChatGPT
Perplexity
Gemini
Claude

What you get back

Every API response is structured for developers. Here's what's included.

Granular Claims

Content is decomposed into precise, atomic claims before verification. Getting this right is the hardest part of fact-checking: too coarse and you miss nuance, too fine and you get noise. Our extraction engine balances focus and coverage so every meaningful claim is captured at the right weight, fast.

Structured Verdicts

Every claim returns one of three verdicts: confirmed, disputed, or inconclusive. Verdicts are machine-readable, with no string parsing and no ambiguity, so you can use them directly in conditional logic.
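
As a sketch of that conditional use, assuming a response in which each claim carries a `verdict` field holding one of the three strings above (the field names here are illustrative, not the official schema):

```python
# Hypothetical response shape: each claim record carries a "verdict" field.
# Field names are illustrative, not the official schema.
claims = [
    {"claim": "Water boils at 100 C at sea level", "verdict": "confirmed"},
    {"claim": "The Great Wall is visible from space", "verdict": "disputed"},
    {"claim": "Coffee increases longevity", "verdict": "inconclusive"},
]

def triage(claims):
    """Bucket claims by verdict so each group can be handled differently."""
    buckets = {"confirmed": [], "disputed": [], "inconclusive": []}
    for c in claims:
        buckets[c["verdict"]].append(c["claim"])
    return buckets

buckets = triage(claims)
```

Because the verdict vocabulary is fixed, the three buckets are exhaustive; anything outside them can be treated as an error.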

Source Citations

Each verdict includes the URLs and titles of the sources used to make the determination. Built-in transparency your users can trust.

Source Snippets

Get the exact passage from each source that supports or contradicts the claim. Get the evidence, not just a link.
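
A minimal sketch of rendering a verdict together with its citations and snippets, assuming each `sources` entry carries `title`, `url`, and `snippet` fields (illustrative names, not the official schema):

```python
# Illustrative claim record; the "sources" field shape is an assumption.
claim = {
    "claim": "Water boils at 100 degrees Celsius at sea level",
    "verdict": "confirmed",
    "sources": [
        {"title": "Boiling point", "url": "https://example.org/boiling",
         "snippet": "At 1 atm, water boils at 100 degrees Celsius."},
    ],
}

def render_citations(claim):
    """Format a claim with its supporting sources and evidence snippets."""
    lines = [f'{claim["verdict"].upper()}: {claim["claim"]}']
    for s in claim["sources"]:
        lines.append(f'  - {s["title"]} ({s["url"]})')
        lines.append(f'    "{s["snippet"]}"')
    return "\n".join(lines)

rendered = render_citations(claim)
```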

Short & Detailed Explanations

A one-liner for compact UIs, plus a full reasoning breakdown for detail views. Choose which to display — both are included in every response.

Sentence Positioning

Character offsets (start/end) map every claim back to the original text. Build inline highlighting, annotations, or underlines with zero guesswork.
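
A sketch of inline highlighting built from those offsets, assuming each claim carries `start`/`end` character positions into the original text (field names assumed):

```python
# Assumes each claim has "start"/"end" character offsets into the input text.
text = "Water boils at 100 degrees Celsius at sea level. Coffee is healthy."
claims = [{"start": 0, "end": 48, "verdict": "confirmed"}]

def highlight(text, claims, mark=("[", "]")):
    """Wrap each claim span in markers, applying edits right-to-left
    so earlier offsets stay valid as the string grows."""
    out = text
    for c in sorted(claims, key=lambda c: c["start"], reverse=True):
        out = out[:c["end"]] + mark[1] + out[c["end"]:]
        out = out[:c["start"]] + mark[0] + out[c["start"]:]
    return out

marked = highlight(text, claims)
```

Applying the edits from the last span backward is the key design choice: inserting markers never shifts the offsets of spans that haven't been processed yet.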

Corrective Text

When a claim is disputed, get back corrected text ready to swap in. Go beyond flagging errors: deliver the fix.
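
How swapping in a correction might look, assuming disputed claims include a `correction` string alongside `start`/`end` offsets (all field names here are assumptions):

```python
# Hypothetical record shape: disputed claims carry a ready-to-use "correction".
text = "The Great Wall of China is visible from space with the naked eye."
claims = [{
    "start": 0, "end": len(text), "verdict": "disputed",
    "correction": "The Great Wall of China is not visible to the naked eye from space.",
}]

def apply_corrections(text, claims):
    """Replace disputed spans with corrected text, working right-to-left
    so earlier offsets stay valid after each substitution."""
    out = text
    for c in sorted(claims, key=lambda c: c["start"], reverse=True):
        if c["verdict"] == "disputed":
            out = out[:c["start"]] + c["correction"] + out[c["end"]:]
    return out

fixed = apply_corrections(text, claims)
```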

Choose from three response modes

Get results all at once, streamed as they arrive, or picked up later as a batch job.

Synchronous Default

Send a request, get the full result when processing completes. Best for short content and simple integrations.

"stream": false, "async": false

Streaming SSE

Receive claim-by-claim results via Server-Sent Events as they're extracted and verified. Ideal for real-time UIs.

"stream": true

Batch / Async Polling

Submit and receive a job ID immediately. Poll the status endpoint to check progress. Best for large documents and batch processing.

"async": true

How the pipeline works

Every claim is extracted, verified against multiple sources in parallel, and returned with structured evidence — not a single AI opinion.


1. Content Extraction

Send any format — text, URL, YouTube video, TikTok, PDF, DOCX, or image. We extract the raw text automatically.

Input: "Water boils at 100 degrees Celsius at sea level. The Great Wall of China is visible from space with the naked eye. Coffee consumption has been linked to increased longevity."

2. Claim Extraction

Our AI model decomposes the text into individual granular claims — each one a single verifiable statement. A single paragraph can produce dozens of claims, and they're all verified in parallel.

3 claims extracted:
1. "Water boils at 100 degrees Celsius at sea level"
2. "The Great Wall of China is visible from space with the naked eye"
3. "Coffee consumption has been linked to increased longevity"

3. Verification

Each claim is independently verified against live web sources in parallel — not sequentially. We cross-reference multiple independent sources per claim, weigh agreement and contradiction, and assess the strength of the evidence. This isn't a single LLM call; it's a multi-stage research pipeline running concurrently across every claim.
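
A toy illustration of that claim-level parallelism: because each claim is checked independently, wall-clock time tracks the slowest claim rather than the sum. The `check_claim` stub stands in for the real multi-source verification step:

```python
from concurrent.futures import ThreadPoolExecutor

def check_claim(claim):
    """Stub for the real verification step, which would consult multiple
    live web sources and weigh the evidence for this one claim."""
    return {"claim": claim, "verdict": "confirmed"}

claims = ["claim A", "claim B", "claim C"]

# Fan the claims out across workers; total latency is bounded by the
# slowest single claim, not the number of claims.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(check_claim, claims))
```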

4. Verdicts + Sources

You get back structured results for every claim: a verdict, plain-English explanation, source citations with relevant snippets, character offsets mapping each claim to its original position, and corrective text when a claim is disputed.

Results:
CONFIRMED "Water boils at 100 degrees Celsius at sea level"
  Standard boiling point of water at 1 atm is 100 °C, consistent across multiple sources.

DISPUTED "The Great Wall of China is visible from space with the naked eye"
  NASA confirms it is not visible to the unaided eye from low Earth orbit.

INCONCLUSIVE "Coffee consumption has been linked to increased longevity"
  Some studies show correlation, but results vary and causation is not established.

Get started in 2 minutes

Three steps from zero to your first verified claim.

1

Get your API key

Create a free account and generate your key. 500 free claims per month included.

2

Install the SDK (optional)

Use our Python SDK for the fastest integration, or call the REST API directly.

pip install factivelabs

3

Verify your first claim

Send any text to the verify endpoint and get back claims with verdicts, sources, and explanations.

result = client.verify(content="Your text here", mode="text")
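
Consuming the result might then look like this. The response fields below (`claims`, `verdict`, `explanation`) are illustrative, not the official schema:

```python
# Sample result standing in for the object returned by client.verify();
# the field names are assumptions for illustration.
result = {
    "claims": [
        {"claim": "Water boils at 100 C at sea level",
         "verdict": "confirmed",
         "explanation": "Standard boiling point at 1 atm is 100 C."},
    ]
}

# One compact line per claim, e.g. for logging or a quick summary view.
summary = [f'{c["verdict"]}: {c["claim"]}' for c in result["claims"]]
```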

Built for developers and AI agents

Integrate fact-checking into any workflow. Fact-check a single claim at a time or submit content with hundreds or even thousands of claims.


AI Safety & Hallucination Detection

Add a verification layer to any LLM pipeline. Submit model output and get back structured evidence for every claim — before it reaches your users.
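
One way such a gate might be wired up, with `verify` as a stand-in for a call to the API and all field names assumed:

```python
def gate_llm_output(text, verify):
    """Block or annotate model output that contains disputed claims.

    `verify` is any callable returning claim records with a "verdict"
    field; here it stands in for the fact-checking API call.
    """
    claims = verify(text)
    disputed = [c for c in claims if c["verdict"] == "disputed"]
    if disputed:
        return {"ok": False, "disputed": [c["claim"] for c in disputed]}
    return {"ok": True, "text": text}

# Stub verifier that marks everything disputed, for illustration only.
fake_verify = lambda text: [{"claim": text, "verdict": "disputed"}]
decision = gate_llm_output("Lightning never strikes twice", fake_verify)
```

A stricter variant could also block on "inconclusive", depending on how conservative the pipeline needs to be.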


AI Agents & Copilots

Give your AI agent the ability to verify its own output before presenting it to users. Catch hallucinations in RAG pipelines, chatbots, and copilots. MCP server coming soon.


Content Moderation

Flag disputed claims in user-generated content before it goes live. Run at ingestion time or as a batch job across your entire content library.


Research & Journalism

Verify sources and claims at scale. Process entire articles or documents and get claim-by-claim breakdowns with source citations.


Publishing & Editorial

Fact-check articles, press releases, and reports before publication. What takes hours manually takes seconds through the API.


Compliance & Legal

Verify claims in filings, marketing materials, and regulated documents. Reduce liability by catching factual errors before they ship.

See the API in action at Factive-AI.com

Visit our demonstration site to see what you can do with our fact-checking API.

Try Factive AI — the real-time fact-checker

Fact-check anything from AI search results to Instagram posts. Drop in a URL, upload a PDF, or paste text — Factive extracts every claim and verifies it in real time.

AI search results
YouTube videos
Instagram & TikTok
PDFs & documents
News articles & URLs
Any text or image
Confirmed: "The Earth orbits the Sun at an average distance of 93 million miles"
Disputed: "Lightning never strikes the same place twice"
Confirmed: "Water molecules consist of two hydrogen atoms and one oxygen atom"
Inconclusive: "Coffee consumption correlates with increased longevity"

Frequently asked questions

What formats can I submit?

Plain text, URLs (web pages, news articles), YouTube videos, TikTok videos, Instagram posts, Reddit threads, PDFs, Word documents (DOCX), images, and audio files. The API automatically detects the format and extracts the text needed for our fact-checking pipeline.

How fast is it?

Claims are verified in parallel, not sequentially — so a document with 100 claims takes roughly the same 10–15 seconds as one with 5. Short text can return in under 5 seconds. For longer content, use streaming mode to start receiving results immediately as each claim is verified.

What counts as a "claim" for billing?

A claim is a single verifiable statement extracted from your content. For example, "The Earth is 4.5 billion years old" is one claim. A typical news article might produce 30–100 claims depending on the length. You're billed per claim verified, not per API call — so a request that produces 20 claims counts as 20.

How accurate is it?

On our internal benchmarks, we typically see over 90% for both sensitivity and specificity. Performance varies by domain.

Our pipeline cross-references each claim against multiple independent live web sources and weighs the evidence before rendering a verdict. It's designed to be conservative — when evidence is mixed or insufficient, you'll get an "inconclusive" verdict rather than a false positive. That said, no fact-checking system is perfect. We recommend using verdicts as a strong signal, not a final authority.

Can I use this in production?

Yes. The API is built for production workloads. It supports concurrent requests, handles documents with dozens of claims in parallel, and offers three response modes (sync, streaming, batch) so you can integrate it into real-time UIs, background pipelines, or moderation queues.

What happens when a claim can't be verified?

Claims that lack sufficient evidence or are by nature subjective are returned with an "inconclusive" verdict. The response still includes the sources that were checked and an explanation of why a determination couldn't be made — so your users aren't left guessing.

Is there a rate limit?

Free accounts get 500 claims per month at 10 requests per minute. Pro ($49/mo) unlocks unlimited claims at 60 requests per minute with up to 50 items per batch. Enterprise plans go up to 300 requests per minute with 100 items per batch and a custom SLA. See the pricing page for full details.

Do you store my content?

We do not store the content you submit. Request payloads are processed in memory and discarded after the response is delivered. We retain basic usage metadata (timestamps, claim counts) for billing and rate limiting, but never the content itself.

What's the difference between sync, streaming, and batch?

Sync returns the full result in a single response — best for short content where you can wait a few seconds. Streaming sends results as Server-Sent Events as each claim is verified — ideal for real-time UIs. Batch lets you submit a request and poll for results later — useful for long documents or background processing.