
Fact-check anything in real time with one API call.

Submit content and get back individual claims with verdicts, explanations, sources, and more — in seconds.

from factivelabs import FactiveLabs

client = FactiveLabs(api_key="YOUR_API_KEY")

result = client.verify(
    content=(
        "The Earth is 4.5 billion years old. "
        "The Great Wall is visible from space."
    ),
    mode="text",
)

for claim in result.claims:
    print(f"{claim.verdict}: {claim.text}")
Response:
Confirmed: "The Earth is 4.5 billion years old"
Disputed: "The Great Wall is visible from space"
2 claims · 1.2s · 6 sources

Submit any content.

Submit text, URLs, documents, or other media formats. We extract the claims and fact-check them in real time.

Text
Web / URLs
PDF
Word
Voice / Audio
Images
YouTube
Instagram
TikTok
X
Reddit
ChatGPT
Perplexity
Gemini
Claude
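
As an illustration, a thin client-side helper can pick a `mode` value before calling the API. The mode strings below are assumptions modeled on the format list above, not the SDK's actual enum:

```python
from urllib.parse import urlparse

# Hypothetical mode strings mirroring the supported formats listed above.
EXTENSION_MODES = {".pdf": "pdf", ".docx": "word", ".mp3": "audio", ".png": "image"}

def guess_mode(content: str) -> str:
    """Guess a verify() mode from the shape of the input (illustrative only)."""
    parsed = urlparse(content)
    if parsed.scheme in ("http", "https"):
        host = parsed.netloc.lower()
        if "youtube.com" in host or "youtu.be" in host:
            return "youtube"
        return "url"
    for ext, mode in EXTENSION_MODES.items():
        if content.lower().endswith(ext):
            return mode
    return "text"
```

In practice the API detects the format itself, so a helper like this is only useful if you want to branch client-side before submitting.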

Get back structured data.

Granular Claims

Content is decomposed into precise, atomic claims before verification. Our extraction engine balances focus and coverage so every meaningful claim is captured at the right weight, fast.

Verdicts

Every claim returns one of three verdicts: confirmed, disputed, or inconclusive. Machine-readable, with no free-text parsing and no ambiguity. Use verdicts directly in conditional logic.
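
Because verdicts are fixed strings, they can drive branching directly. A minimal sketch, where the claim dicts stand in for the SDK's actual response objects:

```python
def triage(claim: dict) -> str:
    """Route a verified claim based on its verdict string."""
    verdict = claim["verdict"]
    if verdict == "confirmed":
        return "publish"
    if verdict == "disputed":
        return "flag_for_review"
    return "needs_human_check"  # inconclusive

claims = [
    {"text": "The Earth is 4.5 billion years old", "verdict": "confirmed"},
    {"text": "The Great Wall is visible from space", "verdict": "disputed"},
]
actions = [triage(c) for c in claims]
```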

Source Citations

Each verdict includes the URLs and titles of the sources used to make the determination. Built-in transparency your users can trust.

Source Snippets

Get the exact passage from each source that supports or contradicts the claim. Get the evidence, not just a link.

Short & Detailed Explanations

A one-liner for compact UIs, plus a full reasoning breakdown for detail views. Choose which to display — both are included in every response.

Sentence Positioning

Character offsets (start/end) map every claim back to the original text. Build inline highlighting, annotations, or underlines with zero guesswork.
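
The offsets make inline annotation a pure string operation. A sketch that wraps disputed spans in `[[...]]` markers, assuming each claim carries `start`, `end`, and `verdict` fields (the field names are assumptions about the schema):

```python
def annotate(text: str, claims: list[dict]) -> str:
    """Wrap each disputed claim's span in [[...]], using character offsets."""
    out, cursor = [], 0
    for claim in sorted(claims, key=lambda c: c["start"]):
        if claim["verdict"] != "disputed":
            continue
        out.append(text[cursor:claim["start"]])            # untouched text before the span
        out.append("[[" + text[claim["start"]:claim["end"]] + "]]")
        cursor = claim["end"]
    out.append(text[cursor:])                              # remainder after the last span
    return "".join(out)

text = "The Earth is 4.5 billion years old. The Great Wall is visible from space."
claims = [
    {"start": 0, "end": 34, "verdict": "confirmed"},
    {"start": 36, "end": 72, "verdict": "disputed"},
]
```

Swap the `[[...]]` markers for HTML tags or editor decorations to build real highlighting.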

Corrective Text

When a claim is disputed, you get back corrected text ready to swap in. Go beyond flagging errors: deliver the fix.
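
Paired with the character offsets described above, the swap is mechanical. A sketch assuming disputed claims carry `start`, `end`, and a `corrected` field (the field name is an assumption):

```python
def apply_corrections(text: str, claims: list[dict]) -> str:
    """Replace each disputed span with its corrective text, working right to
    left so earlier offsets stay valid after each substitution."""
    for claim in sorted(claims, key=lambda c: c["start"], reverse=True):
        if claim["verdict"] == "disputed" and claim.get("corrected"):
            text = text[:claim["start"]] + claim["corrected"] + text[claim["end"]:]
    return text

text = "The Great Wall is visible from space."
claims = [{
    "start": 0, "end": 36, "verdict": "disputed",
    "corrected": "The Great Wall is not visible to the naked eye from space",
}]
```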

Choose from three response modes.

All at once, streamed as they come, or picked up later in batch.

Synchronous Default

Send a request, get the full result when processing completes. Best for short content and simple integrations.

"stream": false, "async": false

Streaming SSE

Receive claim-by-claim results via Server-Sent Events as they're extracted and verified. Ideal for real-time UIs.

"stream": true

Batch / Async Polling

Submit and receive a job ID immediately. Poll the status endpoint to check progress. Best for large documents and batch processing.

"async": true

How our pipeline works.

Every claim is extracted, verified against multiple sources, and returned with structured evidence.

📄

1. Content Extraction

Send any format — text, URL, audio, YouTube video, TikTok, PDF, DOCX, AI chat share links, images, and more. We extract the raw text automatically.

Input "OpenAI was founded in 2015 by Sam Altman and Elon Musk as a nonprofit research lab focused on artificial general intelligence. The company released GPT-4 in early 2023, which was trained on over 1 trillion parameters. ChatGPT reached 100 million users within two months of launch, making it the fastest-growing consumer app in history."

2. Claim Extraction

Our AI model decomposes the text into individual granular claims, each one a single verifiable statement. A single paragraph can produce dozens of claims; a long research paper can produce over 1,000.

5 claims extracted:
1. "OpenAI was founded in 2015"
2. "Sam Altman and Elon Musk founded OpenAI"
3. "OpenAI was founded as a nonprofit research lab"
4. "GPT-4 was trained on over 1 trillion parameters"
5. "ChatGPT reached 100 million users within two months of launch"

🔍

3. Verification

Each claim is independently verified against multiple sources. Depending on the claim's knowledge domain, different models and verification approaches are applied.

4. Data Delivery

You get back structured results for every claim: a verdict, short and detailed plain-English explanations, source citations with relevant snippets, character offsets mapping each claim to its original position, and corrective text when a claim is disputed.
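
Put together, each claim in the response reads as a plain record. A sketch rendering one claim's fields; the field names follow the descriptions above but are assumptions about the exact schema:

```python
# Hypothetical shape of one claim in the structured response.
claim = {
    "text": "The Great Wall is visible from space",
    "verdict": "disputed",
    "short_explanation": "Not visible to the naked eye from low Earth orbit.",
    "sources": [{
        "title": "Great Wall visibility myth",
        "url": "https://example.com/source",
        "snippet": "...cannot be seen by the unaided eye from orbit...",
    }],
    "start": 36,
    "end": 72,
    "corrected": "The Great Wall is not visible to the naked eye from space",
}

def summarize(claim: dict) -> str:
    """One line per claim: verdict, text, and source count."""
    return f'{claim["verdict"].upper()}: "{claim["text"]}" ({len(claim["sources"])} source(s))'
```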

Get started in 2 minutes.

Three steps from zero to your first verified claim.

1

Get your API key

Create a free account and generate your key. 500 free claims per month included.

2

Install the SDK (optional)

Use our Python SDK for the fastest integration, or call the REST API directly.

pip install factivelabs
3

Verify your first claim

Send any text to the verify endpoint and get back claims with verdicts, sources, and explanations.

result = client.verify(content="Your text here", mode="text")

Built for production environments.

Integrate fact-checking into any workflow. Fact-check a single claim at a time or submit content with hundreds or even thousands of claims.

💪

AI Safety & Hallucination Detection

Add a verification layer to any LLM pipeline. Submit model output and get back structured evidence for every claim before it reaches your users.

🤖

AI Agents & Copilots

Give your AI agent the ability to verify its own output before presenting it to users. Catch hallucinations in RAG pipelines, chatbots, and copilots. MCP server coming soon.

📜

Content Moderation

Flag disputed claims in user-generated content before it goes live. Run at ingestion time or as a batch job across your entire content library.

📚

Research & Journalism

Verify sources and claims at scale. Process entire articles or documents and get claim-by-claim breakdowns with source citations.

📰

Publishing & Editorial

Fact-check articles, press releases, and reports before publication. What takes hours manually takes seconds through the API.

⚖️

Compliance & Legal

Verify claims in filings, marketing materials, and regulated documents. Reduce liability by catching factual errors before they ship.

See it in action at Factive-AI.com.

Review mode

Fact-check content from all forms of media: text docs, audio, YouTube, Instagram, links, AI shares, and more.


Search mode

Fact-check AI chats in real time. Try different models: ChatGPT, Perplexity, Gemini, and more.


Frequently asked questions.

What formats can I submit?

Plain text, URLs (web pages, news articles), YouTube videos, TikTok videos, Instagram posts, Reddit threads, PDFs, Word documents (DOCX), images, and audio files. The API automatically detects the format and extracts the text needed for our fact-checking pipeline.

How fast is it?

Claims are verified in parallel, not sequentially — so a document with 100 claims takes roughly the same 10–15 seconds as one with 5. Short text can return in under 5 seconds. For longer content, use streaming mode to start receiving results immediately as each claim is verified.

What counts as a "claim" for billing?

A claim is a single verifiable statement extracted from your content. For example, "The Earth is 4.5 billion years old" is one claim. A typical news article might produce 30–100 claims depending on the length. You're billed per claim verified, not per API call — so a request that produces 20 claims counts as 20.

How accurate is it?

On our internal benchmarks, we typically see over 90% for both sensitivity and specificity. Performance varies by domain.

Our pipeline cross-references each claim against multiple independent live web sources and weighs the evidence before rendering a verdict. It's designed to be conservative — when evidence is mixed or insufficient, you'll get an "inconclusive" verdict rather than a false positive. That said, no fact-checking system is perfect. We recommend using verdicts as a strong signal, not a final authority.

Can I use this in production?

Yes. The API is built for production workloads. It supports concurrent requests, handles documents with dozens of claims in parallel, and offers three response modes (sync, streaming, batch) so you can integrate it into real-time UIs, background pipelines, or moderation queues.

What happens when a claim can't be verified?

Claims that lack sufficient evidence or are by nature subjective are returned with an "inconclusive" verdict. The response still includes the sources that were checked and an explanation of why a determination couldn't be made — so your users aren't left guessing.

Is there a rate limit?

Free accounts get 500 claims per month at 10 requests per minute. Pro ($49/mo) unlocks unlimited claims at 60 requests per minute with up to 50 items per batch. Enterprise plans go up to 300 requests per minute with 100 items per batch and a custom SLA. See the pricing page for full details.

Do you store my content?

We do not store the content you submit. Request payloads are processed in memory and discarded after the response is delivered. We retain basic usage metadata (timestamps, claim counts) for billing and rate limiting, but never the content itself.

What's the difference between sync, streaming, and batch?

Sync returns the full result in a single response — best for short content where you can wait a few seconds. Streaming sends results as Server-Sent Events as each claim is verified — ideal for real-time UIs. Batch lets you submit a request and poll for results later — useful for long documents or background processing.