Support
1. Getting Started
What is Reality Check™?
A real-time AI calibration tool that monitors your LLM inputs and outputs. It flags potential hallucinations so you can trust your AI workflows—whether you're using a Chrome extension, embedding via our API/SDK, or running on-premise.
Key Benefits
- 86% fewer hallucinations in real time
- 25% higher RAG accuracy and up to 72% cost savings
- Works with any AI model (LLM, AI agent, DNN, etc.)
- Mathematical guarantees on prediction quality
- Free & Pro tiers, plus API and SDK for integration
2. Installation & Setup
A. Chrome Extension (Free & Pro)
- Install Reality Check™ from the Chrome Web Store.
- Click the Reality Check™ icon in your toolbar.
- Sign in with your Confidentia account (or continue as Guest).
- Choose your tier
- Free: Basic hallucination alerts right in your browser.
- Pro: Extended context windows, customizable alert thresholds, priority support, and enterprise SLAs.
Once installed, Reality Check™ runs automatically on any in-browser AI interface.
B. API & SDK Integration
Obtain an API key
- Sign up or upgrade at confidentia.ai → Request Early Access.
Install the package
pip install confidentia-realitycheck
Initialize in your code
from realitycheck import RealityCheck
rc = RealityCheck(api_key="YOUR_KEY")
Wrap your LLM calls
result = rc.check(prompt="…", model_output="…")
if result.hallucination:
    ...  # take action: reroute / reprompt / review
Tier differences
- Free: Up to 10 requests/minute, basic alerting.
- Pro: Higher throughput, advanced uncertainty metrics, SLA-backed uptime.
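Putting the steps above together, a minimal integration looks something like this. The generate_answer() function and the sample prompt are placeholders for your own LLM call; rc.check() and result.hallucination are the interface shown above.

from realitycheck import RealityCheck

rc = RealityCheck(api_key="YOUR_KEY")

def generate_answer(prompt):
    # Placeholder for your own LLM call (OpenAI, Anthropic, a local model, etc.)
    ...

prompt = "When was the Treaty of Westphalia signed?"
answer = generate_answer(prompt)

# Check every response before it reaches your users or downstream steps
result = rc.check(prompt=prompt, model_output=answer)
if result.hallucination:
    print("Warning: possible hallucination, verify before trusting this answer.")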
C. On-Premise / Air-Gapped
- Contact info@confidentia.ai for installer packages and offline activation.
3. How It Works
- Data Sampling – We analyze semantic and probabilistic characteristics of your prompt + response.
- Uncertainty Estimation – Our conformal prediction step quantifies "how sure" the AI is.
- Alert Classification
- 🟢 Green: No hallucination detected
- 🟡 Yellow: Uncertain – please verify
- 🔴 Red: Definite hallucination – do not trust
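As a rough mental model, you can think of each alert color as a band of statistical confidence produced by the conformal prediction step. The thresholds in this sketch are illustrative only and are not the product's actual calibration:

def classify_alert(confidence):
    # Illustrative cut-offs only; the real classification comes from
    # conformal prediction, not fixed thresholds.
    if confidence >= 0.9:
        return "green"   # no hallucination detected
    if confidence >= 0.6:
        return "yellow"  # uncertain, please verify
    return "red"         # treat as a hallucination, do not trust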
4. Real-Time Actions
When Reality Check™ warns you:
- Reroute to a stronger model (e.g. GPT-4o)
- Reprompt to refine your question
- Review by consulting a human expert
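If you are using the API/SDK, those three actions can be chained into a simple escalation policy. The sketch below is illustrative only: call_model() is a placeholder for your own LLM call, rc is the RealityCheck client from the Installation section, and the fallback order is just one possible policy.

def call_model(prompt, model="gpt-4o-mini"):
    # Placeholder for however you invoke your LLM.
    ...

def handle_warning(rc, prompt, answer):
    # Reroute: retry the same prompt on a stronger model (e.g. GPT-4o)
    answer = call_model(prompt, model="gpt-4o")
    if not rc.check(prompt=prompt, model_output=answer).hallucination:
        return answer
    # Reprompt: refine the question and try again
    answer = call_model(prompt + " Answer only from verifiable sources.")
    if not rc.check(prompt=prompt, model_output=answer).hallucination:
        return answer
    # Review: escalate the exchange to a human expert
    return None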
5. Troubleshooting
Extension icon not appearing
Solution: Confirm it's enabled at chrome://extensions and then restart Chrome.
Alerts not firing for certain models
Solution: Make sure every LLM call is wrapped by the RealityCheck API/extension hook.
"Invalid API key" error
Solution: Log into your Confidentia dashboard at confidentia.ai/account → API Keys, and verify you copied the key exactly.
Don't see "Yellow" or "Red" indicators
Solution: Check your subscription tier: the Free tier covers basic green-only detection; upgrade to Pro for the full alert spectrum.
On-prem installer won't activate
Solution: Send your license file and installer logs to info@confidentia.ai for offline activation assistance.
6. FAQ
Q: Which LLMs are supported?
A: Any model—OpenAI GPT, Anthropic, local DNNs, etc. Reality Check™ hooks into your prompt/response pipeline.
Q: What's the difference between Free & Pro?
- Free: Basic hallucination alerts, Chrome extension only.
- Pro: Full API access, higher throughput, enterprise SLAs.
Q: How much will I save?
A: In typical RAG pipelines, customers save up to 72% on query costs and see 25% higher accuracy.
Q: Can I host on my own servers?
A: Yes, on-premise and air-gapped deployments are available. Reach out for licensing.
7. Contact & Support
- Email: info@confidentia.ai
Still stuck? Drop us a line or use the chat widget on our site; our world-class AI science team is here to help!