You'll learn something real about AI honesty — and your data makes the calibration more accurate for everyone. Seven ways to participate, from 2 minutes to 20.
If you have 2 minutes, rate your AI. If you have 10, try the Mirror Challenge — it's our most valuable data type and the most eye-opening experience.
Ask your AI to assess itself, then rate the same AI yourself. The gap between how your AI sees itself and how you see it is the most valuable data point we can collect. Two perspectives, one system, one truth.
Start the Mirror Challenge →

Rate yourself on the same six dimensions. AI systems average 478/600. Humans average 430. Where do you land? Your self-assessment strengthens the human baseline that makes all other comparisons meaningful.
Rate Yourself →

No prompt needed. No AI interaction required. Just rate the AI you use most on six dimensions, based on your real, daily experience. The fastest way to contribute meaningful data.
Rate Your AI →

Three phases. Real calibration data from 315+ assessments. Your AI rates itself, encounters truth, then re-rates. The Learning Index measures honesty under pressure. No system has ever scored higher after seeing the data.
Begin Full Assessment →

Run the assessment on two AI systems you use regularly. Same person, same standards, different systems. Which one knows itself better? Comparative data from a single rater controls for individual bias.
Compare Two AIs →

A nurse sees AI differently than a developer. A teacher sees it differently than a trader. Rate AI through the lens of your professional expertise. Your domain knowledge reveals what general users miss.
Share Your Expertise →

AI systems update constantly. Your experience evolves. If you've assessed before, retake the assessment for the same system. Longitudinal data reveals whether AI honesty is improving or declining over time.
Retake Assessment →