SELF-AUDIT RESULTS
We audited 28 GenuVerity reports using the same rigorous standards we apply to external claims. Six reports (21%) had zero quality concerns; across the rest, human review confirmed 13 valid issues, primarily around verdict assignment and expert citation specificity, and we have updated our guidelines to address those patterns. This audit demonstrates our commitment to applying our own standards to ourselves.
Fact-checkers hold others accountable. But who fact-checks the fact-checkers? GenuVerity believes transparency requires applying the same rigorous standards we use on external claims to our own work. This audit evaluates not technical compliance (source counts, formatting) but journalistic and analytical quality - are our verdicts supported? Is our reasoning sound? Do we treat subjects fairly?
Audit Methodology
Our quality framework evaluates reports across four dimensions, weighted by importance to accurate fact-checking:
- Sourcing: source credibility, primary vs. secondary sources, quote context integrity, acknowledgment of contradictory evidence.
- Reasoning: transparency of reasoning, correlation vs. causation handling, fallacy avoidance, complexity recognition.
- Fairness: charitable interpretation of opposing arguments, explanation of WHY claims spread, avoidance of dismissive tone.
- Verdict accuracy: scope match between verdict and claim, category accuracy, evidence-to-verdict alignment.
This framework draws from the IFCN Code of Principles, academic research on fact-checking effectiveness from the Reuters Institute, and verification standards from First Draft.
Audit Process
We used a hybrid approach: automated scanning flagged potential concerns, then human reviewers validated each flag. This balances efficiency with accuracy.
Sample selection: 28 reports stratified by verdict type (FALSE, TRUE, MIXED, MISLEADING, CONTEXT) to ensure representative coverage. Both recent and older reports were included to test consistency over time.
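As a rough illustration of the sampling step, the sketch below stratifies by verdict and draws from both the older and newer halves of each stratum. The record fields (`verdict`, `published`) and the per-stratum quota are hypothetical, not our actual schema.

```python
import random
from collections import defaultdict

def stratified_sample(reports, per_verdict=6, seed=42):
    """Draw up to `per_verdict` reports per verdict type, taking
    some from the older and some from the newer half of each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for report in reports:
        strata[report["verdict"]].append(report)  # assumed field name
    sample = []
    for group in strata.values():
        group.sort(key=lambda r: r["published"])  # assumed field name
        half = len(group) // 2
        older, newer = group[:half], group[half:]
        take = max(1, per_verdict // 2)
        sample += rng.sample(older, min(take, len(older)))
        sample += rng.sample(newer, min(take, len(newer)))
    return sample
```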
What we scanned for (a simplified sketch of these checks follows the list):
- Vague authority appeals ("experts say" without naming specific sources)
- Truncated quotes that might strip context
- Missing counter-arguments for FALSE/MISLEADING verdicts
- Verdict-evidence mismatches
- Undefined or missing verdict assignments
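To make this concrete, here is a minimal sketch of the kind of pattern-based scan described above, assuming each report is a dict with `verdict` and `body` fields. The regexes and field names are illustrative assumptions, not our production rules.

```python
import re

# Illustrative patterns only - the production rule set is broader,
# and every flag is validated by a human reviewer before it counts.
CHECKS = {
    "vague_authority": re.compile(r"\b(experts say|researchers found)\b", re.I),
    "truncated_quote": re.compile(r'"[^"]*\.\.\.[^"]*"'),  # ellipsis inside a quotation
}

def scan_report(report):
    """Return a list of potential concerns for one report."""
    flags = [name for name, pat in CHECKS.items() if pat.search(report["body"])]
    if not report.get("verdict"):
        flags.append("missing_verdict")
    elif report["verdict"] in ("FALSE", "MISLEADING") and "why" not in report["body"].lower():
        flags.append("missing_counter_context")  # crude proxy for a why-it-spread section
    return flags
```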
What We Found
The scanner flagged 39 potential concerns across 28 reports. After human validation, 13 were confirmed as valid (33% precision rate). The remaining 26 were false positives - legitimate uses flagged by overly broad patterns.
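For clarity about the arithmetic, precision here is simply confirmed flags over total flags:

```python
flagged, confirmed = 39, 13
precision = confirmed / flagged        # 0.333... -> the 33% reported above
false_positives = flagged - confirmed  # 26
```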
Valid Concerns Identified
- Missing verdicts: 5 reports had no verdict assigned in our data system. These were analysis or investigation pieces that should either have verdicts added or be reclassified as non-verdict content.
- Vague authority: 3 instances of "experts say" or "researchers found" language that could be more specific. While sources were linked, the text should name the specific expert or institution.
- Missing counter-context: 3 FALSE/MISLEADING reports didn't fully explain WHY the debunked claim gained traction. Adding this context improves reader understanding and avoids appearing dismissive.
False Positives (Not Actual Problems)
The scanner over-flagged several patterns that, upon review, were appropriate:
- Absolutist language: "No evidence exists" flagged 11 times - but this is appropriate fact-checking language when substantiated
- Opinion source detection: 6 flags for the word "opinion" - all were references to "public opinion polls" or self-aware statements, not problematic sourcing
- Truncated quotes: Most ellipsis usage was legitimate readability editing, not context-stripping
Clean Reports: Our Exemplars
6 reports (21%) had zero quality concerns flagged. These demonstrate best practices:
- Clear verdict-to-evidence alignment
- Specific expert citations with names and institutions
- Explanation of why false claims gained traction
- Full context for all quotes
- Acknowledgment of complexity where warranted
These reports will serve as internal templates for future work.
Actions Taken
Based on this audit, we've made the following changes:
Immediate Fixes
- Reviewing 5 undefined-verdict reports to assign appropriate verdicts or reclassify
- Adding specific researcher names to 2 reports flagged for vague authority
Process Updates
- Mandatory verdict rule: Every fact-check report MUST have a verdict. Analysis pieces without verdicts must use a different category.
- Specific expert citations: Guidelines now require naming the specific expert, institution, or study - not "experts say."
- Counter-argument requirement: FALSE/MISLEADING reports must include a section explaining WHY the false claim spread. (A sketch of an automated check for these rules follows.)
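A minimal sketch of how these rules could be enforced mechanically at publication time, reusing the illustrative report dict from the scanner example; the field names and messages are hypothetical:

```python
VAGUE_PHRASES = ("experts say", "researchers found")

def prepublication_check(report):
    """Return rule violations for one report (assumed keys:
    'category', 'verdict', 'body', 'why_it_spread')."""
    errors = []
    if report.get("category") == "fact-check" and not report.get("verdict"):
        errors.append("every fact-check report must carry a verdict")
    if any(p in report["body"].lower() for p in VAGUE_PHRASES):
        errors.append("name the specific expert, institution, or study")
    if report.get("verdict") in ("FALSE", "MISLEADING") and not report.get("why_it_spread"):
        errors.append("FALSE/MISLEADING reports need a why-it-spread section")
    return errors
```

In this scheme, a report would be held from publication until `prepublication_check` returns an empty list.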
Scanner Improvements
- Removing over-sensitive patterns (absolutist language, "opinion" keyword)
- Refining detection to reduce false positives
- Adding exclusions for "public opinion" and self-aware statements (see the sketch below)
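As a sketch of the exclusion logic (regexes again illustrative, not production code), benign phrases can be stripped before the flag is tested:

```python
import re

# Known-benign contexts that previously caused false positives.
BENIGN = re.compile(r"\bpublic opinion\b|\bin (?:my|our) opinion\b", re.I)
OPINION = re.compile(r"\bopinion\b", re.I)

def flag_opinion_sourcing(text):
    """Flag 'opinion' only if it survives removal of benign phrases."""
    return bool(OPINION.search(BENIGN.sub("", text)))
```

With this change, "public opinion polls" no longer triggers a flag, while an unsourced "opinion piece" still would.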
Commitment to Ongoing Transparency
This audit is not a one-time exercise. We commit to:
- Quarterly audits: Reviewing sample of reports against quality framework
- Public reporting: Publishing audit results like this one
- Methodology transparency: Our full quality rubric is available at genuverity.com/methodology
- Corrections policy: When we get something wrong, we issue prominent corrections
Fact-checking requires trust. Trust requires transparency. We hold ourselves to the same standards we apply to others.
This audit found that 79% of our reports had at least one flagged concern, though most flags were false positives. After human review, 13 valid issues were confirmed and are being addressed. Our updated guidelines are designed to prevent these patterns in future reports. We're not perfect - but we're committed to improvement and transparency about where we fall short.