The only platform that detects both persuasion techniques and addiction patterns in media content — across podcasts, YouTube, and news. Continuously. At scale.
Anyone can write a prompt that says “find persuasion techniques.” That's not what this is. XRAE is a fine-tuned large language model: trained on thousands of expert-annotated examples, distilled from the most capable AI systems available, and validated against peer-reviewed behavioral science. It doesn't guess. It detects.
XRAE was built through teacher-student distillation — the most capable frontier models annotated thousands of media segments, then that knowledge was distilled into a specialized detection model. This isn't prompt engineering. It's model engineering. The result is a system that has internalized the patterns, not one that follows instructions about them.
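The teacher-student pattern described above can be sketched in a few lines. Everything here is illustrative, not XRAE's actual code: `teacher_annotate` stands in for a frontier-model call, and the keyword check is a placeholder for real annotation.

```python
# Sketch of teacher-student distillation (all names are illustrative
# assumptions): a frontier "teacher" labels raw media segments, and those
# labels become the supervised fine-tuning data for a smaller student model.

def teacher_annotate(segment: str) -> list[str]:
    """Stand-in for a frontier-model call that returns taxonomy codes."""
    # Placeholder heuristic; the real teacher is a capable LLM.
    return ["EM01"] if "fear" in segment.lower() else []

def build_training_set(segments: list[str]) -> list[dict]:
    """Pair each segment with its teacher labels for student fine-tuning."""
    return [{"input": s, "labels": teacher_annotate(s)} for s in segments]

corpus = [
    "A segment that stokes fear about the future.",
    "A neutral weather report.",
]
dataset = build_training_set(corpus)
# `dataset` would then be used to fine-tune the specialized student detector.
```

The point of the design is the last comment: once the student is fine-tuned on this data, the patterns live in its weights rather than in a prompt.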
XRAE runs on dedicated hardware. No cloud API calls. No third-party data sharing. No per-token costs. No rate limits. Your content never leaves the infrastructure. This means we can analyze at a scale and speed that API-dependent services simply cannot match — and the model can never be taken away, deprecated, or repriced by a vendor.
The 32-code taxonomy wasn't invented; it was compiled from decades of published research. Each addiction pattern maps to specific peer-reviewed work: Skinner's variable reinforcement schedules, Zeigarnik's research on the pull of unfinished tasks (the Zeigarnik effect), Przybylski's FOMO research (250+ studies), and Horton & Wohl's parasocial interaction theory (70 years of validation). When XRAE flags a pattern, it's invoking science, not opinion.
Every piece of content is analyzed multiple times at varying temperature settings. Only detections that survive consensus voting make it to your dashboard. This filters out noise and one-off false positives: when XRAE flags something, the pattern was strong enough to be detected consistently across independent passes. On clean, informational content, the result is zero false positives.
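The consensus step works like a majority vote over independent passes. A minimal sketch, assuming each pass returns a set of taxonomy codes; the temperatures and vote threshold here are illustrative, not XRAE's actual parameters.

```python
from collections import Counter

def consensus_detect(pass_results: list[set[str]], min_votes: int = 2) -> set[str]:
    """Keep only taxonomy codes flagged in at least `min_votes` passes."""
    votes = Counter()
    for flagged in pass_results:
        votes.update(flagged)
    return {code for code, n in votes.items() if n >= min_votes}

# Three independent passes at different temperatures (illustrative outputs):
passes = [{"EM01", "AD03"}, {"EM01"}, {"EM01", "AD03", "LF02"}]
print(sorted(consensus_detect(passes)))  # ['AD03', 'EM01']
```

Note that LF02 appeared in only one pass, so it is discarded: a single-pass detection is treated as noise, which is exactly the filtering the consensus vote provides.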
There will be others who try to do this. They'll wrap a prompt around a general-purpose API and call it “media analysis.” Some already have. But there is a fundamental difference between asking a model to look for something and training a model to see it.
A prompted model is renting someone else's intelligence. When the API changes, your analysis changes. When the price goes up, your margins go down. When the model gets deprecated, you start over. You don't own anything.
And XRAE doesn't stand still. Every week, new training data flows back into the pipeline. Detection failures get corrected. Edge cases get labeled. The model retrains, re-evaluates, and improves — a compound learning loop that gets sharper with every episode it analyzes. The longer XRAE runs, the better it gets. That's not something you can replicate by updating a prompt.
XRAE is ours. The training data is ours. The taxonomy is ours. The hardware is ours. The model weights are ours. The improvement curve is ours. No dependency on any vendor's API, pricing, or product roadmap. That is what makes Prizm possible — and what makes it defensible.
After KGM v. Meta/Google, the question isn't whether content causes harm — a jury already decided that. The question is whether you can see it coming. Prizm gives you the data.
Build evidence for content liability cases with forensic-grade analysis of addiction patterns. Every finding traced to peer-reviewed psychology.
Know what you're buying. See the manipulation profile of any show before you place an ad. Protect your brand from association with addictive content.
Access the largest continuously-updated dataset of media influence techniques. 32-code taxonomy grounded in 70+ years of behavioral science.
Two dimensions of detection that no other system offers: the techniques that influence what you think, and the patterns that keep you coming back for more.
EM01–EM02: Fear manipulation and emotional exploitation
LF01–LF04: False reasoning, authority appeals, selective evidence
LG01–LG03: Charged wording, minimization, euphemism
SM01, IC01–IC03, CC01–CC03: Source positioning, identity construction, commitment pressure
FR01–FR06, SP01–SP02: Selective framing, narrative imprinting, social pressure
AD01–AD08: Content mechanisms that drive compulsive consumption
The AD codes are what separate XRAE from everything else. These aren't persuasion techniques; they're the structural mechanisms that make content psychologically compulsive. Each one is mapped to published research.
XRAE ingests content from 135+ feeds, runs every episode and article through the detection model, and delivers results to your dashboard — automatically.
RSS feeds, podcast audio, YouTube, and news articles flow into the pipeline continuously.
XRAE runs multi-pass consensus detection across all 32 codes. Only consistent findings survive.
Findings, trends, and exportable data appear in your dashboard within hours of publication.
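The three stages above (ingest, detect, deliver) share a simple pipeline shape. A sketch under stated assumptions; the function names and item fields are hypothetical, not XRAE's API.

```python
# Illustrative pipeline shape: ingest -> detect -> publish to dashboard.
# `detect` and `publish` are injected so the flow itself stays model-agnostic.

def analyze_item(item: dict, detect, publish) -> None:
    """Run one feed item through detection and hand results to the dashboard."""
    text = item.get("transcript") or item.get("article_text", "")
    findings = detect(text)  # stand-in for multi-pass consensus detection
    publish({"source": item["source"], "findings": sorted(findings)})

results = []
analyze_item(
    {"source": "example-podcast", "transcript": "episode text"},
    detect=lambda text: {"AD03"},  # stand-in for the detection model
    publish=results.append,
)
print(results)  # [{'source': 'example-podcast', 'findings': ['AD03']}]
```

Injecting `detect` and `publish` keeps the ingestion loop independent of any one model or dashboard backend, which matches the no-vendor-dependency stance described earlier.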
Prizm monitors content across the political spectrum — left, center, and right. Same model, same rubric, same standards for everyone.
56+ podcasts across left, center, and right. Full episode transcripts analyzed for all 32 techniques. Trending analysis shows patterns over time.
61+ news outlets monitored continuously. See how different outlets frame the same story. Compare influence profiles across the media landscape.
No tiers. No feature gates. Every subscriber gets everything.
Cancel anytime. No long-term contracts.