How PRISM works

Every morning, PRISM collects stories from a small set of trusted sources, ranks them by how genuinely interesting they are, and distills them into a finite, readable edition. This page explains every step — what AI does, what it does not do, and where you can verify our claims.

Editorial principles

Finite over infinite

Each category shows roughly five stories. You can finish today's edition. There is no infinite scroll, no algorithm designed to maximize time on site.

AI is invisible until asked

Categorization, interest scoring, and summarization happen server-side before you see anything. The reader sees what looks like good editing. Audio is the only surfaced AI feature.

Speed is editorial

Pages load in under 1.5 seconds on a good connection, under 2.5 seconds on slow 4G. We treat performance as a courtesy to readers, not a metric.

No personalization

Every reader sees the same edition. There are no user accounts, no reading history, no behavioral targeting. The same curation for everyone.

News sources

We ingest from the following sources every 30 minutes. We chose them for editorial quality, licensing clarity, and global coverage.

  • The New York Times: Headlines and abstracts only — we link out, never republish body text.
  • The Guardian: Full articles under permissive terms. Our AI writes its own summary; we do not republish verbatim.
  • GNews: Breadth aggregator covering 60,000+ sources globally.
  • BBC: RSS top stories feed.
  • Reuters: World news RSS feed.
  • NPR: Top stories RSS feed.
  • The Verge: Technology and culture RSS feed.
  • Ars Technica: Technology and science RSS feed.

Legal note: We never republish full article text from sources that do not grant that right. For NYT and GNews-aggregated articles, we show our own AI-written summary and link directly to the original. All summaries are clearly attributed.

Categories

The AI assigns every article to exactly one of ten categories. The list is fixed — the model cannot invent new ones.

How AI is used

Deduplication

Before any LLM sees an article, we check URL hashes and embedding similarity against articles from the last 48 hours. If two stories are more than 85% similar (cosine distance < 0.15), only the higher-quality source is shown. Similarity uses OpenAI's text-embedding-3-small model — the only part of our pipeline that uses OpenAI; everything else runs on Moonshot Kimi.
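The two checks can be sketched in a few lines of Python. The 0.15 distance cutoff comes from the text above; the function names and URL normalization are our illustration, not PRISM's actual code:

```python
import hashlib
import math

def url_hash(url: str) -> str:
    """Stable hash of a normalized URL for exact-duplicate checks."""
    return hashlib.sha256(url.strip().lower().encode()).hexdigest()

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 minus cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def is_duplicate(new_vec, seen_vecs, threshold=0.15):
    """Flag the article if any recent embedding is within the distance threshold."""
    return any(cosine_distance(new_vec, v) < threshold for v in seen_vecs)
```

In production the comparison would run against stored 1536-dimensional vectors rather than the toy 2-dimensional ones shown here.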

Categorization & interest scoring

Each article is sent to Moonshot's Kimi model with a structured prompt (see Prompt registry below). The model assigns a category, writes a 3–5 sentence summary in neutral language, and scores interest from 0–100 using this rubric:

  • Novelty (0–25): Is this genuinely new vs. ongoing coverage?
  • Magnitude (0–25): How many people are materially affected?
  • Insight (0–25): Does this change how a smart, curious person sees the world?
  • Texture (0–25): Is the story specific and well-reported, not press-release filler?

Penalties (up to −30): SEO bait, listicles without substance, celebrity gossip without cultural weight, rewrites of yesterday's story.
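As a sanity check on the arithmetic, the rubric combines like this. The score itself is produced by the model; this sketch (with our own hypothetical names) only shows how server-side validation of the four axes, the penalty cap, and the 0–100 clamp would fit together:

```python
def interest_score(novelty, magnitude, insight, texture, penalty=0):
    """Combine the four 0-25 rubric axes, cap penalties at 30,
    and clamp the result to the 0-100 range the pipeline expects."""
    for axis in (novelty, magnitude, insight, texture):
        if not 0 <= axis <= 25:
            raise ValueError("each rubric axis must be in 0-25")
    penalty = min(penalty, 30)
    return max(0, novelty + magnitude + insight + texture - penalty)
```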

Audio narration

Articles with an interest score of 60 or higher receive a TTS narration using ElevenLabs (model: eleven_turbo_v2_5). The audio is generated from our AI-written summary, not from the original article body. Duration is typically 30–90 seconds per article.

Daily podcast brief

At 5:30 UTC each morning, Kimi selects the 8 most interesting articles from the previous 24 hours (with category diversity — no more than 2 per category), writes a 700–850 word podcast script, and ElevenLabs narrates it as a single mp3. The result is available at /podcast and in the RSS feed for podcast apps.
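The selection step (top articles by score, with a per-category cap) is a simple greedy pass. A hypothetical sketch of that logic, not PRISM's code:

```python
def select_podcast_articles(articles, limit=8, per_category=2):
    """Greedy pick: walk articles from highest interest score down,
    skipping any category that already has `per_category` picks."""
    picked, per_cat = [], {}
    for art in sorted(articles, key=lambda a: a["score"], reverse=True):
        cat = art["category"]
        if per_cat.get(cat, 0) < per_category:
            picked.append(art)
            per_cat[cat] = per_cat.get(cat, 0) + 1
        if len(picked) == limit:
            break
    return picked
```

A greedy pass can return fewer than `limit` articles on a day dominated by one or two categories, which is the intended trade-off: diversity wins over raw score.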

What AI does NOT do

  • AI does not fabricate facts. Every summary is grounded in the source article text. The model is instructed never to invent quotes, statistics, or events.
  • AI does not predict or editorialize. The interest score is a measure of journalistic quality and relevance, not political alignment or entertainment value.
  • AI does not select the daily lead story arbitrarily. The lead is always the highest-scored article in the World or Politics category for that day.
  • AI does not replace journalists. Every story links to its original source. We surface editorial work; we do not replace it.
  • AI does not generate images. We use the lead image from the source article. If none exists, we use a category-themed gradient — never an AI-generated image.

Prompt registry

Every AI call in PRISM is made with a versioned prompt. The ID is stored alongside each article so we can re-run evaluations when prompts change.

categorize-v1

Categorizes each article into one of 10 fixed categories, rates interest 0–100, writes a neutral summary, and extracts key entities.

rewrite-only-v1

Used when only a headline and abstract are available (e.g. NYT). Produces a 2–3 sentence contextual summary without inventing detail.

podcast-script-v1

Produces the daily 5-minute podcast script in SSML, covering 5–7 stories with a consistent editorial voice.

fallback-v1

Minimal prompt used when the main prompt fails validation twice. Returns only category, summary, and interest score.
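A versioned registry plus the "fails validation twice, then fall back" rule might be wired up like the sketch below. The prompt texts, function names, and validation hook are all illustrative assumptions, not PRISM's real prompts:

```python
PROMPTS = {
    # Versioned prompt IDs; the bodies here are placeholders, not the real prompts.
    "categorize-v1": "Assign one fixed category, score interest 0-100, write a neutral summary.",
    "fallback-v1": "Return only category, summary, and interest score.",
}

def run_with_fallback(article, call_model, validate, max_attempts=2):
    """Try the main prompt up to `max_attempts` times; if every attempt
    fails validation, fall back to the minimal prompt.
    Returns (prompt_id, result) so the ID can be stored with the article."""
    for _ in range(max_attempts):
        result = call_model(PROMPTS["categorize-v1"], article)
        if validate(result):
            return "categorize-v1", result
    return "fallback-v1", call_model(PROMPTS["fallback-v1"], article)
```

Returning the prompt ID alongside the result is what makes the registry auditable: every stored article records exactly which prompt version produced its metadata.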

Model

All LLM tasks use Moonshot Kimi (kimi-k2-0905-preview) via an OpenAI-compatible API. Embeddings use OpenAI's text-embedding-3-small (1536 dimensions). Voice synthesis uses ElevenLabs eleven_turbo_v2_5. Model names are stored as environment variables and versioned alongside prompt IDs.
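Under the environment-variable approach described above, a minimal config fragment might look like this. The variable names are our invention; PRISM's actual names are not published:

```python
import os

MODEL_CONFIG = {
    # Hypothetical variable names; defaults match the models named in the text.
    "llm": os.environ.get("PRISM_LLM_MODEL", "kimi-k2-0905-preview"),
    "embedding": os.environ.get("PRISM_EMBED_MODEL", "text-embedding-3-small"),
    "tts": os.environ.get("PRISM_TTS_MODEL", "eleven_turbo_v2_5"),
}
```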

Questions & corrections

If you believe an article has been misrepresented or misattributed, or if you are a publisher with licensing concerns, please contact us. We take editorial accuracy seriously and will respond within 24 hours.

PRISM is an independent project. It is not affiliated with any of the news organizations whose content it aggregates.
