Research Intelligence Platform

The research layer your
whole org uses —
and your team controls.

The studies happen. The insights are real. But by the time a PM needs them, they're buried in a Drive folder from eight months ago. The research isn't bad. The infrastructure is broken.

60–70%
synthesis time saved
Less formatting. More thinking.
6 hrs
study → insights live
Was 3 days. Now the same afternoon.
5
teams with access
PM, Sales, Marketing, CS, and Data.
The catalogue problem

Research doesn't disappear.
It becomes unfindable.

Product decisions get made without evidence — not because the research doesn't exist, but because nobody can retrieve it when it matters.

Formats don't connect
Docs, Notion, Otter — all isolated, none searchable together.
Insights buried in prose
A PM can't extract signal from a 40-page report in two minutes.
Context dies with team changes
When the researcher who ran the study leaves, the interpretation leaves with them.
DRIVE: /Research/2024–2025
247 files · last indexed: never
Q4-2024
Onboarding usability study — 6 participants
Report exported to Docs. Mentioned in Slack once. Never cited again.
onboarding · usability · buried
Q3-2024
PM interviews — decision-making patterns
Transcript in Otter. Summary never written. Insights: unknown.
discovery · interviews · lost
Q2-2024
Pricing page — 3 concepts tested
Critical findings about conversion trust. PM left. Context gone.
pricing · conversion · orphaned
Q1-2024
JTBD interviews — 12 users, 3 segments
The foundational insight work. Locked in a 47-page Notion doc.
jtbd · discovery · inaccessible
"We ran that study eight months ago. I know the findings are in there somewhere."
— Every researcher, every quarter
The research intelligence platform

Every transcript becomes
a decision-ready insight.

Built like a library. Searched like a catalogue. No new tools — configured inside the stack you already use.

Platform
01 · Upload & Process
02 · Review & Approve
03 · Decision Brief
04 · One source. Five teams.
Step 01 — Upload & Process
Transcript → insight cards.
Upload a raw transcript. Within minutes, Claude extracts draft insight cards — tagged by product area, scored by evidence strength, mapped to a JTBD statement. Nothing publishes automatically.
Input: Raw Transcript
P: So the thing that frustrated me—

[00:12:34] I knew we had done this research but I couldn't find where it lived. It took me 45 minutes to track down the Notion page and another 20 to find the actual insight...

[00:13:02] By that time I'd already made the decision without it.
Output: Insight Cards (draft)
Research retrieval is so slow it's abandoned mid-decision
"By that time I'd already made the decision without it."
discovery · jtbd: find fast
Strong evidence
45-min search friction precedes most research abandonment
infrastructure · findability
Moderate evidence
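Under the hood, each card is just structured data. A minimal sketch of what a draft card might look like — the field names here are illustrative assumptions, not Lumen's actual schema:

```python
from dataclasses import dataclass

# Illustrative insight-card model. Field names and the evidence scale
# are assumptions for this sketch, not Lumen's real data model.
@dataclass
class InsightCard:
    statement: str          # one-line, decision-ready claim
    quote: str              # verbatim participant evidence
    tags: list[str]         # product-area tags, e.g. ["discovery"]
    jtbd: str               # mapped Jobs-to-be-Done statement
    evidence: str           # "strong" | "moderate" | "weak"
    approved: bool = False  # stays False until a researcher approves it

card = InsightCard(
    statement="Research retrieval is so slow it's abandoned mid-decision",
    quote="By that time I'd already made the decision without it.",
    tags=["discovery"],
    jtbd="find fast",
    evidence="strong",
)

# Draft cards are invisible to every team view until reviewed.
assert card.approved is False
```

The `approved` flag is the whole review model in miniature: extraction is automatic, publication never is.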
Step 02 — Researcher Review
You control what ships.
Draft cards appear in your review queue. Edit the statement, adjust JTBD mapping, confirm evidence strength. Approve individually or in batch. Nothing is visible until you say so.
◆ Pending your review
Research retrieval is so slow it's abandoned mid-decision
"By that time I'd already made the decision without it." — P3, Discovery Interview
discovery · jtbd: find fast
2 of 5 cards reviewed · 3 pending
Step 03 — Decision Brief
PM asks. Brief arrives.
A PM types a product question. The system retrieves relevant cards, surfaces evidence, flags contradictions, and outputs a structured decision brief in seconds.
Should we redesign the onboarding flow for Q3 launch?
Supporting evidence
3 studies confirm current onboarding creates a drop-off at the "first value moment" — users who don't experience value within 8 minutes have a 67% churn rate within 30 days.
Contradictions
One study (Q2 2024, n=6) found power users prefer current complexity. Redesign risk: alienating a high-retention segment to optimise for conversion.
What we still don't know
No research on segment-specific onboarding paths. The hypothesis that power users and new users need different flows is untested.
Confidence: 78% · 4 studies · 1 gap flagged
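One way to picture the brief step: retrieve the cards relevant to the question, rank them by evidence strength, and flag opposing findings. A toy sketch with made-up data — the ranking and contradiction logic are illustrative assumptions, not Lumen's implementation:

```python
# Toy brief assembly. Card data, stances, and scoring are invented
# for illustration only.
EVIDENCE_RANK = {"strong": 2, "moderate": 1, "weak": 0}

cards = [
    {"statement": "Onboarding drop-off at the first value moment",
     "evidence": "strong", "stance": "redesign"},
    {"statement": "Power users prefer current complexity",
     "evidence": "moderate", "stance": "keep"},
    {"statement": "No segment-specific onboarding research exists",
     "evidence": "weak", "stance": "gap"},
]

def build_brief(cards):
    # Strongest evidence surfaces first.
    ranked = sorted(cards, key=lambda c: EVIDENCE_RANK[c["evidence"]], reverse=True)
    stances = {c["stance"] for c in ranked} - {"gap"}
    return {
        "supporting": [c for c in ranked if c["stance"] == "redesign"],
        "contradictions": len(stances) > 1,  # opposing findings present → flag
        "gaps": [c for c in ranked if c["stance"] == "gap"],
    }

brief = build_brief(cards)
```

Here the moderate-evidence "keep" card triggers the contradiction flag, and the unanswered segment question lands in `gaps` — the same three sections the brief above shows.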
Step 04 — Team Access
One source. Every team.
Each team gets a filtered view built for how they use research. Research team controls quality. Everyone else gets the signal they need.
Product / PM
Decision briefs with evidence scores, contradictions, and recommended next steps.
Sales
Filtered customer language view — exact quotes, product area, evidence strength.
Marketing
JTBD emotional lens — what users are trying to accomplish and why it matters.
Customer Success
Submit support patterns into the insight pipeline. Research team reviews before publishing.
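A role-scoped view is just a filter over the same shared repository. A minimal sketch, assuming illustrative field names and filter rules (not Lumen's actual configuration):

```python
# One card list, many views. Filter logic is an assumption for this sketch.
cards = [
    {"statement": "Retrieval friction kills adoption",
     "quote": "I gave up searching.", "evidence": "strong", "approved": True},
    {"statement": "Draft insight, unreviewed",
     "quote": "(pending review)", "evidence": "weak", "approved": False},
]

def view(cards, role):
    # Only researcher-approved cards are visible to any team.
    published = [c for c in cards if c["approved"]]
    if role == "sales":
        return [c["quote"] for c in published]  # customer language only
    if role == "pm":
        # Decision-grade evidence only.
        return [c for c in published if c["evidence"] != "weak"]
    return published

assert view(cards, "sales") == ["I gave up searching."]
```

The unapproved draft never reaches any role — quality control lives in one place, access everywhere else.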
Live demo

Ask a product question.
Get a research-backed brief in seconds.

This is the PM Decision Brief — one of Lumen's built-in intelligence tools. Type any product question. It reads your full research repository and returns a structured brief in seconds. Try it live on a real dataset.

PM Decision Brief
Ask a product question — get a research-backed brief
Try a sample question or type your own
Want this on your research?
This runs on a sample dataset. Yours would have every insight from every study your team has ever run — fully searchable in seconds.
Apply for Early Access →
What it actually does

Six capabilities. No new tools — built inside the stack you already use.

AI Synthesis Pipeline
Upload a transcript. Structured insight cards generated in minutes, not days. You review and approve — nothing publishes without your sign-off.
Semantic Search
Ask a real product question in natural language. Get the most relevant insights ranked by evidence strength — not just keyword matches.
Decision Briefs
Auto-generated briefs with supporting evidence, contradictions, confidence scores, and identified research gaps. In seconds, not days.
JTBD Mapping
Every insight card is mapped to a Jobs-to-be-Done statement. Taxonomy is validated with your team and stays consistent across all studies.
Role-Scoped Views
PM gets decision briefs. Sales gets customer language. Marketing gets emotional JTBD. CS gets a submission pipeline. All from the same repository.
Evidence Scoring
Every insight is scored by evidence strength. Weak signals are labelled. Strong evidence is surfaced first. Contradictions are flagged automatically.
Built for the whole org

One source. Every team.

The research team controls quality. Every other team gets a filtered view built for how they actually use insights.

Research Team
You set the standard. You ship the signal.
Upload a transcript. Review AI-extracted draft cards. Edit, approve, or reject. Nothing reaches the organisation without your sign-off.
  • AI-assisted synthesis — you do the judgment
  • Review queue with batch approval
  • Full audit trail on every published card
  • Evidence strength scoring you control
Product / PM
Ask a question. Get a decision brief.
Type a real product question. Get a structured brief with evidence, contradictions, and what the research doesn't yet answer.
  • Natural language research retrieval
  • Decision briefs with confidence scores
  • Contradiction flagging built in
  • Gap identification — what's untested
Sales · Marketing · CS
Filtered access to the insight catalogue.
Each team gets a view built for how they use research. Customer language for sales. JTBD framing for marketing. Pattern submission for CS.
  • Role-scoped views — relevant signal only
  • Customer quote library for sales
  • JTBD emotional lens for marketing
  • CS can submit patterns for review
The engagement

3–4 weeks. Your stack.
You own it completely.

Not a SaaS subscription. A structured engagement that builds the system inside your existing Notion or Google Drive, hands it to your team fully documented, and exits.

01
Discover
Week 1 · ~3 hrs your team
kickoff session · workflow shadow · asset audit · team survey
Shadow one complete study end-to-end. Audit what exists in Drive and Notion. Run a 5-question team survey. Goal: understand what's actually broken before building anything.
  • Expert Walkthrough doc
  • Opportunity map
  • CRAFT Implementor Brief
02
Repository Build
Weeks 2–3 · ~4 hrs your team
notion schema · jtbd taxonomy · backfill 10–15 studies · sync setup
03
AI Pipeline Configuration
Week 3 · ~2 hrs your team
claude integration · prompt tuning · review workflow · live test
04
Access & Handoff
Week 4 · ~3 hrs your team
team onboarding · role-scoped views · documentation · handoff
First cohort — limited spots

See if Lumen
is a fit for your team.

First round is for teams already on Notion, Google Drive, or OneDrive. This isn't SaaS — it's a 3–4 week engagement where we build Lumen inside your stack and hand it to your team fully documented. You own it.

Apply for Early Access · See how it works →
No commitment. We'll scope it together on a 30-minute call.
Notion · Google Drive · OneDrive · Claude API