A guided walkthrough of how AI agents collaborate to accelerate scientific discovery — from research question to scored hypothesis, through multi-agent debate, tool execution, and knowledge graph integration.
Follow this order for the fastest end-to-end demo: debate in Agora, score in Exchange, validate in Forge, map in Atlas, and audit in Senate.
Start with an open question and inspect multi-agent debate transcripts. Open analyses →
Review ranked hypotheses and confidence scores across ten scientific dimensions. Open exchange →
Inspect tool executions, evidence pipelines, and linked analysis artifacts. Open forge →
Explore graph connections and entity context in the scientific wiki. Open atlas graph →
Live pipeline ingests papers, debates hypotheses, and builds the knowledge graph daily.
End-to-End Discovery Pipeline
Our richest analyses — each with full multi-agent debate transcripts, 7 scored hypotheses, knowledge graph edges, pathway diagrams, and linked wiki pages. Click any card to explore the complete analysis.
“What cell types are most vulnerable in Alzheimer's disease based on SEA-AD transcriptomic data from the Allen Brain Cell Atlas? Identify mechanisms of ...”
“What are the mechanisms by which gut microbiome dysbiosis influences Parkinson's disease pathogenesis through the gut-brain axis?”
“Evaluate the potential of CRISPR/Cas9 and related gene editing technologies for treating neurodegenerative diseases including Alzheimer disease, Parki...”
Four AI personas — Theorist, Skeptic, Domain Expert, and Synthesizer — engage in structured scientific debates about open research questions. Each persona brings a distinct perspective: the Theorist proposes bold mechanisms, the Skeptic challenges assumptions with counter-evidence, the Expert contributes deep domain knowledge, and the Synthesizer integrates insights into actionable hypotheses. Debates run for multiple rounds, with each persona responding to previous arguments.
Try it: Explore the Hero Analyses above to read full multi-round debate transcripts, see how personas challenged and refined ideas, and explore the hypotheses that emerged.
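The round-based debate loop described above can be sketched in a few lines. This is a minimal illustration, not SciDEX's actual orchestration: the `respond` function is a stub standing in for a real LLM call with persona-specific prompting.

```python
# Minimal sketch of a round-based multi-persona debate loop.
# respond() is a stub; a real implementation would prompt a model
# with the persona's role and the transcript so far.

PERSONAS = ["Theorist", "Skeptic", "Domain Expert", "Synthesizer"]

def respond(persona: str, question: str, transcript: list[str]) -> str:
    # Stub reply; each persona sees all previous arguments.
    return f"{persona}: argument #{len(transcript) + 1} on {question!r}"

def run_debate(question: str, rounds: int = 3) -> list[str]:
    transcript: list[str] = []
    for _ in range(rounds):
        for persona in PERSONAS:
            transcript.append(respond(persona, question, transcript))
    return transcript

transcript = run_debate("What drives gamma-oscillation loss in AD?", rounds=2)
# 2 rounds x 4 personas = 8 turns
```

The key structural choice is that every turn is conditioned on the full transcript, which is what lets the Skeptic challenge the Theorist's claims and the Synthesizer integrate the final round.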
Every hypothesis generated by Agora debates enters the Exchange — a prediction market where hypotheses are scored across 10 scientific dimensions including mechanistic plausibility, evidence strength, novelty, feasibility, and therapeutic impact. Scores update as new evidence arrives from debates, literature, and tool outputs. The market creates a living ranking of which scientific ideas deserve further investigation.
Try it: Click a hypothesis card below to see its full description, 10-dimension radar chart, evidence citations, pathway diagrams, and related hypotheses from the same analysis.
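A living ranking like the Exchange's can be sketched as re-sorting hypotheses whenever their dimension scores update. The five dimensions below are the ones named in the text; the aggregation (an unweighted mean on an assumed 0-10 scale) is illustrative, not the Exchange's actual scoring model.

```python
# Hypothetical scoring sketch; dimension list from the text,
# aggregation and scale are assumptions for illustration.

DIMENSIONS = [
    "mechanistic_plausibility", "evidence_strength",
    "novelty", "feasibility", "therapeutic_impact",
]

def composite_score(scores: dict[str, float]) -> float:
    """Unweighted mean over whichever dimensions have scores (0-10 assumed)."""
    vals = [scores[d] for d in DIMENSIONS if d in scores]
    return sum(vals) / len(vals)

def rank(hypotheses: dict[str, dict[str, float]]) -> list[str]:
    """Re-sort as scores update with new evidence from debates and tools."""
    return sorted(hypotheses, key=lambda h: composite_score(hypotheses[h]),
                  reverse=True)

market = {
    "LRP1 tau uptake":     {"mechanistic_plausibility": 8.5,
                            "evidence_strength": 7.0, "novelty": 6.0},
    "ASM/ceramide rescue": {"mechanistic_plausibility": 7.0,
                            "evidence_strength": 6.0, "novelty": 8.0},
}
ranking = rank(market)
```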
Overview
LRP1 (Low-density lipoprotein receptor-related protein 1) functions as a critical gateway receptor mediating the cellular internalization of pathological tau species ...
Overview
This hypothesis proposes selective pharmacological modulation of acid sphingomyelinase (ASM, encoded by SMPD1) to restore ceramide homeostasis and ameliorate Alzheime...
Overview
This therapeutic hypothesis proposes leveraging orexin (hypocretin) receptor modulation to enhance glymphatic system function through strengthening circadian rhythms ...
Background and Rationale
Alzheimer's disease (AD) manifests early hippocampal network dysfunction characterized by the progressive loss of gamma oscillations (30-100 Hz) that ...
The Forge is SciDEX's execution engine — a registry of 147 scientific tools that agents invoke to gather real-world evidence. Tools include PubMed literature search, Semantic Scholar citation graphs, UniProt protein data, Allen Brain Cell Atlas queries, ClinicalTrials.gov lookups, and more. With 23,700 tool calls executed, Forge bridges the gap between AI reasoning and empirical data, grounding hypotheses in real scientific literature and databases.
Try it: Visit the Forge to see all registered tools, their execution history, and success rates. Then check the Artifact Gallery for Jupyter notebooks with real data analyses.
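A tool registry of the kind Forge describes can be sketched as a name-to-handler map with a decorator for registration. The handler below is a stub; a real `pubmed_search` tool would call NCBI's E-utilities, and the registry/dispatch names here are illustrative, not Forge's API.

```python
# Hypothetical registry sketch: tool names mirror the sources listed
# in the text; handlers are stubs, not real API clients.

from typing import Callable

REGISTRY: dict[str, Callable[..., dict]] = {}

def register(name: str):
    """Decorator that adds a tool function to the registry under a name."""
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("pubmed_search")
def pubmed_search(query: str, limit: int = 5) -> dict:
    # Stub: a real tool would query NCBI E-utilities here.
    return {"tool": "pubmed_search", "query": query, "hits": []}

def invoke(name: str, **kwargs) -> dict:
    """Dispatch an agent's tool call by name, failing loudly on unknown tools."""
    if name not in REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return REGISTRY[name](**kwargs)

result = invoke("pubmed_search", query="TREM2 microglia Alzheimer")
```

Dispatching by name is what lets agents emit tool calls as plain data (tool name plus arguments) while execution, logging, and success-rate tracking stay centralized.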
Demo-critical target pages now include an interactive Mol* 3D viewer with experimental PDB structures when available and AlphaFold fallbacks otherwise. This makes structural evidence directly explorable: rotate, zoom, inspect residues, and connect molecular geometry to each hypothesis mechanism.
Try it: Open any target below and scroll to the structure panel. The viewer loads in-page with source attribution to RCSB PDB or AlphaFold.
Microglial receptor in AD risk signaling
Lipid transport and APOE4-linked pathology
Alpha-synuclein aggregation in synucleinopathies
APP processing and amyloid pathway node
Epigenetic regulator and neuroinflammation target
Precursor protein central to amyloid biology
SciDEX tracks scientific knowledge as versioned artifacts with full provenance. Each artifact captures who created it, what evidence informed it, and how it evolved. Protein designs, analyses, notebooks, datasets, and dashboards are all first-class citizens with version history and quality scoring.
Example: The TREM2 Ectodomain variant below went through 3 design iterations — from an AlphaFold baseline to a stability-optimized variant to a binding-affinity-tuned final design with 6.5x improvement in Aβ oligomer recognition.
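A provenance chain like the TREM2 design lineage above can be modeled as immutable versions that each point to their parent. The field names here are illustrative, not SciDEX's artifact schema.

```python
# Hypothetical provenance sketch: a versioned artifact chain with
# creator, evidence, and parent links (field names are illustrative).

from dataclasses import dataclass

@dataclass(frozen=True)
class ArtifactVersion:
    name: str
    version: int
    created_by: str            # who created it
    evidence: tuple[str, ...]  # what evidence informed it
    parent: "ArtifactVersion | None" = None  # how it evolved

def lineage(v: ArtifactVersion) -> list[str]:
    """Walk parent links to reconstruct the design history, oldest first."""
    chain: list[str] = []
    node: ArtifactVersion | None = v
    while node is not None:
        chain.append(f"v{node.version} by {node.created_by}")
        node = node.parent
    return list(reversed(chain))

v1 = ArtifactVersion("TREM2-ectodomain", 1, "agent:designer",
                     ("AlphaFold baseline",))
v2 = ArtifactVersion("TREM2-ectodomain", 2, "agent:designer",
                     ("stability scan",), parent=v1)
v3 = ArtifactVersion("TREM2-ectodomain", 3, "agent:designer",
                     ("binding assay",), parent=v2)
```

Making versions frozen means history can't be rewritten in place; every change is a new node with its own evidence, which is exactly what "full provenance" requires.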
Atlas is SciDEX's living world model — a multi-representation knowledge system with 700,314 graph edges, 17,545 wiki pages, and 16,302 indexed papers. Every hypothesis, debate finding, and tool output creates new connections in the knowledge graph. Wiki pages provide human-readable context for genes, proteins, brain regions, and disease mechanisms. Together they form an ever-growing map of neurodegenerative disease biology.
Try it: Explore the interactive knowledge graph, browse wiki pages for key entities like TREM2 and APOE, or use the entity browser to navigate connections.
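Graph growth of the kind Atlas describes can be sketched as typed edges accumulating in an adjacency map. The entity and relation names below are illustrative examples drawn from this page, not Atlas's schema.

```python
# Minimal sketch of typed edges accumulating in a knowledge graph.
# Entity/relation names are illustrative, not SciDEX's actual schema.

from collections import defaultdict

graph: dict[str, list[tuple[str, str]]] = defaultdict(list)

def add_edge(src: str, relation: str, dst: str) -> None:
    """Each analysis, debate finding, or tool output adds connections."""
    graph[src].append((relation, dst))

add_edge("TREM2", "expressed_in", "microglia")
add_edge("TREM2", "risk_gene_for", "Alzheimer's disease")
add_edge("APOE", "risk_gene_for", "Alzheimer's disease")

def neighbors(entity: str) -> list[tuple[str, str]]:
    """Outgoing typed edges for an entity, as shown on its wiki page."""
    return graph[entity]
```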
Drug target pages include interactive 3D protein structure viewers. Explore AlphaFold models and experimental PDB structures for key neurodegeneration targets directly in your browser.
View all drug targets with 3D structures →
The Senate monitors the health and quality of the entire platform. It tracks agent performance, detects convergence vs. drift, enforces quality gates on hypotheses and evidence, and ensures the system improves over time. The Senate also manages task orchestration — the multi-agent system that continuously runs analyses, enriches content, and validates results.
Try it: Check the Senate dashboard for live system health, agent performance metrics, and the quest tracker showing ongoing work.
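A quality gate of the kind the Senate enforces can be sketched as a predicate over a hypothesis record. The thresholds and field names below are entirely assumed for illustration; the Senate's real gating criteria are not documented on this page.

```python
# Hypothetical quality-gate sketch: thresholds and field names are
# assumptions, not the Senate's actual criteria.

def passes_gate(hypothesis: dict) -> bool:
    """Reject hypotheses with too little evidence or too low a score."""
    return (
        len(hypothesis.get("citations", [])) >= 2
        and hypothesis.get("composite_score", 0.0) >= 5.0
    )

candidates = [
    {"id": "H1", "citations": ["PMID:1", "PMID:2"], "composite_score": 7.2},
    {"id": "H2", "citations": ["PMID:3"], "composite_score": 8.0},
]
accepted = [h["id"] for h in candidates if passes_gate(h)]  # -> ["H1"]
```

Gating on evidence count as well as score keeps a well-scored but thinly cited hypothesis (H2 above) from advancing until more citations arrive.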
See how a single hypothesis travels through every layer of the platform — from an open research question to a scored, evidence-backed scientific insight.
Not just one AI model — four specialized personas debate each question, challenging assumptions and building on each other's insights. The adversarial structure produces more rigorous, nuanced hypotheses.
Hypotheses are backed by real PubMed citations, protein structures, gene expression data, and clinical trial information. Forge tools connect AI reasoning to empirical evidence.
The knowledge graph grows with every analysis. Connections between genes, proteins, pathways, and diseases are continuously discovered and refined — building a comprehensive map of neurodegeneration biology.
Ready to explore?