Detecting Emerging Trends in Patient Healthcare
Coming soon!
📚 View all posts in the Graph-based Healthcare Series
Graph-based Healthcare Series — 6
This is the sixth post in our ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
In our previous post, we introduced a Bayesian diagnostic engine that uses synthetic patient data to quantify clinical evidence, score conditions, and update probabilities in a way that mirrors how clinicians think.
In this post, we zoom out from system architecture and generative modeling to answer a practical question:
How does GraphRAG perform on real patient messages compared to a traditional RAG system?
To evaluate this, we tested both systems on more than 300 real-world caregiver messages from Last Mile Health (LMH), with answers labeled and scored by GPT-4o. The results offer a compelling look at the strengths, weaknesses, and tradeoffs of graph-structured retrieval in clinical QA tasks.
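The head-to-head comparison boils down to a scoring harness: run each system on every case, have a judge grade the answer, and average per system. The sketch below is illustrative only; the toy cases and the `keyword_score` judge stand in for the real LMH messages and the GPT-4o grader:

```python
def evaluate(cases, systems, score_fn):
    """Average judge scores per system across all cases."""
    totals = {name: 0.0 for name in systems}
    for case in cases:
        for name, answer_fn in systems.items():
            totals[name] += score_fn(case, answer_fn(case["question"]))
    return {name: total / len(cases) for name, total in totals.items()}

# Toy stand-in judge: keyword overlap instead of a GPT-4o rubric
def keyword_score(case, answer):
    hits = sum(1 for kw in case["keywords"] if kw in answer)
    return hits / len(case["keywords"])

cases = [
    {"question": "child has fast breathing", "keywords": ["pneumonia", "amoxicillin"]},
    {"question": "child has fever", "keywords": ["malaria"]},
]
systems = {
    "graph_rag": lambda q: "possible pneumonia; consider amoxicillin",
    "baseline_rag": lambda q: "monitor the child",
}
results = evaluate(cases, systems, keyword_score)
```

Swapping `keyword_score` for an LLM call turns this into an LLM-as-judge pipeline; the aggregation logic stays the same.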
Graph-based Healthcare Series — 5
This is the fifth post in an ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
In our previous post, we explored how large language models (LLMs) can simulate realistic pediatric patient encounters based on the IMNCI guidelines. These synthetic notes were grounded in real clinical logic, labeled with structured IMNCI classifications, and validated using a multi-agent verification strategy inspired by the Bayesian Truth Serum (BTS). The result: a high-fidelity dataset of richly annotated, clinically plausible pediatric cases.
In this post, we put that dataset to work—prototyping a Bayesian diagnostic engine that quantifies clinical evidence, scores conditions, and updates probabilities in a way that mirrors how clinicians think.
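At its core, an engine like this applies Bayes' rule: each observed finding re-weights the candidate conditions by how strongly it supports them. A minimal sketch, with hypothetical toy priors and likelihoods rather than the engine's real values:

```python
def update_posteriors(priors, likelihoods, observed_symptom):
    """Bayes' rule: P(cond | symptom) is proportional to P(symptom | cond) * P(cond)."""
    unnorm = {c: priors[c] * likelihoods[c].get(observed_symptom, 0.01)
              for c in priors}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Illustrative numbers only, not clinical estimates
priors = {"pneumonia": 0.3, "malaria": 0.3, "diarrhea": 0.4}
likelihoods = {
    "pneumonia": {"fast_breathing": 0.8, "fever": 0.5},
    "malaria":   {"fast_breathing": 0.2, "fever": 0.9},
    "diarrhea":  {"fast_breathing": 0.1, "fever": 0.3},
}
posterior = update_posteriors(priors, likelihoods, "fast_breathing")
```

Feeding the posterior back in as the next prior chains updates across multiple findings, mirroring how a clinician accumulates evidence.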
Graph-based Healthcare Series — 4
This is the fourth post in an ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
In our previous post, we demonstrated how agentic flows can transform diagnosis from a reactive retrieval task into a guided, context-aware reasoning process. By orchestrating modular assistants, tracking physician intent, and dynamically adapting based on feedback, we built a collaborative diagnostic experience that’s explainable, flexible, and clinically grounded.
In this post, we shift focus to the synthetic data generation side of the equation. We detail the steps taken to generate a diverse set of synthetic patient cases—each featuring unique symptoms, conditions, and diagnostic paths. These examples simulate a wide range of realistic clinical scenarios, laying the foundation for applying Bayesian pattern recognition methods to richly structured, verifiable patient data.
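One way to picture the diversity requirement: each synthetic case skeleton pairs a condition with a varied subset of its plausible symptoms before an LLM fleshes it out into a full note. The sampler below is a hypothetical sketch (the condition names and symptom pools are placeholders, not the posts' actual generation pipeline):

```python
import random

def sample_cases(conditions, symptom_pool, n, seed=0):
    """Draw n case skeletons with varied condition/symptom mixes (seeded for reproducibility)."""
    rng = random.Random(seed)
    cases = []
    for i in range(n):
        condition = rng.choice(conditions)
        pool = symptom_pool[condition]
        symptoms = rng.sample(pool, k=rng.randint(1, len(pool)))
        cases.append({"id": i, "condition": condition, "symptoms": sorted(symptoms)})
    return cases

conditions = ["pneumonia", "malaria"]
symptom_pool = {
    "pneumonia": ["cough", "fast_breathing", "chest_indrawing"],
    "malaria": ["fever", "chills", "vomiting"],
}
cases = sample_cases(conditions, symptom_pool, n=5)
```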
Graph-based Healthcare Series — 3
This is the third post in an ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
In our previous post, we demonstrated how the IMNCI graph model could power a graph-based retrieval-augmented generation (graph RAG) pipeline. By combining structured clinical knowledge with large language models (LLMs), we laid the foundation for a system that supports real-world diagnostic workflows.
In this installment, we take that idea further by introducing agentic flows—a new phase in our clinical decision support pipeline. Here, an intelligent, dialogue-capable assistant doesn’t just answer queries; it actively guides the diagnostic process. This assistant leverages the structured IMNCI graph as its reasoning backbone and uses pydantic graph to statefully orchestrate a set of modular, task-specific assistants (tools).
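The orchestration pattern is a stateful walk over a graph of task-specific handlers: each node inspects or mutates shared state and names the next node. The plain-Python analogue below is only a sketch of that idea (the real system uses pydantic graph, and the `triage`/`assess`/`classify` mini-flow here is hypothetical):

```python
def run_flow(handlers, state, start, end="done"):
    """Walk handlers until the end node; each handler mutates state and returns the next node."""
    node, trace = start, [start]
    while node != end:
        node = handlers[node](state)
        trace.append(node)
    return trace

# Hypothetical mini-flow: triage -> assess -> classify -> done
def triage(state):
    return "refer" if state.get("danger_signs") else "assess"

def assess(state):
    state["findings"] = ["fast_breathing"]
    return "classify"

def classify(state):
    state["classification"] = ("pneumonia" if "fast_breathing" in state["findings"]
                               else "cough_or_cold")
    return "done"

def refer(state):
    state["classification"] = "urgent_referral"
    return "done"

handlers = {"triage": triage, "assess": assess, "classify": classify, "refer": refer}
state = {"danger_signs": False}
trace = run_flow(handlers, state, start="triage")
```

Because every transition is recorded in `trace`, the diagnostic path stays inspectable end to end, which is what makes this style of flow explainable.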
Graph-based Healthcare Series — 2
This is the second post in an ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
In our previous post, we explored how the Integrated Management of Neonatal and Childhood Illness (IMNCI) guidelines were transformed from static, text-heavy documents into an interactive graph model. This structure enabled more intuitive navigation of clinical logic, laying the groundwork for advanced applications in AI-assisted patient diagnosis.
In this follow-up, we demonstrate how that graph model serves as the foundation for a graph-based retrieval-augmented generation (graph RAG) system. By combining the structured clinical knowledge encoded in Neo4j with the generative capabilities of large language models (LLMs), we create a framework that supports transparent, context-aware patient diagnosis at the point of care.
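The retrieval half of graph RAG amounts to expanding outward from the nodes matched by a query and handing the collected neighborhood to the LLM as grounded context. In the real system that expansion would be a Cypher query against Neo4j; the sketch below substitutes a hypothetical in-memory adjacency list to show the shape of the idea:

```python
def retrieve_context(graph, start, depth=2):
    """Breadth-first expansion to `depth` hops; the collected nodes ground the LLM prompt."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for nbr in graph.get(node, []):
                if nbr not in seen:
                    seen.add(nbr)
                    nxt.append(nbr)
        frontier = nxt
    return seen

# Hypothetical adjacency list standing in for the Neo4j graph
GRAPH = {
    "fast_breathing": ["pneumonia"],
    "pneumonia": ["amoxicillin", "reassess_in_2_days"],
}
context = retrieve_context(GRAPH, "fast_breathing")
```

Because the context is a set of explicit graph nodes rather than opaque text chunks, the system can cite exactly which pieces of clinical logic informed an answer.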
Graph-based Healthcare Series — 1
This is the first post in an ongoing series on graph-based healthcare tools. Stay tuned for upcoming entries on clinical modeling, decision support systems, and graph-powered AI assistants.
The Integrated Management of Neonatal and Childhood Illness (IMNCI) guidelines are a vital resource for diagnosing and treating pediatric conditions in low-resource settings. However, their traditional format—dense tables and long blocks of text—is difficult to navigate in fast-paced, high-pressure clinical environments.
To make these guidelines more usable, we built an interactive, graph-based model of the IMNCI protocol. By translating non-linear diagnostic logic into a structured, visual format, we enable faster, more intuitive decision-making and pave the way for intelligent clinical tools.
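The core move is representing the guidelines' branching logic as nodes and labeled edges, so a classification is just a walk from a presenting symptom to a leaf. The fragment below is a simplified, hypothetical IMNCI-style cough pathway, not the actual graph model:

```python
# Hypothetical fragment of IMNCI-style cough logic, for illustration only
TREE = {
    "cough": {"question": "chest_indrawing",
              "yes": "severe_pneumonia", "no": "check_breathing"},
    "check_breathing": {"question": "fast_breathing",
                        "yes": "pneumonia", "no": "cough_or_cold"},
}

def classify(tree, start, answers):
    """Follow yes/no edges from the start node until reaching a leaf classification."""
    node = start
    while node in tree:
        question = tree[node]["question"]
        node = tree[node]["yes"] if answers.get(question) else tree[node]["no"]
    return node

result = classify(TREE, "cough", {"chest_indrawing": False, "fast_breathing": True})
```

Encoding the protocol this way makes every diagnostic path explicit and machine-traversable, which is the foundation the rest of the series builds on.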