Building a local-AI healthcare MVP in the browser (Chrome Prompt API) — MediReport Assist
I’ve had blood checkups frequently. The reports usually show raw numbers (Hb, RBC, WBC, etc.) and very little interpretation. To get clarity, I’d end up booking an appointment — and sometimes even then, different doctors would interpret the same report differently.
So I built a small MVP: MediReport Assist — a privacy-first, accessibility-friendly app that generates a human-readable summary and diet guidance from a few blood report values.
- Live app: https://medi-report-assist.vercel.app/
- GitHub: https://github.com/pavan-sh/medi-report-assist
- Devpost: https://devpost.com/software/medireport-assist
Disclaimer: This is an MVP demo and not medical advice. Always consult a qualified doctor for diagnosis and treatment.
Why “local AI” in a healthcare-style app
Healthcare data is sensitive. If your workflow depends on sending numbers to a cloud model, you immediately inherit:
- privacy/security risk (data leaving the device)
- latency + internet dependency
- operational cost (API usage)
Chrome’s built-in AI (Prompt API) lets you process data on-device, which improves:
- privacy: the input stays local to the browser
- speed: no network round-trips for inference
- reliability: works even on flaky connections
That combination is exactly why this was a fun project: it’s a realistic use case for “local AI” that isn’t just a toy chatbot.
What the app does (MVP scope)
For this MVP, I intentionally kept the scope small and focused:
- Collects numeric blood report inputs:
  - sex
  - age
  - hemoglobin (Hb)
  - RBC count
  - total WBC
- Generates a summary + diet guidance
- Shows a meaningful error if local AI isn’t available
- Does not retain user data
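For illustration, the inputs above can be modeled as a small TypeScript type with a plausibility check, so obviously impossible values never reach the model. The field names and ranges here are my assumptions, not the app’s actual code:

```ts
// Hypothetical shape of the collected inputs (names are assumptions).
interface BloodReportInput {
  sex: 'male' | 'female'
  age: number
  hemoglobin: number // gm%
  rbcCount: number   // millions/cumm
  totalWbc: number   // cells/cumm
}

// Reject values that are numerically valid but biologically impossible,
// before any prompt is built. Ranges are illustrative, not clinical.
function isPlausible(input: BloodReportInput): boolean {
  return (
    input.age > 0 && input.age < 120 &&
    input.hemoglobin > 0 && input.hemoglobin < 25 &&
    input.rbcCount > 0 && input.rbcCount < 10 &&
    input.totalWbc > 0 && input.totalWbc < 100_000
  )
}
```

A check like this also gives the UI a concrete, testable reason to show a validation message instead of passing garbage to the model.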
UX: accessibility + safety
Two things I cared about from day one:
- Accessibility: clear labels, predictable layout, keyboard-friendly UI.
- Safety/expectations: explicit disclaimer and a privacy checkbox before submission.
The form enforces a privacy-policy agreement before it runs the analysis.
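A gate like that can be as simple as a pure predicate over the form state. This is a hypothetical sketch (the real component state is shaped differently), with the submit button disabled until it returns true:

```ts
// Hypothetical form state; field names are assumptions for illustration.
interface FormState {
  agreedToPrivacyPolicy: boolean
  hemoglobin: number | null
  rbcCount: number | null
  totalWbc: number | null
}

// The analyze action is only enabled once the privacy box is ticked
// and all required numeric fields have values.
function canRunAnalysis(form: FormState): boolean {
  return (
    form.agreedToPrivacyPolicy &&
    form.hemoglobin !== null &&
    form.rbcCount !== null &&
    form.totalWbc !== null
  )
}
```

Keeping it a pure function makes the behavior easy to test: the disabled state is derived from data, not from DOM side effects.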
The core implementation (Prompt API)
The app is built with:
- Next.js 14
- TypeScript
- Tailwind
- shadcn/ui components
On the client, it checks for the Prompt API via the global `ai` object:
```ts
// if `ai` doesn't exist, the browser doesn't support it (or it's not enabled)
// @ts-expect-error
if (typeof ai === 'undefined') {
  setIsAIEnabled(false)
} else {
  // @ts-expect-error
  const session = await ai?.languageModel?.create()
  setAiSession(session)
}
```

Then it generates a prompt from the form values:
```ts
export function generatePromptInput(params) {
  return `Below is blood report for: Sex - ${params.sex}, Age - ${params.age}, Hemoglobin - ${params.hemoglobin} gm%, RBC count - ${params.rbcCount} millions/cumm, Total WBC - ${params.totalWbc} cumm.
Provide a summary of the report along with guidance on diet to address deficiencies. Use tables to present the diet plan.`
}
```

And uses streaming output for better UX:
```ts
const stream = await session.promptStreaming(prompt)
for await (const chunk of stream) {
  setResponseText(chunk.trim())
}
```

In early tests, `promptStreaming()` felt noticeably more responsive than waiting for a single long `prompt()` response.
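One caveat worth noting: in some Prompt API preview builds, each streamed chunk was a cumulative snapshot of the full text so far (which is why the loop above replaces the state instead of appending), while later builds yield incremental deltas. A sketch of the accumulator pattern, assuming delta chunks:

```ts
// Assumes the stream yields incremental deltas; append each chunk
// instead of replacing the previous text.
async function collectStream(chunks: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of chunks) {
    text += chunk
  }
  return text
}
```

Checking which behavior your Chrome build has (does the rendered text duplicate itself?) is a quick way to pick between replace and append.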
Handling unsupported browsers (the “real-world” part)
If the user opens the app in a non-supported browser (or Chrome without Prompt API enabled), the app shows a dedicated error screen with steps.
It includes guidance to:
- Join Chrome’s early preview program
- Update Chrome
- Enable any required AI/Prompt API flags (if the preview program instructs)
- Restart Chrome
This matters because “local AI” on the web is still emerging — feature detection and graceful fallback are mandatory.
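Because the API surface itself has shifted between preview releases (newer Chrome builds expose a global `LanguageModel` entry point rather than `ai.languageModel`), feature detection can also report which variant is present. A hedged sketch; the globals checked here depend on your Chrome version and flags:

```ts
type PromptApiStatus = 'modern' | 'legacy' | 'unsupported'

// Probe the given global object (e.g. `window`) for either Prompt API
// entry point: the newer `LanguageModel` global or the older `ai.languageModel`.
function detectPromptApi(globalObj: Record<string, unknown>): PromptApiStatus {
  if (typeof globalObj.LanguageModel !== 'undefined') return 'modern'
  const ai = globalObj.ai as { languageModel?: unknown } | undefined
  if (ai?.languageModel) return 'legacy'
  return 'unsupported'
}
```

In the app, a status like this could drive both the error screen and which session-creation path to call.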
Challenges I hit
- Browser support / flags: getting Prompt API working required the right Chrome version and setup.
- Latency: `prompt()` could take a while for longer prompts; streaming improved the perceived performance.
What I learned
- How to integrate Chrome’s built-in Prompt API into a real UI flow
- Why privacy-first architecture matters more in healthcare-adjacent apps
- Why accessibility testing should happen early (not after the fact)
What’s next
If I extend this beyond the MVP, the next features would be:
- Support more test categories (lipid profile, urinalysis, imaging summaries, etc.)
- Download / print the generated analysis
- Better guardrails in the prompt (more “don’t diagnose” behavior)
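For the guardrails item, one low-cost option is a fixed safety preamble prepended to whatever the form generates. The wording below is illustrative, not the app’s actual prompt:

```ts
// Illustrative guardrail: prepend explicit "don't diagnose" instructions
// to the generated prompt before sending it to the on-device model.
function withGuardrails(prompt: string): string {
  const preamble = [
    'You are summarizing lab values for educational purposes only.',
    'Do not diagnose conditions or recommend medication.',
    'Always advise the reader to consult a qualified doctor.',
  ].join(' ')
  return `${preamble}\n\n${prompt}`
}
```

A preamble alone won’t guarantee safe output, but it’s a cheap first layer that can be tested independently of the model.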
Try it
If you’re building with Chrome built-in AI too, I’d love to compare notes — especially around reliability, performance, and what UX patterns work best for “on-device” inference.