A New York Times Profile Helped Sell A Healthcare Illusion
Summary

A billion-dollar “solo founder” AI success story sounds inspiring until you look at what was actually scaled. We dig into the Medvi telehealth blowup, the New York Times narrative that amplified it, and the uncomfortable question underneath it all: what happens when AI isn’t just writing marketing copy, but manufacturing medical credibility at scale?

We walk through how a lean telehealth brand can sit on top of outsourced infrastructure for physicians, pharmacies, shipping, and compliance, and how that model can either expand access or hide accountability. Then we unpack the specific red flags that surfaced: alleged fake physician personas used in advertising, misleading trust signals that mimic real clinical authority, and the risks of marketing compounded GLP-1 medications in ways that imply FDA approval. We also talk about deepfaked before-and-after images, fabricated outcomes, and why that kind of deception hits harder in healthcare than in almost any other category.

From there, we zoom out to the bigger picture in digital health and AI ethics: what LegitScript certification is supposed to signal, why platforms are pushing AI disclosure rules, and why the "scarcest resource" may soon be the real patient-doctor relationship. We also debate whether a true one-person, AI-built, billion-dollar company is inevitable, and why the future of approved AI doctors could be both powerful and terrifying.

Subscribe for more real talk on AI, telehealth compliance, and building trust the right way. If you found this useful, share it and leave a review. What guardrail do you most want to see for AI in healthcare?