Episodes

  • The Workforce Is *Not* AI-Ready (ft. Ben Tasker, AI education leader)
    Mar 31 2026

    Everyone says they’re “AI-first.”

    Very few organizations are AI-ready.

    In this episode of FUTUREPROOF., we sit down with Ben Tasker, who is leading one of the largest workforce-scale AI education efforts in the public utility sector — upskilling 36,000 employees while advising global organizations on certification and governance.

    Ben calls this moment the “AI Between Times.” The tools are evolving rapidly, but the AI-driven economy they promise hasn’t fully stabilized. That gap creates risk — and opportunity.

    We unpack what actually breaks when companies try to move beyond pilot projects:

    • Why buying AI tools is easy — and building internal capability isn’t
    • The tension between augmentation and displacement
    • What the 70/30 rule means in cost-constrained environments
    • Why governance must precede implementation
    • And how AI fluency is quietly becoming a new form of institutional power

    Ben argues that AI strategy lives or dies at the human level. Not because technology isn’t powerful, but because incentives, culture, and leadership determine whether that power compounds or fractures an organization.

    This conversation isn’t about hype cycles.

    It’s about whether institutions can transform fast enough — without breaking trust in the process.

    Because the future of work won’t be defined by who bought the best tools.

    It will be defined by who prepared their people.

    23 mins
  • GLP-1s, AI, and the New Health Economy (ft. Rajiv Leventhal, health analyst)
    Mar 10 2026

    Healthcare is colliding with technology faster than most people realize.

    In this episode of FUTUREPROOF., I sit down with analyst Rajiv Leventhal, who covers the intersection of healthcare, pharma, and tech, to unpack three forces reshaping the system at once: AI, GLP-1 weight loss drugs, and the mental health impact of digital life.

    We start with AI as a health tool. Nearly a quarter of ChatGPT’s global weekly users now ask it health-related questions. That’s not a niche behavior. It’s a mainstream one. The question isn’t whether people will turn to AI for medical guidance. They already are.

    The real tension is trust and liability. General-purpose AI tools aren’t bound by HIPAA in the same way healthcare providers are. Yet they’re increasingly acting as digital concierges — answering late-night pediatric questions, explaining lab results, and helping people prepare for appointments in a system where access is strained.

    And that system is strained. Even in major cities, patients can wait months — sometimes a year — to see specialists. When access gaps widen, alternative tools step in. AI isn’t replacing doctors. It’s filling holes.

    We then turn to GLP-1 drugs and the weight-loss explosion. What began as diabetes treatment became a cultural and commercial wave driven by social media, FDA approvals, and aggressive advertising. But beneath the surface is a regulatory gray market of compounded versions, patent battles, and telehealth platforms monetizing demand.

    Finally, we tackle social media’s impact on mental health. The evidence linking heavy use — especially among teens — to anxiety and depression is growing, even if causation remains complex. Is this a regulation problem? A parental problem? A public health issue? Or another example of technology moving faster than governance?

    This episode isn’t about hype.

    It’s about what happens when broken systems create openings — and tech companies move into the space.

    Because when trust erodes and access declines, people don’t wait.

    They improvise.

    27 mins
  • The Storytelling Revolution: Why Humanity's Earliest Innovation Still Matters (ft. author Kevin Ashton)
    Mar 26 2026

    In this episode of FUTUREPROOF., we sit down with Kevin Ashton—the technologist who coined the term Internet of Things and helped usher in the smartphone era—to talk about something even more foundational than AI.

    Stories.

    In his new book, The Story of Stories, Kevin traces a million-year arc—from the first fires where early humans gathered, to the invention of writing and printing, to electricity, electronics, and the smartphone. His thesis is provocative: language did not create stories. Stories created language.

    Every major storytelling revolution has followed a simple pattern: it increases the number of people who can tell stories—and the number of people who can hear them.

    For the first time in history, anyone can tell stories to everyone.

    But there’s a catch.

    AI cannot understand meaning, yet algorithms now determine which stories we see, amplifying bias, shaping belief, and influencing behavior at scale. The power of storytelling has never been more democratized—or more intermediated.

    We explore:

    • Why storytelling is innate, not cultural
    • The eight great revolutions of human communication
    • Why machines can generate content but not meaning
    • The risks of algorithmic amplification
    • The role of critical thinking in a post-scarcity information world
    • Whether the next storytelling revolution is technological—or cognitive

    This conversation isn’t about nostalgia.
    It’s about understanding the oldest human technology in a moment when the newest one is accelerating everything.

    If we think in stories—and we always will—the question becomes:
    Who shapes the stories that shape us?

    24 mins
  • Less DEI, more FAIRness (ft. author Lily Zheng)
    Feb 24 2026

    For years, organizations have poured millions into DEI training.

    And yet most employees still report experiencing discrimination. Promotion gaps persist. Trust remains uneven.

    So what’s going on?

    In this episode of FUTUREPROOF., I sit down with Lily Zheng — strategist and author of Fixing Fairness — to interrogate a hard truth: much of what we call DEI doesn’t work. Not because fairness is unpopular. Not because inclusion is misguided. But because we keep trying to fix people instead of fixing systems.

    Lily introduces the FAIR framework — Fairness, Access, Inclusion, and Representation — and argues that the real leverage isn’t in workshops. It’s in incentives, evaluation criteria, hiring processes, and executive accountability.

    We explore:

    • Why standalone DEI training can backfire
    • The “missing stair” metaphor — and how organizations normalize dysfunction
    • The Cobra Effect of poorly designed diversity incentives
    • Why representation is ultimately about trust, not optics
    • What meritocracy gets wrong about itself
    • And why rebranding DEI won’t solve structural problems

    At a moment when DEI faces political backlash and corporate retrenchment, Lily makes a counterintuitive claim: the future of workplace inclusion will be more rigorous, more measured, and more accountable — not less.

    This is a systems conversation.

    Not about slogans.
    Not about performative commitments.
    About incentives, power, and what actually moves outcomes.

    If you care about leadership, governance, and the second-order effects of institutional design, this episode will challenge you.

    32 mins
  • Soft Skills Are the Hard Advantage in the AI Era (ft. Bushra Khan)
    Feb 17 2026

    For years, we treated emotional intelligence like a cultural add-on.

    Nice to have.
    Important, maybe.
    But not central to performance.

    That framing doesn’t survive the AI era.

    In this episode of FUTUREPROOF., I sit down with Dr. Bushra Khan, founder of Leading with BK, to examine what actually differentiates leaders as automation compresses the knowledge gap. When AI can draft, analyze, summarize, and even simulate difficult conversations, the advantage shifts. It moves from what you know to how you show up.

    Bushra has spent over 15 years helping leaders translate emotional intelligence from buzzword into operating system. We talk about why “soft skills” should be understood as strategic skills, how negativity bias quietly distorts leadership judgment, and why loneliness inside high-performing teams is less about remote work and more about emotional avoidance.

    We also explore some uncomfortable tensions:

    • If AI amplifies leaders, what exactly is it amplifying?
    • When does candor become bluntness — and erode trust instead of building it?
    • Why do leaders underestimate the emotional consequences of automation?
    • What does bravery look like when decisions are both rational and painful?

    Bushra argues that most organizations are still trying to fix people instead of fixing environments. They invest in workshops while ignoring incentives. They push productivity while neglecting psychological safety. They assume proximity equals connection.

    But as AI takes over more technical tasks, influence becomes the real differentiator. And influence is emotional before it is analytical.

    This conversation isn’t about positivity or platitudes. It’s about leadership under pressure — layoffs, automation, rapid skills shifts — and what it takes to signal trust and authority through noise.

    Because the future of work won’t just test our systems.

    It will test our emotional maturity.

    28 mins
  • How People Endure When Systems Collapse (ft. Trevor Reed, author & Russia detainee)
    Feb 10 2026

    This episode of FUTUREPROOF. is different.

    My guest is Trevor Reed, a former U.S. Marine who was wrongfully detained and abused in a Russian penal colony for nearly three years, freed in a high-profile prisoner exchange in 2022—and then made a decision few could comprehend: he voluntarily went to Ukraine to fight against the same system that imprisoned him.

    In this conversation, Trevor reflects on what captivity does to the human mind, how survival reshapes your definition of justice, and why freedom—real freedom—can’t be taken for granted once you’ve lost it.

    We talk about:

    • What daily life inside a Russian penal colony is actually like—and how close he came to dying there
    • The mental discipline required to survive prolonged isolation, hunger, and uncertainty
    • The emotional toll of being turned into a geopolitical bargaining chip
    • Why revenge eventually gave way to a deeper definition of justice
    • The surreal contrast between everyday life and active war zones in Ukraine
    • Being critically wounded by a landmine—and what it means to survive twice
    • How his understanding of freedom, responsibility, and humanity has fundamentally changed

    This is not a conversation about politics.
    It’s a conversation about power, resilience, moral injury, and what it means to remain human when systems fail you.

    Trevor’s memoir, Retribution: A Former US Marine's Harrowing Journey from Wrongful Imprisonment in Russia to the Front Lines of the Ukrainian War, is not an easy read—but it is an important one. And this conversation is not comfortable—but it is necessary.

    25 mins
  • The ROI of Not Being a Robot (ft. author & VaynerX exec Claude Silver)
    Feb 3 2026

    What if the most undervalued leadership skill in the AI era isn’t technical fluency—but emotional presence?

    This episode of FUTUREPROOF. features Claude Silver, the world’s first Chief Heart Officer and the No. 2 executive at VaynerX, joining the show to unpack why authenticity, empathy, and belonging are no longer “nice-to-haves,” but strategic advantages.

    Claude’s 2025 book, Be Yourself at Work, challenges the long-standing belief that professionalism requires emotional distance. Instead, she argues that in a world defined by AI, automation, and burnout, the leaders who win are the ones who lead with heart—intentionally, skillfully, and without performative fluff.

    We explore:

    • Why “authenticity” has been misunderstood—and how to practice it without oversharing or losing authority
    • What leading with heart actually looks like inside a 2,000-person global organization
    • How emotional skills become power skills as AI absorbs more technical work
    • The difference between fitting in and true belonging—and why that gap is costing companies talent and trust
    • How leaders can balance emotional bravery with emotional efficiency in an always-on, high-pressure world

    This is a conversation about leadership after the old playbook breaks—and what replaces it when humanity becomes the edge.

    25 mins
  • AI Is Scaling Fast—Accessibility Isn’t. Here’s How We Fix That.
    Jan 6 2026

    Guest: Joe Devon
    Title: Chair, GAAD Foundation | Co-founder, Global Accessibility Awareness Day

    AI is reshaping how we design software—but accessibility still too often shows up as an afterthought. In this episode of FUTUREPROOF., Joe Devon joins us to unpack what it actually means to build technology that works for everyone, especially as generative AI becomes embedded across products, platforms, and workflows.

    Joe explains why accessibility isn’t a niche concern—it affects more than 1.3 billion people globally—and why AI represents both the biggest risk and the biggest opportunity the accessibility movement has ever seen. We dig into the early findings from the AI Model Accessibility Checker (AIMAC), what most AI models still get wrong about accessible code, and why “AI will fix it later” is a dangerous assumption.

    We also explore how front-end tools like AI-generated captions, voice interfaces, and image descriptions are changing daily life for users with disabilities—and where back-end AI systems can finally close the gap between automated testing and real-world usability. Throughout the conversation, Joe makes a compelling case that accessibility is not just a moral imperative, but a design discipline that will separate future-proof products from legacy ones.

    Topics covered:

    • Why most digital products still fail basic accessibility standards
    • How AI can dramatically expand—or quietly restrict—access
    • What AIMAC reveals about how accessible today’s AI models really are
    • Front-end vs. back-end accessibility breakthroughs
    • The ethical stakes of deploying inaccessible AI at scale
    • Why inclusive design must be a core requirement, not a patch
    23 mins