Episodes

  • #328 Kevin Tian: Exploring Doppel's AI-Native Social Engineering Defense Platform
    Mar 27 2026

    AI is changing more than just productivity.

    It's changing what we can trust.

    In this episode, Kevin Tian, Co-founder and CEO of Doppel, breaks down how AI is enabling a new wave of social engineering attacks—from deepfake phone calls to impersonation across LinkedIn, YouTube, and search engines.

    The reality is this:
    Deepfakes are just one part of a much bigger problem.

    Attackers are now operating across multiple channels at once, using AI to manipulate people, not just systems. And as these attacks scale, the real risk isn't just fraud or data loss—it's the erosion of trust in everything we see online.

    Kevin explains how Doppel is building an AI-native defense platform to detect, map, and shut down these attacks in real time, and why the future of cybersecurity will be defined by AI vs AI.

    If you're thinking about AI, security, or the future of trust online—this conversation is essential.


    Stay Updated:
    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) AI Deepfakes & The Collapse of Trust
    (01:56) Why "Social Engineering" Is Bigger Than Phishing
    (05:20) Deepfakes, Misinformation & Multi-Channel Attacks
    (09:16) The Rise of Deepfake Phone Calls
    (12:43) How Attackers Manipulate AI & Search Results
    (14:39) The Origin Story Behind Doppel
    (18:55) How Doppel Detects & Stops Attacks in Real Time
    (22:55) Can Attackers Misuse AI Defense Tools?
    (24:26) How to Tell What's Real vs Fake Online
    (28:20) What Is Human Risk Management?
    (30:36) AI vs AI: The Future of Cyber Defense
    (34:04) What CEOs Must Do About AI Threats
    (37:18) Working with Platforms Like YouTube & LinkedIn
    (39:52) Can We Ever Fully Stop Deepfakes?
    (44:40) How Doppel Works for Enterprises

    48 mins
  • #327 Baris Gultekin: The Next Phase of AI - Agents That Understand Your Company's Data
    Mar 19 2026

    This episode is sponsored by Modulate.

Meet Velma, voice AI that detects tone, intent, and stress: http://preview.modulate.ai

    Baris Gultekin, Head of AI at Snowflake, breaks down how enterprise AI is actually being built, deployed, and scaled today. From running AI directly inside governed data environments to enabling natural language access across entire organizations, this conversation explores the shift from experimentation to real-world impact.

    You'll learn why Snowflake's core philosophy centers around bringing AI to the data, how data agents are transforming decision-making across teams, and what it takes to build trustworthy AI systems with governance, guardrails, and high-quality retrieval at the core.

    Baris also shares how leading companies are already saving thousands of hours through AI-driven automation, why culture and leadership determine AI success, and what the future looks like as agents move from pilots to full-scale production.

    If you want to understand where enterprise AI is actually headed and what separates hype from real execution, this episode breaks it down.

    (00:00) The Evolution of Snowflake AI

    (01:40) Baris Gultekin: Background & AI Mission

    (02:59) Why AI Must Run Next to Data

    (04:29) Inside Snowflake's AI Infrastructure

    (09:08) Model Choice vs Product Layer Strategy

    (12:16) Building Trust: Governance, Guardrails & Quality

    (16:01) How Enterprise Agents Are Built & Orchestrated

    (20:10) AI Adoption Across the Entire Organization

    (24:39) Reasoning vs Retrieval: What Matters More

    (27:43) Real Use Case: Faster Decision-Making with AI

    (31:44) AI as a Co-Pilot for Leaders

    (36:52) Preparing Data for AI at Scale

    (38:46) What the AI Data Cloud Really Means

    42 mins
  • #326 Zuzanna Stamirowska: Inside Pathway's AI Systems That Work with Live, Real-Time Data
    Mar 11 2026

    This episode is sponsored by tastytrade.

    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.

    Learn more at https://tastytrade.com/


    In this episode of Eye on AI, Craig Smith speaks with Zuzanna Stamirowska about how Pathway is enabling AI systems to work with live, continuously updating data.

    Most AI applications rely on static datasets that quickly become outdated. Pathway takes a different approach, allowing developers to build AI systems that process real-time data streams, keeping models, knowledge bases, and AI agents constantly up to date.

    Craig and Zuzanna explore why real-time data may be critical for the next generation of LLM applications, RAG systems, and enterprise AI infrastructure, and what it takes to build AI that can operate in a constantly changing world.

    Subscribe for more conversations with the researchers and builders shaping the future of AI.



    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) The Core Problem: Why Today's AI Lacks Memory

    (03:16) Pathway's Mission to Bring Memory Into AI

    (04:53) Zuzanna's Background in Complexity Science

    (10:30) Why Transformers Reset Like "Groundhog Day"

    (14:34) The Brain-Inspired Dragon Hatchling Architecture

    (23:59) How the Network Learns and Builds Connections

    (37:38) Performance vs Transformers on Language Tasks

    (49:37) Productizing the Technology With NVIDIA and AWS

    (54:23) Can Memory Solve AI Hallucinations?

    1 hr and 8 mins
  • #325 Phelim Bradley: Why AI's Future Depends on Human Judgement
    Mar 9 2026

    AI often looks fully automated. But behind the scenes, a huge amount of human judgment is shaping how these systems actually work.

    In this episode, Craig Smith speaks with Phelim Bradley, co-founder and CEO of Prolific, a platform that connects millions of real people with researchers and AI labs to evaluate and improve AI systems.

    They explore the hidden human layer behind modern AI, why traditional benchmarks are becoming less reliable, and why AI companies increasingly rely on real human feedback to measure model performance in the real world.

    Phelim also explains how demographic differences influence how models are evaluated, why human judgment remains critical even as AI improves, and how the collaboration between humans and AI will shape the next phase of development.

    This conversation reveals the human backbone behind today's AI systems.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Preview and Intro

    (02:45) Founding Prolific And Early Pain Points

    (06:30) From Mechanical Turk To Representativeness

    (09:55) Academic Research And AI Use Cases Split

    (13:40) Vetting Real Participants And Fighting Fraud

    (17:45) Scale, Community Growth, And Talent Mix

    (22:00) High-Complexity Projects Over Commoditised Labeling

    (26:40) Measuring Model Persuasion With Live Conversations

    (30:20) Demographic-Aware Model Preference Benchmarks

    (34:10) The Rise Of Human Evaluation Over Benchmarks

    (38:00) Enterprise Model Choice And Continuous Evaluation

    (42:00) Why Humans Won't Disappear From The Loop
    47 mins
  • #324 Sharon Zhou: Inside AMD's Plan to Build Self-Improving AI
    Feb 27 2026

    AI is not just getting smarter. It is getting faster by learning how to optimize the hardware it runs on.

    In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI.

    Sharon breaks down how GPU efficiency impacts the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress.

    If you want to understand where AI scaling is really happening beyond bigger models and more data, this episode goes under the hood.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Preview and Intro

    (00:25) Sharon Zhou's Background and Transition to AMD

    (02:00) What Is Self-Improving AI?

    (04:16) What Is a GPU Kernel and Why It Matters

    (07:01) Using AI Agents and Evolutionary Strategies to Write Kernels

    (11:31) Just-In-Time Optimization and Continual Learning

    (13:59) Self-Improving AI at the Infrastructure Layer

    (16:15) Synthetic Data and Models Generating Their Own Training Data

    (20:48) AMD's AI Strategy: Research Meets Product

    (23:22) Inside the NeurIPS Tutorial on AI-Generated Kernels

    (30:59) Reinforcement Learning Beyond RLHF

    (39:09) 10x Faster Kernels vs 10x More Compute

    (41:50) Will Efficiency Reduce Chip Demand?

    (42:18) Beyond Language Models: Diffusion, JEPA, and Robotics

    (45:34) Educating the Next Generation of AI Builders

    46 mins
  • #323 David Ha: Why Model Merging Could Be the Next AI Breakthrough
    Feb 24 2026

    This episode is sponsored by tastytrade.
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.

    Learn more at https://tastytrade.com/



    Artificial intelligence is reaching a turning point. Instead of building bigger and bigger models, what if the real breakthrough comes from letting AI evolve?


    In this episode of Eye on AI, David Ha, Co-Founder and CEO of Sakana AI, explains why evolutionary strategies and collective intelligence could reshape the future of machine learning. We explore model merging, multi-agent systems, Monte Carlo tree search, and the AI Scientist framework designed to generate and evaluate new research ideas. The conversation dives into open-ended discovery, quality and diversity in AI systems, world models, and whether artificial intelligence can push beyond the boundaries of human knowledge.


    If you're interested in AGI, evolutionary AI, frontier models, AI research automation, or how AI could start discovering science on its own, this episode offers a clear look at where the field may be heading next.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) AI Should Evolve, Not Just Scale

    (03:54) David's Journey From Finance to Evolutionary AI

    (10:18) Why Gradient Descent Gets Stuck

    (18:12) Model Merging and Collective Intelligence

    (28:18) Combining Closed Frontier Models

    (32:56) Inside the AI Scientist Experiment

    (38:11) Parent Selection, Diversity and Innovation

    (49:25) Can AI Discover Truly New Knowledge?

    (53:05) Why Continual Learning Matters

    57 mins
  • #322 Amanda Luther: The Widening AI Value Gap (Inside BCG's AI Research)
    Feb 19 2026

    In this episode of Eye on AI, Craig Smith speaks with Amanda Luther, Senior Partner at Boston Consulting Group and global lead of BCG's AI Transformation practice, about what their latest 1,500-company AI study reveals about the widening gap between AI leaders and laggards.

    Only 5% of companies are truly "future-built" with AI embedded across their core business functions. These firms are seeing measurable gains in revenue growth, EBIT margins, and shareholder returns. Meanwhile, 60% of organizations are either experimenting or struggling to extract real value.

    Amanda breaks down how BCG measures AI maturity across 41 capabilities, how AI impact flows through the P&L, and why leading companies invest twice as much in AI as their competitors. She explains where AI is actually creating value today, from sales and marketing to procurement and retail operations, and why most of that value comes from core business functions, not back-office automation.

    The conversation also explores the rise of agentic systems, why many early agent deployments fail, and what it really takes to redesign workflows around AI. Amanda shares practical advice for companies stuck in experimentation mode, how to prioritize the right use cases, and why training and change management matter more than chasing the perfect vendor.

    If you want to understand how AI is reshaping competitive advantage in enterprise organizations, this episode provides a data-backed look at what separates the leaders from everyone else.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) The AI Value Gap

    (01:17) Inside BCG's 1,500-Company AI Study

    (04:14) What "Future-Built" Companies Do Differently

    (09:30) How AI Impact Is Measured on the P&L

    (12:57) Why AI Leaders Invest 2X More

    (14:16) Where AI Is Driving Real Cost Reduction

    (16:20) Agentic AI: Hype vs Reality

    (20:13) Where Agents Actually Create Value

    (24:22) Tech vs Talent: Where the Money Goes

    (26:58) Will AI Laggards Slowly Disappear?

    (31:58) Why Adoption Is Accelerating Now

    (40:07) How to Start: Amanda's Advice to AI Laggards

    54 mins
  • #321 Nick Frosst: Why Cohere Is Betting on Enterprise AI, Not AGI
    Feb 17 2026

    This episode is sponsored by tastytrade.

    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.

    Learn more at https://tastytrade.com/



    In this episode of Eye on AI, Nick Frosst, Co-Founder of Cohere and former Google Brain researcher, explains why Cohere is betting on enterprise AI instead of chasing AGI.

    While much of the AI industry is focused on artificial general intelligence, Cohere is building practical, capital-efficient large language models designed for real-world enterprise deployment. Nick breaks down why scaling transformers does not equal AGI, why inference cost and ROI matter, and how enterprise AI differs from consumer AI hype.

    We discuss enterprise LLM deployment, private data, regulated industries like banking and healthcare, agentic systems, evaluation benchmarks, and why AI will likely become embedded infrastructure rather than a headline breakthrough.

    If you care about enterprise AI, AGI debates, large language models, and the future of AI in business, this conversation delivers a grounded perspective from inside one of the leading AI companies.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) From Google Brain to Cohere

    (03:54) Discovering Transformers

    (06:39) The Transformer Dominance

    (09:44) What AGI Actually Means

    (12:26) Planes vs Birds: The AI Analogy

    (14:08) Why Cohere Isn't Chasing AGI

    (18:38) Distillation & Model Efficiency

    (21:42) What Enterprise AI Really Does

    (25:20) Private Data & Secure Deployment

    (26:59) Enterprise Use Cases (RBC Example)

    (32:22) Why AI Benchmarks Mislead

    (34:55) Why Most AI Stays in Demo

    (38:23) What "Agents" Actually Are

    (43:32) The Problem With AGI Fear

    (49:15) Scaling Enterprise AI

    (53:24) Why AI Will Get "Boring"

    1 hr and 1 min