
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

By: Sam Charrington

About this listen

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. It is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Episodes
  • The Race to Production-Grade Diffusion LLMs with Stefano Ermon - #764
    Mar 26 2026
    Today, we're joined by Stefano Ermon, associate professor at Stanford University and CEO of Inception Labs to discuss diffusion language models. We dig into how diffusion approaches—traditionally used for images—are being adapted for text and code generation, the technical challenges of applying continuous methods to discrete token spaces, and how diffusion models compare to traditional autoregressive LLMs. Stefano introduces Mercury 2, a commercial-scale diffusion LLM that can generate multiple tokens simultaneously and achieve inference speeds 5-10x faster than small frontier models, paving the way for latency-sensitive applications like voice interactions and fast agentic loops. We also cover the open research challenges in diffusion LLM training, serving infrastructure requirements, and post-training for diffusion-based systems. Finally, Stefano shares his perspective on whether diffusion models can rival or surpass autoregressive LLMs at scale, the advantages for highly controllable generation, and what the future of multimodal diffusion models might look like. The complete show notes for this episode can be found at https://twimlai.com/go/764.
    1 hr and 3 mins
  • Agent Swarms and Knowledge Graphs for Autonomous Software Development with Siddhant Pardeshi - #763
    Mar 10 2026
    In this episode, Sid Pardeshi, co-founder and CTO of Blitzy, joins us to discuss building autonomous development systems able to deliver production-ready software at enterprise scale. Sid contrasts AI-assisted coding with end-to-end autonomy, arguing that “code is a commodity” and acceptance is the real metric—security, standards, tests, and maintainability included. We explore Blitzy’s hybrid graph-plus-vector approach, which grounds agents and combines semantic signals with keyword search to navigate large repositories efficiently. Sid breaks down context and agent engineering, how effective context windows have plateaued, and why dynamic agent personas, tool selection, and model-specific prompting matter at scale. He details their orchestration of large swarms of AI agents to collaboratively analyze codebases, plan tasks, and execute complex tasks in parallel. We also dig into why Agents.md and flat memories break down, storing feedback in the knowledge graph, and building real-world evals beyond leaderboards to choose the right model for each task. The complete show notes for this episode can be found at https://twimlai.com/go/763.
    1 hr and 16 mins
  • AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762
    Feb 26 2026
    In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian’s perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch). The complete show notes for this episode can be found at https://twimlai.com/go/762.
    1 hr and 19 mins
Reviews
Love these shows. Informative and well put together, and I always leave a little bit more informed 😬

My favourite AI podcast
