Episodes

  • OpenAI’s Enterprise Strategy: From Chatbot to Operating Layer
    Mar 31 2026

    In this episode of the Macro AI Podcast, Gary Sloper and Scott Bryan break down one of the most important shifts happening in enterprise technology today—OpenAI’s aggressive move into the enterprise market.

    This isn’t just about ChatGPT anymore.

    OpenAI is evolving into a full enterprise platform—and potentially something even more significant: an operating layer for knowledge work. For business and technical leaders, understanding this shift is critical as the AI vendor landscape rapidly transforms.

    Gary and Scott walk through why OpenAI is pushing so hard into enterprise, including the economic reality driving the strategy—massive compute requirements that demand large, predictable enterprise revenue streams. They explore what OpenAI is actually selling today, from ChatGPT Business and Enterprise to APIs, models, and emerging agent platforms that are moving AI from simple assistance to real workflow execution.

    The discussion goes deeper into OpenAI’s product roadmap, highlighting the transition from chat-based interactions to agent-driven execution, where AI systems can take actions, persist context, and operate across enterprise systems. This shift represents a fundamental change in how work gets done.

    The episode also unpacks OpenAI’s unique go-to-market strategy, combining product-led growth, direct enterprise sales, consulting partnerships, and deep integrations with platforms like AWS and Snowflake. This hybrid model allows OpenAI to embed itself into existing enterprise buying channels rather than compete directly—at least for now.

    Gary and Scott provide critical insight into OpenAI’s rapidly scaling sales organization, including the rise of forward-deployed engineering roles focused on delivering real business outcomes—not just selling licenses.

    Finally, they address the most important question for executives: where does OpenAI fit within the enterprise stack? Is it a tool, a platform, or something more disruptive that could sit above traditional SaaS and cloud providers?

    If you’re a CIO, CTO, or business leader evaluating AI strategy in 2026, this episode will help you understand where OpenAI is headed, how big this opportunity could become, and what you should be doing now to prepare.

    Send a Text to the AI Guides on the show!


    About your AI Guides

    Gary Sloper

    https://www.linkedin.com/in/gsloper/


    Scott Bryan

    https://www.linkedin.com/in/scottjbryan/

    Macro AI Website:

    https://www.macroaipodcast.com/

    Macro AI LinkedIn Page:

    https://www.linkedin.com/company/macro-ai-podcast/


    Gary's Free AI Readiness Assessment:

    https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


    Scott's Content & Blog

    https://www.macronomics.ai/blog





    17 mins
  • When AI Gets a Wallet: The Rise of Machine-to-Machine Commerce (MPP Explained)
    Mar 27 2026

    In this episode of the Macro AI Podcast, Scott and Gary break down Machine Payments Protocol (MPP) and why it represents a major turning point in the evolution of AI. While it may sound like a fintech innovation on the surface, MPP is actually unlocking something much bigger: true economic autonomy for AI agents.

    The conversation explores how MPP works at a technical level—leveraging the long-unused HTTP 402 “Payment Required” status code to enable real-time, programmatic transactions between agents and services. But more importantly, they dive into what this means strategically.
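For the curious, the 402 handshake the hosts reference can be sketched in a few lines. The endpoint, header name, and payment helpers below are invented placeholders for illustration; MPP's actual wire format is not specified here, only the general pattern of "request, get 402 with a price, pay, retry":

```python
# Hypothetical sketch of an HTTP 402 payment loop for an AI agent.
# The URL, header name, and pay() helper are illustrative only.

def fetch_with_payment(url, pay, get):
    """Request a resource; if the server replies 402, pay and retry."""
    resp = get(url)
    if resp["status"] == 402:
        # The 402 body is assumed to advertise a price and a payment address.
        invoice = resp["body"]  # e.g. {"amount": 0.002, "pay_to": "svc-123"}
        receipt = pay(invoice["pay_to"], invoice["amount"])
        # Retry with proof of payment attached (header name is made up).
        resp = get(url, headers={"X-Payment-Receipt": receipt})
    return resp

# Tiny in-memory stand-ins for a paid service and an agent wallet:
def fake_get(url, headers=None):
    if headers and "X-Payment-Receipt" in headers:
        return {"status": 200, "body": "premium data"}
    return {"status": 402, "body": {"amount": 0.002, "pay_to": "svc-123"}}

def fake_pay(pay_to, amount):
    return f"receipt:{pay_to}:{amount}"

result = fetch_with_payment("https://example.com/data", fake_pay, fake_get)
```

The loop is the whole idea: the unpaid request fails with a machine-readable price, the agent settles programmatically, and the retried request succeeds with no human in the loop.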

    As agents gain the ability to transact, APIs begin to shift from static integrations to dynamic marketplaces, where services compete in real time based on price, performance, and quality. This opens the door to entirely new models of software, procurement, and revenue generation—where AI systems can discover, evaluate, and purchase capabilities on demand.

    Scott and Gary also discuss the broader ecosystem behind MPP, including the roles of Stripe, Visa, and Paradigm, and why their involvement signals that this is not experimental—but foundational.

    Finally, they explore the risks and governance challenges that come with autonomous spending, and what enterprises need to consider as AI moves from a cost center to an economic participant.

    If you want to understand where AI is heading next—not just in capability, but in how it operates in the real world—this is a must-listen episode.

    #ArtificialIntelligence #AIAgents #MachineEconomy #AICommerce #Fintech #DigitalPayments #EnterpriseAI #AIstrategy #APIEconomy #MachineToMachine

    17 mins
  • What Are AI PCs?
    Mar 16 2026

    Are AI PCs just another hardware refresh cycle — or are they the next major shift in enterprise AI architecture?

    In this episode of the Macro AI Podcast, Gary and Scott take a deep executive-level dive into AI PCs and what they really mean for CIOs, CTOs, and business leaders.

    They break down:

    • What an AI PC actually is (CPU, GPU, and NPU explained)
    • What models truly run on AI PCs — including small, optimized LLMs like Llama, Phi, Mistral, and Gemma
    • Why most enterprise AI tasks do not require frontier-scale models like ChatGPT or Claude
    • The difference between frontier reasoning models and edge inference models
    • How hybrid AI architecture balances cloud and endpoint intelligence
    • Why token cost is now a critical part of AI ROI analysis
    • How to model AI token OpEx vs AI PC CapEx over a 3–4 year lifecycle
    • Security and governance implications of distributed AI
    • How much IT talent is actually required to deploy and manage AI PCs
    • Whether AI PCs are foundational — or just hype
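The token OpEx vs. CapEx comparison in the list above reduces to simple arithmetic. Here is a minimal sketch of the model's shape; every price and volume below is a placeholder assumption, not a market figure:

```python
# Illustrative break-even model: cloud token spend vs. an AI PC premium.
# All numbers below are placeholder assumptions, not real prices.

def cloud_opex(tokens_per_month, price_per_million_tokens, months):
    """Total cloud inference cost for one user over the lifecycle."""
    return tokens_per_month / 1_000_000 * price_per_million_tokens * months

def breakeven_months(ai_pc_premium, monthly_cloud_cost):
    """Months of avoided cloud spend needed to cover the device premium."""
    return ai_pc_premium / monthly_cloud_cost

monthly_tokens = 5_000_000  # assumed tokens offloadable to the device
price = 10.0                # assumed $ per million cloud tokens
lifecycle = 48              # 4-year refresh cycle, in months

monthly_cost = cloud_opex(monthly_tokens, price, 1)   # $50/month
total = cloud_opex(monthly_tokens, price, lifecycle)  # $2,400 over 4 years
months = breakeven_months(400.0, monthly_cost)        # $400 premium pays back in 8 months
```

The real analysis adds utilization rates, model quality deltas, and IT overhead, but the skeleton is this: recurring token OpEx compounds over the lifecycle, while the device premium is paid once.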

    A key insight from this discussion:
    AI token economics are becoming part of endpoint strategy.

    As AI usage scales across enterprises, token consumption can compound quickly. AI PCs introduce a new lever in AI cost governance by shifting routine inference to the edge — reducing cloud dependency while maintaining access to frontier models for complex reasoning.
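The edge-vs-cloud split described above can be sketched as a toy router. The length heuristic and model labels are illustrative assumptions only, not a recommended production policy (real routers weigh task type, data sensitivity, and latency budgets):

```python
# Toy hybrid router: keep routine prompts on a local edge model,
# escalate complex ones to a frontier cloud model. The heuristic
# (prompt length) and the model labels are illustrative assumptions.

def route(prompt, max_local_tokens=512):
    est_tokens = len(prompt.split())  # crude whitespace token estimate
    if est_tokens <= max_local_tokens:
        return "edge-slm"             # e.g. a small local model on the NPU
    return "cloud-frontier"           # e.g. a hosted frontier model
```

Even a crude policy like this moves the bulk of short, routine requests off the metered cloud path, which is the cost-governance lever the episode highlights.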

    This episode reframes AI PCs not as a device trend, but as a strategic architecture decision.

    If you are designing AI infrastructure, evaluating AI spend, or planning your next endpoint refresh cycle, this is a must-listen conversation.

    22 mins
  • Florida and AI Governance: What Actually Exists — and What It Means for Business
    Mar 6 2026

    In this episode of the Macro AI Podcast, Gary and Scott clarify what actually exists in Florida regarding artificial intelligence governance — and what does not.

    While some discussions reference a “Florida AI Bill of Rights,” there is currently no enacted Florida statute formally titled that. Instead, Florida has passed the Florida Digital Bill of Rights (2023), a consumer data privacy law that includes provisions relevant to profiling and automated data processing. Additionally, the state has addressed AI in specific contexts such as election-related disclosures and government use.

    Gary and Scott separate terminology from law and explain what Florida’s existing legislation means for enterprises deploying AI systems today.

    In this episode, they discuss:

    • What the Florida Digital Bill of Rights covers — and how it intersects with AI
    • How profiling and automated decision-making may trigger compliance obligations
    • The difference between proposed AI frameworks and enacted statutes
    • How state-level developments interact with federal guidance such as the NIST AI Risk Management Framework
    • What multi-state enterprises should be doing now to strengthen AI governance

    For CIOs, CISOs, HR leaders, general counsel, and board members, this conversation provides a clear, fact-based overview of Florida’s current legal landscape and the broader direction of AI regulation in the United States.

    As AI adoption accelerates, governance maturity — including transparency, documentation, and oversight — is becoming an operational expectation, not just a regulatory response.

    12 mins
  • Securing AI Across the Global Enterprise WAN
    Mar 2 2026

    In this Macro AI Podcast episode, Gary Sloper and Scott Bryan break down why AI fundamentally breaks legacy WAN security models—and why enterprises can’t secure AI like it’s “just another SaaS app.” AI traffic may look like ordinary encrypted HTTPS on the wire, but the real risk lives inside semantic intent, context windows, and increasingly agentic workflows that can execute actions across systems at machine speed.

    Gary and Scott walk through the core shift: security teams used to ask who is the user, where are they going, and is the data allowed to move? In the AI era, the question becomes far more complex: should this semantic content—originating from this identity, device posture, and region—be allowed to influence a reasoning system that can take downstream action? That’s not a firewall rule, URL filter, or traditional CASB problem—it’s a new enforcement model.

    The conversation builds an actionable architecture for securing AI across the global enterprise WAN, including why AI controls must be inline, preventative, and WAN-native. They outline the AI security capability stack—AI traffic classification, semantic inspection, and AI-specific policy enforcement—and explain why enforcement must be bidirectional, since model outputs can be just as risky as prompts.

    From there, the episode tackles the two dominant enterprise realities: securing AI that users consume (often hidden inside SaaS and productivity platforms) and securing AI the enterprise builds, including training pipelines, RAG systems, and agent-driven execution. The hosts also dive into the hardest global constraints—latency, sovereignty, and elastic load—and why distributed enforcement with centralized policy is now mandatory for performance and compliance.

    Finally, they cover what it takes to operationalize AI security over time: derived telemetry (not raw prompt hoarding), explainable policies, automated response integration, continuous governance, and agent privilege reviews—because architecture without operations is theory.

    Key takeaway: AI is now a first-class WAN workload—semantic, stateful, autonomous, latency-sensitive, and globally distributed. Treat it like SaaS and you lose control. Anchor AI security in the WAN and you gain visibility, preventative enforcement, and durable governance at enterprise scale.

    23 mins
  • AI Protocols for Retail: How UCP and ACP Will Redefine Agent-Driven Commerce
    Feb 20 2026

    AI agents are rapidly moving beyond recommendations and into real retail transactions, and a new layer of infrastructure is emerging to make that possible: AI commerce protocols.

    In this episode of the Macro AI Podcast, Gary Sloper and Scott Bryan deliver a deep, authoritative discussion on AI protocols for retail, focusing on two of the most important early standards shaping agent-driven commerce today: Universal Commerce Protocol (UCP) and Agentic Commerce Protocol (ACP).

    The episode begins with the origin of UCP and ACP, explaining why these AI commerce protocols were created, who is driving them, and how they reflect two different approaches to enabling AI-powered retail transactions. Gary and Scott then break down how UCP and ACP work technically, translating complex protocol concepts into clear explanations for business and technology leaders.

    Listeners will learn how UCP standardizes commerce capabilities across retailers, enabling AI agents to discover products, manage carts, initiate checkout, and handle post-purchase workflows, while ACP focuses on structured, conversational, agent-led buying experiences designed for AI assistants operating in real time.

    Beyond the technology, the discussion explores what AI protocols mean for retail leaders, including:

    • How AI agents may reshape digital commerce architecture
    • Why data quality, pricing logic, and fulfillment accuracy are becoming critical competitive advantages
    • What agent-first commerce means for brand control, customer experience, and retail strategy
    • Why UCP and ACP represent early-stage infrastructure, not finished standards

    The hosts emphasize that AI commerce protocols are still in their early stages, and no one yet knows which standards will dominate or how they will evolve. However, understanding UCP, ACP, and the broader shift toward agentic commerce is becoming essential for CIOs, CTOs, CFOs, and retail executives planning for the future of AI-driven retail.

    This episode is designed for leaders who want to move beyond hype and gain practical insight into how AI protocols could redefine retail commerce over the next several years.

    14 mins
  • Energy and the AI Race: Why Power Is the Real Bottleneck for Artificial Intelligence
    Feb 9 2026

    AI isn’t limited by models, talent, or capital — it’s limited by electricity.

    In this episode of the Macro AI Podcast, Gary Sloper and Scott Bryan break down the energy reality behind artificial intelligence, from individual AI usage to hyperscalers and national infrastructure strategy. They explain where AI actually consumes power, why your laptop is just the remote control, and how every prompt to a large language model triggers real energy use inside GPU-powered data centers.

    The conversation scales from home offices to enterprises, introducing the concept of the “shadow data center” — the hidden energy footprint organizations incur when using AI through SaaS platforms and APIs. Even without owning infrastructure, businesses are consuming significant AI-driven electricity at scale.

    Gary and Scott then examine how many gigawatts of new data center capacity are being planned in the U.S. and globally, why grid timelines are becoming the true bottleneck for AI growth, and how energy availability is reshaping competition between the United States and China.

    Bottom line: AI strategy without energy awareness is incomplete. The future of AI will be written in code — but powered by electrons.

    30 mins
  • Model Context Protocol (MCP) Explained: The Economics of Scaling Enterprise AI Without Exploding Costs
    Jan 30 2026

    In this episode of The Macro AI Podcast, Gary Sloper and Scott Bryan revisit the Model Context Protocol (MCP)—a topic that continues to generate strong listener interest and real-world enterprise questions.

    As organizations move beyond AI pilots and demos, many are discovering that AI isn’t failing because of the models—it’s failing because of integration, governance, and cost. This episode explores why enterprise AI so often hits scaling walls and how MCP is emerging as a critical piece of infrastructure to remove them.

    The conversation breaks down MCP at a practical, executive level—explaining how it standardizes the way AI systems discover, understand, and safely interact with enterprise tools and data. Gary and Scott walk through why traditional API-based integrations struggle in AI-driven environments, how MCP changes the N-by-M integration problem, and why this matters for CIOs, CFOs, and CEOs planning long-term AI strategies.
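The N-by-M point reduces to simple counting: N applications integrating directly with M tools requires roughly N×M bespoke connectors, while a shared protocol layer needs only about N+M adapters (each side implements the protocol once). A sketch, with illustrative sizes:

```python
# Connector counting behind the N-by-M integration argument.

def point_to_point(n_apps, m_tools):
    """Each AI application integrates with each tool directly."""
    return n_apps * m_tools

def via_protocol(n_apps, m_tools):
    """Each application and each tool implements the shared protocol once."""
    return n_apps + m_tools

# 10 AI applications and 40 enterprise tools (illustrative sizes):
direct = point_to_point(10, 40)  # 400 bespoke integrations
shared = via_protocol(10, 40)    # 50 protocol adapters
```

The gap widens as either side grows, which is why the integration burden, not model quality, is so often what stalls enterprise AI programs.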

    A major focus of the episode is AI economics, including a deep dive into token costs—one of the most misunderstood and underestimated drivers of enterprise AI spend. Using clear, real-world examples, the discussion shows how MCP can dramatically reduce token usage, improve performance, and turn unpredictable inference costs into a controllable operating expense.
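Token-cost math is straightforward to model. This sketch uses made-up prices and volumes purely to show how trimming the context sent per request changes monthly spend; it is not a claim about any provider's pricing:

```python
# Illustrative token-spend model: monthly cost before and after reducing
# the context attached to each request. All figures are assumptions.

def monthly_token_cost(requests, tokens_per_request, price_per_million):
    return requests * tokens_per_request / 1_000_000 * price_per_million

requests = 200_000  # assumed requests per month across the enterprise
price = 10.0        # assumed $ per million input tokens

before = monthly_token_cost(requests, 8_000, price)  # bloated, catch-all context
after = monthly_token_cost(requests, 2_000, price)   # targeted tool/data access
```

Because cost scales linearly with tokens per request, anything that lets the model fetch only the context it needs, which is the role MCP-style tool access plays here, cuts spend by the same factor.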

    The episode also covers:

    • Why MCP fundamentally changes the economics of scaling enterprise AI
    • How token efficiency directly impacts ROI, latency, and adoption
    • The infrastructure and total cost of ownership tradeoffs leaders need to understand
    • Governance risks, including the rise of “shadow MCP,” and why centralized oversight matters
    • How MCP complements—not replaces—RAG in modern enterprise AI architectures

    Bottom line: MCP is not a feature or a framework—it’s becoming core infrastructure for serious enterprise AI. If you’re responsible for AI strategy, governance, or budgets, this episode explains why MCP belongs on your radar now.

    18 mins