Episodes

  • EP 35: AI Algorithmic Trading: The New Market Makers
    Feb 22 2026

    Welcome to the final episode of the AI in Finance series, exploring algorithmic trading and AI market makers—genuinely the wild west of AI in finance. Here's context most people don't realize: 60-70% of equity market volume already comes from algorithmic trading, with high-frequency trading alone accounting for roughly 50%. When you think about the stock market, you're thinking about a system that's already majority AI and algorithms, not human traders.

    Sam and Mac explore what fundamentally differentiates AI algorithmic trading from traditional algorithmic trading. Traditional algorithms follow fixed rules: if condition X, then execute action Y—deterministic and predictable. AI algorithms learn and adapt dynamically, recognizing complex patterns across multiple variables, adjusting strategies in real time based on changing market conditions, and optimizing behaviors continuously.
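That contrast can be sketched in a few lines of code. This is a toy illustration, not a real trading system: the thresholds, names, and learning rule are all invented for the example.

```python
def fixed_rule_signal(price: float, threshold: float = 100.0) -> str:
    """Traditional algo: deterministic rule -- if condition X, execute action Y."""
    return "BUY" if price < threshold else "HOLD"


class AdaptiveSignal:
    """Toy 'learning' algo: adjusts its own threshold from incoming prices,
    so the same input can produce different decisions over time."""

    def __init__(self, threshold: float = 100.0, lr: float = 0.1):
        self.threshold = threshold
        self.lr = lr

    def update(self, price: float) -> None:
        # Nudge the threshold toward recent prices (a crude moving estimate).
        self.threshold += self.lr * (price - self.threshold)

    def signal(self, price: float) -> str:
        return "BUY" if price < self.threshold else "HOLD"
```

After a run of higher prices, the adaptive version's threshold drifts upward and it starts buying at prices the fixed rule would ignore; the fixed rule answers identically forever.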

    The technical models include reinforcement learning (AI learning optimal strategies through trial and error in simulations), LSTMs for time series prediction, and increasingly transformer models adapted for financial data—same basic architecture as ChatGPT but trained on market data instead of language. These models are exceptional at understanding that the same price movement means different things in different contexts: high volatility versus low volatility, bull market versus bear market.
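A minimal sketch of that context dependence, using the volatility regime as the context. Learned models pick this up implicitly from data; here it is made explicit with an invented rule (the 2x-volatility cutoff is arbitrary, for illustration only).

```python
import statistics


def classify_move(move_pct: float, recent_moves: list[float]) -> str:
    """Label the same price move differently depending on the volatility regime.

    In a calm market a 2% drop is a large, unusual event; in a volatile
    market the identical drop is routine noise.
    """
    vol = statistics.pstdev(recent_moves)  # realized volatility of recent moves
    return "significant" if abs(move_pct) > 2 * vol else "routine"
```

The same -2% input gets opposite labels depending on the history it arrives with, which is the core idea behind regime-aware models.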

    Regulatory landscape remains challenging. The SEC requires reasonable oversight, but defining "reasonable" for systems executing thousands of trades per second is genuinely difficult. In practice, this means kill switches, risk limits built into algorithms, monitoring systems that flag unusual patterns, and automatic shutoffs when volatility triggers occur.
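The safeguards listed above can be sketched as a thin wrapper around order submission. This is a hypothetical illustration: the class, limits, and trigger values are invented, and a production system would sit at the exchange-gateway level, not in application code.

```python
class GuardedTrader:
    """Sketch of algorithmic-trading safeguards: a per-order risk limit,
    an automatic volatility shutoff, and a manual kill switch."""

    def __init__(self, max_order_size: float, vol_limit: float):
        self.max_order_size = max_order_size  # risk limit built into the algo
        self.vol_limit = vol_limit            # volatility trigger level
        self.halted = False

    def kill_switch(self) -> None:
        """Manual, immediate halt."""
        self.halted = True

    def check_volatility(self, realized_vol: float) -> None:
        """Automatic shutoff when the volatility trigger fires."""
        if realized_vol > self.vol_limit:
            self.halted = True

    def submit(self, size: float) -> str:
        if self.halted:
            return "REJECTED: trading halted"
        if size > self.max_order_size:
            return "REJECTED: risk limit"
        return "ACCEPTED"
```

Every order passes the checks; once a trigger fires, nothing goes out until a human intervenes.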

    15 mins
  • EP 32: AI Fraud Detection - Fighting Fire with Fire
    Feb 22 2026

    Over 50% of fraud now involves AI. Feedzai surveyed 562 fraud professionals globally and found AI-powered fraud has become the norm, not the exception. We're talking about deepfakes, synthetic identities, and AI-powered phishing so sophisticated it's basically indistinguishable from legitimate communications. The counterpunch? 90% of banks are now using AI to fight back—fighting fire with fire.

    Sam and Mac paint the threat landscape: deepfake calls that sound exactly like your bank's fraud department, coming from your bank's actual (spoofed) phone number, with a perfectly cloned voice and a professional script asking for your PIN. California bank customers received dozens of these calls, and many fell for it because the technology is that convincing.

    This is an arms race. Fraudsters use AI, banks use AI—there's no final victory. As bank AI gets smarter at detection, fraud AI evolves to evade those systems. It's like computer viruses and antivirus software—never-ending evolution and counter-evolution. The economic stakes are enormous: Deloitte estimates US banking losses from fraud could increase from $12.3 billion in 2023 to $40 billion by 2027, more than tripling in four years due to generative AI sophistication.

    Human oversight remains essential: 88% of banking professionals say it's non-negotiable. AI identifies potential issues and surfaces them to analysts, but humans make the final call on complex cases. The benefit: 43% of institutions report increased efficiency because AI handles high-volume, straightforward cases, freeing human experts for the complex, nuanced ones that require judgment.
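That division of labor is essentially score-based triage. A minimal sketch, with invented thresholds (real systems tune these against false-positive and fraud-loss costs):

```python
def triage(fraud_score: float, low: float = 0.2, high: float = 0.9) -> str:
    """Route a transaction by model score: clear cases are resolved
    automatically; ambiguous ones are surfaced to a human analyst
    who makes the final call. Thresholds here are hypothetical."""
    if fraud_score >= high:
        return "auto-block"      # high-confidence fraud
    if fraud_score <= low:
        return "auto-approve"    # high-confidence legitimate
    return "analyst-review"      # uncertain: escalate to a human
```

The model absorbs the high-volume extremes of the score distribution, and only the uncertain middle band reaches an analyst.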

    17 mins