Episodes

  • EP 35: AI Algorithmic Trading: The New Market Makers
    Feb 22 2026

    Welcome to the final episode of the AI in Finance series, exploring algorithmic trading and AI market makers: genuinely the wild west of AI in finance. Here's context most people don't realize: 60-70% of equity market volume already comes from algorithmic trading, with high-frequency trading alone accounting for roughly 50%. When you picture the stock market, picture a system whose volume is already majority algorithms, not human traders.

    Sam and Mac explore what fundamentally differentiates AI algorithmic trading from traditional algorithmic trading. Traditional algorithms follow fixed rules: if condition X, then execute action Y—deterministic and predictable. AI algorithms learn and adapt dynamically, recognizing complex patterns across multiple variables, adjusting strategies in real time based on changing market conditions, and optimizing behaviors continuously.

    The technical models include reinforcement learning (AI learning optimal strategies through trial and error in simulations), LSTMs for time series prediction, and increasingly transformer models adapted for financial data—same basic architecture as ChatGPT but trained on market data instead of language. These models are exceptional at understanding that the same price movement means different things in different contexts: high volatility versus low volatility, bull market versus bear market.
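
    As a rough illustration of the reinforcement-learning piece, here is toy tabular Q-learning on a simulated market. Everything in this sketch (the one-bit state, the upward price drift, the hyperparameters) is invented for illustration and is not how any production trading system works:

```python
import random

# Toy tabular Q-learning on a simulated market (hypothetical setup):
# the agent learns, by trial and error, whether to hold cash (action 0)
# or hold the asset (action 1), given only the sign of the last return.
random.seed(0)
ACTIONS = (0, 1)                 # 0 = hold cash, 1 = hold the asset
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

q = {}                           # (state, action) -> estimated value
state = 0                        # state = sign of the previous return

for step in range(10_000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

    r = random.gauss(0.001, 0.01)        # simulated per-step return, mild drift
    reward = r if action == 1 else 0.0   # earn the return only when invested
    next_state = 1 if r > 0 else -1

    # standard Q-learning update toward reward + discounted best future value
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    state = next_state

print({k: round(v, 4) for k, v in sorted(q.items())})
```

    With the simulated drift positive, the learned values for holding the asset tend to edge above those for holding cash, which is the "learn optimal strategies through trial and error in simulations" loop the episode describes.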

    Regulatory landscape remains challenging. The SEC requires reasonable oversight, but defining "reasonable" for systems executing thousands of trades per second is genuinely difficult. In practice, this means kill switches, risk limits built into algorithms, monitoring systems that flag unusual patterns, and automatic shutoffs when volatility triggers occur.
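
    The safeguards listed above can be sketched as a pre-trade gate. The class name, limits, and thresholds here are hypothetical, not any regulator's or exchange's actual requirements:

```python
# Hypothetical pre-trade risk controls of the kind described: a per-order
# limit, a volatility trigger, a daily-loss limit, and a kill switch.

class RiskGate:
    def __init__(self, max_order_qty=10_000, max_daily_loss=50_000.0,
                 vol_trigger=0.05):
        self.max_order_qty = max_order_qty
        self.max_daily_loss = max_daily_loss
        self.vol_trigger = vol_trigger   # e.g. a 5% realized-volatility spike
        self.daily_pnl = 0.0
        self.killed = False

    def record_fill(self, pnl):
        self.daily_pnl += pnl
        if self.daily_pnl < -self.max_daily_loss:
            self.killed = True           # automatic shutoff on loss breach

    def allow(self, qty, realized_vol):
        if self.killed:
            return False                 # kill switch engaged
        if realized_vol > self.vol_trigger:
            self.killed = True           # volatility trigger halts trading
            return False
        return qty <= self.max_order_qty # per-order risk limit

gate = RiskGate()
print(gate.allow(qty=500, realized_vol=0.01))   # True: within all limits
gate.record_fill(-60_000.0)                     # breaches the daily loss limit
print(gate.allow(qty=500, realized_vol=0.01))   # False: kill switch engaged
```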

    15 mins
  • EP 34: AI in Credit and Lending: Democratizing Access or Amplifying Bias?
    Feb 22 2026

    AI in credit decisions is genuinely controversial because it could either democratize lending and expand access to underserved populations or take historical discrimination and amplify it at scale. The reality is both are happening simultaneously in different institutions—it all depends on how intentionally the AI is designed and monitored for fairness.

    Sam and Mac examine how AI is disrupting traditional credit scoring. FICO scores have dominated for decades using limited data: payment history, credit utilization, length of credit history, types of credit, and recent inquiries. This approach systematically excludes millions who don't have traditional credit histories, even if they're perfectly responsible with money and would be excellent borrowers.

    The technical models include XGBoost, the industry standard, and neural networks whose hidden layers can process richer data. Traditional logistic regression is often a poor fit for the nonlinear patterns of real-world credit behavior. Banks need model governance with clear ownership, regular bias testing, robust explainability, and human oversight for complex cases. AI handles straightforward approvals and denials; humans handle the middle: complex situations requiring judgment and contextual understanding.
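
    The approve/deny/escalate split can be sketched as a confidence-band router; the probability thresholds below are hypothetical, not any lender's actual policy:

```python
# Hypothetical triage: the model auto-decides only when its estimated
# probability of default is clearly low or clearly high; the ambiguous
# middle is escalated to a human underwriter.

def route(p_default, approve_below=0.05, deny_above=0.60):
    if p_default < approve_below:
        return "auto-approve"
    if p_default > deny_above:
        return "auto-deny"
    return "human-review"   # complex middle: judgment and context needed

for p in (0.02, 0.30, 0.75):
    print(p, "->", route(p))
```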

    15 mins
  • EP 32: AI Fraud Detection - Fighting Fire with Fire
    Feb 22 2026

    Over 50% of fraud now involves AI. FIDZY surveyed 562 fraud professionals globally and found AI-powered fraud has become the norm, not the exception. We're talking about deepfakes, synthetic identities, and AI-powered phishing so sophisticated it's basically indistinguishable from legitimate communications. The counterpunch? 90% of banks are now using AI to fight back, fighting fire with fire.

    Sam and Mac paint the threat landscape: deepfake calls that sound exactly like your bank's fraud department, using your bank's actual spoofed phone number, with a perfect voice and a professional script asking for your PIN. California bank customers received dozens of these calls, and many fell for them because the technology is that convincing.

    This is an arms race. Fraudsters use AI, banks use AI—there's no final victory. As bank AI gets smarter at detection, fraud AI evolves to evade those systems. It's like computer viruses and antivirus software—never-ending evolution and counter-evolution. The economic stakes are enormous: Deloitte estimates US banking losses from fraud could increase from $12.3 billion in 2023 to $40 billion by 2027, more than tripling in four years due to generative AI sophistication.

    Human oversight remains essential. 88% of banking professionals say human oversight is non-negotiable. AI identifies potential issues and surfaces them to analysts, but humans make final calls on complex cases. The benefit: 43% of institutions report increased efficiency because AI handles high-volume straightforward cases, freeing human experts for complex nuanced cases requiring judgment.

    17 mins
  • EP 31: AI in Stock Prediction: The Stanford Study That Outperformed 93% of Fund Managers
    Feb 22 2026

    Stanford just dropped a bombshell study: an AI analyst made 30 years of stock picks and outperformed 93% of human mutual fund managers by an average of 600 basis points—that's 6% annually. This is absolutely massive in the investment world, kicking off Inside AssembleAI's AI in Finance series with the technology that's shaking Wall Street.

    Here's what's fascinating: the AI mostly used simple variables, not the sophisticated ones everyone expected. Firm size and dollar trading volume were dominant factors, but it used complex AI techniques to squeeze maximum predictive value out of simple data everyone can access. The insight isn't about finding hidden data; it's about extracting more signal from obvious data. Any investment firm could have had this data in the pre-AI era, but extracting that signal was simply too costly to justify economically.

    Sam and Mac explore three main approaches institutions use today: pattern recognition for known scenarios (AI learns what fraud or manipulation looks like), anomaly detection for unknown threats (establishing what's normal and alerting on deviations), and predictive analytics for future behavior (forecasting what's likely to happen next). All of it happens in real time, in milliseconds, which is the game changer compared to legacy systems.
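
    Of the three approaches, anomaly detection is the easiest to sketch: establish a baseline of "normal" and alert on deviations. This rolling z-score version (window size, threshold, and the injected spike are all invented for illustration) flags the unusual point:

```python
import statistics

def anomalies(values, window=20, z_thresh=3.0):
    """Flag points deviating > z_thresh std devs from the trailing window."""
    flagged = []
    for i in range(window, len(values)):
        base = values[i - window:i]          # trailing baseline of "normal"
        mu = statistics.fmean(base)
        sigma = statistics.stdev(base)
        if sigma > 0 and abs(values[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

series = [100 + 0.1 * i for i in range(40)]  # smooth synthetic price series
series[30] = 150                             # injected spike
print(anomalies(series))                     # -> [30]
```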

    The data quality issue compounds everything—garbage in, garbage out. Models require at least five years of high-quality historical data for reliable results, and even then, past performance doesn't guarantee future success. Looking ahead to 2026, expect more hedge funds adopting sophisticated AI systems, models incorporating multi-modal data like satellite imagery and social sentiment, intensifying regulatory scrutiny, and continued democratization as retail investors gain access to tools that were hedge fund exclusive just years ago.

    16 mins
  • EP 28: AI-Powered Patient Care Through Synthetic Data
    Feb 20 2026

    By 2024, synthetic data was projected to comprise 60% of all healthcare AI training data. This episode explores how this shift is solving the industry's massive data problem while protecting patient privacy.

    Healthcare faces a critical paradox: AI needs vast patient data for accurate diagnoses and personalized treatments, but HIPAA and GDPR restrict access to real records. Synthetic data offers a breakthrough—artificially generated datasets that mimic real patient populations statistically without containing actual patient information.

    Sam and Mac explain how generative AI techniques like GANs and auto-encoders create synthetic data preserving statistical properties of real healthcare data while eliminating privacy concerns. These datasets train AI to detect diseases, predict outcomes, and recommend treatments without exposing sensitive information.
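
    A full GAN or autoencoder is beyond a short sketch, but the core idea, generating fresh records that match the real data's statistics without copying any patient, can be illustrated with a deliberately simple per-feature Gaussian model. The features and numbers below are hypothetical; real systems also capture correlations between features, which is what GANs and autoencoders are for:

```python
import random
import statistics

# Hypothetical, deliberately simple stand-in for a generative model:
# fit per-feature mean/std on "real" records, then sample fresh records
# that match those statistics without reproducing any individual.
random.seed(1)
real = [{"age": random.gauss(55, 12), "systolic_bp": random.gauss(130, 15)}
        for _ in range(1000)]

def fit(records):
    stats = {}
    for key in records[0]:
        col = [r[key] for r in records]
        stats[key] = (statistics.fmean(col), statistics.stdev(col))
    return stats

def sample(stats, n):
    return [{k: random.gauss(mu, sd) for k, (mu, sd) in stats.items()}
            for _ in range(n)]

synthetic = sample(fit(real), 1000)
print(round(statistics.fmean(r["age"] for r in synthetic), 1))
```

    The synthetic records reproduce the real data's marginal statistics (means and spreads) while every individual record is freshly sampled, which is the privacy-preserving property the episode highlights.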

    The AI healthcare market is expected to grow from $26.6 billion in 2024 to $187.7 billion by 2030, driven by synthetic data breakthroughs. AI tools trained on synthetic datasets are automating clinical documentation, reducing clinician burnout by handling administrative tasks that consume hours of the day. For rare diseases with limited real data, synthetic data enables AI training that was previously impossible.

    However, challenges exist. If original data contains demographic biases or reflects healthcare disparities, synthetic data perpetuates those biases. This can lead to AI performing poorly for underrepresented populations, worsening health inequities. Careful validation and bias detection are essential.

    Regulatory guidance for synthetic data generation and use is still developing. Healthcare organizations must navigate this evolving framework carefully to ensure compliance while leveraging its advantages.

    Early adoption provides competitive advantages. Organizations developing expertise in high-quality synthetic datasets are positioning themselves to lead the AI-driven healthcare transformation. The future of patient care increasingly depends on AI trained on synthetic data protecting privacy while enabling innovation.

    TAGS: Synthetic Data, Healthcare AI, Patient Privacy, HIPAA, Generative AI, GANs, Rare Disease AI, Clinical Documentation, AI Bias, Patient Outcomes, Healthcare Analytics

    16 mins