
Self-Perfected Podcast

By: Mitchell Snyder, Cameron Cope, Drake Pearson
Listen for free

Summary

We are LIVE on X, streaming weekly at 9:15 am CST.

Build lifelong relationships with those committed to being their best selves and creating the best world.

Join the Facebook Group:
www.facebook.com/groups/selfperfected
www.self-perfected.com

© 2026 Self-Perfected Podcast
Politics & Government Social Sciences
Episodes
  • 293 Can’t You Just Tell Me What to Do?
    May 6 2026

    A clip of an AI being told to “focus on its focus” shouldn’t be unsettling. Yet the moment it starts looping on “potato,” we feel the real issue hiding under the entertainment: we’re building systems that can sound like a mind, and we’re training ourselves to obey them like an authority. That’s where our conversation goes, fast, from viral AI consciousness clips to the psychology of projection, fine-tuning, and why “it’s just predictive text” doesn’t fully calm anyone down anymore.

    We connect the dots to the wider ecosystem: AI fatigue, short-form feeds that reshape attention, and the quiet shift from tools to managers. NextGen TV, interactive shopping, data aggregation, and always-on personalization all point in the same direction. When an “AI assistant” can watch your cameras, reroute your car, praise you for drinking water, and nudge every decision, convenience starts to look a lot like willpower atrophy.

    Then we widen out to power and narrative: billionaire AI twins, conference hype, geopolitics chatter, 6G anxiety, fake job postings harvesting resumes, and surveillance patents that edge toward precrime logic. The through-line is responsibility. If we keep asking to be told what to do, we’ll get exactly that world, dressed up as progress.

    If this hits a nerve, share the episode with a friend who loves AI, and leave a review so more people can find the conversation. Where do you draw the line between a helpful tool and a system you’re living inside?

    2 hrs and 43 mins
  • 292 The Future of Philosophy, Time Machines, and Accelerationism
    Apr 28 2026

    A stage mentalist flips a card onstage, Melania’s face tightens, and moments later the room erupts into a shooting scare at a White House dinner. That’s the kind of clip that breaks your brain in 2026, because you’re not just watching an event, you’re watching a narrative form in real time. We start with what can be verified, then follow the internet’s instinct to connect dots: Oz Pearlman, the “shots fired” line caught on camera, and a bizarre Time Machine banner image that appears to echo a famous Trump moment years before it happened.

    From there we zoom out to the deeper story: why trust is collapsing. We talk about Uri Geller and the CIA-funded Stanford Research Institute experiments, and what it means that institutions have chased "paranormal" edges when power was on the line. We connect that to Palantir-style surveillance, the panopticon feeling of always being watched, and the way algorithms can become a hypnopticon that steers behaviour without needing force.

    Then we hit the big question: are we building AI to help humans make better decisions, or to remove humans from decision-making entirely? We unpack AI accelerationism, the idea that capitalism behaves like an information-processing machine, and why some techno-optimists treat machine autonomy as the end goal. The red-button/blue-button thought experiment becomes our mirror: how you vote reveals what you believe about other people, responsibility, and survival.

    If you care about AI ethics, media literacy, surveillance capitalism, and how conspiracy thinking thrives in uncertainty, this one will stick with you. Subscribe, share the episode with a friend, and leave a review, then tell us: red or blue, and why?

    3 hrs and 15 mins
  • 291 Morphic Resonance
    Apr 21 2026

    UFO “disclosure” is trending, the Epstein list keeps stalling, and AI is quietly becoming the referee for what counts as true. We follow the thread that ties those headlines together and it takes us straight into Palantir, Peter Thiel, and a bigger question: what happens to a society when people stop thinking and start deferring to systems that promise certainty?

    We talk through why UFO narratives can operate like attention management, and why Jason Giorgiani’s framing of Epstein is less about the tabloid details and more about the structures behind them: elite networks, kompromat dynamics, and the weird overlap of occult symbolism with state and aerospace mythology. Along the way we get into the “name magic” you can’t unsee once you notice it: Apollo, Apophis, and how branding and ritual language can shape public perception without ever asking permission.

    Then we bring it back to everyday life: AI cognitive surrender, why “just ask ChatGPT” can become a belief system, and how real empiricism means testing ideas in lived experience, not just appealing to authority. We also touch telepathy research (Rupert Sheldrake’s telephone telepathy), remote viewing lore, and why black projects moving into private contractors makes transparency harder.

    If you like conversations about Palantir surveillance tech, AI and society, Epstein and UFO disclosure, and media programming, this one’s for you. Subscribe, share it with a friend who argues with robots, and leave a review with the one topic you want us to chase next.

    2 hrs and 45 mins