Why Calling Your AI "Intelligent" Is a Leadership Mistake with Patrick Rooney, Founder of Leonis Strategy
Summary
In this episode, I interviewed Patrick Rooney, a cognitive science–trained AI practitioner and founder of Leonis Strategy, about how founders mischaracterize AI by collapsing “scripted autonomy” (agents completing tasks while you step away) into personhood autonomy (will, rights, interiority). Patrick argues this isn’t just sloppy language but a leadership issue that shapes how teams relate to technology. We discussed why LLMs are plausibility engines rather than truth-seekers, how humans can pursue truth, beauty, and goodness for their own sake, and why leaders must own inputs, outputs, and responsibility instead of outsourcing judgment. We also explored why LLM training is text-bound and disconnected from lived experience, the appearance-versus-reality problem behind Turing-test thinking, practical cautions around anthropomorphizing AI, and why doubling down on in-person human connection is a strategic response to AI at scale.
01:53 LLMs Are Plausibility Engines
05:10 Leadership And Culture Values
07:34 Why LLMs Aren't Intelligent
08:54 Turing Test And Training Limits
12:42 Language Detached From Reality
14:48 Personhood Rights And Ethics
19:01 Anthropomorphism Risks
19:34 Human Ownership Mindset
20:25 Outsourcing Your Thinking
22:24 IP Training Fears
24:34 Responsibility Still Human
28:41 Leading In AGI Hype
29:38 Grounding In Real Life
Connect with Patrick:
• https://leonisstrategy.com/
• https://www.linkedin.com/in/prooney1/
Connect with Raul:
• Work with Raul: https://dogoodwork.io/apply
• Free Growth Resources: https://dogoodwork.io/free-growth-resources