#21 - Cassie Kozyrkov, CEO at Kozyr
Join us on this week's episode of the Slice of Technology AI podcast, hosted by Jared S. Taylor! Our Guest: Cassie Kozyrkov, CEO at Kozyr.
What you’ll get out of this episode:
Defining Decision Intelligence: Cassie describes decision intelligence as the art of turning information into actionable insights, prioritizing purposeful use over arbitrary AI application.
Lessons from Google: She reflects on her experience guiding AI leadership at Google, highlighting the importance of starting with clear goals and collaborative approaches.
Ethical AI Essentials: Cassie discusses the growing need for ethical standards in generative AI, emphasizing privacy, data accuracy, and thoughtful testing.
Accountability in AI: Advocating for transparency, Cassie outlines the "Kozyr Criteria" for accountability: scoring, data sourcing, and testing standards.
Practical Leadership Advice: Cassie cautions leaders against over-relying on AI for solutions, urging a creative approach to AI's role in addressing complex challenges.
Introduction to Decision Intelligence: A New Era in Data Utilization
In this illuminating interview, Cassie Kozyrkov, a pioneer in decision intelligence, explains how this innovative discipline transcends traditional data science and artificial intelligence (AI). Rather than treating AI as an autonomous tool, decision intelligence views it as part of a broader strategy for turning data into meaningful actions at any scale. With her extensive background in guiding teams at Google and now leading her own company, Kozyr, Cassie emphasizes that decision intelligence combines data analysis, AI, and business sense to steer leaders towards intentional, impactful decision-making.
Guiding Lessons from Google: Starting with Purpose, Ending with Impact
During her time at Google, Cassie observed how projects too often launched without clear objectives or accountability frameworks, leading to what she calls a "graveyard of potentially good ideas." She explains that true impact comes when projects are built around a clear vision and involve diverse teams from the start. A common misstep is letting a specific discipline dominate the approach; for instance, an AI researcher may focus only on what’s technologically possible rather than what’s useful, while a data scientist might get lost in data details. For Cassie, impactful projects require a decision-maker who can keep both the big picture and each discipline’s strengths in mind.
Ethical AI Essentials: Privacy, Data Accuracy, and the Right to Be Forgotten
Cassie highlights several ethical considerations essential to responsible AI deployment, especially in generative AI applications. As privacy laws evolve, a critical but overlooked challenge is the inability of AI systems to "forget" user data once it’s baked into the model. This conflict, she notes, puts many generative AI models at odds with legal requirements like the GDPR's "right to be forgotten." Cassie warns that without better mechanisms for data deletion, companies face a costly decision if privacy breaches arise, as they may have to discard entire models to stay compliant.
Further, Cassie observes that datasets, like educational textbooks, are authored with biases, both in what they include and what they omit. She encourages companies to critically examine the origin and intention behind the data they use, viewing datasets not as objective truths but as "curricula" for their models, shaping what these systems "learn" to produce.
Cassie’s Accountability Framework: The Kozyr Criteria
Cassie presents her three accountability pillars, known as the "Kozyr Criteria," to promote ethical and practical AI. First, how a model scores success and failure must be defined to clarify which outcomes the AI will prioritize. For example, when evaluating a medical diagnostic tool, decision-makers must weigh the consequences of various errors, such as a benign tumor misclassified as malignant or vice versa, balancing harm against benefit for real-world impacts.
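The scoring pillar can be made concrete with a short sketch. The cost values and label names below are invented for illustration, not taken from the episode; the point is only that the asymmetry between error types is written down explicitly before the model is judged.

```python
# Hypothetical illustration of the first Kozyr Criterion: define how the
# model scores success and failure. The costs here are made up for the
# sketch; a real medical team would set them deliberately.
COSTS = {
    ("malignant", "benign"): 100.0,  # false negative: missed malignancy
    ("benign", "malignant"): 5.0,    # false positive: unnecessary follow-up
    ("benign", "benign"): 0.0,       # correct: no tumor, none flagged
    ("malignant", "malignant"): 0.0, # correct: malignancy caught
}

def total_cost(truths, predictions):
    """Score a model by summing the cost of each (truth, prediction) pair."""
    return sum(COSTS[(t, p)] for t, p in zip(truths, predictions))

truths      = ["malignant", "benign", "benign", "malignant"]
predictions = ["malignant", "malignant", "benign", "benign"]
print(total_cost(truths, predictions))  # 105.0: one FP (5) + one FN (100)
```

Because the costs are explicit, two stakeholders disagreeing about a model's readiness are forced to argue about the numbers in the table, not about vague impressions of accuracy.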
Second, she emphasizes the importance of understanding the data sources, stating that data "authorship" affects model reliability. By recognizing the bias inherent in any dataset, organizations can make more informed choices about their training material, aiming for fairer and more accurate models.
Finally, Cassie stresses the need for clear benchmarks to judge if a model is "good enough." Predefined metrics help prevent rushed deployments that can compromise user safety and trust. Generative AI systems, for instance, present the complex task of evaluating answers with multiple "right" responses, making objective grading a challenge.
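The "good enough" benchmark idea can be sketched as a simple deployment gate. The metric names and thresholds below are hypothetical; the principle from the interview is only that they are fixed in advance, before anyone is tempted to rush a launch.

```python
# A minimal sketch of the third Kozyr Criterion: agree on benchmarks
# before deployment. Metric names and floors are hypothetical examples.
REQUIREMENTS = {"accuracy": 0.95, "recall_malignant": 0.99}

def ready_to_ship(measured: dict) -> bool:
    """Deploy only if every predefined benchmark is met or exceeded."""
    return all(measured.get(metric, 0.0) >= floor
               for metric, floor in REQUIREMENTS.items())

print(ready_to_ship({"accuracy": 0.97, "recall_malignant": 0.995}))  # True
print(ready_to_ship({"accuracy": 0.97, "recall_malignant": 0.90}))   # False
```

For generative systems, where many answers can be "right," the gate is harder to define, but the discipline is the same: the evaluation rubric exists before the release date does.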
Cassie’s Practical AI Leadership Advice
Cassie reminds leaders that AI should be seen as a last resort for automation. Traditional methods provide explicit control over tasks, avoiding the unpredictability AI can sometimes introduce. “Only when the problem is too complex for traditional methods should AI be considered,” she advises, encouraging leaders to revisit “impossible” projects in their archives to see if AI might now make these achievable.
Looking Ahead: Cassie Kozyrkov at HumanX Conference
Cassie looks forward to the upcoming HumanX conference, which brings together thought leaders in AI and ethics. With a speaker lineup she describes as "full of interesting ideas," she’s excited about learning from others who are equally passionate about responsible AI.
In her work and speaking engagements, Cassie champions a thoughtful, accountable, and human-first approach to decision intelligence. As AI continues to reshape industries, her insights provide a roadmap for ethical and effective AI leadership.