The AI Safety Crisis No One’s Talking About
Chan Tzu Kit, an AI Risks and Safety Advisor
04-Apr-25 12:00

As AI advances at breakneck speed, the spotlight is often on innovation, automation, and productivity. But behind the buzz lies a growing risk, one that could have existential consequences.
In this episode of Enterprise Explores, we speak with Chan Tzu Kit, an AI Risks and Safety Advisor who has worked with leading university groups at Stanford, Yale, and NTU, as well as Malaysia’s own National AI Office. Together, we unpack the less-discussed but critical domain of AI safety, frontier models, and AGI (Artificial General Intelligence).
Tzu Kit explains why he believes we are building the world's most powerful jet engine without seatbelts. From deepfake scams to the potential loss of control over superintelligent systems, we discuss the real-world risks, global power dynamics, and Malaysia's urgent need to take AI safety seriously.
We also explore the lopsided ratio of capabilities researchers to safety researchers, the gaps in global governance, and why countries like Malaysia must prepare now or risk being left behind in the AI arms race.
Produced by: Roshan Kanesan
Presented by: Roshan Kanesan
Categories: technology, managing
Tags: AI safety, artificial general intelligence, ai, governance, safety standards