AI Safety Panel Discussion

Practical Insights from Legal, Technical, and Ethical Experts on AI Risk and Responsibility
As large language models (LLMs) become more powerful and widely deployed, the stakes of responsible AI development continue to rise. Model builders face growing pressure not only to prevent harm but also to navigate the complex social, legal, and operational dynamics that shape how AI is used in practice.
This panel brings together leading voices from the legal, technical, and research communities to explore what it means to build AI systems that are not just powerful, but safe by design. Drawing on firsthand experience in law, product, and deployment, the panelists tackle real-world questions about AI safety, risk management, and regulatory frameworks.
You'll learn about:
- The operational challenges of shipping safe, reliable AI models in a fast-moving market
- How human psychology impacts AI development and risk management
- Pressing issues in AI governance, including evolving legislative frameworks and the legality of using web-scraped data
Watch the full discussion for a cross-disciplinary look at how safety is being redefined in the AI era.