Armchair Architects: The Danger Zone (Part 2)
- Published: 28 Apr 2024
- In this second part of a two-part episode on the dangers of AI and how to deal with them in the context of large language models (LLMs), David is joined by our #ArmchairArchitects, Uli and Eric (@mougue), for a conversation that covers how LLMs differ from traditional models, the challenge of trusting the model architecture, ethical considerations including bias, privacy, governance, and accountability, plus a look at practical steps such as reporting, analytics, and data visualization.
Be sure to watch Armchair Architects: The Danger Zone (Part 1) aka.ms/azenable/146 before watching this episode of the #AzureEnablementShow.
Resources
• Microsoft Azure AI Fundamentals: Generative AI learn.microsoft.com/training/...
• Responsible and trusted AI learn.microsoft.com/azure/clo...
• Architectural approaches for AI and ML in multitenant solutions learn.microsoft.com/azure/arc...
• Training: AI engineer learn.microsoft.com/credentia...
• Responsible use of AI with Azure AI services learn.microsoft.com/azure/ai-...
• Blog: Considerations for Ethical and Responsible Use of AI in Applications techcommunity.microsoft.com/t...
Related Episodes
• Armchair Architects: The Danger Zone (Part 1) aka.ms/azenable/146
• Watch more episodes in the Armchair Architects Series aka.ms/azenable/ArmchairArchi...
• Watch more episodes in the Well-Architected Series aka.ms/azenable/yt/wa-playlist
• Check out the Azure Enablement Show aka.ms/AzureEnablementShow
Chapters
0:00 Introduction
0:18 Open questions from Part 1
2:15 Practical tips to get started
3:14 LLMs are not traditional models
3:33 Financial services approach
5:06 Commoditization of Generative AI
6:45 Feedback for Prompt tuning
9:10 Content safety
12:10 Teaser for next episodes