Agents Tools & Function Calling with Amazon Bedrock (How-to)
- Published: 23 Apr 2024
- Agents for Amazon Bedrock 👉 docs.aws.amazon.com/bedrock/l...
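To ground the docs link above, here is a minimal, hedged sketch of calling an Agent for Amazon Bedrock from Python with the `bedrock-agent-runtime` API. The agent and alias IDs are placeholders, not values from the video; the reply comes back as an event stream of `chunk` events that must be joined into text.

```python
# Sketch (hypothetical IDs): send one user turn to a Bedrock agent and
# collect the streamed reply into a single string.

def ask_agent(client, agent_id, alias_id, session_id, text):
    """Invoke a Bedrock agent and return the full text of its answer."""
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,   # reuse the same id to keep conversation state
        inputText=text,
    )
    # The answer arrives as an event stream; 'chunk' events carry the bytes.
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

if __name__ == "__main__":
    import boto3  # imported here so the sketch reads without AWS installed
    runtime = boto3.client("bedrock-agent-runtime")
    print(ask_agent(runtime, "AGENT_ID", "ALIAS_ID", "session-1",
                    "What is the status of my order?"))
```

Keeping `sessionId` constant across calls is what lets the agent remember earlier turns in the same conversation.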
Resources:
🌐 Learn more: aws.amazon.com/bedrock/
Follow AWS Developers!
🐦 Twitter: / awsdevelopers
💼 LinkedIn: / aws-developers
👾 Twitch: / aws
📺 Instagram: awsdevelope...
#generativeai #amazonbedrock #codingtutorial
I'm a manager for the team responsible for all of my company's GenAI features. I'll definitely be asking everyone on my team to watch this video and any related ones. Looking forward to seeing more vids like this one.
Mike speaks very fast, but he uses very simple English, so any non-native speaker can easily understand him. Thanks Mike!
Literally in love with your videos Mike! I always learn something new, and in an easy to digest way.
That's fantastic to hear! 😀 ☁️ 🙌
Excellent plain-English explanations, and a real live demonstration instead of a pre-recorded, edited-together video. Excellent!
We're glad you like it! 😀
Great job. Very easy to follow!
Thank you! 😊 🤝 ☁️
I still have some questions about the bigger picture of the Bedrock architecture. I understand agents and their use cases, but I thought the idea when building an app was to "front-end" the agents with a broader-context FM that would be the actual chatbot interface. In other words, I'm working on an application that will have a number of specialized agents (maybe 6 or so) that I thought would be invoked by the chat-interface FM on an as-needed basis. Also, can agents interact with each other in the background? If I can't front-end the agents, I would need a chat interface for every agent I build, which I very much doubt is how the architecture is designed. Do you have something that shows a complete end-to-end application encompassing all the components?
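The "front-end" pattern this comment asks about can be sketched as a single chat entry point that routes each user turn to one of several specialized Bedrock agents. Everything below is an assumption for illustration: the agent names, IDs, and the naive keyword router (a production app might instead ask an FM to classify the turn).

```python
# Hypothetical routing table: intent keyword -> (agentId, agentAliasId).
ROUTES = {
    "billing":  ("BILLING_AGENT_ID",  "ALIAS_BILLING"),
    "shipping": ("SHIPPING_AGENT_ID", "ALIAS_SHIPPING"),
}
DEFAULT_AGENT = ("GENERAL_AGENT_ID", "ALIAS_GENERAL")

def pick_agent(text, routes=ROUTES, default=DEFAULT_AGENT):
    """Naive keyword router; a real app might use an FM call to classify."""
    lowered = text.lower()
    for keyword, target in routes.items():
        if keyword in lowered:
            return target
    return default

def chat_turn(client, session_id, text):
    """One turn of the shared chat interface, dispatched to the chosen agent."""
    agent_id, alias_id = pick_agent(text)
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,
        inputText=text,
    )
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )
```

With this shape you only ever build one chat UI; the specialized agents sit behind the router, and agent-to-agent interaction would happen inside whatever orchestration layer calls them, not in the UI.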
What about a demo using multiple agents, multiple LLMs, LangChain, and LangSmith to do tracing?
There is no link to the GitHub repo in the description.
This is cool. Could this be modified to use Alexa, so that the input comes in through voice as slots?
That's conceivable; try experimenting with it. The Alexa platform already extracts intent from the input, so input arriving through Alexa becomes deterministic... but that sounds like fun to play around with.
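For anyone wanting to try the Alexa idea, here is a hedged sketch of a Lambda handler for a custom skill: it pulls the spoken value out of an Alexa intent slot and forwards it to a Bedrock agent. The slot name `query` and the agent IDs are hypothetical, and the handler takes the runtime client as a parameter purely to keep the sketch self-contained.

```python
# Sketch: bridge an Alexa custom-skill IntentRequest to a Bedrock agent.
# Slot name 'query' and the agent/alias IDs are placeholder assumptions.

def slot_value(alexa_event, slot_name="query"):
    """Extract a slot's spoken value from an Alexa IntentRequest payload."""
    slots = alexa_event["request"]["intent"]["slots"]
    return slots[slot_name]["value"]

def handler(event, context, client):
    text = slot_value(event)
    response = client.invoke_agent(
        agentId="AGENT_ID",
        agentAliasId="ALIAS_ID",
        sessionId=event["session"]["sessionId"],  # ties turns to one Alexa session
        inputText=text,
    )
    answer = "".join(
        e["chunk"]["bytes"].decode("utf-8")
        for e in response["completion"]
        if "chunk" in e
    )
    # Standard Alexa skill response envelope, spoken back to the user.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": answer},
            "shouldEndSession": True,
        },
    }
```

As the reply above notes, Alexa has already resolved the intent by this point, so the agent mostly receives a clean, deterministic query rather than free-form chat.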