Building Reliable LLM Apps with OpenAI
- Published: Jun 10, 2024
- A lot of data professionals want to explore freelancing, but lack the systems, tools, and guidance on how to get started. If you're curious about how we help data analysts, engineers, and scientists beyond these videos, then check out: www.datalumina.com/data-freel...
🔗 GitHub Repository
github.com/daveebbelaar/opena...
🛠️ My Development Workflow
• My Development Workflo...
⏱️ Timestamps
00:00 Introduction
01:29 OpenAI Default Response
09:41 OpenAI JSON Mode
14:56 OpenAI Function Calling
23:53 Pydantic + Instructor
35:25 Output Validation
41:04 Content Filtering
45:17 Use Case Example
47:38 What is Data Freelancer?
👋🏻 About Me
Hi there! I'm Dave, an AI Engineer and the founder of Datalumina. On this channel, I share practical coding tutorials to help you become better at building intelligent systems. If you're interested in that, consider subscribing!
#openai #pydantic #instructor #json - Science
This is gold. Thanks for sharing Dave!
Glad you enjoyed it!
Please keep on making unique content like this that solves pains of gen AI developers for which solutions aren't that straightforward.
The exact video i needed with Pydantic and Instructor - Thank you Dave!
Wow! Knowledge bomb.
Please make more videos like this.
More to come!
Thanks Dave, love your content and channel
Insane content. Thank you.
man, you're a really good teacher!
I appreciate that!
Dave, your content is so specific for us GenAI devs. I LOVE it. Please keep it up!
More to come!
@@daveebbelaar I have a follow-up question. If you want to "prompt" the LLM to output AI-generated emails in a specific format (e.g. an intro paragraph/hook of 30 words max, a main body of 50 words max, and a CTA of 15 words max), what would be your suggested approach? The traditional way of just giving an example in the prompt is very unreliable in this regard, so I'm wondering which of the approaches you discussed would be best.
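One way to attack the word-limit question above with the video's validation approach is to encode each limit as a Pydantic field validator, so a too-long section fails validation (and, when combined with Instructor's `max_retries`, the error message is fed back to the model on retry). The `Email` model, field names, and limits below are illustrative assumptions; it assumes Pydantic v2.

```python
# Sketch: per-section word limits enforced by Pydantic validators
# (hypothetical Email model; pair with Instructor's `response_model`
# so failed validations trigger an automatic re-prompt).
from pydantic import BaseModel, field_validator

def _max_words(text: str, limit: int, name: str) -> str:
    if len(text.split()) > limit:
        # This message is what the LLM sees on a retry.
        raise ValueError(f"{name} must be at most {limit} words")
    return text

class Email(BaseModel):
    hook: str  # intro paragraph, 30 words max
    body: str  # main body, 50 words max
    cta: str   # call to action, 15 words max

    @field_validator("hook")
    @classmethod
    def check_hook(cls, v: str) -> str:
        return _max_words(v, 30, "hook")

    @field_validator("body")
    @classmethod
    def check_body(cls, v: str) -> str:
        return _max_words(v, 50, "body")

    @field_validator("cta")
    @classmethod
    def check_cta(cls, v: str) -> str:
        return _max_words(v, 15, "cta")
```

Word counts are a hard constraint prompting alone cannot guarantee, so moving them into validation turns "please keep it short" into a check the pipeline actually enforces.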
This content is awesome !!!!!
THANK YOU!
Very helpful!
Great content - thank you for sharing:)
This was great, thanks. I've had questions about this previously
Thanks! The different methods can definitely be confusing at first.
@@daveebbelaar They certainly can!
I was wondering, do you know of a way to make a RAG setup in something like Flowise AI work with tools? E.g., a RAG chatbot that is able to call functions (POST to a webhook) when it sees fit? I have attempted to configure this in Flowise, but always get stuck at merging the RAG and the tool together...
I suspect something like the solutions you cover in this video could work for that sort of requirement... 🙏
You are great Dave, helping us a lot. Thank you for your effort here.
Does the Instructor library also work with OpenAI's Assistants API instead of the Chat Completions API? I mean, instead of client.chat.completions.create, using the client.beta.threads.runs.create format. Does that work with Instructor as well? Another question: are you really using the Chat Completions API for the real-world client project you mention in the video? If so, why don't you use the Assistants API? Isn't that easier? Are there any drawbacks of the Assistants API compared to the Chat Completions API?
Yeah, I would like to know as well. Since we're using threads and runs, this solution doesn't work unless you build around chat completions.
The problem is, OpenAI doesn't always strictly reply in JSON, even when you tell it to do so.
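Because of exactly this, a common stdlib-only defense is to strip markdown fences and fall back to extracting the outermost `{...}` block before giving up. The helper below is an editorial sketch of that pattern, not code from the video's repo:

```python
# Defensive parsing for "mostly JSON" LLM replies: strip ``` fences,
# then fall back to the first {...} span inside surrounding prose.
import json
import re

def parse_llm_json(text: str) -> dict:
    text = text.strip()
    # Unwrap ```json ... ``` fences if the model added them.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # The model may have prefixed prose like "Sure! Here it is: ...".
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise
```

This is a band-aid, not a guarantee; schema validation on top (as in the video's Pydantic section) is what makes the result trustworthy.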
Hey there, I'm new to the channel and pretty new to AI, still in the learning process :) TBH I think this video is too advanced for me to fully grasp :) but I have some insight on it; can you correct me if I'm wrong? :)
My insight: "You are building software whose responses depend on pretrained LLM models"?
Won't it be the same if I simply pass the schema inside the system message rather than using the Instructor/function-calling approach?
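The practical difference is that a schema pasted into the system message is just more prose the model can ignore, while function calling passes a JSON Schema through the API's `tools` parameter, which the model is specifically trained to fill in. A minimal sketch of the `tools` format follows; the tool name, fields, and enum values are illustrative assumptions:

```python
# Sketch: declaring a schema via the Chat Completions `tools` parameter
# (function calling) instead of pasting it into the system message.
classify_tool = {
    "type": "function",
    "function": {
        "name": "classify_message",
        "description": "Classify a customer message",
        "parameters": {
            "type": "object",
            "properties": {
                "category": {
                    "type": "string",
                    "enum": ["billing", "technical", "general"],
                },
                "confidence": {"type": "number"},
            },
            "required": ["category", "confidence"],
        },
    },
}

# Usage (requires the `openai` package and an API key):
# client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "My invoice is wrong"}],
#     tools=[classify_tool],
#     tool_choice={"type": "function",
#                  "function": {"name": "classify_message"}},
# )
```

Even then, the returned arguments should still be validated, which is why the video layers Pydantic on top.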
Can we use it with runs as well?
Is there an example of the content filtering for JavaScript? I can see instructor has a JavaScript version but can’t see any information or examples on content filtering. Would appreciate any help!
@daveebbelaar, if I'm not mistaken, I think "max_retries=1" means retries are allowed once. If you don't want to allow any retries, it needs to be "max_retries=0", correct?
Hmm, while that would make sense, I am not sure. I tried many examples with max_retries=1, and they all failed. I can't see anything in the docs about this. It would require further testing and looking at the API calls.
Hi, thanks for this tutorial, but the Git repo is not available; it shows a 404 error. Thanks
Ah, it was still set to private. It's fixed now - thanks!
@@daveebbelaar Yeah, it's working now. Thanks 👍
Great stuff! I really like the use case: message classification is not new, but here you show how to do it with an LLM instead of a local ML model, and how to do it reliably!