- 16 videos
- 25,394 views
HaiHai Labs
Added Feb 9, 2023
One-shot Minecraft functions with OpenAI's o1-preview API
You can use OpenAI's new o1-preview via the API to craft Minecraft functions.
You can use these functions to build stuff.
So you can describe your structures in plain text and see them materialize in the game.
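Since the description above is the whole recipe, here is a minimal sketch of the idea in Python, assuming the current openai client and an OPENAI_API_KEY in the environment; the prompt wording, file name, and example structure are illustrative, not the video's actual code.

# Minimal sketch: ask o1-preview to turn a plain-text description into a Minecraft
# .mcfunction file. Prompt wording and file names are illustrative.
from openai import OpenAI

client = OpenAI()

description = "a 10x10 cobblestone tower with a torch on each corner"

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": "Write a Minecraft .mcfunction file (one command per line, no commentary) "
                   f"that builds: {description}",
    }],
)

# Drop the generated commands into a datapack's functions folder to run them in-game.
with open("build.mcfunction", "w") as f:
    f.write(response.choices[0].message.content)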
380 views
Videos
3 things to learn from Cursor's popularity.
95 views · 2 months ago
Three patterns you can learn from Cursor's popularity to help you think about which AI apps will win. 1. They brought the LLM closer to where the user already works and closer to their data (the codebase, which is used as context). 2. There's a validity check: does your code run (or do your tests pass)? If not, the stack trace goes back into the chat and the LLM fixes it. This combats hallucinations. 3. ...
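A minimal sketch of the validity-check loop in pattern 2, assuming the current openai Python client; the model name, prompts, and exec-based runner are illustrative, not Cursor's actual implementation.

# Sketch of the pattern: run generated code, and on failure feed the traceback back
# to the model so it can fix its own output. Not Cursor's code, just the idea.
import traceback
from openai import OpenAI

client = OpenAI()

def generate(prompt):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

code = generate("Write plain Python (no markdown fences) that prints the first 10 Fibonacci numbers.")

for attempt in range(3):
    try:
        exec(code)  # crude validity check: does the code run at all?
        break
    except Exception:
        trace = traceback.format_exc()
        # Put the stack trace back into the chat so the model can repair its own output.
        code = generate(
            f"This code failed:\n{code}\n\nTraceback:\n{trace}\n\nReturn corrected plain Python only."
        )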
Prompt Engineer Faster
737 views · 7 months ago
Learn how to experiment with your prompts faster and better Hey, I'm GreggyB from HaiHai Labs. Here's how you can iterate on your prompts faster and better, based on my experience building a bunch of apps at haihai.ai. We talk about: - Moving from Hello World to complex apps with OpenAI APIs - Techniques to extract your prompts from code - Building with the Assistants API - A CMS for Prompts - ...
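One of the techniques mentioned, extracting your prompts from code, might look roughly like the sketch below; the file layout and prompt template are assumptions, not the video's exact approach.

# Sketch: keep the prompt in its own file so you can iterate on it without touching code.
# File path and template text are illustrative.
from openai import OpenAI

client = OpenAI()

# prompts/summarize.txt might contain:
# "Summarize the following transcript in three bullet points:\n\n{transcript}"
with open("prompts/summarize.txt") as f:
    template = f.read()

def summarize(transcript):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": template.format(transcript=transcript)}],
    )
    return response.choices[0].message.content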
Prompt Injection and the Pen15 Problem
73 views · 1 year ago
Ben Stein, founder of QuitCarbon, joins us for Better with AI Ep 2 to talk about the joy of coding with LLMs, prompt injection, and productionizing your LLM-powered app.
A conf organizer writes content faster with AI.
221 views · 1 year ago
Kirk organizes the Business of Software conference and has a hard time finding time to ship content. We write some Python to turn transcripts into blog posts.
How to Fine-Tune GPT 3.5-Turbo
13K views · 1 year ago
Learn the four steps to fine-tune GPT 3.5 Turbo using Python and LangChain: 1. Prepare your data 2. Upload your data 3. Train your model 4. Use your model For copy/pastable code check out: haihai.ai/finetune
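The copy/pastable code lives at haihai.ai/finetune. As a rough sketch only, the four steps look something like the following with the current openai Python client (the video itself uses Python and LangChain, apparently on an older SDK version, per the comments further down); the file name and fine-tuned model id are placeholders.

# Rough sketch of the four steps with the current openai Python client.
# The video's own copy/pastable code is at haihai.ai/finetune.
from openai import OpenAI

client = OpenAI()

# 1. Prepare your data: a JSONL file where each line is one chat-format example, e.g.
#    {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}

# 2. Upload your data
training_file = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")

# 3. Train your model (poll the job or wait for the completion email to get the model name)
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(client.fine_tuning.jobs.retrieve(job.id).status)

# 4. Use your model (placeholder id; the real one comes from the finished job)
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0125:your-org::abc123",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
)
print(response.choices[0].message.content)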
Why Can't ChatGPT Play Hangman?
279 views · 1 year ago
ChatGPT is amazing, and yet sometimes it fails at the simplest of tasks. Understanding why will help you use ChatGPT for the tasks it excels at and avoid being led astray by it at other times.
How AI Changed Coding - and what that means for your job.
138 views · 1 year ago
AI's already had a profound impact on how developers write code, creating opportunity for developers who learn to harness these new tools, and peril for those who don't. What does this mean for your job?
Getting Started with the GPT-4 API and JavaScript
729 views · 1 year ago
GPT-4 launched this week. There's a waitlist for GPT-4 API access, but invites have already started going out. Once you get your access, it's dead simple to get started building - especially if you've already played with the ChatGPT APIs. Check out the full tutorial here for copy-and-pasteable code: www.haihai.ai/gpt-4-js
Mel, an 83 year old developer, sees ChatGPT API for the first time.
340 views · 1 year ago
Melvyn is an 83-year-old developer. He builds chatbots in Python. This was his reaction the first time he saw the ChatGPT API in action. Getting started with the ChatGPT API and JavaScript: www.haihai.ai/chatgpt-api-js/ Getting started with the ChatGPT API and Python: www.haihai.ai/chatgpt-api
The ChatGPT API is OP. Build a CLI Chatbot in 16 LOC with Python + ChatGPT API.
2.2K views · 1 year ago
The ChatGPT API is OP. Build a CLI Chatbot in 16 LOC with Python + ChatGPT API.
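As a rough sketch of what those roughly 16 lines might look like with the current openai Python client (the video likely uses the older openai.ChatCompletion interface, as the comments below suggest); the model choice and system prompt are illustrative.

# Sketch of a CLI chatbot in roughly 16 lines of Python.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("Bot:", reply)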
How complex can it get?
Ah thanks. This is an editor called Cursor and I'm writing Python here.
So what language is this and where can I get it? I am tinkering in Python and use ChatGPT occasionally to help. This looks worth a dabble. Now subbed.
I get the overall message, and it is great, but in reality, it's not every kid in America, and it's not only $20/month. You still need food, shelter, electricity, basic literacy, a computer, an internet connection, and THEN it's the upgraded plan. Total cost: $1,500 to $5,000.
Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?
which Python version do you use in this video?
Will I receive an email on the account that I use to access the OpenAI key?
Thanks for the great video. It will be very useful
nice clock brah 🕰
Your assertion, I believe, to the effect that we are not just using AI to push out high-volume, low-quality work, but rather high-quality work bespoke to our abilities - mind blowing. This is what I've been thinking and trying to do also. This conversation is highly enlightening and affirming for me, and I thank you for providing it. I look forward to more content like this. I also like your practical guidance in earlier videos. Kudos to you!
GPT-3.5 is flawed because it is censored, so you need to find out where it is flawed before you can ask your question, which makes it 3 to 8 times slower to get the right answer you wanted, if it complies with the rules GPT-3.5 goes by. Otherwise you're going to get an error message or something that evades the answer to save face.
What is the difference between custom GPTs and fine-tuned GPTs?
What if I cloned the voice of a dead person? 😂
How do you get an email suggesting that your file is ready?
The difference is there's no profit in replacing grandmasters. Writers, actors, workers, though? You don't need to pay an AI. That is the real problem.
You answered many of my questions in this video
Whoa, exactly what I am looking for. I hope you follow this use case all the way through, sharing how you build it, how it works in practice, how you add content later, and how you expose it to your friend for use.
I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7 and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same. Yes, it gave a different response compared to the base model, but still never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.
Man, I'm having trouble adding the training data and validation data for fine-tuning. It asks for prompt-completion format, but whenever I try to add it, it says it's in prompt-completion and needs to be in chat-completion, which ends up creating a loop of problems. Could you send me a training and validation template that you know works so I can test it?
@@luiseduardo2249 But did you get an error on the OpenAI API calls? What error exactly?
@@echofloripa It was actually a platform error; I resolved it. Would you know if there's some way to send an input to GPT (configured with fine-tuning) from Power Automate, for example, using the API? I only ever see people using a chatbot.
@@luiseduardo2249 It's been a while since I touched the OpenAI integration; I started a personal educational mobile app project in Flutter that left no time for anything else. I'm not familiar with Power Automate, but I imagine you could program an integration.
Love it. Thanks for this video!
Hi Greg, I would like to discuss a collaboration with your channel. It would be great if we can discuss in detail.
thank you for sharing your knowledge
Hey, I'm getting an error while running the code: it says Configuration is not a constructor. I'm not much into coding, can you please help me? Thanks.
Awesome! You feed it the book of Proverbs and it spits out Ecclesiastes.
Excellent video
Thank you! ☺️
Do I need a paid account for fine-tuning? Because I tried it out today and got an error that fine-tuning can't be created with an exploring account… 😢 (I still have 4 dollars on my exploring account)
Since I had already seen two similar videos, I fast-forwarded to the end just to see what you had trained it to do. I was like, hm, the meaning of life... I totally thought it was going to answer with 42. Apparently we have different holy books!
nice man! Good video.
I am creating an assistant for my WhatsApp. Do you think I can train a fine-tuned model with all the products I have in my database?
Hi, I have a quick question... I prepared my data but I'm getting an error when launching the file upload, saying that in line 1 of my data there is no dictionary, even though in line 1 I just have the system message text. Do you know how to fix that?
Do you need the step of generating a synthetic user question for each message, or can you leave the user strings blank, if, like in the present example you are training on a bunch of text and not an actual chat?
How do I actually do this
Hey Brother, really nice video! I was wondering if I could help you edit your videos and also make a highly engaging thumbnail, which will help your video reach a wider audience.
Good videos! The algorithm sent me, and I've subscribed to see where the channel goes. Best of luck :) I definitely understand the point of the video, buuut, I might have countered with a demonstration that ChatGPT can generate the code for a hangman game from even the most basic prompt! Or how to hide the secret word from the user in the system or other non-user facing message.
Liked and subscribed
Thanks! Is the number of epochs something you want to choose? If so, how many should I choose, and how do I come up with that number?
You just scratched a very deep problem, thank you! I asked ChatGPT to make up a random list of anything (step 1), randomly choose one item (step 2), and output only that one item. For the reason you presented, it didn't work as I had supposed. It can't make step 1 and, with it "in mind", make step 2. It doesn't "store" data. I wish you posted more vids, BTW.
Fun fact: you can fine-tune, but you can't get the model, so they keep you paying for the API. You're better off training locally on your own machine.
Good luck deploying something as powerful at scale at a cheaper price. If you have any luck let me know.
@@D3ADPIX Hugging Face will let you fine-tune for free on a small dataset, and for $20 a month you get a lot of resources. You should look into it.
This is awesome. Detailed but to the point. Thanks for sharing, looking forward to more AI videos
Sir, would you please share this fine-tuning dataset with me, as I'm a student and I don't have access to GPT-4? If you want to help, please reply.
thanks for sharing, great job
What I'm really trying to figure out is whether there's a way to successfully train it on newer documentation, say newer library docs like React, so that when I'm using it to help me code I get more up-to-date syntax.
Do you need distinct user questions for each line of the jsonl file? I am preparing my data and most of the lines in the file contain "can you provide more details on this?" for the user content. In addition, if I remove entries with duplicate user questions, the data shrinks significantly which I think might be an issue since we should have as many lines as possible in our dataset.
You’ll get better results with distinct questions
Great work!
Thank you!
Thank you. Good to see you are using Proverbs. Very good Book of the Bible.
Amazing video mate, I've hit only one issue while doing this. While running FineTuningJob, the terminal returns this error: 'openai' has no attribute 'FineTuningJob'. Which version of openai are you currently using?
Make sure you run pip install --upgrade openai to get the latest version.
Hi. Very informative video. I had a question. You used LangChain here, but when we don't use LangChain and call OpenAI directly, what should the system prompt be when using the fine-tuned model?
System message is optional.
I have an OpenAI key, but I'm not able to follow your video. Can you make a detailed blog post and share snippets? I would like to learn and create something similar with my culture's sacred book. I'm a coding beginner. I follow Udemy videos, etc. However, I am very new to AI stuff. It would be very helpful if you did that. Thanks.
Sure thing! haihai.ai/finetune
nice to have this feature
Agreed!
bro thank you so much for this video
My pleasure
legend
reply = response["choices"][0]["message"]["content"]
TypeError: 'NoneType' object is not subscriptable