HaiHai Labs
  • Videos: 16
  • Views: 25,394
One-shot Minecraft functions with OpenAI's o1-preview API
You can use OpenAI's new o1-preview model via the API to craft Minecraft functions.
You can then run these functions to build things in-game.
So you can describe your structures in plain text and watch them materialize in the game.
Views: 380
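A minimal sketch of the idea, assuming the openai Python package (v1+) and an OPENAI_API_KEY in the environment; the prompt wording, the example description, and the output filename are illustrative, not the code from the video:

# Ask o1-preview to emit a Minecraft .mcfunction from a plain-text description.
from openai import OpenAI

client = OpenAI()

description = "a 10x10 cobblestone platform with a glass dome on top"  # hypothetical example

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": (
            "Write a Minecraft .mcfunction that builds the following structure "
            f"near the player: {description}. Respond with only the commands, one per line."
        ),
    }],
)

# Drop the generated commands into a datapack as a function file.
with open("build_structure.mcfunction", "w") as f:
    f.write(response.choices[0].message.content)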

Videos

3 things to learn from Cursor's popularity.
95 views • 2 months ago
Three patterns you can learn from Cursor's popularity to help you think about which AI apps will win. 1. They brought the LLM closer to where the user already works and closer to their data (the codebase, which is used as context). 2. There's a validity check: does your code run (or do the tests pass)? If not, add the stack trace back into the chat and the LLM fixes it. This combats hallucinations. 3....
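Pattern 2 lends itself to a small sketch (not Cursor's actual implementation; the model name and prompts are assumptions): run the generated code, and if it raises, feed the stack trace back into the conversation so the LLM can repair its own output.

import traceback
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a Python function fizzbuzz(n) that returns a list. Reply with code only, no markdown."}]

for attempt in range(3):
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    code = reply.choices[0].message.content
    try:
        exec(code, {})  # crude validity check: does the code even run?
        break           # it ran, so stop iterating (a real tool would run tests instead)
    except Exception:
        # Put the stack trace back into the chat so the model can fix it.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": "That raised an error:\n" + traceback.format_exc() + "\nPlease fix it and reply with code only."})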
Prompt Engineer Faster
737 views • 7 months ago
Learn how to experiment with your prompts faster and better. Hey, I'm GreggyB from HaiHai Labs. Here's how you can iterate on your prompts faster and better, based on my experience building a bunch of apps at haihai.ai. We talk about: - Moving from Hello World to complex apps with OpenAI APIs - Techniques to extract your prompts from code - Building with the Assistants API - A CMS for Prompts - ...
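One hedged sketch of the "extract your prompts from code" idea (the file layout, helper, and model are assumptions, not the exact technique from the video): keep each prompt in its own template file so you can iterate on the wording without touching application code.

from pathlib import Path
from openai import OpenAI

client = OpenAI()

def load_prompt(name: str, **kwargs) -> str:
    # e.g. prompts/summarize.txt contains: "Summarize this transcript in three bullets:\n{transcript}"
    return (Path("prompts") / f"{name}.txt").read_text().format(**kwargs)

prompt = load_prompt("summarize", transcript=open("transcript.txt").read())
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)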
Prompt Injection and the Pen15 Problem
73 views • a year ago
Ben Stein, founder of QuitCarbon, joins us for Better with AI Ep 2 to talk about the joy of coding with LLMs, prompt injection, and productionizing your LLM powered app.
A conf organizer writes content faster with AI.
221 views • a year ago
Kirk organizes the Business of Software conference and has a hard time finding time to ship content. We write some Python to turn transcripts into blog posts.
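A minimal sketch of the transcript-to-blog-post idea; the file names, model, and prompt wording are assumptions, not the script written in the episode.

from openai import OpenAI

client = OpenAI()

with open("talk_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You turn conference talk transcripts into clear, structured blog posts."},
        {"role": "user", "content": "Write a blog post based on this transcript:\n\n" + transcript},
    ],
)

with open("blog_post.md", "w") as f:
    f.write(response.choices[0].message.content)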
How to Fine-Tune GPT 3.5-Turbo
13K views • a year ago
Learn the four steps to fine-tune GPT 3.5 Turbo using Python and LangChain: 1. Prepare your data 2. Upload your data 3. Train your model 4. Use your model For copy/pastable code check out: haihai.ai/finetune
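For orientation, here is a sketch of those four steps using the current openai Python client (v1+); the video and haihai.ai/finetune use Python and LangChain, so the exact calls there differ, and the fine-tuned model name below is a placeholder.

from openai import OpenAI

client = OpenAI()

# 1. Prepare your data: training.jsonl holds one chat-format example per line, e.g.
#    {"messages": [{"role": "system", "content": "..."},
#                  {"role": "user", "content": "..."},
#                  {"role": "assistant", "content": "..."}]}

# 2. Upload your data
upload = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")

# 3. Train your model (poll the job until it reports a fine_tuned_model name)
job = client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
print(job.id)

# 4. Use your model, once training completes
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0125:your-org::abc123",  # placeholder; use the name the job returns
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
)
print(response.choices[0].message.content)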
Why Can't ChatGPT Play Hangman?
279 views • a year ago
ChatGPT is amazing, and sometimes it fails at the simplest of tasks. Understanding why will help you use ChatGPT for the tasks it excels at and avoid being led astray by it at other times.
How AI Changed Coding - and what that means for your job.
138 views • a year ago
AI's already had a profound impact on how developers write code, creating opportunity for developers who learn to harness these new tools, and peril for those who don't. What does this mean for your job?
Asking GPT-4 to create the Game of Life
78 views • a year ago
full story at: haihai.ai/gol
Getting Started with the GPT-4 API and JavaScript
729 views • a year ago
GPT-4 launched this week. There's a waitlist for GPT-4 API access, but invites have already started going out. Once you get your access, it's dead simple to get started building - especially if you've already played with the ChatGPT APIs. Check out the full tutorial here for copy-and-pasteable code: www.haihai.ai/gpt-4-js
Mel, an 83-year-old developer, sees the ChatGPT API for the first time.
340 views • a year ago
Melvyn is an 83-year-old developer. He builds chatbots in Python. This was his reaction the first time he saw the ChatGPT API in action. Getting started with the ChatGPT API and JavaScript: www.haihai.ai/chatgpt-api-js/ Getting started with the ChatGPT API and Python: www.haihai.ai/chatgpt-api
The ChatGPT API is OP. Build a CLI Chatbot in 16 LOC with Python + ChatGPT API.
2.2K views • a year ago
The ChatGPT API is OP. Build a CLI chatbot in 16 lines of code with Python and the ChatGPT API.
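In the same spirit, a minimal CLI chatbot sketch with the current openai Python client (v1+); it won't match the original 16 lines exactly, and the system prompt is just an example.

from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Bot:", answer)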

Comments

  • @obscure045
    @obscure045 a month ago

    How complex can it get?

  • @haihailabs
    @haihailabs 2 months ago

    Ah, thanks. This is an editor called Cursor, and I'm writing Python here.

  • @hengis73
    @hengis73 2 months ago

    So what language is this and where can I get it? I am tinkering in Python and use ChatGPT occasionally to help. This looks worth a dabble. Now subbed.

  • @jfbarras
    @jfbarras 5 months ago

    I get the overall message, and it is great, but in reality it's not every kid in America, and it's not only $20/month. You still need food, shelter, electricity, basic literacy, a computer, an internet connection, and THEN the upgraded plan. Total cost: $1,500 to $5,000.

  • @kingturtle6742
    @kingturtle6742 8 months ago

    Can the content for training be collected from ChatGPT-4? For example, after chatting with ChatGPT-4, can the desired content be filtered and integrated into ChatGPT-3.5 for fine-tuning? Is this approach feasible and effective? Are there any considerations to keep in mind?

  • @Antonego64
    @Antonego64 8 months ago

    which Python version do you use in this video?

  • @v3teff
    @v3teff 9 months ago

    Will I receive an email regarding the account that I use to access the OpenAI key?

  • @joao.morossini
    @joao.morossini 9 months ago

    Thanks for the great video. It will be very useful

  • @johan-mattias
    @johan-mattias 10 months ago

    nice clock brah 🕰

  • @JonBrookes
    @JonBrookes 10 months ago

    Your assertion, I believe, to the effect that we are not just using AI to push out volume of low quality, but rather high quality and bespoke to our abilities - mind blowing. This is what I've been thinking and trying to do also. This conversation is highly enlightening and affirming to me, and I thank you for providing it. I look forward to more content like this. I also like your practical guidance in earlier videos. Kudos to you!

  • @timduck8506
    @timduck8506 11 months ago

    GPT 3.5 is flawed as it is censored, so you need to find out where it is flawed before you can ask the question, which makes it 3 to 8 times slower to get the right answer you wanted, if it complies with the rules that GPT 3.5 goes by. Otherwise you're going to get an error message or something that evades the answer to save face.

  • @trackerprince6773
    @trackerprince6773 11 months ago

    What is the difference between custom GPTs and fine-tuned GPTs?

  • @theformidablefrog1967
    @theformidablefrog1967 a year ago

    What if I cloned the voice of a dead person? 😂

  • @farahimad4432
    @farahimad4432 a year ago

    How do you get an email suggesting that your file is ready?

  • @BlownOutSpeakers
    @BlownOutSpeakers a year ago

    The difference is there's no profit in replacing grandmasters. Writers, actors, workers, though? You don't need to pay an AI. That is the real problem.

  • @hosseinbadrnezhad2019
    @hosseinbadrnezhad2019 a year ago

    You answered many of my questions in this video

  • @stuartrobertson1359
    @stuartrobertson1359 a year ago

    Woah, exactly what I am looking for. Hope you follow this use case all the way through, sharing how you build it, how it works in practice, how you add content later, and how you expose it for your friend to use.

  • @echofloripa
    @echofloripa a year ago

    I did a test using questions and answers about the election process in Brazil. It had 67 questions and answers. I tried the default 3 epochs, then 5, 7 and even 12. In none of the cases did I manage to get the same response I had trained on, for the exact same system message and user message. I tried in Portuguese and in English, and the result was the same. Yes, it gave a different response compared to the base model, but still never a correct answer. For the English dataset test I trimmed the 67 questions down to only 10. You can check the training loss using the API, and the numbers were erratic. I guess that, at least with gpt-3.5-turbo fine-tuning, it's not possible to get it to increase its knowledge. I did some tests with open-source LLMs, but I still have to train with Llama 2. Maybe fine-tuning isn't really fit for that, and you have to use embeddings and vector databases to achieve it.

    • @luiseduardo2249
      @luiseduardo2249 11 months ago

      Man, I'm having problems adding the training data and validation data for the fine-tuning. It asks for prompt-completion format, but whenever I try to add it, it says it's in prompt-completion and needs to be in chat-completion, which ends up creating a loop of problems. Could you send me a validation and training example that you know works, so I can test it?

    • @echofloripa
      @echofloripa 11 months ago

      @@luiseduardo2249 But did you get an error on the OpenAI API calls? Which error exactly?

    • @luiseduardo2249
      @luiseduardo2249 11 months ago

      @@echofloripa It was actually a platform error; I solved it. Would you know if there's some way to send an input to GPT (configured with fine-tuning) from Power Automate, for example, using the API? I only see people using a chatbot.

    • @echofloripa
      @echofloripa 11 months ago

      @@luiseduardo2249 It's been a while since I touched the OpenAI integration; I started a personal project, an educational mobile app in Flutter, which left no time for anything else. I don't know Power Automate, but I imagine you can program an integration.

  • @gcash3074
    @gcash3074 a year ago

    Love it. Thanks for this video!

  • @NeeloppherSyed
    @NeeloppherSyed a year ago

    Hi Greg, I would like to discuss a collaboration with your channel. It would be great if we can discuss in detail.

  • @power-of-ai
    @power-of-ai a year ago

    thank you for sharing your knowledge

  • @RitikGupta-e9l
    @RitikGupta-e9l a year ago

    Hey, I'm getting an error while running the code. It says Configuration is not a constructor. I am not much into coding; can you please help me? Thanks

  • @barryhoffman9956
    @barryhoffman9956 a year ago

    Awesome! You feed it the book of Proverbs and it spits out Ecclesiastes.

  • @leasing.life.
    @leasing.life. a year ago

    Excellent video

  • @XWSshow
    @XWSshow a year ago

    Do I need a paid account for fine-tuning? Because I tried it out today and got an error that fine-tuning can't be created with an exploring account… 😢 (I still have 4 dollars on my exploring account.)

  • @Branstrom
    @Branstrom a year ago

    Since I had already seen two similar videos I fast forwarded to the end just to see what you had trained it to do. I was like hm the meaning of life... Totally thought it was going to answer with 42. Apparently we have different holy books!

  • @LeonardoBoz
    @LeonardoBoz a year ago

    nice man! Good video.

    • @LeonardoBoz
      @LeonardoBoz a year ago

      I am creating an assistant for my WhatsApp. Do you think I can train a fine-tuned model with all the products that I have in my database?

  • @oskarrost9774
    @oskarrost9774 a year ago

    Hi, I have a quick question... I prepared my data, but I'm getting an error while launching the file upload saying that line 1 of my data has no dictionary, even though in line 1 I just have the system message text. Do you know how to fix that?

  • @p0gue23
    @p0gue23 a year ago

    Do you need the step of generating a synthetic user question for each message, or can you leave the user strings blank if, like in the present example, you are training on a bunch of text and not an actual chat?

  • @brianwallace6640
    @brianwallace6640 a year ago

    How do I actually do this?

  • @Haydenralph532
    @Haydenralph532 a year ago

    Hey brother, really nice video! I was wondering if I could help you edit your videos and also make a highly engaging thumbnail, which will help your video reach a wider audience.

  • @thenoblerot
    @thenoblerot a year ago

    Good videos! The algorithm sent me, and I've subscribed to see where the channel goes. Best of luck :) I definitely understand the point of the video, buuut, I might have countered with a demonstration that ChatGPT can generate the code for a hangman game from even the most basic prompt! Or how to hide the secret word from the user in the system or other non-user facing message.

  • @pypypy4228
    @pypypy4228 a year ago

    Liked and subscribed

  • @pypypy4228
    @pypypy4228 a year ago

    Thanks! Is the number of epochs something you want to choose? If so, how many should I choose, and how do I come up with this number?

  • @pypypy4228
    @pypypy4228 a year ago

    You just scratched a very deep problem, thank you! I asked ChatGPT to make up a random list of anything (step 1), randomly choose one item (step 2), and output only that one item. For the reason you presented, it didn't work as I had supposed. It can't do step 1 and, with it "in mind", do step 2. It doesn't "store" data. I wish you posted more vids, BTW.

  • @TheGuillotineKing
    @TheGuillotineKing a year ago

    Fun fact: you can fine-tune, but you can't get the model, so they keep you paying for the API. You're better off training locally on your own machine.

    • @D3ADPIX
      @D3ADPIX a year ago

      Good luck deploying something as powerful at scale at a cheaper price. If you have any luck let me know.

    • @TheGuillotineKing
      @TheGuillotineKing a year ago

      @@D3ADPIX Hugging Face will let you fine-tune for free on a small dataset, and for 20 a month you get a lot of resources. You should look into it.

  • @jordan-jones
    @jordan-jones a year ago

    This is awesome. Detailed but to the point. Thanks for sharing, looking forward to more AI videos

  • @Shahid_An-AI-Engineer
    @Shahid_An-AI-Engineer a year ago

    Sir, would you please share this fine-tuning dataset with me, as I'm a student and I don't have access to GPT-4. If you want to help, please reply.

  • @Vincent-mx4rk
    @Vincent-mx4rk a year ago

    thanks for sharing, great job

  • @NamasteMax
    @NamasteMax a year ago

    What I'm really trying to figure out is: is there a way to successfully train it on newer documentation, say newer library docs like React's? So when I'm using it to help me code, I get more up-to-date syntax?

  • @astera-pt9je
    @astera-pt9je a year ago

    Do you need distinct user questions for each line of the jsonl file? I am preparing my data and most of the lines in the file contain "can you provide more details on this?" for the user content. In addition, if I remove entries with duplicate user questions, the data shrinks significantly which I think might be an issue since we should have as many lines as possible in our dataset.

    • @haihailabs
      @haihailabs a year ago

      You’ll get better results with distinct questions

  • @tobiasabdon
    @tobiasabdon a year ago

    Great work!

  • @JackVucivic
    @JackVucivic a year ago

    Thank you. Good to see you are using Proverbs. Very good Book of the Bible.

  • @PabloRosa-y7p
    @PabloRosa-y7p a year ago

    Amazing video, mate. You know, I've only one issue while doing this. While running FineTuningJob, the terminal returns this error: 'openai' has no attribute 'FineTuningJob'. Which version of openai are you currently using?

    • @haihailabs
      @haihailabs a year ago

      Make sure you pip install openai --upgrade to the latest version

  • @rajutukadya2919
    @rajutukadya2919 a year ago

    Hi. Very informative video. I had a question. You have used LangChain here, but when we don't use LangChain and directly use openai, what should the system prompt be when using the fine-tuned model?

    • @haihailabs
      @haihailabs a year ago

      System message is optional.

  • @shubhamtyagi6281
    @shubhamtyagi6281 a year ago

    I have an OpenAI key, but I'm not able to follow your video. Can you make a detailed blog and share snippets? I would like to learn and create something similar with my cultural sacred book. I'm a coding beginner. I follow Udemy videos, etc. However, I am very new to AI stuff. It would be very helpful if you did that. Thanks.

    • @haihailabs
      @haihailabs a year ago

      Sure thing! haihai.ai/finetune

  • @huilinghuang7811
    @huilinghuang7811 a year ago

    nice to have this feature

  • @Teawos98
    @Teawos98 a year ago

    bro thank you so much for this video

  • @Teawos98
    @Teawos98 a year ago

    legend

  • @athirashibu6089
    @athirashibu6089 a year ago

    reply = response["choices"][0]["message"]["content"] TypeError: 'NoneType' object is not subscriptable