Entry Point AI
  • 12 videos
  • 79,736 views
AI in Google Sheets: GPT-4 In Your Spreadsheet
🎁 Join our Skool community: www.skool.com/entry-point-ai
Learn an amazing trick to use OpenAI models directly inside Google Sheets. Watch as I transform whole columns of data using an AI prompt in seconds! After this video, you'll be able to do awesome things like write creative copy, standardize your data, or extract specific details from unstructured text.
Topics covered:
0:27 Demo of LLM calls directly in Google Sheets
5:33 The custom function in Google Sheets and how it works
9:39 Where to add your API key
10:53 How to create your OpenAI API key
12:57 Wrapping it up
The code shown in the video can be found in this GitHub repository:
github.com/markhennings/google-sheets-ai/tree/main
Just copy, ...
407 views
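
For readers skimming this page, here is a rough Python sketch of the idea behind the video and the linked repo. The actual code in the repository is a Google Apps Script custom function; the function name, model, and example prompt below are illustrative assumptions, not the repo's code:

```python
# Apply one instruction to every value in a "column" of data via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ai(prompt: str, value: str, model: str = "gpt-4") -> str:
    """Run a single cell value through the model with the given instruction."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": value},
        ],
    )
    return response.choices[0].message.content

# "Transform a whole column": run the same prompt over every row.
column = ["acme inc.", "Globex CORP", "initech, llc"]
standardized = [ai("Standardize this company name to Title Case.", v) for v in column]
print(standardized)
```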

Videos

Build Fine-tuning Datasets with Synthetic Inputs
1.8K views · 2 months ago
There are virtually unlimited ways to fine-tune LLMs to improve performance at specific tasks... but where do you get the data from? In this video, I demonstrate one way that you can fine-tune without much data to start with - and use what little data you have to reverse-engineer the inputs required! I show step-by-step how to take a small set of data (for my example I use 20 press releases I p...
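
A minimal sketch of the synthetic-input idea described above: for each real output you already have (e.g. a press release), ask a model to reverse-engineer the input that could have produced it, giving you (synthetic input, real output) pairs to fine-tune on. The prompt wording and model name here are illustrative, not taken from the video:

```python
from openai import OpenAI

client = OpenAI()

def synthesize_input(real_output: str) -> str:
    """Reverse-engineer a plausible input (a short brief) for an existing output."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Given a finished press release, write the short "
                                          "bullet-point brief a writer would have started from."},
            {"role": "user", "content": real_output},
        ],
    )
    return response.choices[0].message.content

press_releases = ["..."]  # the small set of real examples you already have
dataset = [{"input": synthesize_input(pr), "output": pr} for pr in press_releases]
```
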
How LLMs Generate Text: Tokens, Temperature, & Top P
1K views · 4 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I explain how language models generate text, why most of the process is actually deterministic (not random), and how you can shape the probability when selecting a next token from LLMs using parameters like temperature and top p. I cover temperature in-depth and demonstrate with a spreadsheet how different values change the...
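
A small sketch of the sampling mechanics covered in this video: the model's next-token scores are deterministic; temperature rescales them before the softmax, and top p keeps only the most probable tokens whose cumulative probability reaches p. The scores below are made up for illustration:

```python
import numpy as np

logits = {"cat": 4.0, "dog": 3.5, "banana": 1.0}  # hypothetical next-token scores

def sample(logits, temperature=1.0, top_p=1.0):
    rng = np.random.default_rng()
    tokens = list(logits)
    scaled = np.array([logits[t] for t in tokens]) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # top p: keep the highest-probability tokens until their mass reaches top_p
    order = np.argsort(probs)[::-1]
    keep = order[: np.searchsorted(np.cumsum(probs[order]), top_p) + 1]
    masked = np.zeros_like(probs)
    masked[keep] = probs[keep]
    masked /= masked.sum()
    return tokens[rng.choice(len(tokens), p=masked)]

print(sample(logits, temperature=0.2))             # nearly always "cat"
print(sample(logits, temperature=2.0, top_p=0.9))  # flatter distribution, tail cut off
```
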
LoRA & QLoRA Fine-tuning Explained In-Depth
23K views · 5 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I dive into how LoRA works vs full-parameter fine-tuning, explain why QLoRA is a step up, and provide an in-depth look at the LoRA-specific hyperparameters: Rank, Alpha, and Dropout. 0:26 - Why We Need Parameter-efficient Fine-tuning 1:32 - Full-parameter Fine-tuning 2:19 - LoRA Explanation 6:29 - What should Rank be? 8:04 ...
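
For reference, a hedged sketch of where the rank, alpha, and dropout hyperparameters from this video show up in practice, using the Hugging Face PEFT library. The base model and target modules are illustrative choices, not recommendations from the video:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = LoraConfig(
    r=8,                # rank of the low-rank A/B matrices
    lora_alpha=16,      # scaling factor: the update is scaled by alpha / r
    lora_dropout=0.05,  # dropout applied inside the LoRA layers during training
    target_modules=["q_proj", "v_proj"],  # which weight matrices get adapters
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small A/B matrices are trainable
```
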
Prompt Engineering, RAG, and Fine-tuning: Benefits and When to Use
45K views · 7 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Explore the difference between Prompt Engineering, Retrieval-augmented Generation (RAG), and Fine-tuning in this detailed overview. 01:14 Prompt Engineering RAG 02:50 How Retrieval Augmented Generation Works - Step-by-step 06:23 What is fine-tuning? 08:25 Fine-tuning misconceptions debunked 09:53 Fine-tuning strategies 13:25 Putting it al...
Fine-tuning 101 | Prompt Engineering Conference
2.5K views · 7 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Intro to fine-tuning LLMs (large language models) from the Prompt Engineering Conference (2023), presented by Mark Hennings, founder of Entry Point AI. 00:13 - Part 1: Background Info -How a foundation model is born -Instruct tuning and safety tuning -Unpredictability of raw LLM behavior -Showing LLMs how to apply knowledge -Characteristic...
"I just fine-tuned GPT-3.5 Turbo…" - Here's how
997 views · 9 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I'm diving into the power and potential of the newly released GPT-3.5's fine-tuning option. After fine-tuning some of my models, the enhancement in quality is undeniably remarkable. Join me as I: - Demonstrate the model I fine-tuned: Watch as the AI suggests additional items for an e-commerce shopping cart and the rationale...
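
For reference, a sketch of the JSONL chat format that GPT-3.5 Turbo fine-tuning expects: one {"messages": [...]} object per line, ending with the assistant reply the model should learn. The shopping-cart example below is invented, not the dataset from the video:

```python
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "Suggest one add-on item for the shopping cart and explain why."},
            {"role": "user", "content": "Cart: tent, sleeping bag"},
            {"role": "assistant", "content": "Camping lantern: campers setting up a tent usually need light after dark."},
        ]
    },
    # ...more examples, one per training case
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```
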
No-code AI fine-tuning (with Entry Point!)
950 views · 11 months ago
👉 Sign up for Entry Point here: www.entrypointai.com Entry Point is a platform for no-code AI fine-tuning, with support for Large Language Models (LLMs) from multiple platforms: OpenAI, AI21, and more. In this video I'll demonstrate the core fine-tuning principles while creating an "eCommerce product recommendation" engine in three steps: 1. First I write ~20 examples by hand 2. Then I expand t...
28 April Update: Playground and Synthesis
71 views · 1 year ago
28 April Update: Playground and Synthesis
How to Fine-tune GPT-3 in less than 3 minutes.
4.2K views · 1 year ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Learn how to fine-tune GPT-3 (and other) AI models without writing a single line of python code. In this video I'll show you how to create your own custom AI models out of GPT-3 for specialized use cases that can work better than ChatGPT. For demonstration, I'll be working on a classifier AI model for categorizing keywords from Google Ads...
Can GPT-4 Actually Lead a D&D Campaign? 🤯
346 views · 1 year ago
If you want to create / fine-tune your own AI models, check out www.entrypointai.com/
Entry Point Demo 1.0 - Keyword Classifier (AI Model)
224 views · 1 year ago
Watch over my shoulder as I fine-tune a model for classifying keywords from Google Ads.

Comments

  • @LisaQiyaLi · 3 hours ago

    Thanks for sharing! Curious - can you fine tune the model by providing images? For example, one use case is resumes. What if I'd like to upload resume examples that are in PDF or JPEG format?

  • @seanmeverett · 10 hours ago

    Such a key point: probabilities, not facts. They’re quantum in nature, not binary.

  • @PlayOfLifeOfficial · 11 hours ago

    What I learned: we are nowhere near AGI.

  • @Sonic2kDBS · 2 days ago

    Some nice details here. Keep on.

  • @deepstum · 2 days ago

    Excellent explanation. Best wishes for your professional endeavors. While I rarely comment on YouTube videos, this one deserves all the praise.

  • @Ak_Seeker · 7 days ago

    Your videos are very informative, thank you!

  • @Ak_Seeker · 7 days ago

    Thank you

  • @funkyflorion · 10 days ago

    Good summary, thanks!

  • @coco-ge4xg · 13 days ago

    omg, I always get distracted by his blue eyes 😆 and ignore what he's talking about

  • @nafassaadat8326 · 14 days ago

    Can we use QLoRA in a simple ML model like a CNN for image classification?

  • @RoryWilliamson · 15 days ago

    This is one of the best explanations I've seen, thanks

  • @BorHouse · 15 days ago

    Tomorrow I have a presentation on RAG and you are like an angel to me right now 😅

  • @italoaguiar · 15 days ago

    Excellent video!! Congrats and thanks! +1 subscriber ;)

  • @opalrun · 16 days ago

    Great tutorial, thanks for posting it.

  • @raphaelsweden6141 · 22 days ago

    I found this useful. Possibly the code needs to be improved so that the Google Sheet doesn't make repeated and unnecessary calls to OpenAI. At least to me, it looks like calls are made every time the sheet is opened and when certain unrelated things are changed in the sheet. Overall, highly interesting.

    • @EntryPointAI · 22 days ago

      Thanks and agreed. It shouldn't run the code just from opening a worksheet, but it would be worth looking into more. I added a version to the code repository that uses Google Sheets' built-in caching, but it has some of its own limitations. I was thinking that going through a proxy with caching would be beneficial. Open to any ideas to reduce API calls and token usage.
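
One simple way to avoid paying for identical calls, sketched here in plain Python rather than Apps Script (the function and model names are illustrative, not the repo's code): memoize completions keyed on the exact prompt and input, so recalculating with unchanged cells reuses earlier results instead of hitting the API again.

```python
import functools
from openai import OpenAI

client = OpenAI()

@functools.lru_cache(maxsize=1024)
def cached_ai(prompt: str, value: str, model: str = "gpt-4") -> str:
    """Identical (prompt, value, model) calls hit the in-memory cache, not the API."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": value},
        ],
    )
    return response.choices[0].message.content
```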

    • @darrin.jahnel · 20 days ago

      Your videos are fantastic. Because of the amazing value you deliver on YouTube, I discovered Entry Point and became a customer.

    • @EntryPointAI · 20 days ago

      Awesome! Thanks for sharing - see you on the Masterclass today 😉

  • @Aspect0529 · 24 days ago

    Fantastically done!

  • @ArunkumarMTamil · 24 days ago

    How does LoRA fine-tuning track changes by creating two decomposition matrices?

    • @EntryPointAI · 20 days ago

      The matrices are multiplied together and the result is the changes to the LLM's weights. It should be explained clearly in the video; it may help to rewatch.

    • @ArunkumarMTamil · 20 days ago

      @EntryPointAI My understanding: the original weight matrix is 10 × 10. To form the two decomposed matrices A and B, let's take the rank as 1, so A is 10 × 1 and B is 1 × 10; the total trainable parameters are A + B = 20. In LoRA, even without any dataset training, if we simply add the A and B matrices to the original matrix, we can improve the accuracy slightly. And if we use a custom dataset in LoRA, the custom dataset's changes will be captured by the A and B matrices. Am I right, @EntryPointAI?

    • @EntryPointAI · 15 days ago

      @ArunkumarMTamil Trainable parameters math looks right. But these decomposed matrices will be initialized as all zeroes so adding them without any custom training dataset will have no effect.
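
A tiny numpy illustration of the numbers discussed in this thread: a 10×10 weight matrix updated through a rank-1 decomposition, so only 10 + 10 = 20 parameters are trainable, and with the standard LoRA initialization (B starts at zero) the product contributes nothing until training updates it:

```python
import numpy as np

d = 10
W0 = np.random.randn(d, d)         # frozen original weights: 100 parameters

B = np.zeros((d, 1))               # zero-initialized low-rank factor
A = np.random.randn(1, d)          # randomly initialized low-rank factor
print(B.size + A.size)             # 20 trainable parameters instead of 100

delta_W = B @ A                    # all zeros before any training
W_updated = W0 + delta_W
print(np.allclose(W_updated, W0))  # True: the model is unchanged at initialization
```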

  • @naevan1 · 27 days ago

    I love this video, man. Watched it at least 3 times and came back to it before a job interview, too. Please do more tutorials/explanations!

  • @JoshVonSchaumburg · 27 days ago

    I work in technical presales and delivery management. I probably watched 10 YouTube videos to try to better understand RAG vs. fine-tuning (which I'm now calling TAG :)). This was by far the best explanation. I'm sending it to all my coworkers!

    • @EntryPointAI · 27 days ago

      Awesome!! Glad I could help :)

  • @stutters3772 · 28 days ago

    This video deserves more likes

  • @Mel-lp4hz · 28 days ago

    Very nice and clear explanation, thanks!

  • @FernandoOtt · 29 days ago

    Awesome content, Mark. A question: I need to create an AI psychologist and I need to store college data, but this college data is kind of a guide for what to say, not the content itself. In that case, what is the best approach, RAG or fine-tuning?

    • @EntryPointAI · 29 days ago

      If it’s “how” to deliver the content, most likely fine-tuning.

    • @FernandoOtt · 29 days ago

      Thanks, Mark. I still question whether fine-tuning works for that, because the "how" isn't about personality; it's more like "use technique X or Y" to continue the interaction. Do you think fine-tuning is still the right approach?

  • @gileneusz · 1 month ago

    Great explanation and a very advanced web service, well thought out and executed! From your previous video, I learned that fine-tuning is more like shaping intuition, since the model operates on statistical patterns and isn't really "learning" in the human sense. With that in mind, I have a question about the capabilities of fine-tuning: can it be used to impart factual knowledge? For example, if I input data from a medical textbook, could the model be trained to make medical diagnoses in a way that approximates learning? (I used ChatGPT to make this comment more clear ;) )

    • @EntryPointAI · 28 days ago

      It's very hard to impart factual knowledge through fine-tuning. However, theoretically it's possible. You would need a larger corpus of text, and a better way to approach it might be "continued pre-training" - ideally before instruction-tuning is done to make the model conversational. To be honest, I haven't been able to experiment much in this area, so I can't say definitively. So take this answer with a grain of salt!

  • @anujlahoty8022 · 1 month ago

    Loved the content! Simply explained, no BS.

  • @Jas.in-bx3eg · 1 month ago

    Uncanny, avatar moderator.

  • @kunalnikam9112 · 1 month ago

    In LoRA, Wupdated = Wo + BA, where B and A are decomposed matrices with low ranks. I wanted to ask: what do the parameters of B and A represent? Are they both parameters of the pre-trained model, are both parameters of the target dataset, or does one (B) represent pre-trained model parameters while the other (A) represents target dataset parameters? Please answer as soon as possible.

    • @EntryPointAI · 28 days ago

      Wo would be the original model parameters. A and B multiplied together represent the changes to the original parameters learned from your fine-tuning. So together they represent the difference between your final fine-tuned model parameters and the original model parameters. Individually A and B don't represent anything, they are just intermediate stores of data that save memory.

    • @kunalnikam9112 · 27 days ago

      @EntryPointAI Got it!! Thank you

  • @darrin.jahnel · 1 month ago

    This is one of the best AI videos I've seen (and I've watched hundreds). Great job!

  • @gianni4302 · 1 month ago

    great video mate cheers

  • @SergieArizandieta · 1 month ago

    Wow, I'm a noobie in this field and I've been testing fine-tuning my own chatbot with different techniques, and I found a lot of stuff, but it's not common to find an explanation of the main reason for using it. Thank you a lot <3

  • @Blogservice-Fuerth · 1 month ago

    Great Content 🙏🏽

  • @ecotts · 1 month ago

    LoRa (Long Range) is a physical proprietary radio communication technique that uses a spread spectrum modulation technique derived from chirp spread spectrum. It's a low powered wireless platform that has become the de facto wireless platform of Internet of Things (IoT). Get your own acronym! 😂

    • @EntryPointAI · 1 month ago

      Fair - didn’t create it, just explaining it 😂

  • @louisrose7823 · 1 month ago

    Great video!

  • @TheBojda · 1 month ago

    Nice video, congrats! LoRA is about fine-tuning, but is it possible to use it to compress the original matrices to speed up inference? I mean, decompose the original model's weight matrices into products of low-rank matrices to reduce the number of weights.

    • @rishiktiwari · 1 month ago

      I think you mean distillation with quantisation?

    • @EntryPointAI · 1 month ago

      Seems worth looking into, but I couldn't give you a definitive answer on what the pros/cons would be. Intuitively I would expect it could reduce the memory footprint but that it wouldn't be any faster.

    • @TheBojda · 1 month ago

      @rishiktiwari Ty. I learned something new. :) If I understand well, this is a form of distillation.

    • @rishiktiwari · 1 month ago

      @TheBojda Cheers mate! Yes, in distillation there is a student-teacher configuration and the student tries to be like the teacher with fewer parameters (a.k.a. weights). This can also be combined with quantisation to reduce memory footprint.

  • @yotubecreators47 · 1 month ago

    The website's "Get started" page has been loading for a long time for me; I'm not able to register.

    • @EntryPointAI · 1 month ago

      You may have caught us when we were experiencing a moment of downtime, can you try again?

  • @RafaelPierre-vo2rq · 2 months ago

    Awesome explanation! Which camera do you use?

    • @EntryPointAI · 2 months ago

      Thanks, it’s a Canon 6d Mk II

  • @Termicidal · 2 months ago

    GPT-4 is good at writing stories, and that's it... It will not prompt you to make stealth or perception checks. It will not keep track of your inventory. It will not even keep track of how many spell slots or long/short rests you've used. You have to REMIND IT TO DO ALL THESE THINGS. It's like I'm DMing myself, after all the mistakes I've had to correct.

  • @neiltaggart1 · 2 months ago

    Very clear & simple - well done!👏

  • @YLprime · 2 months ago

    Dude u look like the lich king with those blue eyes

    • @practicemail3227 · 1 month ago

      True. 😅 He should be in an acting career, I guess.

    • @EntryPointAI · 29 days ago

      You mean Lich King looks like me I think 🤪

  • @drstrangeluv1680 · 2 months ago

    I loved the explanation! Please make more such videos!

  • @spalczynski · 2 months ago

    Can you make a video about tuning validation? How to reliably test that tuning gives us the results we're looking for?

  • @saraswathisripada · 2 months ago

    Very well explained, thank you.

  • @tanveeriqbal6680 · 2 months ago

    Very clear explanation in its simple form, thanks @entrypointai

  • @thunken · 2 months ago

    Nice explanation! I signed up for the masterclass, but it's 4 am my time; might be a challenge.

  • @marksaunders8430 · 2 months ago

    Hands down, the best video I've found for this topic

  • @jong-keyongkim565 · 2 months ago

    The most efficient and informative content on those terms and their respective usage. 👍

  • @vediodiary1754 · 2 months ago

    Oh my god, your eyes 😍😍😍😍 everybody deserves a hot teacher 😂❤

  • @user-wr4yl7tx3w · 2 months ago

    This is really well presented

  • @sivavenu5100 · 2 months ago

    Very clear explanation, <3 from India

  • @zebirdman · 2 months ago

    Love this video - thank you! Question: When RAG is used you say it shows up in the context window (i.e., where the user types in the prompt). However, my understanding / experience has been that once the knowledge is provided to the LLM via a vector database, the user can just type the prompt/query and nothing else is needed. Can you provide an example of a user's prompt/query when using RAG?

    • @EntryPointAI · 2 months ago

      The knowledge has to be retrieved for each new user inquiry, and inserted into a prompt alongside it each time.
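
A bare-bones Python sketch of that flow; the vector store's search method and the prompt wording are placeholders, not a specific library's API:

```python
from openai import OpenAI

client = OpenAI()

def answer(question: str, vector_store) -> str:
    # 1. Retrieve: look up the passages most relevant to this particular inquiry.
    passages = vector_store.search(question, top_k=3)   # hypothetical store interface
    # 2. Augment: insert the retrieved text into the prompt alongside the question.
    context = "\n\n".join(passages)
    messages = [
        {"role": "system", "content": "Answer using only the provided context.\n\n" + context},
        {"role": "user", "content": question},
    ]
    # 3. Generate: the model only "knows" what was retrieved for this call.
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content
```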

  • @devtest202 · 2 months ago

    Excellent material! I want to ask you a question: would training the GPT-2 model (fine-tuning) for these purposes be valid in your view? On the other hand, do you have examples of how you have trained the model? It is very interesting what you say about seeing good results with only a few examples; for instance, if I want to train it to act like a salesperson, so that whenever it is asked about a certain type of product it always responds with a particular logic. I also think it's great when you say at the end that the ideal model would be RAG with only variables, with the prompt in a sense already trained.

    • @EntryPointAI · 2 months ago

      I haven't tried GPT-2; I would stick to more recent models for best results. For a sales agent, that might be a good case to combine with RAG, where relevant product info is looked up and inserted into the prompt, and then fine-tuning teaches the model how to "sell" it.