- Videos: 12
- Views: 79,736
Entry Point AI
United States
Joined 22 Mar 2023
The modern platform for fine-tuning large language models.
AI in Google Sheets: GPT-4 In Your Spreadsheet
🎁 Join our Skool community: www.skool.com/entry-point-ai
Learn an amazing trick to use OpenAI models directly inside Google Sheets. Watch as I transform whole columns of data using an AI prompt in seconds! After this video, you'll be able to do awesome things like write creative copy, standardize your data, or extract specific details from unstructured text.
Topics covered:
0:27 Demo of LLM calls directly in Google Sheets
5:33 The custom function in Google Sheets and how it works
9:39 Where to add your API key
10:53 How to create your OpenAI API key
12:57 Wrapping it up
The code shown in the video can be found in this GitHub repository:
github.com/markhennings/google-sheets-ai/tree/main
Just copy, ...
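The repository's code is a Google Apps Script custom function and isn't reproduced here. As a rough sketch of the same idea in Python (hypothetical function names; the model string and prompts are placeholders), each cell transform boils down to one chat-completion request against the OpenAI API:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, cell_value, model="gpt-4"):
    """Combine the user's instruction with one cell's value into a request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": prompt},
            {"role": "user", "content": str(cell_value)},
        ],
    }

def ai_transform(prompt, cell_value, api_key):
    """Send one cell through the model and return the text of the reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, cell_value)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Something like ai_transform("Standardize this company name", "acme corp ltd.", key) would return the model's cleaned-up version; build_payload is split out so the request shape can be inspected without a network call.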
Views: 407
Videos
Build Fine-tuning Datasets with Synthetic Inputs
Views: 1.8K · 2 months ago
There are virtually unlimited ways to fine-tune LLMs to improve performance at specific tasks... but where do you get the data from? In this video, I demonstrate one way that you can fine-tune without much data to start with - and use what little data you have to reverse-engineer the inputs required! I show step-by-step how to take a small set of data (for my example I use 20 press releases I p...
How LLMs Generate Text: Tokens, Temperature, & Top P
Views: 1K · 4 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I explain how language models generate text, why most of the process is actually deterministic (not random), and how you can shape the probability when selecting a next token from LLMs using parameters like temperature and top p. I cover temperature in-depth and demonstrate with a spreadsheet how different values change the...
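The spreadsheet from that video isn't reproduced here, but the mechanics it describes can be sketched in a few lines of Python (a toy three-token vocabulary; illustrative only, not the video's exact sheet):

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by temperature, then softmax into next-token probabilities.
    Lower temperature sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability >= p,
    then renormalize. Returns a list of (token_index, probability)."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cum = [], 0.0
    for idx, pr in ranked:
        kept.append((idx, pr))
        cum += pr
        if cum >= p:
            break
    total = sum(pr for _, pr in kept)
    return [(idx, pr / total) for idx, pr in kept]
```

With temperature near zero the top token's probability approaches 1 (effectively deterministic), which is the "mostly deterministic" point the video makes; top-p then trims the sampling pool before the final random draw.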
LoRA & QLoRA Fine-tuning Explained In-Depth
Views: 23K · 5 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I dive into how LoRA works vs full-parameter fine-tuning, explain why QLoRA is a step up, and provide an in-depth look at the LoRA-specific hyperparameters: Rank, Alpha, and Dropout. 0:26 - Why We Need Parameter-efficient Fine-tuning 1:32 - Full-parameter Fine-tuning 2:19 - LoRA Explanation 6:29 - What should Rank be? 8:04 ...
Prompt Engineering, RAG, and Fine-tuning: Benefits and When to Use
Views: 45K · 7 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Explore the difference between Prompt Engineering, Retrieval-augmented Generation (RAG), and Fine-tuning in this detailed overview. 01:14 Prompt Engineering RAG 02:50 How Retrieval Augmented Generation Works - Step-by-step 06:23 What is fine-tuning? 08:25 Fine-tuning misconceptions debunked 09:53 Fine-tuning strategies 13:25 Putting it al...
Fine-tuning 101 | Prompt Engineering Conference
Views: 2.5K · 7 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Intro to fine-tuning LLMs (large language models) from the Prompt Engineering Conference (2023). Presented by Mark Hennings, founder of Entry Point AI. 00:13 - Part 1: Background Info - How a foundation model is born - Instruct tuning and safety tuning - Unpredictability of raw LLM behavior - Showing LLMs how to apply knowledge - Characteristic...
"I just fine-tuned GPT-3.5 Turbo…" - Here's how
Views: 997 · 9 months ago
🎁 Join our Skool community: www.skool.com/entry-point-ai In this video, I'm diving into the power and potential of the newly released GPT-3.5's fine-tuning option. After fine-tuning some of my models, the enhancement in quality is undeniably remarkable. Join me as I: - Demonstrate the model I fine-tuned: Watch as the AI suggests additional items for an e-commerce shopping cart and the rationale...
No-code AI fine-tuning (with Entry Point!)
Views: 950 · 11 months ago
👉 Sign up for Entry Point here: www.entrypointai.com Entry Point is a platform for no-code AI fine-tuning, with support for Large Language Models (LLMs) from multiple platforms: OpenAI, AI21, and more. In this video I'll demonstrate the core fine-tuning principles while creating an "eCommerce product recommendation" engine in three steps: 1. First I write ~20 examples by hand 2. Then I expand t...
28 April Update: Playground and Synthesis
Views: 71 · 1 year ago
How to Fine-tune GPT-3 in less than 3 minutes.
Views: 4.2K · 1 year ago
🎁 Join our Skool community: www.skool.com/entry-point-ai Learn how to fine-tune GPT-3 (and other) AI models without writing a single line of Python code. In this video I'll show you how to create your own custom AI models out of GPT-3 for specialized use cases that can work better than ChatGPT. For demonstration, I'll be working on a classifier AI model for categorizing keywords from Google Ads...
Can GPT-4 Actually Lead a D&D Campaign? 🤯
Views: 346 · 1 year ago
If you want to create / fine-tune your own AI models, check out www.entrypointai.com/
Entry Point Demo 1.0 - Keyword Classifier (AI Model)
Views: 224 · 1 year ago
Watch over my shoulder as I fine-tune a model for classifying keywords from Google Ads.
Thanks for sharing! Curious - can you fine tune the model by providing images? For example, one use case is resumes. What if I'd like to upload resume examples that are in PDF or JPEG format?
Such a key point: probabilities, not facts. They’re quantum in nature, not binary.
What I learned: we are nowhere near AGI.
Some nice details here. Keep on.
Excellent explanation. Best wishes for your professional endeavors. While I rarely comment on YouTube videos, this one deserves all the praise.
Thank you very much!
Your Videos are very informative, Thank you
Thank you
Good summary, thanks!
omg I always get distracted by his blue eyes😆 and ignore what he's talking about
can we use QLoRA in a simple ML model like CNN for image classification ?
This is one of the best explanations i've seen, thanks
Tomorrow I have a presentation on RAG and you are like an angel to me right now 😅
Excellent video!! Congrats and thanks! +1 subscriber ;)
Great tutorial, thanks for posting it.
I found this useful. Possibly the code needs to be improved so that the Google Sheet doesn't make repeated and unnecessary calls to OpenAI. At least to me, it looks like calls are made every time the sheet is opened and when certain unrelated things are changed in the sheet. Overall, highly interesting.
Thanks and agreed. It shouldn't run the code just from opening a worksheet, but it would be worth looking into more. I added a version to the code repository that uses Google Sheets' built-in caching, but it has some of its own limitations. I was thinking that going through a proxy with caching would be beneficial. Open to any ideas to reduce API calls and token usage.
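The fix mentioned in the reply lives in Apps Script (Google Sheets' built-in CacheService); as a language-neutral sketch of the same idea with hypothetical function names, memoizing on the prompt plus cell value keeps a sheet recalculation from re-hitting the API:

```python
import hashlib

_cache = {}

def cached_ai_call(prompt, cell_value, call_fn):
    """Memoize model calls keyed on (prompt, cell_value) so that recalculating
    the sheet reuses earlier answers instead of issuing new API requests.
    `call_fn` is whatever actually performs the request."""
    key = hashlib.sha256(f"{prompt}\x00{cell_value}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_fn(prompt, cell_value)
    return _cache[key]
```

The trade-off is staleness: identical inputs always return the cached answer, which is usually what you want for deterministic transforms but not for creative, high-temperature prompts.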
Your videos are fantastic. Because of the amazing value you deliver on YouTube, I discovered Entry Point and became a customer.
Awesome! Thanks for sharing - see you on the Masterclass today 😉
Fantastically done!
How does LoRA fine-tuning track changes by creating two decomposition matrices?
The matrices are multiplied together, and the result is the changes to the LLM's weights. It should be explained clearly in the video; it may help to rewatch.
@EntryPointAI My understanding: the original weight matrix is 10 × 10. It is decomposed into two matrices, A and B. If we take the rank as 1, then A is 10 × 1 and B is 1 × 10, so the total trainable parameters are A + B = 20. In LoRA, even without any dataset training, if we simply add the A and B matrices to the original matrix, we can improve the accuracy slightly. And if we use a custom dataset in LoRA, the custom dataset's changes will be captured by the A and B matrices. Am I right @EntryPointAI?
@@ArunkumarMTamil The trainable-parameters math looks right. But these decomposed matrices are initialized so that their product starts at all zeroes, so adding them without any custom training dataset will have no effect.
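To make the arithmetic in this thread concrete, here is a minimal NumPy sketch of the 10 × 10, rank-1 example. Note that in the standard LoRA setup only B starts at zero (A gets a small random init), which is already enough to make the initial update B @ A zero:

```python
import numpy as np

d = 10          # hypothetical 10x10 weight matrix for one layer
rank = 1        # LoRA rank

W0 = np.random.randn(d, d)            # frozen pre-trained weights (100 params)
A = np.random.randn(rank, d) * 0.01   # trainable; small random init
B = np.zeros((d, rank))               # trainable; zero init, so B @ A starts at 0

trainable = A.size + B.size           # 10 + 10 = 20, versus 100 full parameters
delta = B @ A                         # the learned update applied to W0
```

Before any training, W0 + delta equals W0 exactly, so the decomposition on its own changes nothing; only gradient updates to A and B make delta non-zero.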
I love this video man. watched it at least 3 times and came back to it before a job interview also. Please do more tutorials /explanations !
I work in technical presales and delivery management. I probably watched 10 YouTube videos to try to better understand RAG vs. fine-tuning (which I'm now calling TAG :)). This was by far the best explanation. I'm sending it to all my coworkers!
Awesome!! Glad I could help :)
This video deserves more likes
Very nice and clear explanation, thanks!
Awesome content Mark. A Question. I need to create an AI psychologist and I need to store college data, but this college data is kind of a guide of what to speak, and not the content itself. In that case, what is the best approach, RAG or Fine-tuning?
If it’s “how” to deliver the content, most likely fine-tuning.
Tks Mark. I still question if fine-tuning works for that cause isn't the "how" for like personality but it is for something like "use the technique X or Y" to continue the interaction. Do you think it is still a fine-tuning approach?
Great explanation and a very advanced web service-well thought out and executed! From your previous video, I learned that fine-tuning is more like shaping intuition, since the model operates on statistical patterns and isn't really "learning" in the human sense. With that in mind, I have a question about the capabilities of fine-tuning: Can it be used to impart factual knowledge? For example, if I input data from a medical textbook, could the model be trained to make medical diagnoses in a way that approximates learning? (I used chatgpt to make this comment more clear ;) )
It's very hard to impart factual knowledge through fine-tuning. However, theoretically it's possible. You would need a larger corpus of text, and a better way to approach it might be "continued pre-training" - ideally before instruction-tuning is done to make the model conversational. To be honest, I haven't been able to experiment much in this area, so I can't say definitively. So take this answer with a grain of salt!
Loved the content! Simply explained, no BS.
Uncanny, avatar moderator.
In LoRA, W_updated = W0 + BA, where B and A are decomposed matrices with low rank. I wanted to ask: what do the parameters of B and A represent? Are they both parameters of the pre-trained model, both parameters of the target dataset, or does one (B) represent pre-trained model parameters while the other (A) represents target dataset parameters? Please answer as soon as possible.
Wo would be the original model parameters. A and B multiplied together represent the changes to the original parameters learned from your fine-tuning. So together they represent the difference between your final fine-tuned model parameters and the original model parameters. Individually A and B don't represent anything, they are just intermediate stores of data that save memory.
@@EntryPointAI got it!! Thank you
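A small NumPy sketch (toy dimensions, not from the video) shows why A and B individually carry no meaning: only their product matters, and after training it can be merged into W0 so inference costs nothing extra:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2

W0 = rng.standard_normal((d_out, d_in))   # frozen base weights
A = rng.standard_normal((rank, d_in))     # low-rank factors learned in fine-tuning
B = rng.standard_normal((d_out, rank))

x = rng.standard_normal(d_in)             # an activation vector

# Unmerged (as during training): base path plus low-rank path.
unmerged = W0 @ x + B @ (A @ x)

# Merged (for deployment): a single matrix, no extra inference cost.
W_updated = W0 + B @ A
merged = W_updated @ x
```

Both paths give the same output, which is why LoRA adapters can either be kept separate (and swapped per task) or folded into the base weights.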
This is one of the best AI videos I've seen (and I've watched hundreds). Great job!
great video mate cheers
wow, I'm a noobie in this field and I've been testing fine-tuning my own chatbot with different techniques, and I found a lot of stuff, but it's not common to find an explanation of the main reason for using it, ty a lot <3
Great Content 🙏🏽
LoRa (Long Range) is a proprietary physical radio communication technique that uses a spread-spectrum modulation scheme derived from chirp spread spectrum. It's a low-powered wireless platform that has become the de facto wireless platform of the Internet of Things (IoT). Get your own acronym! 😂
Fair - didn’t create it, just explaining it 😂
Great video!
Nice video, congrats! LoRA is about fine-tuning, but is it possible to use it to compress the original matrices to speed up inference? I mean, decompose the original model's weight matrices into products of low-rank matrices to reduce the number of weights.
I think you mean distillation with quantisation?
Seems worth looking into, but I couldn't give you a definitive answer on what the pros/cons would be. Intuitively I would expect it could reduce the memory footprint but that it wouldn't be any faster.
@@rishiktiwari Ty. I learned something new. :) If I understand well, this is a form of distillation.
@@TheBojda Cheers mate! Yes, in distillation there is a student-teacher configuration and the student tries to be like the teacher with fewer parameters (aka weights). This can also be combined with quantisation to reduce the memory footprint.
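What the commenter describes is essentially truncated-SVD weight compression, a separate technique from both LoRA and distillation. A minimal NumPy sketch (synthetic matrix, illustrative only): it works well only when the weight matrix is genuinely close to low rank, which fully trained LLM weight matrices often are not.

```python
import numpy as np

def low_rank_approx(W, rank):
    """Truncated SVD: the best rank-r approximation of W (Eckart-Young theorem)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(1)
# A matrix that is nearly low-rank: rank-2 signal plus small noise.
W = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
W += 0.01 * rng.standard_normal((50, 50))

W2 = low_rank_approx(W, 2)
rel_err = np.linalg.norm(W - W2) / np.linalg.norm(W)
```

Storing the two rank-2 factors takes 50·2 + 2·50 = 200 numbers instead of 2,500, but whether the matrix-vector products actually get faster depends on the hardware and kernels, which matches the hedged reply above.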
The website's "Get started" page has been loading for a long time for me; I am not able to register.
You may have caught us when we were experiencing a moment of downtime, can you try again?
Awesome explanation! Which camera you use?
Thanks, it’s a Canon 6d Mk II
GPT-4 is good at writing stories and that's it... it will not prompt you to make stealth/perception checks. It will not keep track of your inventory. It will not even keep track of how many spell slots or long/short rests you've used. You have to REMIND IT TO DO ALL THESE THINGS. It's like I'm DMing myself after all the mistakes I've had to correct.
Very clear & simple - well done!👏
Dude u look like the Lich King with those blue eyes
True. 😅 He should be in an acting career, I guess.
You mean Lich King looks like me I think 🤪
I loved the explanation! Please make more such videos!
Can you make a video about tuning validation? How to reliably test that tuning gives us the results we're looking for?
Will do
Very well explained, thank you.
Very clear explanation in its simple form, thanks @entrypointai
Nice explanation! I signed up for the masterclass, but it's 4 a.m. my time; it might be a challenge.
Hands down, the best video I've found for this topic
The most efficient and informative content on those terms and their respective usage. 👍
Oh my god your eyes 😍😍😍😍everybody deserves hot teacher😂❤
This is really well presented
very clear explanation, <3 from india
Love this video - thank you! Question: When RAG is used you say it shows up in the context window (i.e., where the user types in the prompt). However, my understanding / experience has been that once the knowledge is provided to the LLM via a vector database, the user can just type the prompt/query and nothing else is needed. Can you provide an example of a user's prompt/query when using RAG?
The knowledge has to be retrieved for each new user inquiry, and inserted into a prompt alongside it each time.
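To illustrate the reply above with a hypothetical helper: the user types only the question, but on every request the pipeline retrieves passages and assembles a prompt like this behind the scenes:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble the prompt sent to the LLM for ONE user query.
    The user only types `question`; the retrieval step supplies the context,
    and this assembly is repeated for every new request."""
    context = "\n\n".join(f"[{i+1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

For example, build_rag_prompt("What is our refund window?", ["Refunds are accepted within 30 days of purchase."]) yields a prompt where the retrieved policy text sits in the context window alongside the user's question, even though the user never pasted it in.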
Excellent material. I want to ask you a question: I'm trying to train the GPT-2 model (fine-tuning) for these purposes; do you see that as valid? On the other hand, do you have examples of how you have trained the model? It's very interesting what you say about how you can see good results with just a few examples. For instance, I want to train it so that it acts like a salesperson, and so that if it is asked about a certain type of product, it always responds with a particular logic. I think it's great in the final part when you say that the ideal model would be RAG with only variables, with the prompt somehow already trained.
I haven’t tried GPT 2, I would stick to more recent models for best results. For a sales agent that might be a good one to combine with RAG, to where relevant product info is looked up and inserted into the prompt, and then fine-tuning teaches the model how to “sell” it.