Another excellent video that makes you understand the fundamentals of an otherwise complicated subject.
Great video, Jessica, and so informative!! I’m working on a project now implementing Gen AI (gen fallback, generators). Identifying proper use cases is so important to yield the best results while thinking about the number of LLM calls.
Yes, we need to select suitable LLMs to handle each request in a cost-effective way, so that the cost of operation is lowered.
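As a toy illustration of that routing idea (the model names, prices, and heuristic below are all made up, not any real vendor's), easy requests can go to a cheap model while the expensive one is reserved for hard ones:

```python
# Hypothetical cost-aware router: names, prices, and the heuristic
# are placeholders, purely for illustration.
MODELS = {
    "small": 0.0002,  # assumed $ per 1K tokens
    "large": 0.0100,  # assumed $ per 1K tokens
}

def pick_model(prompt: str) -> str:
    # Crude heuristic: long or multi-question prompts go to the large model.
    hard = len(prompt.split()) > 200 or prompt.count("?") > 1
    return "large" if hard else "small"

print(pick_model("Summarize this sentence."))    # -> small
print(pick_model("Why? How? " + "word " * 250))  # -> large
```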
Can you make a video talking about smaller, more efficient models (Orca, Phi-2, Gemini Nano, etc.)?
Do they have a future, and if so, what does it look like?
Will more SOTA models leverage the techniques used by smaller models to become more efficient?
Or will they always remain separate?
There are pros and cons to each approach. Larger models are scaled in a way that makes their capabilities proportional to their parameter count, so larger models are smarter, and that will always be the case.
The two approaches feed off of one another, so improvements in one lead to improvements in the other.
It's cheaper, easier, and faster to iterate on smaller models, and any gains made along the way are applied to larger models.
Not sure if this helps. Anyone should feel free to correct me if I misrepresented any information.
Absolutely love this video. You really answer so many questions for a person who has to know how things work from the very beginning in order to learn a new skill. Thank you so much.
Excellent explanation! A minor note: the curtain analogy makes sense, but then you mentioned fine-tuning makes structural changes to the parameters, which is not accurate. It just changes the values of the parameters.
How does it change the values? Is it a token change? Basically it means that once you've fine-tuned your model, f(x) no longer equals y but actually z, right?
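Pretty much, yes. A minimal PyTorch sketch (the toy model, data, and target are invented for illustration) of the point above: fine-tuning leaves the structure alone and only updates the existing parameter values, so the same input can map to a new output:

```python
import torch
import torch.nn as nn

# A toy "pretrained" model: the structure (layers and shapes) is fixed.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 4)
y_before = model(x).item()  # f(x) == y with the original weights

# "Fine-tuning": a few gradient steps on new data (random here)
# change the parameter VALUES in place; no layers are added or removed.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
target = torch.tensor([[1.0]])
for _ in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()

y_after = model(x).item()   # same structure, new values: f(x) == z now
print(y_before, y_after)
```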
I once attended a whole-day IBM sales presentation in Delhi for a telco CRM/billing system. It was an educational experience more than a sales pitch. IBM sales is really good.
Thanks, Jessica, for this video; really eye-opening and insightful at the same time.
This is a great start to costing model runs, but I think you need to think/explain more along business lines, i.e., adding in all business files (Google/365 docs), business emails, and other business data: sales, cash flow, stock usage, forecasting usage of consumables (lettuces, coffee...), all the things a business works off.
Very interesting and useful. Thanks for explaining so many topics!
Great video: really clear and professional (unlike a couple of the saddos commenting). Thanks!
For a moment I thought she was AI-generated.
Truth
Don’t blame you. Pretty.
Yeah, and she looked fine-tuned!
nope u didn't
😂😂😂
You walk into a dealership & ask a salesperson how much a vehicle will cost.
Answer: This vehicle will cost you whatever you're willing to pay.
Anyone notice she kept on talking *while* writing? Women are real multitaskers. I swear to God my brain is 100% monotask, and I could never, ever write AND do anything else. The apex of my manly monotasking is being able to talk while I'm driving (but I can only talk about light subjects; if you bring up anything a little more involved, I will just not follow you).
I always need to write while I talk.
Great and concise, thanks! But... is she writing from right to left? 🤔
I think customized language models will become more important over time. Companies will want artificial intelligence applications specific to their fields of activity, and individuals will want artificial intelligence applications specific to their special interests. Not to sound like I'm telling fortunes, but with improvements in cost, customized smaller models may become more dominant in the market.
What types of AI apps would individuals want, apart from personal assistants, that would need customizing?
I very much agree with you... Google could be much more efficient by giving specific details.
@Cahangir Industry-specific LLMs. If I am a pancreatic cancer research company, I don’t want to know about the Renaissance in Europe.
Very good video, thanks a lot!
Incredibly helpful video. Please make more!
Excellent explanation. A great understanding of how AI works
There are mistakes in the information provided.
PEFT and LoRA are separate things.
Model size is influenced mostly by the choice of numeric precision and how you compile the GPU kernel.
...
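For anyone comparing, here is a minimal LoRA-style sketch (layer sizes and rank chosen arbitrarily) of the low-rank adapter idea usually grouped under PEFT: the pretrained weights stay frozen, and only two small matrices, whose product is added on top, get trained:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update."""
    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        # Only A and B are trained: far fewer parameters than the base layer.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ (self.B @ self.A).T

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} of {total} parameters")  # a small fraction
```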
Great explanation Jessica
Daaaaamn woman. Good explanation.
very good - thanks
Awesome, 100% focused :D Thanks for the professionalism :D
What software solution powers this mirrored whiteboard in front of you? It’s awesome and I want to use it!
I think it can simply be done by rotating/flipping the video itself :)
Excellent explanation. A solid understanding of how AI works. Thanks IBM
Bot? There is another comment saying the exact same thing. Interesting... I'm noticing a pattern; I just noticed this on another video. Not knocking whoever's behind doing this, but if you're going through the trouble of using different accounts, why use the exact same comment? Anyway, I'm just halfway curious. Don't really care, tbh. I have other reasons behind my curiosity, not necessarily bad; I just couldn't resist addressing it and prying to a degree, not to expose, but... eh, I don't know. I don't wish to elaborate further.
Very nicely and intelligently explained. 3:49 pm (Christmas Day 2023)
Great video
They used an interesting technique to record the video.
Hi, please help me: how do I create a custom model from many PDFs in the Persian language? Thank you.
Stumbled upon this and feel like asking: how did IBM miss the LLM train? Watson was very impressive, IMHO, and very much ahead of its time. How could IBM not capitalize on it? Why was it OpenAI that ended up with the language-model breakthrough? Which innovation did OpenAI have that IBM could not think of? Was it RLHF?
You can easily google the answer to your question
So precise..
How can I speak to someone at IBM about working together?
"Why LLMs Cost So Much" (noun clause, no question mark) vs.
"Why DO LLMs cost so much?" (question form)
And Phi-2, with 2.7 billion parameters, proves that we have spent a lot of time and money on compute that is wasted because of bad data.
With better data, a Phi-2-scale LLM can be equivalent to GPT-3 with 175 billion parameters, and there is still the possibility of reducing an LLM to 1 billion parameters with the same capabilities.
There are 1B models on Hugging Face made for RAG.
Does IBM have anything to do with this AI boom?
Small and powerful models will win out. Phi-2 and Orca 2 are some good examples.
How much of this can be done with GPTs?
A GPT is just one type of LLM.
If you cannot find the best man, take the next best.
Looks like it all depends...
Nice
I think that an LLM or generative AI looks like a spreadsheet, considering that this type of engine feeds tokens into itself and spells tokens out. These tokens look like they are iterated by the LLM or generative AI, because these are also programs that use computer iterations. And the cost of using an LLM or generative AI can be estimated with a calculation over time, number of tokens, and weight of meaning, though I know this calculation is just a user's approximation. Thank you for the nice video! I'm Korean.
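That time/number-of-tokens estimate can be sketched in a few lines (the per-token prices below are made-up placeholders, not any vendor's actual rates):

```python
# Rough per-request cost: tokens_in * price_in + tokens_out * price_out.
PRICE_PER_1K_INPUT = 0.0005   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.0015  # assumed $ per 1,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int, calls: int = 1) -> float:
    per_call = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return per_call * calls

# Example: 10,000 daily calls, ~800 prompt and ~200 completion tokens each.
print(f"${estimate_cost(800, 200, calls=10_000):.2f} per day")  # $7.00 per day
```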
😗
THEN A COMMON PERSON CAN'T BUILD AN LLM FROM SCRATCH???
🙏🏼
LLM IS BLA BLA BLAAAAA??????
What makes them so expensive? Simple. Their architecture is not right.
She is 36 years old, isn't she?
1:19 So IBM does not believe consumers need to have their data protected.
Nancy Pi did it first 😤
It is not intelligent to pay for AI! It’s simply marketing!
How is it that these videos still give such basic, generic examples? Use cases, for example. She couldn't find different use cases that an enterprise might have? She had to give the example of a car dealership???
People will pay for that??? 😅😅😅
🦾🥳
Anthropomorphism makes you forget you have another (albeit sophisticated) search engine. Worse, the model can reinforce that idea by using personal pronouns.
Amazon Bedrock!!
Drink from de bottle
So sad that people can't even write a speech anymore.
Kinda boring explanation.