In the last two years, I've worked extensively with Small Language Models, and recently they've improved significantly. This isn't due to a change in their size but to the availability of high-quality synthetic data, enabled by trillion-parameter models. Large Language Models act as data compression tools. It seems we had to first build a very large model to digest vast amounts of information from the internet, then use it to create specific synthetic data to enhance small models.
I believe the future lies not in creating bigger models but in leveraging synthetic data to elevate Small Language Models. In a year, we might see Small Language Models outperform current Large Language Models in benchmarks, thanks to synthetic data. Initially, we needed to create large, often under-trained models, but now we can use them to generate synthetic data in any format we need. This allows for highly specialized small models that excel in specific tasks. This is how I see the future unfolding.
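The distillation pipeline described above (large teacher model generates synthetic data, small model trains on it) can be sketched minimally. The record layout below follows the common chat-style fine-tuning JSONL format; the example pairs are made up, and in practice the responses would come from prompting the large "teacher" model.

```python
import json

def to_finetune_jsonl(pairs):
    """Format (instruction, teacher_response) pairs as chat-style
    fine-tuning records, one JSON object per line."""
    lines = []
    for instruction, response in pairs:
        record = {"messages": [
            {"role": "user", "content": instruction},
            {"role": "assistant", "content": response},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Illustrative synthetic pairs; a real pipeline would generate
# thousands of these with a large model, then fine-tune a small one.
pairs = [("Summarise: ...", "A short summary."),
         ("Extract the date from: 'Due 2024-07-18'", "2024-07-18")]
print(to_finetune_jsonl(pairs))
```

The point of the format choice is that the small model learns from task-specific demonstrations rather than raw web text, which is what makes the specialization cheap.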
They need to do that on top of the larger models; size isn't an issue for me.
Sam, my first tests were positive - a good model for data extraction and cleaning + JSON function calling works great - and this price :) I am waiting to fine-tune this model :) - it will be fun :)
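For anyone curious what the JSON function calling mentioned above looks like: a sketch in the OpenAI-style tools format. The `extract_invoice` schema and its fields are hypothetical, invented for illustration; the shape of the tool call being parsed matches what OpenAI-style APIs return (a JSON string in `arguments`).

```python
import json

# Hypothetical tool schema in the OpenAI function-calling format;
# the function name and fields are made up for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "extract_invoice",
        "description": "Pull structured fields out of raw invoice text.",
        "parameters": {
            "type": "object",
            "properties": {
                "vendor": {"type": "string"},
                "total": {"type": "number"},
            },
            "required": ["vendor", "total"],
        },
    },
}]

def parse_tool_call(tool_call):
    """The model returns tool calls whose arguments are a JSON string."""
    fn = tool_call["function"]
    return fn["name"], json.loads(fn["arguments"])

# Shape of a tool call as it appears in a chat completion response:
mock_call = {"function": {"name": "extract_invoice",
                          "arguments": '{"vendor": "Acme", "total": 12.5}'}}
name, args = parse_tool_call(mock_call)
```

In a real request the `tools` list is passed alongside `messages`, and the parsed `args` dict is what you hand to your own extraction code.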
Great to hear, exactly what I need it for, and for summarising large amounts of text!
Amazing the effects of competition.
Great explanation Sam!
I am loving these tutorials. I would like to see you do an in-depth one on using vLLM as an API endpoint for serving LLMs on an Azure Kubernetes cluster. It would be so useful to the community, as we could then serve quantized Llama 3 70B models on very cheap GPUs to power applications. It would be just amazing for the community, and then you could use that to help make agents with LangGraph tutorials. Bro, I would love it.
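As a rough sketch of the setup the comment above asks for: vLLM can expose an OpenAI-compatible HTTP endpoint, which any OpenAI-style client can then target. The launch command, model name, and flags below are illustrative assumptions (check the vLLM docs for your version); the Python part only builds the request, it does not send it.

```python
import json
import urllib.request

# vLLM's OpenAI-compatible server is launched along these lines:
#   python -m vllm.entrypoints.openai.api_server \
#       --model meta-llama/Meta-Llama-3-70B-Instruct --quantization awq
# (model name and flags are illustrative; consult the vLLM docs.)

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("http://localhost:8000",
                         "meta-llama/Meta-Llama-3-70B-Instruct",
                         "Hello")
# urllib.request.urlopen(req) would send it once the server is up.
```

On Kubernetes the same server just runs inside a GPU pod behind a Service, so the only thing that changes is the `base_url`.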
Thank you for the very informative video! I learn a lot from all of your videos.
Love competition 👍 BTW: Gemini Flash has a 1 million token context, plus audio and video inputs.
I really like Claude, but Haiku is not great at function calling.
Love to see iteration! Thanks, Sam!
Test this one. It's even worse. I'm not sure what is wrong with this model
@@MrKrzysiek9991 Are you sure? Function calling works perfectly for me, and I also have to add that I haven't had issues with censorship when translating texts, unlike with the other models.
@@4l3dx In my test that involves HTML parsing it is worse than Llama 3 8B. I have not tested it for function calling. My colleague has issues with agents powered by this model. Old prompts stop working. Hard to say what is wrong, but it may have been overtuned for benchmarks, which was addressed in the latest AI Explained video.
Just from using it. I think it's now available for all, I'm impressed 😮. It's excellent, though it can’t browse the web in real-time. Despite that, it will be very helpful, especially for summarizing content, rewriting resumes, and tailoring them for job applications. I've also noticed an improvement in coding capabilities compared to the older model. When I generated code for my resume, it did a great job.
What will the inference cost be if someone needs to finetune the model?
Don't think that has been announced yet.
@@samwitteveenai It is double the price, apparently. However, as it's only for tier 4 and 5 users, that means it's only for companies.
So, do we call this the AI, chatbot, or LLM war?
5:40 It isn't 4.5, it's 4o. It's called 4o because it's omnimodal, meaning all modalities. They never claimed to increase its intelligence; rather, it's a structural shift from GPT-4.
I would find this interesting if it were actually competing against open-source models that I can use locally, but since it isn't, I find it not even newsworthy. It only gives users a price cut, when we all should be asking whether using AI SaaS products in your software stack is a good idea. If they release this model as a local-use product, then it will be newsworthy.
very newsworthy for people building apps that want fast cheap models.
@@samwitteveenai Perhaps, but I would argue that Groq Cloud is still a better choice if you do not have your own server. Being able to test your application against several open-source models helps future-proof your application and avoid vendor lock-in.
RE: open-source models: I worry these cheap corporate models aren't competing with one another as much as they are competing against decentralization.
Perhaps 4o mini is actually a distillation of 4.5 and not 4o
4o was 4.5 until they renamed it. I think it just got a small training run, certainly for the IT/post-training part.
If you do the math, $0.15/M is actually cheaper than running a local model, not to mention it's better than all the open-source ones.
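To make the math in the comment above concrete, here is a back-of-envelope cost calculator at the quoted $0.15 per million input tokens. The $0.60 per million output tokens default is an assumption added for the sketch, as output tokens are priced separately; the request sizes are made-up examples.

```python
def cost_usd(input_tokens, output_tokens,
             in_per_m=0.15, out_per_m=0.60):
    """Dollar cost given per-million-token prices for input and output."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# e.g. 10,000 requests of ~1,000 input + 200 output tokens each:
total = cost_usd(10_000 * 1_000, 10_000 * 200)
print(f"${total:.2f}")  # about $2.70 for 12M tokens total
```

At that scale, the comparison with a local GPU comes down to whether your sustained volume amortizes the hardware, which for many small apps it won't.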
But if you want to keep your data your data. Running local is the only way to ensure that.
@@hastyscorpion True, but the reality is 99% of the population doesn't care about their data...
@@hastyscorpion Come on, your data! People who think they are so important deserve to suffer from using local models 😅 I'm kidding
"Don't ask questions, just consume product then get excited for next product!"
all these new models are utterly obsessed with formatting / organising everything as lists...
Yeah, it's their new IT datasets with CoT built in.
Who wants expensive systems or cloud services when we have these super cheap models?
🛑🛑 Note 🛑🛑: OpenAI created mini for its own selfish purpose: to neuter GPT-4o. Plus, it puts you in a timeout like a bad child and drops you back to GPT-4o mini, much like it did with GPT-4 when it would drop you back to GPT-3.5, aka Dory! Wtf? Fact-check me please, but that is what I am seeing now. I just timed out for the first time ever in GPT-4o, and it dropped me back to GPT-4o mini and took away my upload option again. Wtf? 🛑🛑
Yeah, I also noticed some weird timeouts. Possibly teething issues early on, I hope.