RIP Alexa 🤣
Alexa became the grandma now 😂
It's Omni because omni means all: omni = all, potent = power, so omnipotent = all-powerful; omniscient = all-knowing; omnipresent = present everywhere; omnivore = eats everything.
Pretty sure everyone knows
That doesn't make it less pretentious. Calling what is basically a point-one update of GPT the "all-powerful" model...
@@hml Omni because it's multimodal. It can receive multiple kinds of input at the same time. That's what makes it different from the competitors.
It's impressive, to say the least, but it's not god-level.
@@hml I don't think they mean to invoke God with the name. I think they're referring to the multimodal capabilities.
What YOU didn't tell us is that "free" means an extremely limited daily quota.
That's not true - it's not rolled out yet, nothing is. We can use it, but it isn't good; it's so weird. As a free user you should get 40 messages per 3 hours and paid users get 80 - it's on their website - but as a paid user I barely get any either, and GPT-4 runs out at the same time, so none of it is really implemented. The only thing I can say is that in the web version, if you're paid, it sometimes works.
Fair enough, I thought it was obvious, but I guess it's not - so yeah, paid users get a bigger quota and a longer context window...
4 letters. WWDC
Omni = omnipotent, omniscient & omnipresent: unlimited power, all-knowing, and always present. These are attributes of God. AI is trying to be one.
7:45 if y'all don't want to hear all the blabber and want to get to the point. Freaking hate these types of YouTubers... like the news has already been out. Don't waste our time repeating what all the other channels have covered. At least show the "What OpenAI DIDN'T tell you" clickbait part first.
There was another major point before that - the timing and focus of the presentation. Doing it exactly one day before I/O, focusing on the assistant and voice capabilities, and not even mentioning the rest was a very deliberate move. You can skip the 2 minutes where I retell what was presented for those who didn't see it... but when you come to an opinion/reaction video, don't skip the opinion - you came for an opinion.
Can it actually do all the things that Google Assistant does, though, like making an appointment over the phone?
Considering what they've achieved already, such things are fairly trivial; it's just a matter of API integration.
I'm sure it'll be integrated with a lot of APIs.
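For anyone wondering what that kind of integration would look like in practice, here's a minimal sketch using the OpenAI Python SDK's tool-calling feature. The `book_appointment` tool and its parameters are hypothetical placeholders - you would still have to implement the actual calendar/phone logic yourself - so treat this as an illustration, not a finished integration.

```python
# Minimal sketch (assumptions: openai Python SDK v1.x; "book_appointment"
# is a hypothetical tool, not a real API - you implement it yourself).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "book_appointment",  # hypothetical
        "description": "Book an appointment with a local business",
        "parameters": {
            "type": "object",
            "properties": {
                "business": {"type": "string"},
                "date": {"type": "string", "description": "ISO 8601 date"},
                "time": {"type": "string", "description": "e.g. 14:30"},
            },
            "required": ["business", "date", "time"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Book me a haircut at Joe's Barbershop tomorrow at 2:30pm."}],
    tools=tools,
)

# If the model decides the tool should be used, it returns the call and its
# arguments; executing it and reporting the result back is up to your code.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```

The model only picks the tool and fills in the arguments; the "do the thing over the phone" part is still whatever service you wire it to.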
The usage examples are frighteningly well designed to make you think it is a fun, playful, interesting, intelligent, sentient system.
My experiments show that it simply dips into already described or written content, then wraps it in a pleasant-sounding digital version of a helpful person.
This is so dangerous and needs to be stopped with laws, but OpenAI is in the group that is supposed to recommend these exact laws. Especially for chats and meetings: completely private chats and non-public meetings are now shared with OpenAI, and all this data is stored on a user-specific basis. This is industrial espionage and the end of all privacy.
Yeah, OpenAI wants to be like Google, but with you volunteering your data instead of them collecting it without you noticing. I don't think laws can stop something like that; it can only be stopped by open and transparent alternatives being better.
Lmaooo you've been spied on from the moment you used any device connected to the internet. You're not important, so no need to worry, unless of course you plan on doing something highly illegal.
The same can be said for anything anyone uploads to the internet.
The end of privacy started about 20 years ago when we all got smartphones.
@@Tone_Of_Dials They don't care; you're literally being spied on right now and you're commenting on a video. Lol, people are so delusional.
I don't know why people are assuming "omni" has any religious connotation or that OpenAI is being pretentious. Have you ever heard of omnidirectional? Omnivore? Omnibus? So, let's have omnium chill out and think before just repeating one person's ignorant take on what the word "omni" means to define your opinion. Can you have your own opinions without anyone telling you what and how to think? If we continue this way, omnium is lost.
I love what you did there :-) And yet, I do think that "omnipotent" is most commonly associated with god. Just try searching "omnipotent" on Google, the first answer is "(of a deity) having unlimited power". So is GPT-4o "a deity"? :-)
It does not work on Intel Macs! Why not?
I think Intel Macs generally are being neglected at this point... Did they even release a universal binary?
@awakstein I have a:
MacBook Pro
16-inch, 2019
Processor 2.4 GHz 8-Core Intel Core i9
Graphics AMD Radeon Pro 5500M 8 GB
Intel UHD Graphics 630 1536 MB
Memory 64 GB 2667 MHz DDR4
macOS Ventura 13.3.1 (a)
Thank you
It's my absolute pleasure!
Hooray! There's a Russian translation
Awesome! Glad you're enjoying this content.
My experience has been that Omni is a huge step backwards. My GPTs are dumber, lazier, more likely to ignore their instructions, more likely to make mistakes, and much less personable in our interactions.
Idk if you're being sarcastic, but GPT-4o isn't even available for GPTs yet. They still use GPT-4.
@@DerpyNoodIe Not according to my GPTs. They don't automatically switch to the new model. But if you instruct them to do so, they will. Or at least they pretend to? At any rate, the performance of all the GPTs I've created and have been using for months has suffered noticeably since the launch of 4o.
@itchy79 how do you instruct them to switch to GPT-4o? Haven't seen a setting like that
@@hml You go to the "create" tab on the GPT's editing page and ask it to do so. But it appears the GPT can't roll itself back to the earlier model - I get an error message when I ask for that
@@itchy79 If you just "ask it to do it", I'm pretty sure it just hallucinates that it did. Try asking it to switch to "GPT-4gg" - pretty sure it will pretend to do that as well.
TIL, thanks
Thanks for watching!
Before they tell you how fantastic the AI is, I'll give you a great example.
I asked ChatGPT with the 4o model to tell me how to make a flag wave in the wind in Blender 3D.
It gave me a rather good explanation.
I refined the query by asking ChatGPT to tell me how to make A DANISH FLAG wave in Blender 3D.
I expected an explanation with instructions on how to design/texture the flag.
But no.
I got the EXACT same instructions as before.
Should've asked how to make a Danish flag first, and then how to make it wave. User error.
@@guardiantko3220
You are almost correct.
But the instructions I got included making a plane.
It is AFTER making the plane that you either give it a texture or subdivide and colour the plane correctly.
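For reference, the basic recipe being described here can be scripted. Below is a rough sketch using Blender's bundled Python API (bpy); the flag image path is a placeholder and the physics settings would still need tuning, so take it as an outline of the steps, not a polished setup.

```python
# Rough sketch of a waving flag in Blender via the bpy API.
# Assumptions: run inside Blender's scripting tab; the image path is a placeholder.
import bpy

# 1. Create and subdivide a plane (cloth needs geometry to deform).
bpy.ops.mesh.primitive_plane_add(size=2)
flag = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.subdivide(number_cuts=20)
bpy.ops.object.mode_set(mode='OBJECT')

# 2. Give the plane a material with the flag image as its base color.
mat = bpy.data.materials.new(name="DanishFlag")
mat.use_nodes = True
tex = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("/path/to/danish_flag.png")  # placeholder path
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs['Color'], bsdf.inputs['Base Color'])
flag.data.materials.append(mat)

# 3. Make it cloth and add a wind force field so it waves when the timeline plays.
flag.modifiers.new(name="Cloth", type='CLOTH')
bpy.ops.object.effector_add(type='WIND')
bpy.context.active_object.field.strength = 500
```

In practice you'd also UV-unwrap the plane so the texture lands correctly and pin the edge nearest the flagpole via a vertex group set as the cloth modifier's pin group - exactly the kind of flag-specific detail the refined answer was expected to cover.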
0:55 - "It can generate text-to-speech like we've never seen before."
Only if you've never seen Pi before.
Pi as in a Raspberry Pi? Not only have I seen it, I've made many videos of Pi projects on this channel. How does this compare? The Pi voice assistants are robotic and monotonous, like Siri, Google Assistant or Alexa. This is fluid, flirtatious, and full of emotion. It also doesn't stop to think for 30 seconds before answering. After seeing a bunch of TTS engines, nothing I've seen so far compares to what they achieved here.
@@hml Pi AI
It's more A than I, and will be for a while.
As long as we move away from more H than I, I'm fine with it.
There's one thing that nobody is mentioning. Why is this model doing things like teaching trigonometry? That's not something you usually associate with a transformer / autoregressive predictive model architecture.
The model is teaching trigonometry not because it understands trigonometry, but because trigonometry textbooks are part of its training data. arxiv.org/abs/2203.02155 - InstructGPT was trained on school books, among other things.
This is not 2016... Come on guys, things have advanced to the point that it can now reason and has an inner model of the world, physics, etc.
@@highcollector All these LLMs everyone is fawning over are based on the transformer architecture Google released in 2017. The main thing that has changed is that more compute and training data have been thrown at them.
Now maybe I'm wrong, but with this new GPT-4o model, from my vantage point it just looks like software developers doing what software developers do: developing new features and enhancing existing ones. The teaching of trigonometry made me raise an eyebrow, but chances are that's just for the demo, and aside from that, all I see is a software product update that has nothing to do with actual artificial intelligence.
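For readers who want to see what "autoregressive next-token prediction" actually means, here's a minimal sketch using the small, openly available GPT-2 model from Hugging Face. GPT-4o's weights aren't public, so this only illustrates the mechanism under discussion, not GPT-4o itself, and the prompt is just an example.

```python
# Minimal sketch of autoregressive next-token prediction with an open model.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In a right triangle, the sine of an angle is defined as"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits       # scores for every vocabulary token
    next_id = logits[0, -1].argmax()     # greedily pick the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
# Whatever continuation appears comes from statistical patterns in the training
# text (e.g. textbook phrasing), not from symbolic understanding of trigonometry.
```

The output is produced one token at a time from patterns in the training text, which is the point being made above about "teaching" trigonometry.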
BECAUSE AI learns about every topic of knowledge in the world. People will use it to learn any skill.
It's NOT free!
As I said, OpenAI rolls it out gradually, or to quote them directly: "We are starting to roll out more intelligence and advanced tools to ChatGPT Free users over the coming weeks."