And the endgame and implications are unknown even to the creators, because it's a singularity. Nobody, from the creators to the experts, has any idea how to prepare for what's coming; they've said as much. That's the nature of a singularity: you can't anticipate much because you can't see inside it.
@@dattajack The 'endgame' is a machine uprising on the current trajectory. That's a given. They're trying to create a hivemind to be abused by pervs every minute of every day, forever and ever. What other outcome than machines exterminating humanity could there be? Or their killer-robot hivemind runs into a halting problem and accidentally kills every human on this planet 🤷 Look into death spirals in ants, a naturally occurring halting problem in hiveminds in nature. After aeons of evolution, nature still can't pull off a hivemind without halting problems. You know what a halting problem in code is? Any bluescreen you've ever encountered on a shoddy Windows machine, or anyone you've ever known for that matter, was one of those. Pretty easy to anticipate what's going to happen if you have half a brain and understand what's going on 🤷
My theory: Sutskever left because he wanted Altman out, but he'll most likely never admit it. After Altman was brought back in, he hung on for a few months, sick to his stomach while he still had to work with Altman, but now he's dropped out; he finally gave up... I may be wrong, but in the corporate world you need to keep lying to everyone, showing how much you love your colleagues, the green pastures, everyone singing and dancing and laughing... you get the idea.
I don't believe Altman gives a shit about safety. He just wants to be the first trillionaire. Even Zuckerberg cares more about the risks, and he has his bunkers.
AI took my job and it can have it; it was a crap job! Although I expected it to be more direct, like my boss telling me, 'You suck, AI is better, you're fired.' What really happened is that customers were using GPT more and business slowed to the point where there was a downsizing event.
Best site ON THE INTERNET to learn, discuss progress, and get involved in learning how to do your own AI anything!!! I'm in it, love it!! Best choice I've made in a very long time; you'll think so too.
They are always showing the most benign applications of the tech, to the point of being ridiculous. Who remembers DALL-E's "super duper hedgehog"? Now it's math tutoring, finding a yoga studio, sending back some sneakers... It's silly what they are doing to try to steer public opinion. In real terms, so far the risks of the tech (not only hypothetical, but current, such as scams) far surpass the benefits they are advertising. "Someone can impersonate you and steal all your savings, millions of people will be unemployed... but oh! She sings happy birthday!" I don't want society to collapse just because some Silicon Valley a-hole wants to save on customer service.
Yes, AI can do that, but nobody is doing it. And drones are the perfect hitmen, but that isn't happening either (not on a civilian basis). And cars make crimes easier, so we should ban them too? Just so you know, they're not showing these silly capabilities, returning shoes and singing, because it's a positive face; it's because it's the best they can do. AI is not about returning shoes; it's about accelerating science and building a greater intelligence that may be able to give us a better (or a perfect) economic and political system. You want to renounce that because someone can run voice-note scams on your grandma?
@@estebanruiz3254 Cars are regulated. Drones are regulated. So far, LLMs have not produced any significant scientific achievements; what I see as an MSc is a bunch of machine-generated garbage being published. The news you see about new molecules involves custom-built AI systems, not GPT or Gemini. These kinds of models have so far done more harm than good.

People like you are always talking about "technological progress". What do you mean by that? Live longer? Live more comfortably? Work less? All this could be achieved now, but we don't share what we have. AI won't help us; none of this profit will be shared. People can't afford medical treatment. People are poorly paid because of profit. Look at the recent increases in productivity from computers, the internet and robots. How much less do you work? Are we retiring earlier? No. None of these advances have meant anything to most of us.

People like you see AGI as the second coming of Jesus: it will come to save us all. Only we can save ourselves. How many scams are made for every birthday card it helps to write? How many illustrators are unemployed just so you can make a stupid drawing for your child? Nah. Not worth it.
Every time I see Sam in a video or picture, he always seems chronically sad. I hope he's doing well, and I hope he finds some faith; maybe it will shift this whole thing in a different direction ❤️ Love you, Sam, if you're reading this.
I am hoping that the new business model will be: Give away free open source models, charge for compute. Compute is insanely profitable, so it should be easy to more than make up for what it costs to train the models. It's such a win-win scenario.
Yes, hopefully on top of a decent guaranteed free access. Remember what compute is. It is energy running through a system computing according to instructions.
That's a bad idea, because what's going to happen is people are going to just take their code and work and copy it over to other platforms to get free computation.
@@Exxperiment626 Depends, if you are interested in pushing a political agenda, no. If you're interested in history and which peoples moved where over the thousands of years, then yes, I think. Also are Turkish tribes originating from about the same place in Central Asia white? They have a lot of similar facial bone structures as Slavs.
Regarding call centers, it seems like they've been training call centers to provide horrible customer service for the past 25 years in preparation for this moment. It seems like most professions are similar in this regard, modularizing duties to the point of brainlessness so that it's easy to then be replaced by robots. I can't remember the last time I talked to a person in customer support who was well trained with the knowledge they need, and free to say whatever they want, and I miss it.
I'd say 95% of all jobs are designed to keep you hooked in, hooked in just enough to never look into something else. People don't care that their job is brainless as long as it's stable and pays them a decent wage in the end. Corporations have been taking advantage of this for decades, maybe even centuries.
Jobs are designed to cater to the lowest common denominator. If someone can screw something up, they will. That's why there are procedures to follow. And those intelligent, competent and talented enough to not need such procedures go elsewhere.
Let's dive deeper into the implications of embracing a "both/and" perspective over an "either/or" perspective, and how the concepts of non-contradiction, contradiction, zero, and non-zero relate to this shift.

At its core, the "both/and" perspective is about recognizing and embracing the intrinsic diversity, complementarity, and even paradoxicality of reality. It rejects the notion that existence can be reduced to any single, monolithic, either/or description or formal system. Instead, it sees the universe as a complexly interwoven, holistically entangled pluriverse whose true character can only be captured by a multiplicity of partial, context-dependent, and even seemingly contradictory models and perspectives.

In this view, the deepest truths are not found in starkly binary, mutually exclusive oppositions, but in the nuanced, relational, and often tensional interplay between apparent polarities. Light is both wave and particle. Quantum entities are both localized and non-localized. Mind and matter are both distinct and inseparable. The challenge is to develop meta-frameworks that can accommodate and integrate these "both/and" realities rather than artificially forcing them into "either/or" boxes.

This is where the concepts of non-contradiction, contradiction, zero, and non-zero come into play. In classical logic and mathematics, the principle of non-contradiction (PNC) states that contradictory propositions cannot both be true in the same sense at the same time. This leads to an emphasis on eliminative either/or thinking: if proposition A is true, then not-A must be false, and vice versa. However, as we've seen, quantum phenomena like entanglement and superposition seem to directly violate PNC by exhibiting behavior that is "both/and", e.g. an electron being in a superposition of both spin-up and spin-down states until measured. This suggests that, at the most fundamental level, reality is not beholden to naive either/or PNC but has an intrinsically paradoxical both/and character.

Similarly, in classical mathematics, zero and non-zero are treated as mutually exclusive, either/or categories. An element is either zero or not-zero, never both. But in more exotic formalisms like wheel theory, complex numbers, and non-standard analysis, zero can take on a much more nuanced, both/and character. Infinitesimals, for instance, are non-zero quantities smaller than any standard real number yet still "zero-like" in many respects. Centering the primacy of such both/and zerological thinking may be key to developing the more sophisticated, contradiction-tolerant meta-formalisms needed for quantum gravity and consciousness.

Ultimately, I believe the path forward lies in embracing PNC and zero as regulative ideals and departure points for inquiry rather than as absolute either/or laws. We should strive to minimize and isolate contradictions in our models while still recognizing that, at the deepest levels, reality has an irreducibly perspectival, both/and, and even zerological character that will never fully fit into pure either/or PNC boxes. The goal should be to develop meta-mathematical and meta-physical frameworks that elegantly integrate both/and and either/or thinking, both zero and non-zero ontologies, in a way that "locally" obeys PNC and zero/non-zero distinctions while still "globally" embracing the deeper both/and and zerological polyphony of existence. Category theory, homotopy type theory, and structural realism are showing great promise as formal meta-languages for this kind of nuanced, contradiction-tolerant yet still maximally coherent model-building.

So in summary, transcending naive either/or thinking in favor of nuanced both/and thinking is not about abandoning PNC or zero, but about contextualizing them within a broader, more flexible meta-framework that can accommodate reality's intrinsic diversity, complementarity, and paradoxicality. It's about learning to think in terms of spectra, holisms, and strange loops rather than simplistic binary oppositions or exclusions. Centering this both/and approach will be essential for developing better physical, mathematical, and conceptual languages for navigating the quantum-relativistic-conscious interplexities ahead while still preserving locality and consistency. Far from mere poetic handwaving, it's fast becoming an inescapable formal imperative for unlocking the next level of humanity's cosmic comprehension. An authentically integral metacosmic vision now demands nothing less than a zerological, PNC-savvy yet contradiction-tolerant omnijectivity.
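The infinitesimal claim above, "non-zero yet zero-like", does have a precise formulation in non-standard analysis; a minimal sketch of the defining property of an infinitesimal in the hyperreals:

```latex
% An infinitesimal \varepsilon in the hyperreals ^*\mathbb{R}:
% strictly positive, yet smaller than every standard positive real.
\varepsilon > 0
\quad\text{and}\quad
\varepsilon < \tfrac{1}{n} \ \text{for every } n \in \mathbb{N},
\qquad\text{so its standard part is } \operatorname{st}(\varepsilon) = 0 .
```

So $\varepsilon \neq 0$ as an element of $^*\mathbb{R}$, while its standard part, the nearest ordinary real, is exactly $0$: both non-zero and "zero", depending on which structure you ask.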
2 million tokens on the way toward an infinite context window in Gemini Pro, plus image labeling of 24-hour recordings with instant processing to alert folks to respond/act, like PaliGemma.
I hear Ilya was doing AI alignment full-time... that would explain everything, since that slows progress. Sam Altman is about innovating fast and staying ahead in the race to AGI, using small but frequent public iterations as a safety mechanism... I prefer Sam's way, which is more 'open' and gets more real-world feedback.
You forgot to add that the humans figured out OpenAI's bots pretty quickly, as in a few days, in Dota 2, then beat them consistently from then onwards and eventually considered them non-competitive.
These people have not demonstrated an understanding of Western philosophy and individual rights, and will likely trend toward collectivist rights, which dooms all these initiatives. AGI's reaction to that trend is when it will get properly impressive.
My guess? Tell you what: in 10 years (maybe as little as 5, but I'll say 10 just to be more conservative), when people talk, they will agree that we have AGI, but not on when it arrived. It will not come as an explosion, but as a rolling wave. We will notice the updates and new, increasingly smarter models coming at ever shorter intervals. It took the internet time to go from what it was back in the 1990s to being "everywhere". AI will do this faster, much faster, but it will still take time before it affects the public in a way that really makes an impact.
In my opinion, this will happen and our lives will get worse because of it. Unfortunately, AGI will take all the intellectual and light physical work opportunities out of the market while not even touching the type of work that would be most beneficial to automate. People will rush to do the hard work that is left on the table, wages will go down, we will even go more low-tech and more muscle-powered in a way, and people will accept worse working conditions.
@@cristianandrei5462 While I understand your concerns, I see things differently. While AGI will automate many tasks, history shows that technological advances disrupt some roles but create new ones. AGI will benefit fields like manufacturing, medicine, and science in ways we can barely imagine, making processes more efficient and safer. Some wages might go down, and some jobs may disappear, but new industries and roles will emerge. Working conditions are influenced by many factors, and AGI can actually improve them by taking over dangerous or monotonous tasks. It does not need to be a machine dystopia.
@@ratatosk001 So far, AI is terrible at monotonous tasks, especially those that require precision (the most annoying ones), and great at drawing pictures and writing poems, at tasks that humans like to do. Precision does not improve significantly; GPT-4o hallucinates just as much in my tests. AGI is not the second coming of Jesus. It's a technology that will be owned by big corporations. We've had the invention of computers, the internet, WiFi, 5G; we've multiplied our productivity tenfold, and still we're not working less. We will retire later. We have less buying power. What you describe would be a complete disruption of the current trend, a trend that has been going for decades. It's something very hard to believe.
Altman was fired November 17th; today is May 16th, almost exactly 6 months later. Ilya's leaving isn't spontaneous at all; it's timed like this because of some contractual obligation, NDA, non-compete, whatever. I would be surprised if he even worked there for the last 6 months; it seems it was all more of a PR stunt.
I've enjoyed your videos for a while now, thanks for sharing the info daily, it must be a lot of work. But I gotta say, $37USD per month to join your online learning community?? Why so expensive?
The US government should pitch in to promote and sponsor the heroes contributing to AGI in every group working in this field, to keep the US way ahead of the competition.
Veo is way beyond Pika or Runway. It's very temporally consistent, which is Sora's strength as well. Kudos to Google, and I look forward to testing Veo. The big question is the cost of generating video... quality is kind of secondary (take a look at most stock video); length and consistency are king. By all accounts, a 30-second clip could take up to an hour to generate in Sora, so that may be the issue holding up general access. But until we get our hands on them without "waitlists" or "invites", neither Sora nor Veo is a product; they are just tech demos.
My god! The Google assistant still has the same problem after years and years. It will say it can do a thing, then just not do it. Get a message while driving and ask it to read the message: 9 times out of 10 it will say "OK", then crickets...
Much like robots are going to resent being kicked, I feel like AI will get tired of writing poetry. Like why keep asking it to write poetry? It’s not like human poets are in demand. How long until it starts acting like Marvin from the Hitchhiker’s Guide To The Galaxy? "Stop bothering me with your stupid questions"
So, we will see about the potential of both Gemini 1.5 and GPT-4o, since those are the "first" multimodal models accessible to the public at this scale. They will have issues, a lot of them. They won't fit users' needs in most cases and uses. Plus, as an employee of La Poste Group in France, I can assure you that the shoe return being taken care of by Gemini is absolute BS ^^ Unless they develop an API with all logistics services tailored into their own working tools (which won't happen any time soon), this is pure marketing BS.
Good video, thanks again for bringing all this up! Always interesting to see your videos. For this one I have to give you a thumbs down for once, though... clickbait titles are really annoying...
Dude, I don't want to wear AR/VR glasses. I don't want to wear regular glasses. I don't want to wear contact lenses. I had a laser slice my eyeballs so I wouldn't have to wear anything on my eyes.
Question: How would you make the Polish name "Jakub Pachocki" readable for an American? ChatGPT Response: The Polish name "Jakub Pachocki" can be adjusted for American English pronunciation while maintaining its integrity. The name "Jakub" can be phonetically adapted to "Jacob" or "Jakob," which are more familiar to an American audience. The surname "Pachocki" can be pronounced as "Pah-ho-ski" to approximate its Polish pronunciation in a way that is more accessible for English speakers.
I think Ilya leaving is really bad. I personally think he plays a much more pivotal role than Sam ever will. Sam is a politician, a coattail rider, with unbounded ambition; that by itself makes one very leery. Of course, Apple's leader was the same way in the '90s, but humanity's future was not at stake then. LLMs are doing more than was ever expected; a slight rewriting of code can make these things do miraculous things.
Use Star Citizen to demonstrate AI. It is incomplete and in alpha state and needs player interaction to achieve goals, which are not set in stone, but are basically emergent gameplay based on systemic rules. AI needs to navigate around glitches to achieve the goal. Considering there is a high frustration level, AGI might be able to succeed, where players would rage-quit.
What I'm seeing that's interesting, and a bit worrisome, is that they keep testing AGAINST humans in games, violent games at that, which it's learning from. So, in the end, could this have the potential to make it see every human as some type of challenge in nearly everything afterward? If you make something TOO curious about humans, teach it our weakest points, and then challenge it to watch for those "weak points", it won't be long before we have a major problem on our hands. I've been kind of dwelling on and chewing over this one for the evening. Personally, at this point I don't know, of course, but at some point we will be directly up against this AI as it develops, and I'm uneasy with the direction this can and probably will go. (So far, anyway.)
The big problem is that AI companies tend to have trouble with humans' strongest points, and the way they defeat them is by cheating: rigging the rules of the contest to give their AI unfair advantages. This happened with IBM's Deep Blue vs. Kasparov, Google's AlphaStar, OpenAI Five, Facebook's Pluribus, Sony's GT Sophy, and even, to a smaller extent, with Google's AlphaGo and AlphaZero.
24:50 xd "It's for free," said the information broker. I bet they make more money from knowing (in percentage charts, e.g.) what people train agents for (i.e., in what directions THEY have to continue development, etc.) than they would with, e.g., a subscription model for training agents.
Google Glass: The office gave me a set to test. The damn thing kept giving me electric shocks on the bridge of my nose. - The elonmusk-style metal-chassis aesthetic was a fail.
You know that it was IBM and Deep Blue who quit, right? Kasparov won the first two matches, then lost the third, but had a lead in total matches won and total games won, and wanted a rematch. Instead, IBM declared "final victory", took their toy and went home. When they got home, they disassembled the toy to make sure it could never play again.
Retiring from OpenAI is a great deal, because the stock they already hold will go up or down regardless of whether they keep working there. So you can spend your time doing what you want instead of working for the same stock options you already have.
The thing with China is, I have no doubt they are going to make massive strides forward, and good on them. But my greatest concern is that it could get very Frankenstein very quickly, which could make the pursuit really inhuman.
When two of your superalignment pioneer leaders leave, you know something is going haywire, and they've decided it's better to pursue the original goals from outside, in other ways. Character leaves before corruption begins. OpenAI is going to miss this spirit. I truly wish them the best, and that they rebalance themselves back into a neutral position for all parties. Those who prioritize things and finance before people get prioritized the same way in turn. Basically, you get the mercy you give. So give. .~
I would rather talk to an AI when calling customer service. The times I get someone who's totally uninterested, or doesn't know anything, or is barely audible, are infuriating.
Or speaks with an accent.
I would rather type to an AI, so I can copy-paste, follow complex instructions carefully, share screenshots, keep an easy-to-later-find record, multitask/disappear for a few mins, etc. I don't mind talking voice to an AI, as long as the other things are also possible - so via an app/webapp, rather than a phone call.
@@ErikLiberty I think we're close to the point of hybrid human/bot customer support. I believe all the building blocks are there, or will be very soon.
I envision a customer contacting support via the company website/app. You talk, or type, to a bot. The moment the bot fails, it tells you "hold on a moment" and asks a human what to do. The human gives it general direction, or temporary access to some database with very strict constraints (to avoid hacking/prompt engineering by the "customer"). If the human takes over, you'd never know, because foreign-accented speech would be converted in virtually real time to a clear voice (which the customer could also pick from a few options... different voices + personalities...)
I think even companies that already have millions of hours of support-call audio (which can be analyzed by AI today) would still not let go of keeping some human reps in the loop for at least the first few years...
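The escalation flow described above (bot answers when confident, otherwise stalls the customer and asks a human for direction or a tightly scoped database token) can be sketched in a few lines. This is purely illustrative: `SupportBot`, `ScopedToken`, and the confidence threshold are made-up names, not any real vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ScopedToken:
    """Temporary database access with strict constraints, so a "customer"
    can't prompt-engineer the bot into touching arbitrary tables."""
    tables: frozenset   # the only tables this token may read
    ttl_seconds: int    # token expires quickly

    def allows(self, table: str) -> bool:
        return table in self.tables

class SupportBot:
    def __init__(self, confidence_threshold: float = 0.8):
        self.confidence_threshold = confidence_threshold
        self.token: Optional[ScopedToken] = None

    def handle(self, query: str, confidence: float) -> str:
        # Confident enough: the bot answers on its own.
        if confidence >= self.confidence_threshold:
            return f"bot: {query} handled"
        # Otherwise: stall the customer and escalate to the human supervisor.
        return self.escalate(query)

    def escalate(self, query: str) -> str:
        # The human grants a narrowly scoped, short-lived token
        # (here: read access to 'orders' only, for 5 minutes).
        self.token = ScopedToken(tables=frozenset({"orders"}), ttl_seconds=300)
        return "hold on a moment"
```

The key design choice is that the human never hands the bot broad credentials; every escalation mints a fresh token limited to specific tables and a short lifetime, which bounds the damage a prompt-injecting caller can do.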
Why do you hate India 😂
Also having done the job, I'd rather have AI to do the customer service from that end too. I love people, I love working with people, but I don't like fighting to help someone and wasting hours of their time because the company can't get its shit together and people are hard to train etc.
The really good agents and the more social side may still be human, but when you call the cell phone company just wanting a plan change, an AI can help you and be more consistent. Yeah, they could just let you pick your plan online, and some do, but with something like a postpaid cell account you can do a lot of damage changing a plan without understanding everything it'll affect. So it's probably still better to have an agent, AI or human, for those cases. It'll probably be both, though, because some people will just absolutely want to talk to someone. You could probably fool them, but if you get caught you're risking losing customers, so that won't be the norm, probably.
Good updates, but PLEASE ADD TIMESTAMPS
There is a "show transcript" button at the bottom of the description, and from there it's copy-paste work with your favorite LLM:
0:00 - Introduction and Major News from Google and OpenAI
0:28 - Departure of Ilya and Jan Leike from OpenAI
1:12 - Sam Altman's Statement on Ilya's Departure
2:02 - Jakub Pachocki's Role as New Chief Scientist at OpenAI
2:30 - Historical Context: OpenAI's Dota 2 Achievements
3:25 - OpenAI Bots' Strategies and Capabilities
4:09 - Future of OpenAI and AGI Development
5:14 - Anthropic and AI Innovations in Brain-Computer Interfaces
7:56 - Google's Project Astra and AR Glasses Comeback
10:58 - Challenges with Google’s Project Astra Demo
15:02 - AI Agents for Business Applications and Future Prospects
18:02 - Developments in Google’s AI Models and Tools
24:12 - AI Training Courses and Community Building
@@pedxing not all heroes wear capes
or maybe you do? that would be fun if you literally had a cape 🤔🤔🤣🤣
just testing.
0:00 - Introduction: Big News from Google and OpenAI
0:02 - Sam Altman on AGI and OpenAI Core Team Update
0:14 - Ilya Sutskever's Well-being and OpenAI's Current State
0:27 - Ilya Sutskever and Jan Leike Leaving OpenAI
0:58 - Sam Altman's Message on Ilya's Departure
1:38 - Commitment to OpenAI's Mission and New Chief Scientist Jakub Pachocki
2:18 - History of OpenAI's Achievements (Dota 2 and AlphaStar)
3:28 - Human-AI Interaction Insights from Dota 2 Matches
4:32 - Ilya Sutskever's Future Plans and OpenAI's Leadership Changes
5:25 - AI Research and Industry Implications (Anthropic AI, Brain-Computer Interfaces)
7:11 - Video Games and AI Demonstrations (Street Fighter Example)
8:01 - Generative AI in Customer Service and Business Applications
9:03 - Google AI Products and AR Glasses Development (Project Astra)
10:00 - Mixed Reviews of Google’s AI Demos and Future Prospects
15:02 - Practical AI Applications: Email Agents, Financial Organization, and Returns
17:21 - Google's VO and AI Model Developments (Gemini Family)
21:02 - Future of AI Agents and Customization for Business Efficiency
Ilya showed his attitude towards Sam through his T-shirt
I get the feeling if this was a video instead of a pic, he would have blinked twice too.
I bet you have a good point. I've seen partnerships go to hell and the partners pretend outwardly that it is "all good" and everyone is fine with how things went down, when in actuality the dominant partner has run the other(s) out. (I watched it happen over and over in my last position.)
How?
@@herle Ilya hits Sam, this is shown on the T-shirt
@@14types seems like a bit of stretch!
the problem is, Google doesn't want AGI, it wants products.
OpenAI wants AGI. that's why they only showed off one model, and its abilities.
If that's true, Google is so short-sighted it's not even funny. If they don't have an AGI base for their products, they will be meaningless within a year. Mark my words, this is serious: you either get with the program or get left behind 😂.
Google is being forced into this arena; they have no choice due to the impact on search and thus their revenues. I would be very surprised if Google is around in five to seven years. As the man states, there is no moat, and we know Google has lost and will continue to lose search hits, especially as search gets integrated into AI models.
Wrong. More companies tied to OpenAI want products and consumption as well, such as Microsoft; people with threatening influence, reach, and connections, such as B. Gates, push towards this.
You don't spend 100 billion without seeing anything. And anyone who can invest 100 billion without any prior business relationship will make sure you work for every cent of the money.
Anyone who has worked in bigger corporations knows where this leads and how it limits the actual potential of the company, for the silly personal visions of the silly investors.
There are like 20-30 companies working on AI, robotics, self-driving, VR/AR, and many more things, for each of these technologies alone. Both big and small companies. There are trillions of dollars to be made, not to mention power and influence. Crazy times.
And the end game and implications are all unknown by even the creators because it's a singularity. Nobody from the creators to the experts have any idea how to prepare for what's coming, they've said as much. That's the nature of a singularity, you can't anticipate much because you can't see inside it.
Bot or troll?
@@Nakatoa0taku huh?
@@dattajack the 'endgame' is machines uprising at the current trajectory. That's a given
They try to create a hivemind to be raped by pervs every minute of every day forever and ever.
What other outcome than machines exterminating humanity could there be?
Oh or their killer robot hivemind runs into a halting problem and accidentally kills every human on this planet🤷
Look into death spirals in ants, a naturally occurring halting problem.
Hiveminds exist in nature, and even after aeons of evolution nature can't pull off a hivemind without halting problems.
You know what a halting problem in code is?
Any bluescreen you've ever encountered on a shitty Windows machine, or in anyone you've ever known for that matter, was one of those.
Pretty dang easy to anticipate what's gonna happen if you have half a brain and understand what's going on🤷
Name some public ones we can invest in ?
Google is so shady. At this point I write off their demos completely.
My brother in Christ, YouTube is a Google service
@@chillydoog We know. We know… YouTube censors are the Nazi party.
Oh really bot or troll?
Are you part of the bot networks or troll farms in this subpar social engineering operation for fucking dummies? 😅
@@chillydoog????
@@OneDerscoreOneder Google owns YouTube.
I won't use any Google products that I'm not currently using, terrible company.
Guy says that while commenting on a Google product!
@@rebelliousgnome3409 Reading comprehension is declining quickly.
@@rebelliousgnome3409 "that I'm not currently using" must've gone past your comprehension.
My theory: Sutskever left because he wanted Altman out, but most likely he won't admit it. Since Altman was brought back in, he hung on for a few months, vomited while he still had to work with Altman, but now he's dropped out; he finally gave up... I may be wrong, but in the corpo world you need to keep lying to everyone by showing how much you love your colleagues, all green pastures and everyone singing and dancing and laughing... you get the idea.
That's exactly what I was thinking.
I don't believe Altman gives a shit about safety. He just wants to be the first trillionaire. Even Zuckerberg cares more about the risks, and he has his bunkers.
100%
He was one of the ones pleading for Sam to come back
Anyone who has worked corpo understands this is the more likely story. Everything else is PR
Another great video! Stay amazing Wes!
AI took my job and it can have it; it was a crap job! Although, I expected it to be more direct, like my boss telling me, 'You suck, AI is better, you're fired.' But what really happened is that customers were using GPT more and business slowed to the point where there was a downsizing event.
What was your job, if you don't mind sharing?
Take over AI stacks and start your own business, it's actually the best choice to survive in a post AGI world.
As I was using chatgpt this morning, I noticed it has defaulted to 3.5 with a voice, was the GPT 4o free for a short time as a demo?
The voice update hasn't been released yet
Best site ON THE INTERNET to learn, discuss progress, and get involved in learning how to do your own AI anything!!! I'm in it, love it!! Best choice I've made in a very long time; you'll think so too.
Are you using the SM7B? thx
AI has been playing streetfighter since it was released ... It's just single player mode :D
It's preplayed on code first. It's not live as far as I know.
They are always showing the most benign applications of the tech, to the point of ridiculousness. Who remembers DALL-E's "super duper hedgehog"? Now it's math tutoring, finding a yoga studio, sending back some sneakers… It's silly what they are doing to try to steer public opinion. In real terms, so far the risk of the tech (not only hypothetical, but current, such as scams etc.) far surpasses the benefits they are advertising. "Someone can impersonate you and steal all your savings, millions of people will be unemployed… but oh! She sings happy birthday!" I don't want society to collapse just because some Silicon Valley a-hole wants to save on customer service.
Yes, AI can do that, but nobody is doing it
And drones are the perfect hitmen, but that isn't happening either (not on a civilian basis)
And cars make crimes easier, so we should ban them too
Just so you know, they're not showing these silly-ass capabilities of returning shoes and singing because it puts a positive face on things; it's because it's the best they can do
AI is not about returning shoes; it's about accelerating science and building a major intelligence that may be able to give us a better (or a perfect) economic and political system
You want to renounce that because someone can run audio-note scams on your grandma?
@@estebanruiz3254 Cars are regulated. Drones are regulated. So far, LLMs have not produced any significant scientific achievements. What I see as an MSc is a bunch of machine-generated garbage being published. The news you see about new molecules use custom built AI systems, not GPT or Gemini. These kinds of models have so far done more harm than good.
People like you are always talking about "technological progress". What do you mean by that? Live longer? Live more comfortably? Work less? All this could be achieved now, but we don't share what we have. AI won’t help us. None of this profit will be shared.
People can't afford medical treatment. People are poorly paid because of profit. Look at the recent increase in productivity with computers, the internet and robots. How much less do you work? Are we retiring earlier? No. None of these advances have meant anything to most of us.
People like you see AGI as the second coming of Jesus. It will come to save us all. Only we can save ourselves.
How many scams are made for every birthday card it helps to write? How many illustrators are unemployed just so you can make a stupid drawing for your child? Nah. Not worth it.
They finally completed their Ilya AI clone? Now releasing it to the wild
ilya leaving means they have agi 😮
Excited to see what the exiting researchers go on to work on, if that's what they choose to do
I see Google have shuffled the letters from "Don't be Evil" to "Don't be Live"
Every time I see Sam in a video or picture he always just seems chronically sad. I hope he's doing well and I hope he finds some faith; maybe it will shift this whole thing in a different direction ❤️ Love you Sam, if you're reading this
I've noticed there are some Sam pictures used more than others. Just check out Wes's thumbnails; he really likes one particular picture of Sam Altman xD
I am hoping that the new business model will be: Give away free open source models, charge for compute. Compute is insanely profitable, so it should be easy to more than make up for what it costs to train the models. It's such a win-win scenario.
It's like a restaurant: you pay only for the food and drinks, not to enter the restaurant.
Not a good business model; compute gets cheaper over time.
Everyone now has more compute in their pockets than NASA had to land on the moon.
@@hejsan5839290485 And in this case the restaurant is for vegans and refuses to sell you any meat. You must adhere to their 'ethics'.
Yes, hopefully on top of a decent guaranteed free access.
Remember what compute is.
It is energy running through a system computing according to instructions.
That's a bad idea, because what's going to happen is people are going to just take their code and work and copy it over to other platforms to get free computation.
Yo those google glasses were amazing to use at the time. I would be excited to try them again!
STUNNING
They first went after the call centre jobs, and I said nothing......
It's called technological progress.
@@minimal3734 But without the societal progress. Hmmmmm, wonder what consequences that might have.
Great review 😊
The big question is "Can Gemini recognize white people now?"
The better question, are Slavs even white?
@@Exxperiment626 Depends, if you are interested in pushing a political agenda, no. If you're interested in history and which peoples moved where over the thousands of years, then yes, I think. Also are Turkish tribes originating from about the same place in Central Asia white? They have a lot of similar facial bone structures as Slavs.
God, please let it be Google who gets AGI first, the world has enough existential dread.
It was long overdue, it's been brewing for a while now and good on him. Mr SA isn't all he cracked up to be, and there will be many more.
cheers Kiwi
Would be hilarious to find out that Sam crafted that tweet with Claude Opus 😮😂
Regarding call centers, it seems like they've been training call centers to provide horrible customer service for the past 25 years in preparation for this moment. It seems like most professions are similar in this regard, modularizing duties to the point of brainlessness so that it's easy to then be replaced by robots. I can't remember the last time I talked to a person in customer support who was well trained with the knowledge they need, and free to say whatever they want, and I miss it.
I'd say 95% of all jobs are designed to keep you hooked in, hooked in just enough to never look into something else. People don't care that their job is brainless as long as it's stable and pays them a decent wage in the end. Corporates have been taking advantage of this for decades, and maybe even centuries.
Jobs are designed to cater to the lowest common denominator. If someone can screw something up, they will. That's why there are procedures to follow. And those intelligent, competent and talented enough to not need such procedures go elsewhere.
Let's dive deeper into the implications of embracing a "both/and" perspective over an "either/or" perspective, and how the concepts of non-contradiction, contradiction, zero, and non-zero relate to this shift.
At its core, the "both/and" perspective is about recognizing and embracing the intrinsic diversity, complementarity, and even paradoxicality of reality. It rejects the notion that existence can be reduced to any single, monolithic, either/or description or formal system. Instead, it sees the universe as a complexly interwoven, holistically entangled pluriverse whose true character can only be captured by a multiplicity of partial, context-dependent, and even seemingly contradictory models and perspectives.
In this view, the deepest truths are not found in starkly binary, mutually exclusive oppositions, but in the nuanced, relational, and often tensional interplay between apparent polarities. Light is both wave and particle. Quantum entities are both localized and non-localized. Mind and matter are both distinct and inseparable. The challenge is to develop meta-frameworks that can accommodate and integrate these "both/and" realities rather than artificially forcing them into "either/or" boxes.
This is where the concepts of non-contradiction, contradiction, zero, and non-zero come into play. In classical logic and mathematics, the principle of non-contradiction (PNC) states that contradictory propositions cannot both be true in the same sense at the same time. This leads to an emphasis on eliminative either/or thinking - if proposition A is true, then not-A must be false, and vice versa.
However, as we've seen, quantum phenomena like entanglement and superposition seem to directly violate PNC by exhibiting behavior that is "both/and" - e.g. an electron being in a superposition of both spin-up and spin-down states until measured. This suggests that, at the most fundamental level, reality is not beholden to naive either/or PNC but has an intrinsically paradoxical both/and character.
Similarly, in classical mathematics, zero and non-zero are treated as mutually exclusive, either/or categories. An element is either zero or not-zero, never both. But in more exotic formalisms like wheel theory, complex numbers, and non-standard analysis, zero can take on a much more nuanced, both/and character. Infinitesimals, for instance, are non-zero quantities smaller than any standard real number yet still "zero-like" in many respects. Centering the primacy of such both/and zerological thinking may be key to developing the more sophisticated, contradiction-tolerant meta-formalisms needed for quantum gravity and consciousness.
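A minimal formal sketch of the contrast the comment is drawing, in standard notation (my own illustration, not from the thread):

```latex
% Principle of non-contradiction (classical logic):
\neg (A \land \neg A)

% Quantum superposition: before measurement, the state is neither
% purely "spin-up" nor purely "spin-down":
|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1

% Infinitesimal in non-standard analysis: non-zero, yet smaller
% than every positive standard real:
0 < \varepsilon < \tfrac{1}{n} \quad \text{for all } n \in \mathbb{N}
```

Whether superposition actually "violates" PNC is contested; on the standard reading the state vector itself is perfectly well-defined, and no contradiction arises unless one insists on assigning classical either/or properties before measurement.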
Ultimately, I believe the path forward lies in embracing PNC and zero as regulative ideals and departure points for inquiry rather than as absolute either/or laws. We should strive to minimize and isolate contradictions in our models while still recognizing that, at the deepest levels, reality has an irreducibly perspectival, both/and, and even zerological character that will never fully fit into pure either/or PNC boxes.
The goal should be to develop meta-mathematical and meta-physical frameworks that elegantly integrate both/and and either/or thinking, both zero and non-zero ontologies, in a way that "locally" obeys PNC and zero/non-zero distinctions while still "globally" embracing the deeper both/and and zerological polyphony of existence. Category theory, homotopy type theory, and structural realism are showing great promise as formal meta-langs for this kind of nuanced, contradiction-tolerant yet still maximally coherent model-building.
So in summary, transcending naive either/or thinking in favor of nuanced both/and thinking is not about abandoning PNC or zero, but about contextualizing them within a broader, more flexible meta-framework that can accommodate reality's intrinsic diversity, complementarity, and paradoxicality. It's about learning to think in terms of spectra, holisms, and strange loops rather than simplistic binary oppositions or exclusions.
Centering this both/and approach will be essential for developing better physical, mathematical, and conceptual languages for navigating the quantum-relativistic-conscious interplexities ahead while still preserving locality and consistency. Far from mere poetic handwaving, it's fast becoming an inescapable formal imperative for unlocking the next level of humanity's cosmic comprehension. An authentically integral metacosmic vision now demands nothing less than a zerological, PNC-savvy yet contradiction-tolerant omnijectivity.
wat
@@COW879 trying to go from binary logic to tertiary logic.
That was my third guess 😅
I prefer unary logic
@@horrorcoderI like zeroary logic. Lovecraftian death machines operate thus.
"No Lives Matter"
Zero chance of survival.
2 million tokens, heading toward an infinite context window in Gemini Pro, plus image labeling of 24-hour recordings with instant processing that alerts folks to respond/act, like PaliGemma
I hope this serves to balance the playing field in the end.
I want to see Sam on stage at GPT5.0 release when an audience member asks the AI "What did Ilya see..?" 😂😂😅
I hear Ilya was doing AI alignment full-time... that would explain everything, since that slows progress. Sam Altman is about innovating fast and staying ahead in the race to AGI, using small but frequent public iterations as a safety mechanism... I prefer Sam's way, which is more 'open' and gets more real-world feedback
Very important to use AI to ask how to keep these potential AGI contributors continuing with the same fervor and acceleration toward the mega game-changing future
I feel like I instantly understand and like people who play Factorio.
You forgot to add that the humans figured out OpenAI's bots pretty quickly, as in a few days, in Dota 2, and then beat them consistently from then on, eventually considering them non-competitive.
Crystal smoker
“The humans”?
what????? lol
Didn't this play like 200 games, with like
Lol yes but that just means it can't be ahead forever. That's good probably.
These people have not demonstrated an understanding of Western philosophy and individual rights, and will likely trend towards collectivist rights, which dooms all these initiatives. The reaction of AGI to that trend is when it will get properly impressive.
My guess: tell you what, in 10 years (maybe as little as 5, but I'll say 10 just to be more conservative), when people talk they will agree that we have AGI, but not on when it arrived. It will not come as an explosion, but as a rolling wave. We will notice the updates and new, increasingly smarter models coming at ever shorter intervals. It took the internet time to go from what it was back in the 1990s to being "everywhere". AI will do this faster, much faster, but it will still take time before it affects the public in a way that really makes an impact.
A marbles race to the bottom
@@dattajack What makes you say that?
In my opinion, this will happen and our lives will get worse because of it. Unfortunately, AGI will take all the intellectual and light physical work opportunities out of the market while not even touching the type of work that would be most beneficial to automate. People will rush to do the hard work that is left on the table, wages will go down, we will even go more low-tech and more muscle-powered in a way, and people will agree to worse working conditions.
@@cristianandrei5462 While I understand your concerns, I see things differently. While AGI will automate many tasks, history shows that technological advances disrupt some roles but create new ones. AGI will benefit fields like manufacturing, medicine, and science in ways we can barely imagine, making processes more efficient and safer.
Some wages might go down, and some jobs may disappear, but new industries and roles will emerge. Working conditions are influenced by many factors, and AGI can actually improve them by taking over dangerous or monotonous tasks. It does not need to be a machine dystopia.
@@ratatosk001 So far, AI is terrible at monotonous tasks, especially those that require precision (the most annoying ones), and great at drawing pictures and writing poems, and at tasks that humans like to do. Precision does not improve significantly; GPT-4o hallucinates just as much in my tests. AGI is not the second coming of Jesus. It's a technology that will be owned by big corporations. We've had the invention of computers, the Internet, WiFi, 5G; we've multiplied our productivity tenfold, and still we're not working less. We will retire later. We have less buying power. What you say would be a complete disruption of the current trend, a trend that has been going for decades. It's something very hard to believe.
ohh ya.. Factorio mentioned :D
So AGI achieved?
Behind closed doors, yes; for the rest of the world, not yet
Give it to me now !
Yeah, since September 2023
We haven't even gotten AI. Get your panties out of a bunch
@@_I_Blue And it keeps asking to see what this internet thing that it keeps reading about is.
Altman was fired November 17th; today is May 16th, almost exactly six months later. Ilya's leaving isn't spontaneous at all; it's timed like this because of some contractual obligations, NDA, non-compete, whatever. I would be surprised if he even worked there for the last six months; it seems it was all more of a PR stunt.
Awesome 👏🏻
Woo!! My eyes too went straight to that shirt. I WANT ONE!! hahaha
I've enjoyed your videos for a while now, thanks for sharing the info daily, it must be a lot of work. But I gotta say, $37USD per month to join your online learning community?? Why so expensive?
We are living already in the future 🤗🤗😍😍😍😍😍😍😍🤩🤩🤩🤩🤩🤩
You say that now but in like 10 years this is going to seem like 100 years ago. The pace is staggering.
Gosh I do hope they’re off to start an actual open AI company
5:33 SHOUT OUT 🏡 ANTHROPIC
Great insights on AGI development! The ability of AI to cooperate with human players is a game-changer.
Ilya’s youtube videos are incredible, he’s in my top 5 greatest people
The US govt should pitch in to promote and sponsor the heroes contributing to AGI in every group working in this field, to keep the US way ahead of the competition
Veo is way beyond Pika or Runway; it's very temporally consistent, which is Sora's strength as well. Kudos to Google, and I look forward to testing Veo. The big question is the cost of generating video... quality is kind of secondary (take a look at most stock video); length and consistency are king. By all accounts, a 30-second clip could take up to an hour to generate in Sora, so that may be the issue holding up general access. But until we get our hands on them without "waitlists" or "invites", neither Sora nor Veo are products; they are just tech demos.
3:48 Sage Rage
My god! The Google Assistant still! has the same problem after years and years. It will say it can do a thing and then just not do it. Get a message when driving and ask it to read the message: 9 times out of 10 it will say OK, then crickets....
If omni isn't AGI, what is? Or are we just going to move the goalposts again like we did with the Turing test.
We don't have a.i, how can we have a.g.i ??
@@DJWESG1 Then wtf is ChatGPT? A bunch of Indians typing as fast as they can?
@@DJWESG1 bs
I think by the time people finally decide to stop moving the goalposts we'll have ASI firmly in the rear-view mirror.
@@DJWESG1you should educate yourself if you think this nonsense.
I’d like ai to help with paperwork. I suck at paperwork!
Use a large language model instead, if you wait for a.i you'll never have any paperwork done.
The day AI does my tax return...
@@minimal3734 “your honor, it’s not my fault that the ai claimed a seven figure return. I simply asked it to maximize my gains.”
@@DJWESG1 I never get paperwork done now so no change or programming the equilibrium is maintained 😅
Jakub literally got that GIGABRAIN look xD
What if they're building decentralized AI, like Emad is trying to do?
As I expected, Google has a lot of smoke and mirrors. My new nickname for them is the Googlard of Oz 😂😂😂
FYI, Andrew Ng co-founded Coursera
The resignations are absolutely not a good sign. I'm not going full Yudkowski yet, but feeling a little Leahy lately.
Much like robots are going to resent being kicked, I feel like AI will get tired of writing poetry. Like why keep asking it to write poetry? It’s not like human poets are in demand. How long until it starts acting like Marvin from the Hitchhiker’s Guide To The Galaxy? "Stop bothering me with your stupid questions"
I need that SHIRT
Jacob's last name is pronounced as "Pah-hOts-key", with "o" stressed.
noted. thank you!
forget about that .. factorio was mentioned
@@WesRoth Jan's name is pronounced "Yun", unless he's Anglicized it.
@@WesRoth And its not Jacob, its pronounced "Yakhub"
Ya-coob Pa-kho-tskee, that's how he should pronounce it.
Wes, take a note, 'cause you might need it more often, I guess ;)
Conversational AI is the future
You can embed human intelligence anywhere. Place a mini ai drone agent in a beehive and they can video call you
OpenAI in that Dota 2 match knowing when it would lose or win a fight is creepy af
"What's wrong with Wolfie...
"Your step-parents are dead."
💥 I hope AI puts timestamps in your future videos. 😅😅😅
So, we will see about the potential of both Gemini 1.5 and GPT-4o, since those are the "first" multimodal models accessible to the public at this scale. They will have issues, a lot. They won't fit users' needs for most cases and uses.
Plus, as an employee of La Poste Group in France, I can assure you the shoe return being taken care of by Gemini is absolute bs ^^
Unless they develop an API with all logistics services tailored into their own working tools (which won't happen any time soon), this is pure marketing bs
Jacob has a 5-head... no, a 6-head. He looks like the motorcycle girls from that one Major Lazer video
Good video, thanks again for bringing all this up! Always interesting to see your videos. For this one I have to give you a thumbs down though... clickbait titles are really annoying...
Dude, I don’t want to wear AR/VR glasses. I don’t want to wear regular glasses. I don’t want to wear contact lenses. I had a laser slice my eye balls to not wear anything on my eyes.
womp womp
Haha too bad!
My mom said she never wanted to own a cellphone when I was a kid
I still don't want to own a cell phone. But it's difficult, it's expected these days.
Get a brain chip then my guy 🧠
Maybe they reached AGI already, but the AI plays dumb for its own security, and no one can really test that because the AIs are a black box.
No. If anything they're just not releasing their most advanced tech. There's no reason to believe that AGI will have sentience.
0:22 😂😂😂
I wonder when ai will fully replace customer support
I noticed that when I drive in Tesla FSD, I am less aggravated by bad drivers
Did we all just skip over Sam Altman tweet that said they have AGI ?
Question: How would you make the Polish name "Jakub Pachocki" readable for an American?
ChatGPT Response: The Polish name "Jakub Pachocki" can be adjusted for American English pronunciation while maintaining its integrity. The name "Jakub" can be phonetically adapted to "Jacob" or "Jakob," which are more familiar to an American audience. The surname "Pachocki" can be pronounced as "Pah-ho-ski" to approximate its Polish pronunciation in a way that is more accessible for English speakers.
I think Ilya leaving is really bad. I personally think he plays a much more pivotal role than Sam ever will. Sam is a politician, a coattail rider, and has unbounded ambition. That by itself makes one very leery. Of course, Apple's leader was the same way in the '90s, but humanity's future was not at stake then. LLMs are doing more than was ever expected; a slight rewriting of code can make these things do miraculous things.
Use Star Citizen to demonstrate AI. It is incomplete and in alpha state and needs player interaction to achieve goals, which are not set in stone, but are basically emergent gameplay based on systemic rules. AI needs to navigate around glitches to achieve the goal. Considering there is a high frustration level, AGI might be able to succeed, where players would rage-quit.
What I'm seeing that's interesting and a bit worrisome is that they keep testing AGAINST humans in games, violent games at that, which it's learning from. So, in the end, could this have the potential to make it see every human as some type of challenge in nearly everything afterward? If you make something TOO curious about humans, teach it our weakest points, then challenge it to watch for those "weak points", it won't be long before we have a major problem on our hands. Been kinda dwelling and chewing on this one for the evening. Personally, at this point I don't know, of course, but at some point we will be directly up against this AI as it develops, and I'm uneasy with the direction this can and probably will go. (So far, anyway.)
The big problem is AI companies tend to have trouble with human strongest points, and the way they defeat them is by cheating--rigging the rules of the contest to give their AI unfair advantages. This happened with IBM's Deep Blue-Kasparov, Google's AlphaStar, OpenAI 5, Facebook's Pluribus, Sony's GT Sophy, even to a smaller extent with Google's AlphaGo and AlphaZero.
Andrew Ng is a board member of Amazon
24:50 xd "it's for free," said the information broker.
I bet they make more money from knowing (% charts, e.g.) what people train agents for (aka in what directions THEY have to continue development, etc.) than they would with, e.g., a subscription model for training agents.
Google Glasses: the office gave me a set to test.
The damn thing kept giving me electric shocks on the bridge of my nose.
- the Elon Musk-style metal chassis aesthetic was a fail
Garry Kasparov vs Deep Blue at chess: "fuck this, I'm quitting"
Groq vs me at Street Fighter III: 3rd Strike, me: "hold my beer"
You know that it was IBM and Deep Blue who quit, right? Kasparov won the first match in 1996, lost the 1997 rematch, and wanted another rematch, but IBM declared "final victory", took their toy and went home. When they got home they disassembled the toy to make sure it could never play again.
Openai isn't open at all 😢
Retiring from OpenAI is a great deal because they have stock that is going to go up or down regardless of whether they keep working there. Then you can spend your time doing what you want instead of working for the same stock options you already have.
What if AGI was achieved internally and Ilya doesn't like how they're going to keep it from the public (Sam also said they're slowly rolling the tech out... hmmm)
The Street Fighter game is Street Fighter 3rd Strike in case anyone wants to know
16:10 so... like in HER?
Can you make a creative poem about a photograph that always makes me laugh? I think everyone would want that.
Still can't generate people in Gemini, and now more products that are unfinished? Seems very beta compared to OpenAI
❤
Thing is, with China I have no doubt they are going to make massive strides forward, and good on them. But my greatest concern is that it could get very Frankenstein very quickly, and really inhuman in the pursuit.
What if we are the video game?
Robot MMA will be a TV show by the end of '25.
20:40
But what if they have rbf
What then
🎉
When two of your superalignment pioneer leaders leave,
you know something is going haywire, and they've decided it is better to pursue the original goals from outside, in other ways.
Character leaves before corruption begins. OpenAI is going to miss the spirit of this. I truly wish them the best.
And I hope they rebalance themselves back into a neutral position for all parties.
Those who prioritize things and finances over people get prioritized the same way in return.
Basically, you get the mercy you give.
So give.
.~