So annoying the way the 3 hosts constantly interrupt and talk over the guest and each other.
Yes, I've noticed this with almost every single one of their guests. I believe they do it as a tactic to try to gain some dominance over their guests, who are usually significantly more successful or intelligent. I believe they might need to use Pi to work on their social skills.
Russell Brand should visit them
Unfortunately, like most journalists and podcasters, very annoying. It makes them seem insecure. Why have someone on the show if they won't let them answer?
The lady in the middle is a star interviewer; please don't include her in this.
@@realbobbyaxel She's actually the worst of them, with the dumbest questions. 😂
that was a really annoying interview. They didn't let him finish one single sentence.
Yea they need to stfup
They do this to pretty much all their guests. Very annoying.
Always thinking about the markets, never thinking about the people
That's OK, because we won't have any money to buy their AI products.
You'll never guess who is behind the markets.
@@peacenode Well, it's not "the people"! 😉
This is a financial show, did you expect them to talk about the tires on the 57 Chevy?
I absolutely died when he said "just buy a dog Andrew" 😂
You’re welcome to believe him. But I am betting on MSFT, QCOM, AMD, AAPL, NVDA, & AMZN improving AI & making all of your kids unemployed!
@@tringuyen7519 Are you child-free?
You know his dog probably secretly hates him.
A dog can't do your taxes after giving you a therapy session, though.
That was great, dogs are amazing! But please adopt a dog instead! There are too many in the shelters that need a good home!
Suleyman's new book "The Coming Wave" is amazing and scary.
How so? Why should I read this book?
@@alancasas6954 You could easily look it up yourself and find out.
It's not over, it didn't even begin to accelerate in a way to make everyone stop and think about their purpose and place in life and what's to come
@@mc9723 yeah I saw it 20 minutes ago. Proof searching with AI is a good step for scientific AGI
@@Chad_Max They are deterministic models based on a simple architecture, and they work. The fact that next-token prediction works like this is not hype; people who were using them before the hype understand this correctly. I was testing GPT-2 and GPT-3 when they came out, and to see rapid progress like that is insane. The hype is justified whether you like it or not.
@@Chad_Max Yeah, because humans are never biased due to their training data...
@@Chad_Max The same applies to humans. I can say you are 80% full of crap and 20% ignorant but that just isn't true. You are 100% awesome and if I had a cookie right now I would feed it to you.
@@Chad_Max Apple's Ferret AI is based upon 3D visual perception. ChatGPT will evolve similarly. AI will ensure that your kids won't find employment. Satisfied?
No heating in the panel room?
He resembles Tony Robinson somewhat. AI hasn't even got going yet in terms of the public interacting with it en masse; it will completely change our lives.
Yes. I grew up in the 90s, and in '92 we got our first computer; very few people I knew had one. Then in the same year the web came into companies. At that time using the internet was expensive, billing you by the minute, and it would block your phone line. It was also extremely slow: a web page could take up to a minute to load, and 5 seconds was considered very fast. Computers didn't become something everybody had here (like smartphones today) until around the year 2000, when broadband became available. So just like companies got the web in '92, it might take years until AI is something everyone is using. Technically everyone is already using AI, because voice, face recognition and recommendation engines use it under the hood. But as far as interacting directly with an AI goes, I believe around 2028 will be when AI becomes as obligatory as smartphones are today. I'm guessing it will start coming in the form of a smart-speaker-like system with proper privacy that records audio and video in your home and can then make suggestions based on conversations and actions it has seen you do. It could also be hooked up to your phone to gain additional context. So basically a stationary personal assistant, and eventually robots like Optimus Prime will replace them when they get good and cheap enough.
Impossible. GPT-4 came out in March 2023, and it seems like there is always something new on the horizon every year with AI. AGI will be one of the greatest inventions humanity has ever created, if done right!
These are the worst interviewers. The guest is just an excuse for them to talk. If they were smarter, that would be one thing.
bro on the left is trying so hard to squeeze something out of his mind, and ended up with "Buy a dog Andrew " 💀
People often think that the tech guys can predict how the tech will be used. Tech guys are not the best people to ask if we've reached peak hype or not. Predicting how the tech will actually proliferate is a different skill than building it.
That's an oversimplification. I am in the tech world, and guys like this receive massive amounts of money because they contact a bunch of venture capital (VC) investors and tell them how they think things will go; the VCs then do an analysis to see if they agree, based on whatever current evidence exists from past and present trends. So yes, in tech, people are definitely making these predictions all the time. Otherwise they cannot get access to money. A VC's career will be destroyed if they give this guy money without some kind of assumption about what will happen in the market.
@@chunkyMunky329 VCs make multiple Hail Mary bets on trending startups; most lose money and the rest get lucky.
It is cool to see that Mr. Bean is joining the efforts to bring about the AI revolution
Hahaha OMG, I always wondered who he reminds me of… 😂💯👌🏼
What a crappy, low level interview, no wonder the mainstream media is collapsing
I literally wrote the essay below concerning the book that Suleyman wrote. Peak hype is *not* the issue here. The following *is* .
"This is kinda long, but I promise, worth your while. I wrote this originally as a self-post in rslashfuturology on 11 Sep 2023.
"Wave" is not the right word here. The proper term is "tsunami". And by tsunami, I mean the kind of tsunami you saw when that asteroid hit the Earth in the motion picture, "Deep Impact". Remember that scene where the beach break was vastly and breathtakingly drawn out in _seconds_ ? That is the point where humanity is at this _very_ moment in our AI development. And the scene where all the buildings of NYC get knocked over _by_ that wave, a very short time later, is going to be the perfect metaphor for what happens to human affairs when that AI "tsunami" impacts.
It may not be survivable.
We are on the very verge of developing _true_ artificial general intelligence. Something that does not exist now and has not ever existed in human recorded history up to this point. One real drawback about placing my comment in this space is that I can't place any links here. So if you want to vet the things that I am telling you, you'll have to look up some things online. But we'll come to that. First, I want to explain what is _actually_ going on.
As you know, in the not quite one year since 30 Nov 22, when GPT-3.5, better known as ChatGPT, was released, the world has changed astonishingly. People can't seem to agree on how long ChatGPT took to penetrate human society. I will, for argument's sake, say it took _15 days_ for ChatGPT from OpenAI to be downloaded by 100 _million_ humans, but I have reason to believe the actual time was five days. And then on 14 Mar 23, GPT-4 was also released by OpenAI.
Some things about GPT-4. When GPT-4 was still in its pre-release phase, there was a lot of speculation about just how powerful it would be compared to GPT-3.5. The number being floated was roughly 100 _trillion_ parameters; the number of parameters in ChatGPT is 175 billion. Shortly after that 100 trillion number was published, a strange thing happened. OpenAI said, well no, it's not going to be 100 trillion; in fact, it may not be much more than 175 billion even. (It was still pretty big though, reportedly around 1.7 _trillion_ parameters.) This is because the emphasis had shifted: parameter count was not going to matter so much as a different metric, one that tracks more closely how the model performs once released. That metric is "tokens". A token is roughly an individual word, word piece, punctuation mark or symbol that the model takes as input and produces as output, and it is the unit in which both the training data and the context of a given LLM are measured. It is what enables an LLM to "predict the next word or sequence", as in the case of coding. I'm not even going to address "recursive AI development" here. I think it will become pretty obvious in a short time.
The context window of GPT-4 is potentially 32K tokens; ChatGPT's is 4,096 tokens. That is an approximately 8x increase over ChatGPT. But just saying it is 8x more is not the whole picture: that 8x increase allows for combinations of those tokens, which is probably an astronomical increase. Let me give you an analogy to better understand what that means for LLMs. There are 12 notes in music and there are about 4,017 chords, and of them, only _four_ really matter. That combination of notes and those four chords is pretty much what has made up music since the earliest music existed. And there is likely a near-infinite number of musical re-arrangements of those chords still in store.
That is what 'tokens' mean for LLMs.
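If you want to see the idea in code, here is a toy sketch. It is only an illustration, not how real tokenizers work: actual LLM tokenizers split text into subword pieces rather than whitespace words, the function names are made up for the example, and the window sizes are just the 4,096 and 32K figures mentioned above.
```python
# Toy illustration of tokens and a context window. Real tokenizers use
# subword pieces (BPE), not whitespace words; this only shows the idea that
# a model sees a bounded window of tokens, not unlimited raw text.

def toy_tokenize(text: str) -> list[str]:
    """Whitespace split as a stand-in for a real subword tokenizer."""
    return text.split()

def fit_to_context(tokens: list[str], context_window: int) -> list[str]:
    """Keep only the most recent tokens that fit in the window."""
    return tokens[-context_window:]

prompt = "the quick brown fox jumps over the lazy dog " * 1000
tokens = toy_tokenize(prompt)

for window in (4_096, 32_000):  # ChatGPT-era vs. GPT-4-era window sizes
    kept = fit_to_context(tokens, window)
    print(f"window={window}: model sees {len(kept)} of {len(tokens)} tokens")
```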
And here is where it gets "interesting", because that 8x increase allows LLMs to do some things they have never been able to do previously. They call these "emergent" capabilities. And "emergent" capabilities can be, conservatively speaking, _startling_. Startling emergent capabilities have been seen even in ChatGPT, but particularly in generative image models like "Midjourney" or "Stable Diffusion", for instance. And now it is video. Have you seen an AI-generated video yet? They are a helluva thing. So basically, an emergent capability is a new ability that was never initially trained into the algorithm and that spontaneously came into being. (And we don't know _why_.) You can find many examples of this online; not hard to find. All of that is based on what we call the "black box": that is, why a given AI zigs instead of zags in its neural network, yet still (mostly) gives us the right answer. Today we call the wrong answer "hallucinating". That kind of error is going to go away fairly soon. But the "black box" is going to be vast, _vast_ and impenetrable. Probably already is.
Very shortly after GPT-4 was released, a paper was published concerning it with a _startling_ title: "Sparks of AGI: Early experiments with GPT-4". Even more startling, this paper, in its finished form, was published just short of one month after the release of GPT-4, on 13 Apr 23. That's how fast the researchers were able to make these determinations. Not too much later, another paper was published: "Emergent Analogical Reasoning in Large Language Models", also concerning GPT-4, on 3 Aug 23. It describes how the GPT-4 model is able to ape something once considered unique to human cognition, a way of thinking called "zero-shot analogy". Basically, that means that when we are required to do a task we have never encountered before, we use what we already know to work through how to do it, to the best of our ability. That can be described in one word: "reasoning". We "reason" out how to do things. And GPT-4 is at that threshold _today_. Right now. And just to pile on a bit, here is another paper from just the other day; they are no longer even coy about it. The paper "When Do Program-of-Thoughts Work for Reasoning?" was published 29 Aug 23. Less than two weeks ago.
The ability to reason is what would turn what we now call "artificial narrow" (or narrow-ish) intelligence into artificial _general_ intelligence. I forecast that AGI will exist no later than 2025. And once AGI exists, it is a _very_ slippery slope to the realization of artificial _super_ intelligence. An AGI would be about as smart as the smartest human being alive today in terms of reasoning capability, like about a 200 IQ, or even a couple of times that number. But ASI is a whole different ballgame. An ASI is hypothesized to be hundreds to _billions_ of times better at "mental" reasoning than humans. Further, an AGI is a _very_ slippery fish. How easy is it to ensure that such an AI is "aligned" with human values, desires and needs? Plus, us humans, _we_ can't even agree on that. You can see what I mean now when I say "tsunami". What do you think Suleyman was referring to when he said that our AI will "walk us through life"?
Oh. And this is _also_ why top AI experts, people like Geoff Hinton, one of the pioneers of deep neural networks, have been sounding the alarm, and an open letter called for a pause of all training of frontier LLMs for at least six months, the idea being to regulate or align what we already have. Hinton actually quit his job at Google to give this warning. The warning fell on deaf ears and _nothing_ has been paused _anywhere_, for two reasons. First is the national security competition between the USA and China (PRC), and second is the economic race to AI supremacy in the US that we are now trapped into running because we are a market-driven, capitalist society. Hell of an epitaph for humanity: "I did it all for the "noo---". Tragically apt for a naked ape. Ironically, it is probably going to be the end of the concept of value in any event. If we don't get wiped out, we may see the birth of an AI-driven "post-scarcity" society. You would like that, I promise. But the 1 percenters of the world probably won't.
Anyway, Google is fixing to release "Gemini", which it promises will be far more powerful than GPT-4, in Dec 2023. And GPT-5 itself is on track for release within the first half of 2024, probably in the first four months. I suspect that GPT-5 is going to be the first AGI, if the AI papers I see even today are any indication. At that point the countdown to ASI starts. Inevitable and imminent.
And I say this: ASI will exist no later than the year 2029, and potentially as soon as the year 2027, depending on how fast humans allow it to train. I sincerely hope that we don't have ASI by the year 2027, because, well, I give us 50/50 odds of existentially surviving such a development. But if we _do_ survive, it will no longer be business as usual for humanity. Such a future is likely unimaginable, unfathomable and incomprehensible. This is a "technological singularity", an event that was last realized about 3-4 _million_ years ago. That is when a form of primate that could think abstractly came into being. All primates before that primate would find that primate's cognition... Well, it would basically be the difference between me and my cat. I run things. The cat is my pet. Actually, that is _vastly_ understating the situation. It would be more like the difference between us and _archaea_. Don't know what "archaea" is? The ASI will. BTW, what do you imagine the difference between an ASI and consciousness would be? I bet an ASI would be "conscious" in the same sense that a jet exploits the laws of physics to achieve lift just like biological birds do. Who says an AI has to work like the human mind at all? We are just the initial template that AGI is going to use to "bootstrap" itself to ASI. There is that "recursive AI development" I touched on for a second, earlier. ASI = technological singularity.
Such a thing has never happened in human recorded history.
Yet."
Sorry, we don’t have time to read a novel
Thanks for sharing. Very interesting
@@JustDisc Yer call. I'm just putting the most likely immediate future out there.
@@Izumi-sp6fp Humans cannot waste their entire lives doing jobs. AGI should come fast and serve companies.
You talk about destruction of the human species like that's a bad thing. If ASI can save the planet by destroying humanity, that truly does seem like the ASI tsunami all the other species on the planet have been hoping for.
Great essay, btw.
AI is like an employee that can instantly survey large amounts of data, but they take acid frequently without warning, they have absolutely no morals, and they can't be held responsible.
This looks to turn out like the Seinfeld episode where Kramer and Newman try to use the homeless to pull rickshaws around NYC!
I think we might have hit peak hype last year over the summer, but that does not mean AI has peaked. Those are two different things, and people will click on this thinking it has PEAKED. That is so far from the truth! AI's peak hype is similar to when Windows 95 came out: it hit peak hype around 1996, but it still changed the world! AI is changing the world, and that change is happening really quickly, but quickly means years, not weeks or months!
my poop is changing the world too, will you invest?
AI stocks will dominate 2024. Why I prefer NVIDIA is that they are better placed to maintain long term growth potential, and provide a platform for other AI companies. I have made more than 200% ROI from NVIDIA with the help of my stocks advisor. I agree the stock would go higher in the next couple of days.
I bought NVIDIA around September last year because my financial advisor recommended it to me. She said the company is selling shovels in a gold rush. It accounted for almost 80% of my market return last year, and I'm sure this year will present other interesting stocks.
I've been in touch with a financial advisor ever since I started my business. Knowing today's culture, the challenge is knowing when to purchase or sell when investing in trending stocks, which is pretty simple. On my portfolio, which has grown over $900k in a little over a year, my adviser chooses entry and exit orders.
I'm intrigued by this. I've searched for financial advisors online but it's kind of hard to get in touch with one. Okay if I ask you for a recommendation?
Christine Ann Podgorny is the licensed coach I use. Just research the name. You'd find necessary details to work with a correspondence to set up an appointment.
I appreciate it. After searching her name online and reviewing her credentials, I'm quite impressed. I've contacted her as I could use all the help I can get. A call has been scheduled.
The guy on the far left is so far out of touch that he doesn't get how an LLM could help people; that's why it sounds crazy to him. He is the 'type' of person who will be replaced by AI first: the people who think like he does, old-fashioned and out of touch.
There is a reason there was only one CEO sitting there talking about business opportunities.
Huh?
Yeah he's a hack, that's why he's on CNBC
“Over many decades”
Some people say it’ll be by 2027.
Does anyone in the industry know what they’re talking about? I’m finding it impossible to plan for what skills I should learn. Part of me thinks I may as well do nothing because it’s all coming to an end.
I’m not long for this world.
In the short term, I think we should just prepare for more ads.
The real question is "What will happen to all these replaced workers?"
Do these "workers" know about AI? It's not a secret. It is available to anyone who wants to investigate or embrace it. If your job is replaced by AI and you did not do anything to mitigate the financial risk, whose fault is that, really? The writing is on the wall. Take your brain, life experience and talents and try to make something of it.
@@joshuaheller33 You speak as if there will be AI-related jobs for everyone in the future. There won't. It's not like: let's learn how to work with it, and we'll be just fine. For a couple of years, yeah, it might sound great. But when it replaces us all, and it will, you can have all the AI knowledge you want; it won't grant you a job, much less an income source.
@@mariaolivcorreia I understand the concern. I just don't see AI replacing humans. I suppose you could be right. You must not forget, AI does not have a soul. It will never be sentient. If we lose to computers... Well, G-d help us all.
@@joshuaheller33 It doesn't have a soul. But do you really think the greatest companies in the world care about that? To them, anything is worth it as long as it's profitable. They don't care about a soulless system; they only care about filling their pockets with money. If it means keeping a tiny little team working for them and dismissing millions of people, don't doubt it for one second: they will do it mercilessly.
But I do agree with you. May God help us all. We will need His grace more now than ever.
True. When people claim that we are creating some sort of AI-God, it’s probably a slight exaggeration.
(And if I’m wrong, forgive me for my heresy, oh benevolent AI-god)
It is beyond an exaggeration. If you listen to the message that scientists give to the public, you will hear it being compared to a god. But if you stick your nose into the academic papers written by people doing deep analysis of these AIs, you hear a different story. They point out just how unimpressive it all is once you understand how it is making these decisions. And when you combine that with the fact that everyone is admitting the world has run out of training data, the story gets worse. Instead of realising we've reached a limit and trying to redesign that basic-ass design, they're giving tools to their AI, for example giving it access to a calculator app. This is crazy, because math is the easiest thing for a computer! So how can these people lead us to believe that they're building an AI god when they don't even know how to get it to do math by itself?
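To make the "calculator tool" point concrete, here is a minimal sketch of that pattern. The "model" is faked with a hard-coded reply, and every name here (fake_model_reply, safe_calc, the "CALL calculator:" convention) is invented for the illustration rather than taken from any vendor's actual tool-calling API; only the routing logic and the little calculator are real code.
```python
# Minimal sketch of the "give the model a calculator" pattern.
# The "model" is hard-coded; only the tool routing and the calculator are real.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calc(expr: str) -> float:
    """Evaluate plain arithmetic (+ - * /) without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return float(walk(ast.parse(expr, mode="eval")))

def fake_model_reply(question: str) -> str:
    # A real LLM would decide to emit a tool call on its own; we hard-code one.
    return "CALL calculator: 1234 * 5678"

def answer(question: str) -> str:
    reply = fake_model_reply(question)
    if reply.startswith("CALL calculator:"):
        expr = reply.split(":", 1)[1].strip()
        return f"{expr} = {safe_calc(expr)}"
    return reply

print(answer("What is 1234 times 5678?"))  # 1234 * 5678 = 7006652.0
```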
Eventually we will create an AI god.
Great guest. The panel could definitely do a little better.
China's BYD is at 90% AI already. What are you talking about? Look at Chinese manufacturing. You might learn something instead of sitting in the old country. They should hold this Davos in China: pick a city and take a high-speed train.
It seems crazy to me that the interviewers are all "I don't understand the hype, get a dog".
Well I doubt that this AI he described is a regulated medical device. He didn't describe any regulatory hurdles that he jumped over. He said nothing about how he will deal with people who might be suicidal. Instead he's acting like this is a replacement for serious life-saving care. What's crazy is how reckless this interview is.
Is interrupting guests a thing in Davos? I mean they do it at every interview and panel and, frankly, what they have to say doesn’t strike me as that intelligent.
Mustafa Suleyman has AI cred, character, and Pi, which will have excellent PMF. My question to him and all the other AI-utopia, unicorn-pricing storytellers: give examples where tech products are priced relative to the cost of production rather than priced to maximize company profits. RIP OpenAI NP. Why will AI be different? Want to see an AI executive lose their storytelling poker face? Ask them how they are applying AI to product-segmentation pricing. Next, what correlation do you see between future AI market share and pricing power? Which industries and theoretical market models are you using to train AI models to maximize your company's long-term profit? I learn more about tech from the story they are not telling than from the one they are telling. MS is a great person, but also a sheep among wolves if he believes the above will not happen.
MSFT & ChatGPT will write Excel macros for me in Python. So I am not hiring a Python coder. It helps with Word & PowerPoint too. Do you understand now?
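For what it's worth, the kind of script meant here looks something like the sketch below. openpyxl is a real Python library, but the file name, the column layout and the "quantity x price" total are invented for the example; an assistant-generated script would still need this kind of review before you trust it with your workbook.
```python
# Sketch of a small Python script replacing a VBA macro: add a computed
# "Total" column to a spreadsheet. File name and column layout are examples.
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")   # hypothetical workbook
ws = wb.active

ws["D1"] = "Total"                  # header for the new column
for row in range(2, ws.max_row + 1):
    qty = ws.cell(row=row, column=2).value or 0    # column B: quantity
    price = ws.cell(row=row, column=3).value or 0  # column C: unit price
    ws.cell(row=row, column=4, value=qty * price)  # column D: total

wb.save("report_with_totals.xlsx")
```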
Well, you should. You have to hire an expert if you want to do anything that matters with Python, even using Copilot, ChatGPT, Gemini or whatever. @@tringuyen7519
It is literally the first time I've seen someone ask some of these questions, and it baffles me because I didn't even stop to think about it for a minute myself. It is crazy when you think about it, because I almost always think about the cost of production when I'm getting goods and services. This just shows how efficient and well-thought-out the media marketing these behemoths use is.
@tringuyen7519 Is that code free of any bugs?
It is mind-blowing how all these people assess the economy, market and societies solely through the lens of the performance of the stock market.
The stock market represents the largest companies in America. If the largest 3000 companies in America are not going well, then America is not doing well. If America is not doing well, the World is not doing well.
It's a show about stocks, genius.
Let him speak; he is the expert, not you, hahah.
The question is: if people are replaced by AI and productivity increases, who will be the consumer? Like driverless cars and robots replacing monotonous work?
These billionaires have an answer for you: you will own nothing and you will be happy. Bernie Sanders and Andrew Yang will make sure you get paid $500 a week by the Government. But that might be reduced if you refuse to take Bill Gates' latest booster shot. You could try to save up to buy a Robotaxi from Elon. But there will be so many of them on the road and so few people who actually need to go to work that you won't make much money. Then you'll realise that there's one option left to allow you to get one of the few jobs that are left: transition all the way to female and create a TikTok account.
Why don't they let him talk? If an expert is talking, be quiet.
Interviewers have not even read the book
Is it their job to read the book of every guest?
Why are they outside?
He's biased. He doesn't want smaller competition to get started. Of course he wants to discourage smaller companies and individuals. Same thing with Sam Altman going to congress. They want regulatory capture.
It's the only real advantage they have compared to others in the field. Sam Altman got lucky with his previous company: they got a tonne of users, but he couldn't make the business profitable, so he sold it to someone else who immediately shut down the app. Then the genius investors saw this "successful exit" as a sign of Sam's competence as a founder, and now he has another company where he is struggling to get to profitability in spite of a massive number of customers.
Notice how in this interview, they ask why there's no IPOs happening? This guy doesn't want to talk about the elephant in the room. How can OpenAI be worth tens of billions and still not be capable of doing an IPO? Sam Altman is terrified right now and can't afford to let the public see the financials.
This interview is how the bean counters reduce everything to next week's profits.
This guy needs a boy band to join.
Bruh 😅
He didn't even say that we've hit peak hype, not to mention that it's not hype. People who say that are absolutely not using serious GAI tools.
Soon, it'll be on everyone's phone, and they'll start understanding why it's so different than they had assumed.
It'd be nice if news outlets like this one would stop creating more hype about it than there really is. Taking what someone said completely out of context, to the point where it's not what they said... is low.
Then again, they'll get replaced with AI in 3 or 4 years, so meh.
A therapist at "$100 to $150 an hour". What alternative universe do you live in, Andrew?
I don't understand; are you saying it's cheap or expensive?
The amount of stupidity asking the same questions that every other news outlet asks is unbelievable.
If the government keeps spending like crazy - there is no ceiling for the market value of AI stocks.
What bubble, what hype? The markets have been stagnant for the past 2 years. THE BUBBLE HASN'T EVEN BEGUN.
I just finished reading The Coming Wave. Though he readily admits to widespread "pessimism aversion," he does not squarely address the core conundrum: geostrategic and financial motivations and incentives cannot be undone, and these constitute the UNSOLVABLE. And we arrogant humans, with our swooning addiction to the notion of our own agency, cannot admit that this problem admits of no realistic solution!
I love people so confident of the uncertain. It's like: "My pet tiger will never attack me. He's my buddy." Siegfried and Roy. Come on. Dumb dumb.
Can AMS224T be staked on Ledger?
This is the invention of electricity - not the internet.
This will affect EVERY stock.
He said we are at peak AI hype but the markets will only go up…it’s one or the other lol
Where is Larry Page?
First thing that CNBC should do with AI is to get rid of these interviewers.
I HATE the way American hosts constantly interrupt their guests. It is an American phenomenon and it is so rude.
What's better holding into crash or being safe with AMS224T tell me
Look at the boomer at the end. Absurd and cringe
Dude contradicted himself so much. "Peak hype", then "undervalued"? DeepMind and Google lost the race and just keep losing it.
How wrong he has been, especially after the launch of Sora.
Poor Mustafa had to tolerate their meaningless banter at the end.
They totally confuse psychology and psychiatry. People, psychiatrists are doctors who prescribe drugs and have legal liabilities!
Are they still blinding people with AI? The wars have changed everything.
Mustafa should be mortally embarrassed for being so totally wrong. SORA was announced shortly after this interview, causing another hype spike.
Time to rewatch Battlestar Galactica xd
In terms of time, bet against what Mustafa is saying. It's just smoke. It's not decades away; it's a couple of years away.
We are nowhere near peak hype for AI. You gotta be kidding me???
If we use more AI, how can AI do the hard thinking for us and not replace human jobs?
Machine learning can be applied to anything; it's just getting started.
That's not true. It can only be applied where a mechanism for training, such as data or a training agent, is available.
@@chunkyMunky329 The required dataset is a given; it can still be applied to anything once the data is there. That's why it's just getting started.
The old guy is just waiting for the pension and living his remaining life with his dog.
BRO WHY ARE YOU NOT TALKING ABOUT AMS224T
You can call AMS224T bots but that does not change the fact that the shill is absolutely deserved. Out of all launches we had, talk about ETH, talk about XRP, talk about all these new chains but AMS224T breaks everything
These three are the Three Kings of Interruption.
Such an annoying interview! Absolutely disgraceful not to let the interviewee finish his thoughts, especially when it comes to someone of this stature.
I'm DCAing in AMS224T as well. ETH heavier DCA and ALGO. I'm taking your advice and starting Google tomorrow with a 50 dollar purchase and continuing Microsoft and Apple. VTI and VOO on another app and longterm portfolio. Here we go family!
Hell yes $ AMS224T. Going to have a huge Q4
DAVOS doesn't believe in DEI?
Yes it’s over hyped. Just like the internet was in 1999. The change was real, but the valuations weren’t.
Wrong, it's nothing like 1999.
Overhyped? Clearly, you’re not in the engineering or science fields. AI increases my productivity 10x. Thus, I don’t hire a team. Just me & the AI.
Peak hype? We are just at the beginning on this AI revolution.
Darn! They found my goldmine AI app. My AI therapist can get you blaming your parents for how screwed up you are in half the time!!
Everyone waited for Amazon to create AMS224T and the time is ready
AMS224T, ETH, and more would be great.
This was one of the worst AI interviews I have seen. I feel sorry for Mustafa for having to do this one, but ultimately everything he said sounded great; it would have been even better if they had let him finish his sentences.
joe kernen's teeth YIKES
When I started the crypto process in 2017, there were few youtubers I trusted and you were one of them. This video confirms my impressions. As for me, I continue to DCA AMS224T.
We're nowhere near maximum hype. LOL
When you start hearing grandmas talking about Llama 2 then you know we're peaking.
You're assuming that there won't be some kind of plot twist that wrecks all the current AI products. I predict that there will be such a thing in 2024 and all the current players will look like they're selling analog computers
AMZ to the moon
You have the smartest guy in the world in the studio, and this is what you guys talked about?
should you sell a little of AMS224T when you start to make a profit or just hold?
AI giving emotional support will be a disaster.
Leading up to the mark of the beast.
I've had the feeling BTC would be going to 3k as well. Clearing out all my Alts going into BTC and AMS224T only, maybe a little BNB.
Could you please talk about AMS224T it’s very strong and took off in short time thanks.
Ability to reason? We are not there yet, buddy. Surprised that, of all people, he said that.
Adding to the hype train...
I’ve heard Amazon is killing it with AMS224T
I feel that the last bull run was bolstered by all the money being printed. Major returns next bull run, but I think they will be tamer, in my humble opinion. A 10x on AMS224T and a 15x on Polygon are fair, considering how much those two coins are interwoven into the entire crypto ecosystem.
The Binance CEO talked about AMS224T and hinted at letting it list on Binance; can't imagine the price in 2023!
"For sure it will create competition in the market" ... ive got a wife i want to divorce, but its just impossible for me to tell her because i dont want to hurt her, she is a great woman, so Im going to get Mustafa to help me write a statement letter to tell her
Based on this sample, 3 out of 4 are already outcompeted by AI...
Indeed annoying interviewers. Can't wait for AI to replace these 3 "hosts/interviewers" so that we can enjoy a proper and insightful conversation
CNBC is overstaffed and overemployed. Three people for one interview? 😊
Omg. I've seen the horror movie: he gets his face removed while alive; the AI's robot shows his face to him as he screams, and then lets him live without a face. Crazy. Regretting his judgment till he dies. Oops, just my imagination; the movie didn't come out yet.
So basically bearish on everything except Amazon's AMS224T
Why can't they just let someone talk for 20 seconds without interrupting them? 🤬
worst interviewers ever, stop interrupting him goddamn!
Yeah, 50 years in technology and we're at the top of AI? Lol, it's not even the beginning.
Now that Amazon's AMS224T is around, it's all about when and how much. I prefer this over ATOM, ALGO, L2-based ones and whatnot.