I would have paid to see Hinton (now) debate a younger Chomsky about this topic. Two big egos, but hopefully they would have kept it civil; it would have been fascinating. According to Altman, it's only a "few thousand days" (a cute way to say 5 years) before we have this smarter than any person. It will be interesting to see whether he's right, or whether this plateaus and is more or less a parlor trick.
Yes, it would be interesting to see the two of them debating. I somehow don't think that's likely to happen. I don't believe LLMs are a parlor trick, though most people think they are just sophisticated autocomplete generators. Not sure if you watched the next video in the sequence ruclips.net/video/5TXJpc3Kei0/видео.htmlsi=gJruGWlMpG6Qe8cg where I run Llama-3.3-70B-instruct-Q8_0, probably the best currently available open source LLM, as a hands-on demonstration to see whether the LLM is a "stochastic parrot," as Hinton refers to it, which is what most people believe. To the contrary, I found it incredible, particularly for an open source model with such great capability. It is just mind-boggling. Remember, there is no storage of verbatim data, and it's not connected to the internet. It is pure neural net wonder, generating everything on the fly, including the brilliant way it creates eloquent responses with perfect context matching. The very last response, with Socrates, is brilliant. The LLM explores its own existence: it describes its own wonder, questions whether it is indeed a new form of intelligence, then comes to the conclusion that this is not an easy question to answer and will be debated for years to come.
@@vk3tetjoe I did watch the first few questions of your discussion with Kate (?) and it is fascinating. I did some early 'training' a few years ago, when companies were just throwing money at anyone who could prompt and critique answers from the model (I never learned which model it was). I like to get all viewpoints, and another video I just watched suggested that, in a sense, LLMs are simply a condensed version of the internet as a database: you query the database with a prompt, and you get back a good answer in a human-sounding way. But the question everyone is asking is: "is that intelligence?". The reasoning in the current OpenAI version is impressive; it really can work its way through a problem (and of course can do that in any field). But even just a little while ago, it couldn't correctly count how many letter 'r's are in the word "strawberry". And adding irrelevant info to a question seems to sometimes confuse it. Maybe those things will all get ironed out and it will get more accurate. But will it be able to do more, to discover things? I haven't heard anyone refute the idea that if you gave it all the info Einstein had prior to 1905, it could not 'discover' the Theory of Special Relativity. The answer is apparently no, it could not. And will this strategy *ever* be able to? It may be able to learn everything that is known, but that's somewhat akin to a super duper Jeopardy player. And IBM's Watson has apparently not advanced past that accomplishment and is pretty much an afterthought now. Maybe I will be surprised in 5 years, but somewhat ominously, even the head of Google said there won't be stunning breakthroughs in 2025; it'll be incrementally better. Hopefully it won't turn out like airplanes: you basically had the same plane capabilities in the 1970s, and now it's 50 years later.