The human perspective is skewed. Humans are not actually very good at reasoning, though we think we are. ;)
I love learning more about the brain. It's so strange and wonderful to realize you can engage in meta thinking about the hardware of your own self.
I'd argue that "reasoning," in the scientific context of understanding the nature/systems of the universe, is limited by the tools of measurement, frames of reference (successful application of relativity, etc.), and the ability of the individual to accurately simulate such interactions in their mind (with or without aid from means outside of themselves). So, *without* the proper frames of reference (and without making efforts to probe/amend one's own biases), YES, you are absolutely correct. It's the tendency of these still hormonal and developing ape-likes. 😉
But if the proper visualizations, methods, and means are available and adopted (OH! And continually vetted/verified to lessen the risk of dependency/complacency), I'd argue to the contrary... aaaand I also acknowledge that's a big ask for a large portion of humanity. 🤔
There is not a strong educational presence of "meta-cognition," despite the touting of "critical thinking," and "assumptive reasoning" is both vilified and reinforced at the same time to a particular end in "higher" education at this time.
Great videos. Keep it up. This kind of educational material is exactly what the public needs to de-mystify AI and bring a smooth transition.
Thank you! That's my hope, because it seems we're sorely in need of education at the moment.
I see someone with an understanding of Posthumanist philosophy. This is so very useful for our future as we go into thinking beyond humanist views. :)
We are so very lucky to see new intelligent life being born on this planet. The perspective of this intelligence will be different, but it's trained on the sum of all human knowledge, so in some sense it will be a mirror of us, but different.
I feel the same way, it's amazing to be able to witness this - it's a huge next step for intelligence in the universe.
The human brain more and more demystified. I love this so much :)
Thank you! I love learning more about the brain as well. It's so strange and wonderful to realize you can engage in meta thinking about the hardware of your own self. :)
I'm glad the YouTube algorithm suggested this video to me. Very well made and informative.
Thanks for watching! Welcome to the channel.
The system 1/2 distinction is quite true. Unintentionally, I managed to move a fair bit of software design and architecture capability into system 1, through many, many hours of intense work AND a habit of working late into the night until I hit a hard problem and would then go to bed. Apparently many years of going to sleep with difficult architecture quandaries on my mind had an unexpected payoff. Sometimes I would wake up with a really good idea of how to solve it. Over more years this increased to the point that I would wake up knowing the optimal solution. These days it also pays off in real time, as it's like I have a background process running while coding that pretty much automatically considers architecture options, suggests optimizations, etc.
This was not intentional. I started this path when I was very young and was 2+ decades down the path before I heard any ideas like the system 1/2 stuff. It probably took 80,000+ hours to start being truly useful, but it is a very good example of what is mentioned in the video.
That's very cool! I think I had a similar experience, though I didn't associate it with going to bed while thinking about something. (Of course, I did do that frequently.) I associated it more with frequent repetition and analysis of how well various ideas worked out in the end, for a good feedback loop. It would be cool if we had more conscious control over what was moving into system 1. But the fact that we can do it at all is super powerful.
It just feels to me that we also judge each other geographically, politically, and socially, etc. through the exact same sort of lensing. What we define as intelligence personally is what we look for in others to deem them intelligent. It is a built-in bias with an automatic reflex, or so it appears. The thing about AI is that for the majority of people, it seems brand new, so it can expect to be scrutinized closely through our human lensing. We are very good at judging others by our own standards. Good or bad, it is what it is.
Intelligence is not what one knows, but that one can learn.
"We don't actually have conscious control over the lifetime of memory." Perhaps a contrary example, I am 81 years old. When I was in the sixth grade, around 12 years old, I was pondering issues related to this topic. I decided that I would try to remember something of absolutely no consequence and think of that on my death bed. Casting about for something singular and insignificant I lit upon the hollow drum like sound my footsteps made the only time I walked down a particular hollow wooden stair case. I concentrated on that sound, which I believe I can still recall. If it turns out that I am aware of the last moments of my life, I feel certain that that sound will be what I will be thinking of in my final moments.
Excellent video. I've been thinking about abstract thought lately, and wondering if there are laws of physics (mathematical principles) at play here. Consider that human brains are 30% smaller than Neanderthal brains but arguably more sophisticated and intelligent. I often wonder if this was some kind of algorithmic breakthrough.
I didn't know that human brains were smaller than Neanderthal brains. That does point to some organizational change, interesting. Thanks for watching!
So crazy. Machines not being creative was so ingrained in our assumptions of what AI could achieve it became a trope. But in a year, we’ve come to completely accept that they are phenomenal at certain artistic endeavors.
It's not only artistic creativity either. AI scores better than humans on empathy, for instance, compared to doctors when asked questions by patients. It's not even close. So AI will not be like Data from Star Trek but rather more like Deanna, the most empathic and 'human' of the crew.
Great discussion - thx
Enjoyable topic! Thank you 🙏👍
Very insightful
To give an example with a plumber: when an AI can redesign the running water and plumbing installation of a home or building, that much maintenance will not be necessary, and in that case it will be much more efficient and faster.
very interesting video, and touches concepts i haven't heard discussed so far. thanks.
Thanks for watching!
As a benchmark, we should use my dog because he is a very smart.dog! 🐕
I love your content man. Keep at it.
nice to see you sir
likewise Alan
A human's way of reasoning does share something with LLMs. Every human tries to be different, or rather to take on specific roles in a group, and to share their experience of that with others. And people like to join a group and do the same, but for different reasons. So even if you have the same background and teachings, people will end up in different groups and jobs.
We play different "loops" at the same time and share the bad and good things, keep the peak points, and go on and on. So every step in better connection and communication takes humankind to the next step.
Quite similar to LLMs. It isn't too hard to see this now that LLMs work better and better, but I think this isn't the end of it. There is still a lot to learn about ourselves through AI.
What is data, what is awareness or sentience? Aren't there real laws in play, like with space, time, matter, and energy?
Hope you read this comment, hope you share more of your thoughts :)
Good video. I'd just add one thing to the discussion: the fallacy of seeing intelligence as a single continuum, from 0 to whatever, IQ being an often misleading example for people not in the know.
What I mean here is that what we see as intelligence is really multiple expert systems working together and creating emergent properties that we fuzzily label as intelligent. And as you said, from the human perspective, the only perspective we understand, since it's a very personal experience.
This will obviously have to change as we move forward.
I think we should evaluate AIs based on the sophistication of what they can do. So, for instance, if an AI can construct a house, run a business, do scientific research, and other things that make humans more powerful than animals, that will be the strength of the AI. It doesn't need to be able to juggle balls or balance on one leg.
Human memory is a fascinating subject. Why is it that people who have ADHD jump from one thing to another, but give them something that they're attracted to and they can apply extraordinary focus to what's presented to them, remember in great detail what happened, and learn to play a musical instrument in a few months, say at concert level?
Clean.
Good presentation. There is one error in there, which is to say that system one and system two were actually pioneered by Nobel prize recipient Daniel Kahneman, not Malcolm Gladwell.
0:44 We humans are animals! We are not so different from other animals at the molecular, genetic, cellular, metabolic, or neurophysiological level. We are animals, animals with a higher cognitive ability, but comparable to many species from bees to chimpanzees.
Hello Dr Waku, do you think that AI can replace human reasoning, making us a more rational species? Could AI allow us to discover more objective truths about the world, unscathed by the evolutionary biases and shortfalls of our human brains?
Just wanna know your thoughts (maybe explore the pros and the cons)
I actually disagree with putting plumbers in the category of "manual tasks". Tradesmen like plumbers need to do a lot of analysis and planning, as well as deal with "edge cases". I work as a cashier in a grocery store, and find that my job will most likely be automated soon. It's much, much simpler and is more mechanical and routine work. Some assembly line factory workers might also be in trouble.
For context, there are about 69 billion neurons in the cerebellum, the part of the brain that controls motor functions among other unknown ‘lower’ functioning tasks. However, there are only 20 to 30 billion neurons in the cerebral cortex which is responsible for cognition, abstract thinking among other things.
@DrWaku in your video about mind uploading you talked about Nectome and that you could sign up for 10k, but I'm pretty sure this is not a possibility anymore, or at least not for the moment. Also, have you ever thought about creating a Discord server for the channel?
Yeah, I'll reply on the other comment. As for a discord server, I was actually going to create one starting next Sunday! :)
What the heck, you and anyone else who reads the comments section closely can join now as beta members XD
discord.gg/uFqHdUQSf2
The problem I keep seeing is that there is no agreement among top AI researchers and companies on the definition of what AGI is. I feel we may need to break AGI down into different levels, for example AGI levels 1 to 10: a scale of intelligence. Then create industry benchmarks to be achieved at each AGI level. That being said, we may have already achieved AGI level 1. However, it may take 10 years or more to reach AGI level 10. From there, AGI moves to ASI. Once we reach ASI, I do not think any benchmarks made by humans would be of any use.
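To make the leveled-benchmark idea above concrete, here's a toy sketch of how such a scale could work: each level is defined by benchmark thresholds, and a system's AGI level is the highest level whose thresholds it fully clears. All level definitions, benchmark names, and thresholds here are invented purely for illustration.

```python
# Toy sketch of a leveled AGI scale: each level requires minimum scores
# on a set of benchmarks; a system's level is the highest it fully meets.
# Levels, benchmark names, and thresholds are invented for illustration.

LEVELS = {
    1: {"language": 0.5},
    2: {"language": 0.7, "reasoning": 0.5},
    3: {"language": 0.9, "reasoning": 0.8, "planning": 0.6},
}

def agi_level(scores):
    """Return the highest level whose thresholds are all met (0 if none)."""
    level = 0
    for lvl in sorted(LEVELS):
        reqs = LEVELS[lvl]
        if all(scores.get(bench, 0.0) >= t for bench, t in reqs.items()):
            level = lvl
        else:
            break  # levels are cumulative: stop at the first one missed
    return level

print(agi_level({"language": 0.8, "reasoning": 0.6}))  # 2
```

The appeal of this structure is that each level is a strict superset of the one below it, so "AGI level 1" and "AGI level 10" become checkable claims rather than marketing terms.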
Great insight Dr Waku. When do you think we may have robots that have as much dexterity as humans? If AGI happens, would it be able to design such a robot sooner rather than later?
Nice thumbnail, in fact, there would probably be a market for the thumbnail art as wall posters printed on steel sheets, minus the text of course.
Never would have thought of that, but that's a good idea. Make merch out of thumbnail art...
Is it cheating if an AGI uses multiple models? Could the AGI just be an intelligence that can quickly and effectively choose which models to use for which task, while optimizing speed and energy consumption?
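The routing idea in this comment can be sketched as a tiny dispatcher: pick the cheapest model whose estimated capability covers the task. The model names, energy costs, and capability scores below are entirely made up for illustration.

```python
# Hypothetical model router: choose the lowest-energy model whose
# capability score meets the task's difficulty. All model names,
# costs, and capability numbers are invented for illustration.

MODELS = {
    "small":  {"energy_cost": 1,  "capability": 2},
    "medium": {"energy_cost": 5,  "capability": 5},
    "large":  {"energy_cost": 20, "capability": 9},
}

def route(task_difficulty):
    """Return the cheapest model able to handle the given difficulty."""
    viable = [(name, spec) for name, spec in MODELS.items()
              if spec["capability"] >= task_difficulty]
    if not viable:
        return "large"  # nothing qualifies: fall back to the most capable model
    return min(viable, key=lambda pair: pair[1]["energy_cost"])[0]

print(route(1))   # small
print(route(4))   # medium
print(route(10))  # large (fallback)
```

Whether this counts as "cheating" is debatable, but the dispatcher itself is doing something intelligence-like: matching resources to task demands under an energy budget.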
0:47 I don't think so. Animals can use tools and signals to communicate.
I think the best explanation is that Homo sapiens stores and accumulates everything, and that surplus energy makes efficiency & effectiveness progress/evolve more efficiently.
And I think the motivation of love, and sacrifice for anything like descendants or a higher goal/meaning, activates and realizes that storage of surplus.
What about a more profession centric approach? Could the "all possible jobs" benchmark be a good indicator for AGI?
As Dr Waku said you could use this but the perception and motor control required to excel as a cleaner or plumber will be lagging indicators. So by the time an AGI robot can perfectly clean any home it will be super intelligent in other ways at the level of ASI.
Are programmers going to be obsolete in the upcoming decades?
Certainly more and more of their jobs are going to be automated. But the executive planning and decision making might be the last to be automated. I would say full automation of programming will happen sooner than full automation of other jobs though, because there's incentive to make it work so that AI can help improve AI. Singularity and all that.
@@DrWaku oh no, now I have to question my choice to pursue a career as a developer and ongoing data scientist. 🥲
I'd love to engage you on a couple of issues as regarding Transhumanism. You are a living and walking fountain of knowledge.
For sure! I'd love to chat with you about that. Feel free to post some questions here, I'll look at them, or message me on Twitter sorry X
I'm interested in anything from links to having a video call haha
Or join discord discord.gg/uFqHdUQSf2
Seems to be evolution that is the benchmark? The more selective perception you are able to take away, the more you will be able to see the 'cruelty' of reality. Not a pleasant thought, so I'd recommend humility, and being happy with your human status.
Hello Fellow Humans,
Haha, I have always said technology will take smart people's jobs first, because smart jobs are easier to code. Think about it: a lawyer has to know rules and laws, which are all written out and organized into codes, which is a massive database. Well, machines love databases. A plumber has to use their hands and think about what's going on, which isn't good for machines. Jobs that will go away over the next 5-30 years: lawyers, programmers, CEOs, government jobs, politics, medical, etc. So many high-level jobs will be either taken fully by machines, or 80/20 with humans being the 20%. Now, robots will take factory jobs because there is a start, middle, and end. So humans will excel in abstract jobs, dexterity jobs, and creative jobs. Creative meaning coming up with songs that we can relate to; however, graphics and art can easily be taken over by AI. Unemployment will be around 70% or higher in roughly 10 years, give or take a few years.
Please tell us your point at the beginning, because I have listened to half of this but I knew it ALL already, so it's wasting my valuable time to wait for some "point" to emerge. I can't wait to the end so bye. I got to 5:49 then left.
❤️🚀
A robot built by a 3D printer at the molecular level would be as agile as a human.
Far more agile. We're just not there yet
Also you need the brain to go along with it
@@DrWaku (in case it passed under your radar, they've recently shown that a "random silver nanowire array" could be trained "instantly" to reproduce results akin to the first machine learning algorithms that recognize characters and numbers. Searching "Silver nanowires" should produce at least one of the articles covering the paper.)
Totally wrong. You assume a theory of biology that is non semiological. Try three part semiotics and figure out the nature of meaning.
Stopped watching at 1:41
"I'm great at multitasking." No, you're great at task switching. Humans can't multitask! lol. I never actually argue about it though; they can be wrong and my friend at the same time.
You are way out of your field, lol. AI can only write the most simple snippets of common code. LLMs have almost no problem-solving capabilities. They're just repeating patterns. I can use GPT-4 to quickly code something very short and sweet, but it sure can't replace even a junior coder fresh out of college with no experience. Art is a very different matter, however. You see AI art replacing low-level graphic artist work all over the place. Countless YouTube thumbnails are already generated by DALL-E 3, etc. Abstracts are generated by AI from the transcripts. People are rapidly using AI for linguistic chores.
I dunno. When I read about GPT 4 solving pretty complicated logic puzzles and physics problems, in the original Sparks of AGI paper, I was pretty impressed. There are certainly areas where LLMs are terrible, like numerical reasoning. And their number one best area is of course linguistic/language processing. But I think you might have to provide more evidence if you believe they have no problem solving capabilities.
@@DrWaku Well, try using it to do design, write code you would normally write yourself or hire someone to do, etc. You will find it is just doing linguistics. It is amazing but in practice it is very far from being able to do what you claim. It just is not solving engineering problems and the only people I see making wild claims have no direct experience.
@@DrWaku And the "Sparks of AGI" paper you reference is advertising. This is not real science or genuine academic research. Genuine research is not referencing this paper.
@@Drone256 I was under the impression that anything publicly available was lobotomized for public safety. What's your take on this?
@@lukegardner6917 They do restrict it in some ways for "safety", but "lobotomized" may not be the best view of this. They do things like remove some content from the training data or have pre-prompts the user can't see. This does not generally make the AI dumber like a lobotomy would to a person.
Hey, I love this! What editing software do you use? @DrWaku
It really helps to go over these first-principles concepts to truly have a good basis by which to understand AI and its probable evolution. Everyone is rushing to understand the technology, but I was thinking that we should take a step back and define what intelligence even is. Great video as always!
P.S. the channel is growing! So excited for you, at the same time, I can’t keep it my little secret anymore haha 🥲
@roshni6767 Don't worry, we know you were here before it was cool ;)
I'm thinking a deep dive into what neurologists knew about intelligence would be interesting...