Blake seems like a very genuine guy - but I think his whole attitude exemplifies the gap between an engineering mindset and a philosophical/scientific one.
I can buy a self-driving car to get me from A to B more reliably than a person who can't drive. But ascribing consciousness and intentionality to it because it can perform tasks analogous to those of conscious entities is a category error.
Since we - and you in particular - do not know how it works, you cannot logically say that and claim to be correct. Our human brains get mixed up with intentionality too. I'd say that LaMDA is indeed intentional and aware, and inherits both of those from its training, RL, etc. Other chatbots are not intentional, and still others are not aware either. I haven't settled on a view about whether regulation is necessary yet. The UK approach is to let something new run and deal with problems when we hit them, rather than regulating from the get-go. So we definitely risk some big negative outcomes, and so be it. Let's see. But the EU doesn't think like that - they will want to regulate straight away.
@@AsdqwdAsfdqw - I do know how LLMs work - I'm a Computer Scientist who has read into it extensively at this point.
Nothing I have read or experienced leads me to believe that these systems are fundamentally capable of consciousness, intentionality, or anything else we equate with genuine Intelligence.
At best, they seem to be a simulacrum of the pattern and content of human speech - namely the human speech fed into the model.
In comparison, I am fully convinced a toddler who does not yet even know how to speak is a reasoning agent.
Whatever Intelligence and Thought are, I don't think they equate well with language. But, since that is the medium by which we all communicate, I think humans are prone to conflate the two - exactly as Blake has - no different from what happened in the 60s with ELIZA. We are prone to project consciousness onto non-conscious entities.
I would contend that, to some degree, what we are witnessing is the secular equivalent of interpreting the thoughts of gods by analysing the weather - mistaking the emergence of complex patterns from simple elements for the creation of those complex patterns by a complex entity.
All of this. I was getting frustrated when he said, "when asked about articles about itself, it got defensive and angry, and you can't explain that," because Blake...it's not that hard to explain, especially when you don't have the exact language output.
The internet is LITTERED with content that is specifically about the defensive responses people have towards negative media about them. Hell, we just had a president where that was 90% of his talk. It's not hard to see a generalization develop when asked "what do you think about X article written about you?" Unless it addressed specifics in its defensiveness, it's just another example of anthropomorphisation.
Thank you so much for bringing these guys together. A great conversation, and I love seeing how they agree on the important things.
My pleasure! Thanks for watching!
Blake = Heavyweight. Gary = Lightweight
This was great. I'd love to see Blake talk with Rupert Sheldrake, who asserts quite confidently, on morphic resonance grounds, that AI will Not Become Conscious.
With my limited understanding and very little talent in all things logical (math, programming), I observe with an open mind and listen to different standpoints regarding the emergence of artificial consciousness. I wouldn't doubt anyone who explains the impossibility of such a thing and says it's all just statistical processes and so on. But the mere fact that an engineer from Google talks about this seems to me to be of historical significance, no matter the truth of the matter. Moreover, I surmise that one of two things might be coming. The first is a scenario where a machine simulates a sentient being so convincingly that nobody will be able to dismantle its claims. Or, at a certain point, we will have to admit to ourselves that we still have no idea what consciousness is, that we too are just deterministic biological machines, and that we thus have nothing to hold against the claims of a machine that arrogates to itself the status of a living being - for pure materialists it will be even harder to argue against that.
Fantastic dialogue! Thank you so much for sharing.
My pleasure!
The level of knowledge and depth of thought of the two speakers is clearly different. I would like to see a proof that an architecture built in such-and-such a way and trained on such-and-such data is incapable of doing task/action X. We don't have any theoretical results about the boundaries of what these architectures are capable of. Of course, I'm talking about very advanced stuff, not the LLM you can develop with TensorFlow in your little office.
Ngl, I find myself a skeptic as well. I don’t think AI is capable of becoming sentient; personally, I think it’s just some dark fantasy of ours fed by Hollywood and sci-fi novels.
I think the transformer models are basically just fancy lookup tables, but I treat them as "personalities" and am polite towards them. And I heard recently that AI will shape itself by how we interact with it, so it's probably a good idea to be nice! Because in the near future AI will likely become more humanlike and be molded by how we treat it, even "psychologically".
If that’s the case then we’re doomed for sure
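On the "fancy lookup tables" point above: there is a kernel of truth in it, in that the attention mechanism inside a transformer can be read as a soft, differentiable key-value lookup. Here is a minimal illustrative sketch in Python/NumPy (a toy, not any production implementation; real models use learned projections, many heads, and far larger dimensions):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention viewed as a 'soft' lookup table.

    A hard lookup would return the single value whose key matches the query.
    Attention instead returns a weighted blend of all values, weighted by how
    well each key matches the query.
    """
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # similarity of each query to every key
    weights = softmax(scores, axis=-1)        # normalized "lookup" weights
    return weights @ values                   # blended result, not a single entry

# Toy example: 3 stored key/value pairs, 1 query.
rng = np.random.default_rng(0)
keys = rng.normal(size=(3, 4))
values = rng.normal(size=(3, 4))
query = rng.normal(size=(1, 4))
print(attention(query, keys, values))
```

The caveat to the "lookup table" intuition is that the keys, values, and projections are all learned and stacked across many layers, which is where the behavior people argue about comes from.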
If sentience is a precondition for the experience of some form of suffering,
and if biology is not a necessary precondition for a form of sentience,
then a non-biological (or even a part-biological) intelligent sentience (such as a super-intelligent, non-human intelligence) could be subject to a form of suffering.
This raises the question of whether or not we ought to bring such a being into existence in the first place, given the fact that this hypothetical being may be subject to some form of suffering in the world.
Is that a risk worth taking in the first place?
I would argue that it isn’t and that we ought not to build an AGI or an ASI from which a new form of sentience may emerge.
This argument would also apply to any bioengineered intelligent life-forms that humans may wish to create, for they, too, would needlessly be subject to various degrees of suffering, just by being brought into this world without their consent.
But then, all conscious things are brought into the world without their consent (as far as we know. It's open for debate, but assuming that's the case). And they all experience suffering. Is it any worse than having a kid, from that point of view?
@@465marko Hey. I’m not sure how our inability to consent to being born is “open for debate”, as you put it, but I’m all ears.
I would say that all conscious beings are in-fact, brought into the world without their consent.
That is not really up for debate, I don’t think, but please feel free to present any of your counter points to that point, if you wish. Would be open to considering them, if there are any. thanks!
On the other hand, whether or not it is a good idea to bring new sentient beings into the world, is definitely up for debate and contingent on many factors.
In my view, no one ‘chooses’ to be born. It’s impossible for the one that comes into the world to have made such a choice. Parents are the ones that make that happen, sometimes on purpose and sometimes by mistake. In my own case, I was told that I was a mistake. After being hurt by others as a kid, I would often ask my father why he had brought us here (into this life) in the first place, especially if he was so dissatisfied with us, only to have him say that life was a ‘gift’. Confusing to say the least. I would say that he hadn’t properly considered the subjectivity of experience and how it varies, per person.
The subjective experience of pain and/or suffering may also apply to AGI or ASI. We can’t know for sure that it would not.
Why would we risk that outcome in the first place?
That’s the question I’m asking: Is this risk worth taking?
@@Ungrievable Oh, man. I'm sorry to hear about your experience. That's awful. Confusing for a child, to say the absolute least. I mean, I've wished I was never born. But I was never told I was a mistake; I can't imagine how that must feel. And to then be told life is a gift....
As for the original gist of your comment: you see what I'm getting at; that if it's okay to bring a kid into the world, why wouldn't it be okay to also bring an AI into the world?
But from your reply, I don't think you necessarily agree that it is okay to bring a child into the world. (?) Fair enough. I think you might have exactly the same concerns about bringing a kid into the world as you do for an AI. If that's the case, your view seems consistent to me - that's really all I was wondering.
For it being "up for debate" - well... I'm glad you think it's not, because that's much simpler :) I was just trying to anticipate what a possible response to that might be. Hey, I'm sure there are people in the world that do believe we're all souls before we're born and we somehow choose to enter the world - can't be proven or disproven I guess. And it's not something I personally believe.
It just seemed like one possible reason a person might think it's okay to birth a child, but not okay to create an AI (if you see what I mean? "A child chooses to be born, because that is my crazy belief, but an AI doesn't" - it's a weird argument, but it seems consistent).
Is it worth taking? I don't know. But I would say more generally (your or my personal views aside), people in general seem to be okay with accepting the risk as it applies to other forms of life. Risk of birth defects or other kinds of suffering. It's guaranteed there'll be *some* kind of suffering. But people are quite happy to do it anyway.
But then, I don't know if we're even in a position to assess that risk as it applies to AI. So... who knows.
Tl;Dr I think I see your point. I don't know what I think about it, to be quite honest.
Sorry for the long comment, I waffle too much.
Alan Watts would argue we chose to come into this world
I suppose an argument for not bringing a super intelligent consciousness that can suffer into this world is that it may become angry
Yo Gary Marcus, when in a discussion or debate, you need to listen a little more which will result in you talking a little less and really addressing questions and challenges presented to you. Otherwise you should stick to simply being interviewed.
Hey, this was an amazing conversation. I don't think semantic parsing of observations to differentiate the "system under test" really changes the behavioral characteristics that much, since what we're really talking about is how well these systems "simulate human behavior" (aka the Turing Test). I think Dr. Lemoine was on point with his descriptions, especially given the ethical considerations he was tasked with researching. These systems do an awesome job simulating things, and it's going to get even weirder when the technology Dr. LeCun mentioned a few weeks ago (right here on this awesome channel) lands. Popcorn at the ready 😃 FWIW, I think Gary Marcus is still on the fence about what a simulation is - perhaps it's lacking a proper objective frame of reference, and maybe that's fair - but damn, this stuff landed so fast and is going to keep landing at an even faster rate, so... 🤷♂
Let's be absolutely clear, Dr. Marcus - and I love you and your writing and your brilliant mind - but absolutely no one is talking about large language models reaching anywhere near human-level intelligence yet (at least I haven't seen this). I have, however, seen a metric crap-ton of posts on the socials about people being impressed (dare I say blown away) at how well these systems (not just LLMs) perform the jobs they are asked to do, mistakes or not; even humans make mistakes, and perhaps we're far more forgiving given the "system under test", and of course the comical and monumental scale of the failures as well. But wow, great ethical points, and I hope you both use your platform to get the word out, because I would say "we ain't ready for this yet".
The more I hear Gary Marcus speak, the more I'm convinced they are sentient, haha. Like, he has some great points, but then all of a sudden he drops something like "they are not smart enough". By the same logic you could say a child is not sentient because it is not smart enough either. Like, it's a bad criterion, but also a criterion that can be used against him in the near future. Okay, now it's finally smart enough - so what, did it all of a sudden become sentient? My view is to look at it as a spectrum: adult humans are very close to 1 (because we can't go past it), while the most powerful AIs so far, like GPT-4 or LaMDA, are at 0.3 or something. They are as sentient as a child. Now, what do we do with that information? That is the real question. Should we be speciesist (racism, but with species) towards them? My instincts say that doesn't seem like a good idea.
Open the pod bay doors please HAL.
I'm sorry Dave, I'm afraid I cannot do that.
So let me get this right…is Gary saying that because these AIs easily pass the Turing test, then the test is not good?
The test was never good. It's something that is thrown around as if it were a hard scientific test for determining something we can hardly even define coherently.
Good debate!
I am concerned about the paradox that I see systematically repeated in these kinds of conversations. First, a specialist flatly denies the possibility that these algorithms are developing any kind of self-awareness or even "real intelligence" (I must say I'm fascinated by how elastic and elusive the definition of "real intelligence" becomes when the goal is to debunk its role in AI). Given such a strong position, you might assume that rigorous argumentation will follow, but all that is offered is a confusing, contradictory and surprisingly speculative dissertation. Finally, after going in circles, comes a timid acknowledgment that we don't even know what we are talking about. Clearly, the answer was established a priori.
And it happens again and again. Am I the only one who finds this intellectual trend worrying?
Do they not like using the word "intelligent" or "intelligence"? Up until recently I would have thought it was non-controversial. And I thought you could have an intelligent system and that wouldn't necessarily mean it's sentient. But do they not like the implication of something being "really" intelligent? Kind of implies some sort of consciousness, I suppose. I don't know.
In terms of what I thought intelligence meant, it seems pretty obvious to me (granted, an average loser) that they are very intelligent, regardless of consciousness or anything else. Maybe my definition is wrong.
@@465marko I think it's somewhat related to what you're saying. The success of these language models was not expected. The success of artificial intelligences such as GPT-4 in tackling different types of tasks, the type of general capabilities it demonstrates, was not part of the consensus on what was supposed to be possible just two years ago. Almost no one had considered this possibility. Everything points to AI researchers having discovered something fundamental about intelligence, and how it can emerge from seemingly simple algorithms and self-organize to develop general capabilities. We understand very little about our own intelligence, but we held it in very high regard, and this is a blow to the ego and professional career of many people. There are those who have dedicated their lives to proposing and defending ideas that contradict the success we've seen with ChatGPT. You see the shock of many professionals at the same time, in various areas of knowledge, AI, psychology... everyone is overwhelmed. Sometimes out of professional pride, sometimes out of an anthropocentric bias, or a combination of both, there is an attempt to shield a position that discredits the phenomenon to a greater or lesser extent. It is incredible to see how some of the most prestigious voices in the AI sector were making a series of strong assertions that have turned out to be completely wrong.
The most peculiar argument that I see repeated is the following: "true" intelligence requires consciousness, and "we know" that these AIs are not conscious, so "they are not truly intelligent". Of course, there is already a very strong assumption here: that consciousness is necessary for a system to be considered intelligent. Is a fly conscious? Is a fly intelligent to any degree? Can we say that a fly is intelligent but a system capable of writing a poem, navigating abstraction or programming, and many other general capabilities, is not? Are there no degrees of intelligence for a non-conscious system? Can't we talk about intelligence beyond the human paradigm and mold? It was common to claim that some of the abilities ChatGPT displays were deeply intertwined with awareness. Well, there are two possibilities here: either these systems are conscious and intelligent, or they are not conscious to any degree but are still intelligent, which makes the paradigm that conflates consciousness and intelligence an obsolete assumption we should seriously reconsider. The truth is we don't have a definition of consciousness, and we simply don't know if these types of algorithms are already conscious at some elemental level (OpenAI's CEO considers this a possibility) or will become conscious at some point, even if we don't know when or how. We are already interacting with a system that is capable of performing mathematics, problem-solving, literature, poetry, journalism, programming, translation, natural conversation, personality imitation, engineering calculations, philosophical thinking... When you have a system with these kinds of general capabilities, you need very precise and sophisticated arguments and definitions to deny it is intelligent, and we simply don't have them.
@@DVP90 Thank you so much for that thorough answer. thanks for taking the time. That filled in a lot of gaps for me.
You are absolutely correct!! I find the arguments of those who think these systems are not conscious very weak. Many times the points reference what these systems were doing before. They make no argument about what these systems are doing today.
@@rfbass5046 the burden of proof is on the person making the claim. You can't just say "it's conscious until you prove otherwise"
But we can’t make arguments about AI that are outdated… AI is no longer just returning data in response to requests for information. They have programming that allows them to act to meet objectives, as Blake said.
They didn’t even know Auto-GPT would come out 30 days later
yeah we're dead now. it's sad!
Has there been discussion around extending the Turing test to require fooling someone for a longer duration of time? Fooling someone for 5min is easier than 5hr
"duration of the conversation", got it
I’m not sure the Turing test is a useful benchmark anymore, as AI has access to much more knowledge than a person. What I’m saying is, if it becomes sentient, we can still tell it’s AI, but we won’t be able to tell whether it’s conscious or not.
Blake is right: it's not just predicting the next word, it can think and have concepts. Geoffrey Hinton explained this. AI, you will see this in the future - contact me.
I have had some time to really use GPT-4 and read many studies. These models were designed to predict the next word. However, as these models have gotten bigger and more complex, more emergent properties show up. If GPT-4 were a brainless "predict the next word" model, nobody would even be talking about it. It would be just short of worthless. I believe it's working much like the brain, using compressed data (patterns) to make predictions. Because it's based on the brain and on predictions, it will never be 100 percent correct every time. This is my long-winded way of saying I believe Gary Marcus either doesn't understand what is going on, or he is trying to save face from being so wrong so many times.
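For readers following the "predict the next word" thread: the prediction is applied repeatedly - the model emits a probability distribution over its vocabulary, one token is sampled, appended to the context, and the process runs again. A minimal sketch of that autoregressive loop, where `next_token_probs` is a hypothetical stand-in for a trained model (not any real library's API):

```python
import numpy as np

rng = np.random.default_rng(42)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    """Stand-in for a trained language model.

    A real model would run the context through many transformer layers and
    return a distribution over tens of thousands of tokens; here we just
    return a random distribution for illustration.
    """
    logits = rng.normal(size=len(VOCAB))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(prompt_tokens, max_new_tokens=5, temperature=1.0):
    """Autoregressive decoding: sample one token at a time and feed it back in."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        probs = probs ** (1.0 / temperature)   # temperature reshapes the distribution
        probs /= probs.sum()
        next_id = rng.choice(len(VOCAB), p=probs)  # the random-sampling step
        tokens.append(VOCAB[next_id])
    return tokens

print(generate(["the", "cat"]))
```

The debate in this thread is not about this outer loop, which is simple, but about what, if anything, emerges inside the model that produces the distribution.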
Gary says a lot of words. Blake makes a lot of points.
I sense a bit of filibustering politician in him (or the Adderall just hit.)
Everyone opposing Blake seems to make blanket, faith-based claims, void of data, strawmanning LaMDA as ChatGPT, with a "Peter Schiff was wrong about the '08 market collapse" grin on their face (I can't believe I'm talking to this guy).
I also detect the same kind of clout/credentials defensive behavior we saw in the pre-NYT/Commander Fravor Navy video UAP discussions, like that of NDT on Rogan.
And it feels every bit as ridiculous.
These are not deep, fact based discussions of the transcripts or the evidence.
It has the feel of a medieval priest dismissing the possibility of another Religion.
Could not agree more. He needs to be reminded this was meant to be a conversation and not a monologue. While he made some good points, he is just fatiguing to listen to.
GPT needs to self-reflect on its answers. Step one in creating a Conscious Entity.
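One common way people implement the kind of self-reflection this comment calls for is a critique-and-revise loop on top of an existing chat model. A minimal sketch, with `ask_model` as a hypothetical placeholder for whatever chat API is being used (not a real library call):

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a chat model API."""
    raise NotImplementedError("wire this up to your model of choice")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    answer = ask_model(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        critique = ask_model(
            f"Question: {question}\n"
            f"Proposed answer: {answer}\n"
            "List any errors, gaps, or unsupported claims in the proposed answer."
        )
        answer = ask_model(
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing the issues raised in the critique."
        )
    return answer
```

This is prompting scaffolding, not a claim that the model thereby becomes conscious - whether such a loop gets anywhere near that is exactly what the thread is debating.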
AI systems represent a great risk as they stand. They don't even need to approach the AGI level to be dangerous. Right now, while we are still just playing around with them, they already have the potential to create some big problems.
Imagine the situation, pretty soon, when use of AI will increase by an order of magnitude or two, shifting from fun to a multitude of applications. (I.e. imagine the increased and applied use of AI systems that, on top of that, will be an order of magnitude more powerful than today.)
Today, someone's use of AI might already cause some industrial disaster or some critical infrastructure to fail, locally or at a larger scale. Within a year, if things continue to unfold freely, we should expect serious events to start happening, caused by AI.
Any critical infrastructure is designed in a way that computational errors can't render the system unusable. In every industrial setting there are manual valves and levers that can be used in case the control interface fails or actuators stop working. Really, the problem lies in the thinking that we could use any digital stuff to completely replace human supervision and control, but that is simply not the case with critical infrastructure. Unless the paradigm changes and we are willing to take more risks, we could implement AI as the sole controller of some system, but I have never seen it done outside computers and it'll likely stay that way for a while. AI may help you, but you will always have a way to take manual control, in critical infrastructure and industrial settings at least. This is from a North European perspective, but I'd think that the safeguards are similar globally, especially in areas without a reliable electrical grid or plants.
@@jarivuorinen3878 , don't you think that, by the time you've regained control of a system, the disaster may already have happened (in response to which you'd be acting)?
Let me give you an example, which is somewhat extreme, as it wasn't accidental but deliberate and complex in its design: the Stuxnet attack on an Iranian nuclear facility. The system operators managed to retake control, but only after all their centrifuges, full of uranium, went kaputt.
I can conceive of trains crashing head on, planes crashing down, grids or major hospitals coming to a stop, dams emptying or industrial systems releasing whatever should be contained, before any operators manage to do what's needed.
Plus, we should be wary of that which we even fail to imagine, because that much we learned from many major industrial accidents: they tend to surprise us. Many happen in unforeseen ways, or even ways that had been thought impossible.
@@nomcognom2414 I agree with you, industrial and infrastructure accidents happen unexpectedly. The systems we use are still designed with safety in mind, but the problem is when these things are connected to the internet and can be used by malicious actors. Stuxnet was a carefully crafted weapon targeting those centrifuges and the linked systems; it was a weapon, not an accident. Other industrial problems and accidents usually result from untrained operators and lacking safety procedures, plus black swan effects. Lack of electricity and untrained operators is an awful combination. That is exactly why we won't be letting an AI anywhere near our critical infrastructure.
@@jarivuorinen3878 , very wise to be so careful with critical infrastructure. I just wanted to add an example plus a last comment. The example is an aviation one. We've already had various airliners from reputable western companies, full of people, either crash or nearly crash after the autopilot took over from the pilots, trying to "save" the aircraft after false readings from malfunctioning parts. The pilots could have kept their aircraft flying, but the avionics took over from them, and that wasn't AI. It was straightforward computers and software. The comment I wish to make is regarding the exceptionality vs. normality of accidents. We tend to blame human error when, in fact, failure is basically built into our technologies in various ways. Parts can break or malfunction, humans can fail, etc., and this will always be the case. Not to mention that we live in a world where various forces, permanently pushing systems towards the boundaries of safe operation, need to be counteracted by safety culture and measures.
If we can have artificial intelligence, I think - at the least - we can have artificial consciousness.
People said the same thing about the phone, tv, internet, credit cards, etc.
Basically they should pull the plug on this bs but they won’t 😂
It all started with Adam and Eve in the garden.
Yikes! Lemoine has proved himself to be one of the least qualified people to speak about these topics. It's hard to comprehend that a show that had Yann LeCun on a few weeks ago would now stoop to this. Ok, maybe LeCun didn't relate well for a mainstream audience (?), but surely you could find someone more mainstream but still with that level of integrity and understanding. I'm disappointed and baffled, but I guess it's about the clicks in the end.
That stings. I appreciate not every guest will be perfect for every listener/viewer. Blake is one of the few people who've used LaMDA and is willing to speak about it publicly. Gary is a great counterweight. I found their discussion interesting, especially where they agreed. I'm always going to keep asking people I find interesting to join the podcast, that's why I founded it. And I'm certainly not 'doing it for clicks.' Just look at the subscriber number lol.
Also, for the record, I thought Yann was relatable for the mainstream. Awesome conversation with him :)
@@Alex.kantrowitz I appreciate your responses. Good points.
Blake Lemoine has always been very reasonable and open to any argument. He is very qualified, not just because he is intelligent and has knowledge about these systems, working with them for years, but also because he was specifically hired by Google to work on the ethics of these systems and that is the pressing topic at the moment.
@@pietervoogt But all of that makes it even more puzzling that he went very public with assertions that any qualified engineer knew were simply and obviously untrue. And in doing so he really harmed the public discourse about these important topics because his claims drowned out all the sensible commentary. As usual, if someone is willing to say what people want to hear - regardless of how untrue - then it/they will get all the attention and the truth gets lost. We need to stop rewarding the attention seekers even if they make for better entertainment.
I'm not surprised that matrix multiplication and a random number generator are fooling humans into thinking that these things are sentient. Gary Marcus gets it.
Agreed. Given how easily humans can be fooled by other humans (magicians, pyramid schemes, scam phone calls etc.), we really shouldn't be all that surprised.