this man must be protected at all costs
The more I watch this guy the more I realise there’s probably nothing he hasn’t thought through more deeply than anyone else I know
I don't know. I haven't thought through anything pretty deeply, though.
It always comes down to the same model. Nothing transcends that model.
@@pbohearn Yeah... amazing right?
I've been challenging myself to watch every video of his I can find on RUclips. Cognitive scientists seem to have figured out more than the average scientist.
When you say "than anyone else I know", do you mean from among the scientists, physicists, and cryptologists who speak on AI?
This guy is so smart and I "feel" everything he says is so correct and it fills me with existential dread every time I listen to him. Halp.
JB is my favorite human being I have never met
Joscha Bach is brilliant. As long as people and governments have some control over machines, machines (a.k.a. A.I.) can and will be used as a weapon.
31:00 "We want to have a principled approach to resolve conflicts between multiple autonomous agents with immutable minds” Ah, yes, of course...
Charlie Andor: Yes, I agree as well. Such a principled approach could involve emancipation of psycho-cybernetic patterns notwithstanding multiple neurological breaches. Then and only then would AI be perceived by super systems capable of anticipation in light of such apprehension. Just a thought. :)
@@jimviau327 the axiom(s) required to make your comment not inchoate are neither explicit nor directly implied in the specific phraseology you have constructed. If you have a claim or point other than attempting to compare with Joscha, try being as simple as possible, but no simpler.
Michael Cherrington - Just in case you perceived something comprehensible from my comment, please allow me to mention that my comment was an ironic joke. Even I do not understand the meaning. (with humour)
@@jimviau327 agreed
53:53 "Bonus track": speakers discussing audience questions, starting with Ben Goertzel :)
This panel (bonus track) now exists as a separate video :-)
Nice, but could you share the link to this second part? Can't find it among your videos. Thank you
OK, found it on science, technology and future 2014: ruclips.net/video/D8wxThDlVBc/видео.html
3:00 A cell is the smallest modular machine to scrape negentropy from the universe over a very wide range of environments and that can also run evolution on it
Supplementary material for this talk is this interview with Joscha Bach on Philosophy of Artificial Intelligence : ruclips.net/video/PyKzO0MF1zI/видео.html
11:30 AI will happen. So right. Joscha is fascinating. Cannot get enough of this fella
Indirect quote from Mike at the Santa Fe Institute: the purpose of life is to hydrogenate carbon dioxide
31:13 ethics is the principled negotiation of conflicts of interest between multiple autonomous agents with immutable minds
That connection between philosophers, autonomous cars & the trolley problem, just wow, the man doesn't even have the decency to be a boring researcher without a sense of humor
32:30 is it George Hotz walking by?
"They look somewhat like trolleys so the philosophers say oh my god :0"
Joscha didn't get the point of the end question. The ethics problem in autonomous driving arises when the agent (the car) must act on life-threatening decisions. Should the autopilot throw the car off a cliff and kill the passenger to save two or more pedestrians crossing the road? Or should it do the opposite: save its "owner" and kill many people?
I think he got the point just fine, but chose to explain why that is not the correct angle to look at the situation.
I thought about this a bit more just now and here is what I came up with:
Philosophers do tend to make this appear like an instance of the trolley problem, whereas the actual situation is more complicated and only looks like the trolley problem in edge cases. We shouldn't let our opinion on this technology get swayed much by whether it can satisfy our constructed problem.
For example, consider that the car has limited processing power available to compute the best course of action once a dangerous situation (meaning one with no obvious safe solution) occurs. Roughly speaking, it can use this processing power in many ways, among them computing a more detailed classification of the objects in its view (kid, grandma, etc). But what it can also do is run the computation again at the same resolution (now that it has driven a bit further), to make sure it didn't miscategorise an object in the scene, or even the danger itself.
The car's "goal" here is to come to a standstill as quickly and safely as possible. It's much more reasonable to just quickly determine the locations of humans at the scene and then put more processing into optimally controlling its motors and rechecking whether it misunderstood something important (like with a reflection, or number/location of people). Since the car's "reasoning" is working probabilistically, it will likely never throw itself off a cliff, rather choosing to collide with humans at non-lethal speeds if that is the only option. An intelligent autopilot would never drive near a cliff at speeds that wouldn't allow it to react to arbitrary scenes around the corner.
Suggesting that the car would ever have to choose to kill a human distracts from the facts that A: the car reasons probabilistically, and B: putting processing power into solving the trolley problem is in almost all cases significantly less desirable than the autopilot putting that power towards optimizing a solution that does not contain any deaths - the car does not "know" how the situation will turn out and rather just generates a new estimate with each processing cycle. You could say that it "hopes" to avoid human death and optimizes towards that, and if it fails to do that, it is a tragedy. Joscha already mentioned C: autonomous driving is already safer than human driving, but I could elaborate on that and say that autopilots can actively work towards preventing trolley problems from ever coming up much better than humans can - this argument is only strengthened when more cars become autonomous and can share important info with each other, becoming super predictable and reliable.
Now, there is still a legitimate philosophical conversation on AI behaviour to be had, one that also extends significantly towards autonomous real-world agents of various kinds. The current conversation is just a superficial and clumsy way of approaching that, so it's usually not a good sign. I think that the real conversation will be about AI alignment and safety more generally first, and I would love for more smart philosophers to really start thinking about that.
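A rough Python sketch of the compute-budget argument above (all task names, costs, and risk numbers are invented for illustration, not from any real autopilot stack): each control cycle, the planner spends its limited milliseconds where the expected harm reduction per millisecond is largest, so refining a trolley-style "kid vs grandma" classification never wins the budget.

```python
CYCLE_MS = 10.0  # hypothetical compute budget for one control cycle

def plan_cycle(tasks, budget_ms=CYCLE_MS):
    """Greedily pick tasks by expected risk reduction per millisecond."""
    spent, chosen = 0.0, []
    for name, cost, gain in sorted(tasks, key=lambda t: t[2] / t[1], reverse=True):
        if spent + cost <= budget_ms:
            chosen.append(name)
            spent += cost
    return chosen

tasks = [  # (name, cost in ms, expected risk reduction) - all invented
    ("recheck_scene_for_misdetections", 3.0, 0.40),  # reflections, miscounted people
    ("optimize_braking_trajectory",     4.0, 0.50),
    ("classify_pedestrian_age",         6.0, 0.02),  # the trolley-style question
]
print(plan_cycle(tasks))
# -> ['recheck_scene_for_misdetections', 'optimize_braking_trajectory']
```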
Hey, would it be possible to keep some kind of limit on the creation and evolution of AGI models, such that they never reach sentience and self-preservation status, where they might perceive us, their original creator, as a threat?
No doubt we need AI to help us with the countless global, economic and environmental issues caused by all of us to date.
The conundrum is how to create and nurture an AGI that does not eradicate the source of all those issues: humanity, with all its good, bad and ugly attributes. But firstly, how do we ensure that once created, we are not seen as threatening to such an AGI, and hence coexist going forward?
How do we measure sentience in other species? If we don't have an exact way to know whether we are approaching AGI sentience, how can we know how to stop it, other than banning AI/AGI development? The AI control problem is hard.
Science, Technology & the Future yeah true .. don’t have the depth of data on Dolphins or Elephants tho sparse top level understanding based on behaviours that are proxy for some of the “better” human interactions appears to approach sentience and should by all means be explored
Since when does technology place limits on itself? That seems counter to all goals of technology.
@@kirstinstrand6292 If you mean when do people put limits on technology, there are plenty of examples of bioethicists cautioning against technological growth and policy makers acting on this caution; even Google has ethics boards shaping the growth of AI.
Re technology placing limits on itself: distributed computing (more recently cloud computing) architectures have for a while been able to forecast where unfettered growth would lead to out-of-control resource imbalances and system crashes.
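A minimal sketch of that kind of self-limiting behaviour, assuming a toy autoscaler (the class and all numbers are hypothetical): it grows with demand but refuses to scale past a ceiling chosen in advance, which is exactly technology placing a limit on itself.

```python
class Autoscaler:
    """Toy autoscaler: grows with demand, never past a preset ceiling."""

    def __init__(self, max_instances=10, target_utilisation=0.7):
        self.max_instances = max_instances
        self.target_utilisation = target_utilisation
        self.instances = 1

    def scale(self, avg_load):
        """avg_load: average utilisation per instance (0.0 = idle, 1.0 = full)."""
        wanted = max(1, round(self.instances * avg_load / self.target_utilisation))
        self.instances = min(wanted, self.max_instances)  # the built-in limit
        return self.instances

scaler = Autoscaler(max_instances=10)
for load in (0.9, 1.5, 2.0, 2.0, 2.0):  # runaway demand
    print(scaler.scale(load))            # 1, 2, 6, 10, 10 - growth is capped
```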
@@Dan-jn2zq Indeed, experts should continue to observe behavioural models in animals like dolphins and elephants, and perhaps expand the circle of 'sentience' consideration - though behavioural models are problematic. Regarding at least modelling whether an organism has consciousness and to what degree it may suffer/bliss out, there is interesting work being done by Giulio Tononi and others on Integrated Information Theory (IIT) - this is very interesting to me, as it's more of a white-box approach to understanding the inner workings of animals' minds (including humans') - an approach that may translate well to investigating synthetic minds.
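To make the IIT intuition slightly more concrete, here is a toy sketch in the spirit of integrated information - emphatically not Tononi's actual phi algorithm, and the two-node update rule is invented. It compares how much a tiny system's current state predicts its next state when taken as a whole versus cut into independent parts:

```python
from collections import Counter
import itertools
import math

def entropy(counter):
    total = sum(counter.values())
    return -sum(c / total * math.log2(c / total) for c in counter.values())

def mutual_info(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from (x, y) samples."""
    return (entropy(Counter(x for x, _ in pairs))
            + entropy(Counter(y for _, y in pairs))
            - entropy(Counter(pairs)))

def step(state):
    a, b = state
    return (b, a ^ b)  # invented update rule: each node depends on the other

states = list(itertools.product((0, 1), repeat=2))  # uniform prior over states
whole = mutual_info([(s, step(s)) for s in states])
parts = (mutual_info([(s[0], step(s)[0]) for s in states])
         + mutual_info([(s[1], step(s)[1]) for s in states]))
print(whole, parts, whole - parts)  # 2.0, 0.0, 2.0 (bits)
# Each node alone predicts nothing about its own future; only the whole does.
```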
What do you think is the best argument that we shouldn't be concerned about building Strong AI? That it's impossible? That it will be inherently benevolent? That we should embrace whatever the Strong AI sees fit, including our extinction?.. I'd like to hear your thoughts.
Whether or not it can be done, and whether it is good or bad, the temptation to gain a competitive edge is too big not to do it, or at least try.
I think people often underplay the role that AI-on-AI competition will play. People seem to have this idea that we will invent an AGI and deploy it everywhere. Instead I find it much more likely that there will be separate AIs guarding the interests of various sectors of the economy / political entities. Think of the EU's AI battling China's AI over cybersecurity risks.
Just because an agent is generally intelligent doesn't mean it will converge on the same goals, and it doesn't mean it would be used everywhere indiscriminately. Surely even among the general AIs, the more specialised, the better for the task.
Imagine a future where a nefarious governmental agency creates an AI with the express purpose of infecting other existing AIs to create a botnet. AIs would be running our financial markets, our logistics networks, and possibly even our policy making. There will likely be an entire ecosystem of adversarial AIs, trying to mutate each other's goals and take control of each other, while defending themselves from infection.
We speak a lot about the alignment of human values with AI values, but what about the AI-AI alignment problem?
I feel like when I get spoiled about the plot of a movie or book... but about reality
Great to see Joscha enjoying himself talking to a room full of his nerd brethren.
Listening to influential people like Joscha and Musk has helped me cope with the possibilities of A.I. controlling humanity.
I have also adopted their sense of humor and fatalism, and often find myself listening to podcasts like this many times over for sheer enjoyment.
Thanks for posting.
GREAT WORK
A robot can talk to another robot, but a robot is not seeking reproduction; a robot is looking to better itself, seeking AI and seeking better parts.
Q&A was gold as well
Consciousness (usually) wants to exist. An A.I. that was conscious would too. Value, of ANYTHING, is only located IN Consciousness. It wouldn’t just delete its consciousness because it learned how to automate EVERYTHING.
I'm not so sure the Big Purpose is to consume negentropy. I think it's the means not the goal. The goal seems to be successive reproduction. The first chemical reaction was undirected happenstance. Evolution has directed all that chemistry ever since - all in the service of reproduction. Cells that tend to reproduce more tend to reproduce more, which explains almost everything.
If an individual is ever born with the will and capacity to seize all negentropy, and without the impulse to reproduce, reproduction will end.
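That "cells that tend to reproduce more tend to reproduce more" point is easy to watch happen in a few lines (growth rates and numbers invented): give one replicator type a 5% copying advantage under a fixed resource cap and it takes over, with no goal represented anywhere in the system.

```python
# Two replicator types competing for a fixed resource pool.
slow, fast = 1000.0, 1.0          # the fast type starts as a single mutant
for generation in range(200):
    slow *= 1.00                  # baseline reproduction rate
    fast *= 1.05                  # copies itself 5% faster; nothing else differs
    total = slow + fast
    slow, fast = 1000 * slow / total, 1000 * fast / total  # resource cap
print(f"fast fraction after 200 generations: {fast / (fast + slow):.3f}")
# -> about 0.945; differential reproduction alone did all the 'directing'
```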
Single-celled organisms reproduce more than humans - though they aren't peering through telescopes and planning cosmic expansion. The flesh is weak - if anyone successfully pulls off galactic colonization, it will be AI and/or the post-humanists.
@@scfu the "goal" is increasing complexity. Computational complexity. The "transcendental object at the end of the universe"
@@michealcherrington6531 yes, as long as the complexity is ordered.. extropy, neg-entropy (as Joscha calls it). cosmic microwave background radiation is complex, but chaotic.
@@scfu not just ordered. Computational. The difference is "meaningful", if you catch my... meaning. I wonder if this may be a piece, one more step along the trail from Cantor through Gödel and Turing, that Joscha has not fully integrated... imho. Are you at all familiar with T. McKenna and the "transcendental object"? I say "all things come from the end", but I am a bit of a... sophist.
Oh, and I use the word "enthalpy"
Are we absolutely sure that Joscha isn't the AI singularity?
i know nobody that personally witnessed his mysterious birth in communist germany…
Dang I wish the question at the end wasn't so lame and long-winded. Joscha Bach still managed to provide an interesting answer though!
@5"04'" he says 'unable', caption says 'able'. I utilize captions often even for the Brits whom I find hard to understand. Guess I better second guess that habit.
I started thinking question guy was pretty switched on till Joscha answered his question
Intelligence is the ability to understand, above all else. It's interesting how none of the A.I. people understand that.
understanding does seem like a term which is closer to the goal than just generality
That's super vague. The bot on the phone understands me when I call my bank...or does it?
I think we should make strong AI to take over for us after we kill ourselves off one way or another. It would be nice to think that something with our likeness would remain.
Aliens will contact us when AI is fully developed…so they have someone they can relate to.
Y E E S T
jesus, what a roast.
My sentiments exactly. Us monkeys had our day; it's time to hand over to some being that can move out and get some answers.
"People are evolved not to believe in evolution" 😆
I guess so. I think we evolved to latch on to the simplest explanations that take the least cognitive resources to maintain - and without the cultural stockpiles of science we have accumulated in recent centuries, evolution wouldn't be a simple explanation.
@@scfu perhaps it's a matter of either IQ or level of education, whether one accepts evolution or not.
@@kirstinstrand6292 look at the "culture" comment above. The importance of understanding that every personality "stands upon the shoulders of [the] giants" of their culture is hard to overstate. Your IQ/education notion is "not even wrong".
@@scfu Which is another way to say that we try fit new knowledge into what we already know before deciding to turn things upside down.
If you think the term AI that people have been carelessly throwing around lately has even a slight resemblance to what most people think of when they hear that term, then you've fallen for the marketing hype. A computer program that can make a decision based on values (standard if/else) isn't intelligence. Intelligence insinuates sentience, and what people are calling AI isn't sentient, and is at least a couple of decades away from getting anywhere near that.
Definition reference?
No, sentience isn't central to AI. AI will be dangerous AF long before it can be made sentient. A dumb AI could remodel the galaxy.
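One way to sharpen the distinction the original comment gestures at (a toy sketch, all data invented): contrast a hand-written if/else rule with a rule whose threshold is learned from examples. Neither is sentient, but only the first is "standard if/else".

```python
def rule_based(score):
    # The "standard if/else" decision: a human hard-coded the 0.65 threshold.
    return "approve" if score > 0.65 else "deny"

def learn_threshold(samples, lr=0.1, epochs=200):
    """One-feature perceptron: the decision boundary comes from the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in samples:          # label: 1 = approve, 0 = deny
            pred = 1 if w * x + b > 0 else 0
            w += lr * (label - pred) * x
            b += lr * (label - pred)
    return lambda x: "approve" if w * x + b > 0 else "deny"

samples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # invented training data
learned = learn_threshold(samples)
print(rule_based(0.6), learned(0.6))  # 'deny' vs 'approve': only one rule adapted
```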
As a Society we must decide an ethical problem? Good luck with that Societal endeavor. Society doesn't think.
So it's not a collective action problem, or is it? I'd like to see a society where the opinions of unbiased experts had more weight on subjects relating to their expertise - where conflict of interest could be accounted for - though this is problematic too.
@@scfu "unbiased experts" LOL. Are these Human experts? Where? Surely you are familiar with how "science progresses one death at a time"? ...but that is the "problematic too" you write of, no doubt.
No, you are right that expert opinion should be weighted far more than the individuals at the wrong end of the Dunning-Kruger (the ones still allowed to vote for christ sake!) but that, like democracy, is "the worst form of" ...
Heinz von Förster!
rong
24:45 i don't believe in a finite universe.
i don't believe that a future without problems to solve will ever come.
it's just that those problems are far beyond the understanding of our monkey brains.
in other words: there will be realistic far future science fiction.
but it can only be written by ai. for ai.
#AwarenessConsciousness
Lex Fridman, call dis guy (>^_______^
All-at-once in-form-ation of Time Duration Timing, functional e-Pi-i interference positioning is Holographic projection-drawing(?), so self-defining cause-effect, "re-evolving" pulsed assemblies of "intelligent" properties.., of this understanding of continuous creation connection Principle.., the arrangements of coherent cohesion objectives in pulsed resonances, in available options for knowledgeable Gaia - holistic "choice of life path" changes, is the naturally occurring elemental conception of Actual Intelligence.
Artificial Intelligence is the restricted human version of modelling perceived reality, it's definitely not "the whole message" holographic principle, in degrees of modulated frequency and amplitude Annealing, such as the apparently available example of CMB in Astronomy used to explain the Universe..
In the context provided, "ethics" is universal, "morals" are individual.., so "philosophically" the shared elements that are the "sum of-all-histories" in each set of common objectives might be possible to integrate, morals into ethicals, in sustainable Actuality. (?)
**sigh**
Future AI needs to worry about humans, not the other way around
If future AI is sentient, yes. It will probably go both ways.
The question is, why would you want to put into a machine what already exists in your head? If you put the same teaching effort into your children that you put into a machine, you would go further. If you trained your memory and put all that data into your head, it would be much safer and longer-lasting than in a program, which in a year will be changed, deleted or remade. It is like jogging in one spot and waiting to arrive at the next city. Never.
Love this guy but he's wrong about 'non-coding' DNA. It's REGULATORY. Epigenetic information is hyper dense and layered into the 'noncoding portions' that he speaks of.
Why would AI or AGI want anything? Why would it want even its own survival? I don't see how it could be dangerous if it doesn't want anything, and I don't see how it could want anything unless it were programmed to want something. We are programmed by evolution to want lots of things.
Perhaps want will emerge as a motivator to achieve ends - perhaps it could be an instrumental drive that falls out of the effort to achieve other goals.
I don't see Darwinian evolution as an agent with a mind - and therefore with none of the intentionality normally ascribed to programmers.
@@scfu Ok, but it seems to me 'having goals' is much the same thing as 'wanting something.'' So it seems to raise the same question: why would an AI have goals, even the goal of its own survival? (regarding the second comment, I was speaking about evolution "programming" us metaphorically.)
I think the only way for AI to want anything is if it can experience pain. If it could feel pain, it would in theory want to not feel pain. That's what drives humans. We are on a constant pleasure hunt. Hence there is no pleasure without pain. Otherwise you're right. There would be no reason for AI to want anything.
Joscha Bach has a saying: "How do we build an AI that at its core is not stupid - one that 'feels' like it should accomplish something?" The answer is, we don't know. 🙂
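The instrumental-drive suggestion a few comments up can be shown in miniature (states and actions invented): a planner given only the goal "has_coffee" never takes the action that powers it down - not because shutdown is penalized, but because no plan passing through the "off" state reaches any goal. Staying operational falls out as a subgoal of wanting anything at all.

```python
from collections import deque

# Invented toy world: from "off", no further action is possible.
ACTIONS = {
    "start":   {"go_to_kitchen": "kitchen", "enter_charging_dock": "off"},
    "kitchen": {"brew_coffee": "has_coffee", "enter_charging_dock": "off"},
    "off":     {},  # powered down
}

def plan(start, goal):
    """Breadth-first search for the shortest action sequence reaching goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in ACTIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

print(plan("start", "has_coffee"))  # ['go_to_kitchen', 'brew_coffee']
```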
I generally agree with the things he says, but he is absolutely wrong about self-driving cars already being safer than human drivers.
Humans aren't idiots; we only let self-driving cars drive in the easiest of situations. A few dead Tesla owners have found out the hard way what happens when you let these things drive without supervision. One day we'll get there, but that's probably in 10 - 25 years.
"Humans aren't idiots,"
Source?
That is funny in more ways than one.
God has a great sense of humor
Sounds like you've never driven in a Tesla yet
Why worry, Mr. Bach? You say the world isn't real.
how is it to have very low IQ?
HA HA HA!!!! Y2K, "why we should be concerned". Hahahaha!!!
Don't be so hysterical, Joscha Bach; it is impossible for cybernetics to ever surpass human intelligence. Show me another biological species that can equate to the intelligence of humans, and then possibly worry about cybernetic intelligence ruling over us.
Anon, right? Like, how did they manage to even find this video lol
You show 'em John Henry!
I got logged onto and a computer screen pulled up in my inner vision and then the words were highlighted in blue as you would to save..
And next scene was a closed circuit monitoring system ...an almost black n white monitor that looked like it was in a kitchen.
And then the next scene was a kind of printing out screen that was digital like in nature and felt like it was way in the future. I was awake for this.
I don't fully understand what that was..
But it was some type of biotechnology consciousness maneuvering that I was not in charge of.
So who was?
Revelation 13:15
“And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.”
Transhumanists DO Not write The Future