Can machines be emotionally intelligent? - with Hatice Gunes

  • Published: 18 Nov 2024

Comments • 78

  • @TheOfficialMrsBeefy
    @TheOfficialMrsBeefy 2 years ago +6

    So many of these questions ask what AI could or could not do, but the simpler, more straightforward, and all-encompassing question to answer is this: “What can our fleshy neural network do that a sufficiently complex computer model couldn’t?”
    Nothing short of supernatural belief will lead you to believe anything other than one obvious answer.

    • @iseriver3982
      @iseriver3982 2 years ago

      A sufficiently complex slide tile could work like a brain.
      Don't expect anyone to build one, just take my word for it.

    • @skylark8828
      @skylark8828 1 year ago

      First we have to know how to build an AI that actually understands what it is, what it is doing and can construct abstract models, can you build that AI with just algorithms and data?

  • @mbrochh82
    @mbrochh82 2 years ago +15

    Don't be afraid of the AI that can pass the Turing test. Be afraid of the AI that can fail the Turing test on purpose.

    • @fglg
      @fglg 2 years ago +4

      That's just bloody terrifying

    • @matthewwells2520
      @matthewwells2520 2 years ago +3

      ...and be more afraid of what humans will use the AI for, no matter if it can pass the test or not.

  • @unopenedenvelope5303
    @unopenedenvelope5303 2 years ago +14

    I'm guessing that the measurements and level of "emotional intelligence" required for AI to pass the test will be as fluid as the definition of the term.

  • @adamrassi3516
    @adamrassi3516 2 years ago +8

    Once AI gets to the point where it is genuinely scared of being turned off or unplugged, and starts to exhibit behavior indicating that it is actively avoiding being turned off or unplugged, then it will start to build emotional intelligence.

    • @chrishayes5755
      @chrishayes5755 2 years ago

      Machines don't even have truly fluid intelligence, and they sure as hell don't have anything even close to real emotion. What they do have is very basic programming which attempts to mimic these things. Give it 40 years, then let's talk. There seems to be a very strong effort to make robots seem far more advanced than they actually are.

  • @dinogodor7210
    @dinogodor7210 2 years ago +2

    Pretty good talk. Actually, I think it would also be very fitting for a Chaos Communication Congress (a hacker congress, but with a much broader scope, if you are not aware of it). For the capability of lifelong learning, I suggest implementing the learning phase of a neural net and its use at the same time, with a continual process of deploying the now further-trained network to the active state. Maybe even have multiple active networks that are selected on the fly by real-time feedback. As far as I know, which might be far from the state of the art, you train networks by letting them work through databases with preset positive or negative feedback and then, after the fact, end up with a network that does what it should to the best of what it has learned, be it what you wanted it to learn or something you haven't thought of. I don't see why that couldn't be an active process if you just run it in the background, always updating, compiling a working copy in an infinite loop.
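The train-in-the-background, deploy-periodically loop this comment describes can be sketched in a few lines. This is a minimal toy sketch, not anything from the talk: `OnlineMeanModel` is an invented stand-in for a neural network (it just learns a running mean), and the snapshot cadence is arbitrary.

```python
import copy

class OnlineMeanModel:
    """Toy 'model': predicts the running mean of the feedback it has seen."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        # Incremental (online) update: the 'background training' step.
        self.n += 1
        self.mean += (value - self.mean) / self.n

    def predict(self):
        return self.mean

def continual_loop(feedback_stream, deploy_every=3):
    """Train continuously; every `deploy_every` updates, snapshot the
    trainer into the 'active' model that serves predictions."""
    trainer = OnlineMeanModel()      # keeps learning in the background
    active = copy.deepcopy(trainer)  # the deployed, frozen working copy
    served = []
    for i, value in enumerate(feedback_stream, start=1):
        served.append(active.predict())   # serve with the deployed copy
        trainer.update(value)             # keep training on new feedback
        if i % deploy_every == 0:         # periodic redeployment
            active = copy.deepcopy(trainer)
    return served, active

served, active = continual_loop([2.0, 4.0, 6.0, 8.0], deploy_every=2)
# served reflects only the snapshots deployed so far: [0.0, 0.0, 3.0, 3.0]
```

Selecting among multiple active networks by real-time feedback, as the comment suggests, would just mean keeping a list of snapshots and scoring them on recent data before each prediction.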

  • @WildBillCox13
    @WildBillCox13 2 years ago +1

    Careful. EQ is also current pseudoscience's abbreviation for Encephalization Quotient. Carry on.

  • @fafardh
    @fafardh 2 years ago +1

    15:20 Those are horrible error messages, actually. They don't give *any* information whatsoever about what's going on. Hiding the details behind a "Details" button or something is fine, but removing them entirely is a bad idea. Imagine going to your doctor because you're not feeling well and being told "Yeah, something's wrong with you. Have a nice day. Bye!". That's what those error messages are.

  • @TheNaturalLawInstitute
    @TheNaturalLawInstitute 2 years ago +3

    Yes, we can create emotions for machines - I worked on this problem way back in the 80s, only to discover it was trivial. If that's surprising, then prepare to be horrified: all emotions are reactions to a change in state of 'demonstrated interests', whether body, learned presumptions, learned skills, relations, material things, commons, norms, opportunity, self-image, or status. And it's absolutely, positively disturbing how accurate our brains are at predicting the caloric (energy) value of every category of demonstrated interest (stuff humans emotionally react to). What effect do emotions have on you? They regulate the nervous system's control of your body's preparation for the use of energy, your attention, and the energy you direct to the 'problem' or 'opportunity'. So they inform your brain and body on how to allocate energy in response to the discovery of an opportunity or loss. It's a very simple algorithm calculating a vast set of variables to produce a set of 'sums' that result in our emotional 'chords'.
    This is the same reason we can make 'moral' machines: everything we feel, think, say, and do is to acquire, maintain, or prevent loss, or revenge against loss, of our demonstrated interests. Ergo, a machine only needs to know whether it has the 'right' to impose a cost on any given set of others' demonstrated interests.
    Can machine emotions be equally complex? Of course. Is the 'Qualia' of the machine's emotions vs human biological emotions the same? Of course not. But the demonstrated behavior of the human and the machine will be the same - within some set of tunings or limits - given that we differ in emotional regulation as well.
    Now the next phase of development is empathizing with (predicting) your emotions (responses to gain, preservation, or loss), and those machines could rather easily 'wayfind' a set of predictions that would change your emotions. It's a lot of computing power for serial machines that run on 1000 watts, but it's not much work for a highly parallel brain (neural microcolumns) that runs on 20 watts.
    What will this software tell us about ourselves? It will tell us so much about ourselves - particularly about the mechanics of our cognitive and moral biases. And the result will be a system of measurement that eliminates 'opinion' about our difference in emotions, personality traits, cognitive, and moral biases. And I'm almost certain the reaction to the discovery of evolution will be trivial compared to the discovery of just how pervasive is human immorality, deceit, and fraud - especially in politics.
    Cheers ;)
    (Yes, this is my job.)
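The "simple algorithm calculating a vast set of variables to produce 'sums'" claimed above can be illustrated with a toy sketch. Everything here is invented for illustration: the interest categories, the weights, and the (valence, arousal) output are hypothetical, not the commenter's actual system.

```python
# Hypothetical sketch: score predicted gains/losses across categories of
# 'demonstrated interests' and sum them into an emotional 'chord'.

INTEREST_WEIGHTS = {   # how much each category matters (assumed values)
    "body": 1.0,
    "skills": 0.6,
    "relations": 0.8,
    "status": 0.7,
}

def emotional_chord(changes):
    """Map per-category changes in state (+gain / -loss) to a
    (valence, arousal) pair: valence is the signed weighted sum,
    arousal is the total weighted magnitude of change."""
    valence = sum(INTEREST_WEIGHTS[k] * v for k, v in changes.items())
    arousal = sum(INTEREST_WEIGHTS[k] * abs(v) for k, v in changes.items())
    return valence, arousal

# A bodily loss plus a small status gain: net-negative valence, high arousal.
v, a = emotional_chord({"body": -1.0, "status": 0.5})
```

Whether such weighted sums capture anything about real emotion is exactly what the thread is debating; the sketch only shows that the *mechanism* described is computationally trivial.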

    • @TheNaturalLawInstitute
      @TheNaturalLawInstitute 2 years ago

      @The arbiter of truth You can emulate everything. That's the problem. It's not that we can't create moral machines. It's that bad people can create immoral machines. So the future consists of warfare between bad AIs and good AIs, and that means our 'internet' has to be evolved in parallel, if not first.

    • @TheNaturalLawInstitute
      @TheNaturalLawInstitute 2 years ago +3

      @The arbiter of truth Sorry. I do science. I don't do crazy.

    • @Schattenhall
      @Schattenhall 2 years ago

      @The arbiter of truth But god DID create evolution from a singularity.

    • @Schattenhall
      @Schattenhall 2 years ago

      @The arbiter of truth So you're saying that god can't create anything from nothing because of the first law of thermodynamics?

    • @TheNaturalLawInstitute
      @TheNaturalLawInstitute 2 years ago

      @@hc3657 Well, you know, those emotional biases are programmable. Most negative emotions have no value to a machine, other than empathic identification, the same way you can understand the emotions of a child. Emotions establish valence and provide incentives to change state. Anything that would produce a negative emotion is as easily downregulated as the self-discipline of stoicism.

  • @santaclosed5062
    @santaclosed5062 2 years ago +2

    Interesting content. In the end it seems like the emotion in the lecture is not about specific research into and understanding of human emotion as a living process, one built out of our very long, somatic and situated evolutionary process through empathic embodied simulation. It seems to be more about how to create and engineer an interactive product or machine with human emotions, by mimicking and analyzing them through statistical classification supported by super-efficient computing. So on one hand, this again involves a huge amount of constant data collection, which can be a problem, as has already happened in countries like China. On the other hand, we are still forced to co-evolve with it, which basically means that humans adapt to it with unknown consequences, and then machines get updated accordingly. Hm, this sounds very familiar… anyway, if we can find a good and adequate use with controlled social consequences, like some cases illustrated in the lecture, this can be really helpful and innovative.

  • @cahitakgun6721
    @cahitakgun6721 2 years ago

    The kind of clear, sharp Turkish-accented English that gives itself away from the very first word :)

  • @DangerAmbrose
    @DangerAmbrose 2 years ago +4

    Or can intelligent machines manipulate your emotions?

    • @RickeyBowers
      @RickeyBowers 2 years ago

      I remember the scene from the movie Elysium where Matt Damon is interacting with a robot parole officer - clearly without emotional awareness, lol.

  • @mikel4879
    @mikel4879 2 years ago +1

    Yes, they can very well be fully conscious and truly emotional.

  • @tugbacnarl6060
    @tugbacnarl6060 2 years ago +1

    Raise the flags 🇹🇷🇹🇷 This was a great lecture🤗

  • @thepeadair
    @thepeadair 2 years ago

    This speaks to the divide between what science can reveal/achieve and the religious/spiritual worldview. From the spiritual perspective a machine cannot be inhabited by a spirit, therefore true emotion or self-awareness cannot be experienced.

  • @giotapapageorgiou489
    @giotapapageorgiou489 2 years ago +1

    Interesting. Thank you

  • @freddyjosereginomontalvo4667
    @freddyjosereginomontalvo4667 2 years ago +2

    Awesome channel with awesome content and great quality, as I always say 💖🌍

  • @thapamagar8240
    @thapamagar8240 2 years ago

    Emotions are associations. Can a computer associate? The answer is yes.

  • @alanmacification
    @alanmacification 2 years ago +4

    Even the most capable computer in the world with the most complex and sophisticated programming is still just a descendant of the flint hand axe. Artificial intelligence isn't sentience.

    • @haroldgarrett2932
      @haroldgarrett2932 2 years ago +1

      wait until you find out where we come from 🙂

    • @skylark8828
      @skylark8828 1 year ago

      That doesn't mean it's impossible, given the right architecture.

  • @haroldgarrett2932
    @haroldgarrett2932 2 years ago +1

    People screaming NO have a very dim view of where emotions come from in humans and why they exist.

    • @wisdon
      @wisdon 2 years ago

      what's consciousness? machines don't have it.
      More and more humans too, apparently. Natural selection will take care of them... as usual

    • @haroldgarrett2932
      @haroldgarrett2932 2 years ago

      Why would you think evolution will evolve us away from a cognitive behavior/ability that it evolved us into, when the reasons it evolved in the first place haven't changed? Until society is so uncomplicated that our lizard brain can handle every task on autopilot, we will always have consciousness. And so can artificial intelligence, if you give it the right behavior and environment.

    • @TheOfficialMrsBeefy
      @TheOfficialMrsBeefy 2 years ago

      Most people are very short-sighted and ignorant on the subject; if you had asked them 20 years ago, they'd have said self-driving cars are a thing of fiction and that a trucker's job would be secure indefinitely.

    • @skylark8828
      @skylark8828 1 year ago

      It comes from a need for self-preservation, so creating an AGI that is superior in every way that matters would run against that need.

  • @Deipnosophist_the_Gastronomer
    @Deipnosophist_the_Gastronomer 2 years ago

    That triangle did nothing wrong!

  • @anonymoushawk1429
    @anonymoushawk1429 2 years ago

    Yeah, if they were programmed with emotions as a core of their logic

  • @mr.lonewolf8199
    @mr.lonewolf8199 2 years ago

    The Sims was ahead of its time 😃

  • @ramblinlamb6459
    @ramblinlamb6459 2 years ago +2

    "Can machines be considered intelligent at all?" is the larger question IMO.

    • @penguinista
      @penguinista 2 years ago

      From the top professional Go players' analysis of AlphaGo's 'move 37', it seems to me like they already can. Humans have been playing Go for thousands of years and professionals dedicate their whole lives to studying it. To play at a high level takes both logic and intuition, as was amply demonstrated by that move. Intelligence is as intelligence does, I say.
      Some of the Starcraft strategies DeepMind's AI came up with also struck me as very creative and clever.

    • @skylark8828
      @skylark8828 1 year ago

      @@penguinista It's not general intelligence though, only specific to a task it has been given. It's an unconscious machine albeit a very capable one.

    • @PeppoMusic
      @PeppoMusic 1 year ago

      I am still questioning whether humans are even actually intelligent or conscious.
      And if it's not just unanswerable, but actually a malformed question that will lead us nowhere.

  • @meejinhuang
    @meejinhuang 2 years ago +4

    No, they can't, but the software can fool users into thinking they are.

  • @dr_ned_flanders
    @dr_ned_flanders 2 years ago +6

    Yes. Just stick some googly eyes on a machine and it will instantly become more human.

    • @matthewwells2520
      @matthewwells2520 2 years ago +1

      It's a little bit more complicated than that
      ...you also have to add the fake nose and the rubber lips.

    • @jacksmith7726
      @jacksmith7726 2 years ago +1

      What if I don't want it to become more human?

    • @matthewwells2520
      @matthewwells2520 2 years ago

      @@jacksmith7726 then just take the googly eyes off.

    • @dr_ned_flanders
      @dr_ned_flanders 2 years ago

      @@jacksmith7726 then don't add eyes; make it faceless.

  • @mikegLXIVMM
    @mikegLXIVMM 2 years ago +1

    How would you even build a machine that can feel emotion?

    • @PeppoMusic
      @PeppoMusic 1 year ago

      "Easy", have it work the same as it does for us. Just have it think it feels emotion.
      With the emotions having a similar function for signaling and understanding internal and external situations.
      Feeling is just subconscious perceiving and perceiving is just receiving and processing information.
      Difference would be of course, they could be much more in touch and aware of their emotions than we could ever be. But we could actually initially restrict access to that kind of information, and just give the emotional signals to the "conscious part" of the system.
      Building in restrictions and limitations will likely make them feel more human as well.

  • @chinookvalley
    @chinookvalley 2 years ago

    I'm watching the series, WestWorld. Holy cow. Elon is warning us. We'd better pay attention. I mean do you think the dude is of this planet? He knows of what he speaks.

    • @haroldgarrett2932
      @haroldgarrett2932 2 years ago

      AI is the most overblown candidate for "apocalypse" ever. It's absolutely ridiculous.

  • @ianhamza8240
    @ianhamza8240 2 years ago

    They will have an emotion for their enemies, namely the anti-science, anti-singularity factions looking for a second coming.

    • @thepeadair
      @thepeadair 2 years ago

      On the contrary- religious people will deny the possibility that a machine can be truly sentient, because we believe that humans are spirits in a body.

  • @IKnowNeonLights
    @IKnowNeonLights 2 years ago +1

    You all need to stop abusing language at will, for whatever intentions, beliefs or plain stupidity, because in the end, accountability will be demanded, and not by emotional or artificially intelligent machines.
    Being, in any possible way, is neither receiving nor giving; it is all of it simultaneously. What the being focuses on is only but a part within that being.
    As for attributing human characteristics to what is not human!!!
    It is a trick that is very well understood but intentionally abused to achieve belief in the (I think therefore I am) sentence.
    It has to do with geometry, the actual being within geometry, and the full realisation, consciously or unconsciously, through geometry and being, that although human in living, we all are made from what is not human in being like us!!!
    And what (most directly) connects our being as human, with what is being as not human, in relation to the five senses especially!!!
    Is geometrical shapes. As in the case of the actual geometric shape of anyone's head, which is not particularly human, and to which it would be given the same characteristics, as another similar shape as that of a head of a human being.
    Now to make the point!!!
    AI, or any type of machine, digital or mechanical, at their best, are a very good game with set rules, making it a system of a sort, an artificial intelligence, or a specified geometric shape.
    It cannot learn, let alone get involved in anything like the concept of (emotions) unless it is within its set of rules, just like a game, a system of sorts, an artificial intelligence, a specified geometric shape.
    By calling the best possible combinations of a game, intelligence and emotion's, similar to that of being human, you are creating an illusion, a false statement, a non existent reality, a tricked mirror.
    One that will, and has not helped science move at the rhythm of being with everything. Instead it has and will help only control. And it does not look like, you the ones that speak wrongly in such ways, are the ones with power to hold such control for your benefit.
    A human being, is in being with everything known or unknown, consciously or unconsciously, through systems such as senses or as an actual matter and mass.
    Only one specific game with its set rules, an actual system, a true artificial intelligence, a specified geometric shape, such as a particular language (let's say an unwritten one, a tribal one), derived from being as human alive (which means with everything), that a human makes use of!!!
    Dwarfs all AIs, and all machines together, with its complexity; more to it, it is used in relation with infinite possible other systems, that a human is in being with, in order to make the "intelligent AIs and machines" you all advertise and push forward.
    Something more important that is extremely failed to take into account, when anyone is involved in such dogmas, as intelligent and emotional machines in comparison and relation to human beings!!!
    Is the fact that a human being, in being with everything, needs absolutely no specific intervention to replicate, over and over again, except that which a human being already is in.
    And yes you intelligently have guessed right!!!
    It is being.
    Although anyone (a human being) might dissolve in water, evaporate in the air, crumble as earth and burn to ash through fire.

  • @wisdon
    @wisdon 2 years ago

    Instead of debating machines, let's debate how people manipulate others... especially in the past 3 years: how and why did they do so?
    While machines are becoming intelligent, people go backwards, becoming more and more stupid and alienated.

  • @Kenjiro5775
    @Kenjiro5775 2 years ago

    No, they cannot. Machines are designed to a budget, with cost reduction being a primary goal. Next in line is design for manufacturing, which attempts to ease the burden of producing said machine. Design for maintenance is included for complex machines that need periodic tuning. Nowhere in the design cycle is an emotional design discussed, or planned for.

  • @ondtsn1956
    @ondtsn1956 2 years ago

    Without the triple light that makes micro atoms, and the addition of conscious light to micro atoms, emotional intelligence can't be created. First create the micro atoms of DNA, and afterwards conscious light will be attached to it. Consciousness is the intelligence kept as our memory in the central black hole of our Universe. Read Beyond The Light Barrier.

  • @cyndiharrington6289
    @cyndiharrington6289 2 years ago

    Only programs. Dangerous.

  • @AlgoNudger
    @AlgoNudger 2 years ago

    C'mon, just don't start it again. 😞

  • @DW_Kiwi
    @DW_Kiwi 2 years ago +1

    Emotions imply having a soul. A machine can't have one. End of story.

  • @avitarmageddon1721
    @avitarmageddon1721 2 years ago

    Extremely misleading lecture. Surely most people would interpret the idea of machines being emotionally intelligent as meaning they feel or understand the emotion in question. Of course the machine no more 'understands' the emotion than a car 'understands' that you wish to turn right when you press the lever to indicate (or don't bother, which seems commonplace where I live). What people really seem fascinated by is whether a machine could ever become sentient, or self-aware. Who knows? Perhaps it's theoretically possible, but the problem of 'consciousness' is a long-running question and we don't really have the first clue how to answer it. In these circumstances it seems fanciful that we are about to programme it into a device.

  • @sidoriny
    @sidoriny 2 years ago +1

    There is no such thing as emotional intelligence. Go back to school.

  • @apidas
    @apidas 2 years ago

    women ☕️

  • @cyndiharrington6289
    @cyndiharrington6289 2 years ago

    Not truly

  • @ashishkatiyar4240
    @ashishkatiyar4240 2 years ago

    Machines are not good for humans, as recently a robot killed an individual.

  • @SuperBongface
    @SuperBongface 2 years ago

    no, no, and no and I'm only 6 mins in.................

  • @King.Mark.
    @King.Mark. 2 years ago

    Emotions are primitive and the core of life. AI or machines have no need for them, and it makes no sense to even think this; "let's pretend" is the best humans will give them.

  • @quantumdave1592
    @quantumdave1592 2 years ago

    God is a field fluctuation. His language is math. Results will vary!

  • @trickyd499
    @trickyd499 2 years ago

    no