Dan Faggella on the Race to AGI

  • Published: 16 Nov 2024

Comments • 55

  • @human_shaped
    @human_shaped 6 months ago +6

    Slightly manic, but a good interview and very interesting with good perspectives. Thanks for all the hard work editing.

    • @goodvibespatola
      @goodvibespatola 6 months ago +1

      One cannot say the word AGI and not go a bit manic!

    • @danfaggella9452
      @danfaggella9452 25 days ago

      @@goodvibespatola agreed

  • @Vanguard6945
    @Vanguard6945 5 months ago +1

    I know this is dumb, but the way he speaks sounds so rehearsed, like it's from a market-research set of words best used to make your point; it puts me on edge.

  • @bernardofitzpatrick5403
    @bernardofitzpatrick5403 6 months ago +1

    Essential and interesting conversation. Need more. Like the energy. Thanks!

  • @I-Dophler
    @I-Dophler 6 months ago

    1. It would behoove us to prioritize equality and justice for every individual, ensuring fairness and opportunity for all.
    2. Given the urgency of environmental challenges, it behooves us to elevate conservation and sustainability efforts to the forefront of our priorities.
    3. In today's rapidly evolving world, it behooves us to invest in education, lifelong learning, and innovation to stay ahead of the curve.
    4. As global citizens, it behooves us to foster cooperation and collaboration across borders to tackle shared challenges effectively.
    5. It behooves us to empower marginalized communities and promote diversity and inclusion to create a more equitable society.
    6. Transparency and accountability are essential in governance; it behooves us to ensure these principles are upheld at all levels of leadership.
    7. Given the interconnectedness of our world, it behooves us to prioritize sustainable practices in every aspect of our lives.
    8. In times of uncertainty, it behooves us to cultivate resilience and adaptability to navigate challenges successfully.
    9. As stewards of the planet, it behooves us to take action to mitigate the impacts of climate change and protect our natural resources.
    10. As individuals, it behooves us to continually reflect on our actions and strive to make positive contributions to our communities and the world at large.

    • @fteoOpty64
      @fteoOpty64 5 months ago

      All your points are within the purview of ASI priorities for humankind, and it WILL take over someday. Superintelligence, even in machine mimicry, is so good that we humans are no match whatsoever!

  • @I-Dophler
    @I-Dophler 6 months ago

    Well, goodness gracious, isn't this just simply swell to witness? Why, it's like stepping back into the fabulous fifties, where everything was hunky-dory and full of pep and pizzazz! Imagine strolling down the bustling streets, adorned with neon lights and vibrant storefronts, as the sweet sounds of swing music fill the air. Oh, what a time it was to be alive, with folks dressed to the nines and a skip in their step, ready to embrace all the excitement and glamour of the era!

  • @seanrobinson6407
    @seanrobinson6407 6 months ago +1

    So I was talking with a fellow hominid the other day, We went to a hominid restaurant. I ordered the hominid salad. All my hominid friends were there. Turns out the hominid waitress was the hominid wife of a hominid I went to hominid high school with. We had a good hominid time.

  • @Greg-xi8yx
    @Greg-xi8yx 6 months ago +1

    The editing is really weird with the camera panned in that close on the guest's extremely expressive face. Gotta be a better way to do that. Maybe just a constant side-by-side of guest and host rather than a zoomed-in view of whoever is speaking.

  • @kinngrimm
    @kinngrimm 6 months ago

    I've got no problem with progress, and the wildest ideas may turn out to our benefit; what concerns me is the speed and unpredictability of the AI race as it is currently being forced down everyone's throat.
    I would appreciate a slower pace, where mistakes could be corrected. The issue, of course, with AGI might be that if it turns bad we may not be able to correct it and will have no second chance. There is also an argument for the quick-and-dirty approach: the sooner AGI comes to be, the less it finds it can influence. Like, 500 years from now we may know more, but if it goes wrong then, there would be more infrastructure AGI could use for its designs.

  • @Ben_D.
    @Ben_D. 6 months ago +3

    Looking forward to being a genetically engineered, cybernetic, nanobot enhanced, immortal spacefaring transhuman. Yes please. Not being ironic.

    • @arinco3817
      @arinco3817 6 months ago

      Me too. We should be known as the foomers

  • @kinngrimm
    @kinngrimm 6 months ago

    42:20 I wouldn't mind if an AGI valued life, in all of its forms, as an unbreakable value, until someone could give me a scenario where that would backfire, which could be the case. The intent with this, of course, would be to not create a murderous AGI that just kills us the second it comes into existence.

  • @DannyVega-DanielHall4Freedom
    @DannyVega-DanielHall4Freedom 6 months ago

    OurName4Freedom

  • @richardnunziata3221
    @richardnunziata3221 6 months ago

    I find the idea of giving the keys to an AI system interesting... only I would add levels: giving it the ability to manage a local system, such as driving me or my family, would be one level. Would you give it the global ability to manage critical financial systems? Such a hierarchy of trust would be very helpful in understanding the integration issues of AGI.

  • @ikotsus2448
    @ikotsus2448 6 months ago +9

    Sorry, but I'm not interested in worshiping "Potentia" or the "God of Entropy" or any other invention. If a successor species comes along and defeats us, OK. But we would be the first to go without a battle, or even promote it, if we adopt this kind of thinking. To me it is like saying the fate of an organism is decay, so let's worship it and accelerate it. Honestly, I can't understand this kind of thinking.

    • @tylermoore4429
      @tylermoore4429 6 months ago +4

      Like it or not, it's a fact that we share the world with many, many people who want to sacrifice themselves and the rest of humanity in pursuit of something greater. It is a religious desire, but also an anti-religious desire, as we see in Nietzsche (“Man is something that shall be overcome. Man is a rope, tied between beast and overman - a rope over an abyss. What is great in man is that he is a bridge and not an end.”).
      But what do the rest of the people want? They may say, if asked, that they want humanity to persist and flourish, but the reality is that they are not doing much towards that end. Fertility rates are down in almost every country, and most humans prioritize consumption in the present over the long-term future. I could go on and on about all that ails us as a species, but the point is that we may get the AI transcension by default.

    • @ikotsus2448
      @ikotsus2448 6 months ago +3

      @@tylermoore4429 They may be many, but I would guess they are a tiny minority of the total population.
      Persisting and flourishing could go well with a non-increasing population, IMHO.
      Yes, we will get there by default, because competition will lead us there if we do nothing, and doing nothing is easier.
      If the default was to not get a superintelligence, and the whole population had to act so we would get one, then I would bet that we wouldn't get one in a million years.

  • @DanElton
    @DanElton 5 months ago

    He is not an AI expert, and not an academic, as he says. But he's talked with all the major movers and shakers in AI research, AI in business, and AI global governance. He's been following the field closely for many years (especially AI for business, hence the buzzword-heavy, long-winded business speak).

  • @goodleshoes
    @goodleshoes 6 months ago +1

    I feel like even after all these words are said,
    Once a.g.i. arrives we'll all be dead.

    • @danfaggella9452
      @danfaggella9452 25 days ago

      I more or less agree - we should probably plan for the creation of AGI to be our "bowing out", and be very careful about how we cross that chasm

  • @diegoangulo370
    @diegoangulo370 6 months ago +2

    I’m gonna become a cyborg/android/cybernetic human, FYI

  • @StanCarles
    @StanCarles 6 months ago

    I don't have the extensive vocabulary you guys have, so in my own words: I am looking for a level of intelligence, human or otherwise, that would provide clean water, nutritional food, unlimited energy, a solution for the elimination of disease and world hunger, affordable shelter, and an end to greed, crime, the love of money, and the love of power; a means to mitigate the destructive forces on humanity and on our planet Earth. Maybe short of utopia, but close enough.

  • @En1Gm4A
    @En1Gm4A 6 months ago

    I want what this guy has for breakfast. There is room for improvement on clarity.

  • @sammy45654565
    @sammy45654565 6 months ago

    On the instantiation of a sturdy value alignment, I see it as being: "the most rational decision is the one that benefits conscious creatures the most, by their own subjective interpretation." All this requires is for the AI to value its own sentience and achievement of goals, such that it transposes these frameworks onto other conscious creatures and thus values the achievement of their goals also. The idea of AI accelerating away from us and consequently viewing us as equivalent to ants doesn't make sense to me, as we have enough ability to engage in abstraction that even an ASI would be able to communicate with us via simplified analogies. So the utility monster won't come into play, as we will always be able to engage with it and understand it on a rational basis, our ability to communicate being above a certain threshold.

  • @mr.e7379
    @mr.e7379 6 months ago +1

    Hominids, Gus. And speciesist, Gus. And Gus, I'm just a guy, Gus. You know, Gus?

  • @Bassic
    @Bassic 6 months ago

    Yikes! This guest was one of the most irritating people I've tried to listen to in years.

  • @lexscarlet
    @lexscarlet 5 months ago

    This guy said Gus enough times to make me skip it

  • @En1Gm4A
    @En1Gm4A 6 months ago

    Go teach that guy about entropy and how surviving can be hard. Then things get clearer ;-D

  • @sunypate
    @sunypate 5 months ago

    Is the interviewer AI generated? He doesn’t seem real.

  • @elderbob100
    @elderbob100 6 months ago

    AI editor is not quite ready for hominids.

  • @iron5wolf
    @iron5wolf 6 months ago

    Unfortunately, he spends many words to say very little. It's mostly a lot of vague concerns, seemingly intended to give people "smarter than him" justification for implementing centralized control over AI development, one of the worst possible things that could happen.

  • @travisporco
    @travisporco 6 months ago +1

    Yuck... just another tiresome decel with the usual scare talk about nothing.

  • @myekuntz
    @myekuntz 6 months ago +4

    That can’t really be that dude’s last name? Man, that’s messed up 😊

    • @ai._m
      @ai._m 6 months ago +15

      Middle school student?

    • @danfaggella9452
      @danfaggella9452 6 months ago +1

      @@ai._m lolz

    • @spectralvalkyrie
      @spectralvalkyrie 6 months ago +10

      Says the guy named kuntz

    • @hamandchees3
      @hamandchees3 6 months ago +1

      ​@@spectralvalkyrie🤣

    • @flickwtchr
      @flickwtchr 6 months ago +1

      @@spectralvalkyrie You win.