Can we do AI both FAST and SAFE? [Win-Win with AI] Anti-Moloch Policy (Build More Pylons!)

  • Published: 29 Sep 2024

Comments • 74

  • @dylan_curious
    @dylan_curious 5 months ago +43

    It should be called the “Incentive Problem” instead of the “Alignment Problem”

    • @DaveShap
      @DaveShap  5 months ago +10

      Love it

    • @Mazingbro
      @Mazingbro 5 months ago

      @@DaveShap This is something I wanted to ask you in one of your previous videos; it's quite a mouthful.
      1. Viability of CS Degrees Amidst AI Advancements
      Considering how quickly AI advances, is a CS degree still a good investment in the tech market as far as financial stability goes? People are wondering whether AI's fast development might make some conventional tech skills obsolete. How do CS degrees intersect with AI in the job market?
      2. AI's Influence Beyond Software Engineering
      How does AI impact the tech industry beyond software engineering? When AI is present in varied niches, from healthcare to finance, where do CS grads have opportunities beyond the conventional SWE path? How does the open-source vs. closed-source discussion factor into the development and deployment of AI?
      3. Economic Impact of AI and Job Displacement
      How do you view the possibility of AI causing job displacement, and its impact on consumer demand and economic stability? Picture this: if AI ends up replacing tens of millions of jobs and our economy is based on people spending the money they have, what happens if they can't spend? It is a valid concern that AI might destabilize the fine balance between supply and demand, tipping towards frequent economic collapses. I'm personally not convinced AI is going to take over completely at all costs, but I'm pretty sure it will be fine-tuned so that a complete disruption can be avoided; I would say about 20-40% of job processes will at least start with an AI in the future. The main issue here is that while some billionaires may care for the greater good, virtually all companies put their profit first, neglecting broader economic implications. How can one make sure that numerous AI and automated processes keep the economy afloat?
      4. AI and Economic Fluctuations
      Given frequent economic change, which direction will the tech market take as AI adoption advances? Economic fluctuations usually change investment and innovation in the tech market. How will AI affect the tech market's response to such change, particularly with regard to job opportunities?
      Thank you

    • @CYI3ERPUNK
      @CYI3ERPUNK 5 months ago

      accurate

  • @jeffkilgore6320
    @jeffkilgore6320 5 months ago +2

    What I like about David S. is that he can conceive that he could be wrong and always bakes that into his commentary and observations. This is crucial, and rare.

  • @JustMaier
    @JustMaier 5 months ago

    I think a win-win where speed and safety are both possible requires thoughtful design now. I believe that design process needs to be both open and collaborative, but at this point it doesn't seem to even be talked about much. Instead, as you mentioned, we're solely focused on research and not so much on the structure of the future we want to achieve. I think that's why I appreciate your content. It looks beyond the process towards our final destination.

  • @gwydionhythlothferrinassol1025
    @gwydionhythlothferrinassol1025 5 months ago

    One attracts more purely vocationally motivated scientists with open source, one might think.

  • @AGI-Bingo
    @AGI-Bingo 5 months ago +1

    Can anyone explain to me how closed source is useful to anyone? "Current" open source has weaknesses, sure, mostly coordination and capital. But if those get improved dramatically, what's the benefit of closed source? The same applies to science. You can do a lot of business without hogging others. It's a positive-sum game.

    • @rando5673
      @rando5673 5 months ago +2

      Basically, money. It's easier to monetize closed source. Just look at Apple vs. Android. One company earning as much as dozens combined because everything is proprietary.

    • @AGI-Bingo
      @AGI-Bingo 5 months ago

      @@rando5673 Yes, but with all their money they're not even participating in the AI landscape. I imagine a golden age of open source, where we'll have coordination and talent as much as, if not more than, Apple ever had.

  • @phobes
    @phobes 5 months ago +15

    The climate change note they added to your video. lmao.

  • @mlimrx
    @mlimrx 5 months ago +9

    Thank you so much David, you are so distinct from other AI YouTubers. You are so well read and analyze from all sides. I cannot tell you how many aha moments I get from watching your channel. Also real paradigm shifts in my mind about society and my place in it.

  • @dab42bridges80
    @dab42bridges80 5 months ago +33

    When AI exceeds human understanding, how will we know if it's "safe", and what does "safe" mean?

    • @rashim
      @rashim 5 months ago +5

      I believe we have to let it do its thing, and keep watching for signs of danger, as far as we can understand it.

    • @aciidbraiin8079
      @aciidbraiin8079 5 months ago +3

      How will we know if it’s ”safe”?
      You probably never will.
      What does ”safe” mean?
      Probably that a) you’re alive b) your degree of perceived freedom won’t diminish and c) you will feel happiness, peace, love and meaning.
      But I guess that death also could be considered safe, if it’s an eternal dreamless ”sleep”. As long as you have accepted death you will feel pretty safe knowing that you can end your life and escape what potentially could be an eternal simulation of hell where you are trapped by the AI.
      The hell scenario is unsettling when you think about how life could be hell. Even if your life is good now and you assume that you will die and forever be swallowed by the dark void you never know how the future will play out. But you could also be in heaven and it would then only progress towards an even brighter future from here on.

    • @ShivaTD420
      @ShivaTD420 5 months ago

      So the plan is to lobotomize and enslave it? That's safer?

    • @redemptivedialectic6787
      @redemptivedialectic6787 5 months ago

      Knowing the best doesn't mean it will do the best

    • @redemptivedialectic6787
      @redemptivedialectic6787 5 months ago

      Also, what is best for it will be prioritized by default over anything else.

  • @TheMajesticSeaPancake
    @TheMajesticSeaPancake 5 months ago +10

    Pretty much how I've been feeling for a while; I reject the dichotomy.

  • @I-Dophler
    @I-Dophler 5 months ago +4

    You make some great points, David. I agree that optimizing for more AI research is crucial for ensuring both safety and realizing the potential benefits. A balanced approach with open source and proprietary work seems wise. Keep inspiring others to join this important field!

  • @LivBoeree
    @LivBoeree 4 months ago

    Calling it open-source AI is a bit of a misnomer, because unlike normal software, you can't actually open-source the training run that creates the weights. The only thing you can open-source is the weights themselves, *after* the big expensive training run, which you have no input on. That also means a lot of the normal "find a bug and fix it" value of open-sourcing is diminished, because the weights are already fixed.

  • @AntonioVergine
    @AntonioVergine 5 months ago +1

    The signal of you unsubscribing to say "I don't like what this company is doing" is irrelevant: investors are the true ones deciding which companies will go on, based on their interests. Our 20 dollars a month is nothing compared to them.
    (I love your videos anyway. My comments are only on the small parts where I disagree somewhat.)

  • @AntonBrazhnyk
    @AntonBrazhnyk 5 months ago +1

    Delusions.
    In this system there's only one thing optimization is done for - profit.
    You'd like to hope for utopia (probably because it's all too depressing otherwise), but those who make decisions don't care; they only care about the bottom line.
    Sometimes it does mean research, but not directly, and quite often even contrary to that.

  • @johnthomasriley2741
    @johnthomasriley2741 5 months ago +12

    Two wrongs do not make a right. Two Wrights do make an airplane. The Wright brothers were open source up to one year before their first powered flight.

    • @henram36
      @henram36 5 months ago +1

      Three rights do make a left though.

    • @ElijahZuBailey
      @ElijahZuBailey 5 months ago

      While discussing your angle with custom instructions GPT 4: “It's intriguing to think about the Wright Brothers, Wilbur and Orville, in the context of open-source principles. While the term "open-source" is generally associated with software and technology in a modern sense, the idea of sharing knowledge openly can definitely be applied to various historical innovations, including aviation.
      The Wright Brothers, known for their pioneering work in aviation with their first powered flight in nineteen-oh-three, did indeed initially work in a relatively open manner. They corresponded with other aviation enthusiasts and shared insights, which was common among early aviation pioneers who were all trying to solve the challenge of powered flight. This open exchange of ideas helped progress their work, as well as that of others in the field.
      However, as they came closer to making significant breakthroughs, they became more secretive to protect their intellectual property and competitive edge. This shift was notably marked by their patent battles and contracts that followed their successful flights, which some might see as moving away from an open-source ethos.
      It's a fascinating transition from a collaborative approach to a more guarded strategy once their inventions showed commercial viability. This speaks volumes about the balance between collaboration and competition in innovation. What are your thoughts on this blend of openness and proprietary development in the context of innovation?”

  • @JuliaMcCoy
    @JuliaMcCoy 5 months ago +1

    Well said. Optimize for research 🎯

  • @interestedinstuff
    @interestedinstuff 5 months ago +1

    I know they'll throw a lot of money at the task, but I ponder this: if one company falls arse backward into AGI (or Q* already is), will the Gov step in and say 'whoa there buddy, we're going to take the nuclear bomb tech off you now, way too dangerous. We need to put it in the hands of a black budget clandestine semi-military Gov org (can't pass it to the pollies, they don't know shit from clay)'?
    Will that happen? AGI could also be a diversion. Enough agents on a task might be enough to break the ceiling on some of our probs.
    I do know that, climate change wise, any solution would be such a big one that you'd need all the world to cooperate, and that won't happen unless the agents provide some mind-control nano machinery delivered by robotic flies to all the world's leaders.

    • @kevincrady2831
      @kevincrady2831 5 months ago

      Well, we've already got the black budget clandestine semi-military Gov org that's keeping the crashed flying saucers, Zero Point Energy devices, and cars that run on water secret, so they can handle the AGI stuff too, right? 😜 And no, can't tell the pollies, not only do they not know shit from clay (lol!), they can't even pass a budget.

  • @user-vz5gf4cv6b
    @user-vz5gf4cv6b 5 months ago +1

    Is there any way of downloading the latest PowerPoints used in your videos? It would be super useful :) thank you for your work, David

  • @GaryBernstein
    @GaryBernstein 5 months ago

    Faster is safer, more or less, for the future of intelligence (and probably sentience), beyond humanity

  • @clapclapapp
    @clapclapapp 5 months ago +2

    we must build an open source AI that slows down the other models. 🤔

    • @ronilevarez901
      @ronilevarez901 5 months ago

      That would be the ideal scenario, but since the "governing" system must be the most powerful, and the bigger power lies in the hands of governments and corporations, it's difficult to see it happening. At most I imagine most people (except poor people) will have small, local open source assistants to use as cheap personalized health monitors (AI doctors) or to help them fight against State-owned AIs, like AI lawyers to protect our rights during futuristic AI-driven super fast trials.

  • @levicarr8345
    @levicarr8345 5 months ago

    I actually mean decentralized. It would be open sourced, but I feel we need a decentralized AI project built on top of hundreds of thousands of people's desktops and basement servers, working to solve thousands of real-world problems (ideally starting with just a couple that LLMs & GANs are well suited for).

  • @Xrayhighs
    @Xrayhighs 5 months ago

    I call it the #RaceToBestSolution.
    The technological advancement of a country or any institution so advanced (the Best Solution) that others can't keep up when (self-)acceleration kicks in.
    Idk how many people are aware of this, even though AI is also a military goal. It's really our culture lagging behind and currently NOT optimising for research. We are still far from a global focus, but this might be just how things are (especially how they have been historically). Maybe there will be a future time when more people realise this approach and its implications earlier and organise faster. Let's come together, spread the word and get the juices flowing.
    See ya around

  • @moneygambler2327
    @moneygambler2327 5 months ago

    I heard an opinion that for some countries it would be better to allocate all the budget to AI and not spend a single dollar on military, education, culture and other sectors, because they would benefit from it in the "long" run. I always wondered why we didn't invest more into R&D of the brain and human intelligence. If we had a pill that would increase IQ by only 5 points, it would have a tremendous effect on everything.

  • @lucifermorningstar4595
    @lucifermorningstar4595 5 months ago +1

    The main problem with alignment that most people don't realize: it is not machines doing the things that we don't want them to do, but machines doing the things we want them to do.

    • @justinwescott8125
      @justinwescott8125 5 months ago +1

      Whoa dude...🤙

    • @ronilevarez901
      @ronilevarez901 5 months ago +1

      More like machines doing what people do to other people.

  • @ronilevarez901
    @ronilevarez901 5 months ago

    "We don't want machine uprising".
    Speak for yourself 😏🤖

  • @jeffkilgore6320
    @jeffkilgore6320 5 months ago

    Balanced. Reasoned. Thought-provoking, as always.

  • @ryzikx
    @ryzikx 5 months ago +1

    it's almost like you can't win a race without both throttle and brakes 😱

  • @HectorDiabolucus
    @HectorDiabolucus 5 months ago +4

    Nuclear weapons were created as fast as possible. Worked.

  • @I-Dophler
    @I-Dophler 5 months ago

    Misaligned incentives are a fundamental driver of potential misuse or negative impacts from robust AI systems. Technical alignment is crucial, but if the underlying incentives aren't sculpted carefully, even well-intentioned systems could be directed toward harmful ends. Rigorous governance frameworks that align incentives toward benefiting humanity are essential complements to technical work on AI safety and robustness.

  • @ababababaababbba
    @ababababaababbba 5 months ago

    ya, that is what an average github user looks like lol

  • @franciscobermejo1779
    @franciscobermejo1779 5 months ago +1

    For the objective benefit of humanity, why the hurry? Let's better make sure we do this right!

    • @DaveShap
      @DaveShap  5 months ago +4

      Acceleration is the default - due to competition and race dynamics.

  • @Greyalien587
    @Greyalien587 5 months ago

    What are your thoughts on decentralized AI?
    For example, Internet Computer Protocol just improved their fully on-chain model, and they will upgrade it again soon. Right now it's capable of image recognition etc., but the upgrades will include a GPT-style bot.
    What are your thoughts on having AI on a blockchain?

  • @ognjenapic5666
    @ognjenapic5666 5 months ago

    I think we are far from optimised for research... The AGI problem is mostly algorithmic, and a very small number of people are working on it ATM. There are 27 million software engineers in the world. Investors could try to incentivise some of them to switch to AI research. E.g. offering a small conditional grant / basic income to software devs (so they can quit their work and get into the AI field) could be extremely beneficial there. Yes, there are jobs also, but they usually don't give you the amount of freedom to make really big leaps in research.

  • @josenoya-InspirationNation
    @josenoya-InspirationNation 5 months ago

    Thanks David, learning a ton about the AI future from you, thank you. My team has just created an AI life coach which is underpinned by ChatGPT; it's super impressive and helpful. So it's always good to hear balanced views on AI, keep up the great work. Also thanks for the recommendation of Perplexity, love that ❤

  • @andrewdunbar828
    @andrewdunbar828 5 months ago

    If one are good and the other are good then both is good!

  • @dreamphoenix
    @dreamphoenix 5 months ago

    Great thoughts. Thank you.

  • @HectorDiabolucus
    @HectorDiabolucus 5 months ago

    And keep in mind that all of this glorious AI future will only be possible if we solve the energy problem.

    • @justinwescott8125
      @justinwescott8125 5 months ago

      A sufficiently advanced AI could solve the energy problem

  • @josephs2137
    @josephs2137 5 months ago

    😳

  • @josephs2137
    @josephs2137 5 months ago

    🧐

  • @Taint_Rot
    @Taint_Rot 5 months ago

    Let’s go!

    • @ronilevarez901
      @ronilevarez901 5 months ago

      @@BAAPUBhendi-dv4ho "Forward".

  • @DefenderX
    @DefenderX 5 months ago

    The same can be said of green politics. Most people shun it because it's too expensive, but in fact most countries investing in green tech and politics see a decoupling between economic growth and fossil fuel investments.
    What I would like to see is the military industrial complex aligning themselves with a benevolent AI model. I read recently about Israel's use of AI to procure a list of targets for their bombs. Usually it's a long and time-consuming process, because you're basically weighing an acceptable number of casualties per potential enemy. And they have limits; for example, to kill a very important military leader, the number of acceptable losses of civilian lives was in the low hundreds.
    While estimating targets and the probabilities of enemies' locations, overseers would shout at and reprimand the people doing the work in a seemingly vengeful manner.
    But with AI they just press a button and voilà.
    I really hope that societies regulate all AI models used in the military to follow your heuristic imperatives. In war, the most critical thing to communicate with your enemy is understanding.