Why Not Just: Think of AGI Like a Corporation?

  • Published: 4 Aug 2024
  • Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
    In this video we ask: Are corporations artificial general superintelligences?
    Related:
    "What can AGI do? I/O and Speed" ( • What can AGI do? I/O a... )
    "Why Would AI Want to do Bad Things? Instrumental Convergence" ( • Why Would AI Want to d... )
    Media Sources:
    "SpaceX - How Not to Land an Orbital Rocket Booster" ( • How Not to Land an Orb... )
    Undertale - Turbosnail
    Clerks (1994)
    Zootopia (2016)
    AlphaGo (2017)
    Ready Player One (2018)
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Jordan Medina
    Jason Hise
    Pablo Eder
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    James McCuen
    Richárd Nagyfi
    Phil Moyer
    Alec Johnson
    Bobby Cold
    Clemens Arbesser
    Simon Strandgaard
    Jonatan R
    Michael Greve
    The Guru Of Vision
    David Tjäder
    Julius Brash
    Tom O'Connor
    Erik de Bruijn
    Robin Green
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    Robert Sokolowski
    Jérôme Frossard
    Sean Gibat
    Sylvain Chevalier
    DGJono
    robertvanduursen
    Scott Stevens
    Dmitri Afanasjev
    Brian Sandberg
    Marcel Ward
    Andrew Weir
    Ben Archer
    Scott McCarthy
    Kabs Kabs Kabs
    Tendayi Mawushe
    Jannik Olbrich
    Anne Kohlbrenner
    Jussi Männistö
    Mr Fantastic
    Wr4thon
    Dave Tapley
    Archy de Berker
    Kevin
    Marc Pauly
    Joshua Pratt
    Gunnar Guðvarðarson
    Shevis Johnson
    Andy Kobre
    Brian Gillespie
    Martin Wind
    Peggy Youell
    Poker Chen
    Kees
    Darko Sperac
    Truls
    Paul Moffat
    Anders Öhrt
    Lupuleasa Ionuț
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Robin Scharf
    Oren Milman
    John Rees
    Shawn Hartsock
    Seth Brothwell
    Brian Goodrich
    Michael S McReynolds
    Clark Mitchell
    Kasper Schnack
    Michael Hunter
    Klemen Slavic
    Patrick Henderson
  • Science

Comments • 790

  • @MrGustaphe 5 years ago +832

    "Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.

    • @riccardoorlando2262 5 years ago +123

      Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.

    • @plapbandit 5 years ago +26

      Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!

    • @pafnutiytheartist 5 years ago +10

      Well it's the second best thing to actually working it out properly

    • @silberlinie 5 years ago +7

      ...simulated it a few MILLION times...

    • @jonigazeboize_ziri6737 5 years ago +1

      How would a statistician solve this?
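
The two approaches joked about in this thread can both be sketched in a few lines of Python. This is a minimal sketch, assuming the video's model of idea quality drawn i.i.d. from a normal distribution with mean 100 and standard deviation 10, and a hypothetical team of 1,000 people:

```python
import random
import statistics

MU, SIGMA, TEAM = 100, 10, 1000

# Monte Carlo: take the best idea out of TEAM independent draws,
# repeated a couple of thousand times.
best = [max(random.gauss(MU, SIGMA) for _ in range(TEAM))
        for _ in range(2_000)]
mc_median = statistics.median(best)

# "Working it out properly": the median m of the maximum satisfies
# F(m)**TEAM = 1/2, so m = F^-1(0.5**(1/TEAM)) for the normal CDF F.
analytic = statistics.NormalDist(MU, SIGMA).inv_cdf(0.5 ** (1 / TEAM))

print(round(mc_median, 1), round(analytic, 1))  # both land near 132
```

The simulation and the closed-form answer agree to within a fraction of a point, which is the statistician's answer to the question above: either method works, one is just cheaper.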

  • @user-go7mc4ez1d 5 years ago +592

    "Like Starcraft".
    That aged well....

    • @Qwerasd 5 years ago +15

      Was about to comment this.

    • @CamaradaArdi 5 years ago +6

      I don't even know if AlphaStar had played vs. TLO by then, but I think it did.

    • @RobertMilesAI 5 years ago +242

      It said 'for now'!

    • @guyincognito5663 5 years ago +8

      Robert Miles you lied, 640K is not enough for everyone!

    • @Zeuts85 5 years ago +23

      I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.

  • @dirm12 5 years ago +310

    You are definitely a rocket surgeon. Don't let the haters put you down.

  • @618361 5 years ago +282

    For anyone interested in the statistics of the model at 6:16:
    The cumulative distribution function (cdf) of the maximum of multiple random variables is, if they are all continuous random variables and independent of one another, the product of the cdfs. This can be used to solve analytically for the statistics he shows throughout the video:
    Start with the pdf (bell curve in this case) for the quality of one person's idea and integrate it to get the cdf for one person. Then, since each person is assumed to have the same statistics, raise that cdf to the power N, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf.
    For fun, if you do this for the population of the earth (7.5 billion) using his model (mean=100, st.dev=10), you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.
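
The earth-scale calculation described above can be reproduced with Python's standard library. A sketch under the comment's assumptions (i.i.d. N(100, 10) ideas, N = 7.5 billion people), using the median of the maximum as the representative value:

```python
import statistics

N = 7_500_000_000                        # Earth's population, per the comment
idea = statistics.NormalDist(100, 10)    # one person's idea quality

# The max of N i.i.d. draws has CDF F(x)**N, so its median m solves
# F(m)**N = 1/2, i.e. m = F^-1(0.5 ** (1/N)).
m = idea.inv_cdf(0.5 ** (1 / N))
print(round(m))  # 164, matching the figure quoted above
```

Note how weak the growth is: going from one person to the entire planet moves the best idea only about 6.4 standard deviations above the mean, because the maximum of i.i.d. normals grows roughly like the square root of the logarithm of N.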

    • @horatio3852 4 years ago +4

      thx u))

    • @harry.tallbelt6707 4 years ago +9

      No, actually thank you, though

    • @cezarcatalin1406 4 years ago +9

      That’s if the model you are using is correct... which might not be.
      Edit: Probably it’s wrong.

    • @drdca8263 4 years ago +1

      Oh, multiplying the CDFs, that’s very nice. Thanks!

    • @618361 4 years ago +25

      @@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum, they are inspired by the ideas of others. So one smart idea can lead to another. It's also possible that individuals have a heavy tail distribution (like a power law perhaps) instead of a gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun so I just went with those assumptions.

  • @sashaboydcom 4 years ago +69

    Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money.
    This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.

    • @AtticusKarpenter 1 year ago +3

      And... that's a pretty ineffective way of doing things, if we look at modern HollyWoke or Ubisoft

    • @glaslackjxe3447 1 year ago +2

      This can be seen as part of AI training, if a corporation has the wrong goal or wrong solution it will be outcompeted/fail and the companies that survive have better selected for successful ways to maximise profit

    • @monad_tcp 1 year ago

      @@AtticusKarpenter I bet those are not following market signals and not succeeding at the market, yet they survive from income from other "sources", the stupid ESG scores

    • @rdd90 1 year ago

      This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).

  • @TheOneMaddin 5 years ago +46

    I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.

    • @oldvlognewtricks 4 years ago +19

      I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.

    • @martinsmouter9321 4 years ago +2

      It's an attempt to get a head start: if we have more time and resources, we might be able to overwhelm it.
      A little bit like building a fort: you know bigger armies will come, so you build structures to help you be more efficient in fighting them off.

    • @augustday9483 1 year ago +2

      And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.

  • @yunikage 4 years ago +92

    "we're going to pretend corporations don't use AI"
    ah yes, and I'm going to assume a spherical cow....

    • @brumm0m3ntum94 3 years ago +12

      in a frictionless...

    • @Tomartyr 2 years ago +7

      vacuum

    • @linnthwin7315 1 year ago +2

      What do you mean my guy just avoided an infinite while loop

  • @petersmythe6462 5 years ago +451

    "You can't get a baby in less than 9 months by hiring two pregnant women."
    Wow we really do live in a society.

    • @williambarnes5023 5 years ago +73

      If you hire very pregnant women, you can get that baby pretty quick, actually.
      The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.

    • @e1123581321345589144 5 years ago +14

      If they're already pregnant when you hire them, then yeah, it's quite possible

    • @dannygjk 5 years ago +13

      I think it's safe to assume that the quote is meant to be read as two women who just became pregnant.
      To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.

    • @isaackarjala7916 4 years ago +23

      It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"

    • @diabl2master 4 years ago +4

      Oh shut up, you know what he meant

  • @petersmythe6462 5 years ago +337

    Corporations still have basically human goals, just those of the bourgeoisie.
    AI can have very inhuman goals indeed.
    A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders.
    An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth, because its primary goal and only real purpose in life is to study teeth at large sample sizes.

    • @SA-bq3uy 5 years ago +3

      Humans cannot have differing terminal goals, some are just in a better position to achieve them.

    • @fropps1 5 years ago +46

      @@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.

    • @SA-bq3uy 5 years ago +7

      @@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.

    • @fropps1 5 years ago +46

      @@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore.
      I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain.
      It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.

    • @SA-bq3uy 5 years ago +2

      @@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.

  • @stevenneiman1554 1 year ago +16

    I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.

  • @flamencoprof 5 years ago +41

    As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark!
    I appreciate having such thoughtful material available on YT. Thanks for posting.

  • @Primalmoon 5 years ago +80

    Only took a month for the Starcraft example to become dated thanks to AlphaStar. >_

    • @spencerpowell9289 4 years ago +5

      AlphaStar arguably isn't at a superhuman level yet though (unless you let it cheat)

    • @rytan4516 4 years ago +3

      @@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than I have.

  • @visigrog 5 years ago +46

    In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.

  • @Soumya_Mukherjee 5 years ago +105

    Great video Robert. See you again in 3 months.
    Seriously we need more of your videos. Love your channel.

  • @jonathanedwardgibson 4 years ago +6

    I've long thought Corporations are analog prototypes of AI, lumbering across the centuries, faceless, undying, immortal, without moral compass as they clear-cut and plow under another region, following their mad minimal operating rules.

    • @MrTomyCJ 1 year ago

      Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.

  • @jennylennings4551 5 years ago +7

    These videos deserve way more recognition. They are very well made and thought out.

  • @DavenH 5 years ago +16

    Every one of your videos kicks ass. Some of the most interesting material on the subject.

  • @morkovija 5 years ago +159

    Been a long time Rob! Glad to see you

    • @d007ization 5 years ago +2

      Y'all are way more intelligent than I lol.

    • @shortcutDJ 5 years ago +1

      1,5 x speed = 1.5 more fun

    • @stevenmathews7621 5 years ago +2

      @@shortcutDJ not sure about that..
      there might be diminishing returns on that ; P

    • @MrGustaphe 5 years ago +1

      @@shortcutDJ Surely it's 1.5 times as much fun.

    • @diabl2master 4 years ago

      @@MrGustaphe No, simply 1.5 more units of fun.

  • @EmilySucksAtGaming 4 years ago +7

    "can you tell I'm not a rocket surgeon" I literally just got done playing KSP, failing at reworking the internal components of my spacecraft

  • @Garbaz 5 years ago +2

    Very interesting! And I really like the little "fun bits" you edit into your videos!

  • @eclipz905 5 years ago +37

    Credits song: Bad Company

  • @thrallion 5 years ago +2

    Once again wonderful video. One of the most interesting and well-spoken channels on YouTube!

  • @blahblahblahblah2837 4 years ago +1

    Love the Dont Hug Me I'm Scared reference!
    Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago

  • @buzz092 5 years ago +2

    Excellent clerks reference! Also the video was outstanding as usual. :P

  • @V1ctoria00 4 years ago +1

    I binged several of your videos and I noticed this example about the rocket comes up another time. As well as the example just before it. Thought I was somehow rewatching one over again.

  • @acorn1014 5 years ago +6

    I noticed an interesting quirk: the model ignores the difficulty of picking out the right ideas. If you take 361 people playing Go together, between them they can think of every possible move on the board, so by the model they'd be able to beat our current AI. But they can't, which shows how important the ability to evaluate and select ideas really is.

  • @Ybalrid 4 years ago

    A coworker just shared this video with me. I had no idea you had your own YouTube channel. I like Computerphile a lot, including your ML/AI videos, so I instantly subscribed!

  • @arthurguerra3832 5 years ago

    Finally! I was tired of rewatching your old videos. haha Keep 'em coming

  • @TXWatson 5 years ago +4

    Looking forward to episode 2 of this! I've thought the utility of this analogy lies in the fact that corporations, as intelligent nonhuman agents, give us the opportunity to experiment with designing utility functions that might be less harmful when implemented.

  • @DieBastler1234 5 years ago +2

    Content and presentation are brilliant, I'm sure matching audio and video quality will follow.
    Subbed :)

    • @RobertMilesAI 4 years ago

      Is this about the black and white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?

    • @theblinkingbrownie4654 5 months ago

      @@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're done fully processing?

  • @donaldhobson8873 5 years ago +117

    This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.

    • @gasdive 5 years ago +20

      Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'

    • @stevenmathews7621 5 years ago +5

      you might be missing Price's Law there
      (an application of Zipf's Law):
      only a small part (the square root of the number of workers) is working for the "common good"

    • @NXTangl 4 years ago +16

      Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.

    • @Gogglesofkrome 4 years ago +2

      what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.

    • @NXTangl 4 years ago +2

      @@Gogglesofkrome Common good of the shareholders in this case.

  • @jared0801 5 years ago +1

    Great stuff, thank you so much for the video Rob

  • @cherubin7th 5 years ago +6

    A corporation can also do something like AlphaGo's search tree. Many people have ideas and others improve on them in different directions. Bad directions are cancelled until a very good path is found. Also, many corporations in competition behave like a swarm intelligence. But still, great video!

  • @brunogarnier2855 5 years ago +5

    Thank you for this great video.
    It could be interesting to go through the same exercise, but with the whole world's economy,
    and evaluate the "invisible hand of the market" as an artificial selection AI...
    Have a good weekend!

    • @MrTomyCJ 1 year ago

      I find that personification of the market (the "invisible hand") a horrible mistake, as the whole point of the market is precisely that it's not a single entity; it doesn't have a particular intention. It's just a network of people with DIFFERENT ones.

  • @zzzzzzzzzzz6 5 years ago

    I've always wondered this and have been pushing this idea... awesome to have a full video on it!
    Well not the 3 follow on conclusions, but the comparison to AI systems

  • @Mr30friends 5 years ago +5

    This video is actually amazing. Wow. So much useful information covered. And not just useful for people interested in AI. Most of this could apply anywhere from how businesses work to how different political systems work and to pretty much anything else.

  • @ThePlayfulJoker 4 years ago +2

    This video is the kind that changed my mind twice in only 14 minutes. I love the fact that it had a true discussion on the subject and not just a half-baked opinion.

  • @tho207 5 years ago +1

    If someone is to bring AGI to us, it should be a person like you. Your sensibleness and sensitivity are outstanding. I'll resume the video now, cheers

  • @DJHise 5 years ago +8

    It took one month from when this video was made for AI to start crushing Starcraft professional players.
    (AlphaStar played both Dario Wunsch and Grzegorz Komincz, who were ranked 44th and 13th in the world respectively; both were beaten 5 to 0.)

  • @willemvandebeek 5 years ago

    Merry Christmas Robert! :)

  • @AiakidesAkhilleus 5 years ago +1

    Great quality video, congratulations

  • @commenter3287 5 years ago +1

    I have enjoyed your computerphile videos, but these scripted ones are even better. I had never heard the AI/Corporation comparison before, so in one succinct video you introduced me to a very interesting analogy and analyzed the problems with the analogy very well.

  • @JM-us3fr 5 years ago +1

    This was my question! Thanks Rob for answering it

  • @pacibrzank78 5 years ago +1

    Every haircut you had so far was on point

  • @Bootleg_Jones 5 years ago +8

    I love that you used XKCD's Up Goer Five as your example rocket blueprint. Definitely one of the best comics Randall has ever put out.

  • @ricardoabh3242 4 years ago

    Always really interesting and clear, with a nice open-ended storyline

  • @qmillomadeit 5 years ago +57

    I've always thought about the connection of corporations to AI, as they do seek to maximize their goals in the most efficient way. Glad you put out this very well thought out video :)

    • @dannygjk 5 years ago +3

      Corporations are far from efficient.

    • @ziquaftynny9285 4 years ago +3

      @@dannygjk relative to what?

    • @dannygjk 4 years ago +1

      @@ziquaftynny9285 Relative to AI ;)

    • @dannygjk 4 years ago +1

      @Stale Bagelz Corporations are plagued with many of the issues that humanity has in general. For example power struggles within the corporation.

    • @PsychadelicoDuck 4 years ago +2

      @@dannygjk I think it's less "far from efficient", and more a stop-button/specification problem. The institutions (and the people making them up) are very good at maximizing the chances of their success, as given by the metrics that the broader systems (society/government for the institutions, and internal politics for the individuals) evaluate them by. The problems are, those metrics are not necessarily measuring what people think they are measuring (due to loopholes, outright lying, etc.), any attempts to change those metrics will be fought by the organizations currently benefiting from them, and that the fundamental social-economic system those original metrics were designed from presupposed that morality was either a non-factor or would arise naturally from selfish behavior. I'm also going to point out that the "general humanity issues" you mention are greatly exacerbated by that same set of problems.

  • @lobrundell4264 5 years ago +4

    Yeesss Rob is back as good as ever!

  • @lucbloom 1 year ago

    Is that a Don’t Hug Me I’m Scared reference in the graph???
    Oh man so awesome.

  • @Supreme_Lobster 5 years ago +10

    Those layers aren't gonna stack by themselves

  • @xDeltaF1x 4 years ago +7

    I think the statistical model is a bit flawed/oversimplified. Groups of humans don't just select the best idea from a pool but will often build upon those ideas to create new and better ones.

    • @CommanderPisces 4 years ago

      Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.

  • @adrianmiranda5531 5 years ago +9

    I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!

  • @cupcakearmy 5 years ago

    Amazing content again. Keep it up!

  • @joelkreissman6342 4 years ago +2

    I've said it before and I'll say it again, "bureaucracy is a human paperclip maximizer".
    Doesn't matter if it's a private corporation or governmental.

  • @its.dan.eastwood 5 years ago

    Great video, thanks for sharing!

  • @BM-bu4xd 5 years ago

    Yeah! terrific. Much thanks

  • @bibasniba1832 4 years ago

    Thank you for sharing!

  • @aenorist2431 5 years ago +2

    It just shows that corporations are a problem in similar ways,
    not that either of them somehow isn't a problem.
    Corporations have to be tightly controlled by the population (in the form of government) to utilize their potential without allowing their diverging goals to cause excessive damage.

  • @TheConfusled 5 years ago

    Yay a new video. Mighty thanks to you

  • @faustin289 4 years ago +8

    "Evaluating solutions is easier than coming up with them"
    This is why I should earn more than my boss....I come up with all the ideas; the only thing he does is criticize and pick what idea to take forward!

    • @oldvlognewtricks 4 years ago +9

      Your reasoning makes perfect sense, assuming people get paid based on the difficulty of their work. Oh, wait...

    • @pluto8404 4 years ago +1

      Then become the boss if it is so easy.

    • @landonpowell6296 4 years ago +3

      @@pluto8404
      Becoming the boss != Doing the boss's work.
      It's not easy to be born rich unless you already were.

    • @MrTomyCJ 1 year ago

      @@landonpowell6296 yeah, the issue here is that in reality the market doesn't directly reward intelligence or hard work; it rewards the satisfaction of consumers' needs. It seems unfair, but the alternative is much worse. Besides, intelligence and hard work may not be strictly necessary, but they very often do put you on the right path. And someone being born lucky or rich doesn't really mean they are being unfair to others.

  • @limitless1692 5 years ago

    Wow this video was really interesting ..
    Thanks for creating it

  • @RoboBoddicker 5 years ago

    Last year in the US, one of the big sporting goods retailers stopped carrying semi-automatic rifles and tightened restrictions on their gun sales in the wake of mass shootings. That decision was made solely by the CEO and it definitely didn't please a lot of shareholders. That's another big difference, I think, between corporations and AGI - the big decisions in a corporation are ultimately made by a small group of humans with human values. Not that we can always expect corporations to put morality over profits obviously, but executives can at least *recognize* an egregious situation and make moral judgments. An AGI doesn't have any such safeguards.
    Fantastic video as always, btw!

  • @Verrisin 5 years ago +2

    I like this idea overall. Somewhat smarter, but also somewhat slower. -- Controllable by other grouped-human entities (like governments)
    + a lot of other points, but I think that is kind of the main thing that differentiates it from ASI.

  • @nazgullinux6601 5 years ago

    Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "what-ifs" at you as if they were the first person to think of the idea haha.

  • @pierfonda 5 years ago +3

    Ahhh the move 37/Clerks reference!! Perfect

  • @thatchessguy7072 1 year ago +1

    @9:58 In answer to your rhetorical question, I need to reference the baduk games played between Alphago zero and Alphago master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe but zero still manages to win. Zero’s strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad.
    @10:02 ah… well we recognized move 37 as good after the AI showed that to us.

  • @alexwood020589 1 year ago

    I think another important point about idea quality in large teams is the selection process. No team is coldly evaluating every idea and picking the objectively best one. The people who can articulate their ideas best, or shout the loudest, or happen to be the CEO's son are the ones whose ideas get implemented.

  • @loopuleasa 5 years ago +1

    3:48
    Nice thinking adding the "(for now)" text in the video, as Starcraft was already beaten by DeepMind a month ago

  • @richarddeese1991 5 years ago

    "...that even governments are sometimes able to move fast enough to deal with them [corporations.]" LOVE IT!!! 😂 Oh, and by the way; LOVE the acoustic rendition of "Bad Company" [by, of course, Bad Company - the ultimate eponymous song!] - BRILLIANT! :D ...and, is that a mandolin? Wonderful! Now, as to these corporations... I think it's pretty clear that most of them act as specialist A.I.s, geared to produce some product or service (or, sometimes, a whole range of them), & as such, they're mostly designed to maximize profits for the shareholders (as you pointed out.) I think this is very much like Deep Thought, or the Go! program; they do indeed act as specialized superintelligences. But they most certainly do NOT qualify in any way as general intelligences, much less general superintelligences. As to the question you posed [quite diplomatically, I must say, as you neatly side-stepped the issue of using any mental health terms!], "Are they 'misaligned'?" Well, in short, YES. Many of them ARE misaligned. They are profit-driven - some of them to the point of getting away with whatever they can. And on that note, the ONLY moral in a capitalist, or 'free-market' society, IS, "What can I get away with - and how much $$$ can I make DOING it?" I'm sorry, but that's it. If a company isn't run by people with good intentions AND good morals &/or ethics, then that's what you end up with, simply by default. In other words, if nobody's 'minding the moral store' so to speak, things WILL go badly wrong all by themselves. I believe this could be proved - at least by example - but I don't know how to prove it myself. I have merely witnessed (and often worked for!) 54+ years' worth of corporate shenanigans which amply proves it to ME. So, YES, while some of them DO make good products, &/or have good services, that is ONLY because they are run by strong people with good morals - or, at least, good corporate & social ethics.
The main problem is this: when nobody's in charge who's strong enough to infuse a company with their own good values, bureaucracy WILL take over by default, and it is ALWAYS 'misaligned', as you put it. In fact, it is actually badly broken & dysfunctional, by any standard you'd care to judge it by... EXCEPT the standard of, "What can I get away with, and how much $$$ can I make DOING it?" That's it. That's all there is. Probability either shows that, or is useless in gauging that. If we 'train' our A.G.I.s, they're going to HAVE to be given clear psychological tests, examples & exams; they're going to HAVE to be 'taught' by people who not only do NOT teach them, "Maximize profit, dammit - nothing else matters!!!" but rather DO teach them that people matter, intelligent (or 'sentient') beings matter, whether they are flesh or circuits or whatever. If you can't perform your task without harming sentients, then you can't perform your task at all, & you MUST ask for help. Notice that I'm NOT advocating for the 3 (or 4, really) laws of robotics. Lovely sci-fi concept, I'm sure, but lousy real-world philosophy. A.I.s (or A.G.I.s, or whatever new letters someone comes up with tomorrow...) cannot be "programmed" to be "moral" in ANY sense. Doesn't work. Try it. Anyway, that's my take. Thanks for the video! You talk about important things (in my opinion!) tavi.

  • @ToriKo_
    @ToriKo_ 5 years ago +2

    I just want to say thanks for making these videos! Also nice Undertale reference

  • @GreenDayFanMT
    @GreenDayFanMT 5 years ago

    Very interesting topic. Thanks for this viewpoint.

  • @hayuseen6683
    @hayuseen6683 4 years ago

    Wonderfully well considered problem and presented both bite-sized and expounded on.
    Logicians are some of my favorite people.

  • @EebstertheGreat
    @EebstertheGreat 3 years ago +2

    At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has a probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [ Fₓ(s) ]ⁿ⁻¹ = n fₓ(s) [ ∫ fₓ(t) dt ]ⁿ⁻¹, where the integral is taken from -∞ to s.
    Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) =
    n/(20√(2π))ⁿ exp(-(s-100)²/800) [ ∫ exp(-(t-100)²/800) dt ]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n=100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n-π/8)/(n-π/4+1)). For the given parameters and n=100, we get E[S] ≈ 100 + 20 Φ⁻¹((100-π/8)/(101-π/4)) ≈ 150.173. In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ-σ was 20, i.e. σ = 10.

    • @RobertMilesAI
      @RobertMilesAI  3 years ago +2

      That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
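The figures in the exchange above are easy to sanity-check with a quick Monte Carlo simulation. This is just a sketch in NumPy; μ = 100 and n = 100 come from the comment, while the seed and trial count are arbitrary choices:

```python
import numpy as np

# Estimate E[max(X_1, ..., X_n)] for X_i ~ N(mu, sigma) by simulation,
# with n = 100 draws per trial, as in the video's graph.
rng = np.random.default_rng(0)
n, trials = 100, 50_000

def mean_of_max(mu: float, sigma: float) -> float:
    samples = rng.normal(mu, sigma, size=(trials, n))
    return samples.max(axis=1).mean()

e_sigma20 = mean_of_max(100, 20)  # theory predicts ~150.15
e_sigma10 = mean_of_max(100, 10)  # theory predicts ~125.08
print(round(e_sigma20, 2), round(e_sigma10, 2))
```

The σ = 10 run lands near the 125 shown in the video, and the σ = 20 run near the ~150.15 figure from the exact calculation, consistent with the reply.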

  • @DYWYPI
    @DYWYPI 1 year ago +1

    When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.

  • @natfrey6503
    @natfrey6503 5 years ago +1

    Might also consider some forms of government as behaving like AIs, even societies for that matter. They can all go awry when citizens that go along with the "program" are convinced their actions are for a higher good. It's the conundrum of how good-natured people can participate in the making of an avoidable calamity. But this brings in the question of human evil, or moral failing (as we see so much in large corporations), which even when quite innocuous on an individual level can be brutal when added up on a mass level.

  • @hikaroto2791
    @hikaroto2791 2 years ago

    this was an astoundingly interesting video

  • @DamianReloaded
    @DamianReloaded 5 years ago +2

    Yay! I'm always waiting for your vids. I always tell people, whenever it's brought up, that AGIs are very likely what will destroy us but also probably the only thing that can save us from our own limitations. (besides jebus)

  • @dantenotavailable
    @dantenotavailable 5 years ago +2

    Also don't forget communication costs. Scaling any human process to 1000 people becomes incredibly difficult due to the overhead necessary to keep everyone pointed in the same direction. Just documenting the suggestions from 1000 people is going to require a significant number of people and time, and making sure you get the suggestions documented correctly and unambiguously and then evaluated is going to be a herculean task. It's not for no reason that most Agile Development techniques are most effective at 5 to 6 people and most advice for teams of size 10+ is "split into 2 teams that don't need to coordinate".
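The overhead described above is often illustrated by counting pairwise communication channels, which grow quadratically with team size (the intuition behind Brooks's law). A minimal sketch:

```python
def channels(team_size: int) -> int:
    """Number of distinct pairwise communication channels in a team:
    each of k people can talk to k-1 others, counted once per pair."""
    return team_size * (team_size - 1) // 2

# A 5-person team has 10 channels; 1000 people have nearly half a million.
for size in (5, 10, 100, 1000):
    print(size, channels(size))
```

Going from a 5-person team to 1000 people multiplies headcount by 200 but multiplies the channels to keep in sync by almost 50,000.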

  • @shaylempert9994
    @shaylempert9994 5 years ago

    Just subbed!

  • @ehochmuephi8219
    @ehochmuephi8219 1 year ago

    Love your stuff man, and Tom Lehrer as well. ;)

  • @ianprado1488
    @ianprado1488 5 years ago

    Such a creative discussion

  • @definitelynotcole
    @definitelynotcole 1 year ago

    Love that bit at the start.

  • @albirtarsha5370
    @albirtarsha5370 4 years ago +1

    Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton
    AGI:
    Anything you can be, I can be greater.
    Sooner or later I'm greater than you.

  • @ChibiRuah
    @ChibiRuah 5 years ago +1

    I found this video very good, as I'd thought about this comparison before, and it expands on where the comparison holds and where it fails.

  • @ninjagraphics1
    @ninjagraphics1 5 years ago

    Thanks so much for this

  • @thewhitefalcon8539
    @thewhitefalcon8539 1 year ago +1

    This diminishing returns stuff presumably also applies to electronic AGI. Look at the server resources they pour into GPT.

  • @leninalopez2912
    @leninalopez2912 5 years ago +24

    This is fast becoming even more cyberpunk than Neuromancer.

  • @travcollier
    @travcollier 5 years ago

    A lot of the "sort of" points are very likely to apply to AGIs (at least in the early days) too.
    Anyways, we could certainly benefit from being better at aligning the goals and actions of corporations with humanity as a whole, and I think AI safety research could help with that while gaining insights about future AGIs.

  • @LeoStaley
    @LeoStaley 5 years ago +2

    The video you did on Computerphile about Asimov's 3 laws of robotics was the most impactful, concise expression of what the danger of AI development is. You made the point that "you have to solve ethics" and the fact that the people building it are going, "hold on, I'm just a computer programmer, I didn't sign up for that." Those two things combined have stuck with me for years.

  • @peabnuts123
    @peabnuts123 1 year ago

    I agree with all the analysis in this video, but from a general standpoint it seems wild to even assert that corporations are like superintelligences when we have phrases like "design by committee" or "too many cooks" etc to describe the regression toward the mean when solving problems using a group of people. The differentiating factor of companies' ability to do things has always been person-power in my mind, definitely not their ability to generate solutions for problems. Anyone can have an idea; it's the execution that counts. Some things require a lot of people to execute. This is IMO what gives organisations more capability than individuals.

  • @GAPIntoTheGame
    @GAPIntoTheGame 5 years ago

    Severely underrated channel

  • @LinChio
    @LinChio 5 years ago

    we miss u! thanks!

  • @petersmythe6462
    @petersmythe6462 4 years ago +1

    The other important thing about corporations is that they ultimately rely on people (their workers, customers, and supply chain) to function. This is why strikes are so effective and boycotts are also somewhat effective. The actual people who have to cooperate with the corporate leadership apparatus are the majority of human beings. Now, they don't have the choice to not cooperate with at least some corporations, but they can perfectly well agree not to carry out some directive.

  • @bscutajar
    @bscutajar 5 years ago

    At 11:45 he mentions you can keep adding more people and they will do the job faster. A little algebra shows that, for the number-adding example, the optimal number of people working in parallel is the square root of the number of numbers. Adding more beyond this point will slow down the process.
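The √n optimum can be checked numerically under a simple cost model (an assumption about how the work is split: each of p people sums about n/p numbers in parallel, then the p partial results are combined one at a time, for roughly n/p + (p − 1) addition steps in total):

```python
import math

def total_time(n: int, p: int) -> int:
    """Addition steps to sum n numbers with p workers: the parallel
    phase takes ceil(n/p) steps, combining p partial sums takes p-1."""
    return math.ceil(n / p) + (p - 1)

n = 10_000
best_p = min(range(1, n + 1), key=lambda p: total_time(n, p))
print(best_p, math.isqrt(n))  # the optimum lands at sqrt(n) = 100
```

Setting d/dp (n/p + p) = 0 gives p = √n analytically, matching the brute-force search.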

  • @petersmythe6462
    @petersmythe6462 1 year ago

    In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.

  • @loopuleasa
    @loopuleasa 5 years ago

    finally, a good vid from Rob

  • @auto_ego
    @auto_ego 5 years ago +4

    In part due to your videos, I'm planning to focus on AI in my undergraduate studies (US). I'm returning to school for my final 1.5 years of study after a long break from university. Do you have any recommended reading to help guide/shape/maximize the utility of my studies? Ultimately (in part due to Yudkowsky) I am drawn to this exact field of study: AI safety. I hope that I can make a contribution.

    • @spirit123459
      @spirit123459 5 years ago +1

      On MIRI's web page (intelligence dot org) is "research guide" - a list of useful books and papers to start you on their research goals. Center for Human-Compatible AI has a bibliography section containing recommended materials. You can also get some interesting papers describing problems with AGI from a bibliography section of Bostrom's "Superintelligence". You can also find a list of papers on MIRI's or FHI's web page. Good luck.

  • @ryanarmstrong2009
    @ryanarmstrong2009 5 years ago

    That Clerks reference for Move 37 was phenomenal

  • @geraldkenneth119
    @geraldkenneth119 1 year ago

    The term I came up with that might fit a corporation is Ultra-Wide Artificial General Intelligence (UWAGI): an AGI that has genius-level (but not superintelligent) competence in far more areas than you'd expect of a single human, and which can do a very large number of AGI-level tasks at once, but is still not technically superintelligent in the traditional sense. I guess one way to think of it is as being superintelligent in terms of "width" as opposed to "depth".

  • @Jack-Lack
    @Jack-Lack 4 years ago +13

    I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:
    -Corporations make decisions based on their board of directors, which is a hive mind of supposedly well-qualified, intellectual elites.
    -A corporate board will serve the goals of its shareholders, at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI who maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see fossil fuel companies who knew about climate change in the 1960's and buried the research on it).
    -AI superintelligence is supposed to have calculation resources that make it beyond human abilities, like a chess AI that is 900 elo rating points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, but also its ability to generate revenue in a superhuman way and its ability to influence human opinion in a superhuman way. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to actually influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can influence public opinion.
    Really, I think that the way corporations behave is an almost exact model for how AGI would behave.