Sharing the Benefits of AI: The Windfall Clause

  • Published: 27 Aug 2024
  • AI might create enormous amounts of wealth, but how is it going to be distributed?
    The Paper: www.fhi.ox.ac....
    The Post: www.fhi.ox.ac....
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Gladamas
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    Said Polat
    Chris Canal
    Jake Ehrlich
    Kellen lask
    Francisco Tolmasky
    Michael Andregg
    David Reid
    Peter Rolf
    Chad Jones
    Teague Lasser
    Andrew Blackledge
    Frank Marsman
    Brad Brookshire
    Cam MacFarlane
    Jason Hise
    Erik de Bruijn
    Alec Johnson
    Clemens Arbesser
    Ludwig Schubert
    Bryce Daifuku
    Allen Faure
    Eric James
    Matheson Bayley
    Qeith Wreid
    jugettje dutchking
    Owen Campbell-Moore
    Atzin Espino-Murnane
    Phil Moyer
    Jacob Van Buren
    Jonatan R
    Ingvi Gautsson
    Michael Greve
    Julius Brash
    Tom O'Connor
    Shevis Johnson
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Lupuleasa Ionuț
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    anul kumar sinha
    Sean Gibat
    Duncan Orr
    Cooper Lawton
    Will Glynn
    Tyler Herrmann
    Tomas Sayder
    Ian Munro
    Jérôme Beaulieu
    Nathan Fish
    Taras Bobrovytsky
    Jeremy
    Vaskó Richárd
    Benjamin Watkin
    Euclidean Plane
    Andrew Harcourt
    Luc Ritchie
    Nicholas Guyett
    James Hinchcliffe
    Oliver Habryka
    Chris Beacham
    Zachary Gidwitz
    Nikita Kiriy
    Andrew Schreiber
    Dmitri Afanasjev
    Marcel Ward
    Andrew Weir
    Ben Archer
    Kabs
    Miłosz Wierzbicki
    Tendayi Mawushe
    Jannik Olbrich
    Jake Fish
    Jussi Männistö
    Wr4thon
    Martin Ottosen
    Archy de Berker
    Andy Kobre
    Poker Chen
    Kees
    Paul Moffat
    Robert Valdimarsson
    Anders Öhrt
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Robin Scharf
    Klemen Slavic
    Patrick Henderson
    Oct todo22
    Melisa Kostrzewski
    Hendrik
    Daniel Munter
    Alex Knauth
    Leo
    Rob Dawson
    Bryan Egan
    Robert Hildebrandt
    James Fowkes
    Len
    Alan Bandurka
    Ben H
    Tatiana Ponomareva
    Michael Bates
    Simon Pilkington
    Daniel Kokotajlo
    Fionn
    Diagon
    Andreas Blomqvist
    Bertalan Bodor
    David Morgan
    Ben Schultz
    Zannheim
    Daniel Eickhardt
    lyon549
    HD
    Ihor Mukha
    14zRobot
    Ivan
    Jason Cherry
    Igor (Kerogi) Kostenko
    ib_
    Thomas Dingemanse
    Stuart Alldritt
    Alexander Brown
    Devon Bernard
    Ted Stokes
    Jesper Andersson
    Jim T
    Kasper
    DeepFriedJif
    Chris Dinant
    Raphaël Lévy
    Marko Topolnik
    Johannes Walter
    Matt Stanton
    Garrett Maring
    Mo Hossny
    Anthony Chiu
    Frank Kurka
    Ghaith Tarawneh
    Josh Trevisiol
    Julian Schulz
    Stellated Hexahedron
    Caleb
    Scott Viteri
    12tone
    Clay Upton
    Brent ODell
    Conor Comiconor
    Michael Roeschter
    Georg Grass
    Isak
    Matthias Hölzl
    Jim Renney
    Michael V brown
    Martin Henriksen
    Edison Franklin
    Daniel Steele
    Piers Calderwood
    Krzysztof Derecki
    Mikhail Tikhomirov
    Richárd Nagyfi
    Richard Otto
    Alston Sleet
    Matt Brauer
    Jaeson Booker
    Mateusz Krzaczek
    Artem Honcharov
    Evan Ward
    Michael Walters
    Tomasz Gliniecki
    Mihaly Barasz
    Mark Woodward
    Ranzear
    Neil Palmere
    Rajeen Nabid
    / robertskmiles

Comments • 1.4K

  • @abdulmasaiev9024
    @abdulmasaiev9024 4 years ago +175

    I'm reminded of the old joke: If you ever find yourself the target of a mugging, simply say "no". The mugger actually can't legally take your stuff without your consent.
    I'm fairly sure that even if you get somebody to sign that windfall clause, if they DO succeed in AGI they'll weasel out of paying in less time than it took you to explain what the windfall clause was.

    • @tryingmybest206
      @tryingmybest206 1 year ago +13

      They'll hire the AGI as the world's best lawyer to argue their way out of the contract

  • @MsMotron
    @MsMotron 4 years ago +786

    "legally binding" is a very stretchable term if you earn 1% of the worlds gdp

    • @biobear01
      @biobear01 4 years ago +108

      'Legally binding' requires some government or other force that can make you comply through physical or financial pain. If you have an AGI that has already amassed GWP-level wealth, it will not be susceptible to those forces. It will be able to create a way to mitigate them. Maybe this is solved because we are assuming the safety problem is solved, but it seems we still have work to do on the idea of the Windfall Clause.

    • @MsMotron
      @MsMotron 4 years ago +72

      @@biobear01 I come from Germany, and the king of Thailand (formerly the crown prince) lives here. When he became king he was supposed to pay €3 billion in taxes, because he essentially inherited an entire country while living in Germany. He did not pay a single cent. When that kind of money is at play, the wheels turn differently.

    • @beamboy14526
      @beamboy14526 4 years ago +25

      @@MsMotron The first company to make ASI will not bother making any money. Why bother selling products/services when you could just wish anything you want into existence?

    • @starvalkyrie
      @starvalkyrie 4 years ago +21

      @@biobear01 Enforcement is always power-reliant. And the company that just cracked the holy grail of AI... will be holding all the cards.

    • @Asfaril
      @Asfaril 4 years ago +25

      Global GWP is 142 trillion; EU and US GDP are around 18-20 trillion each, so around 26% of world production (rough arithmetic below). While the EU and US do not have direct control over that money, they do control monetary policy, patent policy, and the law. If your company says "screw you" to either of those two powers, suddenly your AI patents are invalidated, your corporate offices are raided, and your executive board is put on their sanctions list, along with their families.
      Facebook, Amazon, and Google are all worried about antitrust legislation from the left at the moment; expect more of that as companies grow bigger.
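      A rough arithmetic check of the figures above (the 142T GWP and 18-20T GDP numbers are the commenter's, taken at face value), plus what the 1%-of-GWP trigger discussed in this thread would come to in dollars:

```python
# Rough arithmetic using the commenter's figures above (142T gross world
# product, 18-20T each for EU and US GDP). Numbers are illustrative only.

gwp = 142e12                      # gross world product in USD (commenter's figure)
eu_gdp = 19e12                    # midpoint of the quoted 18-20T range
us_gdp = 19e12

combined_share = (eu_gdp + us_gdp) / gwp
windfall_trigger = 0.01 * gwp     # the "1% of GWP" threshold discussed in this thread

print(f"EU + US share of GWP: {combined_share:.0%}")                  # ~27%
print(f"1% of GWP: ${windfall_trigger / 1e12:.2f} trillion per year")  # ~$1.42 trillion
```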

  • @slowgenius3
    @slowgenius3 4 years ago +821

    "Appearing to Not Be Sociopaths. This is sometimes called 'Public Relations' " (Dying!)

    • @longleaf0
      @longleaf0 4 years ago +15

      One of the best lines I've heard in years :D

    • @natcarish
      @natcarish 4 years ago +4

      Still laughing.........

    • @unvergebeneid
      @unvergebeneid 4 years ago

      @@natcarish Haha, same 😂

    • @cdmacd
      @cdmacd 4 years ago +7

      This made me laugh as a marketing student 😂

    • @jonas7510
      @jonas7510 4 years ago +5

      and all the while delivering it with Sesame Street-level matter-of-factness, no cynicism involved whatsoever... rotfl!

  • @TheStarBlack
    @TheStarBlack 4 years ago +140

    "What happens when you create huge amounts of wealth and that wealth all goes to a small group of people?"
    Hmm I can't possibly imagine. Such a thing has surely never occurred!

    • @excelelmira
      @excelelmira 1 year ago +4

      Yes, he literally said this has never happened. So yes, you probably really can't imagine it.

  • @bluewales73
    @bluewales73 4 years ago +381

    I would imagine a company with >1% of the world's GDP would be able to get out of any contract they want.

    • @pyramear5414
      @pyramear5414 4 years ago +47

      Not if the collective lobbying of the other ~90% is enough to overrule them. Never underestimate the power of jealousy.

    • @Elzilcho1000
      @Elzilcho1000 4 years ago +84

      I would imagine a company with an AGI may be tempted to set it the task of getting out of the contract. 😉

    • @jarrod752
      @jarrod752 4 years ago +29

      @@Elzilcho1000 Wouldn't be hard. Just spend huge chunks of your profits on expanding your business, buying land, paying out employee bonuses, and so on. Profit is what's left over after you've spent the rest. They can complain to you on your corporate yacht.

    • @Scubadooper
      @Scubadooper 4 years ago +6

      @@jarrod752 As a company, if you buy assets (i.e. spending that isn't an expense of conducting the business that generates the profit), the money you spend is still counted in your profits.

    • @hugofontes5708
      @hugofontes5708 4 years ago +1

      @@Scubadooper So park the money in things that are meant to conduct the business, then convert it back into whatever you want. That will still get flagged as a huge profit, but you may just manage to slip into the zone where you have enough control that your company calls the shots.

  • @pipolwes000
    @pipolwes000 4 years ago +84

    "Taxes aren't voluntary, you can make companies pay them"
    Any company responsible for 1% of the GWP will almost certainly have armies of lobbyists (or simply buy elections/government leaders) to keep their taxes as low as possible. Major multinationals already do.

    • @psistorm04
      @psistorm04 4 years ago +14

      This. My prediction for an AGI future in big corporations will be:
      * Corporation develops AGI
      * Corporation stocks soar
      * Corporation lays off immense amounts of staff
      * Corporation stocks soar further
      * Corporation is now immensely powerful, essentially buys up other big competitors; lack of antitrust law enforcement in the US allows this
      * New mega-corporation exerts massive global influence
      * Massive poverty everywhere from layoffs
      * Massive unrest, but hey, that's the governments' problem now
      * Governments powerless in the face of mega-corp
      * End result: extreme class divide between people who are literally economically useless, since labor is mostly obsolete, and don't partake in the economy, and people who either have irreplaceable jobs or own AGI stock and do.
      One could argue that having a large portion of the population no longer be economically relevant would hit the corporation's bottom line, but with an AGI they will probably transition away from selling goods and over to simply shifting money around in order to make yet more money. I mean, the US has been showing us how to do it for years, with an already staggering wealth imbalance. I don't think it's too far of a leap from there. It'll just be even more wealth imbalance, together with a healthy sprinkling of war and civil unrest.
      People really forget that corporations absolutely don't care about ethics at all, so AI safety, the windfall clause, etc. don't really matter in the end. If Apple/Google/Amazon gets an AGI, prepare to watch the world change for the worse while their owners get even more unimaginably rich; that's pretty much that. It's just a matter of when this happens.
      Society won't turn into this utopia where work is mostly handled by AGI and humans can now self-actualize. It'll turn into a dystopia where corporations absolutely rule all and poverty is everywhere. They won't share; they already don't.

    • @scotty4189
      @scotty4189 1 year ago +2

      Or buy an army. Just a normal army. Not of lobbyists.

    • @Dan-dy8zp
      @Dan-dy8zp 1 year ago

      We took down robber barons like the Rockefellers with anti-monopoly laws. We could do this thing.

    • @JH-cp8wf
      @JH-cp8wf 1 year ago

      In this context, any company responsible for 1% of the GWP will /also/ have an unstoppable artificial god who does anything they ask it to do, which might be a bigger problem.

  • @Jahova3131
    @Jahova3131 4 years ago +297

    Rob, love the video as per usual. You mention the Windfall Clause contract is "legally binding." While contract law certainly differs across countries, in general contracts are only binding to the extent that the quid pro quo is maintained. In other words, if I enter into a contract with someone to do maintenance on my house in exchange for money, I'm only legally bound to provide the money if he has held up his end of the bargain. The problem I see with a Windfall Clause is that once the "windfall profits" have theoretically been realized by the AGI first mover, the other companies and institutions that may have signed on have no leverage to enforce the contract. The first mover could say, "I choose not to honor my side of the contract," and the only legal recourse would effectively be an acknowledgement that the other companies no longer have to provide their end of the bargain, which was nothing to begin with. Contracts can always be legally broken so long as the exchange of goods or services outlined in the contract is undone, and because this one involves no exchange, it can be broken at any time with effectively no recourse. I suspect you would have serious trouble getting the AGI "winner" to uphold their end, because at that point it won't matter to them.
    It sounds like a Windfall Clause is more of an insurance policy for companies in case they "lose" the race. By signing on, they are maximizing their chances of receiving profit sharing should the "winner" choose to follow through with the promise. If the winner chooses to ignore the contract, they are no worse off than they would have been absent the contract. If they end up the winner, they can choose at that point whether it makes sense for them to hold up their end of the bargain.
    I still think it is a great idea and should be further pursued, but it seems to hold all the typical first mover problems we associate with AGI, namely that once it is achieved, its potential benefits will be so great that the benefit of honoring any past agreements is dwarfed by the cost of ignoring them.

    • @MRender32
      @MRender32 4 years ago +15

      I dunno, given how drastically AGI would shift money away from the labor class, that’s just begging for a revolution

    • @PublicEnemy81
      @PublicEnemy81 4 years ago +38

      Even if companies did decide to sign the Windfall Clause, which I highly doubt happens in the first place, the company that reaches 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay its taxes; you think anyone will be able to separate trillions of dollars from a company that's already worth at least $8,000,000,000,000 (roughly 10% of the world's GDP) and has AGI at its disposal?

    • @peterbonnema8913
      @peterbonnema8913 4 years ago +6

      True. But in conclusion, we have no reason not to do this and everyone will participate. The contract will even be legally binding and yet, it won't help. Game theory leads to really weird conclusions sometimes (on the face of it).

    • @warpzone8421
      @warpzone8421 4 years ago +34

      @@MRender32 The problem is that the rich can win a revolution at this point. They can mass-produce tiny flying drones that can snipe a human holding a gun from 500 feet in the air. They would completely destroy the illusion of choice if they actually did it, but they could do it. A couple billion dollars. Done. Easy. Every protester gets one free bullet. Revolution: solved.

    • @MRender32
      @MRender32 4 years ago +12

      Warp Zone When EVERYONE is starving and unable to care for themselves, you don’t think 450 million people can bust down the doors and raze the place? We can’t underestimate how many people are gonna be affected. You are probably right that they’re SO much stronger, but if they manage to kill the labor class (literally this time) I don’t know who they’ll be able to sell to. After all, they need to generate wealth, don’t they?

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 4 years ago +71

    BTW, the Luddite fallacy actually is not a fallacy: technological development has always resulted in worse circumstances for workers overall; only through collective bargaining or new laws did we manage to get a share of the gains.
    New factories and steam engines didn't create better jobs; they created unemployment, longer hours and smaller wages, and enabled the wider use of child labour. They allowed companies to fire half of their old workers and replace them with children; they created bigger risks in investment in materials, which had to be compensated with longer workdays; and the overall theme was always to replace expensive labour with cheaper labour, which created a lot of cheap labour that was always competing against itself.
    The employment of engineers, repairers and so on always had to cost much less than the labour replaced by the new technology, and as those positions required more education and training, and thus paid better, the overall available work by definition had to fall. And so it did. Every time. And the new work created by the surplus of available labour was always much worse than the old work.
    AI will necessarily do the same. Unless workers stay vigilant and demand their rights, those rights are denied, even if that obviously results in a fall in consumption and drastically worsens the position of corporations; paying more wages to stimulate sales simply is not a solution for any individual company.
    We will get a few new jobs that require high skill, and drastically reduce the number of average jobs that pay an average wage. This will open the door for a lot of new, really shitty jobs that don't pay well and which will constantly be a target for optimisation and reduction. The more the AI thinks, the less people are paid to think - just as the more precision and dexterity machines gained, the less people were paid for such work.
    Some capitalists in the middle of the industrial revolution were begging the parliament of Britain to create legislation to regulate factories, as they faced such strong competition from those profiting from unethical practices that they had no choice but to adopt the same exploitation of workers and children. Something similar will necessarily happen in our economy; some billionaires are already calling for government action, as they know they are not free to make the ethical choice in a market where others can choose not to.

    • @themagictheatre2965
      @themagictheatre2965 4 years ago +11

      The problem with the industrial revolution was that it took place within an extremely capitalist context. It was not the new technologies; in a vacuum, the technologies were good. The workers kept getting bad jobs because ruthless, cold-hearted industrial barons with no public accountability whatsoever were in charge of all the jobs. They still are, btw.

    • @Ansatz66
      @Ansatz66 4 years ago +2

      Obviously automation in any industry would reduce the human jobs as those jobs are replaced by machines, but that doesn't automatically lead to an overall reduction in jobs. There is a bigger picture to consider beyond just the activities that are being automated. The engineers and repairers that maintain the machines are not the only place where we might find new job created following automation.
      When automation allows some good to be produced more cheaply, that tends to cause the price to fall. People might buy more of that good as the price falls, or else people might spend that money on other things, thereby causing other industries to expand. When the production of widgets is automated, many people who make widgets may lose their jobs, but the demand for cogs will naturally rise as the price of widgets falls, and so the widget-makers might gain employment in the expanding cog industry.
      Surely it is obvious that something must cause new jobs to appear despite automation, since we've been automating things for a long time, and yet people continue to work at jobs and life has been greatly improved.

    • @kireclebnul
      @kireclebnul 4 years ago +5

      @@Ansatz66 This is deeply wrong. The thing you're falling prey to is the idea that prices can only increase when demand increases. In fact, when the price of bread is raised, so is the price of basically every other staple good, because the demand for all of them is fixed.
      And by no means does automation necessarily translate into a reduced cost of product. The price to produce a car has fallen drastically since the mid 20th century due to automation, and yet the price of a new car has risen steadily, even adjusted for inflation.
      And further, in your example, you assume that cog-makers will not have also discovered the ability to automate their workers. Automation does not happen once per decade, affecting one industry at a time. It happens constantly and across the spectrum of production. Widget makers would not be (and have not historically been) able to find equally well-paying jobs as cog makers. They would simply join the cog makers in the unemployment line and end up spending what ought to have been their retirement working at a Domino's or Walmart or other low-paying service industry job. And the thing that rises up to fill the demand void left by the slight decrease in price of widgets and cogs will be built to take advantage of automation, meaning there will be few if any jobs available in its manufacture.
      To simplify automation into the world of Econ 101 is a gross disservice to workers around the world who have seen their lives upended and de facto ended by automation and its knock-on consequences.
      tl;dr: take your theoreticals elsewhere, we have no place for them in the real world.

    • @theondono
      @theondono 4 years ago +1

      This comment thread is one of the most ill informed I’ve seen in quite some time.
      The alternative to child labour during the industrial revolution was death by starvation. Yes, ethical practices are a luxury, that’s why we want every country to get wealthy as soon as possible.
      Cars haven’t got cheaper? Seriously? In what world? I can afford a car with 4 month wages, and I make a pitance. Try that 20 years ago...
      Rising the price of bread only raises modestly the price of *substitution goods*, complementary goods like e.g. ham get a lower price.

    • @evannibbe9375
      @evannibbe9375 4 years ago

      One problem with this idea is that it will not be humans that are being used in terrible working conditions, it will always be robots.
      If the AGI somehow made so much money that there was no money left in the world, then that would entail that the people of the world would have viewed what it was producing as more valuable than anything else they could have gotten with that money.
      Taxes would also have increased enough that there would be enough money flowing to people hired or subsidized by governments so that they can continue to buy the better and better products the AGI was making.

  • @TimSorbera
    @TimSorbera 4 years ago +109

    Here's a concern I'd have with this: rich corporations already wield incredible power over governments, public opinion, etc. By the time they get big enough for any windfall clause to kick in, they might say "well that was a fun PR stunt but now we're not going to follow through", and they just might be powerful enough to pull it off and not pay anything back to the world. And by this point, they've already got AGI/ASI and don't need cooperation of pretty much anybody to keep being the #1 company.

    • @Ansatz66
      @Ansatz66 4 years ago +10

      It's true that a company which develops AGI probably has no reason to continue to respect any agreements or contracts it signed, and that includes any laws of any nations it may be a part of. It would have the power to change the world to its liking, so in a way it would effectively be the new government, except far more powerful than a normal government since it's not bound by economic considerations. It would have the power to point to anyone at random and arbitrarily declare: you're rich, or you're poor and it would just happen.
      Still, that doesn't mean that the company would not pay out the windfall. Even if nothing forces them to pay out, it's also true that there would be no cost to paying out. The company has everything it could possibly want. When all of its hopes and desires are totally fulfilled, any hoarding beyond that seems pointless. Once a person has everything he or she wants, the only thing left to want is the general good of the rest of the world. Why not cure poverty and illness when it costs nothing to do so?

    • @iwikal
      @iwikal 4 years ago +12

      @@Ansatz66 One potential cost to altruism at that stage might be that by sharing your profits you're also sharing a bit of your power. And who's to say that a different, less benevolent actor, possibly one who hasn't quite figured out the alignment problem yet but is willing to risk it to get in on some of that world domination, isn't going to use these resources to put their plan in motion? Best keep it all to yourself; you know what's best for everyone anyway.

    • @donaldhobson8873
      @donaldhobson8873 4 years ago +2

      @@iwikal You have powerful AI, you can keep detailed records of what everyone is doing and stop that easily.

    • @automatescellulaires8543
      @automatescellulaires8543 4 years ago +8

      @@Ansatz66 Let's look at what humans do. Just about any billionaire could be considered as having enough. Do they, though? No; too much money only creates a craving for more. Companies don't have as much empathy as humans do, though. Their only purpose is to generate more money. Owning everything wouldn't be enough; it would only be the natural starting point, the first step.

    • @alexpotts6520
      @alexpotts6520 4 years ago +1

      @@automatescellulaires8543 I mean, I would qualify this slightly. There are good and bad billionaires just as there are good and bad people. Nobody believes that Bill Gates is a bad person; some people believe it is wrong that we live in a society where people can be as rich as Bill Gates, but the current social contract is hardly Bill Gates's fault.
      The problem is not that rich people are all sociopaths; the problem is that society is incapable of whipping rich sociopaths into line in the same way it could if I behaved similarly.

  • @Kazordoon
    @Kazordoon 4 years ago +148

    Seeing how far corporations will go to avoid paying taxes, good luck getting them to accept this windfall clause.

    • @peterbonnema8913
      @peterbonnema8913 4 years ago +5

      Yes, even if it helps, it can't be the whole solution.

    • @ucannotseemycomment
      @ucannotseemycomment 4 years ago +4

      Nice job to both of you for watching the whole video before commenting /s

    • @KirillTheBeast
      @KirillTheBeast 4 years ago +9

      Nah, they would all agree to sign it, because the requirement for it to apply would be ridiculously easy to circumvent. Just chop the company into bits, make it a conglomerate, and even if you move the equivalent of 40% of the world's money, you'll be fine. The shortsightedness of this whole endeavour just baffles me.

    • @Kazordoon
      @Kazordoon 4 years ago +10

      @@KirillTheBeast Also, for a company owning an AGI, the number of loopholes it could find to avoid paying would be unfathomable.

    • @sofia.eris.bauhaus
      @sofia.eris.bauhaus 4 years ago +3

      @@KirillTheBeast Have you looked at whether the paper addresses this point? You should probably do that before judging "the whole endeavour", right?

  • @PublicEnemy81
    @PublicEnemy81 4 years ago +275

    "You might face boycotts and activism."
    Amazon has been facing boycotts and activism for years. They don't care. Profits over everything.
    No company will sign a Windfall Clause. It's a nice idea but pure wishful thinking. A little bit of free PR right now (that honestly most people wouldn't really give a shit about) is literal fractions of pennies when you're talking about a company making 10% of the world's GDP (~$8,000,000,000,000). If you think this is a viable solution to inevitable mass automation, you live in a fairy tale.
    Even if companies did decide to sign the Windfall Clause, the company that reaches 10% of the world's GDP will be so incredibly powerful they'll effectively be immune from any enforcement actions that may be taken to force them to honor the contract. The world's most powerful governments can't get Amazon to pay its taxes; you think anyone will be able to separate trillions of dollars from a company that's already worth 10% of the world's GDP and has real AI at its disposal?

    • @NortheastGamer
      @NortheastGamer 4 years ago +66

      As I was watching the video I was also reminded about how climate experts have known that we were headed for trouble, designed very good plans to avoid it, and even presented those plans to people with the authority to enact them, but....well you probably know where that ended up...

    • @OHYS
      @OHYS 4 years ago +5

      I was thinking this throughout the whole video.

    • @themagictheatre2965
      @themagictheatre2965 4 years ago +82

      "viable solution to inevitable mass automation" We don't need a 'solution' to mass automation because mass automation is not a problem, it is a potentially amazing thing. The problem is capitalism, not automation.

    • @danieljensen2626
      @danieljensen2626 4 years ago +14

      Amazon boycotts have been pretty small. If a significant fraction of their normal userbase started to boycott them they would start to care.

    • @PublicEnemy81
      @PublicEnemy81 4 years ago +3

      Juicy Boi fair point, what I should have said is we need a solution to the unprecedented scale of economic disruptions that will be caused by automation.

  • @Nosirrbro
    @Nosirrbro 4 years ago +51

    We could also just, you know, try to move past capitalism, which is so obviously incompatible with a post-AI world. And if everyone’s labor suddenly becomes worthless, that’s some pretty strong motivation for some massive political change.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +2

      Capitalism is not obviously incompatible with a post-AI world. In fact, some companies make good money employing AI.
      AI is so effective at playing Monopoly that there is now an international agreement not to bet on rising food prices in the derivatives market.
      If everyone's labour suddenly becomes worthless, that is an enormous potential for down-sizing and cost effectiveness.
      Usually this results in wars in which the proletariat kills off its surplus. But with today's killer robots, even that can be automated already. (Technically, killing all the poor would be a massive political change.)

    • @thefakepie1126
      @thefakepie1126 4 years ago +18

      ​@@davidwuhrer6704 the sooner we get rid of capitalism the better

    • @Nosirrbro
      @Nosirrbro 4 years ago +11

      @@davidwuhrer6704 Incompatible with it in a way that is good for anyone that isn't the bourgeoisie.
      Of course, that is no less true now, but post-AI it's even more obviously true to the unaided liberal eye.

    • @saosaqii5807
      @saosaqii5807 3 years ago +2

      Capitalism is the best by far rn.
      But once post-scarcity kicks in, everything will and should be free, since AI and infinite production remove the need for any money.

    • @Nosirrbro
      @Nosirrbro 3 years ago +7

      @@saosaqii5807 Not if the capitalists have anything to do with it; it's in their interests to prevent that future, and they have more resources than anyone else to get what they want.

  • @KrazyKaiser
    @KrazyKaiser 4 years ago +9

    I think the more important question is "What happens to capital when labor realizes it's a made-up concept?"

  • @sarund9441
    @sarund9441 4 years ago +78

    "you can think of the world as having 2 types of people, some that make money by seling their labour, and some that make money by owning AI sistems"
    - Rob.
    "There are those who make money by owning capital, and those who make money by seling their labour"
    - Karl Marx

    • @buzz092
      @buzz092 4 years ago +8

      Both sound pretty clear and accurate to me

    • @ErikratKhandnalie
      @ErikratKhandnalie 4 years ago +37

      Really and truly, this whole video was spent dancing around the fact that AGI should mean the end of capitalism.

    • @SuicidalLaughter
      @SuicidalLaughter 4 years ago +22

      The automation crisis being discussed here was literally discussed in the Communist Manifesto; it certainly took a lot longer than Marx anticipated, but here we are. Even for people still married to capitalism as the best economic system, there is a lot that can be learned from Marx's analysis of capitalism.

    • @michaelbuckers
      @michaelbuckers 3 years ago +3

      What will really happen is that unchecked automation will make everything so incredibly cheap that what little money you make will be enough for a more luxurious life than we currently have. Social stratification will be ridiculous, but you want to have a better life, not prevent other people from having an exceptionally great life, right?

    • @VorganBlackheart
      @VorganBlackheart 3 years ago +1

      @@michaelbuckers Thank you for being a voice of reason in this comment section. Not to mention, as production of products and services becomes cheaper, more decentralized and more available, we are more likely to see a democratization of entrepreneurship, with clusters of mostly or entirely self sustaining local communities. People project tomorrow's problems on today's market but the landscape of the financial world changes all the time.

  • @TWPO
    @TWPO 4 years ago +6

    Robert, literally any video you make about AI is something I'm interested in seeing. You are a fantastic communicator of all things AI and we need more people like you, especially now. Keep them coming!

  • @goodlookingcorpse
    @goodlookingcorpse 4 years ago +194

    This seems to me like a specific case of the general problem that a few people own the means of production.

    • @gearandalthefirst7027
      @gearandalthefirst7027 4 years ago +24

      @@nullumamare8660 "he wrote the self-writing code!"

    • @bman7346
      @bman7346 4 years ago +14

      @@gearandalthefirst7027 he wrote the code for the code that wrote the ai

    • @mvmlego1212
      @mvmlego1212 4 years ago +8

      @@nullumamare8660 -- Do you think the people to achieve AGI _won't_ have worked hard for it?

    • @mvmlego1212
      @mvmlego1212 4 years ago +1

      Why is that a problem, though?

    • @TillmansFX
      @TillmansFX 4 years ago +56

      @@mvmlego1212 Obviously the workers, the ones doing the actual research, writing code, etc., work incredibly hard. But even if they are well compensated, they will probably not be the ones who own the means of production. Drug researchers work very hard to produce amazing drug therapies, but the ones who make the lion's share of the profit do so by owning capital, not by working.

  • @Cyberlisk
    @Cyberlisk 4 years ago +23

    That would work if most people were reasonable. Recent events have shown they aren't. In the US, for example, you can put an "I am a dickhead" label on your forehead and literally get elected president. So why would a company have a problem with that? Unfortunately, many people have stopped caring.

  • @IAmNumber4000
    @IAmNumber4000 4 years ago +150

    Time to seize the means of computation, comrades. 😎👍

    • @alex-esc
      @alex-esc 4 years ago +10

      Absolutely this will be part of it

    • @sirellyn4391
      @sirellyn4391 4 years ago +3

      Even a peak AI won't be able to make communism work. In fact long before then, it will likely tell you exactly the same thing much dumber humans have been telling you.
      Communism is not sustainable.

    • @IAmNumber4000
      @IAmNumber4000 4 years ago +11

      @@sirellyn4391 Making a claim like that would require you to know what communism is and you clearly don't. :)

    • @sirellyn4391
      @sirellyn4391 4 years ago +3

      @@IAmNumber4000 That's an easy claim to make. Marx specifically defined his "ideal" vision for when you achieve communism. But he never specifically defined HOW to get to that, or HOW to maintain it.
      And because actually getting there or maintaining it is effectively impossible (without making all individual actors mindless), that constant no-true-Scotsman fallacy is brought up, which equates to:
      Every attempt at communism that failed supposedly doesn't count, because it didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way.
      There's literally no solution for the calculation problem, the incentive problem and the local knowledge problem. And those are only the tip of the iceberg.
      Like I said. "Dumb" humans have figured this out a long time ago. If you set an AI to create communism it would either have to kill everyone or render them all mindless and control them and work with a tiny group which is very close by. And even then it wouldn't precisely fulfill Marx's vision. But that would come the closest by far.

    • @IAmNumber4000
      @IAmNumber4000 4 years ago +10

      @@sirellyn4391 Actually Marx never specified what a socialist or communist society would look like, only its defining features and difference from regular capitalism. Namely that the communist mode of production has no currency, no class system, and no state.
      Marxism is a systems theory, not itself a proposed system. Pretty crucial difference, there. Blaming Marx for the actions of state capitalist tankies like Stalin and Mao is like blaming Charles Darwin because some nuts deliberately misinterpreted his theories to justify "Social Darwinism".
      "And because actually getting there or maintaining it is effectively impossible, (without making all individual actors mindless)"
      Again, stuff like this demonstrates you haven't made the slightest attempt to understand leftism or Marxist theory. You think anyone is in favor of making every person 1984-style slaves to some absolutely powerful state? Why would anybody even be a leftist if that were the case? Obviously, someone isn't telling you the full story, because it's an easy out to think your political opponents are stupid and insane rather than make any effort to understand how they arrived at their conclusion.
      You should try reading what Marx had to say about the state and democracy. Read what he wrote about the Paris Commune in "The Civil War in France". He was closer to a direct-democracy anarchist than a USSR-style tankie. I'm not going to hold your hand the whole way and I can't paste links here.
      Nobody knows if communism is possible because it hasn't happened yet. Automation hasn't obsoleted human labor. What can be known, however, is that capitalism can't last forever. Economic growth can't continue forever because the economy _relies_ on the development of new labor-saving technologies to grow. Even now, the growth of the global economy relies entirely on non-existent money in the form of debt that will never be paid back.
      So, can the world continue to go into debt forever to fund economic growth? If so, then there is no reason why we can't take on more debt to feed and shelter the homeless. If not, then Marx was right and capitalism will be replaced.
      "That way everyone else who has tried communism to reach this impossible standard didn't work because they didn't reach or maintain this impossible standard. Therefore it wasn't the "right" way.
      "
      If you had made even the slightest attempt to understand Marxist theory then you would know Marx considered the complete obsolescence of labor by automation to be a precondition for achieving the communist mode of production. He argued _constantly_ with what he called "crude communists" (AKA tankies), people who thought capitalism could be ended by merely making private ownership illegal.
      "There's literally no solution for the calculation problem,"
      The calculation problem is a refutation of a straw man. Marx's law of value is a theory about exchange values, not prices. Exchange value is just one of many factors that influences price. So claiming "Marx's theory can't even predict prices in a capitalist economy LEL" is utterly pointless, because the theory was never meant to predict prices. But strawmen get picked up fast by mainstream economists because they're all desperate for _any_ refutation of Marxism, no matter how fallacious it is.
      If you don't want to understand leftist theory then you don't have to. Just quit pretending like you're an authority because some wingnut blog fed you anti-leftist talking points. It's embarrassing.

  • @9sven6
    @9sven6 4 years ago +11

    What value does "legally binding" have once a company makes 1% of gross world product? South Korea already has a problem regulating Samsung, because it is roughly 17% of South Korea's GDP. A company with 1% of GWP will be in a similar situation. So it would require that every government on earth promises to hold companies to their Windfall Clause. But then we've just moved the problem one step away. I don't know, I am just not confident in the promises such companies make.

  • @brendanjackman3600
    @brendanjackman3600 4 years ago +68

    I nodded at my phone when you asked "is that the kind of thing you'd be interested in"

    • @TheSam1902
      @TheSam1902 4 years ago +5

      Same

    • @josephburchanowski4636
      @josephburchanowski4636 4 years ago +1

      We need more people to like your comment, so Robert Miles makes more videos on the legal, political, and economic aspects of the future of AI.

    • @anthonypergrossi8454
      @anthonypergrossi8454 4 years ago

      @@TheSam1902 Seconded!

    • @iestynne
      @iestynne 4 years ago

      Yes please. Seems like a super important slice of the problem. There are lots of interesting objections in the comments here to go through for a start.

    • @tertrih9078
      @tertrih9078 4 years ago

      Me too :D

  • @hendrikd2113
    @hendrikd2113 4 years ago +33

    "You can make companies pay them."
    Well...
    Either we're not very good at it, or we don't want to.

    • @themagictheatre2965
      @themagictheatre2965 4 years ago +4

      It's the second one.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      No, it's the first one.

    • @robhulluk
      @robhulluk 4 years ago +3

      Possibly depends on the country, but in the case of the UK it definitely seems like the second: the government created the loopholes so that the biggest companies don't pay much tax, and then benefits personally from that.

    • @themagictheatre2965
      @themagictheatre2965 4 years ago

      @@robhulluk There is stuff like that in every country. My own country was named as a tax haven for billionaires in the Panama Papers.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      @@robhulluk I certainly didn't profit from it.

  • @erikprantare696
    @erikprantare696 4 years ago +5

    We already kind of have this divide between people. The working class survives by working, and the capitalist class by owning private property. Finding a new way to organize the economy could be a part of the solution.

  • @c99kfm
    @c99kfm 4 years ago +41

    "Firstly, governments are not actually great at spending money effectively..." [CITATION NEEDED]
    Just because it is "widely known" to be so, doesn't make it true. In this case, you'd probably find that (paraphrasing here) "governments are the worst way of spending money effectively, except for all those other forms that have been tried from time to time..."

    • @c99kfm
      @c99kfm 4 years ago +8

      @@bosstowndynamics5488 Even THEY have their moments. The US social security program is rated somewhere over 99% efficient, I believe.

    • @erikburzinski8248
      @erikburzinski8248 4 years ago +1

      The USA's health care.

    • @newcoolvid27
      @newcoolvid27 4 years ago +4

      @@c99kfm I mean it's 99% efficient because it's just giving money to people, can't get much more efficient than that

    • @whatsinadeadname
      @whatsinadeadname 4 years ago +8

      Yeah, that was probably the weakest part of the video for me. Asking Republicans and Democrats how much money is "wasted" by government and, what do you know, the numbers match up with their exit polling numbers.

  • @LowestofheDead
    @LowestofheDead 4 years ago +13

    Execs and shareholders: "Yo AGI, how do we look charitable but not actually give anyone money?"
    AGI: "I gotchu bro, just invented a million forms of windfall evasion"
    Execs: "sickkkkk"

  • @thearbiter302
    @thearbiter302 4 years ago +18

    I'm glad I was lying down when I saw the "appearing to not be sociopaths" bit. I would have fallen out of my chair!
    But seriously, thanks again for the hard work making these videos.

    • @TobiasWeg
      @TobiasWeg 4 years ago

      Underrated comment!;)

  • @scrabblan
    @scrabblan 4 years ago +51

    I don't really like that the apparent solution here is 'corporations maybe sign a pledge as a PR move, and by the time they wield pet AGIs and significant-percentage-of-world-GDP levels of wealth, we hope they just honour it willingly (because there is no realistic enforcement mechanism) to sustain an entirely outmoded economic system'.
    I feel there are a lot of ways for that not to work out.

    • @peacemaster8117
      @peacemaster8117 1 year ago +1

      It's more realistic than "have the government fix the problem".

    • @appended1
      @appended1 1 year ago +6

      @@peacemaster8117 It's sort of equivalent to "have the government fix the problem". A corporation can sign a legally binding contract, but the government still has to be willing and able to enforce it.

  • @devinfaux6987
    @devinfaux6987 4 years ago +45

    Let's not kid ourselves.
    The executives and the shareholders all being sociopaths wasn't a hypothetical, and they're putting less and less effort into appearing not to be by the day.

    • @mvmlego1212
      @mvmlego1212 4 years ago +4

      Really? The amount of corporate virtue-signalling this June was almost nauseating.
      Also, I don't understand the stereotype of evil shareholders. How much stock do you have to own in order to be classified as _bourgeois_ swine?

    • @AvatarOfBhaal
      @AvatarOfBhaal 4 years ago +8

      @@mvmlego1212 "I don't understand the stereotype of evil shareholders"
      Shareholders are pretty evil by definition. Invest in a company when it's doing well and make money. Withdraw when it's not and make money.
      "How much stock do you have to own in order to be classified as as bourgeois swine"
      Owning stock is the immoral part. So any at all.

    • @mvmlego1212
      @mvmlego1212 4 years ago +4

      @@AvatarOfBhaal -- _"Shareholders are pretty evil by definition"_
      Could you state your definition of evil, please? I can't follow your argument.
      _"Invest in a company when it's doing well and make money. Withdraw when it's not and make money."_
      That is not how investing works--at least not if you want to make money, rather than lose it. Buying high and selling low will make you broke, not rich.
      If you want to make money from investing, then you find a company that you believe will be good at making money. Then, you and the company make an agreement: you give them some money so they can execute their money-making plans, and they give you a share of their profits and some influence over the company. Eventually, you'll find yourself in a situation where you value your share of the company less than you value the money that you could get from selling that share of the company to another person, and so you sell the stock.
      These are all voluntary, mutually beneficial transactions. They don't steal or destroy wealth; they create wealth. I find it bizarre to demonize transactions or the people who make them.

    • @LowestofheDead
      @LowestofheDead 4 years ago +2

      @@mvmlego1212 The reasoning in these comments isn't that clear, but it's usually argued that shareholders incentivize endless growth at _any_ cost, including unethical business practices. Even at the cost of the free market, through monopolization and anti-competitive practices which benefit shareholders but remove consumer choice.
      However these could simply be solved by better laws on unethical practices and updating anti-trust legislation. Also removing regulations which benefit monopolies over potential competitors.
      Or we could just move to cooperatives.

    • @mvmlego1212
      @mvmlego1212 4 years ago

      @@LowestofheDead -- Those are reasonable points, and I even agree with some of them, but they're a lot different from saying "shareholders are pretty evil by definition". If a shareholder is concerned that the company they've invested in has run out of room to grow without compromising their ethics, then they can divest from the company.

  • @11kravitzn
    @11kravitzn 4 years ago +8

    When making things easier for humanity is a problem, you know something has gone seriously, seriously wrong.

    • @biomuseum6645
      @biomuseum6645 1 year ago

      Explain why

    • @11kravitzn
      @11kravitzn 1 year ago +3

      @@biomuseum6645 Making things easier should make things easier. You know, help other people.

  • @petersmythe6462
    @petersmythe6462 4 years ago +16

    "two types of people. People who make money by selling their labor, and people who make money by owning AI systems."
    Wouldn't this be true in our society anyway? People who make money by selling their labor and people who make money by owning capital goods and extracting surplus value?

    • @TheStarBlack
      @TheStarBlack 4 years ago +8

      Precisely. We already live in that world and we can see how it goes for the little guys.

    • @gnaskar
      @gnaskar 4 years ago +5

      The difference is that labour currently has value. If robots can do everything cheaper, the people who make money selling labour stop earning money. At which point things get worse.

    • @TheStarBlack
      @TheStarBlack 4 years ago +3

      @@gnaskar see my comment on the main thread. Big business can get rid of their workers if they want to but they will unintentionally destroy the capitalist economic system. Maybe that would be for the best.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      @@TheStarBlack
      _> Big business can get rid of their workers if they want to_
      That has happened before. Several times. It will happen again.

    • @TheStarBlack
      @TheStarBlack 4 years ago +1

      @@davidwuhrer6704 not all of them at the same time!

  • @Fulgara
    @Fulgara 4 years ago +11

    "the problem is (describes capitalism)"
    "our proposed solution is (describes the same ineffective band-aid we've used for more than a century)"
    Anand Giridharadas gives a pretty good explanation of why charity is ineffective at actually solving issues. An oversimplified summary is that letting people who benefit from problems control how we address them allows them to invest in 'solutions' that don't involve actually fixing the underlying problems that they make so much money from.
    If "we promise we'll do the right thing with the money we're stealing" was going to work, it would have done so by now.

    • @Laezar1
      @Laezar1 4 years ago +3

      Exactly, it's not like we haven't been there before. It's just what happened with automation, but on a larger scale. We know how that turns out when capitalism is involved: the rich get richer by right of ownership, workers don't get to work less despite being more productive, and superfluous workers are discarded and become extremely poor.
      Also, apparently governments are bad at sharing wealth, but we should trust self-serving corporate entities whose only goal is generating profit to do it better? When we know they actually do not.
      These ideas are just trying to preserve the status quo, and the status quo is the rich getting richer by draining wealth from everyone else. It's hardly a status quo worth preserving. (And that's without even including sustainability issues.)

  • @clayupton7045
    @clayupton7045 4 years ago +100

    Companies will just divide into smaller companies with the same owners.

    • @eclipz905
      @eclipz905 4 years ago +32

      Agreed. Without ironclad agreements and strict enforcement, I'd expect to see the same shenanigans companies use to avoid paying taxes.

    • @ZappyOh
      @ZappyOh 4 years ago +23

      The world *will* end up with 101 companies, each grossing 0.99% of world GDP.
      This windfall thing is a crap idea ... nothing but "public relations".

    • @ner2393
      @ner2393 4 years ago +1

      This is one of those things that you (I) don't think of right away but makes so much sense as soon as you read it

    • @RobertMilesAI
      @RobertMilesAI 4 years ago +68

      Didn't have time to put this in the video, but this is addressed in section A.2.i of the report at: www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf
      > “Firms will evade the Clause by nominally assigning profits to subsidiary, parent, or sibling corporations.”
      > The worry here is that signatories will structure their earnings in such a way that the signatory itself technically does not earn windfall profits, but its subsidiary, parent, or sibling corporation (which did not sign the Clause) does. Such a move could be analogous to the “corporate inversion” tax avoidance strategy that many American corporations use. Thus, the worry goes, shareholders of the signatory would still benefit from the windfall (since the windfall-earning corporation remains under their control) without incurring obligations under the Clause.
      > We think that the Clause can mitigate much of this risk. First, the Clause could be designed to bind the parent company and stipulate that it applies not only to the signatory proper, but also to the signatory’s subsidiaries. Thus, any reallocation of profits to or between subsidiaries would have no effect on windfall obligations.* Second, majority-owned subsidiaries’ earnings should be reflected in the parent corporation’s income statement, so the increase in the subsidiary’s profits from such a transfer would count towards the parent’s income for accounting purposes.† Finally, such actions by a corporation could constitute a number of legal infractions, such as fraudulent conveyance or breach of the duty to perform contracts in good faith.
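      A minimal sketch (not from the report) of the consolidation idea quoted above: the obligation is assessed on parent-plus-subsidiary profits, so nominally parking earnings in a non-signatory subsidiary doesn't help. The entity names, profit figures, and the flat 1%-of-GWP trigger below are hypothetical; the actual Clause proposes a schedule of marginal rates rather than a single threshold.

```python
# Hypothetical illustration of the consolidation rule described above.
# All names and figures are made up; the real Clause uses marginal rates,
# not a single all-or-nothing trigger.

GWP = 85e12                      # assumed gross world product, USD
WINDFALL_TRIGGER = 0.01 * GWP    # hypothetical trigger: profits above 1% of GWP

def consolidated_profits(group: dict[str, float]) -> float:
    """Sum profits across every entity under the signatory's control."""
    return sum(group.values())

def triggers_windfall(group: dict[str, float]) -> bool:
    return consolidated_profits(group) > WINDFALL_TRIGGER

# Profits shifted into a subsidiary that never signed still count, because
# the Clause binds the parent and its majority-owned subsidiaries together.
group = {
    "signatory_parent": 0.20e12,
    "ai_subsidiary": 1.10e12,    # the entity that "technically" earned the windfall
    "holding_sibling": 0.05e12,
}

print(consolidated_profits(group) / 1e12)   # 1.35 (trillion USD)
print(triggers_windfall(group))             # True: 1.35T > 0.85T trigger
```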

    • @sofia.eris.bauhaus
      @sofia.eris.bauhaus 4 years ago +10

      Have you looked into whether the paper addresses this criticism, or are you just assuming you instantly came up with something that the experts behind the paper have never thought of?

  • @PanicProvisions
    @PanicProvisions 4 years ago +3

    All I'm asking for is more videos, period, as long as that doesn't take away from the (so far excellent) quality of them. This is my absolute favourite YouTube channel out of the hundreds I've subscribed to, and you've got the likes of the Vlogbrothers, CGP Grey and The Royal Institution beat as far as I'm concerned.
    General purpose A.I. is the most important and interesting topic of our time and, if we survive till we have it, will impact the future of humanity incomparably more than isolated historical events like the current Corona crisis and even larger, more dramatic events like global warming.

  • @morkovija
    @morkovija 4 years ago +37

    GPT3 was interesting, would love to see next iterations figuring out chemistry or physics just from reading texts

    • @minebidw1291
      @minebidw1291 4 years ago +10

      GPT3: i can do math, just tell me examples
      GPT4: i can do physics, just give me textbooks
      GPT5:[DATA EXPUNGED]

    • @ilonachan
      @ilonachan 4 years ago +1

      GPT6: Finds a bug in the program displaying its output, breaks out of its sandbox, hacks major government players, takes over the world

    • @ekki1993
      @ekki1993 4 years ago +2

      We might get to a point where something like GPT(n) can be taught arithmetic and come up with solutions to mathematical problems. Then mathematicians/computer scientists will have to decide whether or not that's a valid proof, much like they did with the first computer-proven theorems (and they may come to a different conclusion).

    • @veggiet2009
      @veggiet2009 4 years ago +1

      I wonder how GPT3 compares to IBM's Watson?
      Input a series of example jeopardy questions and answers. And then try test questions.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +2

      “Can entropy be reversed?”

  • @m.streicher8286
    @m.streicher8286 4 years ago +17

    Oh wow thanks guy with ALL the power for telling me that if I just trust you everything will be ok, I'm sure I'll get that windfall money. lmao

  • @jphanson
    @jphanson 4 years ago +1

    Just saw the GPT-3 Computerphile video and really hoped Miles would upload soon. Every video he makes is amazing!

  • @waaaaaaah5135
    @waaaaaaah5135 4 years ago +1

    Your videos are amongst the most fascinating I've ever seen, please keep making them!

  • @paradoxicallyexcellent5138
    @paradoxicallyexcellent5138 4 years ago +5

    Prediction: companies will avoid signing this or refuse to pay if the clause is triggered.
    They will avoid signing it by voicing concern that whatever organization was designated to receive the money is not trustworthy/corrupt.
    If they do sign, they will avoid paying by using lawyers. Lots of lawyers.

  • @y__h
    @y__h 4 years ago +5

    If you plan to cover the socio-economic aspects of AGI's impact, please consider collaborating with CaspianReport on one of the videos. I think it's good to facilitate cross-pollination between multiple disciplines in AI research, because we are all in this together. Cheers Rob!

  • @MrLuMax5
    @MrLuMax5 4 years ago +1

    Very glad to see a new video relatively soon after the last one; keep up the great work! I specialized in ML during my CS studies partly due to your great, interesting videos.

  • @beamboy14526
    @beamboy14526 4 years ago +6

    Money and profits will become obsolete. The first company to discover AGI will not bother making/selling products. Imagine having a wish-granting genie with unlimited wishes. Why would you bother creating and selling products when you could just wish everything you want into existence?

  • @IAmNumber4000
    @IAmNumber4000 3 years ago +4

    Company that signed the Windfall Clause and successfully invents AI:
    “I am altering the deal. Pray I do not alter it further.”

  • @ChristophePlantin
    @ChristophePlantin 4 years ago +7

    You just assume that public money is inefficient because people think so... that's not a serious argument.
    Corporations are not inclined to fight for a greater good; they are here for money. All the biggest corporations implement sophisticated and aggressive tax reduction schemes, and that's not for the greater good. I think that's proof enough that we cannot rely on them, especially if we expect profits to grow exponentially. If we want something that benefits us all, a more efficient tax strategy is probably what we need. If you're thinking long-term, why not think about something like a worldwide tax at the international level? Or taxing profits where they're made/sold instead of where the product is produced/engineered?

  • @pielover267
    @pielover267 4 years ago +15

    It's worth bringing up that socialist economic systems, which don't have a capitalist class, don't have this problem. Automation is unambiguously good under socialist systems, where the means of production (factories/farms/mines/AI systems) are managed by and for the working class. It just means everyone gets to work less and spend more time doing things that they are passionate about.

    • @1tuttyfruti
      @1tuttyfruti 4 years ago

      Although I align a lot with Marx's ideas, I wouldn't be so sure about AI serving the greater good; the administrative class could use the AI to boost their personal power. I guess in the end it all depends on the personality of whoever has the AI first.
      Anyway, I would agree that in a socialist country AI would be less likely to be used for "evil", because those who control the AI are most likely communists and therefore value collective gain over personal gain more than a capitalist would.

  • @joshwalker7460
    @joshwalker7460 4 years ago +2

    I tend to prefer your technical content because it's more immediately useful to me, but also hold the opinion that ai safety is ultimately a cultural problem, so it's cool to see you tackling those aspects. I wouldn't mind further pursuit of this direction, but definitely want to keep seeing the great technical stuff you've made. Thanks for all your efforts!

  • @pafnutiytheartist
    @pafnutiytheartist 4 years ago +8

    While I am more interested in the technical side of things, this is also very interesting.
    It's a really neat idea, but it only works for extreme breakthroughs. I think the problems will creep up more gradually: more and more jobs, slowly and across different countries, will be replaced by AI systems. In that scenario no single company will earn 1% of world GDP, while most companies will employ very few actual workers.

  • @jl6723
    @jl6723 4 years ago +25

    I believe the Windfall Clause is useless as AI would be more effective at solving Human Terminal Goals than Currency, effectively allowing it to take the role of currency. A better alternative clause would be something that prevents monopolization of all AI upon creation, essentially using AI to sabotage other people's ability to create AI.

    • @jacobscrackers98
      @jacobscrackers98 4 years ago

      How would it be enforced if, say, AI is developed secretly?

  • @semjonnordmann
    @semjonnordmann 4 years ago +2

    Reading the comment section I think there is more than enough interest and discourse going on to also take the policy side of AI into your video portfolio. I'd be more than happy to see the current state of research presented by you. 😊

  • @jwg72
    @jwg72 4 years ago +7

    Some confusion is going on here: taking the ideological consensus in the States regarding public spending as representing reality, and also assuming that the difficulty of enforcing taxation on companies isn't a result of capital's influence corrupting taxation systems (and thus isn't soluble). Bad sociology and political science.

  • @KirillTheBeast
    @KirillTheBeast 4 years ago +64

    Assuming decision makers to be human beings?
    Supposing that corporations would have any issue with rightfully looking like the sociopaths they are?
    Suggesting a "tit for tat" argument for cooperation?
    Look, Robert, I love your content and your commitment to spreading awareness about matters even tangentially related to AI, but at this point I must assume you don't live on the same planet as the rest of us...
    Great video, though xD

    • @starvalkyrie
      @starvalkyrie 4 years ago +14

      The optimism is charming but yeah...

    • @KirillTheBeast
      @KirillTheBeast 4 years ago +11

      @@starvalkyrie To me, it just reflects humanistic tendencies in his train of thought. I mean, just look at his content: the guy is spreading awareness and inciting interest in AI safety research, which is to say "let's make sure that this thing that will eventually be made doesn't screw us all over".
      Nice? Absolutely.
      Charming? To some extent.
      Naive? As all hell. It's pretty much tied with libertarianism in terms of naivety.
      Nevertheless, it's worth taking the time to examine ways to deal with the problem without changing the entire framework (AKA late-stage capitalism) before giving up on it and forcefully engineering a legal and economic system built around a new technological paradigm that may never come.

    • @automatescellulaires8543
      @automatescellulaires8543 4 years ago +11

      At least this video does trigger people into stating the obvious. Maybe the naivety is faked, and only meant to help us realize how screwed we really are. Human-made economic choices make Skynet look like a saint.

    • @KirillTheBeast
      @KirillTheBeast 4 years ago +7

      @@automatescellulaires8543 Well, you actually nailed it with "human-made economic choices".
      The biggest lie to ever befall our species is one promoted by academics in the field of economics: the economy is treated as a phenomenon, AKA "something that happens", instead of being the sum of all the decisions made by individuals serving (mostly) their own interests.
      These sociopaths would have you think that the fact that they "use game theory" already accounts for individual agency and extrapolates it to bigger systems, but it's a lie enabled by obscuring the events' sequential order. First, a tendency is found; then it's exploited and purposefully perpetuated; and the last step (usually when someone outside the lobby questions the ethics of such actions) is justifying the events by stating that "it couldn't have happened any other way because game theory says so".
      Source: my great-uncle was a trader. The guy could never find peace after the small-business debacle he had contributed to by speculating on warehouse and shop prices (this was in Spain in the late 80s). Several of his acquaintances lost their livelihoods because of something that he himself was doing. Both they and he had come in a mass migration from the southernmost parts of the country, and he was a predator to them. A decent human being doesn't come back from that kind of realisation.

  • @sayaks12
    @sayaks12 4 years ago +23

    "Most people think their tax money is wasted" is not a good reason to say that tax money is actually wasted. Also, it's possible to tax this AGI if it does things within a nation's borders, even if it isn't based within that nation.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      I find those poll results interesting. Most elephant voters think that more than half of federal taxes are wasted; donkey voters think it is just shy of half. This is federal tax money, not municipal or state.
      What does the federal government of the USA even do for the general population?

    • @4.0.4
      @4.0.4 4 years ago +4

      @@davidwuhrer6704 Start wars in the Middle East and get a Nobel Peace Prize while doing so (Obama).

  • @rogermcinerny2027
    @rogermcinerny2027 2 years ago +1

    Given what plenty of people have already pointed out in the comments (namely, how totally unenforceable a windfall clause would be in practice), I think examining these types of problems really illustrates the need for fundamental changes to the way we view and enforce property laws and ownership as a whole.

  • @MaxTheLegoBrick
    @MaxTheLegoBrick 4 years ago +21

    Question: wouldn't this contract be basically useless in the situation where a company creates a superintelligent AI whose interests are aligned with theirs? Wouldn't it very likely try, and succeed, at getting them out of this contract?

  • @cmilkau
    @cmilkau 4 years ago +19

    Obviously, I'm going to spread my future profits among 101 "independent" companies.

    • @gnaskar
      @gnaskar 4 years ago

      Every loophole you can think of in a minute is one covered in the report but skipped here as too obvious to detail in a 10-minute intro.

    • @cmilkau
      @cmilkau 4 years ago +1

      I think the best way to "patch loopholes" is to start by specifying what you actually want companies to do, and where they fall short, just do it yourself and charge them for the costs. Any measure deviating from this will suffer the same misalignment problem we know from AI. Companies are reasonably good optimizers, too.

  • @jonp3674
    @jonp3674 4 years ago +11

    Man, I'll take any content you want to make; your stuff is awesome. I really like how clearly you explain things.
    If I get a vote, I am interested in formal systems, such as Metamath, and AI being trained to use formal reasoning. I asked Stuart Russell about it in a Reddit AMA and he said he had previously considered the idea of using formal systems to prove results about AI systems as a control technique.
    A proven result would be one of the only really solid control structures, I feel. Moreover there might be some bootstrapping possibility, where an AI is only allowed to expand its capabilities after it's proven that the expanded system will obey the same rules that it is proven to obey.
    Additionally, making GPT-3 do mathematics is sort of like training a computer to run a simulation of a dog trying to walk on its hind legs: you can do it, but it's not playing to anyone's strengths. Computer systems that reason using set theory, such as Metamath, can use symbolic language where there is a rigorous definition for every symbol and use currently existing tools to check that their reasoning is correct. That's a much more solid foundation for developing a system for thinking and reasoning about the world, I feel; natural language is a mess.
    Anyway, yeah, that was a long vote ha ha. Keep up the good work, love the channel.
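    For readers who haven't seen a proof assistant: a machine-checked statement of the kind described above can be as small as the snippet below. This is a trivial Lean 4 example shown only to illustrate what "tools to check the reasoning" look like; `Nat.add_comm` is a lemma from Lean's own library, and none of this has anything to do with verifying AI systems themselves.

      -- A trivially small machine-checked proof in Lean 4: the checker accepts
      -- this only if the proof term really establishes the stated proposition.
      theorem add_comm_example (a b : Nat) : a + b = b + a :=
        Nat.add_comm a b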

  • @AutarchKade
    @AutarchKade 4 years ago +1

    You make this topic so interesting and cover a good range of related issues. One thing I'd love to hear more about is the different ways companies are trying to create AGI. We hear that different groups want to create it, but what are their approaches, and how do they differ? I think this topic is barely covered online and would be a really interesting video!

  • @BB-mq3nn
    @BB-mq3nn 4 years ago +29

    This was... not an ideal video. The authors of the proposal are making a great many assumptions about the enforceability of such an agreement (just say you will totally give your disgusting gains to the plebs until you've gained enough power to ignore your pledge). And even in the unlikely situation where the hyper-rich decide to provide aid to people, it's done in a completely undemocratic manner. This 0.000001% of the world's population gets to decide where the charity goes with no input from anyone. They could donate to racist causes because they believe them to be noble pursuits, and who will stop them?
    All in all, an astonishingly bad take on how to spread the gains from AI.

    • @donaldhobson8873
      @donaldhobson8873 4 years ago +3

      I think that most of the current ultra-rich are competent and often benevolent. Governments and random rich benefactors have different failure modes. How often do supposedly elected governments make unpopular decisions? In the COVID crisis, Bill Gates has been trying to develop a vaccine, and the FDA has tied red tape everywhere and basically made testing illegal at one point.

    • @c99kfm
      @c99kfm 4 years ago +6

      @@donaldhobson8873 Wow, they're better at Appearing Not To Be Sociopaths than I thought.

    • @gadget2622
      @gadget2622 4 years ago +3

      @@c99kfm Pretty much. Bill Gates is not a nice bloke. He does a lot to make himself look good, but his wealth is built upon the backs of so many disadvantaged people.

    • @wasdwasdedsf
      @wasdwasdedsf 4 years ago

      Yes, racist endeavours that take horrific amounts, like the cult of Black Lives Matter.

    • @wasdwasdedsf
      @wasdwasdedsf 4 years ago +1

      @@gadget2622 Jesus Christ... get one single argument. Built upon what? Did he personally assign the production of Microsoft products to third-world countries and make sure the factories in question had impossible conditions, in addition to somehow locking people up when they applied, so that they weren't taking the job out of choice? You do realise he and people like him create ridiculous numbers of jobs that people CHOSE TO WORK AT, because it is a net-positive exchange of their effort for the wage they are paid, right? IN OTHER WORDS, VALUE IS CREATED. Child labor is fantastic, unless they are stolen off the streets and forced to work. Get out of here, communist.

  • @jayschweikert1984
    @jayschweikert1984 4 years ago +4

    This sort of proposal sounds very sensible, conditional on us ending up in the situation where some particular organization successfully invents "friendly" AGI, but still manages to "control" it in the way we usually think of companies controlling software and other intellectual property -- i.e., they have some ability to license and control its usage, capture profits, avail themselves of law and courts to protect their interests, etc.
    But... doesn't this run into some of the same, shall we say, failures of imagination that we see when people talk about the nature of *unfriendly* AGI? Like, the most serious risk of unfriendly AGI isn't "some evil corporation or terrorist group uses the AI for nefarious purposes." It's "oops, the matter in your solar system has been repurposed to tile the galaxy with tiny molecular smiley faces." In other words, a genuine super-intelligence -- almost by definition -- *can't* be controlled by non-super-intelligent humans, one way or another.
    That's really bad when it comes to unfriendly AGI, but if you're willing to stipulate a friendly AGI (i.e., one that is sufficiently aligned with human values), doesn't it suggest that a lot of this concern about how to distribute the benefits is kind of beside the point? Like, if we suppose, as I think is pretty reasonable, that one particular conclusion that falls out of "human values" is "it would be bad for the vast majority of mankind to be reduced to abject poverty or death after the development of AGI," then that's a value the AGI itself is going to share, right? We are talking about AGIs here as agents, after all, with their own values. So if we actually *solve* the value-alignment problem, doesn't that basically address this issue, without the need for human-level legal pre-commitments?

    • @donaldhobson8873
      @donaldhobson8873 4 years ago +1

      There is one case where human precommitment might help. It is the case where the technical side is solved: people know how to align an AI to arbitrary values, but a selfish human aligns the AI to their own personal values, not human values in general. (There are good reasons for a benevolent person to align the AI to themselves, at least at first, with the intention of telling the AI to self-modify later. Averaging values is not straightforward. Your method of determining a human's values might depend strongly on the sanity of the human.)

    • @Ansatz66
      @Ansatz66 4 years ago

      A friendly AGI will have whatever values we build it to have. The only way an AGI can be friendly is if we have the engineering skill to build the kind of AGI that we desire. Most likely such an AGI would follow whatever orders we give it, regardless of human suffering. Hopefully whoever builds the first AGI won't want the majority of mankind to be reduced to poverty, but only time will tell.

  • @inyobill
    @inyobill 4 years ago +4

    "If we feel like it, we'll share the profits." This experiment has already been demonstrated to be a non-starter.

  • @nickalasmontano1496
    @nickalasmontano1496 4 years ago +6

    I feel like this doesn't really solve the larger relational issue here. Even if companies gave up large sums of money from the windfalls of AGI, the company still decides how that wealth will be allocated and to whom. You still have a relationship where large portions of the population are forced to rely on the generosity of a few individuals, and are still subject to the whims of an immensely powerful company, one more powerful than any before it because it not only has more wealth but also a vastly intelligent AGI. A better solution, though it would require an upheaval of the status quo, would be to abolish the possibility of owning an AGI or, if you are willing to go as far as I am, to abolish private ownership entirely. Because even if AGI is not developed, these issues still exist as long as capitalism exists. Now, I am not advocating that it should be owned by the state either; rather, the community that works on it and is immediately benefited or put at risk by it should have a say in its operations. But I understand that this is generally considered a radical opinion, so do with it as you will.

    • @automatescellulaires8543
      @automatescellulaires8543 4 years ago

      Private ownership of half of the American soil and production capabilities, and ownership of a mass-produced watch that your grandfather gave you when you were 10, are two different things.

    • @nickalasmontano1496
      @nickalasmontano1496 4 years ago

      @@automatescellulaires8543 You're confusing private and personal property, my friend. Communists aren't concerned about your watch, they're concerned about the fact that people claim ownership of things like land, factories, and the like and use that as justification to screw people over.

  • @Phelan666
    @Phelan666 4 years ago +5

    Good luck with any of this. The income inequality in any given nation has already far surpassed any level of acceptability.

  • @Macieks300
    @Macieks300 4 years ago +8

    7:25 You make this argument sound like it's a good thing because it will make the company sign this agreement. But what you've shown is that the company doesn't actually care about giving people money, and there's no reason for it to give any money after it produces an AGI.

  • @edwarddavies8604
    @edwarddavies8604 4 years ago +4

    I feel like this whole video sidesteps the point that if you make profits amounting to a significant percentage of GWP, you can just start economically blackmailing countries into doing whatever you want. The example given in the video (Saudi Aramco) can already do this, to some degree, with many countries. If a company were making profits in the 5-10% range, it would effectively be a country of its own, with more direct economic influence than almost any other country on Earth. The windfall clause is, to an entity of this size and economic power, just a piece of paper that can be safely ignored.
    Even if countries try to enforce it (which, considering the money is supposed to go to charities from which they gain no direct benefit, seems unlikely), the AGI, or any sufficiently skilled team, could just work around any barrier put up by any nation or coalition of nations. Embargo? Loopholes or alternative sources of trade. Seize company assets? Come up with both legal and physical defences. Outright war against the company? When you control 5-10% of the world's economy and have an AGI on your side, you can probably win, or at least survive, through both economic and traditional warfare.
    TL;DR: if a company is making enough money for the windfall clause to take effect, then it has enough power to ignore it, and even if countries tried to enforce it, the company would be powerful enough to circumvent the enforcement.
    Socialism, or barbarism under the boot of AGI-run corporations.

  • @NowanIlfideme
    @NowanIlfideme 4 years ago +4

    Tim and Eric Awesome Show "It's free public relations", wow you're good.

  • @snOOfy1723
    @snOOfy1723 4 years ago

    I'm so glad you started uploading videos again. Always very interesting, keep up the good work!

  • @BMoser-bv6kn
    @BMoser-bv6kn 4 years ago +14

    There's a massive error here: Social Security is 99.6% efficient. You also neglect the possibility of dividend systems, like the Alaskan oil fund. I'm pretty sure writing people bigger checks isn't going to take more work, and removing the means testing would probably lower it.
    "We can't rule out the possibility they mean it" lol. On about the same % chance that snacking on depleted uranium might give me super spidey powers, sure. But historical norms have shown that peasants have to riot and be on the brink of revolution to receive improved material conditions.
    The current unrest managed to uh.... accomplish the rebranding of a pancake mix and some rice products. Which somehow feels infinitely more than we usually get, but somehow feels infinitely worse than nothing. Capitalism is amazing.

    • @c99kfm
      @c99kfm 4 years ago

      Yeah, Robert really should stick to topics directly related to his field of expertise. This went a bit out of his comfort zone and out of bounds.

  • @kaitengiri
    @kaitengiri 4 years ago +5

    There are huge problems with this solution right off the bat.
    1. It relies on a very naive perception of how people act in general. Being shamed by a populace that has no knowledge or education about the subject is going to get people to sign a document that says "If you ever 'win', then just stop 'winning'"? Just historically speaking, on much, much smaller scales, this has a very bad track record.
    2. It never really addresses the idea of reneging on the contract. Who is going to sue someone who literally buys every single lawyer the world over? What are the remaining good samaritans to do when someone uses that money to threaten legal or even physical action against anyone who opposes them? Or they just buy out the contract holders themselves and then void the contract?
    3. What do you do when the contract is simply held to be unenforceable in court?
    4. Even if everyone agrees to sign this peacefully and then plans to actually make good on it, what do you do when someone creates a shell company and gives the AI to it?
    5. Even if all of the above is negated and everyone plays fair and fully intends to uphold the spirit of this, what do you do if a brand new startup company or some guy in his garage manages to beat everyone to the AI? He never signed the contract and may have no reason to give up his gains.
    I really liked this video, and I enjoy the thinking exercise and the subject, but if you do this in the future, I think you should find some economic and socio-political experts to discuss the matter with as well. It'll really help illustrate how big of a problem this really is, and also highlight flaws in current ideas.
    EDIT: I actually just thought of something even more important that I missed before.
    6. What do you do when the AI itself has decided that you enforcing that contract would be detrimental to its goal of getting more money and decides it can't let you do that? How are you even going to contend with the inhuman mind games and loopholes that an AI might play against you, when we've already run into some serious human-based loopholes?

  • @IAmNumber4000
    @IAmNumber4000 3 years ago +2

    “You can think of the world as having two types of people: People who make money by selling their labor, and people who make money by owning advanced AI systems.”
    B A S E D

  • @alex-esc
    @alex-esc 4 years ago +2

    Love to hear about the human part of the bargain, if possible please do more on the socioeconomic impact of AI or platform other creators who do.
    Great vids as always!

  • @ArchmageIlmryn
    @ArchmageIlmryn 4 years ago +12

    This, while a good idea on the surface, seems far too idealistic. There's very little preventing a company from simply going back on the windfall clause once they have AGI, as at that point they would be one of the most powerful entities on Earth. What's really needed is the full abolition of capitalism prior to the advent of AGI.

    • @dm9910
      @dm9910 4 years ago +5

      It's not even idealistic, it's absolutely dystopian. Its idea of success is to take half of the income of the top 0.1% and divide it between the other 99.9%. In other words, the owners of the AI would be 1000 times wealthier than everyone else despite not being required to do any work. Hardly a utopia if 99.9% of people are dependent on begging for charity from an unelected group of shareholders who could at any time cut off their income.
      How about we just give every human exactly one share? Everyone would own the AI equally and have a say in how it's used and what we should do with the profits.
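      The 1000x figure above checks out as a quick back-of-the-envelope calculation, under the simplifying assumption that the AI profits are the only income that matters: if the top 0.1% keep half and the other half is split among the remaining 99.9%, per-capita incomes differ by a factor of 0.999/0.001, roughly 999.

        # Toy sanity check of the ~1000x claim (assumes AI profit is the only income).
        population = 1_000_000          # any size works; the ratio is scale-free
        owners = population // 1000     # the top 0.1%
        others = population - owners    # the remaining 99.9%
        profit = 1.0                    # total AI profit, normalised to 1

        per_owner = (profit / 2) / owners   # the half the owners keep, split among them
        per_other = (profit / 2) / others   # the distributed half, split among everyone else

        print(per_owner / per_other)        # ≈ 999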

    • @donaldhobson8873
      @donaldhobson8873 4 years ago

      @@dm9910 I don't think it would be dystopian. Firstly, the 99.9% are still extremely well off by today's standards, and secondly, you aren't beholden to the whims of any one person: if a few rich people stop sending you money, the money from the rest will be plenty. If you upset all the rich people, you can get plenty to live well on from your friends or other people. If absolutely nobody will send you money, then you have probably done something that society considers really bad.

  • @joshuahillerup4290
    @joshuahillerup4290 4 years ago +17

    The thing about legally enforcing a windfall clause is that it's a lot like enforcing tax laws. And really, at the point where human labour has no value, maybe we should be getting rid of companies entirely, because all the advantages of capitalism are gone at that point, and the disadvantages of things like socialism can be dealt with by, well, AI.

    • @geoffbrom7844
      @geoffbrom7844 4 years ago

      For the most part, big companies stick very well to the tax laws; it 'just so happens' that there are enough loopholes to pay no tax, completely legally.

    • @kiharapata
      @kiharapata 4 years ago +1

      @Enclave Soldier Capitalism doesn't "work out fine" for the society we have now, much less for one where labor has no value. Communism and Socialism aren't based exclusively around human labor having value, that concept is only used to explain how the working class is exploited by the capitalist class. In a society where labor is worthless and the working class becomes merely a consumer class, the exploitation is clear enough. The real basis of Socialism is the common ownership of the means of production, and in this case those means are the AI itself and the machines it uses. For this situation, there is no fairer and more democratic solution than common ownership of AI -- what good faith argument can a person have to defend the position that the future and well being of all of Humanity should lie in the hands of a few corporate executives?

  • @haldir108
    @haldir108 3 years ago +1

    Despite pressing like on every single video on this channel, and commenting on a fair few of them, this video did not show up in front of my eyes, and I only found it because I talked about AI with friends and wanted to show them some previous videos here. Clearly the YouTube AI is far from perfect.
    Therefore, I bestow a great honor on this channel: this is the first time I "ring the bell", in an effort to not miss out on videos again.

  • @NitFlickwick
    @NitFlickwick 4 years ago

    Very interesting video. There is no question the technology is only a tiny percentage of the problems posed by AGI. I’d love more videos on the people problems.

  • @sevret313
    @sevret313 4 years ago +10

    Taxing isn't enough. You need public ownership of AGI.

    • @_DarkEmperor
      @_DarkEmperor 4 years ago +1

      Socialism is cancer.

    • @mvmlego1212
      @mvmlego1212 4 years ago +2

      _"You need public ownership of AGI."_
      What does that mean, exactly?

    • @sevret313
      @sevret313 4 years ago

      @@mvmlego1212 Public ownership here would mean state ownership.
      Companies are already quite good at not paying taxes or other obligations. So even if they sign away some of their profits in this windfall clause, there is no reason to believe they wouldn't make their AGI find a way to avoid paying this money.

    • @Jp-ue8xz
      @Jp-ue8xz 4 years ago

      @@mvmlego1212 I think his name summarizes exactly what s/he meant. Also I'd rather have any privately-owned company on the freakin planet own AGI than any state

    • @mvmlego1212
      @mvmlego1212 4 years ago +1

      @@sevret313 -- Thank you for clarifying, but what in the world leads you to believe that the government (of the U.S., U.K., Canada, even the U.N.--take your pick) would use it responsibly, or even more responsibly than most corporations?
      At least a company's purpose, which is to produce and sell a good or service, is well-defined. Governments, on the other hand, have a habit of sticking their nose in whatever the current wave of politicians feel like getting involved in. Furthermore, governments have a monopoly on force, which allows them to keep companies in check. No such check exists for a government with an AGI.

  • @TristanBomber
    @TristanBomber 4 years ago +9

    I think any sort of attempt to get companies to voluntarily donate their income is misguided at best. Never mind that it will be extremely hard to force a company to adhere to its Windfall Clause after it has amassed massive money, and therefore power, from creating a powerful general AI - even if they did do it (and they might, if only to prevent a revolution!), it still results in a scenario where a significant amount of world production is in the hands of a single company, and if they do share their wealth through some kind of global UBI, it means a lot of people's livelihoods would be dependent on the AI creators. That gives them far, far too much power over the rest of society, with us having basically no bargaining leverage with the creators. It's better than massive poverty, but it's still at best a benevolent dictatorship. An optimal solution would need to ensure that the AI system is socially owned and managed and the benefits are shared equally, so that the AI does not create any elite class. Nationalization isn't an option here, because it would exclude the world outside of the nation that created the AI, because a national government might use the AI to subjugate other nations, and because nation-states are frequently not as democratic as they appear (if they even bother - what if the superhuman general AI is created in China?). We would need some other form of social ownership of the general AI, one open to the entire world population and difficult for any one group to dominate. Like a sort of super-co-op :P

  • @petersmythe6462
    @petersmythe6462 4 years ago +1

    The problem with making it legally binding is that, almost by definition, anything that large and influential can probably buy a coup. Or carve out its own state.

  • @garronfish8227
    @garronfish8227 4 years ago +2

    This option encourages companies to hide what they are doing with AGI. Also it is going to be very difficult to separate what profit is due to AGI.

  • @ekki1993
    @ekki1993 4 years ago +7

    The whole issue around an explosive increase in production has a parallel to AI safety. This clause sounds like a good move (it might be), but it's more likely just a patch, in the same way that most proposed "solutions" to AI problems are. We need to be actually prepared and start changing stuff now, or we won't be able to handle what would happen otherwise.
    Capitalism has an expiration date. Either resources start to run out and you need to put something other than profits at the forefront, or you get to a post-scarcity society where it's an opt-in deal. Either way, our society is not ready for, nor even preparing for, the transition, much like we're not ready to design a safe AGI.

  • @midknight1339
    @midknight1339 4 years ago +6

    This was a really interesting video! I'd definitely be interested in seeing more of the squishier side of AI safety (although the technical side does interest me more :P).
    I think this is also the sort of subject which would really benefit from discussion - for example, I'm not entirely clear on how we can be sure that a corporation which has developed an AGI and profited from it enough to account for a sufficient portion of the GWP would even be subjected to any legally-binding agreements from the past. Could it not amass enough power to be in a position to throw that agreement to the winds?
    Anyways, thanks for the amazingly informative videos! You not only introduced me to the entire field of AI safety research but also made me incredibly fascinated by it :D

  • @goodlookingcorpse
    @goodlookingcorpse 4 years ago +1

    "Hands up who's excited to find out."
    Very few people put their hands up. But the ones who did make the decisions.

  • @HeadsFullOfEyeballs
    @HeadsFullOfEyeballs 4 years ago +3

    I'm not at all convinced that corporations would stick to this sort of agreement.
    They may well _sign_ it since that's (for the time being) free. But once a corporation actually starts making monstrous profits using AI, I would expect them to renege on the agreement and instead invest a comparatively small percentage of those monstrous profits into regular PR plus -bribes- lobbying so that lawmakers will let them get away with reneging. It would be difficult to stop them from doing this for the same reasons it's currently difficult to get corporations to pay their taxes.
    No-one likes Monsanto/Bayer or BlackRock or Walmart or BP, and everyone knows they're awful organisations who regularly break the law and violate human rights, but they're still hugely profitable. Clearly having a spotless public image isn't essential to a well-placed multinational corporation.

  • @robcio150
    @robcio150 4 years ago +9

    Theories about this are not new at all. Just read some Marx or later authors; they theorised about it years ago. Even then it was not that hard to imagine technologies that could make most labour obsolete, even if they were more pessimistic and expected it to happen sooner. The big benefit this approach has, though, is far better publicity: if you spoke about it in Marxist terms in this video, there would probably be at least a few times as many downvotes, and a lot of people would be against it just because of anticommunist sentiment, regardless of the details in this case.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +4

      The industrial revolution did cause widespread poverty and starvation by producing enough food for everyone.
      Marx predicted that the imperative of growing the economy by putting a price on everything meant that ultimately you have to calculate the price of your sleeping hours and of the air that you breathe, though he didn't predict a date for that.

  • @lakloplak
    @lakloplak 4 years ago +6

    I'd love more on the sociological aspects of the future of AI!

  • @e1ementZero
    @e1ementZero 4 years ago

    Yes, the social policy aspect is very important for discussion/understanding/consideration imo. Great work!

  • @edzeppelin1984
    @edzeppelin1984 4 years ago +1

    There's also the possibility that the first company to invent an AI advanced enough to generate that much wealth would become so obscenely powerful that any such agreement becomes leaves in the wind, legally binding or not. What we've seen from companies like Facebook is that people still use their services despite scandal after scandal, because the service they offer is regarded as indispensable by a sufficient number of users. At that level, you might pay lip service to PR, but ultimately power is just power.

  • @Laezar1
    @Laezar1 4 years ago +13

    Considering how automation turned out, I'm not optimistic about how AI would be used.
    The windfall clause here is basically a way to make sure the status quo is preserved, which isn't great.
    Like, automation could have been used to considerably reduce the time individuals have to work to survive, but instead it led to some people being extremely rich, others still having to work a lot, and those who were rendered useless becoming extremely poor.
    This would do exactly the same thing, on a larger scale. And I mean, that's not surprising: AGI made by capitalists in a capitalist society will lead to an AI that amplifies the problems of capitalism. So... uuuh... hopefully capitalism is dead before we get to AGI is the takeaway here?

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +2

      It's the old alignment problem. Corporations are not aligned with human values.

  • @dm9910
    @dm9910 4 years ago +54

    I enjoyed the video however I think this particular solution is BS.
    Firstly there's plenty of evidence that businesses do not give a shit about promises, legally binding or otherwise, to do good. You say that it's hard to get companies to pay taxes, but I don't see how this solution does anything other than push the problem down the line. Not only that but it pushes it down the line to a point in time where the companies have AI telling them the best strategy to avoid paying up.
    Secondly this whole thing is working under the assumption that capitalism is the only viable system. If post-AI is post-work, then you effectively eliminate social mobility - on the day the last human is made redundant, the hierarchies of wealth are fixed in place forever. Those who owned shares in the companies with AIs would be rich, and those who didn't would be poor. Even if you take away half the income of the richest 1% and distribute it, that's still half the wealth shared between 99% while the other half is still owned by the 1%. It's better than nothing but still laughably tiny, just as bad or arguably worse than our current system. And this would pass down the generations too. Any "it's fair because you can just work hard and become a capitalist yourself" defense of capitalism is fundamentally incompatible with this future society.
    Thirdly as well as the profits, you have to consider the other side effects of AI. Even if the profits of AI are shared, why should a tiny number of owners get to decide what problems the AIs should work on, which values are prioritised, and how resources should be spent? You say that we need to make sure AI's goals are aligned with humanity, but humanity is 7 billion people, not the top 0.1% who own everything.
    The only sensible solution to these problems is common ownership, aka communism. You don't need to worry about bigwigs trying to avoid paying the tax because there are no bigwigs. You don't need to worry about hierarchies of wealth since the proceeds would be shared fairly. And you don't need to worry about AI working for the rich instead of everyone, because they would really work for everyone. What's more, all the classic arguments against communism make no sense in this sort of world - you can't say people will become lazy because there's no jobs anyway, similarly you can't say the rich work harder because they won't be working anymore, you can't say there's not enough to go around because this is a post-scarcity world, you can't say it's inefficient because AIs will make it efficient.
    You could say that the inventors of the AI deserve to be rewarded for bringing the world into this post-work utopia. Well actually, that's not incompatible with communism. The difference is that they will be rewarded based on how much the world collectively decides to give them, and not based on how much they can get away with seizing.
    Anything less than full common ownership of AI would be a great injustice. Even if you're a die-hard capitalist today, you surely have to admit that capitalism makes no sense in a world where ideas like "creating jobs" or "investing smartly" make no sense and even if they were necessary they could be done far better by AI than some corporate fat cat or Silicon Valley egomaniac.

    • @kaitengiri
      @kaitengiri 4 years ago +2

      The problem with this is that society will not have entered post-scarcity the moment AI goes online. In fact, population density would likely go up with fewer people working, and if we adopted a communistic approach before actually achieving a post-scarcity society, that would increase the scarcity. We still have limited areas in which to farm food and produce supplies, and we still have to make sure they get to everyone; on top of that, there's supposedly a problem in that the nutrients in the Earth's soil, even with re-fertilization, will eventually dry up and be unable to support vegetation.
      While I'm against it normally, I do agree that post-scarcity societies would have to rely on communism, or something that looks incredibly like communism, to function well. But unless we manage to solve that issue BEFORE the first AIs manage to spread their wings, there's going to be a period of time between AI and post-scarcity where we're going to have to find a way to deal with essentially a new level of economic "war".

    • @donaldhobson8873
      @donaldhobson8873 4 years ago

      I agree that capitalism makes no sense, not so much that it would be bad, more that there isn't a meaningful way the world could be that looks like AI-driven capitalism. (Assuming that the AI hasn't been programmed with some contrived goal just to prove me wrong.) An AI set to maximize profits turns all the atoms into banknotes or cheques or something, including the atoms that were part of other people.
      I don't think that what you are describing is recognisable as communism either. It's more some kind of post-work, AI-does-everything utopia.

    • @davidharmeyer3093
      @davidharmeyer3093 4 years ago +1

      I don't understand how anyone can seriously think this. If you don't have the life you would like to have, then by definition you aren't in a post-work world. Just work to improve your life. Go re-floor your house, grow some food, build a chair for yourself. Boom, you now have more wealth.
      If there is actually zero money to be made doing anything because AGI has made the cost of every task zero, then that means you can have any service essentially for free. You only have a post-work society when you have a post-need society. If anyone has a problem, that means they are willing to do something for you in exchange for you fixing their problem, and so wealth can be created.

    • @davidharmeyer3093
      @davidharmeyer3093 4 years ago +2

      I agree this particular solution is BS though. The problem is that, unlike capitalism, you are discouraging people from doing something that helps everyone: creating wealth. If you design a set of rules that incentivizes the behavior you want to happen (in capitalism's case, this behavior is for people to make mutually beneficial trades), then you aren't creating a mob-vs-company-that-invented-AGI battle where the company no longer wants to be forced to give up its money; you are encouraging the creation of wealth.

    • @dm9910
      @dm9910 4 years ago +8

      @@davidharmeyer3093 You're technically right that in an AI society you could create small amounts of wealth by, say, building a chair. But if you're building chairs by hand while a capitalist owns an AI-run factory producing chairs, you will always be outpaced and so you can never achieve social mobility through any amount of hard work.
      Also when I'm talking about work I'm talking about paid work, not domestic tasks. I'm not saying humans will stop brushing their own teeth or whatever (whether they do or not is irrelevant).
      Goods and services will still have value in the post-AI world. There are finite quantities here. Resources, processing power, usable space. Post-scarcity is a phrase that kind of assumes some reasonable limits. Of course even in a post-AI world, you can't give everybody a whole country to themselves cos you'd run out of space. You can't ask the AI to compute a problem that will take it a billion years to solve, cos you'd be hogging the processing resources. Put an asterisk on the phrase "post-scarcity" if you like for extra precision, but it doesn't change anything. Even if everyone on Earth wants a giant mansion all to themselves, it's still not going to be useful to get humans to do that work. That's what makes it post-work.

  • @thevoicesoflogic
    @thevoicesoflogic 4 years ago +1

    Yes, more videos on the social and political implications of AI. I work in government and politics and appreciate the insight.

  • @darkmaulOI
    @darkmaulOI 4 years ago +2

    CEO of the first company that owns an AGI: "AI, tell me how to get out of the windfall clause without arousing suspicion, and then make me king of the world!"

  • @BlursedSYNthesis
    @BlursedSYNthesis 3 years ago +4

    I'm really starting to wonder if AI safety research might be one of the last, best hopes for humanity, and for reasons that have almost nothing to do with AI itself.
    At its core, AI safety research is kind of about logically and mathematically examining systems of power, and trying to design them in such a way as to make abuse of that power impossible.
    Setting aside capitalism's failings in particular, a huge problem humanity faces is that, no matter someone's goals, no matter how noble they might be, and no matter what system they exist within, acquiring power and influence will always be an extremely effective instrumental goal, just as acquiring intelligence is for an AI, followed closely by the goal of "crush anything that can take that power away from me."
    The problem with humanity, the glitch within us, is that power is not only an instrumental goal; even for the best of us it is also an intrinsic goal, a reward unto itself. Power in and of itself triggers something in our brains; it's both psychologically and even physiologically rewarding. Not only that, power falls under a category of rewards that don't have a saturation point, meaning that unlike cookies or sex, where you can become full or exhausted, the lust for power can never truly be satisfied. You always feel like you could get just a little bit more power, and your brain often rewards you when you do.
    The paperclip maximizer already exists, and it exists within the head of every man, woman and child on the planet!
    Your work, and the work of others like you, might very well end up being our only hope of ever designing a system capable of taming this impulse. If we can learn how to solve these problems in machines, maybe we can learn how to apply the solutions to our society. And even if not, if there is no hope of taming this impulse within us, then perhaps we can create some form of AI that is free from this glitch, such that it could assume and manage our power, for the good of mankind, without the risk of being corrupted by it.
    You might literally be "doing god's work" here, by designing it.

  • @themagictheatre2965
    @themagictheatre2965 4 years ago +5

    The risk is spread across all of society yet the rewards are only felt by a few people. Yeah.... welcome to industrial capitalism, Robert. That's how the world has been for 200 years now.

    • @jacobscrackers98
      @jacobscrackers98 4 years ago +1

      How does this happen in regular capitalism without AI?

    • @themagictheatre2965
      @themagictheatre2965 4 years ago

      @@jacobscrackers98 It is called overproduction. See, the benefit of technology in production terms is that far more goods can be produced. But this is only useful if there is demand for that increased level of production. Capitalism can sustain that *most* of the time, but when it fails to, it leads to an economic crisis where society has way too much of some things it needs, and not enough of other things. This whole arrangement is not necessary and only exists to benefit a small group of billionaire types. It is a risk spread across all of society, as all of society feels the negative effects of economic crises, but it only benefits a small group of individuals.

  • @bevanfindlay
    @bevanfindlay 4 years ago

    "Appearing to not be sociopaths". That was a great line. Well played. And the sticky label on your forehead one too.

  • @darrennew8211
    @darrennew8211 4 years ago +2

    Someone who likes this might be interested in Suarez's novel Daemon and its second half Freedom(TM). It involves (in part) "beneficent" "malware" that at one point for example starts hiring lawyers to keep itself from being deleted. It's a very fun ride, and one of my favorite books. (Not really AGI, but practical AGI, sort of. Very realistic for sci-fi.)

  • @KaiHenningsen
    @KaiHenningsen 4 years ago +6

    Frankly, while governments are often bad at handling money, private entities "working for the common good" are, on average, as far as I can tell, significantly worse. Sure, there are good ones, but there are also a large - a very large - number of bad ones, where you're lucky if even 10% of the money goes to the supposed goals, and that's before you look into if that's even used effectively, and if they're even good goals.

    • @Dorian_sapiens
      @Dorian_sapiens 4 years ago +1

      I was hoping someone would point this out.
      Also, can I just say that citing an opinion poll ("52% of Republicans and 47% of Democrats agree that blah blah blah") is no way to back up the empirical claim that governments don't spend resources efficiently? The idea that the US government wastes tax revenue is a major feature of anti-tax, anti-government messaging that is extremely heavily subsidized in the media sphere, so the fact that agreement exists across both factions of neoliberals could be purely a function of how effective the messaging is.

  • @sam3524
    @sam3524 4 years ago +6

    A world where everyone signs on to something except ONE company would be ridiculous.
    The USA withdrawing from the Paris Agreement: 😳

    • @gonzalocoelho8468
      @gonzalocoelho8468 4 years ago

      samuel liepke haha this is as funny as the paris agreement

  • @SolarGranulation
    @SolarGranulation 4 years ago

    Please do make more videos on these aspects of AI safety, but don't stop your usual approach. I greatly enjoy your style of explanation and would like to hear anything you have to say, or to comment upon.

  • @AI7KTD
    @AI7KTD 4 years ago +2

    Just sign it! Once you have an AGI, you can have it figure out a loophole in no time.

  • @goldenfloof5469
    @goldenfloof5469 4 years ago +5

    Well, if I ended up with an AGI, or more likely an ASI, that happened to be hard-coded to do what I want (and it actually listens), what's to stop me from just not paying? I mean, with an ASI I could very easily take over the world and nobody could do anything about it, since I have an ASI and they don't.
    Of course I wouldn't actually do that; I'm not a psychopath. But I would probably use it to teach certain people a lesson or two.

    • @michaelspence2508
      @michaelspence2508 4 years ago

      Once you have a god on a leash, no silly little world government can do shit to you.

    • @donaldhobson8873
      @donaldhobson8873 4 years ago +1

      Suppose I set the AI the task of designing and implementing a utopia. The world is quickly and radically transformed for the better. I have just let loose nanodrones programmed to cure all diseases. I haven't bothered asking permission, or warning anyone that I am doing this. Is this taking over the world?

    • @Hexanitrobenzene
      @Hexanitrobenzene 4 years ago +1

      @@donaldhobson8873 Yes, it is.

  • @svevo
    @svevo 4 years ago +8

    seize the means of production, there's no other way

    • @armorsmith43
      @armorsmith43 4 years ago

      This implies a requirement that AI becomes Free Software

    • @zenithparsec
      @zenithparsec 4 years ago +1

      So seize the GPUs?
      Or are you talking about factoring? Sieve: the means of factoring the products?

    • @sofia.eris.bauhaus
      @sofia.eris.bauhaus 4 years ago +2

      abolish patents and copyright, give businesses to stakeholders (mostly workers and customers), dismantle the state. embrace free market communism! 🏴

    • @DanielSMatthews
      @DanielSMatthews 4 years ago

      @@armorsmith43 Free software only favours those with the intellect and the tenacity to wield it for some form of benefit, which is why it failed to cure humanity of its stupidity and laziness. However, if we did have a level playing field with regard to AI, then you would instead have an inequity based on the energy to run the AI systems; and if we then had vast amounts of fusion-derived energy, you would have an inequity based on time: how fast you can compute anything compared to others. No matter what you do, you can't get away from the requirement to make capitalism more than just an amoral entity. It needs to be compassionate, and by that I mean knowing what people need rather than just giving them what they demand.

  • @Khaos768
    @Khaos768 4 years ago +1

    That was way more science fiction than usual!

  • @ivanhoeivanhoe810
    @ivanhoeivanhoe810 4 years ago +2

    Yes, I would like to hear more about the policy and social aspects of AI.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago

      I would love to hear more about CyberSyn.