Why Not Just: Think of AGI Like a Corporation?
- Published: 4 Aug 2024
- Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
In this video we ask: Are corporations artificial general superintelligences?
Related:
"What can AGI do? I/O and Speed"
"Why Would AI Want to do Bad Things? Instrumental Convergence"
Media Sources:
"SpaceX - How Not to Land an Orbital Rocket Booster"
Undertale - Turbosnail
Clerks (1994)
Zootopia (2016)
AlphaGo (2017)
Ready Player One (2018)
With thanks to my excellent Patreon supporters:
/ robertskmiles
Jordan Medina
Jason Hise
Pablo Eder
Scott Worley
JJ Hepboin
Pedro A Ortega
James McCuen
Richárd Nagyfi
Phil Moyer
Alec Johnson
Bobby Cold
Clemens Arbesser
Simon Strandgaard
Jonatan R
Michael Greve
The Guru Of Vision
David Tjäder
Julius Brash
Tom O'Connor
Erik de Bruijn
Robin Green
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Robert Sokolowski
Jérôme Frossard
Sean Gibat
Sylvain Chevalier
DGJono
robertvanduursen
Scott Stevens
Dmitri Afanasjev
Brian Sandberg
Marcel Ward
Andrew Weir
Ben Archer
Scott McCarthy
Kabs Kabs Kabs
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Mr Fantastic
Wr4thon
Dave Tapley
Archy de Berker
Kevin
Marc Pauly
Joshua Pratt
Gunnar Guðvarðarson
Shevis Johnson
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Truls
Paul Moffat
Anders Öhrt
Lupuleasa Ionuț
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Shawn Hartsock
Seth Brothwell
Brian Goodrich
Michael S McReynolds
Clark Mitchell
Kasper Schnack
Michael Hunter
Klemen Slavic
Patrick Henderson
"Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.
Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.
Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!
Well it's the second best thing to actually working it out properly
...simulated it a few MILLION times...
How would a statistician solve this?
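For anyone unfamiliar with the term the thread is joking about: a Monte Carlo method really is just "simulating it a hundred thousand times". A minimal sketch, using the classic π-estimation example purely as an illustration:

```python
import random

def monte_carlo_pi(trials=100_000, seed=42):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter-circle of radius 1."""
    rng = random.Random(seed)  # fixed seed so the estimate is reproducible
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(trials)
    )
    # Area of quarter-circle / area of square = pi/4
    return 4 * inside / trials
```

With 100,000 samples the estimate typically lands within about 0.01 of π, which really is the second best thing to working it out properly.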
"Like Starcraft".
That aged well....
Was about to comment this.
I don't even know if AlphaStar had played vs. TLO by then, but I think it did.
It said 'for now'!
Robert Miles you lied, 640K is not enough for everyone!
I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.
You are definitely a rocket surgeon. Don't let the haters put you down.
@dirm12 Rocket Neurosurgeon FTFY
don't doubt ur vibe
i’m neurorocket though
For anyone interested in the statistics of the model at 6:16
The cumulative distribution function (cdf) of the maximum of multiple random variables is, if they are all continuous random variables and independent of one another, the product of the cdfs. This can be used to solve analytically for the statistics he shows throughout the video:
Start with the pdf (bell curve in this case) for the quality of one person's idea and integrate it to get the cdf of one person. Then, since each person is assumed to have the same statistics, raise that cdf to the Nth power, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf.
For fun, if you do this for the population of the earth (7.5 billion) using his model (mean=100, st.dev=10) you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.
thx u))
No, actually thank you , though
That’s if the model you are using is correct... which might not be.
Edit: Probably it’s wrong.
Oh, multiplying the CDFs, that’s very nice. Thanks!
@@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum, they are inspired by the ideas of others. So one smart idea can lead to another. It's also possible that individuals have a heavy tail distribution (like a power law perhaps) instead of a gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun so I just went with those assumptions.
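For anyone who wants to check the numbers in this thread themselves, here's a rough Python sketch of the model described above. The normal distribution with mean 100 and standard deviation 10 is the video's toy assumption, not a fact about people, and the closed-form approximation is a standard large-n formula for the expected sample maximum:

```python
import math
import random
from statistics import NormalDist

MU, SIGMA = 100.0, 10.0  # the model's assumed idea-quality distribution

def best_idea_monte_carlo(n_people, trials=20_000, seed=1):
    """Average quality of the best of n_people iid normal ideas,
    estimated by simulation."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(MU, SIGMA) for _ in range(n_people))
    return total / trials

def best_idea_approx(n_people):
    """Closed-form approximation E[max] ~ mu + sigma * Phi^-1((n - pi/8) / (n - pi/4 + 1)),
    which avoids simulating billions of draws."""
    p = (n_people - math.pi / 8) / (n_people - math.pi / 4 + 1)
    return NormalDist(MU, SIGMA).inv_cdf(p)
```

`best_idea_approx(100)` comes out near 125, and `best_idea_approx(7_500_000_000)` near 164.6, matching the "whole population of Earth" figure quoted above.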
Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money.
This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.
And... that's pretty much not an effective way of doing things, if we look at modern HollyWoke or Ubisoft
This can be seen as part of AI training, if a corporation has the wrong goal or wrong solution it will be outcompeted/fail and the companies that survive have better selected for successful ways to maximise profit
@@AtticusKarpenter I bet those are not following market signals and not succeeding at the market, yet they survive from income from other "sources", the stupid ESG scores
This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).
I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.
I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.
It's an attempt to get a head start; if we have more time and resources we might be able to overwhelm it.
A little bit like building a fort: you know bigger armies will come, so you build structures to help you be more efficient in fighting them off.
And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.
"we're going to pretend corporations dont use AI"
ah yes, and im going to assume a spherical cow....
in a frictionless...
vacuum
What do you mean my guy just avoided an infinite while loop
"You can't get a baby in less than 9 months by hiring two pregnant women."
Wow we really do live in a society.
If you hire very pregnant women, you can get that baby pretty quick, actually.
The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.
If they're already pregnant when you hire them, then yeah, it's quite possible
I think it's safe to assume that the quote is meant to be read as two women who just became pregnant.
To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.
It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"
Oh shut up, you know what he meant
Corporations still have basically human goals, just those of the bourgeoisie.
AI can have very inhuman goals indeed.
A corporation might bribe a goverment to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders.
An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth because its primary goal and only real purpose in life is to study teeth at large sample sizes.
Humans cannot have differing terminal goals, some are just in a better position to achieve them.
@@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.
@@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.
@@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore.
I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain.
It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.
@@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.
I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.
As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark!
I appreciate having such thoughtful material available on YT. Thanks for posting.
Only took a month for the Starcraft example to become dated thanks to AlphaStar. >_
AlphaStar arguably isn't at a superhuman level yet though(unless you let it cheat)
@@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than I have.
In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.
Great video Robert. See you again in 3 months.
Seriously we need more of your videos. Love your channel.
I’ve long thought Corporations are analog prototypes of AI lumbering across the centuries, faceless, undying, immortal, without moral compass as they clear-cut and plow-under down another region in their mad minimal operating rules.
Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact some are corrupt doesn't mean corporations as a concept are intrinsically bad, just like with humans in general.
These videos deserve way more recognition. They are very well made and thought out.
Every one of your videos kicks ass. Some of the most interesting material on the subject.
Been a long time Rob! Glad to see you
Y'all are way more intelligent than I lol.
1,5 x speed = 1.5 more fun
@@shortcutDJ not sure about that..
there might be diminishing returns on that ; P
@@shortcutDJ Surely it's 1.5 times as much fun.
@@MrGustaphe No, simply 1.5 more units of fun.
"can you tell I'm not a rocket surgeon" I literally just got done playing KSP failing at reworking the internal components of my spacecraft
Very interesting! And I really like the little "fun bits" you edit into your videos!
Credits song: Bad Company
Once again a wonderful video. One of the most interesting and well-spoken channels on YouTube!
Love the Dont Hug Me I'm Scared reference!
Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago
Excellent clerks reference! Also the video was outstanding as usual. :P
I binged several of your videos and I noticed this example about the rocket comes up another time. As well as the example just before it. Thought I was somehow rewatching one over again.
I noticed an interesting quirk about the model: it ignores the difficulty of finding the right task. If you take 361 people and have them all play Go, between them they can consider every point on the board, so by the model they'd be able to beat our current AI. But that's not the case in reality, which shows how important the ability to evaluate those ideas actually is.
A coworker just shared this video with me. I had no idea you had your own YouTube channel. I like Computerphile a lot, including your ML/AI videos, so I instantly subscribed!
Finally! I was tired of rewatching your old videos. haha Keep'em coming
Looking forward to episode 2 of this! I've thought of the utility of this analogy in being that corporations, as intelligent nonhuman agents, give us the opportunity to experiment with designing utility functions that might be less harmful when implemented.
Content and presentation is brilliant, I'm sure matching audio and video quality will follow.
Subbed :)
Is this about the black and white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?
@@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're done fully processing?
This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.
Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'
you might be missing Price's Law there.
(an application of Zipf's Law)
a small part (the √ of the workers) is working for the "common good"
Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.
what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.
@@Gogglesofkrome Common good of the shareholders in this case.
Great stuff, thank you so much for the video Rob
A corporation can also do something like alphago's search tree. Many people have ideas and others improve on them in different directions. Bad directions are canceled until a very good path is found. Also many corporations in competition behave like a swarm intelligence. But still great video!
Thank you for this great video.
It could be interesting to go through the same exercise, but with the whole world's economy,
and evaluate the "invisible hand of the market" as an artificial selection AI...
Have a good weekend!
I find that market's personification ("invisible hand") as a horrible mistake, as the whole point of the market is precisely that it's not a single entity, it doesn't have a particular intention. It's just a network of people with DIFFERENT ones.
I've always wondered this and have been pushing this idea... awesome to have a full video on it!
Well not the 3 follow on conclusions, but the comparison to AI systems
This video is actually amazing. Wow. So much useful information covered. And not just useful for people interested in AI. Most of this could apply anywhere from how businesses work to how different political systems work and to pretty much anything else.
This video is the kind that changed my mind twice in only 14 minutes. I love the fact that it had a true discussion on the subject and not just a half-baked opinion.
If someone is going to bring AGI to us, it should be a person like you. Your sensibleness and sensitivity are outstanding. I'll resume the video now, cheers
It only took a month after this video was made for AI to start crushing Starcraft professional players.
(AlphaStar played both Dario Wunsch and Grzegorz Komincz, ranked 44th and 13th in the world respectively; both were beaten 5-0.)
Merry Christmas Robert! :)
Great quality video, congratulations
I have enjoyed your computerphile videos, but these scripted ones are even better. I had never heard the AI/Corporation comparison before, so in one succinct video you introduced me to a very interesting analogy and analyzed the problems with the analogy very well.
This was my question! Thanks Rob for answering it
Every haircut you had so far was on point
I love that you used XKCD's Up Goer Five as your example rocket blueprint. Definitely one of the best comics Randall has ever put out.
Always really interesting and clear, with a nice open-ended storyline
I've always thought about the connection of corporations to AI, as they do seek to maximize their goals in the most efficient way. Glad you put out this very well thought out video :)
Corporations are far from efficient.
@@dannygjk relative to what?
@@ziquaftynny9285 Relative to AI ;)
@Stale Bagelz Corporations are plagued with many of the issues that humanity has in general. For example power struggles within the corporation.
@@dannygjk I think it's less "far from efficient", and more a stop-button/specification problem. The institutions (and the people making them up) are very good at maximizing the chances of their success, as given by the metrics that the broader systems (society/government for the institutions, and internal politics for the individuals) evaluate them by. The problems are, those metrics are not necessarily measuring what people think they are measuring (due to loopholes, outright lying, etc.), any attempts to change those metrics will be fought by the organizations currently benefiting from them, and that the fundamental social-economic system those original metrics were designed from presupposed that morality was either a non-factor or would arise naturally from selfish behavior. I'm also going to point out that the "general humanity issues" you mention are greatly exacerbated by that same set of problems.
Yeesss Rob is back as good as ever!
Is that a Don’t Hug Me I’m Scared reference in the graph???
Oh man so awesome.
Those layers aren't gonna stack by themselves
I think the statistical model is a bit flawed/over simplified. Groups of humans don't just select the best idea from a pool but will often build upon those ideas to create new and better ones.
Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.
I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!
Amazing content again. Keep it up!
I've said it before and I'll say it again, "bureaucracy is a human paperclip maximizer".
Doesn't matter if it's a private corporation or governmental.
Great video, thanks for sharing!
Yeah! terrific. Much thanks
Thank you for sharing!
They just prove that corporations are problems in similar ways.
Not that somehow both are not a problem.
Corporations have to be tightly controlled by the population (in the form of government) to utilize their potential without allowing their diverging goals to cause excessive damage.
Yay a new video. Mighty thanks to you
"Evaluating solutions is easier than coming up with them"
This is why I should earn more than my boss....I come up with all the ideas; the only thing he does is criticize and pick what idea to take forward!
Your reasoning makes perfect sense, assuming people get paid based on the difficulty of their work. Oh, wait...
Then become the boss if it is so easy.
@@pluto8404
Becoming the boss != Doing the boss's work.
It's not easy to be born rich unless you already were.
@@landonpowell6296 yeah the issue here is that in reality, the market doesn't directly reward intelligence or hard work, it rewards the satisfaction of consumer's needs. It seems unfair, but the alternative is much worse. Besides, intelligence and hard work may not be strictly necessary but they very often do put you in the right path. And someone being born lucky or rich doesn't really mean they are being unfair to others.
Wow this video was really interesting ..
Thanks for creating it
Last year in the US, one of the big sporting goods retailers stopped carrying semi-automatic rifles and tightened restrictions on their gun sales in the wake of mass shootings. That decision was made solely by the CEO and it definitely didn't please a lot of shareholders. That's another big difference, I think, between corporations and AGI - the big decisions in a corporation are ultimately made by a small group of humans with human values. Not that we can always expect corporations to put morality over profits obviously, but executives can at least *recognize* an egregious situation and make moral judgments. An AGI doesn't have any such safeguards.
Fantastic video as always, btw!
I like this idea overall. Somewhat smarter, but also somewhat slower. -- Controllable by other grouped-human entities (like governments)
+ a lot of other points, but I think that is kind of the main thing that differentiates it from ASI.
Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "What-ifs" at you as if they are the first person to think of the idea haha.
Ahhh the move 37/Clerks reference!! Perfect
In a row??
@9:58 In answer to your rhetorical question, I need to reference the baduk games played between Alphago zero and Alphago master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe but zero still manages to win. Zero’s strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad.
@10:02 ah… well we recognized move 37 as good after the AI showed that to us.
I think another important point about idea quality in large teams is the selection process. No team is coldly evaluating every idea and picking the objectively best one. The people who can articulate their ideas best, or shout the loudest, or happen to be the CEO's son are the ones whose ideas get implemented.
3:48
Nice thinking adding "(for now)" text in the video, as Starcraft was already beaten by DeepMind a month ago
"...that even governments are sometimes able to move fast enough to deal with them [corporations.]" LOVE IT!!! 😂 Oh, and by the way; LOVE the acoustic rendition of "Bad Company" [by, of course, Bad Company - the ultimate eponymous song!] - BRILLIANT! :D ...and, is that a mandolin? Wonderful!
Now, as to these corporations... I think it's pretty clear that most of them act as specialist A.I.s, geared to produce some product or service (or, sometimes, a whole range of them), & as such, they're mostly designed to maximize profits for the shareholders (as you pointed out.) I think this is very much like Deep Thought, or the Go program; they do indeed act as specialized superintelligences. But they most certainly do NOT qualify in any way as general intelligences, much less general superintelligences.
As to the question you posed [quite diplomatically, I must say, as you neatly side-stepped the issue of using any mental health terms!], "Are they 'misaligned'?" Well, in short, YES. Many of them ARE misaligned. They are profit-driven - some of them to the point of getting away with whatever they can. And on that note, the ONLY moral in a capitalist, or 'free-market' society, IS, "What can I get away with - and how much $$$ can I make DOING it?" I'm sorry, but that's it. If a company isn't run by people with good intentions AND good morals &/or ethics, then that's what you end up with, simply by default. In other words, if nobody's 'minding the moral store' so to speak, things WILL go badly wrong all by themselves. I believe this could be proved - at least by example - but I don't know how to prove it, myself. I have merely witnessed (and often worked for!) 54+ years' worth of corporate shenanigans which amply proves it to ME. So, YES, while some of them DO make good products, &/or have good services, that is ONLY because they are run by strong people with good morals - or, at least, good corporate & social ethics.
The main problem is this: when nobody's in charge who's strong enough to infuse a company with their own good values, bureaucracy WILL take over by default, and it is ALWAYS 'misaligned' as you put it. In fact, it is actually badly broken & dysfunctional, by any standard you'd care to judge it by... EXCEPT the standard of, "What can I get away with, and how much $$$ can I make DOING it?" That's it. That's all there is. Probability either shows that, or is useless in gauging that.
If we 'train' our A.G.I.s, they're going to HAVE to be given clear psychological tests, examples & exams; they're going to HAVE to be 'taught' by people who not only do NOT teach them, "Maximize profit, dammit - nothing else matters!!!" but rather DO teach them that people matter, that intelligent (or 'sentient') beings matter, whether they are flesh or circuits or whatever. If you can't perform your task without harming sentients, then you can't perform your task at all, & you MUST ask for help.
Notice that I'm NOT advocating for the 3 (or 4, really) laws of robotics. Lovely sci-fi concept, I'm sure, but lousy real-world philosophy. A.I.s (or A.G.I.s, or whatever new letters someone comes up with tomorrow...) cannot be "programmed" to be "moral" in ANY sense. Doesn't work. Try it. Anyway, that's my take. Thanks for the video! You talk about important things (in my opinion!) tavi.
I just want to say thanks for making these videos! Also nice Undertale reference
Very interesting topic. Thanks for this viewpoint
Wonderfully well considered problem and presented both bite-sized and expounded on.
Logicians are some of my favorite people.
At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has a probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [ Fₓ(s) ]ⁿ⁻¹ = n fₓ(s) [ ∫ fₓ(t) dt ]ⁿ⁻¹, where the integral is taken from -∞ to s.
Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√(2π))ⁿ exp(-(s-100)²/800) [ ∫ exp(-(t-100)²/800) dt ]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n=100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n-π/8)/(n-π/4+1)). For the given parameters and n=100, we get E[S] ≈ 100 + 20 Φ⁻¹((100-π/8)/(101-π/4)) ≈ 150.173. In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains why you wrote "σ = 20" between those vertical bars at 6:31. You probably meant that the distance between μ+σ and μ-σ was 20, i.e. σ = 10.
That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
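The integral in this thread is easy to check numerically with a rough midpoint-rule sketch (the parameters are the video's toy model, not measurements):

```python
from statistics import NormalDist

def mean_of_sample_max(n, mu=100.0, sigma=10.0, steps=40_000):
    """Numerically integrate E[S] = integral of s * n * f(s) * F(s)^(n-1) ds,
    the mean of the maximum of n iid normals, via the midpoint rule."""
    d = NormalDist(mu, sigma)
    lo, hi = mu - 10 * sigma, mu + 10 * sigma  # covers effectively all the mass
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        s = lo + (i + 0.5) * h
        total += s * n * d.pdf(s) * d.cdf(s) ** (n - 1) * h
    return total
```

With n = 100, `sigma=20.0` gives about 150.15 and `sigma=10.0` gives about 125.08, consistent with both the calculation above and the value used in the video.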
When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to affect change in the world and to gather instrumental resources to increase that ability.
Might also consider some forms of government as behaving as AI, even societies for that matter. They can all go awry when citizens that go along with the "program" are convinced their actions are for a higher good. It's the conundrum of how good natured people can participate in the making of an avoidable calamity. But this brings in the question of human evil, or moral failing (as we see so much in large corporations), that even when quite innocuous on an individual level can be brutal when added up on a mass level.
this was an astoundingly interesting video
Yay! I'm always waiting for your vids. I always tell people, whenever its brought up, that AGIs are very likely what will destroy us but also probably the only thing that can save us from our own limitations. (besides jebus)
Also don't forget communication costs. Scaling any human process to 1000 people becomes incredibly difficult due to overhead necessary to keeping everyone pointed in the same direction. Just documenting the suggestions from 1000 people is going to require a significant number of people and time, making sure you get the suggestions documented correctly and unambiguously and then evaluated is going to be a herculean task. It's not for no reason that most Agile Development techniques are most effective at 5 to 6 people and most advice for teams of size 10+ is "split into 2 teams that don't need to coordinate".
Just subbed!
Love your stuff man, and Tom Lehrer as well. ;)
Such a creative discussion
Love that bit at the start.
Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton
AGI:
Anything you can be, I can be greater.
Sooner or later I'm greater than you.
I found this video very good, as I had thought about this myself, and it expands on the comparison and where it fails
Thanks so much for this
This diminishing returns stuff presumably also applies to electronic AGI. Look at the server resources they pour into GPT.
This is fast becoming even more cyberpunk than Neuromancer.
A lot of the "sort of" points are very likely to apply to AGIs (at least in the early days) too.
Anyways, we could certainly benefit from being better at aligning the goals and actions of corporations with humanity as a whole, and I think AI safety research could help with that while gaining insights about future AGIs.
The video you did on Computerphile about Asimov's laws of robotics was the most impactful, concise expression of what the danger of AI development is. You made the point that "you have to solve ethics", and the fact that the people building it are going, "hold on, I'm just a computer programmer, I didn't sign up for that." Those two things combined have stuck with me for years.
I agree with all the analysis in this video, but from a general standpoint it seems wild to even assert that corporations are like superintelligences when we have phrases like "design by committee" or "too many cooks" to describe the regression toward the mean when solving problems using a group of people. The differentiating factor in companies' ability to do things has always been person-power in my mind, definitely not their ability to generate solutions to problems. Anyone can have an idea; it's the execution that counts. Some things require a lot of people to execute. This is IMO what gives organisations more capability than individuals.
Severely underrated channel
we miss u! thanks!
The other important thing about corporations is they ultimately rely on people (their workers, customers, and supply chain) for them to function. This is why strikes are so effective and boycotts are also somewhat effective. The actual people that have to cooperate with the corporate leadership apparatus are the majority of human beings. Now, they don't have the choice to not cooperate with at least some corporations, but they can perfectly well agree not to carry out some directive.
At 11:45 he mentions you can keep adding more people and they will do the job faster. A little algebra shows that for the number adding example, the optimal number of people working in parallel is the sqrt of the number of numbers. Adding more beyond this point will slow down the process.
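The algebra this comment alludes to can be sketched with a simple cost model (a toy assumption, not from the video): each of p workers sums about n/p numbers in parallel, then the p partial sums are combined serially, giving total time roughly n/p + p, which is minimized at p = sqrt(n).

```python
import math

def total_time(n, p):
    """Time to sum n numbers with p workers, in unit-cost additions:
    each worker sums ~n/p numbers in parallel (n/p steps),
    then the p partial sums are combined serially (p more steps)."""
    return n / p + p

n = 10_000
# Brute-force the best worker count and compare with sqrt(n).
best_p = min(range(1, n + 1), key=lambda p: total_time(n, p))
print(best_p, round(math.sqrt(n)))  # prints "100 100"
```

Past p = sqrt(n), the serial combining step dominates, so extra people genuinely slow the job down, exactly as the comment says.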
In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.
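This generate-then-pick problem can be made concrete with a toy model (entirely hypothetical, just to illustrate the comment's point): if each idea has a true quality but the evaluator only sees a noisy score, best-of-N selection collapses toward a random pick as evaluation gets harder.

```python
import random

def best_of_n(n, noise, trials=2000):
    """Average true quality of the idea that *looks* best, when each
    idea's true quality is uniform on (0,1) but the evaluator only
    sees quality plus Gaussian noise of the given standard deviation."""
    total = 0.0
    for _ in range(trials):
        ideas = [random.random() for _ in range(n)]
        scores = [q + random.gauss(0, noise) for q in ideas]
        total += ideas[scores.index(max(scores))]  # pick the top-scoring idea
    return total / trials

print(best_of_n(100, noise=0.0))  # near 0.99: perfect evaluation
print(best_of_n(100, noise=5.0))  # near 0.50: selection is ~random
```

With a perfect evaluator, 100 candidates get you close to the best possible idea; with a very noisy one, 100 candidates buy almost nothing, which is why "just add more idea generators" fails for evaluation-hard problems like chess moves.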
finally, a good vid from Rob
In part due to your videos, I'm planning to focus on AI in my undergraduate studies (US). I'm returning to school for my final 1.5 years of study after a long break from university. Do you have any recommended reading to help guide/shape/maximize the utility of my studies? Ultimately (in part due to Yudkowsky) I am drawn to this exact field of study: AI safety. I hope that I can make a contribution.
On MIRI's web page (intelligence dot org) is a "research guide" - a list of useful books and papers to start you on their research goals. The Center for Human-Compatible AI has a bibliography section containing recommended materials. You can also get some interesting papers describing problems with AGI from the bibliography of Bostrom's "Superintelligence", and there are further lists of papers on MIRI's and FHI's web pages. Good luck.
That clerks reference for move 37 was phenomenal
The term I came up with that might fit a corporation is Ultra-Wide Artificial General Intelligence (UWAGI): an AGI that has genius-level (but not superintelligent) competence in far more areas than you'd expect of a single human, and which can do a very large number of AGI-level tasks at once, but is still not technically superintelligent in the traditional sense. I guess one way to think of it is as being superintelligent in terms of "width" as opposed to "depth".
I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:
-Corporations make decisions through their board of directors, which is a hive mind of supposedly well-qualified intellectual elites.
-A corporate board will serve the goals of its shareholders at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI that maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see the fossil fuel companies that knew about climate change in the 1960s and buried the research on it).
-AI superintelligence is supposed to have computational resources beyond human abilities, like a chess AI that is 900 Elo points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, including the ability to generate revenue and to influence human opinion in superhuman ways. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can sway public opinion.
Really, I think that the way corporations behave is an almost exact model for how AGI would behave.