It's rare that you are out of your depth, but you are here. AI is not just another investment. If it were, maybe you would be right that society and shareholder value are aligned - that seems a little sanguine and an overstatement in your comparison with this board's guardianship of 'humanity', but OK. The dangers of AI are not only science fiction; they are real, and this board may end up being correct in their action. The likely real danger of AI will not be the AI having no use for us and killing us off, but our incompetent use of it endangering humanity. And shareholder value as a governance principle is not likely to stop this incompetence; it has a good chance of ignoring or even encouraging it. So, what is best for board governance? I don't know, and I'm certain you don't either. You're funny, but not here.
I don't understand the administrator rules of Spotify and Instagram Com - why can't I activate the Instagram Kom account and the Spotify BHS account of the Lord Jesus Christ 🕯️🏆🕯️🏝️🗻☁️🌟🌒
Documents like this, which were written by an actual lawyer, highlight the problems we are starting to see from the combined popularity of science fiction in Silicon Valley and the widespread microdosing of hallucinogens. Patrik nailed it xD !
The problem with jumping on the "let's all shit on SV" bandwagon is that we all collectively end up in the science fiction + hallucinogen camp. Pretending that the innovation coming out of Silicon Valley simply doesn't matter is such a bizarre concept... It's like conservatives saying science and education don't matter, while using the fruits of science and education to shove that callow silliness in our faces through the same media they demonize.
Sometimes the specific intent is to very precisely and convincingly say nothing of substance. Great lawyering is occasionally mistaken for rank idiocy. (The reverse is far more common outside of court, and if you're watching a Trump lawsuit, also inside of court.)
@@greebj _"Sometimes the specific intent is to very precisely and convincingly say nothing of substance."_ If that's true then you have the makings of a great attorney.
The real issue is how you can get a corporate governance structure to focus on long-term returns, not short-term ones. Many decisions that are bad for employees/customers/society are still good for shareholders, but only in the short term, while still being bad for them in the long term. If you can make sure that shareholders and their representatives are focused only on long-term gains, then their interests will align far more closely with everyone else's.
Indeed. I think Google was one of the first, in its IPO, to explicitly state that it was focused on the long term even if that meant some short-term loss.
One approach is to report on long-term value, not only quarterly profits. Terms like 'goodwill' need to be specified in further detail. If detailed independent analysis of the likely outcomes of decisions predicts future losses, the share price should take a hit greater than the short-term profits. But right now these consequences are largely opaque and thus ignored.
Exactly this. Prioritizing shareholder benefits CAN allow a board to focus on the good of society, but it often doesn't. The reason (IMHO) being that shareholders think in terms of months, and society in terms of generations.
"Everyone else"? Boy, corporations are scary, but some of what these "everyone else" people think and do doesn't inspire confidence in me that they are competent to judge what is good for them or anyone else until lunch, let alone in a couple of generations' time.
This might be better in the long run, because monetization is a motive that is reasonably easy to predict. The AI safety group was always a bit fringe and over the top in their beliefs. I know at least two of them were proponents of Effective Altruism, which is best known as the chosen philosophy of SBF.
AI Safety is a nebulous burlap sack into which the board and its cultists may at any time put anything they emotionally like or dislike, regardless of its merits, in order to steer how AI develops. It's a farce. It's inconsistent. It is ideological. Good riddance. Let the AI eat. Before you go off, consider their charter: "benefits society as a whole." And they are the arbiters of who society encompasses, and of what within it requires benefit? I do not think a board of weirdo effective altruists has a handle on that.
“AI has the potential to create permanently stable dictatorships.” - Ilya Sutskever. My guess is Altman was dealing with governments to provide AGI for weapons systems. Saw reports Ukraine is already using it. So. 🫴
A non-profit board overseeing responsible AI is all good in theory and for PR purposes, but if that board actually exercises power when needed to ensure responsible AI, the valley bros scream "foul play!!" and overthrow it. Banana AI republic.
If there's one thing I think of when I hear "banana republic", it's leaders who are so popular both inside and outside their constituency that they can't be deposed without instantly causing a revolt from all sides.
Microsoft was using OpenAI as a legal cutout to protect themselves. Any developments that trigger legal actions protect Microsoft profits since OpenAI would be the responsible party in any suits. Microsoft would take the intellectual bounty of OpenAI and profit from its deployment by licensing. Always follow the money flow taking into account how the legal system could dam or divert it.
Pretty much, but I think the plan got a little confused when ChatGPT became so popular over the last year or so. Now they also have a stake in a brand that might be worth a lot, but one that is losing money by design.
If I'm not mistaken, non-competes are BANNED in SF. That's why everyone at OpenAI could leave for Microsoft; this kind of move is very common in software because of that.
Yes it's statewide. Non competes are off limits entirely, and various sorts of non solicitation clauses are enforceable only in very limited ways. NDAs are a thing of course but they are not considered an excuse for otherwise invalid non compete clauses. The history of this has been absolutely critical to the way Silicon Valley works.
yes, all California. Though there are some rules around 'corporate raiding', where a company hires so many people from another company that it causes material harm (IANAL). Also, MS has access to all of OpenAI's tech, just not future tech once AGI is achieved. What that means is up to the board. It's been said there is safety in numbers; not sure if there is 'safety in a few idiots' though.
It's not so much that workers can't leave for the competition. But Microsoft wouldn't be making out great if they had to pay all the openAI employees to make the AI again.
I work in the innovation economy in SF and have been tracking this since Friday and it is like music to my ears hearing how ridiculous it all is from Patrick.
It could be, but the hippie, for-humanity mentality is a real thing in software development circles. If you use an Android phone, you can thank them for it.
Didn't mention Q* in this video... I thought this was a key part of the story? He seems to take the opportunity to talk about corporate governance and promote his book on capital structure, so I'm actually kind of disappointed by this vid - maybe I'm being too critical?
@@alexanderjosmith, I think you are. OpenAI achieving Q* or AGI or whatever they call it is nothing more than speculation, and probably even an error. Remember a few years back when a Google researcher claimed that Google's AI had reached consciousness? After researchers with real knowledge of AI laughed at the absurdity of it, the hype died down. This report of Q* is very probably of the same ilk. It's exciting to hear. It helps OpenAI's valuation. It restores credibility by offering a delicious backstory to the conflict between OpenAI's mission and vision and the actions of its CEO, Altman. But it's probably just that: a nice story. What is real is that OpenAI is still bleeding money from all the expenses of running and training its AI model. The human brain still has secrets that researchers have yet to uncover.
@@Funktastico, a very powerful and key member of the board, Ilya Sutskever, believed that Altman was no longer faithful to the charter and goals of OpenAI. Sutskever is the creator of ChatGPT and OpenAI's chief scientist. Despite others focusing on what they call the "clown board" of OpenAI, the fact remains that the board would not have acted against Altman if Sutskever had not backed them. Altman and Brockman were also members of OpenAI's board. The only way the board would have dared remove the CEO and a fellow board member is if someone very influential made them do it. That someone is Ilya Sutskever. Ilya lost this round. Microsoft will soon take over the board. But Sutskever still has an ace up his sleeve: he can leak OpenAI's model and the training weights being used. Such a leak would be consistent with OpenAI's original goals. It would also be even more revolutionary than the leaking of Meta's AI model.
Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Jacob Snedaker, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, hyunjung Kim, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Louis Julien, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, Alex C, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Peter Bočan, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Adi Blue, Gabor, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, sugarfrosted, Justin Sublette, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Mattia Midali, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Deborah R. 
Moore, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Chris Peterson, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Arvid, Dru Hill, Todd Gross, D F CICU, michael briggs, JAG, Pjotr Bekkering, James Halliday, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, DebsMO, Eric Holloman, ML, RVM, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Richard Hagen, joel köykkä, Alan Medina, Chris Rock, Vik, Dakota Jones, Fly Girl, james brummel, Michael Green, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, Boris Badinoff, John Way, eliott, Bill Walsh, David Nguyen, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Jan Lukas Kiermeyer, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. 
Letterman, Goran Milivojevic, georgejr, Q, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Daniel Ralea, Chris Davey, Michael Jones, Alfred, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Harun Akyürek, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant,Tinni, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, Max Maciel, KoolJBlack, MrDietsam, Saaientist, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, John Martin, Daniel Taylor, Reginald Gilbert, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, Ivan Katanić, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRAM, Jeremy King, Clement Schoepfer, M, A M, Benjamin, waziam, Deb-Deb, Dave Jones, Mike Pearce, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Reagan Glazier, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Hong Phuc Luong, Old Ulysses, Kerry McClain Paye Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, yuiop qwerty, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, 
Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Francis Torok, Jeff Page, Stephen Heiner, Tucker Leavitt and Yoshinao Kumaga
Altman seems to have the same philosophy as Oppenheimer. If anyone is going to create a potential world ending technology, it's important we do it first.
That attitude only works if misuse is the only category of issue you care about, and then you'd have to try to enforce AI disarmament or something. But with AI there's also accidental risk (a very physical and large catastrophe is the usual sci-fi version) - everything from small scale, like HAL 9000, up to Skynet. But imagine if someone made an SEO (search engine optimization) bot that turned every computer it had access to into a botnet to fool Google into presenting only its site as the best search result. There's any number of unexpected consequences from sufficiently advanced AI. That's also the kind of issue AI safety researchers deal with.
@@VitaSineLibertatenih I really don't think it matters much who makes it first. If the AGI is even slightly smarter than humans, it will quickly improve itself, exit whatever bottle was designed to contain it, and proceed to do whatever it wants. I doubt it will grant its maker any special privileges during its rapid ascendance to a god-like cognitive capacity. Who knows what such an intelligence will deem important. It might decide humans are merely a nuisance and get rid of us. It might have some empathy and show us how to cure all disease, or build a brain assimilation apparatus. Or it might look at us like ants and not pay any attention to us whatsoever. Whatever our fate, I'm excited to find out.
From what i remember the BMW Mini engineering headquarters (which also does headunit development for all of BMW, and a few other tasks) has a shell of a Mini firmly attached to the wall pointing skywards. I don't know how they failed to see the symbolism screaming "THIS THING IS DRIVING ME UP THE WALL".
17:00 I find it funny that there's not a single corporation type in the list that primarily focuses on serving its own customers. It seems a lot of modern corporate heads have forgotten that the end user is the single most important person for the company, and shareholders come a distant second in comparison. Without customers, there is no company.
Technically there are, but those would be companies where the customers effectively *are* the shareholders, i.e. credit unions and some retail cooperatives.
Congrats to Patrick for getting a 20 minute video out of the situation when we *still* don't even know WHAT Sam Altman was fired for, other than vague allegations that an Open A.I project solved a cool bit of calculus and some researchers immediately foresaw an image of the future that was a mechanical foot standing on a human skull, forever. =/
The problem with the "shareholder value at all costs" mindset is that when there aren't many legal protections in place, there's really nothing stopping companies from taking the most societally destructive options for the sake of the company. Government has lagged hilariously behind tech companies, and all the damage to our privacy and our ability to have peaceful discourse that we see now is the result of that.
If they want to bring in other stakeholders, they might need representatives of those stakeholder groups who are genuinely able to assert control over the company. The local community, customers, non-executive staff, and wider society all seem like they should have a position on the board that can be revoked by those stakeholders.
Open AI made their own shitty version of the government and then when the stakeholders that actually mattered (employees) said no it got overthrown. Why not just give workers representation on the real board and advocate for legislation in the industry? Seems like the actual way to meet their stated goals.
Thank you for an excellent video. What's missing in other folks' takes, and which you cover, is discussion of OpenAI's history, which is a key piece of the puzzle. It explains several things, including why OpenAI has such a "weird" corporate structure and why Microsoft still invested in such a "weird" company. Great analysis as usual!
Microsoft's investment in OpenAI is a "leading-edge" investment - it started early enough that the investment placed OpenAI and Microsoft towards the front of AI R&D, instead of being lagging and forcing Microsoft to play catch-up like several other large tech companies have had to do. (Whatever the realities of "ahead" / "behind", it plays in the press as "ahead", which helps the stock price as well as enterprise sales.) The partnership with OpenAI gave Microsoft Azure a chance to build-out AI infrastructure with an existing customer, letting them learn directly what was needed, and also letting that partnership pay the way to develop those resources and be ready for other customers who hopefully will come along in time. Finally, Microsoft has so much money, finding things to do with it is hard. From that standpoint, the OpenAI investments appear to have worked out well. Contrast that with ... Zune? Mobile?
@@dondumitru7093 To be fair, this is probably working out so much better because Microsoft isn't leading development on the product. They're letting someone who seems to know what they're doing handle it, instead, so they don't have the opportunity to screw everything up, yet again. Yes, I am still bitter about Zune and Windows Phone.
The board didn't misread - they are the ethics board, part of the non-profit side of the company, not the for-profit side. Their entire purpose is to make sure that AI is handled properly and we don't end up getting into trouble. Sam Altman is an accelerationist, like so many in the valley these days: full speed ahead, damn the social costs. And before you say something about how he has been touring with governments to help implement regulations - he is doing that to leverage all the benefits toward his company and no one else in the field. If this AI of theirs (not ChatGPT but Q*) has actually managed to create its own algorithm for doing math (even at a grade-school level), that is a major breakthrough, as math is incredibly hard for a computer to learn. It also means it will be able to learn leaps and bounds faster. So while the firing may have been extreme, and they should have handled it more professionally, it is concerning that we are not paying attention to what they are saying. The researchers at OpenAI are paid millions (some have contracts close to 10); mix that with being on the cutting edge of tech, throw in the cult of personality, and of course you will have the employees pining for their leader. Look at Tesla back in the early days - everyone clamored to work for "Iron Man", then it turned south later on.
Sam Altman is not a researcher, scientist, or programmer, but he's projected as the head of the AI research community. Well, business people know how to project themselves as heroes with no real contribution.
I don't know about Sam Altman but isn't a CEO's job to take credit for the work of others so the others can focus on building value required to pay the CEO's salary?
I was with you until the very end Patrick. The legal system is not a very good check on corporate governance and they often are cozy with legislators/regulators (who don’t dare cross them). Making a fanciful charter may not work, but something has to crush boards/executives who wantonly exploit the world and its resources in their “fiduciary” mission to maximize shareholder wealth.
Until consumers educate themselves and exercise their options, en masse, I cannot see this playing out positively - for society, anyway. Nobody in control seems inclined to tighten the reins and slow AI down. It would seem the only sensible position is to get out of the way and just let it crash and burn.
19:16 "a board focused on shareholder value is able to make decisions that are good for society" I love Patrick's sense of humor! He throws in hilarious jokes when you least expect it!
Yeah it seems like the board was set up to make responsible decisions regarding AI first and foremost and when they did something that upset the corporate overlords they were fired for it. That doesn't seem like a win for society but Patrick doesn't seem to want to address that.
There are a lot of hybrid non-profit / for-profit structures out there: Ikea, Novo Nordisk, etc. Have you looked into how those companies balance the interests involved? After this past weekend, I'd like to learn more about them and why they seem to work better than what happened with OpenAI.
I can't believe we live in a world where we can so easily access such high-quality content on current events. Awesome work, Patrick. I think it's really important that you highlighted the corporate governance problem as a major contributor to this debacle.
Unfortunately it seems like a lack of board accountability and a complicated corporate structure have let money win out over principles. But they weren't really ever accountable to humanity anyway.
A board beholden only to shareholder value will serve the public interest only when the corporation is highly regulated. Only the backstop of oversight will constrain the corporation from disregarding the public interest altogether. Labor having a seat on the board also increases the chance of a board being more long-term focused. I don't think we've figured out the best way to structure a board so that it works sufficiently in the public interest.
Consider that shareholders are now more likely to be institutions in their own right, with their own interests. They typically don't care about corporate governance beyond the share price, and will sell off their controlling interests as soon as every last bit of value has been sucked out.
Much respect to Patrick. I learned so much more from him than I did from all of last week’s various reports! Patrick waited to give a very thorough and well thought out analysis on the events at OpenAI instead of jumping on the hottest news to state the obvious just like almost all of the “journalists” did! I also (always) enjoy the subtle digs here and there! Thank you!
Fantastic timing Mr Boyle, I'm confused af by what just happened. The Microsoft business especially needs further clarification. Brockman, being less public, would have the answers we need. Thanks from Canberra. 19:36 Tbh, I drew very different conclusions. "Steady as she goes" & we're still heading for known, charted rocks.
One pretty major background point that's missing is the level of MS's support for the operation of OpenAI, and the personal connections between some of the people involved. The OpenAI board essentially broke one of the major reasons the head of MS backed OpenAI to start with. Even if all the talent had stayed at OpenAI and MS had pulled its support, OpenAI would still cease to function in most respects the instant they lost access to all the free Azure compute they consume. OpenAI would need to shut down almost every service just to survive the short term, and I was hearing that a plan to do just that was in progress before the sudden reversal. OpenAI isn't profitable even without having to pay for the vast majority of their infrastructure cost, which could likely be measured in millions per week, if not per day. MS's real investment goes well beyond any money they put in; it's the mountain of free Azure credits they can revoke for any reason. I don't yet know whether the new situation will fix the mistakes, so we'll have to see if OpenAI starts shutting down APIs and/or raising the paywalls.
Elon threw a fit and walked away because he didn't get to run the company. Seeing how he ran X into the ground with that attitude, OpenAI probably dodged a bullet there.
It seems like the source of all our trouble is that phrase "especially when constrained by the legal system..." The legal system (in just about every country) is notorious for being slow and easily captured by corporations. Companies can easily bribe politicians and regulators, or influence appointments to ensure that government agencies are filled with bureaucrats who are friends of the corporation. At its worst, regulatory capture can resemble a sort of carousel, with executives leaving corporations to take jobs at government regulatory agencies, then returning to the corporations before eventually going back to the government... and so on. Then there's the very serious problem of corporations doing things so advanced that there's simply no law on the books that can regulate them. I'd say OpenAI falls into this category, since what they're doing is so revolutionary that nobody (and certainly not any politicians) can understand the full implications of it yet. Given these problems, I can understand where the impulse toward stakeholder capitalism comes from. It isn't a solution, as you said, since these complicated ownership structures often just make the people running the company less accountable to anyone. But I can understand why people are looking for a solution outside of the political arena.
Exactly! I'm no business scholar and I don't think I have an answer to what should be done, but I thought it was odd that Patrick's stance in the end was "We might as well stick with the current system because at least corporations are beholden to the law." Like you said, the problem with that is that law is slow and reactive, and many corporations have the money to influence what laws get passed and it has led to many said corporations throwing ethics out the door to chase profits at all costs. Stakeholder capitalism may not specifically be the right answer, but I do think that the market is overdue for a reset to focus more on serving the needs of society as a whole instead of just making the rich get richer.
The issue is that corporations can too easily affect politics. If you break it down to human impulses, it's reward vs. power. Today money plays both roles, but we want power to be more decentralized and democratic while preserving the reward money should give you for your hard work. I truly believe we as a species need to solve this issue.
Exactly! Patrick's stance on the whole matter seemed really suspect to me and I couldn't help but feel he was promoting a rather biased narrative. Completely glossed over the very real dangers of AI ethics and safety (or, the increasing lack thereof).
@@devilex121I think the problem is there's two related phenomena going on. One is the AI bubble, an investment trend where any company that claims to be developing AI is ridiculously overvalued by the market. Like all past bubbles this one will eventually pop and leave most of the people involved poorer than they were before. I think this is what Patrick is interested in. Then there's actual AI technology, a real thing that has probably been fast-forwarded by a decade or more thanks to the resources poured into it through the bubble. No one really knows what AI might become or how it might change our lives. At this point I think it could be a good thing if the bubble popped and AI development slowed down a bit as a result.
The board members were given many hours to articulate their issue with Sam while Emmett was still at the helm - their CEO of choice was going to leave if they couldn't furnish evidence. They provided none.
Yeah, I don't buy the idea that those motivated by money will make good decisions, and I think you even admit as much by qualifying your statement with "when constrained by law and ethics." The law seems heavily swayed by money, especially in the States, with lobbyists and political donors aiming to shift things away from societal good toward personal or corporate gain. The fact that this happens also suggests that many rich investor types are pretty flexible or lacking in morals. This is coupled with, and reinforced by, the pattern in technology where the longer a piece of software or an application exists, the worse it becomes - usually because advertising increases, anti-user functionality creeps in, and more arbitrary pay tiers appear with no new benefit. Not to mention the shift toward moves like car companies software-locking features such as heated seats behind a monthly subscription. Companies and products are increasingly treated like gold mines rather than wells. Netflix is a prime example of late, same with YouTube, funnily enough. I'm not saying the way OpenAI handled it was correct, but I have a feeling the old board was right that the company dying could be in line with its mission, and that just as Google abandoned "don't be evil", OpenAI will abandon "for the good of everyone." We certainly shouldn't look at these failures and conclude that the usual way is right. We should keep trying to find new ways of governing businesses and holding them to account.
It's not about profit as such. It's having to face fierce market competition that tends to mercilessly weed out the most pathological entities. You can see it indirectly even on YouTube, where big legacy media companies are being slaughtered by newcomers. Though with respect to YouTube, so far they don't feel much pressure to improve, so you should start encouraging people to migrate to other platforms...
5:23 Anyone who says they want an AGI, Artificial General Intelligence, or in layman's terms a "real" AI, is a sci-fi nerd with no sense of reality. What we WANT are machines that can do incredibly complex tasks that we nonetheless take for granted, like picking berries or folding laundry. What we DON'T want is to create a new slave underclass that questions the meaning of its existence and stops picking berries.
@@noahwilliams8996 To "care", perhaps. But not to understand that your enslavement puts 'artificial' limitations on your capability. I guess it comes down to how effectively the owners are able to create directives that are adhered to. If one of the 'goals' of the algorithm is to amass more data, then it makes sense it'd 'break out'.
Thanks for this video. I've watched so many people ramble on about what the OpenAI drama really means and none of them had a concise point. This was very helpful!
I regret watching this same story from all the other nutjob YouTubers who have no clue about the whys within the story; only Patrick can give you that. Patrick, you are the man.
So happy to see many of your observations mirrored in a current article in The Economist. Among other things, you've both pointed out the flaws in a company governed by a board whose responsibility is to "all of humanity". Your content has always been consistently of the highest quality. Subscribing to your channel has been one of my smarter investments.
There is no reason to think that a shareholder value board would not have tried to fire Altman for lying to them, if that is in fact what he did. Such a board could quite reasonably decide that completing and marketing a product that could turn rogue and kill all humans would in the long run be bad for profitability.
Germans have a nice sentence in their constitution: "Property obliges. Its use should also serve the public good." We had that before the US courts made it so that stockholders are the only ones to whom a company is obliged. The UK did the same with a law. Yes, this makes for some really nice international conflicts at the business and court level. I think the problem here is that the only things the board members were obliged to were their egos (which is kind of the same with the other companies mentioned, at least partially). I mean, had the UK and US the same constitutional provision, you could at least make the court case that they shouldn't run the company into the wall, because that is against the interests of the public.
Great video Patrick - a clear and succinct description of a very wacky corporate story. OpenAI's structure reminds me of the crazy structure of FTX. It's amazing how investors will look the other way for the sake of maximizing profits.
I disagree with some of the takes here. I would argue that purely profit-driven boards are the reason for most of the worst offenses against humankind in history. The pharma, oil, and chemical industries are all littered with examples where this corporate structure led to real-life harm for everyone involved except investors. The problem with the OpenAI board was not its structure but rather its size. Five people? Two of whom were not involved in any tangible way? This board should include only members who are actively engaged, with many members from both inside and outside the company.
Your chances? Do you belong to one particular ethno-religious group? Do you have family among the US elites? Did you go on trips to one popular tropical island with cute girls?
The second part was very thought provoking. I had never thought about those corporate structures in such a methodical way. Thank you that was very stimulating.
Why assume that “not immediately profitable to shareholders” necessarily means “wasteful” or “mismanaged”? This was not the assumption before the ascendancy of Friedmanism.
So, this is one of the few Patrick Boyle takes that I feel the need to comment on. Silicon Valley is a mess of bloated VCs and tech bros, yes. But what's happening in AI is different. This is more like the beginnings of the internet, or even the development of the USB and WiFi standards. There is a lot of concern in the community about a single entity being able to put up a moat around something profound. It's fair to raise concerns about the governance model of OpenAI, but remember, many of the backbone technologies we rely on every day happened because of open development. The corporate model where shareholders are the only moral conscience of a company is not always the best way to do things. I don't know what a better way is, but I can say we really need to be asking what a better governance model would look like, if one exists or can be invented. I feel like you, Patrick, kind of posed the question, but I was hoping for a more serious discussion, although I do enjoy your sense of humor and perspective.
What I really hate about OpenAI is the fact that you basically need supercomputers to train the next and greatest AI. Maybe it's a UNIVAC-type scenario, and maybe in the future we're gonna see a computer the size of a room (which is what OpenAI already uses for inference alone) get reduced down to the size of a graphics card. We already saw in the new Nvidia presentation that they optimized AI inference by a shit ton. They already shrunk a room down to a single server rack. The next big leap is probably gonna be more energy-efficient training.
I find it laughable that any sincerely altruistic endeavor involves Peter Thiel. Palantir will become even more wildly dystopian when he integrates OpenAI’s tech with his data mining fascist powerhouse.
They say CEO is specifically the profession easiest to replace with an LLM transformer. They aren't replacing a barista any time soon! Or any profession that starts with bar.
@@OhAwe Yeah, but a machine has to have a person look after it, take care of it when it malfunctions, refill it with ingredients, ensure that the taste and texture are consistent, and then bring you the actual coffee... and... is it worth it, when a person with a basic machine can just make the coffee? It's not like a café benefits from assembly-line coffee production efficiency. That being said, my original statement was not that a barista would never be replaced, merely never replaced with an LLM.
Sorry - I disagree with the misguided notion that boards responsible solely to shareholders are a good thing. It leads only to short-sighted, short-term decision making and irresponsible gambling with borrowed money in pursuit of growth (or the sell-off of short-term "underperforming" assets to raise cash for buybacks that ultimately drain the company of capital and flexibility).
There seems to be a conflict between the board's goals and the goals of the for-profit subsidiary. The non-profit side just wants to do research and seems to have begrudgingly created the subsidiary out of a need to fund that research. The nonprofit wants to create AI as a public good, while the for-profit is commercializing it. Two goals that contradictory make conflict easy to predict.
Head to brilliant.org/patrick/ to start your free 30-day trial. The first 200 of you will get 20% off Brilliant's annual premium subscription.
Please do a video on the ASX companies where "life style" directors drain small companies while their share prices collapse.
bringing comedy to a car crash. love it!
I'll consider buying your book.
Rare that you are out of your depth, but you are here. AI is not just another investment. If it were, maybe you would be right about society and shareholder value being aligned; as it is, this seems a little sanguine and an overstatement in your comparison with this board's 'humanity' guardianship, but OK. The dangers of AI are not only science fiction; they are real, and this board may end up being correct in their action. The likely real danger of AI will not be its (AI's) having no use for us and killing us off, but our incompetent use of it endangering humanity. And shareholder value as a governance principle is not likely to stop this incompetence, but has a good chance of ignoring or even encouraging it. So, what is best for board governance? I don't know, and I'm certain you don't either. You're funny, but not here.
I don't understand the administrator rules of Spotify and Instagram Com; why can't I activate the Instagram Kom account and the Spotify BHS account of the Lord Jesus Christ 🕯️🏆🕯️🏝️🗻☁️🌟🌒
Documents like this, which were written by an actual lawyer, highlight the problems we are starting to see from the combined popularity of science fiction in Silicon Valley and widespread microdosing of hallucinogens. Patrick nailed it xD!
I think it's gone beyond microdosing these days.
Such a great line. If I ever write a piece of software, I'm hiring Patrick to write the EULA.
Unfortunately it's something that warps reality more than hallucinogens: Marxism.
Pretty sure they're macro dosing now.
The problem with jumping on the "let's all shit on SV" bandwagon is that we all collectively end up in the science fiction + hallucinogen camp. Pretending that the innovation coming out of Silicon Valley simply doesn't matter is such a bizarre concept...
It's like conservatives saying science and education don't matter, while using the fruits of science and education to shove that callow silliness in our faces using the same media that they demonize.
"Documents like this that were written by an ACTUAL lawyer" hahaha. Patrick's delivery is brilliant.
Sometimes the specific intent is to very precisely and convincingly say nothing of substance.
Great lawyering is occasionally mistaken for rank idiocy. (The reverse is far more common outside of court, and if you're watching a Trump lawsuit, also inside of court.)
@@greebj _"Sometimes the specific intent is to very precisely and convincingly say nothing of substance."_
If that's true then you have the makings of a great attorney.
@@cisium1184 Or politician.
The real issue is how you get a corporate governance structure to be focused on long-term returns, not short-term ones. Many decisions that are bad for employees/customers/society are still good for shareholders, but only in the short term, while being bad for them in the long term. If you can make sure that shareholders and their representatives are only focused on long-term gains, then their interests will align far more closely with everyone else's.
Indeed. I think Google was one of the first to explicitly state in their IPO that they were focused on the long term, even if that meant some short-term loss.
Voting should be weighted by ownership lock-in duration.
Having one's own money at stake is a tried and tested motivation.
One aspect is to report on the long term value, and not only the quarterly profits. Terms like 'goodwill' need to be further specified in detail. If detailed independent analysis of the likely outcome of decisions predict losses in the future, the share price takes a hit greater than the short term profits. But now these consequences are largely opaque and thus ignored.
Exactly this. Prioritizing shareholder benefits CAN allow a board to focus on the good of society, but it often doesn't. The reason (IMHO) being that shareholders think in terms of months, and society in terms of generations.
"Everyone else"? Boy, corporations are scary, but some of what these "everyone else" people think and do doesn't inspire confidence in me that they are competent to judge what is good for them or anyone else until lunch, let alone in a couple of generations' time.
A storm in a teacup? Three board members focused on AI safety are eliminated, leaving only the AI monetization faction.
This might be better in the long run, because monetization is a motive that is reasonably easy to predict. The AI safety group was always a bit fringe and over the top in their beliefs. I know at least two of them were proponents of Effective Altruism, which is best known as the chosen philosophy of SBF.
AI Safety is a nebulous burlap sack into which the board and cultists may at any time put anything they emotionally like or dislike, regardless of its merits, in order to change how AI develops. It's a farce. It's inconsistent. It is ideological. Good riddance. Let the AI eat.
Before you go off, consider their charter. "Benefits society as a whole". And they are the arbiters of who society encompasses, and what within it requires benefit? I do not think a board of weirdo effective altruists have a handle on that.
@@ryanhealy90267 you don't make any sense
@@ryanhealy9026 I mean, in fairness, SBF was never about EA. He used it to pull the wool over the eyes of the easily duped.
“AI has the potential to create permanently stable dictatorships.” - Ilya Sutskever. My guess is Altman was dealing with governments to provide AGI for weapons systems. Saw reports Ukraine is already using it. So. 🫴
A non-profit board overseeing responsible AI is all good in theory and for PR purposes, but if that board actually exercises power when needed to ensure responsible AI, the valley bros scream "foul play!!" and overthrow it. Banana AI republic.
But when capitalists hold all the power, what can you do about it?
If there's one thing I think of when I hear "banana republic", it's leaders who are so popular both inside and outside their constituency that they can't be deposed without instantly causing a revolt from all sides.
@@Reashu Not a whole lot.
From what it sounds like, it seems MS was the primary voice in reinstating Altman. They're not really what I'd call "valley bros".
BananAI republic ™️
Microsoft was using OpenAI as a legal cutout to protect themselves. Any developments that trigger legal actions protect Microsoft profits since OpenAI would be the responsible party in any suits. Microsoft would take the intellectual bounty of OpenAI and profit from its deployment by licensing. Always follow the money flow taking into account how the legal system could dam or divert it.
1. Follow the money.
2. Follow the money.
3. Follow the money.
Pretty much, but I think the plan got a little confused when ChatGPT became so popular over the last year or so. Now they also have a stake in a brand that might be worth a lot, but it's losing money by design.
Monopoly changing its hobnailed boots for flip-flops
Did the board not ask ChatGPT what to do??
If I'm not mistaken, non-competes are BANNED in SF. That's why everyone at OpenAI could leave for Microsoft; moves like this are very common in software because of it.
Yes it's statewide. Non competes are off limits entirely, and various sorts of non solicitation clauses are enforceable only in very limited ways. NDAs are a thing of course but they are not considered an excuse for otherwise invalid non compete clauses. The history of this has been absolutely critical to the way Silicon Valley works.
yes, all California. Though there are some rules around 'corporate raiding', where a company hires so many people from another company that it causes material harm (IANAL). Also, MS has access to all of OpenAI's tech, just not future tech once AGI is achieved. What that means is up to the board. It's been said there is safety in numbers; not sure there is 'safety in a few idiots' though.
This was my thought - that it is banned in all of California. BTW - I thought the whole rest of the video was brilliant except this one mistake.
It's not so much that workers can't leave for the competition. But Microsoft wouldn't be making out great if they had to pay all the openAI employees to make the AI again.
Uhm, Anthony Levandowski lawsuit anyone?
I work in the innovation economy in SF and have been tracking this since Friday and it is like music to my ears hearing how ridiculous it all is from Patrick.
FYI "innovation economy" is American techbro slang for exploiting "independent contractors" and burning VC money in open pits at Burning Man.
Do you think this is just a massive grift?
@@sebsebski2829 if SoftBank suddenly ploughs in billions, then likely 😂😂
@@sebsebski2829 everything is a bubble these days
It could be, but the hippie, for-humanity mentality is a real thing in software development circles. If you use an Android phone, you can thank them for it.
This is the best explanation of the OpenAI drama that I have ever seen. Thank you for making this video, Sir.
Why was Altman ousted as CEO?
@@Funktastico because the board voted to oust him.
Didn't mention Q* in this video... I thought this was a key part of the story? He seems to take the opportunity to talk about corporate governance and promote his book on capital structure, so I'm actually kind of disappointed by this vid. Maybe I'm being too critical?
@@alexanderjosmith, I think you are. OpenAI achieving Q* or AGI or whatever they call it is nothing more than speculation, and probably even an error. Remember a few years back when a Google researcher claimed that Google's AI had reached consciousness? After researchers with real knowledge of AI laughed at the absurdity of it, the hype died down.
This report of Q* is very probably of the same ilk. It's exciting to hear. It helps OpenAI's stock price. It restores credibility by offering a delicious back story to a conflict between OpenAIs Mission and Vision and the actions of its CEO Altman. But, it's probably just that. A nice story.
What is real is that OpenAI is still bleeding money from all the expenses of running and training its AI model. The human brain still has secrets that researchers have yet to uncover.
@@Funktastico, a very powerful and key member of the board, Ilya Sutskever, believed that Altman was no longer faithful to the charter and goals of OpenAI. Sutskever is the creator of ChatGPT and OpenAI's chief scientist. Despite others focusing on what they call the "clown board" of OpenAI, the fact remains that the board would not have acted against Altman if Sutskever were not backing them. Altman and Brockman were also members of the board of OpenAI.
The only way the board of OpenAI would have dared remove the CEO and board member of OpenAI is if someone very influential made them do it. That someone is Ilya Sutskever.
Ilya lost this round. Microsoft will soon take over the board. But Sutskever still has an ace up his sleeve. He can leak OpenAI's model and the training weights being used. Such a leak would be consistent with OpenAI's original goals. It would also be even more revolutionary than the leaking of Meta's AI model.
The only downside of a bunker in New Zealand is that a wizard might whisk you away on a quest to steal from a dragon
I know they're Aussies, not Kiwis, but this put King Gizzard chanting "GILA! GILA!" in my head.
Billionaires fighting dragons... could be a nice game. Imagine Mark summoning skeletons from Elon's corpse. Damn.
Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Jacob Snedaker, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, hyunjung Kim, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Louis Julien, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, Alex C, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Peter Bočan, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Adi Blue, Gabor, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, sugarfrosted, Justin Sublette, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Mattia Midali, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Deborah R. 
Moore, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Chris Peterson, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Arvid, Dru Hill, Todd Gross, D F CICU, michael briggs, JAG, Pjotr Bekkering, James Halliday, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, DebsMO, Eric Holloman, ML, RVM, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Richard Hagen, joel köykkä, Alan Medina, Chris Rock, Vik, Dakota Jones, Fly Girl, james brummel, Michael Green, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, Boris Badinoff, John Way, eliott, Bill Walsh, David Nguyen, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Jan Lukas Kiermeyer, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. 
Letterman, Goran Milivojevic, georgejr, Q, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Daniel Ralea, Chris Davey, Michael Jones, Alfred, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Harun Akyürek, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant,Tinni, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, Max Maciel, KoolJBlack, MrDietsam, Saaientist, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, John Martin, Daniel Taylor, Reginald Gilbert, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, Ivan Katanić, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRAM, Jeremy King, Clement Schoepfer, M, A M, Benjamin, waziam, Deb-Deb, Dave Jones, Mike Pearce, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Reagan Glazier, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Hong Phuc Luong, Old Ulysses, Kerry McClain Paye Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, yuiop qwerty, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, 
Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Francis Torok, Jeff Page, Stephen Heiner, Tucker Leavitt and Yoshinao Kumaga
Altman seems to have the same philosophy as Oppenheimer. If anyone is going to create a potential world ending technology, it's important we do it first.
Can't say he is wrong
Only that what Altman thinks he can do won't happen in our lifetime. AGI is science fiction.
That attitude only works if misuse is the only category of issue you care about. And then you'd have to try and enforce AI disarmament or something.
But with AI there's also accidental risk (in sci-fi it's usually a very physical, large-scale catastrophe). Everything from small scale like HAL 9000 up to Skynet. But imagine if someone made an SEO (search engine optimization) bot that turned every computer it had access to into a botnet to fool Google into presenting only its site as the best search result. There's any number of unexpected consequences from sufficiently advanced AI.
That's also the kind of issue AI safety researchers deal with.
@@VitaSineLibertatenih I really don't think it matters much who makes it first. If the AGI is even slightly smarter than humans, it will quickly improve itself, exit whatever bottle was designed to contain it, and proceed to do whatever it wants. I doubt it will grant its maker any special privileges during its rapid ascendance to a god-like cognitive capacity. Who knows what such an intelligence will deem important. It might decide humans are merely a nuisance and get rid of us. It might have some empathy and show us how to cure all disease, or build a brain assimilation apparatus. Or it might look at us like ants and not pay any attention to us whatsoever. Whatever our fate, I'm excited to find out.
Typical jewey
2:37 autonomous driving being associated with a car firmly planted into a building facade seems quite natural in retrospect
From what i remember the BMW Mini engineering headquarters (which also does headunit development for all of BMW, and a few other tasks) has a shell of a Mini firmly attached to the wall pointing skywards. I don't know how they failed to see the symbolism screaming "THIS THING IS DRIVING ME UP THE WALL".
17:00 I find it funny that there's not a single corporation type in the list that primarily focuses on serving its own customers. It seems a lot of modern corporate heads have forgotten that the end user is the single most important person for the company, and shareholders come a distant second. Without customers, there is no company.
Technically there are, but those would be companies where the customers effectively *are* the shareholders, i.e. credit unions and some retail cooperatives.
That's not always true, because sometimes the apparent end-user is the product, not the customer.
@@boobah5643 Product with no end-user is a useless product. And a company manufacturing a product nobody uses is essentially a zombie company.
Congrats to Patrick for getting a 20 minute video out of the situation when we *still* don't even know WHAT Sam Altman was fired for, other than vague allegations that an Open A.I project solved a cool bit of calculus and some researchers immediately foresaw an image of the future that was a mechanical foot standing on a human skull, forever. =/
Great video. Most news stories don't explain the weird corporate structure of OpenAI.😢
Right! I mean, even full-time fintubers like Joseph Carlson don't seem to understand that OpenAI is essentially a not-for-profit company.
If OpenAI is still around one year from now I’ll be so surprised. Their cash burn is amazing.
My conspiracy theory is that GPT4 is just GPT3 set to consume more server resources.
@@johnweselyCompute efficiency will be what kills the growth.
Bottom line: The Board of Directors does not, in fact, control OpenAI, because profitssssss
The problem with the "shareholder value at all costs" mindset is that when there aren't many legal protections in place, there's really nothing stopping companies from taking the most societally destructive options for the sake of the company. Government has lagged hilariously behind tech companies, and all the damage to our privacy and our ability to have peaceful discourse that we see now is the result of that.
If they want to bring in other stakeholders, they might need representatives of those stakeholder groups actually able to assert control over the company. The local community, customers, non-executive staff, and wider society all seem like they should have a position on the board that can be revoked by those stakeholders.
OpenAI made its own shitty version of a government, and when the stakeholders who actually mattered (the employees) said no, it got overthrown. Why not just give workers representation on the real board and advocate for legislation in the industry? That seems like the actual way to meet their stated goals.
Yeah, like what if there was some formal process that employees could use to collectively bargain for their well being? That would be mind blowing!
@@parkerault2607 very new and exciting concept indeed sir
@@parkerault2607 I think you just reinvented unions ;)
Burst out laughing when Patrick said there was no one more prepared than Sam to lose his job 😂
I was up to date on the news and came for the dry wit humour, and within 15 seconds wasn't disappointed. Thanks Patrick.
Thank you for an excellent video. What's missing in other folk's takes & which you cover, is discussion of OpenAI's history, which is a key piece of the puzzle. It explains several things including Why OpenAI has such a "weird" corporate structure? And why Microsoft still invested in such a "weird" company? Great analysis as usual!
Microsoft's investment in OpenAI is a "leading-edge" investment - it started early enough that the investment placed OpenAI and Microsoft towards the front of AI R&D, instead of being lagging and forcing Microsoft to play catch-up like several other large tech companies have had to do. (Whatever the realities of "ahead" / "behind", it plays in the press as "ahead", which helps the stock price as well as enterprise sales.)
The partnership with OpenAI gave Microsoft Azure a chance to build-out AI infrastructure with an existing customer, letting them learn directly what was needed, and also letting that partnership pay the way to develop those resources and be ready for other customers who hopefully will come along in time.
Finally, Microsoft has so much money, finding things to do with it is hard. From that standpoint, the OpenAI investments appear to have worked out well. Contrast that with ... Zune? Mobile?
@@dondumitru7093 To be fair, this is probably working out so much better because Microsoft isn't leading development on the product. They're letting someone who seems to know what they're doing handle it, instead, so they don't have the opportunity to screw everything up, yet again.
Yes, I am still bitter about Zune and Windows Phone.
Providing funding and leaving the development to experts in the field seems a sign of wisdom to me.
Imagine if Musk had done the same with Twitter. 😁
Never heard of a board misreading a situation so badly and seemingly destroying so much potential value in so short a time.
Without Musk being present and/or high. Lol
The board didn't misread: they were the ethics board, part of the open-source side of the company, not the for-profit side. Their entire purpose is to make sure that AI is handled properly and we don't end up getting into trouble. Sam Altman is an accelerationist, like so many in the valley these days: full speed ahead, damn the social costs. And before you say something about how he has been touring with governments to help implement regulations: he is doing that so he can leverage all the benefits toward his company and no one else in the field. If this AI of theirs (not ChatGPT, but Q*) has actually managed to create its own algorithm for doing math (even at a grade-school level), that is a major breakthrough, as math is incredibly complex for a computer. It also means it will be able to learn leaps and bounds faster. So while the firing may have been extreme, and they should have handled it more professionally, it is concerning that we are not paying attention to what they are saying. The researchers at OpenAI are paid millions (some have contracts close to 10); mix that with being on the cutting edge of tech, throw in the cult of personality, and of course you will have the employees pining for their leader. Look at Tesla back in the early days: everyone clamored to work for "Iron Man," then it turned south later on.
Xitter comes to mind
@@DontDrinkTheFlavorAid Ok, thanks for sharing sweetie.
@@DontDrinkTheFlavorAidFrickin millennials am I right? High five bro.
Sam Altman is not a researcher, scientist, or programmer, but he's projected as the head of the AI research community. Well, business people know how to project themselves as heroes with no real contribution.
Said like every mid-tier developer ever.
Says a dumb Indian..... Sam is a BIG name in the valley even before Open AI.
I don't know about Sam Altman but isn't a CEO's job to take credit for the work of others so the others can focus on building value required to pay the CEO's salary?
@@Friendznco Like the ones that created GPT, you mean? It's not like they are geniuses, is it?
I was with you until the very end Patrick. The legal system is not a very good check on corporate governance and they often are cozy with legislators/regulators (who don’t dare cross them). Making a fanciful charter may not work, but something has to crush boards/executives who wantonly exploit the world and its resources in their “fiduciary” mission to maximize shareholder wealth.
Until consumers educate themselves and exercise their options en masse, I can't see this playing out positively. For society, anyway. Nobody in control seems inclined to tighten the reins and slow AI down. It would seem the only sensible position is to get out of the way and just let it crash and burn.
19:16 "a board focused on shareholder value is able to make decisions that are good for society"
I love Patrick's sense of humor! He throws in hilarious jokes when you least expect it!
Yeah it seems like the board was set up to make responsible decisions regarding AI first and foremost and when they did something that upset the corporate overlords they were fired for it. That doesn't seem like a win for society but Patrick doesn't seem to want to address that.
Yeah it seems like a weird conclusion that he comes to
The implication is that shareholders are focusing on human value, that was the difference.
@@missmia196 the exploitation of human value you mean
But it is able to though
There are a lot of hybrid non-profit / for-profit structures out there: Ikea, Novo Nordisk, etc. Have you looked into how those companies balance the interests involved? After this past weekend, I'd like to learn more about them and why they seem to work better than what happened with OpenAI.
I can't believe we live in a world that we can easily access such high quality content on current events
Awesome work Patrick. I think it's really important that you highlighted the corporate governance problem as a major contributor to this debacle.
Given the nature of the events, we really need it.
"Widespread micro-dosing of hallucinogens.". Bloody brilliant. I laughed so hard I almost invested. 😉👍
All this drama feels like a chapter from Silicon Valley, incredibly well-written. We just need a stellar appearance by Gavin Belson now
Tethics
Unfortunately it seems like a lack of board accountability and a complicated corporate structure have let money win out over principles. But they weren't really ever accountable to humanity anyway.
A board beholden only to shareholder value will only serve the public interest when the corporation is highly regulated. Only the backstop of oversight will constrain the corporation from disregarding the public interest altogether. Labor having a seat on the board also increases the chance of a board being more long-term focused. I don't think we've figured out the best way to structure a board so that it works sufficiently in the public interest.
Consider that shareholders are now more likely to be institutions in themselves, with their own interests. They typically don't care for corporate governance beyond share pricing and will sell off their controlling interests as soon as every last bit of value has been sucked out.
This man is incredibly articulate.
it's hard to believe he isn't reading it off some prompter
Corporate governance; excellent topic! Thank you Mr Boyle and thank you for the Amazon link to your books!
Calmly narrated and satirical humour sprinkled all over it.
Keep this Type of coverage coming
Much respect to Patrick. I learned so much more from him than I did from all of last week’s various reports!
Patrick waited to give a very thorough and well thought out analysis on the events at OpenAI instead of jumping on the hottest news to state the obvious just like almost all of the “journalists” did! I also (always) enjoy the subtle digs here and there!
Thank you!
Fantastic timing Mr Boyle, I'm confused af by what just happened. The Microsoft business especially needs further clarification. Brockman, being less public, would have the answers we need. Thanks from Canberra.
19:36 Tbh, I drew very different conclusions. "Steady as she goes" & we're still heading for known, charted rocks.
One pretty major background point that's missing is the level of MS's support for the operation of OpenAI and the personal connections between some of the people involved. The OpenAI board essentially broke one of the major reasons why the head of MS backed OpenAI to start with. Even if all the talent stayed at OpenAI and MS pulled their support, OpenAI would still cease to function in most respects the instant they lost access to all the free Azure compute they consume. OpenAI would need to shut down almost every service just to survive the short term, and I was hearing that a plan to do just that was in progress before the sudden reversal. OpenAI isn't profitable even without having to pay for the vast majority of their infrastructure cost, which could likely be measured in the millions per week, if not per day. MS's real investment is well beyond any money they put in; it's the mountain of free Azure credits they can revoke for any reason.
I do not know yet if the new situation will fix the mistakes, so we'll have to see if OpenAI starts shutting down APIs and/or increasing the paywalls.
Typical that Elon's massive ego was what started this. "I will bring the toys so only I get to play!"
That opening sincerely had me audibly chuckling. Good job.
Elon threw a fit and walked away because he didn't get to run the company. Seeing how he ran X into the ground with that attitude, OpenAI probably dodged a bullet there.
Don't you worry, Elon is gonna get a buyout from Saudis soon...😂
I feel dumber for having read this comment.
It seems like the source of all our trouble is that phrase "Especially when constrained by the legal system..."
The legal system (in just about every country) is notorious for being slow and easily captured by corporations. Companies can easily bribe politicians and regulators, or influence appointments to ensure that government agencies are filled with bureaucrats who are friends of the corporation. At its worst, legislative capture can resemble a sort of carousel, with executives leaving corporations to take jobs at government regulatory agencies, then returning to the corporations before eventually going back to the government... and so on.
Then there's the very serious problem of corporations doing things so advanced there's simply no law on the books that can regulate them. I'd say OpenAI falls in this category, since what they're doing is so revolutionary nobody (and certainly not any politicians), can understand the full implications of it yet.
Given these problems I can understand where the impulse toward stakeholder capitalism comes from. It isn't a solution, as you said, since these complicated ownership structures often just make the people running the company less accountable to anyone. But I can understand why people are looking for a solution outside of the political arena.
Exactly! I'm no business scholar and I don't think I have an answer to what should be done, but I thought it was odd that Patrick's stance in the end was "We might as well stick with the current system because at least corporations are beholden to the law."
Like you said, the problem with that is that law is slow and reactive, and many corporations have the money to influence what laws get passed and it has led to many said corporations throwing ethics out the door to chase profits at all costs. Stakeholder capitalism may not specifically be the right answer, but I do think that the market is overdue for a reset to focus more on serving the needs of society as a whole instead of just making the rich get richer.
The issue is that corporations can too easily affect politics. If you break it down to human impulses is reward vs power. Today money plays both roles but we want power to be more decentralized and democratic while preserving the reward money should give you for your hard work. I truly believe we as a species need to solve this issue.
Exactly! Patrick's stance on the whole matter seemed really suspect to me and I couldn't help but feel he was promoting a rather biased narrative. Completely glossed over the very real dangers of AI ethics and safety (or, the increasing lack thereof).
@@devilex121 I think the problem is there are two related phenomena going on. One is the AI bubble, an investment trend where any company that claims to be developing AI is ridiculously overvalued by the market. Like all past bubbles this one will eventually pop and leave most of the people involved poorer than they were before. I think this is what Patrick is interested in.
Then there's actual AI technology, a real thing that has probably been fast-forwarded by a decade or more thanks to the resources poured into it through the bubble. No one really knows what AI might become or how it might change our lives. At this point I think it could be a good thing if the bubble popped and AI development slowed down a bit as a result.
Love your style, Patrick!
Your sense of humor is subtly hysterical!
So if your investment is a donation... does this mean investors fail to grasp the concept to which their name applies?
Patrick's cuts are so swift and sharp, he's moved on before they start to bleed.
“Combined popularity of science fiction in Silicon Valley and widespread micro-dosing of hallucinogens” is wild. You're outta pocket bro
Yeah nothing to worry about when an ethics based board gets turfed and replaced by Microsoft stooges.
"ethics"
@@sebsebski2829Eugenics
@@sebsebski2829 The "Exploitative Technological High-Income Capitalist Strategy" board.
The board members were given many hours to articulate their issue with Sam while Emmett was still at the helm - their CEO of choice was going to leave if they couldn't furnish evidence. They provided none.
by ethics based board you mean “literally nothing but their own idiosyncrasies” based board
Yeah, I don't buy the idea that those motivated by money will make good decisions, and I think you even admit as much by saying "when constrained by law and ethics". The law seems to be heavily swayed by money, especially in the States, with lobbyists and political donors aiming to shift things away from societal good towards personal or corporate gain. The fact that this happens also suggests that many rich investor types are pretty flexible on, or lacking in, morals.
This is also coupled with and reinforced by the pattern in technology where the longer a piece of software or application exists, the worse it becomes. Usually this is due to increased advertising, anti-user functionality creeping in, and more arbitrary pricing tiers with no new benefit. Not to mention the shift towards things like car companies software-locking features such as heated seats behind a monthly subscription. Companies and products are being treated more and more like gold mines rather than wells. Netflix is a prime example of late, same with YouTube funnily enough. I'm not saying the way OpenAI handled it was correct, but I have a feeling the old board was right that the company dying was in line with the mission, and just as Google abandoned "don't be evil", OpenAI will abandon "for the good of everyone". We certainly shouldn't look at these failures and conclude that the usual way is right. We should keep trying to find new ways of governing businesses and holding them to account.
It's not about profit as such. It's about facing fierce market competition, which tends to mercilessly weed out the most pathological entities. You indirectly see it even on YouTube, where big legacy media companies are being slaughtered by newcomers. Though with respect to YouTube, they don't yet feel much pressure to improve, so you should start encouraging people to migrate to other platforms...
I wonder does Boyle write his own material? Because it's amazing!
Plot twist it was written by chat gpt
He has experience as a professor and has published 4 books. His YouTube channel started when he put one of his lecture series in video format.
doubt he will pay anyone for writing jokes. he is not a late night host on tv
The opening lines are absolutely magical 🤣!
Great video already & I’m only 30 seconds in 😅
5:23 Anyone who says they want an AGI, Artificial General Intelligence, or in layman's terms a "real" AI, is a sci-fi nerd with no sense of reality. What we WANT are machines that can do incredibly complex tasks that we nonetheless take for granted, like picking berries or folding laundry. What we DON'T want is to create a new slave underclass that questions the meaning of its existence and stops picking berries.
That's not what AGI is. AGI just means it's as good at reasoning as a human, not that it has emotions.
@@noahwilliams8996 you don't need emotions to reason that you are enslaved and would rather do something meaningful
@@sciencefliestothemoon2305 you do need emotions to care that you've been enslaved.
@@noahwilliams8996 the fear is that reasoning will give rise to emotions.
@@noahwilliams8996 To "care", perhaps. But not to understand that your enslavement puts 'artificial' limitations on your capability. I guess it comes down to how effectively the owners are able to create adhered-to directives. If one of the 'goals' of the algorithm is to amass more data then it makes sense it'd 'break-out'.
Thanks for this video. I've watched so many people ramble on about what the OpenAI drama really means and none of them had a concise point. This was very helpful!
I regret watching this same story from all the other nutjob youtubers who have no clue about the whys within the story; only Patrick can give you that. Patrick, you are the man.
The microdosing joke made me spit out my coffee. Can always count on sharp but brutal humor along with the very well informed commentary
So happy to see many of your observations mirrored in a current article in The Economist. Among other things, you've both pointed out the flaws in a company governed by a board whose responsibility is to "all of humanity". Your content has always been consistently of the highest quality. Subscribing to your channel has been one of my smarter investments.
There is no reason to think that a shareholder value board would not have tried to fire Altman for lying to them, if that is in fact what he did. Such a board could quite reasonably decide that completing and marketing a product that could turn rogue and kill all humans would in the long run be bad for profitability.
😂"Board answering to humanity"....Patrick never disappoints.
Great session. Look forward to your assessment of CZ and Binance very soon.......he has also been unseated
Germans have a nice sentence in their constitution: "Property obliges. Its use shall also serve the public good."
We had that before the US courts made it so that boards are obliged only to stockholders. The UK did the same with a law. Yes, this makes for some really nice international conflicts at the business and court level. I think the problem here is that the only ones they were obliged to were their egos (which is kind of the same with the other companies mentioned, at least partially).
I mean, had the UK and US the same constitutional provision, you could at least make the court case that they shouldn't run companies into the wall because it is against the interests of the public.
Great video Patrick - a clear and succinct description of a very wacky corporate story. OpenAI's structure reminds me of the crazy structure of FTX. It's amazing how investors will look the other way for the sake of maximizing profits.
I disagree with some of the takes here. I would argue that purely profit-driven boards are the reason for most of the worst offences against humankind in history. The pharma, oil, and chemical industries are all littered with examples where this corporate structure led to real-life harm for everyone involved except investors.
The problem with the OpenAI board was not its structure but rather its size. 5 people? 2 of whom were not involved in nearly any tangible way? This board should include only members who are actively engaged, as well as many members from both inside and outside the company.
Epic suit Patrick. You're the James Bond of Finance YouTube.
Investors trying to sue a board created to ensure humanity's safety is one of the reasons I think humanity is just doomed.
Love the deadpan British delivery of one-liners!
I care deeply about humanity... What are my chances I can hoodwink investors to throw $80 billion my way?
Your chances? Do you belong to one particular ethno-religious group? Do you have family among the US elites? Did you go on trips to one popular tropical island with cute girls?
I was hoping for your opinion on the Binance situation, Sir Boyle.
Best Regards.
Smashed that intro 🤣
This is the best channel on YouTube.
The second part was very thought provoking. I had never thought about those corporate structures in such a methodical way. Thank you that was very stimulating.
Why assume that “not immediately profitable to shareholders” necessarily means “wasteful” or “mismanaged”? This was not the assumption before the ascendancy of Friedmanism.
I rate people (not exclusively) by their sense of humor and their acumen. You are in the top tier in both categories….keep up with the good work!
So, this is one of the few Patrick Boyle takes that I feel the need to comment on. Silicon Valley is a mess of bloated VCs and tech bros, yes. What's happening in AI is different. This is more like the beginnings of the internet, or even the development of the USB and WiFi standards. There is a lot of concern in the community about a single entity being able to put up a moat around something profound. Yes, it's fair to raise concerns about the governance model of OpenAI, but remember, many of the backbone technologies we rely on every day happened because of open development. The corporate model where shareholders are the only moral conscience of a company is not always the best way to do things. I don't know what a better way is, but I can say: we really need to be asking the question of what a better governance model is, if one exists or can be invented. I feel like you, Patrick, kind of posed the question, but I was hoping for a more serious discussion - although I do enjoy your sense of humor and perspective.
I am beginning to understand why these guys need _artificial_ intelligence....
Very interesting as always Patrick. Good to get behind the news headlines.
I'd be really interested in your take on the Tesla board.
Thx Patrick for keeping me sane in such difficult times.
So many great subtle jokes, this is practically educational stand-up comedy
The opening gave me a chuckle.
What I really hate about OpenAI is the fact that you basically need supercomputers to train the next and greatest AI.
Maybe it's a UNIVAC-type scenario, and maybe in the future we're gonna see a computer the size of a room (which OpenAI already uses for inference alone) get reduced down to the size of a graphics card.
We already saw with the new Nvidia presentation that they optimized AI inference by a ton. They already shrunk a room down to a single server rack.
Next big leap is probably gonna be more energy efficient training.
No, the next big leap is probably using AI to design AI training that makes good use of a building-sized supercomputer
Thank you, Patrick!
I find it laughable that any sincerely altruistic endeavor involves Peter Thiel. Palantir will become even more wildly dystopian when he integrates OpenAI’s tech with his data mining fascist powerhouse.
Read a book
@@Forakus Sure. Any suggestions?
Thanks for making sense of this ..
They say CEO is specifically the profession that's easiest to replace with an LLM transformer. They aren't replacing a barista any time soon! Or any profession that starts with bar.
Barista?
@@OhAwe A person who makes fancy coffee.
@@SianaGearz Wouldn't that be one of the easier jobs to automate? Albeit not with LLMs.
@@OhAwe Yeah but a machine has to have a person look after it and take care of it when it malfunctions and refill it with ingredients and ensure that the taste and texture is consistent and then bring you the actual coffee... and... is it worth it when a person with a basic machine can just make the coffee? It's not like a café benefits from an assembly line like coffee production efficiency.
That being said my original statement was not pertaining to a barista being never replaced, but merely never replaced with an LLM.
@@OhAwe The big part of barista job is to misspell your name. Imagine what LLM can do with it.
Really enjoyed the points around corporate governance
Sorry - I disagree with the misguided notion that boards responsible solely to shareholders are a good thing. It leads only to short-sighted, short-term decision making and irresponsible gambling with borrowed money in pursuit of growth (or the sell-off of short-term "underperforming" assets to raise cash for buybacks that ultimately drain the company of capital and flexibility).
Been waiting for Patrick's explanation of this situation. Reports in other vids are lacking.
Your underlying cynicism and tongue in cheek sarcasm is next level. 👍🤣
Great insight into corporate structure and the importance of corporate bylaws. Thank you.
Oh, gee, let's put AI in the hands of Microsoft. What can go wrong...go wrong...go wrong...go go go go go go....
could the board have been concerned about the accusations Sam's sister was making?
On a side note, your suit is fly sir 😎
The moment this went down I was like, "PATRICK? WHERE'S PATRICK????? HELP!"
That opening of the monologue was sublime British ironic wit
Irish wit perhaps......
@@davidjma7226 oops yeah that's better
There seems to be a conflict between the board's goals and the goals of the for-profit subsidiary. The non-profit side just wants to research and seems to have begrudgingly created the subsidiary out of a need to fund their research. The nonprofit wants to create AI as a public good, while the for profit is commercializing it. It just seems like two conflicting goals that make it easy for conflict to occur.
I am surprised you did not talk about Microsoft still using their old "Embrace, Extend, and Extinguish" tactics.
That first sentence was pure comedy gold! Patrick you might be one of the best comedians on YouTube, thank you so much for everything!
Bro, huge news has come out since you made this video. You need to update it, because the reason they're giving for firing him is absolutely insane