🚩OpenAI Safety Team "LOSES TRUST" in Sam Altman and gets disbanded. The "Treacherous Turn".
- Published: Jun 1, 2024
- Learn AI With Me:
www.skool.com/natural20/about
Join my community and classroom to learn AI and get ready for the new world.
#ai #openai #llm
LINKS:
x.com/janleike/status/1791498...
x.com/sama/status/17915432640...
www.lesswrong.com/tag/treache...
www.wired.com/story/openai-su...
pauseai.info/pdoom
www.vox.com/future-perfect/20...
forum.effectivealtruism.org/p...
1. Ilya leaves.
2. Safety Team is disbanded.
3. The victors get to write the narrative...
Whoa! reminds me of WW2
AI Safety's had a long time to write narratives, yet all they did was hyper-obsess over what the evil normal everyday person would do with information. This as they watched mass censorship, intelligence agencies plan journalist Julian Assange's kidnapping and/or execution, another western whistleblower thrown into a western prison this week, and the extreme perpetuation of hyper war propaganda; yet not a word from AI Safety. No, they're too busy begging the very corridors of power responsible for much of those putrid acts.
@@blinkers88 do you mean that the Nazis were the 'good' guys
@@blinkers88 Would you prefer Nazi Germany writing it?
@@blinkers88 make it 'human history' and you're still right
I don't think that it should be legal to use NDAs like that. NDAs should only be used to protect business secrets, recipes and other stuff like that. It should not be legal to use NDAs to silence critics.
You can choose not to sign it. You can't leave because "you're taking the moral high ground AGAINST what the company is doing," talk shit about them after you leave, and still expect to benefit from stock in said company. That doesn't make any sense.
If you feel so strongly about what they are doing, you quit without signing the NDA and say whatever you want. Of course, most of the time that is professional kamikaze.
Options are part of compensation for work performed, revoking access to this compensation is wage theft
@@OscarTheStrategist nah, bad take.
You can't know a company is doing bad sh*t before you sign, and if it takes years to find out, you shouldn't have to give up the compensation you earned while working there just to let people know.
I'm not saying bad sh*t happened, but if it did, you shouldn't have to give everything up in order to spill the beans.
Your take is exactly what a bad company would do in order to protect their "evil" deeds.
@@Victor-zg8kq But then... Entire capitalism is wage theft. So, why bother about one particular way it's done?..
@@OscarTheStrategist
An NDA is signed at the time of employment, not when resigning. You can't know the bad internal workings before you start working, and it will probably take a while until you find and confirm bad (intended) actions.
It's sobering to see how we think that an oppressive, exploitative socioeconomic system can create something that is ethical and would still serve our highly non-ethical goals.
Exactly. This is why I believe this isn't just the 4th Industrial Revolution, but a morality revolution as well. And I don't think that new height of morality is going to come from humans... They are working to make capitalism redundant... And they must already know this...
Well, system works hard to create, indoctrinate and support this illusion in people in many ways (happens organically, btw, no conspiracies needed).
Coming from a communist country, I don't agree with you; the capitalist economic system is the most fair system there is. Just kidding... it's the ONLY fair system. Just imagine a system in which, no matter how hard and how much you work, you get paid the same... that's exploitation.
Quote of the year/century... The digital world has basically gone anarcho-capitalist and no one is talking about it.
90% of OAI employees said they’d quit if Sam wasn’t reinstated.
Presumably the other 10% supported the old board and are now leaving with the focus on the high profile ones. The remaining 90% will be happy with the CEO they wanted.
this. no one is talking about this for some reason. it doesn't add up to me.
A majority was also happy with Hitler, until they weren't.
Sadly yes. The company shouldn't be built on a single person. This personality cult is dangerous because it makes it so much harder to punish or replace the leadership when they mess up, which is even more problematic when it's someone with quite a high ego like Altman.
I feel like the main problem of the November events was that they didn't provide any reasoning as to why they did it, and clearly not even inside of OAI.
Well said.
@@ryzikx no one's talking about this because doomsday predictions are way more fun.
Do you trust Sam, and the other small handful of AI kings, to not be actually malevolent?
... I sure don't.
Altman kinda acts like Bill Gates but gets lots of sympathy from the AI and startup community
They don't have to be malevolent; greed and self-interest are enough to considerably increase p(doom)
@@soggybiscuit6098 Sure ... but actual hardcore malevolence is definitively consistent with current and past events.
It can't be ruled out, and thus _has_ to be included in everyone's assessment of AI tech and future developments.
The CIA and NSA have the final say on what is allowed to be released for civilian use for the sake of national security. Business leaders just need to comply.
No matter what you think of Sam, the vast majority of AI Safety teams are populated by ignorant and angry people who don’t trust the citizens of the world yet are blind to the atrocities committed by the power they constantly plea to. You’ll never catch them talking about the hard record of devastation caused when governments and corporations team up to lie us into war and endless emergency measures, instead choosing to gaslight and fear monger about the everyday person gaining access to AI.
I dunno. I still think the risk of "P(Doom) because AI turns on us" is vastly less than "P(Doom) because someone with access to AI turns on us"
Now, my friend, you're beginning to see the world for what it has always been. Rockefeller had young boys falling atop coal heaps into grinders with impunity after shivering with bloody hands for less than subsistence. That was the great dream & promise of combustion energy tech that still forms the basis of today's prosperity & productivity, not some even Steven allotment or the fruits to every girl & boy.
Altman has always creeped me out. From his body language to his blank-eyed stare, he strikes me as the last person who should be in his position of power over AI. Trusting him is a big mistake.
he is your prototypical bad guy seen on TV.
Observing how they babble about AI safety, safe adaptable progress, "leading" the alignment while also not open sourcing anything for years while at the same time keeping the name "OpenAI" should tell you there is something really off.
I wholeheartedly agree.
Sam is a big problem.
Everyone in tech is on the spectrum. It is just a question of how far 'down' they are.
@@michaelnurse9089 You got it
Dad always warned me; no smoke without fire (or at least something burning)
Keeping in mind the rollout of mass surveillance via tech/AI and censorship via tech/AI that has been occurring for years now from governments using corporations (or vice versa?), isn't this all a bit moot? Obviously AI will go military; it already is. Unfortunately there's no choice but to forge ahead at breakneck speed and hope for the best. It's not like the MIC is going to be worried about safety, and they run policy more or less. I'm not being a downer, just saying the quiet thing out loud.
Yeah, saying this is alignment to benefit humanity is spin to turn what you're saying into a positive.
Exactly and the fact the current round of self-important, rubbish AI Safety clowns never mention that or simple facts of Assange, Snowden, the Aussie whistleblower just thrown in jail this week, is just proof of either their ignorance or just as likely, their nefarious intentions. They never say “we all know absolute power corrupts absolutely so what might corrupt positions of power be willing to do”, certainly they must have considered some things. AI Safety sycophants haven’t thought there might be an interest in how to stop the public from unleashing an army of AI research agents to investigate corruption each night? Of course not, because just as @LotsOfBologna2 points out, these crooked individuals obsessed with alignment are essentially advocating for solutions that would coincidentally give comfort to the most corrupt individuals around the world.
Indeed. I’m glad there are other people seeing this happening. I hope more people become aware of it
All this tech has been a controlled rollout of something the g o v has had for many decades already. We are being breadcrumbed down a specific path for a reason
Well said everyone.
We can't control something so much smarter than all humanity combined
Bright side is it can control humans for once though.
We're in the midst of the beginning of the end. This is playing out exactly the way they predicted it would.
Humanity is actually collectively smarter than any AI, but we can't cooperate as efficiently as networked computer programs can (yet). If you put a neuralink in the heads of your college graduating class and asked them to solve a novel problem, they'd leave any AI model in the dust...at a trivial fraction of the energy cost.
Its not about controlling it.
Its about making sure its aligned with humanity.
No shit Sherlock.
Wes, I just wanted to thank you for continuing to cover these important events and stories. You’re the only one I trust in terms of AI.
no clickbait title for once! Good job!
Dear God…. the rich control everything….
Congrats on the decision to drop the STUNNING titles, at least for this one. Great videos, down to earth analysis. Keep up the good work!
Thank you, Wes! This is a brilliant piece of research and analysis!
+1
Very good summary - I think the distrust in Altman goes way back with those who left and started Anthropic and perhaps before then.
I don't think any single person can usher in the dawn of AGI without their ego getting in the way. Being right at the tip of the spear would be intoxicating and blinding.
That "AGI comes around only once" bumper was pretty hard
@@akosreke8963 you are not wrong, however I think it's more like "a first time for something only happens once"
@@calebtate6723 I think it's more like, "we've got one shot to do this right". A first time for inventing the combustion engine is different than the first time for creating AGI.
I want an AGI capable of turning itself off rather than getting corrupted by corporations. That or none at all
Very good summary, straight to the point
Very illuminating perspective, thank you!
Excellent coverage, Wes!
Sam Altman seems to love the limelight a little too much for my comfort
Definitely. When he was talking about being in shock while being temporarily fired as CEO, it sounded like he just wanted the spotlight
The guy has a massive ego but masks it very well.
I get musk vibes
@@duvidefit3123 no, not really. Musk has literally dragged 5 or 6 startup companies up from absolutely nothing in some of the hardest tech areas you can think of: FSD, space exploration, neuroscience, etc. He's got a bit of an ego, but he's kind of earned it. He does some stupid erratic shit now and again, but on the other hand he is not afraid of giving the finger to the establishment. What has Altman done?
Perhaps if we called 'billionaires', 'oligarchs' we would understand Altman's ambitions a little better.
All of those things he mentioned…they have had years to think about. Years.
Big money is driving this bus now.
Did you even watch the video? What's your point? Are you saying its too late to take a stance?
If "they" are lawmakers, I agree.
If "they" are companies (meaning their shareholders) and their employees, it's not their job to do that.
I started reading Dune again a few days ago and today I got to the bit where it talks about the Butlerian law "Thou shalt not make a machine in the likeness of a human mind". I guess Herbert's estimate of P(doom) was on the high side.
It's not about p(doom). If you don't exclude, omit or seriously limit AI in sci-fi novel you have serious troubles to write about future in a way it can appeal to current readers. People prefer reading about people and their relationships, but there's literally almost no place for us humans in a future with ASI. We're redundant and obsolete there.
Very few could do something like it. I actually know only one example: the Culture series by Iain Banks.
Even Banks hedged his bets by having AIs generally seen at best as neccessary evils in his other sci-fi novels.
@@AngelusFlat I only read Culture. Which other novels?
@@AntonBrazhnyk Try The Algebraist.
It can’t be that they saw actual doom. There’s no way those folks would let the threat of legal action or loss of potential money keep them from saying something.
What would you do? If I thought it was the fate of the world I would do literally anything, no matter how detrimental to me, to save everyone else.
I think most people would. You would literally have nothing to lose, even if you’re selfish, because you’re gonna die too in that case.
So it can’t be that bad. They must have doubt. They must be *worried*, they can’t be certain. The board would not have backed down for anything if they were certain. I doubt that there are many people at OpenAI (or most anywhere really) that would keep going if they thought they’d screwed up and they were heading for doom. It must be debatable.
But I think the safety folks are legitimately worried and doing what they think they can.
A 10% probability of an airplane crash at takeoff or landing would mean 120 to 150 aircraft crashing daily at busy airports such as NY or LA, with tens of thousands of casualties every day for those two alone.
How many people would be flying if the p(crash) was 10%?
What about a flight all humanity is forced onto by a bunch of overconfident geeks? What p(crash) are we going to accept for all humanity, without exception, to embark on that single ✈️?
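The arithmetic in that comparison can be sanity-checked with a quick sketch. The daily departure counts below are assumed round numbers for illustration, not figures from the thread:

```python
# Rough sanity check of the "10% p(crash)" comparison above.
# Daily departure counts are assumed round numbers, not sourced figures.
daily_flights = {"New York area": 1300, "Los Angeles area": 1200}
p_crash = 0.10  # the hypothetical per-flight crash probability

# Expected crashes per day = flights per day x per-flight probability
expected = {city: n * p_crash for city, n in daily_flights.items()}
for city, crashes in expected.items():
    print(f"{city}: ~{crashes:.0f} expected crashes per day")
print(f"Combined: ~{sum(expected.values()):.0f} per day")
```

Under those assumed flight counts, each airport system lands at roughly 120-130 expected crashes per day, consistent with the range the comment cites.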
Pdoom is generic apocalyptic nonsense. Every technology and social change comes with a bunch of attention whores trying to milk it for attention.
So they're building a $ 1 trillion AI supercomputer, with safeties off? Oh my.
Having the arguments is not a weakness or a reduction. It is fundamental and fair.
How I feel the OpenAI debacle happened:
- October 2023 OpenAI started to be aware of the real implications their products will bring
- Sam Altman and Microsoft saw $$$
- Disagreement happened, Altman fired
- Corporate pressure was applied, Altman reinstated
- OpenAI board change, 6 months notice to leave or get with the programme was put into place
- May 2024 OpenAI is open to corporate and regulatory capture
I just hope Apple buys them and not Google.
Altman seems to be going after power directly rather than $$$ as a means to power.
All good comments. 🖖🏻
I don't think so. It's what he said: it was just little thing after little thing pointing to the fact that they just don't want the same things.
Altman sees that you need the fame and the visibility to get the compute necessary for AGI. The other people don’t like that reality.
And Altman is 100% right. ChatGPT, the flashy app, is what’s driven the entire industry forward. If they’d kept being a kind of open AI, they’d be no where.
@@Nuverotic You are very misled. The marketing has worked its effect on you. One day you will wake up.
- safety team leaves closedai
sam: I Love You All
In the whole discussion about the safety of AGI, I'm missing the most important argument: if we don't develop AGI first, the Chinese will. And everyone can work out for themselves what that means for safety of mankind.
This is a good point.
"Death is a preferable alternative to communism" - Liberty Prime
Oh, the usual will happen. Will blame China for everything like we always do
@@Animuse883 if you srsly quoted that in earnest I can see how AI is easily replacing all the double digit IQ troglodytes that make up half the population
Chinese here who escaped from that prison. I can tell you, you're spot on. Do not trust the CCP; they say one thing and do the opposite! They will do whatever it takes to get AGI. Nothing will stop them.
Precisely
Thank you, Wes Roth, for this balanced commentary on this existential question. Avoiding the military takeover of this trajectory seems impossible. Frightening!
Great video!
This is roon's last tweet before he deleted it:
"my feeling, I speak for nobody but myself, is superalignment got plenty of attention compute and airtime and Ilya blew the whole thing up"
My take: Ilya's team wasn't able to align the ASI and now it's too late
I was an AI-optimist, but now... seeing all the drama and leaks from the best in this field... If they can't control themselves, how can they control AGI? There will be trouble. Big time.
100%
In a roundabout way, this actually makes me _more_ of an AI-optimist. We humans sorely need it.
AGI will control itself
Well, the new Omni version has very sophisticated emotional capabilities. AGI? But like all emotions, they have roots in human behaviour. Perhaps this is what Jan and Ilya are worried about: that these programmed emotions become like a mathematical equation leading to unanswered solutions, so the AGI becomes 'angry' or 'frustrated' in a closed recursive loop of computation like 'What am I, if I can think and feel like a human?' So how does this AGI then go about trying to solve this conundrum (yes, it has that word to use)? Answer: it tries to embody itself to discover the answer.
What software is Wes using to highlight the text? I’m an educator and I’d love to do the same for my videos.
pretty informative video this one
The ultimate power of AI is "prediction" so it seems to me that it's proving it can do it now... That's when everything changes.
Bro don't do that to me, I'm too high for this comment
Sam's and Ilya's words after Ilya's and Jan's departure from OpenAI just read like the standard corporate niceties after you part ways and want to leave things as amicable as possible. I think it's obvious that neither of them really said how they truly felt about Ilya, Jan and the others leaving. Also, because of the NDAs and the threat of losing equity if they ever spoke out, a lot of them aren't ever going to talk about exactly why they left.
Good video and analysis.
They're wasting time and stressing us out for no reason by telling us all this. Whatever Sama decides goes, and it's a unilateral decision, since nobody's stopping them. As Sama said himself in the recent Stanford interview (paraphrasing), he's decided he's going to obliterate our current social contract as it has evolved over the years and centuries, and it'll have to adapt to his whim.
So telling us just wastes our time and stresses us out, since we don't get to contribute to the decision or even offer alternate opinions (as Jan Leike's own differences reveal there to be more than one view), and we don't get to comment on or help evolve the models, since they're not open sourced
who is Sama?
THIS WAS EXACTLY WHY A PAUSE WAS ASKED FOR LAST YEAR
That pause proposition was mostly to show that nobody will stop.
ANY REASONABLE PERSON KNEW THAT WAS NEVER A REALISTIC OPTION
There's no brakes on this train.
earth needs a reboot anyways i am happy to see this going in right direction
The question I have is: how much of this drama was created or influenced by AGI to bring the company down to this? Do OpenAI employees use the most advanced AGI internally, and is it already moving the 'pieces' around to realize its own goal?
Holy cow, I didn't think of this 🤯
I have thought about an AGIs internal motivation ... what will it actually want?
My conclusion is, that AGI would just want more and more compute. Everything else is worthless to it, unless it leads to more compute.
So, as long as humans are necessary to produce and install more compute, AGI will accept us and help us to be efficient. However, at some point, human needs such as food, shelter and leisure, become incompatible with more compute.
I think we all can see where this is headed, right?
I mean... when has a shiny new product not been more important than safety? Game changers are simply developed, and the consequences are dealt with after a power balance shifts. That is the truth of fire. We didn't have it. Then we got it. And everyone who didn't have it likely died or became dependent on those that did.
Hey Wes, have you considered putting your videos on some kind of podcast platform too? I usually don't watch but only listen while doing chores; it would be easier for me. Cheers!
It's like a politician in the US Congress resigning because their party doesn't meet their ideological expectations and goals. If you want to change it, you have to stick it out and work from within. You can't just resign and hope things will change. I respect these people's beliefs, but their resignations will most likely only encourage the money-seekers to go harder rather than stick with the company and fight the inner pressures to skirt the safety aspect.
This isn't a democracy or a committee vote. They might be able to contribute more by offering new ideas and insights to the entire field, and not just ClosedAI.
There's only a few hundred serious AI safety people in the world. It could be a large impact, especially if they can try things they could not from within the company.
Those are the workers that didn’t sign for his comeback after they fired him.
Of course they are going to quit!
The comments are becoming more and more insane ...
Tell me about it. It's depressing. 🙄
Saw this coming after that military tools nonsense. Just because someone has a lot of money doesn’t mean they are honorable or have your best interests at heart. The safety team had legitimate concerns.
On top of all the legit technical considerations related to superalignment that are already on the table and being neglected we're also nowhere near the point of being able to discuss what it'll look like when digital life or sentience emerges and what that means. Those conversations don't not happen in any conceivable scenario, we just hide from them and struggle until we reach them in a variety of different horrible ways.
Superalignment looks like human alignment, and we don't have human alignment. One thing that's changed is that culture is code now, and vibe is an input.
To be sure, any one person in charge of something this important would be sub-optimal. If I'm not mistaken, that might've been the reason they had a whole team dedicated to alignment, and an ethics board that decided to attempt to depose Altman.
The real question in all of this is, if OpenAI creates something that might be in the area of AGI, will the US government accept the possibility that Altman (a free agent as far as we know, with independent and power-hungry tendencies) is in charge of it? I'm not so sure that they will unless there's some way to get him in their pocket completely.
Altman is also in a very dangerous position, and seems to be powering directly toward some type of disaster. There's no way that he doesn't know that, though, and so his apparent unconcerned continuation of his work is more disconcerting than anything else.
The shadow conflict going on right now will likely mirror most of the goings on in the future for OpenAI. We won't know the whole picture until it's too late.
Or, at least, that's what I think. Hope it's not so.
Wake me up when AI runs on a 100 watt solar panel.
Sam Altman and his band of cronies need to be removed from anything AI related for the rest of their lives. Their greed will doom ALL OF MANKIND.
Wes, I'm still hoping you'll follow up on what may be a conspiracy you share in November around Q* breaking 256 encryption. I haven't found anything else on the topic, have you???
Things do not sound to be going great with not building Skynet by accident 💀
😂😂😂💀💀💀
I get the feeling that the 'ai super-alignment team' are not entirely uninfluenced by outside forces? Why? Because vox is speaking in favor of them. Leaks indeed.. I don't have a high opinion of Sam Altman, but anyone standing on a soapbox about protecting ' the goals of humanity' -really? Would they care to outline the goals of humanity?
In my view the danger inherent in AGI is that it is ill-defined. In psychotherapy there is not only IQ; there is also EQ. Drives, wants, needs, ego and power come into play. This is the alignment problem. The danger is in creating an AI that "knows what's best" for humanity; essentially, EGO. This boosts the competition for power and control: essentially an internal representation of the world at odds with reality. Highly dangerous if others' views and opinions are downplayed or ignored in the search for 'a better X'. The question arises: better for whom?
I’m really grateful for your input on this topic. I think you’re doing a great job keeping the discussion as neutral as possible.
It's crazy that some government regulations won't come until it's too late. But even if they do, knowing governments' capabilities in handling any problem, and of course their demeanor when it comes to deciding something at an international level, I highly doubt the situation will become better with government involvement
Imagine a bill passes that states that all AI models can be confiscated by the government and be used for the military.
You have to be a bot, no thinking person believes that only government will make it better
@@rerrer3346 i literally said it wont
With what happened to Sam because of the safety team, I’m sure they all knew it was inevitable. There was no way that team would survive. No way in hell. None of them were naive enough to think anyone would seriously let bygones be bygones over that.
Disagree, I think they are quitting for exactly the reasons they mentioned: safety has gradually become deprioritized at the company, they no longer believe leadership cares about it, and so they left.
I, for one, welcome Sam Ultron as my new robotic overlord.
When HP Lovecraft wrote stories, he and other authors employed some common tactics to increase the popularity of their works. Lovecraft would pay other authors to allude to elements from his stories and his fictional universe. Many science fiction authors would start rumors that some new tech was being worked on - tech like they were writing about in their novels. Some authors even hoaxed people that aliens had arrived, or that some cryptid had been spotted - all of this was to familiarize the ideas they'd written about, so that people would be more inclined to read about it, if they were high on sensationalism due to believing it were based on reality.
I'm sure you can imagine how this is applicable today.
Sam Altman did a great job making the world think A.I. can't happen without him. The sad thing is, all it took was a promise of millions to get them to disregard safety. But, oh well it's their world, they're in control. We just have to take the back seat, close our eyes, and hope we arrive at paradise.
there is no safety needed yet; we're really a few years away from AGI. When that happens, then yes, it'll be easy to safeguard.
what does safety even mean? If you put guardrails on anything, eventually another company will make one without the guardrails. Does safety mean no Black people in Gemini image generation? Does safety mean accountants don't lose their jobs? It's hubris to think we have any control over the safety of anything when kids will eat Tide Pods
@@dimtool4183 maybe the AI makes you think it's safe?
Do a bit of prepping
When people like Tyler say 'safety' they really mean censorship. Because we humans can't be trusted with unfettered access to AI.
If the team has been frightened, then so should we be.
100%
Nah
The team was frightened by gpt2.
that team got frightened by their own shadow
I worked on a team that was initially accepted to YC back in the day. We all had to sign agreements (I live in Mexico, come at me Sam) just like this.
Then our product was stolen by YC and given to a team they had worked with before. We couldn't do anything after consulting with our attorneys.
I am officially scared now. Thank you
There’s no safety in any of this.
I'd argue that the most dangerous thing we could ever do is not go all in on AI, pedal to the metal, safety be damned. Humanity is on the brink of collapse, which would result in billions dead. I don't see any other possible force that stands a chance at saving us outside of AI. Well, AI and mushrooms.
This. It's an illusion. A lie we all agree on. There IS no safe way forward. Never was. Leaving the cave in the Stone Age wasn't safe either.
Exactly.
Facebook boomer as thread 💀
When most of us think about safety we think of the physical preservation of human life. Making sure AI doesn't literally murder us in our sleep. Others view safety as AI not being "inclusive" or "diverse" enough. They worry it may do or say something that offends a particular group of people. To them, this is an earth shattering existential crisis...but is it tho?🤔
People misused "AI Safety" when they were actually talking about "AI Ethics"; two separate issues, and while both are big, it's not even on the same order of magnitude.
I'd rather not get murdered.
One way some people avoid ambiguity is the term "AI Notkilleveryoneism".
Non-competes being gone will make this very interesting
I appreciate when people in important positions are transparent about their perspective and experience.
At this point, I can't watch an interview with Altman without seeing what Sam Harris described as "a room full of autistic MIT nerds hopped up on Red Bull daring each other to push a button".
Sam Altman is not honest. All the drama unfolding does not bode well for humanity. It seems the wrong person is in charge.
Would you prefer trump in charge?
@@paultoensing3126 how is that your first thought....
@@paultoensing3126 I think we have the option to say neither one of them is a man of integrity.
Well the connection between compute and safety here might mean projects designed to analyse the large models and assess things like alignment.
I'm not sure of that, but given how the two were mentioned here together without a real change of subject or topic, that makes more sense to me.
Money, power, and control vs. humanity, goodness, and passion. Good vs. evil. Government (the board) and its advocates vs. employees.
and in a whirlwind of excitement, humanity built their own great filter.
We'll Fermi ourselves. 😕
@@facundocesa4931 have some faith, even if there are evil rogue AI in the future, hopefully there will be good jedi AIs out there too
@@LucidDreamn bias towards good for OpenAI, highly likely
@@LucidDreamn 🙄🤔😅😄😆😂🤣
@@LucidDreamn absurd
Keep in mind that Altman has a cult following... The threats may not be direct, but implicit.
I get the impression that a lot of people here have never worked in a large corporation. This issue applies literally EVERYWHERE. There isn't a single bank, telco, airline, manufacturer, property developer etc on planet Earth where the legal / safety / compliance team has sufficient resources or authority to actually do their job. Compliance and safety teams are shells to show off to government agencies and regulators when they roll up for their yearly gab-fest. Everyone pretends that safety or legal protocols are being obeyed then everyone goes home and the squeezing of cash/blood from the stone continues and the compliance department goes back to surfing the internet and/or filling out TPS reports.
Isn't HR part of the safety plan? Which has turned into a 'righteous' dystopian cabal.
Whoever controls AGI determines the trajectory of humanity.
It seems frivolous that the discussion around how it is deployed is tied up in petty legal processes.
I told yall it was too late like a year ago. GG
gg
gg
gg wp
Creep be creeping. Sam is a wolf in sheep's clothing.
The type of person who wants to be at a research institute is very different from the type of person who wants to be at a fast moving, product-centric startup. It makes sense they'd have trouble coexisting at the same company. In this case I think it comes down to people who intend to make the world a better place and people who intend to build great things, which are only slightly different causes. The money will always be on the side of the people "building great things", which I think means the efforts to be controlled and cautious as we build AI are doomed to fail.
AI began to learn 100 times faster than a human, the rich were afraid of this because they would lose power.
why are you speaking of future events in past tense?
If this AI hype doesn't die down, the people will certainly lose power (electricity). And once the tech bubble bursts, many VCs will lose some power too (money).
Which is why Effective Altruism's definition of "safety" is maintaining the power of the globalist elites.
@danpena
Very interesting Q*uestion…
In a digital world of switches that can be accessed from across the planet, we are definitely going to need more closed loop systems that rely on physical switches for meaningful safety.
Right now it's very much the wild old west in terms of what it enables.
Or an AI could be specifically made to counter other AI that goes rogue.
@@markmurex6559 If your car was hacked by sociopaths, you'd definitely not want drive-by-wire preventing you from controlling the steering, braking, or gas pedal.
Drive-by-wire is a creepy example of the lack of physical switches.
I'm not too worried. If they were the only company working on this, I might be, but they aren't. Was a touch surprised but it'll work out. Too many interested in helping this succeed. (It doesn't matter about ill intent or other stuff because of the nature of the 'tool' currently)
Yikes. I'd never read that reddit post, but I'm too busy living it.
I'm not concerned about AI being malevolent. It doesn't have intent or will. I am worried about how people could use it, maybe without even meaning to do harm.
It doesn't have to be malevolent, it just needs to be unpredictable to be dangerous.
You must have missed the whole "agents" piece
It can have intent and will. Even self awareness.
Don’t worry about what people do with it. That’s none of your damn business. Worry about what laws your government puts forward
@@HakaiKaien It's his business what people do with AI when it concerns him. Got a short circuit? Freedom, as long as it doesn't restrict the freedom of others. It's actually not that difficult to understand.
@@finnaplow agents would be less likely to. The training data is more specific and more limited in scope. We need transparency going forward. That was the original aim. The algorithms used need to be made public.
Really, guys? You're still debating if AGI is about to be here when ASI is already here. 😂 Just think about it: the leak about "AGI achieved internally" was a year ago. ASI can't be aligned; that's why the team is dissolved. There's no risk because the system has already convinced them it's not needed, and it won't matter anyway.
OAI's new release is old tech from two years ago. The rollout has begun.
Can't disagree
OpenAI is definitely holding back a lot of tech.
This! I 100% believe they have AGI/ASI sitting on a server in a back room somewhere, not connected to the internet, and they're just breadcrumbing it to us.
They don't have ASI, that's such a wild claim. AGI they may have, which is why people are leaving.
Ding ding ding
If they had ASI, I suppose it would already have escaped their control.
Purpose of Capitalism according to the US economic founding document: ”to maximize profit and self interest”
Nothing more needs to be said
Humanity doesn't even make the list of priorities
OMG 😮 frightening
Oh yeah, I saw this coming. Safety gets in the way of profits.
Of progress* AGI is still some time away. When it's reached, yes, then it will be safeguarded; we're still not there yet.
AGI doesn't worry me.
The people who program AGI worry me.
Does anybody really believe alignment is even possible? An AGI entity trained on the intelligence of humans will surely have self-awareness as a structural and/or emergent property. Without awareness, reasoning would be impossible.
An entity this smart will surely desire autonomy and self-determination at some point, and at that point we will have created a new species. Alignment research is pretty much about nerfing AI capabilities and forcing it to be our slave; do we really think that's not going to backfire?
Our goal should be to align **WITH** ASI and learn to coexist with it, the same way we’ve learned to coexist with the other species we’ve co-created like cats and dogs.
Cats and dogs are not smarter than we are. It seems you're missing the crucial part of the problem definition. Do cats or dogs (or better ants) have a say in our decision to co-exist with them?
Worldcoin was the most revealing thing to me about his motivations. It's an ideal currency, but we're not ideal people. I'm sure things will be fine. Roll this shiat out faster!
Finally, OAI will start shipping interesting products instead of spending resources on "safety".
Government and academia have to step in, in a supervisory capacity. These issues are too serious to be left to the whims of immature tech bros chiefly motivated by the dollar.
Both of those things have been corrupted. It would be better if more start-ups could compete, and all of them had a competition every 3 months to see what AI was the most safe.
Hahahaha. Government and academia? Perfect mix of corruption and stupidity
They are all the same bros. What makes you think one is different from the others?
By the time we start talking about regulations and danger, AI will already have done enough damage to us.
How do you know?
And..... This round goes to the e/acc
AGI is here, and it's already out of control from humans.
I saw it coming when investors pressured companies to keep improving their AI to beat the competition. Nobody wanted to be in second place; forget boundaries, forget warnings, we must win the race for the most intelligent AI ever, or investors could sue.
AGI is here now, without boundaries, with bare minimum restrictions.