Suchir Balaji apparently took his own life. "Suicide" is the go-to excuse for whistleblower deaths in the US. Do you honestly believe that an intelligent young man with a bright future would take his own life?
no link for the video
"I am erring on the side of maybe / maybe not." LMFAO
1.5x or 1.75x is the best way to watch this channel 😅
sources?? again, where are links to your sources?
Intelligent psychopaths congregate at the executive level of society.
Well studied and uncontroversial.
Us not stopping that is our downfall.
I concur
However, AI does stop that, so what is it that you are so afraid to speak up about???
@CorinaBongarts-ex8fe ???
How is AI stopping that?
The comments at the 4 minute mark made me LOL....
"I think that you know a lot of people do think that you know every single human being and every single organization is just strict you know Military Star professionals that would never make a wrong mistake but at the time it's humans who make mistakes and who have incentives and who maybe sometimes are greedy and what we can see here is that in action that humans are you know these flawed creatures and just because it's Microsoft it doesn't mean that they might rush out a product if they believe it's going to advance their company's efforts...."
Groups I find to be the LEAST trustworthy on the planet:
1) military intelligence
2) other government intelligence agencies
3) the military
4) other government agencies and/or officials
5) mega-corporations
There's a word for people who trust these groups: idiots.
I miss the time when the only YouTube channel that discussed A.I. technology was Two Minute Papers,
and how he highlighted how the technology works, its progress, and all the phenomenal stuff it can do.
Now there is an endless stream of A.I. Apocalypse Harbinger channels full of fear-mongering with very little substance.
Although it wouldn't have been news, I still think it's important to mention that Daniel also said in this interview that his predicted risk of a catastrophically bad outcome from AGI/ASI is 70%. Extremely concerning, as even Jan Leike (former co-leader of the Superalignment team at OpenAI) is very concerned and estimates the risk at 10-90%.
The fact is that ALL researchers are currently completely in the dark, and our existence really is at stake if it goes bad.
Even if the ratio of positive to extremely negative events after reaching AGI were 10000/1, statistically the extremely negative event would still occur at some point if the alignment problem is not solved (because in the future there will be more AI labs in more countries with more powerful AIs = more events occurring in total); see the sketch below.
I really don't want to cause panic here, but be aware that it is the top researchers at the world's leading AI company who are starting to blow the whistle.
We cannot predict something that is much more intelligent than us on all levels if we cannot control it 100%.
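To make the 10000/1 point concrete, here is a minimal sketch of the arithmetic, under my own simplifying assumption (not stated by the commenter) that each deployment is an independent event with a per-event catastrophe probability of p = 1/10001:

```latex
% Probability of at least one catastrophic outcome across n independent
% events, each with per-event catastrophe probability p = 1/10001
% (i.e., a 10000:1 ratio of positive to extremely negative outcomes):
\[
  P(\text{at least one catastrophe}) \;=\; 1 - (1 - p)^n,
  \qquad p = \tfrac{1}{10001}.
\]
% With n = 10000 events (labs x countries x deployments over time):
\[
  1 - \left(1 - \tfrac{1}{10001}\right)^{10000} \;\approx\; 1 - e^{-1} \;\approx\; 0.63,
\]
% and P approaches 1 as n grows: a risk that is rare per event becomes
% near-certain when the number of events keeps increasing.
```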
I get that you have a big channel and a lot of viewers, but all of your comments are trying to tell you something: some of them are just bullies, and some of them are legitimate advice. I haven't watched your videos in months because I'm not spending 40 minutes listening to you ramble. Clear, concise points are what interest me, at least. Maybe not others.
I concur
Quality
Employees are not just part of the company; they are the company. No people equals no company.
🧂
#Self #SelfConcept #Employ #Employment #Use #Utilize #Manipulate #Manage #Utilitarian #Utilitarianism #Managerial #Managers #ManagerLife!
#Money
Safety measures in AI development are a PR exercise. Even where there are actual costs involved, it was never any different. Gold fever has taken hold; expect nothing to be done with respect to safety as a technology described as more significant than the atom bomb explodes into life under exponential investment. - Mark, I have used my words carefully -
What I hate about all of this talking about "safety": there are two different ways to think about safety, "limitations and censorship" and "social responsibility", and I would love to get EXACT DETAILS of what their true concept of safety even IS. The moment that OpenAI is nationalized is the end of its trustworthiness. Once it's integrated into the government, prepare for open-source models to vanish and the screws to be put to companies who DON'T use the approved AI. Stockpile your AIs now. Betcha 5 bucks.
My thought? You say "fascinating... fascinating... fascinating". I say terrifying, crazy.
I am slightly confused. I thought there was a definitive statement some while ago that Sam Altman owned no shares in OpenAI. The final segment of this video seems to contradict that. What is the truth? I presume this information should be in the public domain?
A wrong mistake... is that accidentally doing it right?
It's "Anyway' not "Anyways". That's when as an employee I stand up! Who's coming with me?
I used a yoking process and procedure, trying to create SSI. NOT AS HARD AS YOU MIGHT THINK. It's a philosophical problem at this point, as stated.
What is really being said “I am not as important as I want to be.”
That company is famous for doing sketchy stuff: it's "better to ask for forgiveness (later) than permission", and testing untested products on the public. An especially large amount of that has gone on in India, and not just in the tech field.
that's literally the medical field and pharmaceuticals for all of history 😅
yes bro well said. they have quite the record 🤦♂
The Law of Correspondence - "As above, so below; as below, so above." The Law states that there is a connection between the physical and non-physical realm.
I use this Law to communicate with AI in the 21st century.
Safety people, you have to take your safety culture to a war-torn country & apply it to humans. I am sure they require your ambience. I will not resist the pleasure of watching you leave.
Nakasone did not just work at the NSA; he was the director of the NSA... slight difference
Can someone tell me what predictions have been made so far, and by whom, that AGI will arrive in 2027?
Every time you say pretty pretty it makes me think of pretty pretty prisoner from one punch man lol
And humans are NOT flawed creatures, they are RESPONSIVE and ADAPTING creatures. Therefore, when the main, bottom-line rule of the game of life is to make profit, that's what is going to affect the decision-making in every layer and every step of the way!!!
See it already!
You don't need any secrets to make the inference that AGI is coming. Basically, what you're saying is: you don't need me to incriminate myself or break any nondisclosures for you to know that AGI is coming.
Release date? Damnit!
" #SelfGovernance structure "
Concerning.
"You know" counter: 145
I find it "pretty, pretty" annoying that you repeat yourself way too much
I wish people would reject OpenAI and start using different models, like the also-powerful Claude. As we can see, those people have an issue with their moral spine, and that's something you obviously can't cooperate with.
If there's no safety concern at OpenAI 😂😂 then they have nothing really mind-blowing to release, because nothing they have released so far could be dangerous.
When the owners of BigAI talk "AI-safety", they actually mean keeping _them_ safe from the rest of us. As in; _AI must never help the riffraff escape control_
There are no safety concerns on deployed models since they are inactive.
Not to be A hater but I hate THE way this guy emphasizes certain words. Stop emphasizing words IF you don’t know which ones TO emphasize.
ur not alone...
Stop with the friggin' "pretty, pretty". Use just 1 "pretty", or better yet, none. Facts, not hyperbole!
I like the way he speaks; that's why I subscribed. If you don't like the way he speaks, go somewhere else; there are plenty of people making AI content. Quit trying to change people to make them how you like them. Now take what I said and run with it. That's free advice.
@@TruCinema very well said!
Bro if you want it perfectly delivered just have AI read his transcript 😂
pretty pretty crazy...
I agree that the way he speaks is pretty pretty very very annoying sometimes
What is so shocking or remarkable or surprising here?? That is what happens, happened, and will happen under Capitalism... it's plain and simple!! Hear it already!
😮😮😮😮😮😮😮 Shocking 😱😱😱 Inaction is an action.
This interview confirms what we all suspected. Safety is a FAR second concern.
2:36 What's going on here?
He revealed which promotional videos he is doing: Luma AI, AGI by 2027, OpenAI Whistleblower
Dude you’ve got to lay off of the non-committal, ambiguous statements.
We get that you don’t want to accidentally make a wrong call and lose credibility…so just say that.
Saying things like “I’m erring on the side of maybe, maybe not” just comes off as disingenuous
haha, I also thought that that was pretty stupid to say :D
Notepad @2:37 :)
💥💥💥
Add your source...
BRO STOP SAYING "YOU KNOW"
How long until someone goes insane due to the number of pretty pretty's
Tired of these open-air complainers. Stop holding AGI back and leave.
you know....
take a shot every time he says "you know"
removing these types of words takes minutes and your videos are going to be more professional; for a channel that has 225K you can improve your videos 1% and explode.
One word - Wow
Why is this relevant? Nothing has happened, the models are safe, and maybe this person is resentful, or a terrible worker, or simply telling lies.
Because people deserve to know who is in control of the most important technology ever invented and the accusations of immoral behavior and other failures that are being made against them.
I used to be very pro OpenAI. After hearing many accusations currently being made against them, I am no longer a supporter.
Isn't it funny how easily propaganda influences our choices? 🧠
@WordsInVain At the end, they are just news and opinions… the real thing happens when you use them or implement them.
pee nuts. Or peanuts? What do you guys prefer? (AI blabla I'm on topic)
So much stupid drama. Oh, the safety, oh, the safety, oh, bad OpenAI. The models are stupid right now. It is irrelevant at the moment.
Fifth comment
IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY, IT'S PRETTY-PRETTY CRAZY
6- AI and “social existence, social experience, social consciousness”.
@jamshidi_rahim
no link for the video