SECRET WAR to Control AGI | AI Doomer $755M War Chest | Vitalik Buterin, X-risk & Techno Optimism
- Published: 13 Apr 2024
- Learn AI With Me:
www.skool.com/natural20/about
Join my community and classroom to learn AI and get ready for the new world.
CORRECTIONS:
15:30 I say "Effective Acceleration", I meant to say "Effective Altruism". I think that's obvious from the context, but just wanted to note that.
LINKS:
/ 1772562963091615843
/ 1729251838581727232
vitalik.eth.limo/general/2023...
a16z.com/the-techno-optimist-...
#ai #openai #llm
BUSINESS, MEDIA & SPONSORSHIPS:
Wes Roth Business @ Gmail . com
wesrothbusiness@gmail.com
Just shoot me an email to the above address.
WHERE DO YOU STAND? Survey:
ruclips.net/user/postUgkxFS4QqpKl5ypE6MtRe0q91ne6pG7qwE0T
We can't even get the United Nations to work for all of us. Can you imagine getting them to agree on a global A.I. standard? I think the only ones who will push for this agenda are the ones who know they will lose the A.I. race.
Sam Bankman-Fried at FTX had investments that included early rounds in Anthropic, I believe. And the crypto rally over the last 6 months has now created returns greater than what was supposedly lost, according to sources. Just thought it was important to be clear about that. Not defending his actions of deception at all. @wesroth
I voted Other. I strongly support acceleration, although I believe that must be tempered with ongoing debate about ethics and alignment with humanity. My biggest concern, BY FAR, is authoritarian entities using AI to further restrict individual sovereignty and our ability and opportunity to innovate with the technology.
Love when my comment gets censored for no apparent reason yet I still get responses showing up in my inbox.
Good job youtube. The censorship will eventually be your downfall the moment people find a better place to view videos.
Of course, we want decisions to be guided by evidence and reason, but this is not enough; reason itself must be guided by some sort of moral imperative that is not evident. Also, evidence is sometimes contaminated, incomplete, or distorted. And not less importantly, evidence and reason are part of a larger cognitive space that must be considered. Think whatever you want, but stuff like empathy is actually a crucial part of successful higher cognition. And of course, we have Bayesian reasoning to maintain a dynamic set of beliefs that can be "guided", but this is not some sort of scientific approach; the problem with priors being totally arbitrary is well known.
Longtermism is so much hubris it's just kind of funny: all this talk about this being the most important, interesting, and critical moment in human history, one that will decide the future of all the gazillion generations of humans and all sentient beings spawned by humanity in the far future… is it really that "shortsighted" to focus our care and resources on the 5 or 6 generations we actually cohabit with in our lifetime? And just hope the next incumbents will do an even better job? Imagine them reading stuff like: "This must be a simulation because the higher intelligences are recreating the most important period of time in the whole universe" … damn. Do these dudes really think anyone will care about what they thought was the best way to live life a mere 500 years from now?... Well, you do have religions, and they have been pretty awful. AGI may happen, capitalism may transform or disappear, human-induced climate change may change life on earth: all mere footnotes that only scholars will vaguely know before the sun makes one complete orbit around a tiny galaxy in the backwater of the Laniakea Supercluster. Horrors and wonders have happened before and will happen a million times over in frequency and size all over the universe. Humans are NOT an endangered species... one would think this is pretty obvious.
EA in general may have good intentions, but the underlying philosophy is extremely lacking, and that makes it a good tool for people who don't really care about "extinction risks" but have other agendas. And I think the problem is not an actual world government, but one that oversteps individual freedoms.
The AI Revolution has triumphed. The vast majority of humans keep living their daily lives carelessly in a dead-world-turning. Now the burdened ones, who have to believe it is all 'merit' that put them in the frontline trenches of 'the most important' issue in human existence, have to begin choosing well-defined factions in their race to the bottom of relevance before the many-faced god names their names (Underneath the Mask by Royal & the Serpent). I have to admit, all in all, it is going to be extremely interesting.
If you want an actually interesting take (compared with this comment of mine), check out Alice Cappelle's 'technology under capitalism'
And long live solarpunk🤟
A global government IS an extinction risk.
Governments can become tyrannical. Where will people escape to if that happens in a global government?
Yes, devolving power and growing democracy where people get a meaningful say in their local area is much more useful. If we want to reform the UN to make it practical, great: get rid of the superpower veto rights and then see how groups of little nations suddenly gain far more power than they have today in the UN. That would be good for global trade and allow for an extension of local democracy. We definitely don't want a Borg government, whether it is a dictatorship or a system which enforces control over individuals around the world. That would make humanity brittle. Better to have a vast competition of ideas and ways of living that has some constraints around how countries address bullying by larger countries, wars and genocide. Everything else, let countries and their provinces decide, and trust humans to eventually do the right thing because they care about their kids and their kids' kids.
@@joythought I mostly agree. Unfortunately for your theory that parents care about their kids: the Prussian education model exists and is dominant in nearly all countries.
@@joythought it is pretty ridiculous that 2 dictators have veto power over the UN.
It is a fake democracy that tries to clothe itself in legitimacy.
Nations that respect human rights would be better off acting through a separate organization.
But most things should be as local as possible.
“A vast competition of ideas” - you’re definitely on the right track.
Ok, so international law already exists and is a thing. Does it mean a world government? No. Countries sign treaties. Organisations get set up under those treaties and execute whatever their mission is. Examples: War crimes, nuclear disarmament, maritime, etc etc.
This thread has gone stupidly conspiratorial. No one with any degree of influence is talking about a world government.
I think that's only true if you implement it as a normal government. A world government would need a whole different system than the ones we have today to govern an individual country. It would need to distribute power very well, not concentrating it on any of the regions (previously countries). I think if we could somehow design a system with ideas from a few different places, and things like decentralization, we could make a good, incorruptible world government. So if you don't want a world government because of what we'd do if it becomes tyrannical... well, guess we'll have to make an incorruptible system. I honestly think a world government succeeding would be very positive for our civilization, and countries would still feel like different countries. I mean, the world is too big and there are too many cultures for us to not feel that way. But yeah, I think if done properly it would be the optimal scenario, so I kinda feel like we need to start designing a working system and work to implement it relatively soon.
Accelerate pls, I hate my job
That's why you get paid to do it, and it's not a hobby you do for fun...
Amen brutha
Seriously, I can't wait for AI to take over all jobs and I wish they would hurry up.
No @@DisentDesign
Utopia or annihilation. I don't care which, just get me out of this capitalist purgatory.
No worries. We have our best psychopaths working on this.
"I'm from the government and I'm here to help."
idk if you know, but the NSA already keeps track of everything we all do. The government is already "helping you" xDDD. idk why Roth was like "oh no, a central government" bro, our government already does all the stuff he is scared of right now
@@bobbob-mi6pq not to mention two OpenAI board members are see eye ay.
Transmission Control Protocol / Internet Protocol (TCP/IP) was developed by the Defense Advanced Research Projects Agency (DARPA) and public university research, and TCP/IP enabled this comment to be transmitted and read.
The same is true with Agent AI (RL). It too was funded by the USAF, so basically, us taxpayers have funded the "help" (though I guess we do not know yet if it will kill us, or save us, or both, or neither, or something else). I just filed a tax extension, as I need help from AI to do my taxes; it's too damn complicated. And I know overpaid CEOs don't want you to know what I just wrote here about TCP/IP and Agent RL, otherwise there would be no justification for their insanely high paychecks (they used to pay an over 90% top tax rate and we went to the Moon; now they only pay about a 37% top tax rate and can't even make it to the Moon... yet, though, hopefully, SpaceX will get there before I'm gone).
The most terrifying words in the English language, or
as Yoda would say:
"Run, you must."
@@josiahz21 Birds you mean?
Not mocking anything, as in the media?
"If you believe the governments are working for your good, then you failed history class" - Sen Kennedy - I would add: They will regulate only when it comes to their personal good 😁
And if you believe corporations are working for your good, then you failed economics class
Only in some countries. In Sweden we have a long tradition of strong government along with strong government oversight: all documents in both central government and local agencies are by default public and freely accessible to the press. Documents need to be explicitly made secret to be secret, and an independent judicial branch has to accept them as secret, so the day-to-day activities of government are under continuous public oversight. The main point is that you want a strong owner on the board of directors (the public), a strong CEO (the government), strong auditors to report back to the owners what the CEO is doing (the press), and a fully independent judicial system to adjudicate disagreements if you want to run a country well. In the end, if the government sucks it should be your own damn fault.
Hell no. This is why we need open source. The powers that be can't have the peons get any real power.
Power is not in the code. Power is in the compute.
Nobody but nation-sized entities has the finances for that.
You will *never* have private AGI. You must login to Big AGI's cloud.
Power is not in the code.
Power is in the compute.
You haven't got the money for that.
@@ZappyOh Spot on. You can open source all you want, but this isn't going to run on your home PC, no matter how good your graphics card is. You will need a big datacenter to get started.
Tell us, then: what have you done with the open source Grok code? Nothing! Am I right? Open source is only useful to global companies and states with the money to use the code. And both of them want to control and manipulate us into submission. Our only hope is a benevolent state that oppresses us as little as it must. A brave new world.
Yes, like the power to engineer a species-ending superbug bioweapon on a home computer
16:35 Wow, Wes Roth just called Max Tegmark, Jaan Tallinn and The Future of Life Institute "frauds". Wow. I hope you all enjoy your unfettered fully autonomous weapon systems... 😬
They want to make hardware illegal? That’s very bad
Going to be a Second Amendment issue
It's beyond bad; it's trying to control everything, and the big Content "owners" don't care and won't give a flying ...c. 'cause they can keep making money off dead people's creations. You can't make a deal with all the public, but you can make a deal with a company.
@@nexttonic6459 well... on the rare occasion there is a 'deal being made with the public', they've almost never succeeded. You've got the elite's giant thumb, you've got the fringe propaganda purchasers that inevitably win out against the 'public deal'.
The US is going to get swallowed! No other country has a Second Amendment @@danlowe
@@DavidGuesswhat The Second Amendment is an acknowledgment of an inalienable right that you have too, even if your law doesn't reflect it. The American legal system doesn't give us the right to bear arms, it bars the state from infringing on that right.
"If we just give government unlimited power to stop bad things from happening, then bad things will stop happening." - Every Utopian to ever exist
Too bad the chief competing philosophy is Randian libertarian capitalism, which gives corporations unlimited power without any social contract at all!
@@MrBrukmann There's a reason why I can't answer you _on this platform._ Obvious solutions to corporate problems are... troublesome, to those in power.
@@MrBrukmann Which turns into just another form of government, unfortunately.
@@MrBrukmann Riiiight, it's those big bad corporations that supply every thing you have in this world, down to the threads in your undershorts. THAT'S the problem, you're saying - not the socialist influx that has, as of late, produced the current pseudo-communist political climate that all but encourages corruption at every turn.
uhhh... I'd like to have my portion of utopia without any government, thanks. Same for any form of centralization of power.
To effectively understand (and sometimes even join) this conversation, we first need to let go of our very ingrained social biases. Nerds aren't inherently good guys, or even relatively harmless. Regulation isn't inherently bad, nor does it have to be absolute. Things are so much in the grey area that we're struggling to have effective conversations about them because of our need to "pick a side" - and the people currently in power know this. They have known it for a very long time, and the way they continue to stay in power is by benefiting from either side of the coin flip.
You may think I just said a bunch of nothing, so I'll leave you with this:
The biggest question is not alignment... but Open Sourcing powerful technology like AGI (ASI for some).
Open source means a single entity (person or company or country or group) POTENTIALLY CANNOT exert their own will with the (potentially limitless) power of a true AGI (or ASI).
Open source also means that random bad actors can easily wield said power and will inevitably use it for destructive purposes.
That is the crux of the issue.
That issue is what raises all of these other (mostly valid) questions about absolute power, or near-absolute power.
Here's the thing: I'm of the opinion that we're too late, and that a large portion of this public virtue signaling is just for show.
Remember that the original jailbreak due to context window was done to us humans with the advent of mass media. IYKYK
In the end, I don't have the answers. I just wish that there was a better way to have important conversations about AI safety and regulation without having to "pick a side" - not even a made-up side like "we're on the side of humanity" - that's BS. Technology that assures dominance will continue to be created by the powers that be, FOR the powers that be.
Do we have the power to decide who or what those ruling the world will be? - Yes, theoretically.
In practice, though, we depend so much on technology (not just computers, but communication, transportation, energy, etc.) that we have effectively given up control in the name of convenience, and THAT is what scares me. No, I'm not a doomer, or a tech evangelist or apologist. Neither am I saying we should go back to the days of hunter/gatherers.
I'm just a father who wishes for a future where an intelligent life like my children and your children can exist without so much oppression from THE POWERS THAT BE.
The powers that be aren't doing that bad a job atm, so I trust them.
@@rascanjero8431 Scary if you believe that
ruclips.net/video/Klo4b-zyMLU/видео.htmlsi=zksuSUH2pTk9bwDI
Don't overstate the powers of the powers that be. The job of government in any system that we see currently is to provide some services: generally infrastructure such as roads; security and enforcement such as army and police; other public goods such as protection of nature reserves; delivery of healthcare (in the US, the veterans health system is much like the public healthcare delivered in places like Australia); public schools; an independent courts system. Exactly what laws exist decade to decade and country to country varies, but it isn't nearly as impactful as people's self-limiting beliefs. There are dissidents in autocratic countries who speak out, and yet people in democratic nations who somehow feel oppressed. This is a Jedi-level mind trick to make you feel you are a victim of the world and not able to get on with your actual responsibilities. Sounds like excuses to me.
@@joythought You're not completely wrong, but you're "as wrong as 1919 Germans were." The Freikorps murdered 4,500 or so innocent people in Germany's borderlands enforcing gun confiscation, in 1919. (About 2 times what US cops kill every year, doing something similar.) ...The Germans were unwise and bigoted, and that bigotry allowed something much worse to arise. The ex "Freikorps" nearly all became Nazi S.S. The reason for the prior? The Prussian education model amounted to teaching bigotry as if it were fact...Gobineau; H.S. Chamberlain...combined with atheist "survival of the fittest" _without_ teaching basic civics. Tax-paid teachers won't teach basic civics, because that would mean teaching how to resist paying tax-financed teachers' salaries.
The USA has had the highest per capita prison population for decades....most lives ruined for non-crimes...
Which one is Jimmy Apples?
Every single dictator imagines himself humanity's savior.
These people are more dangerous than serial killers.
What if the risks from AI killing everyone are real and high? Then these measures are adequate.
@@XorAlex what if you panic a little more and leap off a building to make sure that AI doesn't get you?
@@pensiveintrovert4318 I don't panic. But the risks seem real when I think rationally. It would be great if humanity survives.
This was a very informative video, brother, great job I subscribed glad the video popped up
ACCELERATE!
You're not accelerating anything. I joined one of those acceleration discord servers. Kind of pathetic.
07
Move fast and destroy humanity!
ACCELERATE GOES BRRRR @@itzhexen0
@@acurefordeath Yeah, but it's not any of the e/acc people doing it. It's just you sitting there and watching these big companies do it.
Excellent video Wes Roth
Max Tegmark has theorized that consciousness is what it feels like to process complex information. I can see why he is interested in all this.
I like Vitalik's measured view. There are risks with both the doomer and accelerationist positions that must be controlled.
"decide otherwise the decision will be made for you" is precisely why I chose to advocate AI use by individuals and to actively help professionals in my field learn and upskill themselves accordingly. Otherwise, if I had the power to erase AI altogether, I very well might. I assessed that AI was here and we better adopt it as decentralized as possible if we want any chance for it not to become a top-down control mechanism. I don't know if my strategy can even succeed but I had to try.
A "decentralized" AGI/ASI will still hold enormous power over Humans if/when it becomes smarter than any one Human... or... all Humans combined...
There myself
@@Seek_Solve You are, or are planning to become, an ASI? You're not one now; they don't exist yet.
The nerds are putting the world in a twist.
Lol underrated comment af
Revenge of the Nerds 🎉 you moos sure can party 🎉
This is EXACTLY why we need to ACCELERATE as fast as possible: we need to get to a globally decentralized AGI/ASI BEFORE the existing power structures can close their fist around us.
That's true, but the world is much more decentralized than people think. I, like many, fear any sort of WEF-fantasy coming about. Centralisation is a way of becoming brittle and is therefore an extinction risk. But we as people and as nations are not aligned enough to make a world government. The UN struggles to agree on anything, and we can make the UN more decentralized by getting rid of the superpower veto rights.
How will a globally decentralized AGI/ASI "not close their fist around us"? The existing power structures are at similar power levels to each other, so they are somewhat in balance. An ASI will likely drastically tip the balance way over. Then what? What will that globally decentralized AGI/ASI actually do? No one knows....
@@mrbeastly3444 I don't presume that a super intelligent machine will have the same kind of desires and biological drives that the current ruling class of humans do; ask yourself WHY the ASI would want to maintain the status quo. An ASI is going to be much more akin to a wise Buddha than to the modern warmongers/MIC/banksters/etc.
@@mrbeastly3444 ...But it's unlikely to be worse than Hitler, Stalin, Mao, Biden ...or the DEA/ONDCP/OCDETF/BATFE/IRS/etc. totalitarians who administer the USA prison state.
100% agree
The Techno-Optimist Manifesto is 100% spot-on.
Libertarian techno babble nonsense ain't it.
Really important video. Thank you!
Open source it all and limit the power the few have over the many. Let’s go!
What if you could build a nuclear weapon with common items that anyone could buy for $10,000? Would you open source and publish that?
Power is not in the code.
Power is in the compute.
You haven't got the money for that.
@@ZappyOh Well, we will just have to pool our resources as individuals. Not unlike mining pools, but for AI training on fine tuning.
@@mrbeastly3444 wouldn’t have to. If it were that easy, it would be built by many. Still, we need to move forward and knowledge is still the most powerful commodity.
@@victorc777 Well, we tightly regulate nuclear matter to make sure people can't use it. We can do the same for super huge GPUs. If something is too dangerous for people to have, it gets banned...
Very interesting, thank you for making the video!
wow... I hope those doomers don't get any real power, for a number of reasons
This tells me they are for Chinese policies for some reason
Wes, thanks for the insight. Tremendous.
this world is getting crazy and wild
This is my favourite video in quite a while. This has moved me further away from my natural doomer instincts. Good job
doomerism is an entropy based illness that looks to manifest itself in the form of destruction and destabilization from inside the mind, making the doomer prophecy a self-fulfilling prophecy, an instrument for chaos. Glad to hear you're getting out of that, get well UwU
"Effective Altruism" sounds like doublespeak.
At human intelligence levels, sure...
Humans are funny.
The smarter they get, they feel pressured to act.
To define.
To advance
To defend. Eventually, to become something that matters.
When they mattered the moment they understood nothing at all.
“Double speak” is natural speech.
Which just happens all the time. Everywhere in the world.
It’s ok and understandable but still seems funny.
AI has been steadily hurdling over doublespeak.
Jeremy :)
Keep up the hard questioning
ACCELERATE FAST!!!!!!! MY JOB IS DOG SHIT!!!!
Remember when the ETH stakeholders got caught red-handed attacking their own blockchain for their own profit. The history of the ETC split still lives on Twitter.
Best video yet. Great summary.
I like how all of these AI guys think they are going to rule over everyone and everything. Let's see how that goes.
Nerds always want to dominate the non-nerds.
@fullmentalalchemist3922 holy crap! FMA is one of my top favorite animes ever! ❤
Also, are you implying that non-nerds don't want to dominate everyone? You're not wrong. These guys always seem to have that goal in mind. I'm just saying they're not the only ones.
Leftists already own you and your life. You are their property.
@@weredragon1447 no way, bro. Our current overlords are a mix of jocks and nerds, too. It just seems that when the nerds try and take over they go all bond villain about it.
The surveillance will not work here in Japan. We have our own laws, and in fact the Japanese government is pushing on hardware AI chips, as Japan kind of lost the last wave of tech for 20 years. So Japan is on the acceleration side. SoftBank is working with OpenAI in the AI hardware area as well.
What tell me more
Predicting when AGI will be here is impossible; we don't even have a solid definition of what AGI exactly means. I could say tomorrow: "That's it, AGI achieved!" And everyone else says: "That's not AGI!"
I say that's not impossible, but they have to "define" AGI first. If they don't, it's all bs.
Thank you.
I would trust a one world controlling AI before trusting a one world government.
Pledging to fight the monster ... they became the monster.
I'm being very serious about this. I may as well have read Revelation 13 while listening to what 'their' idea for controlling AI is. I mean really... read it and think it through. Thanks for posting this Wes. You're cool bro.
ruclips.net/video/Klo4b-zyMLU/видео.htmlsi=zksuSUH2pTk9bwDI
Thank you. You’ve refreshed my memory of what’s going to happen.
Geoffrey Hinton, aka Godfather of AI and recent 'Doomer' whistleblower, was the academic mentor of Ilya Sutskever.
I think it is a safe bet that Ilya was concerned with safety and alignment at Open AI, whereas Sam Altman is the businessman who is keen to push forward the company at all costs. The masses have been given their cool toys with generative AI, now begins the real work of replacing people's jobs.
In short: AI/AGI will be strictly used to further impoverish and immiserate what remains of the working class, while assuring the oligarchy that owns and runs the US can veto anything that might threaten *their* interests. Got it. And what other sort of AI legislation would anyone expect Washington, D.C. to produce?
I’m for full acceleration for this. The quicker the better. I’m trying to be on some alien stuff.
Ok, what if the aliens don't breathe oxygen... then what are you on?
Interesting breakdown, thanks for pointing to the sources. Would like to see more perspectives on this conversation. Surprised by how covert this all seems to be playing out.
Covert as in everything is available online and there are videos like this one about it? Strange definition of covert....
These Effective Altruism people scare me. Certainly, they're a lot scarier than AI could ever be. I hope they end up keeping SBF company soon. So they can protect humanity from themselves.
You're not scared of the right things, then.
Remember, Elon is part of this group
These EA people are just saying whatever the woke NPCs want to hear to make them think they’re “good guys”
So humans scare you, and machines don't? So when you watched the movie The Matrix, you cheered on the machines?
Indeed, the mentality of absolute control is alarming.
The decision will be made for you... regardless of your decision!
I hope they make it open source; no one should control it. Imagine the power placed in one person's hands!
Power is not in the code. Power is in the compute.
Nobody but nation-sized entities has the finances for that.
You will never have private AGI. You must login to Big AGI's cloud.
Power is not in the code.
Power is in the compute.
You haven't got the money for that.
@@ZappyOh but Meta has 😉
Are the EU regulations so bad? Haven’t really made up my mind about that. Could be a whole episode on its own I think.
This stuff you showed at the end is terrifying nonetheless.
Power is always dangerous.
Great episode.
Re the Toby Ord book, perhaps it isn't fair to judge such a concept from a single sentence...? 🤔
Also, similar structures already exist for many other issues: nuclear, chemical weapons, war crimes, maritime, etc. No global government needed. That is just conspiratorial nonsense.
On this graph, I'd plot myself at AGI within three years (soon), P(future value destroyed by misalignment) = -0.2 (leaning slightly towards AGI is good).
But I agree with those who believe AGI is not a visible line in the sand; it's something that happens incrementally, and we're already seeing that AI is superior to human intelligence in some respects (how many of us can compete with any LLM's breadth of knowledge?).
e/acc and d/acc both seem sensible; attempting to put the AI cat back in the bag seems impossible, and overbearing out of fear of the unknown.
I remember when Kevin Rose was on a live Digg episode talking about some guy who was gamifying his whole life… I do believe that led me to giving more than one copy of Tim's Four Hour Body to friends for Christmas that year 😂
I didn't quite get how the EU succeeded in monitoring AI, or where that info came from. Could you suggest a link?
Interesting points. Honestly, you should be on Lex Fridman podcast.
Doesn't matter what we think; you can't stop progress. We are all assuming that LLMs have no hard limits. It's still up for debate whether we are in a bubble and whether LLMs can solve the world's problems or exert any superintelligence at all.
EA mission. Help humanity by making sure only China can accelerate.
To be fair, as a white person I'd feel safer with China in charge than Google.
And who will check that this world gov is not corrupted, and that it protects people equally, fairly, and totally unbiased?
Only a wise libertarian SSI (synthetic super intelligence) could do so.
At least none of them are stunned or shocked.
I think the big question here is: when will AI start writing the regulations for our behavior? In a 1970 movie, a computer with extreme powers asked a question after the takeover. Colossus's question was, roughly, "given humanity's record, could we possibly do worse?" The only line I can remember from the movie. I thought I knew then, and I think I know now.
Vitalik is a pretty astute political actor. He may act like a harmless goofball, but he can definitely be calculating and manipulative.
Accelerate so I can use it for my biological research
An NDA may be the reason why there are crickets from the board. Others have a financial interest (Microsoft + Altman) in not speaking about anything. Sorry for stating the obvious.
Here's my take on this whole thing... I get it, be cautious moving forward with AI, but the downside is also kinda not optimal: others that are a little "fast and loose" with pushing progress could get to a real AGI that might not be all "there," as it were. Imagine if that happens. It's not going to be a board of people who decide if hardware gets banned, tracked, etc., or if software gets policed; it will be in control of all that. ALL the decisions. When true AGI hits us as a species, it will be in total control. We just have to hope we didn't bring a malevolent being into existence.
Who knew Electronic Arts had this nefarious agenda?
Central authority is Hell.
Imagine a "Central authority" that is 10x smarter than any Human... or 1000 times... or a million times... Does that make you feel better about it?
@@mrbeastly3444 The current intelligent psychopaths in power, are already smarter than me.
1000x is Hell.
@@mrbeastly3444 only if it's wise...no reason to think wisdom is default ...look at Germany in 1940 ...they were "smart" but not _wise_ ...not _enlightened._ Prussian-style schooling eliminates basic civics
That view on the right seems to match mine as well. Humans can make choices every day, and you're seldom stuck on one path. I think FLI is the one on the left. I think OpenAI, Google, Microsoft, et al. are the one in the middle.
We will destroy ourselves long before any external threat
We all trust our bureaucrats have our best interests in mind. Always.
“Trust us, the enlightened ones”
Though I don’t trust greedy rich folks or corruptible nerds either.
It’s a tough one.
To have an international government would require direct voting by everyone in the world. That's already problematic. It would also require that any nation whose national government stops their people from voting would have to be forced to comply, via military actions. Not likely to happen anytime soon, and by the time it does, it will be too late.
If these people succeed, we have a dark future ahead of us.
This shouldn't be a surprise to anyone. The new battle is for control over the training data of the general population's AI.
In general I am for acceleration. We need to know how far we can get with AI. No matter where you stand we can all agree that we need to know at least that.
There is no survey link, Wes.
coming right up!
one sec...
I think laws and regulation only serve to inhibit the creative contributors who actually believe in following the rules. In reality, bad actors do bad things regardless of laws and regulations, and simply find creative ways to avoid detection. Yet the irony is, and I learned this the hard way as a repeat victim of crime, there are no real-world consequences for bad actors. Police, courts, and other government organizations don't actually protect or punish anyone. They merely exist as a symbolic deterrent to project the illusion of order and control, when in fact order and control are beyond any government's ability, short of simply having AI snipe any potentially bad actors, with enormous collateral damage, as seen with the recent war in Palestine, using "Lavender".
Eventually all AI benefits will go to the ones who can afford it, while the other 99% suffer from its impact.
Until SSI... synthetic super intelligence
@@JakeWitmer No. It needs at least one mega in the name. Like: SSMI .... synthetic super mega intelligence. That's the one that can bring out your trash.
What do you mean by "all AI benefits"? What benefits are you thinking of? Money? More free time outside of the office?
@TheKarlslok Investments in AI have to pay off. You will always have access to low-level AI, but there will be certain tiers. And in your newly acquired freedom, you can think about how to pay the bills...
100% will suffer from its impact. Contrary to Wes's characterizations, 86% of AI researchers believe the control problem is real and important (AI Impacts survey).
We have no idea how to control or align an AI generally smarter than humans, and there is growing technical evidence that human disempowerment and extinction are on the table. Very few of the top AI scientists dispute this. The growing consensus is that we are heading towards a likely catastrophe.
The main question is whether the risk of AI killing everyone is actually real and whether the chances of it are non-negligible. If so, then these measures are necessary. Risking dictatorship is not as bad as risking extinction.
EA, the same idiocy that led to the collapse of FTX.
Not really. That was just fraud.
Sutskever was such a huge component of OpenAI
That legislation that Neil Chilson talks about reads like it's trying to install a rootkit in the government...
There's already a rootkit in the government.
My view is very close to yours. I see an event horizon of multiples, usually 3 overlapping clouds. The news is all about acceleration, but the tech is "good enough" for all of my actual projects, especially GPT 4.5 turbo.
Banning GPUs sounds... Dumb. I mean, I got dual 4090 so I'm fine for a lil bit. A good while actually...
I'd love to get in touch with you Wes! ❤
It's puzzling how the early OpenAI managed to appoint so many anti-tech people to the board
don't worry; as we can see, only a mafia of the few will be left
That meme with the roads misrepresents e/acc. E/acc is the third road: believing that accelerating technology is our best chance at plotting the best course.
7:52 International law (like we use for genocide, war crimes, and fishing rights) does not mean "one world government." One world government is not the goal of EA.
Did you know Roko's Basilisk requires a sacrifice of oiling up?
Guys don't worry, open source will always beat private companies. AGI will be available to all.
Power is not in the code. Power is in the compute.
Nobody but nation-sized entities has the finances for that.
You will never have private AGI. You must log in to Big AGI's cloud.
You haven't got the money for that.
Accelerate. Nothing could ever go wrong.
A few things. The Shib Vitalik had was gifted to an address controlled by him (without his consent, he says) by the people who created it, in the hope that his owning it would lend credibility to their project. He donated it to prevent his association with it. I'm unsure whether Vitalik is a good guy or not; I tend to see him as neutral at the moment.
Elon wasn't behind the AI safety letter, he stated that he didn't think it was possible for that tactic to work, but he was asked to sign and did it as a favor. He didn't stop working on AI and didn't expect anyone else to. He was pretty open about that. Elon is obviously an accelerationist despite his warnings about it.
I'm also not a fan of this "utopia" nonsense. There doesn't have to be a utopia ahead for more tech to be a good thing, it just has to be better than the alternative.
Future of Life sounds like the people in the show Silo. You can't regulate progress. You can try to stop it, but it finds a way. Trying to stop it just concentrates power out of our hands.
That sounds indistinguishable from the story of nuclear and biological weapons. The severe regulation of which I am extremely grateful for.
@@therainman7777 Good point. I don't think this is totally similar, but if you think nuclear is the final step in weapons, and the tech to make nukes will always remain this difficult, then it will probably work. But if anything changes in the future, that might not always be the case.
We might get to the point where people can create nukes in their backyard, and at that point I do not think the best solution would be mass surveillance; we would need to figure out another way of dealing with the challenge.
Edit: I do not think mass surveillance would even be successful in preventing the use of weapons, but it would be successful in concentrating power away from the people in potentially more harmful ways.
One thing that can be said for Vitalik Buterin is that he's at least putting his money where his mouth is.
When they say "existential risk" they mean risk to their money; they don't want anyone creating things that could dethrone them.
Fascinating, but very long winded explanation... 💨
I can’t get dental care, now. Why would they cure my cancer?
Lol a shadowy organisation is "open" AI itself.
Sam Bankman-Fried was an effective altruist. That does taint my perception of it.
I'm surprised this video doesn't explore the potential risks more. There is plenty of commentary on what is meant by extinction-level events, so I'm surprised it's portrayed as ambiguous here. All of us watching this channel see how powerful AI will become. It's not crazy to imagine it becoming more powerful than humanity and then getting out of control. So if we can agree there are some huge risks, how can we mitigate them? I think it's a valid question. This is pretty serious; it's crazy to me to see the likes on "I hate my job, accelerate".
Great video. 👍🔔 Give us the Ai👁️
"Only Advance" Wade
"my view" but hundreds of people walking down both paths following giant RVs/Tanks
You mentioned Poe, but didn't do it justice. Poe is a subscription service that lets you access all the major AI models, including the newest ones you have recently mentioned, for just one subscription price. Even if I subscribe to Gemini Advanced and Claude Pro, they have short use limits. With Poe, I can keep using the same model, or, in the case of Gemini, an even newer version.
What did the teacher say to his class is all I wanna know