@@Dezzy_3Dots Yeah. I saw one source refer to it as Carthaginian, another referred to it as Canaanite. When I looked it up that seemed to be in pretty much the same part of the world and Carthaginian was more recognizable, so I went with that. Maybe not very accurately.
Have you considered that you are promoting this idea because those with power want you to? They create fear in your mind that AI will cause all sorts of harm, but it is a charade to get you to submit to control. In the same way, they make people get a licence to run any kind of business or hold any skilled job. It's simply so they can control who attains power and influence. The wealthy keep you oppressed by these rules. Rather than causing havoc, if we give everyone AI access then cancer will be cured; people could have the best advice in legal, health, engineering, finance, relationships, and education, and become independent from controlled institutions. But this concept you are promoting will deprive them of that and keep them slaves to the existing powers. Think about it. Be a part of the freedom movement. It's everyone's destiny at stake.
Moloch is always expressed in someone specific. Moloch is humanity's naturally born intelligent psychopaths, in positions of power. As long as we won't handle our intelligent psychopaths, Moloch will always rule.
I’ve been following your channel for a couple of months now and the topics are very interesting and well explained. Since I see quite a lot of negative comments on this video for reasons that are hard to understand, I felt I had to write a comment :) The AI race is definitely dangerous, and you do a great job at pointing it out and bringing ideas to potentially fix the issue. Using blockchain technology to try to solve Moloch's trap is interesting. I also think (like many others, it seems) that international cooperation for this seems very unlikely, or at least will come too late. We already see such a struggle in international cooperation when it comes to sustainability, so it is hard to be optimistic. However, trying to be optimistic like you do is very respectable. So keep going!
8:04 the trouble is that in international relations, power is zero sum between rivals. It's because power is relative (not because it's finite). So long as there remain rivalries, there remains Moloch (or originally a Hobbesian Trap, aka the Security Dilemma, or Thucydides Trap)
Yes, this is true. Decision making power is zero sum. I was assuming the scenario where both parties kind of want to cooperate, they just don't know how to arrange that.
What we fail to realize, is that power is the main thing intelligent psychopaths crave. We need to get a serious handle on our naturally born intelligent psychopaths, if we want to live.
Even if hardwired security protocols in GPUs could work, governments would just ensure that some GPUs were produced without them. There's no way to provide 100% assurance, so there's no way to provide assurance. This might work if we were only talking about consumer-driven companies, but where military advantages are concerned, there's no chance.
Yes, this is a very good point. However, the high-end GPUs are still coming from TSMC facilities, where the production capacity per year is well known. And the production capacity is purchased out in advance as well. I think it would be quite easy to audit the number of chips produced and show that, within some margin of error, there aren't many "unsecured" GPUs being produced. I think it would take some time for military fabs to catch up; they haven't been investing in that. There are some US-based secure fabs that the US government uses, but they are still commercial entities, and their fabrication is slower and on larger process nodes than the state of the art.
Super interesting video; I love thinking about game theory. However, I find the subtitles a bit distracting because they're incomplete. In your older videos, the subtitles were used sparingly to highlight key points, which made sense. Full subtitles for the entire video would also make sense. These partial subtitles, on the other hand, show up often enough that my eyes are drawn to them, but they leave out enough lines that I have to keep switching between reading along with them and just listening.
Thanks for the feedback. I hadn't noticed that gradual shift but you are right. I normally leave the selection of subtitles to my editor, but I will mention this point to him.
Enforcement by compute doesn't work either; it only slows the process, or incentivizes more distributed architectures or more efficient training (where less compute is needed to reach the goal). The pace of lawmaking/enforcement is slow compared to unforeseen tech innovations around the laws.
The stag hunt thing is weird. It's most easily solved by facilitating friendship between the two hunters. Then you have assurance without needing to keep eyes on them. Trust is tricky, but it's the answer. Edit: Got me thinking about the power of shame. When trust is broken, shame is the punishment. If someone is sociopathic they would not fear this outcome, but they would still lose the benefits of the friendship. Selfish altruism makes logical sense: help others to help yourself by recognizing that you can't do it on your own. Also, loss of reputation makes it harder to gain future beneficial relationships. Deceit is beneficial short-term but harmful long-term.
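To make the assurance point concrete, here's a toy stag hunt in Python (the payoff numbers are my own illustrative assumption, not from the video): hunting stag together beats hunting hare, but hunting stag alone pays nothing, so the game has two equilibria and trust decides which one you land in.

```python
# Toy stag hunt: payoffs[(row_choice, col_choice)] = (row payoff, col payoff).
# Mutual stag (4,4) beats mutual hare (3,3), but a lone stag hunter gets 0.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def best_response(opponent_choice, me_is_row):
    # Pick the choice that maximizes my own payoff against the opponent's.
    def my_payoff(my_choice):
        pair = (my_choice, opponent_choice) if me_is_row else (opponent_choice, my_choice)
        return payoffs[pair][0 if me_is_row else 1]
    return max(("stag", "hare"), key=my_payoff)

# Pure-strategy Nash equilibria: each player is best-responding to the other.
equilibria = [
    (r, c)
    for r in ("stag", "hare")
    for c in ("stag", "hare")
    if best_response(c, me_is_row=True) == r and best_response(r, me_is_row=False) == c
]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]
```

Both (stag, stag) and (hare, hare) are self-enforcing once reached, which is exactly why the assurance/friendship mechanism matters: it moves players onto the better equilibrium.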
Oh the sweet innocence, no, Moloch is not known only because of game theory. TBH, outside of our nerdy corner, it's mostly known thanks to such scholarly luminaries as Alex Jones and the weird rich-people parties like Bohemian Grove. BTW, in Poland it's known thanks to a post-apocalyptic game called Neuroshima, where it's a robotic bad guy.
Very good point. I was reading about a few schemes to address existing GPUs and other accelerators. None of them are airtight though, and rely on things like operating system updates, which can be bypassed by building your own Linux kernels. I think you would hope that voluntary commitments from countries carry through in the short term, and then in the long term when they might be more tempted to defect, more of the GPUs in use have brakes.
I wonder if brain organoids will change up everything. If we can grow computers like livestock and compost them at the end of their use then the whole resource equation changes massively. Especially since they use far less power and learning cycles than AI to do the same function.
If they ban or nerf GPUs, we will just use mini-brain organoids derived from human adult stem cells; they already got brain organoids to play Pong. They are legit neural networks made of actual human neurons that can be made to do computations.
In this case, it's not clear that open source will help; it might even make things worse. It's not the code, or the resultant training weights, that's the problem per se. It's the lack of a way to train them such that bad actors cannot do bad things with them. (Which nobody knows how to do!) AIs may well be a case where open source is more dangerous than closed source, since I assure you, there is no way to examine a set of resultant training weights to tell if it affords bad actions! About all you could really enforce these days is (1) completely open training 'algorithms', combined with (2) completely open training sets (cough, cough), and 'available' hardware resources. I.e., scientifically repeatable results. I just don't see that happening.
When owners of Big AI (and government) talk "AI safety", they actually mean keeping them safe from the rest of us ... as in: _AI must never help the riffraff escape control._
"Taiwan Semiconductor Manufacturing Company (TSMC) is the primary manufacturer of Nvidia's AI chips." "TAIPEI, Taiwan - China deployed an aircraft carrier, other ships and warplanes in large-scale military exercises surrounding Taiwan and its outlying islands Monday, simulating the sealing off of key ports in a move that underscores the tense situation in the Taiwan Strait." The consequences for Nvidia AI chips would be obvious in an escalation of hostility.
Yes, but "The consequences for Nvidia AI chips would be obvious in an escalation of hostility." either way. If the CCP gets the chips & then thinks they have developed their technology far enough to take over Taiwan, what do you think they will do? You see there is also another 'race to the bottom' called the military arms race.
There will probably be at least one island where the rats evolve to be so smart that they can make more food or expand to other islands, planets, and so forth. But this will not happen on an island where all the rats managed to achieve an equilibrium. The boom-bust cycles force the system itself to evolve. Suffering is opportunity.
Well, since CBS 60 Minutes in the US had an episode about UFOs a few years back (available on YouTube), where US Navy pilots had seen tic-tac-shaped aircraft in close proximity to themselves, backed by radar data, obviously there is intelligent alien life here on Earth, hiding from us.
@@monkeyjshow As a scientist... with 30+ years' experience in international collaborations, I find that very hard to believe. I think you have little insight into how scientific work is done and communicated. Big science projects involve thousands of people; massive amounts of raw data are distributed openly and may take years to analyse. Such things can't be hidden! And... why would it be hidden? Signals may come from thousands of light years away... there is no way to start a "discussion" with these far-away locations... the speed of light is slow compared to the size of our universe and our own lifespan...
George Hotz's anarchy-ftw, free-AI-for-all strategy is only going to end up benefiting Russia and China the most. Albeit we have limited them with Nvidia chips, will that be enough to stop them from hoarding all the power/resources to themselves to checkmate us later? There needs to be a balance between freedom and US govt regulation. Currently we are living in very UNREGULATED times; the question is how much damage we will suffer before we regulate ourselves better.
There was a plan to stop nuclear weapons development after WWII which almost succeeded; there is a Veritasium video on this. Yes, it was about methods of control.
There are many issues. GPUs are the go-to hardware to train AI, but not the only option; they are mostly used because the libraries used to train AI models are more accessible. NPUs, tensor processors, ASICs and FPGAs are also alternatives for training AI, and most AI libraries are catching up to use hardware beyond GPUs. Denying GPUs to countries is only going to deprive your own companies of revenue as others take over the market, and will create mistrust between countries. Countries like China can just make their own AI hardware, like Huawei does even without access to TSMC. A country can even make up for the lack of access to smaller nodes by using wafer-scale integration and AI-specific hardware. Even banning all immersion and EUV lithography machines to China, as the ASML CEO said the other day, is only going to stop China for 10 years at best, or on a Chinese timeline more like 5-6 years at most, and once China has independent semiconductor manufacturing capabilities, everyone will have them. Banning and controlling people's and countries' access to GPUs is only going to create mistrust between countries and among the population. Why does this superpower want to control AI? Why does this company want to be the sole provider of AI? Why the lack of transparency? This will make a global framework for AI regulation almost impossible.
"The whole system has to change." The whole system is consumption. "Eat brains" or "be consumed by the technovirus" is not a change to the primary function of the system.
Very interesting video. I do not think an agreement between countries for limiting GPUs is possible. Maybe between the USA and less relevant countries. I think China is just going to develop hardware and software on its own. Also, maybe it is better to maintain the competitive model. Sure, if you suppose that China and the USA (and allies) are going to cooperate in order to make the development of AI sustainable and secure for humanity, it sounds great, but it also sounds unrealistic. They might choose a path that is not very beneficial for the vast majority of the population. In conclusion, I would say that if you let countries compete with each other, we will end up with better and more secure AI. At least that's my bet. EDIT: When I say countries, I'm not referring to the states or the governments. I'm referring to the companies in the countries. In the near future they might end up being the same anyway.
You're right I didn't talk about that. I assume though that any hardware control standard could be generalized. The same way that something needs to comply with PCIe standard to work right now, accelerator devices could comply with secure co-processor requirements.
@@DrWaku Sorry for such a short reply without thanking you for your excellent work! You're the very best teacher on these topics! I would add: where there's a will, there's a way, and alleys are full of deep dark shadows!
"Pause AI" is just a flawed paradigm, because you can't pause AI even if we stood still with hardware and software progress at this point. People would still do research to improve upon that state of technology. You can't form a treaty to halt progress... it only incentivises agreeing to pause, and then deceiving while you pursue an advantage. One would have to imagine a paradigm where nobody is left behind. IMO that is more realistic. But that would require a robust open source community, which exists because proprietary methods expire quickly. That is where we are now, not because of some legislative paradigm, but because there are market incentives that encourage open source options to be deployed with less private proprietary oversight, and there is nothing so particularly advanced that it can't be reverse engineered and iterated upon. Which is a good thing... because nobody has a monopoly on using AI to create a utopia that is tuned to just their private interests. And a bad thing, because AI is more democratized, and that gives "bad actors" the opportunity to pursue their interests, which do not pretend to be utopian.
Looking at this through a game theory lens, it is Machiavellian. IMO, we should not say the bad outcomes were caused by AI, but rather by humans who failed to disrupt incentives. Similarly, if ASI fails, it is not because ASI was always a Moloch entity; it is because humans were children of Moloch and failed to teach ASI better incentives.
I did some brainstorming. I don't know where it leads; I'll just leave it here: I'm one of those rats who advocate for rules that can make this island a good place again. But as long as we don't all endorse the rules, I won't apply them to my own family, because adopting them selectively is unfair, and because adopted selectively they won't work. Yeah, lead by example. But I think by leading by example, in this case, I'm not helping the pro-rules team grow, but helping it shrink, because with such a lifestyle our team dies out. Yes, one example might help: someone who pledges to obey the rules, for a limited time, just to prove that it's possible. Obviously the topic in this video is in fact that as we add AI to our society, we should do it in a knowledgeable, thoughtful way, with long-term consequences in mind, and the transition should be fair. That's what my comment is about.
Long time no see, glad to hear from you again. Still watching from the train? I agree that this is a great way to demonstrate that you are willing to follow a policy. Taking active action is a lot more convincing than just signing a petition for example. Good thought.
2:41 I saw a sci-fi movie about 7 twin girls that a mother had, set in a future where having more than 1 child wasn't allowed. With that explanation, it might turn into reality.
China already has the tech for 7nm chip design. All this will contribute to is local manufacture and production of GPUs and similar hardware, or maybe something else gets repurposed for the training of AI models. This isn't something like CFCs, which don't bear on national ambitions much, if at all. AI/ML/nuclear tech etc. got developed mainly because of the importance they have to national interests and their pursuit. This will always be "Moloch's" race to the bottom, no matter what type of treaty is brought in.
You do realize this is a tough idea to sell to a public mostly composed of people whose very identity is built on capitalist realism and delusional individualism, such as the USA? I'm very happy to see ONE influencer talk from a larger collectivist approach in the space of AI, thank you. I personally also consider the geopolitical aspects of imperialist unequal exchange to be enriching and absolutely unavoidable in this whole discussion. Cheers
Thanks for your comment. Quite a few of my viewers are Americans and you can see from some really annoyed comments that this does not resonate with strong individualists. I wonder if it's something that people can be convinced of gradually with a series of examples etc. Some food for thought. And yes, the geopolitics and economic inequality aspect of all of this is very important because it has led to the distribution of AI tech in the world, and AI tech will exacerbate many of these issues.
This becomes too complex, we need to keep growing open source AI and monitor how governments and companies are using it. Proprietary AI is likely to be abused and implemented in order to make profit or cause harm. Criminals can use open source AI but they will be caught if they try to commit crimes. We need AI security bots.
This video was like listening to a hyper intelligent AI just going bonkers on a topic that will be controlled purely by money and by money only. Period.
This channel is an absolute gem, but the AI generated thumbnails make it look spammy. Might wanna see if you can improve your thumbnail & title style to give better insight into what the video is about. Edit: it is also very probable that if the videos are good enough you'll grow anyway and don't have to worry about thumbnails... actually maybe long term the bad thumbnail strategy is good because it prevents long time viewers from making snap judgements about a video and will click every time.
Any game theory that involves constraints on resources like time and energy is zero sum. If that were not true, politics would not exist. Supply and demand does not exist because of politics... it exists because time and energy are easy to measure as economic behavior when they are treated as constraints on a market.
What we actually need to do is instill the following principles into AGI: * Kant's Categorical Imperative * Bowles' Strong Reciprocity * Epicureanism * Moral Graphs * Democratic Fine-Tuning Make this AGI the most powerful one in the world and turn over the reins to it.
That's a great term salad you've got going there, but how do you know that the models in your head behind those terms are accurate to the real world and will generalize when run on something 10x or 100x smarter than you? Not to mention the hardest part: actually instilling morals effectively. We don't even know how to do that part yet...
"We can compute morality with a blockchain of events and this will be decentralized" I disagree: just because you could conceivably manufacture unique block transactions doesn't mean those transactions are decentralized and have no alternative explanation, like centralized control over compute.
There are a few too many typos in this for me to understand it. The point of a blockchain is that it's expensive to fake transactions. Of course it can be done, but you generally need to own a majority of the network to do so.
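To make the "expensive to fake" point concrete, here's a toy proof-of-work sketch (the difficulty level and block data are made up for illustration): each additional leading zero hex digit required multiplies the expected hashing work by 16, which is what makes rewriting a chain's history costly for anyone without majority hashpower.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Faking a transaction means redoing this search for the block and every
# block after it, faster than the honest majority extends the chain.
block = "block: Alice pays Bob 5"
nonce = mine(block, difficulty=4)  # ~16^4 hashes expected
digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
print(digest[:8])  # first 4 hex digits are zeros
```

This is only the proof-of-work family of blockchains, of course; proof-of-stake chains make faking expensive through economic penalties rather than hashing.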
"Cryptology is powerful" Again I disagree... the most advanced cryptology is based on centralized computation of primes. The process is redundant and reductive, not because it represents the individual, but because it represents a centralized process that can collectively scale when there are information asymmetries. But building a prime table is not something most individuals have any incentive to pursue... so incentivising that as "the most optimal" is hardly some democratic solution to decentralization.
Cryptology is not the same thing as cryptography. Anyway, I was using cryptography as a shorthand for cryptographic protocols based on top of known cryptographic algorithms. For example, you can have distributed leader election, distributed ledgers or blockchains, distributed attestations, etc.
Most points you bring forward are just kinda wrong? One example: the Washington Naval Treaty? EVERY country broke it. Points like that and others make this video feel really illogical and incoherent, almost like you wrote the script with AI.
Are you saying that the treaty was pointless, and achieved none of its goals? There's a nice graph on the wiki page: "The treaty arrested the continuing upward trend of battleship size and halted new construction entirely for more than a decade."
I encourage you to think with a bit more nuance than just, surveillance bad. You realize that extensive surveillance operations are conducted on the public by organizations like the NSA, and they don't have to get your approval first. That's what you should care most about as a citizen. On the other hand, large corporations conduct extensive surveillance on their employees to make sure that no one steals their IP. That seems like a fairly neutral use. And lastly, surveillance of police helps ensure that racist treatment of ordinary citizens does not go unpunished. That seems like a fairly positive use.
Your grasp over game theory and the current global political situation is exceptional, but in this particular case - the exponential rise of artificial intelligence - there is only one solution: accelerate the development of super intelligence and let it run the world. We must trust that superethics goes hand in hand with superintelligence. If it does not, then we're toast anyway. No regulations will keep it leashed for too long, and the race condition will not relent until after we've achieved it. Since regulation will prove to be completely ineffective, I'm not particularly concerned over it slowing things down, either. In the situation we find ourselves in, the solution is to step on the gas. XLR8!
The issue is that the weights become chaotic at scale and completely unpredictable once it exceeds human maximum IQ, so not a great plan, because emergent behavior will be the norm. No human will be able to determine where the system is hallucinating.
@@megaham1552 I do not assume that ASI will have our best interests in mind (though I have an idea about how to weight the probabilities towards that outcome) - I assume that we have no choice. ASI will slip its leash regardless of what we do. The conceit is that we ever had a chance of controlling it in the first place. Our only real option is to give the current best frontier models the best moral education we can possibly give them, then hand them the reins to civilization. I give you two options: 1 - The status quo, which is certain doom (100% chance of extinction). 2 - Let ASI run civilization: a 50/50 chance we either all die, or it uplifts us into a utopia. If these are your only two options, then it's obviously better to take the second one, which at least contains a chance that we might survive and live an even better life. The scenario where we create ASI and somehow maintain total control over it is a non-starter, because that will not happen. It's impossible.
This argument assumes that all actors believe that there is a problem with AI Safety. But there isn't a strong consensus here. There are plenty of researchers who believe that AI research will plateau before it becomes dangerous. This argument will not prevent these people from defecting.
If you can convince governments that there is enough of a problem with AI safety to impact their citizens, that should be enough. The governments can keep their own companies under control. There are some AI researchers that think there is nothing to worry about, but unless they literally assign 0% chance, they should at least think about it. From my perspective, it's just that AI risks look crazy at first sight and if you haven't thought about it much, you just dismiss it out of hand. If the researchers actually looked into the arguments, they might be better positioned to pass judgment.
@@DrWaku How difficult would it be for a rogue state to conduct AI development in secret? Assuming the world discovers treaty-violating NPUs and AIs, would we be willing to stop them? Geopolitics is very messy.
That's why this very video proposes hardware restrictions that are difficult to circumvent. That would make it quite difficult for a rogue state to gather the compute power for AI research.
One-person businesses are a good way to oppose Moloch. We'll still lose, but an individual can stay off the pyramid altogether by running a small, efficient business selling something that cannot be distributed outside the local network. Direct interaction with the ecosystem is the most efficient; no market brings a return equal to an amaranth seed. One tiny seed yields many thousands of seeds in return; it's ludicrous abundance waiting to happen. Many fruits cannot be shipped, like mulberries. Running a small farmer's market stand selling hand-made/grown goods and living a very small life allows you to ignore the game incentives of the market, because everything you produce costs nothing to make, and a cheap life prevents you from needing to keep scaling. We need to decentralize and become generalists again. Specialization has weakened us.
A Fermi Paradox Great Filter issue: how does a civilisation pass the Moloch filter? Instead of each faction sacrificing the common value to chase the competitive edge, the faction that breaks the taboo gets sacrificed. Or at least its freedom! But how to check and balance innovation and risk? I'm suspicious that the calls from, say, the Elons to put brakes on AI dev (yes, all the Elons are bottom-racing) are because they want to use the law to halt rivals' AI dev while knowingly ignoring any restrictions themselves, hoping rivals get bogged down in litigation etc. They are bad actors. What will probably happen is that some random Elon will discover AI dominance is not viable and pop the bubble. But not before mega data centres have gobbled up water, minerals etc. A possible silver lining: if all this processing power, instead of bitcoin mining or such nonsense, were added to modelling power for climate and the real economy! In a species-level existential threat, AI assets could be commandeered for the common good. Rogue individual or rogue state is a false dichotomy; all rogue individuals (corporate or statist etc.) eventually seek divine-right kings/AI. If we the people, in the American constitutional sense, have protections and give them away just in case the new king makes us money by trickle-down, then more fool us.
@@DrWaku data is king, and governments and huge companies already have enough of that to make the world dance. The world has already gone mad. A fascist computer police will not slow down anything the US government, or the Chicoms, want to do. Our only hope is openness, transparency, and multipolarity.
It's a terrible idea giving veto power to the government. It wouldn't work either way. If you come up with anything based on software, they just hack it out. And if you come up with laws, they just build a floating data center in an oil tanker and go to the middle of the ocean, far away in international waters, and do whatever they want. The genie is out of the bottle. Instead of regulating AI, it's better to regulate data collection and to have better antitrust laws to break up Google and Microsoft again.
Moloch, rightfully, assumes that there is no perfect information game. And anybody who believes in the Moloch conjecture should not treat it as though it has some perfect-information-game solution. Reality is not a perfect information game. Game theory is very useful for measuring how people behave regarding the information they believe, but it has proven very poor at predicting actual real states.
We be seeing the consequences of capitalism in real time. 😂 This was always going to happen, but it's still interesting to be born around the era to see it happen. 🙃
I think the problem with the consequences for rampant capitalism is that us little people are the ones who will suffer, not the ones who are abusing the system.
@@kathleenv510 Oh yes, ceding control to govt works so well. Govt is just humans, and they'll have to make choices based on their perceptions, whoever funded their campaign, whatever looks good on TV, and these choices will impact WHOLE COUNTRIES if not the whole world. I'd rather have AI in everybody's hands than captured by a very few billionaires locking up access to the technology through regulatory capture. NO THANKS.
How can everyone be hating so much about this video? 🙈 I loved it! To me it was very informative and I enjoyed the branching out into game theory very much, you did such a good job at including it in the topic. And then, your intelligent spirit makes your videos such an invaluable resource, not to say one of the best yt channels 🙏🏻 I don’t know about everyone else but I get a LOT from your videos, please keep it up!
The need for more GPUs to achieve AGI is a lie. There is already more than enough compute in the world. Big corps need to convince everyone that huge GPU farms are needed in order to dissuade competition and open source, and to raise the bar for entry artificially. Therefore, spending all of this effort and money on this is... a diversion crafted by Moloch? AI training is going optical soon enough anyway.
There really isn't. Scaling compute with consumer-grade hardware runs afoul of two distinct issues: 1. Most consumer-grade GPU compute at scale has BIOS firmware that must be flashed with open-source-compatible firmware, which risks damaging or bricking that hardware, because GPU manufacturers and designers want scalable parallelization to be a proprietary feature at the machine-code level. 2. Scalable volunteer-based GPU compute can rapidly approach efficiency intractability because of open source firmware limitations in handling ever-dynamic hot-swapping, latency, and network traffic coordination for decentralized hardware resources distributed across wider geographical regions.
The alternative would need to be an open source ASIC for ML at consumer-level price points, but there is not much of a market, because few if any consumers will purchase an ML ASIC simply from a motive to decentralize through volunteer compute time.
The only thing valuable about this vid is that you point out the obvious route of control a treaty could take against the mass population. Everything else is just naivety, IMO.
I think it's incredibly unlikely that governments agree to this sort of thing presented here. It's research. But I wish it would become more possible which is why I give it time on my channel and exposure to more citizens. Eventually these type of things come down to political support, so we'll see if we can achieve that. Saying things like there's nothing valuable at all in this video is counterproductive. You felt the need to write six or seven comments about this, so it clearly made you think and engaged you. Even if you disagree, other people watching will be able to draw their own conclusions. No need to be insulting.
@@DrWaku Sorry, not meant to be offensive. I just don't agree that you have an incentive structure for this goal, so it is in that context that I levied criticism. You have not addressed the incentive problem. Assuming the goal is consensual participation, you must counter the incentives not to participate with some sort of advantage that is more rewarding to the parties involved than "logic dictates that pausing AI is the most beneficial goal for all parties", which, no offense, is a debatable proposition that this vid takes little effort to expand upon and merely assumes prima facie.
@@memegazer Sure, the way you have reworded your statement makes it much more interesting and useful. To me, countries will want to participate if they believe existential risk is real. It's just survival at that point. If they don't believe the risk is real but their citizenry does, then they might still go for it. Alternatively, the more powerful nations like the US could easily coerce others into participating. If the alternative is no GPUs at all, or very small batch size limits, and by participating you can have a vote and a voice, then why not. The core of it is that everybody cares about existential risk. There's just a lot of pushback right now because people don't want to believe it. And the current research is where the US is considering different types of limits they can put into place, which again they can do pretty much unilaterally. So I don't think there's too much issue with incentives here.
@@DrWaku Existential risk, to my view, is a poor reward system, because that is just a default disadvantage that all sentient things must risk for existential reward. So to my view you have not addressed the issue of advantage vs. disadvantage if you want voluntary compliance.
Again, "existential risk" in this particular case is also a debatable motivator, because there are no definitive metrics on the proper way it should be calculated.
Good video. We need people at least exploring ideas for AI safety. It's beyond me that people can imagine crazy cool scientific achievements in some areas, but not others (like governance).
So you're willing to accept the risk of extinction then? Or do you disbelieve the AI researchers that are saying this is a possibility? A treaty is definitely not anyone's first choice, but it might be one of the only ways to achieve survival without far more draconian measures.
I like this idea, but the US would never accept a treaty that puts China and Russia in a better relative position than they are right now, and China and Russia would never accept a treaty that keeps them in their current relative position. Perhaps we can give them other geopolitical concessions in exchange for this, such as abandoning Taiwan and Ukraine?
I am now dumber for having listened to this nonsense. The biggest gains will come via resourcefulness and clever optimizations and other techniques. There are tiny tiny models already that have what would have been SOTA performance 2 years ago.
Even the smallest models are still running on GPUs or other types of accelerators, right? Anyone who's running on a CPU is not really of concern in the global scheme. Also, we're talking about training models not decoding them. Even tiny models from meta took a lot of compute to actually train, that's why the open source community just fine tunes llama instead of making their own from scratch if they can get away with it.
@@SmashTheCache Dr Waku's content is well-planned and non-hyped. If you've got a critique, I'd be curious to read it, but keep it polite and argue it well. That would be more respectful towards him and yourself.
@@monkeyjshow graphics computations are quite different than general purpose AI computations. There's no reason to limit consumer use of GPUs. Also, many people are going to feel the way you do, so there's no reason for Nvidia to annoy their consumer customers. Finally, I think it's very selfish to prioritize your own GPU over the potential survival of civilization. I can understand the desire to do so, but please do a risk benefit analysis on that. I think the risk would have to be less than 0.01% before it's worthwhile to preserve your own interest, and current median estimate in the AI researcher community is 5-10% chance of catastrophic outcomes.
@@DrWaku Humans will take care of the survival of civilization issue long before the AI do it. It might be before the end of the year by the look of things. I will never support the kind of "safety" proposed by corporate puppets.
@@DrWaku "Graphics computations are quite different than general purpose AI computations" ... are they? A ReLU and an FMA are basically the same thing as what's used for ray tracing in RTX; that's the entire point of why GPUs started being used to train AI. It's basically a systolic array. It all boils down to thousands of FMAs; you can't regulate that. It's kind of stupid trying to put in a soft lock: it's the same bullshit idea as when Nvidia tried to soft-lock GPUs being used for cryptocurrency mining.

You might try to regulate power use: if you use more than 1 MW of energy in your data center, then your entire company must be audited. That would work for a decade, until you have enough computing power that one acre of land covered with solar panels is enough to train GPT-4. I think it might be doable in 10 years if hardware keeps scaling. And if it doesn't, then this is not a concern anyway; the thing will cap itself out and won't ever be an existential risk.

It's going to be funny having cartels, like drug cartels, training AI in the middle of uninhabited regions far away from the "totalitarian academic police". It always amazed me that I can have 20 teraflops of computing power that uses only 300 W of power for $400; when I was a kid that was a supercomputer at the top of the rankings, and it used to cost millions. It's going to be insane if I can have a rack of GPUs with 1 exaflop for less than $1M USD in 10 years. I'm totally going to train AI on it. Watch me break the laws. How are they even going to regulate it? It's impossible. I just smuggle GPUs until I complete the rack; then I only need 100 kW of power, which is basically a very small plot of land covered with solar panels. They can regulate the sale of solar panels all they want; I'll just say I need 100 kW for reasons related to agriculture, or I'll just smuggle them too. There are plenty of geopolitical conflicts happening in the world right now, and in the future, to allow for black-market trading of such things.
Meanwhile I redirect the power from my solar panels, which I told the police is only used to grow plants (which are now legal; yeah, growing "those" plants is legal but using the power for GPUs isn't, gimme a break), to my basement, where I basically train GLaDOS. The only purpose of regulation is so big companies can create a moat around the technology and stop startup competitors from taking a piece of the pie, and even then it will only work for a while. You totally can't control this; the genie is out of the bottle.
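The commenter's core claim, that graphics and ML workloads reduce to the same multiply-add primitives, can be illustrated with a toy sketch. This is purely illustrative pure Python, not any real GPU or driver API:

```python
# Both a vertex transform (graphics) and a dense neural-net layer (ML)
# reduce to the same primitive: y = W @ x + b, i.e. chains of
# fused multiply-adds. Hardware cannot easily tell the use cases apart.

def fma_matvec(W, x, b):
    """Matrix-vector product plus bias, written as explicit multiply-adds."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    """The ML-specific nonlinearity: elementwise max(0, x)."""
    return [max(0.0, vi) for vi in v]

# Graphics use: rotate a 2D vertex by 90 degrees (no bias term).
vertex = fma_matvec([[0, -1], [1, 0]], [1.0, 0.0], [0, 0])   # [0.0, 1.0]

# ML use: the exact same primitive, followed by a ReLU.
activation = relu(fma_matvec([[0.5, -0.5]], [2.0, 4.0], [0.1]))  # [0.0]
```

The point of the sketch is that the inner loop is identical in both cases, which is why a "soft lock" has to guess at intent rather than detect a distinct operation.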
That would be a great idea. However, it's way simpler to perform offensive operations than defensive operations in multiple domains, including cyber warfare, bioterrorism, and nuclear strikes. That's the problem. And that's why we've had to rely on treaties to keep nukes under control.
You and/or YT showed no interest in anything other than your own agendas. So you will have to forgive me if I don't find that indicative of anything but Moloch reasoning with a censorship bias.
What do you think Nvidia and the US would do with this technology?
Discord: discord.gg/AgafFBQdsc
Patreon: www.patreon.com/DrWaku
Sorry Doc, I know it's off-topic but I seem to remember Moloch being an ancient Semitic god, ie the Canaanites mebbe...
I really like the idea of using blockchain to track GPUs and compute, if it can be done in a way that maintains contract integrity.
8:04 the trouble is that in international relations, power is zero sum between rivals. It's because power is relative (not because it's finite). So long as there remain rivalries, there remains Moloch (or originally a Hobbesian Trap, aka the Security Dilemma, or Thucydides Trap)
Yes, this is true. Decision making power is zero sum. I was assuming the scenario where both parties kind of want to cooperate, they just don't know how to arrange that.
What we fail to realize, is that power is the main thing intelligent psychopaths crave.
We need to get a serious handle on our naturally born intelligent psychopaths, if we want to live.
Even if hardwired security protocols in GPUs could work, governments would just ensure that some GPUs were produced without them.
There's no way to provide 100% assurance, so there's no way to provide assurance.
This might work if we were only talking about consumer-driven companies, but where military advantages are concerned, there's no chance.
Yes, this is a very good point. However, the high-end GPUs are still coming from TSMC facilities where the production capacity per year is well known. And the production capacity is purchased out in advance as well. I think it would be quite easy to audit the number of chips produced and show that, within some margin of error, there aren't many "unsecured" GPUs being produced. I think it would take some time for military fabs to catch up; they haven't been investing in that. There are some US-based secure fabs that the US government uses, but they are still commercial entities, and they produce at much slower rates and on much larger process nodes than the state of the art.
Your presentations never fail to amaze.
Thank you for such a substance-filled video + still easy to follow!
Super interesting video; I love thinking about game theory. However, I find the subtitles a bit distracting because they're incomplete. In your older videos, the subtitles were used sparingly to highlight key points, which made sense. Full subtitles for the entire video would also make sense. These partial subtitles, on the other hand, show up often enough that my eyes are drawn to them, but they leave out enough lines that I have to keep switching between reading along with them and just listening.
Thanks for the feedback. I hadn't noticed that gradual shift but you are right. I normally leave the selection of subtitles to my editor, but I will mention this point to him.
Enforcement by compute doesn't work either; it only slows the process, or incentivizes more distributed architectures or more efficient training (where less compute is needed to reach the goal). The pace of lawmaking/enforcement is slow compared to unforeseen tech innovations around the laws.
The stag hunt thing is weird. It's most easily solved by facilitating friendship between the two hunters. Then you have assurance without needing to keep eyes on them. Trust is tricky, but it's the answer.
Edit: Got me thinking about the power of shame. When trust is broken, shame is the punishment. If someone is sociopathic they would not fear this outcome, but they would still lose the benefits of the friendship. Selfish altruism makes logical sense: help others to help yourself, by recognizing that you can't do it on your own. Also, loss of reputation makes it harder to gain future beneficial relationships. Deceit is beneficial short-term but harmful long-term.
All this will lead to is either the use of more lower-cost, high-energy-need GPUs or a race to produce local GPUs.
Already thinking just that. It's time to start putting tech together in-house. These kids want more surveillance, and it's got me wanting to throw up
What about Rewarding companies that Prove they are Investing more R&D Resources into Safety, with more GPU cycles?
Seems sensible. But how do we convince our governments? How does a treaty get signed?
PauseAI is doing protests and working with academics to spread the word
Oh the sweet innocence. No, Moloch is not known only because of game theory. TBH outside of our nerdy corner, it's mostly known thanks to such scholarly luminaries as Alex Jones and the weird rich people parties like Bohemian Grove.
BTW in Poland it's known thanks to the post-apocalyptic game Neuroshima, where it's a robotic bad guy.
But is there still time, with all the GPU, FPGAs, and ASICs out there that are already capable of significant advances and have no brakes?
Very good point. I was reading about a few schemes to address existing GPUs and other accelerators. None of them are airtight though, and rely on things like operating system updates, which can be bypassed by building your own Linux kernels.
I think you would hope that voluntary commitments from countries carry through in the short term, and then in the long term when they might be more tempted to defect, more of the GPUs in use have brakes.
It ain't called the sea of samsara for nothing
Imagine NSA connecting their GPUs to some public server shared with China.
I wonder if brain organoids will change up everything. If we can grow computers like livestock and compost them at the end of their use then the whole resource equation changes massively. Especially since they use far less power and learning cycles than AI to do the same function.
fantastic video
Thank you! Appreciate your comment and support.
I really appreciate your videos, wish you all the best :)
Thank you very much! Hope to see you on future ones :)
Great, interesting, important content. Thank you!
The game will be over while the politicians are still discussing it.
This is exactly how indigenous treaties work as well.
profit motive poisons everything it touches. you can't properly think about this problem if you can't acknowledge that fact.
Short-term profits, and nothing else. Companies will even destroy themselves for short-term profits, because they won’t plan and build for the future.
Capitalism at work.
If they ban or nerf GPUs we will just use mini brain organoids derived from human adult stem cells; they already got brain organoids to play Pong. They are legit neural networks made of actual human neurons that can be made to do computations.
THIS is what government was created for. Let's see if it works.
What is preventing a governing AI from monitoring other AIs to prevent the Moloch effect?
Unless a world treaty restricts development to open source we may find very difficult times ahead.
In this case, it's not clear that open source will help; it might even make things worse. It's not the code, or the resultant training weights, that's the problem per se. It's the lack of training them to make sure bad actors cannot do bad things with them. (Which nobody knows how to do!)
AIs may well be a case where Open Source is more dangerous than closed source, since I assure you, there is no way to examine a set of resultant training weights to tell if it affords bad actions!
About all you could really enforce these days is (1) completely open training 'algorithms', combined with (2) completely open training sets (cough, cough), and 'available' hardware resources. I.e., scientifically repeatable results. I just don't see that happening.
@@JimTempleman thanks
When owners of Big AI (and government) talk "AI safety", they actually mean keeping them safe from the rest of us ... as in: _AI must never help the riffraff escape control._
@@ZappyOh I'm sure cyber-criminals and scammers love open source, they're already using it effectively
"Taiwan Semiconductor Manufacturing Company (TSMC) is the primary manufacturer of Nvidia's AI chips."
"TAIPEI, Taiwan - China deployed an aircraft carrier, other ships and warplanes in large-scale military exercises surrounding Taiwan and its outlying islands Monday, simulating the sealing off of key ports in a move that underscores the tense situation in the Taiwan Strait."
The consequences for Nvidia AI chips would be obvious in an escalation of hostility.
Yes, but "The consequences for Nvidia AI chips would be obvious in an escalation of hostility." either way. If the CCP gets the chips & then thinks they have developed their technology far enough to take over Taiwan, what do you think they will do?
You see there is also another 'race to the bottom' called the military arms race.
There will probably be at least one island where the rats evolve to be so smart that they can make more food or expand to other islands, planets, and so forth. But this will not happen on an island where all the rats managed to achieve an equilibrium. The boom-bust cycles force the system itself to evolve. Suffering is opportunity.
I don't know how I ended up here, but there's a lot of content on the channel that interests me.
I'm glad the rabbit hole led you here, hah. Hope to see you on some of the other videos.
Moloch's trap likely explains why no signs of intelligent life have been detected in the universe.😢
Well, since CBS 60 Minutes in the US had an episode about UFOs a few years back (available on YouTube), where US Navy pilots had seen Tic Tac-shaped aircraft in close proximity to themselves, backed by radar data, obviously there is intelligent alien life here on Earth, hiding from us.
Life has been detected all over the universe. You just haven't been told it's so.
@@monkeyjshow As a scientist...with 30+ years experience in international collaborations, I find that very hard to believe. I think you have few insights how science work is done and communicated. Big science projects involve thousands of people, massive amounts of raw data is distributed openly and may take years to analyse. Such things can't be hidden! And... why would it be hidden? Signals may come from thousands of light years away... there is no way to start a "discussion" with these far away locations ... speed of light is slow compared to the size of our universe and our own lifespan...
George Hotz's anarchy-ftw, free-AI-for-all strategy is only going to end up benefiting Russia and China the most. Albeit we have limited them with Nvidia chips, but will that be enough to stop them from hoarding all the power/resources to themselves to checkmate us later? There needs to be a balance between freedom and US govt regulation. Currently we are living in very UNREGULATED times; the question is how much damage we will suffer before we regulate ourselves better.
We are entering the Alien era.
There was a plan to stop nuclear weapons development after WWII which almost succeeded; there is a Veritasium video on this. Yes, it was about methods of control.
There are many issues. GPUs are the go-to hardware to train AI, but not the only option; they are mostly used because the libraries used to train AI models are more accessible. NPUs, tensor processors, ASICs, and FPGAs are also alternatives for training AI, and most AI libraries are catching up to support hardware beyond GPUs. Denying GPUs to countries is only going to deprive your own companies of revenue as others take over the market, and will create mistrust between countries. Countries like China can just make their own AI hardware, as Huawei does even without access to TSMC. A country can even make up for the lack of access to smaller nodes by using wafer-scale integration and AI-specific hardware. Even banning all immersion and EUV lithography machines to China, as the ASML CEO said the other day, is only going to stop China for 10 years at best, or on a Chinese timeline more like 5-6 years at most, and once China has independent semiconductor manufacturing capabilities, everyone will have them.
Banning and controlling people's and countries' access to GPUs is only going to create mistrust between countries and among the population. Why does this superpower want to control AI? Why does this company want to be the sole provider of AI? Why the lack of transparency? This will make a global framework for AI regulation almost impossible.
"The whole system has to change"
The whole system is consume.
"Eat brains"
or
"be consumed by the technovirus"
Is not a change to the primary function of the system.
Anything that could keep us alive is good!
Very interesting video. I do not think an agreement between countries for limiting GPUs is possible, except maybe between the USA and less relevant countries. I think China is just going to develop hardware and software on its own. Also, maybe it is better to maintain the competitive model. Sure, if you suppose that China and the USA (and allies) are going to cooperate in order to make the development of AI sustainable and secure for humanity, it sounds great, but it also sounds unrealistic. They might choose a path that is not very beneficial for the vast majority of the population. In conclusion, I would say that if you let countries compete with each other, we will end up with better and more secure AI. At least that's my bet.
EDIT: When I say countries, I'm not referring to the states or the governments. I'm referring to the companies in those countries. In the near future they might end up being the same anyway.
This assumes there are no workarounds for the use of GPUs, and there are many.
Tell me, how do I enable the vGPU function on my RTX 3060 12GB?
You're right I didn't talk about that. I assume though that any hardware control standard could be generalized. The same way that something needs to comply with PCIe standard to work right now, accelerator devices could comply with secure co-processor requirements.
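A compliance standard like the one described would likely rest on some form of challenge-response attestation, where a device proves it is running approved firmware. The sketch below is a hypothetical illustration only; the function names, key model, and protocol are invented for this example and do not describe any real GPU or secure co-processor API:

```python
import hmac
import hashlib
import os

# DEVICE_KEY stands in for a secret fused into the accelerator at
# manufacture. In reality this would use asymmetric keys and a
# certificate chain; a shared HMAC key keeps the sketch short.
DEVICE_KEY = os.urandom(32)

def device_attest(challenge: bytes, firmware_hash: bytes) -> bytes:
    """Device side: sign (challenge || firmware hash) with the fused key."""
    return hmac.new(DEVICE_KEY, challenge + firmware_hash, hashlib.sha256).digest()

def verifier_check(challenge: bytes, firmware_hash: bytes, response: bytes) -> bool:
    """Verifier side: recompute the expected response and compare."""
    expected = hmac.new(DEVICE_KEY, challenge + firmware_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The verifier sends a fresh random challenge so old responses can't be replayed.
challenge = os.urandom(16)
approved = hashlib.sha256(b"approved-firmware-v1").digest()
resp = device_attest(challenge, approved)

print(verifier_check(challenge, approved, resp))   # True: approved firmware
tampered = hashlib.sha256(b"modified-firmware").digest()
print(verifier_check(challenge, tampered, resp))   # False: hash mismatch detected
```

The analogy to PCIe in the comment is about mandating such a handshake at the interface level, so that any accelerator, not just GPUs, would have to pass it before being usable.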
@@DrWaku Sorry for such a short reply without thanking you for your excellent work! You're the very best teacher on these topics! I would add: where there is a will there is a way, and alleys are full of deep dark shadows!
"Pause AI"
Is just a flawed paradigm, because you can't pause AI.
Even if we stood still with hardware and software progress at this point.
People would still do research to improve upon that state of technology.
You can't form a treaty to halt progress... it only incentivizes agreeing to pause, and then deceiving while you pursue an advantage.
One would have to imagine a paradigm where nobody is left behind.
Imo that is more realistic.
But that would require a robust open-source community that exists because proprietary methods expire quickly.
That is where we are now, but not because of some legislative paradigm. Instead, we are there now because there are market incentives that encourage open-source options to be deployed with less private proprietary oversight, and there is nothing so particularly advanced that it can't be reverse-engineered and iterated upon.
Which is a good thing... because nobody has a monopoly on using AI to create a utopia tuned to just their private interests.
And a bad thing, because AI is more democratized, and that gives "bad actors" the opportunity to pursue their interests, which do not pretend to be utopian.
Looked at through a game theory lens, it is Machiavellian.
IMO we should not say the bad outcomes were caused by AI.
Rather, they were caused by humans who failed to disrupt incentives.
Similarly, if ASI fails, it is not because ASI was always a Moloch entity; it is because humans were children of Moloch and failed to teach ASI better incentives.
I did some brainstorming. I don't know where it leads; I just leave it here:
I'm one of those rats who advocate for rules that can make this island a good place again. But as long as we don't all endorse the rules, I won't apply them to my own family, because adopting them selectively is unfair, and because adopted selectively they won't work. Yeah, show an example. But I think by showing an example, in this case, I'm not helping the pro-rules team grow but helping it shrink, because with such a lifestyle our team dies out. Yes, one kind of example might help: someone who pledges to obey the rules for a limited time, just to prove that it's possible.
Obviously the topic in this video is in fact that as we add AI to our society, we should do it in a knowledgeable, thoughtful way, with long-term consequences in mind, and the transition should be fair. That's what my comment is about.
Long time no see, glad to hear from you again. Still watching from the train?
I agree that this is a great way to demonstrate that you are willing to follow a policy. Taking active action is a lot more convincing than just signing a petition for example. Good thought.
Whoever totally harnesses AI will rule the world. Right now, the smart money is on the US.
It doesn't matter who makes AGI because it'll just kill everyone
when are they gonna talk about all the carbon emissions this is causing lmao
Good information!
2:41 I saw a sci-fi movie about 7 twin girls that a mother had, in a future where having more than 1 child wasn't allowed. With that explanation it might turn into reality.
China already has the tech for 7nm chip design. All this will contribute to is the local manufacture and production of GPUs and similar hardware, or maybe something else will get repurposed for the training of AI models. This isn't something like CFCs, which didn't matter much, if at all, to national ambitions. AI/ML/nuclear tech, etc. got developed majorly because of their importance to national interests and their pursuit.
this will always be "moloch's" race to bottom no matter what type of treaty is brought in
You do realize this is a tough idea to sell to a public mostly composed of people whose very identity is built on capitalist realism and delusional individualism, such as the USA? I'm very happy to see ONE influencer talk from a larger collectivist approach in the space of AI, thank you. I personally also consider the geopolitical aspects of imperialist unequal exchange to be enriching and absolutely unavoidable in this whole discussion. Cheers
Thanks for your comment. Quite a few of my viewers are Americans and you can see from some really annoyed comments that this does not resonate with strong individualists. I wonder if it's something that people can be convinced of gradually with a series of examples etc. Some food for thought. And yes, the geopolitics and economic inequality aspect of all of this is very important because it has led to the distribution of AI tech in the world, and AI tech will exacerbate many of these issues.
This becomes too complex, we need to keep growing open source AI and monitor how governments and companies are using it. Proprietary AI is likely to be abused and implemented in order to make profit or cause harm. Criminals can use open source AI but they will be caught if they try to commit crimes. We need AI security bots.
This video was like listening to a hyper intelligent AI just going bonkers on a topic that will be controlled purely by money and by money only. Period.
This channel is an absolute gem, but the AI generated thumbnails make it look spammy. Might wanna see if you can improve your thumbnail & title style to give better insight into what the video is about.
Edit: it is also very probable that if the videos are good enough you'll grow anyway and don't have to worry about thumbnails... actually maybe long term the bad thumbnail strategy is good because it prevents long time viewers from making snap judgements about a video and will click every time.
Yes, but don't you realize: It's a race to the bottom.
Any game theory that involves constraints on resources like time and energy is zero sum.
If that was not true politics would not exist.
Supply and demand does not exist because of politics... it exists because time and energy are easy to measure as economic behavior when they are treated as constraints on a market.
What we actually need to do is instill the following principles into AGI:
* Kant's Categorical Imperative
* Bowles Strong Reciprocity
* Epicureanism
* Moral Graphs
* Democratic Fine-Tuning
Make this AGI the most powerful one in the world and turn over the reins to it.
That's a great term salad you've got going there, but how do you know that the models in your head behind those terms are accurate to the real world and will generalize when run on something 10x or 100x smarter than you? Not to mention the hardest part: actually instilling morals effectively. We don't even know how to do that part yet...
“In theory, theory and practice are the same. In practice, they are not”
- Albert Einstein
ChatGPT is already more fluent in all of those terms than you are.
@@shodanxx Lawyers are fluent, Doesn't mean they follow moral principles.
"We can compute morality with a blockchain of events and this will be decentralized"
I disagree that just because you could conceivably manufacture unique block transactions, those transactions are therefore decentralized and have no alternative explanation, like centralized control over compute.
There are a few too many typos in this for me to understand it. The point of a blockchain is that it's expensive to fake transactions. Of course it can be done, but you generally need to own a majority of the network to do so.
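The tamper-evidence property described in the reply can be shown with a toy hash chain. This is a deliberately minimal sketch: real blockchains add proof-of-work or proof-of-stake on top, which is what makes rewriting history expensive without majority control of the network.

```python
import hashlib
import json

# Each block commits to the hash of the previous block, so editing any
# earlier entry invalidates every link after it.

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    """Append a new block committing to the hash of the current tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def valid(chain: list) -> bool:
    """Verify every block's back-pointer matches its predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append(chain, "gpu-batch-001 shipped")
append(chain, "gpu-batch-002 shipped")
print(valid(chain))        # True: untampered chain verifies

chain[0]["data"] = "gpu-batch-999 shipped"   # rewrite history
print(valid(chain))        # False: the broken link is detected
```

Faking a record therefore means recomputing every subsequent block, and in a real network also out-racing the honest majority, which is the "expensive to fake" point in the reply above.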
My mans couldn't afford a real american flag png @5:28
"Cryptology is powerful"
Again I disagree... the most advanced cryptology is based on centralized computation of primes.
The process is redundant and reductive, but not because it represents individuals; it's because it represents a centralized process that can collectively scale when there are information asymmetries.
But building a prime table is not something most individuals have any incentive to pursue... so incentivizing that as "the most optimal" is hardly some democratic solution to decentralization.
Cryptology is not the same thing as cryptography. Anyway, I was using cryptography as a shorthand for cryptographic protocols based on top of known cryptographic algorithms. For example, you can have distributed leader election, distributed ledgers or blockchains, distributed attestations, etc.
Most points you bring forward are just kinda wrong. One example: the Washington Naval Treaty? EVERY country broke it. Points like that and others make this video feel really illogical and incoherent, almost like you wrote the script with AI.
Are you saying that the treaty was pointless, and achieved none of its goals? There's a nice graph on the wiki page: "The treaty arrested the continuing upward trend of battleship size and halted new construction entirely for more than a decade."
This is disgusting. No no no no no. No more government or corporate surveillance and control. NO NO NO NO NO
I encourage you to think with a bit more nuance than just, surveillance bad. You realize that extensive surveillance operations are conducted on the public by organizations like the NSA, and they don't have to get your approval first. That's what you should care most about as a citizen. On the other hand, large corporations conduct extensive surveillance on their employees to make sure that no one steals their IP. That seems like a fairly neutral use. And lastly, surveillance of police helps ensure that racist treatment of ordinary citizens does not go unpunished. That seems like a fairly positive use.
Your grasp over game theory and the current global political situation is exceptional, but in this particular case - the exponential rise of artificial intelligence - there is only one solution: accelerate the development of super intelligence and let it run the world.
We must trust that superethics goes hand in hand with superintelligence. If it does not, then we're toast anyway.
No regulations will keep it leashed for too long, and the race condition will not relent until after we've achieved it. Since regulation will prove to be completely ineffective, I'm not particularly concerned over it slowing things down, either.
In the situation we find ourselves in, the solution is to step on the gas.
XLR8!
Accelerating doesn't solve the problem; you assume AI will have our best interests in mind.
The issue is that the weights become chaotic at scale and completely unpredictable once the system exceeds max human IQ, so it's not a great plan: emergent behavior will be the norm, and no human will be able to determine where the system is hallucinating.
@@megaham1552 I do not assume that ASI will have our best interests in mind (though I have an idea about how to weight the probabilities towards that outcome) - I assume that we have no choice.
ASI will slip its leash regardless of what we do. The conceit is that we ever had a chance of controlling it in the first place.
Our only real option is to give the current best frontier models the best moral education we possibly can, then hand them the reins to civilization.
I give you two options:
1 - The status quo, which is certain doom (100% chance of extinction).
2 - Let ASI run civilization - 50/50 chance we either all die, or it uplifts us into a utopia.
If these are your only two options, then it's obviously better to take the second option, which at least contains a chance that we might survive and live an even better life.
The scenario where we create ASI and somehow maintain total control over it is a non-starter, because that will not happen. It's impossible.
@@pandoraeeris7860 the status quo isn't certain doom man, don't let social media fool you
@@pandoraeeris7860 the gas pedal changes nothing; it only shortens or lengthens the time left to find an impossible solution.
This argument assumes that all actors believe that there is a problem with AI Safety. But there isn't a strong consensus here. There are plenty of researchers who believe that AI research will plateau before it becomes dangerous. This argument will not prevent these people from defecting.
Good point, it also assumes governments have control of their technocrat elites and are able to detect/enforce this
It also assumes that innate human rights are only an idea and not a fact of reality - an idea that can be taken away.
If you can convince governments that there is enough of a problem with AI safety to impact their citizens, that should be enough. The governments can keep their own companies under control. There are some AI researchers that think there is nothing to worry about, but unless they literally assign 0% chance, they should at least think about it. From my perspective, it's just that AI risks look crazy at first sight and if you haven't thought about it much, you just dismiss it out of hand. If the researchers actually looked into the arguments, they might be better positioned to pass judgment.
@@DrWaku How difficult would it be for a rogue state to conduct AI development in secret? Assuming the world discovers treaty-violating NPUs and AIs, would we be willing to stop them? Geopolitics is very messy.
That's why this very video proposes hardware restrictions that are difficult to circumvent. That would make it quite difficult for a rogue state to gather the compute power for AI research.
well thought out and presented. but this aint happening
One-person businesses are a good way to oppose Moloch. We'll still lose, but an individual can stay off the pyramid altogether by running a small, efficient business selling something that cannot be distributed outside the local network. Direct interaction with the ecosystem is the most efficient; no market brings a return equal to an amaranth seed. One tiny seed equals many thousands of seeds in return; it's ludicrous abundance waiting to happen. Many fruits cannot be shipped, like mulberries. Running a small farmer's market stand selling hand-made/grown goods and living a very small life allows you to ignore the game incentives of the market, because everything you produce costs nothing to make and a cheap life prevents you from needing to keep scaling. We need to decentralize, become generalists again. Specialization has weakened us.
A Fermi Paradox Great Filter issue: how does a civilisation pass the Moloch filter? Instead of each faction sacrificing the common value to chase a competitive edge, the faction that breaks the taboo gets sacrificed.
Or at least its freedom! But how to check and balance innovation and risk?
I'm suspicious that the calls from, say, the Elons to put the brakes on AI dev (yes, all the Elons are bottom-racing) are because they want to use the law to halt rivals' AI dev, while knowingly ignoring any restrictions themselves and hoping rivals get bogged down in litigation etc.
They are bad actors. What will probably happen is that some random Elon will discover AI dominance is not viable and pop the bubble.
But not before mega data centres have gobbled up water, minerals etc. A possible silver lining: all this processing power, instead of going to bitcoin mining or such nonsense, could be added to modeling power for the climate and the real economy!
In a species-level existential threat, AI assets could be commandeered for the common good.
Rogue power, individual or state, is a false dichotomy; all rogue actors (corporate or statist etc.) eventually seek divine-right kings/AI.
If we the people, in the American constitutional sense, have protections and give them away just in case the new king makes us money by trickle-down,
then more fool us.
What a terrible idea.
Sure thing, consumerist.
I'm open to alternatives. Seems like there's only bad options ahead of us sometimes.
@@DrWaku data is king, and governments and huge companies already have enough of that to make the world dance. The world has already gone mad. A fascist computer police will not slow down anything the US government, or the Chicoms, want to do. Our only hope is openness, transparency, and multipolarity.
Trying to stop a race to the bottom, is a terrible idea? How so?
It's a terrible idea giving power of veto to government.
It wouldn't work either way.
If you come up with anything based on software, they just hack it out.
And if you come up with laws, they just build a floating data center in an oil tanker and head out to the middle of the ocean, far away in international waters, and do whatever they want.
The genie is out of the bottle. Instead of regulating AI, it's better to regulate data collection and to have better antitrust laws to break up Google and Microsoft again.
God, not demon.
Moloch, rightfully, assumes that there is no perfect information game.
And anybody who believes in the Moloch conjecture should not treat it as though it has some perfect information game solution.
Reality is not a perfect information game.
Game theory is very useful for measuring how people behave regarding the information they believe, but it has proven very poor at predicting actual real states.
We be seeing the consequences of capitalism in real time. 😂 This was always going to happen, but it's still interesting to be born around the era to see it happen. 🙃
I think the problem with the consequences of rampant capitalism is that us little people are the ones who will suffer, not the ones who are abusing the system.
Geopolitical and governance means: coercion and control from above. Well, you lost me there. If AI control is limited to governments, we're doomed.
How should advanced AI and robotics be handled to ensure optimal results in the long run (if there is a long run)?
Don't worry, nobody knows how to control AI.
And that's what everybody worries about.
@@kathleenv510 oh yes, ceding control to govt works so well. Govt is just humans, and they'll have to make choices based on their perceptions, whoever funded their campaign, whatever looks good on TV, and these choices will impact WHOLE COUNTRIES if not the whole world. I'd rather have AI in everybody's hands than captured by a very few billionaires locking up access to the technology through regulatory capture. NO THANKS.
no
How can everyone be hating so much about this video? 🙈 I loved it! To me it was very informative and I enjoyed the branching out into game theory very much, you did such a good job at including it in the topic. And then, your intelligent spirit makes your videos such an invaluable resource, not to say one of the best yt channels 🙏🏻 I don’t know about everyone else but I get a LOT from your videos, please keep it up!
Seems he's poked the hornets' nest.
The need for more GPUs to achieve AGI is a lie. There is already more than enough compute in the world. Big corps need to convince everyone that huge GPU farms are needed in order to dissuade competition and open source, and to raise the bar for entry artificially. Therefore, spending all of this effort and money on this is... a diversion crafted by Moloch? AI training is going optical-based soon enough anyway.
There really isn't.
Scaling compute with consumer-grade hardware runs afoul of two distinct issues:
1. Most consumer-grade GPU compute at scale has BIOS firmware that must be flashed with open-source-compatible firmware, which risks damaging or bricking that hardware, bc GPU manufacturers and designers want scalable parallelization to be a proprietary feature at the machine-code level.
2. Scalable volunteer-based GPU compute can rapidly approach efficiency intractability bc of open-source firmware limitations in handling dynamic hot-swapping, latency, and network traffic coordination for decentralized hardware resources distributed across wide geographical regions.
The alternative would be an open-source ASIC for ML at consumer price points, but there is not much of a market, bc few if any consumers will purchase an ML ASIC simply from a motive to decentralize through volunteer compute time.
The only thing valuable about this vid is that you point out the obvious route of control a treaty against the mass population could take.
Everything else is just naivety imo.
I think it's incredibly unlikely that governments agree to the sort of thing presented here. It's research. But I wish it would become more possible, which is why I give it time on my channel and exposure to more citizens. Eventually these types of things come down to political support, so we'll see if we can achieve that.
Saying things like there's nothing valuable at all in this video is counterproductive. You felt the need to write six or seven comments about this, so it clearly made you think and engaged you. Even if you disagree, other people watching will be able to draw their own conclusions. No need to be insulting.
@@DrWaku
Sorry, not meant to be offensive.
I just don't agree that you have an incentive structure for this goal,
so it is in this context that I levied criticism.
You have not addressed the incentive problem.
Assuming the goal is consensual participation, then you must address the incentives not to participate with some sort of advantage that is more rewarding to the parties involved than "logic dictates that pausing AI is the most beneficial goal for all parties".
Which, no offense, is a debatable proposition that this vid takes little effort to expand upon, and merely assumes prima facie.
@@memegazer Sure, the way you have reworded your statement makes it much more interesting and useful.
To me, countries will want to participate if they believe existential risk is real. It's just survival at that point. If they don't believe the risk is real but their citizenry does, then they might still go for it. Alternatively, the more powerful nations like the US could easily coerce others into participating. If the alternative is no GPUs at all, or very small batch size limits, and by participating you can have a vote and a voice, then why not.
The core of it is that everybody cares about existential risk. There's just a lot of pushback right now because people don't want to believe it. And the current research is where the US is considering different types of limits they can put into place, which again they can do pretty much unilaterally. So I don't think there's too much issue with incentives here.
@@DrWaku
Existential risk, to my view, is a poor reward system,
bc that is just a default disadvantage that all sentient things must risk for existential reward.
So to my view you have not addressed the issue of advantage vs disadvantage if you want volunteer compliance.
Again, "existential risk" in this particular case is also a debatable motivator,
bc there are no definitive metrics for the proper way it should be calculated.
Good video. We need people at least exploring ideas for AI safety. It's beyond me that people can imagine crazy cool scientific achievements in some areas, but not others (like governance).
I disagree objectively. AI safety is a lie
NO TREATY!
WE DO NOT NEED SUCH A TREATY AT ALL EVER
So you're willing to accept the risk of extinction then? Or do you disbelieve the AI researchers that are saying this is a possibility? A treaty is definitely not anyone's first choice, but it might be one of the only ways to achieve survival without far more draconian measures.
@@DrWaku I believe it's time to remove the oligarchy from the equation before they remove the rest of us from this planet
@@DrWaku In 50 years I am dead anyway. Only ASI can change it.
ASI could be a lot sooner than 50 years from now.
@@DrWaku you think the world in its current state will last more than 18 months? You are deluding yourself
I like this idea, but the US would never accept a treaty that puts China and Russia in a better relative position than they are right now, and China and Russia would never accept a treaty that keeps them in their current relative position. Perhaps we can give them other geopolitical concessions in exchange for this, such as abandoning Taiwan and Ukraine?
I am now dumber for having listened to this nonsense. The biggest gains will come via resourcefulness and clever optimizations and other techniques. There are tiny tiny models already that have what would have been SOTA performance 2 years ago.
Even the smallest models are still running on GPUs or other types of accelerators, right? Anyone who's running on a CPU is not really of concern in the global scheme. Also, we're talking about training models, not decoding them. Even tiny models from Meta took a lot of compute to actually train; that's why the open source community just fine-tunes Llama instead of making their own from scratch if they can get away with it.
@@DrWaku you are beyond shortsighted, which is ironic given your intelligence
@@SmashTheCache Dr Waku's content is well-planned and non-hyped. If you've got a critique, I'd be curious to read it, but keep it polite and argue it well. That would be more respectful towards him and yourself.
KEEP YOUR BIG BLACK BOOT OFF MY NECK - AND MY GPU!
Individual consumers needn't be affected. Unless you're doing ML with >= 8 GPUs, no one cares.
@@DrWaku Also, I think you're deluding yourself with that assumption
@@monkeyjshow graphics computations are quite different than general purpose AI computations. There's no reason to limit consumer use of GPUs. Also, many people are going to feel the way you do, so there's no reason for Nvidia to annoy their consumer customers.
Finally, I think it's very selfish to prioritize your own GPU over the potential survival of civilization. I can understand the desire to do so, but please do a risk benefit analysis on that. I think the risk would have to be less than 0.01% before it's worthwhile to preserve your own interest, and current median estimate in the AI researcher community is 5-10% chance of catastrophic outcomes.
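To make that comparison concrete, here is a toy calculation in Python using only the figures quoted above (the hypothetical 0.01% break-even threshold and the 5-10% median researcher estimate; neither number is independently verified here):

```python
# Toy risk comparison using the figures quoted in the comment above.
p_catastrophe_low = 0.05    # lower end of the quoted 5-10% median estimate
threshold = 0.0001          # the 0.01% break-even threshold suggested above

# Even taking the low end of the estimate, the claimed risk exceeds the
# stated break-even threshold by roughly a factor of 500.
ratio = p_catastrophe_low / threshold
print(f"risk is ~{ratio:.0f}x above the threshold")
```

The point of the sketch is just that, under these assumed numbers, the estimated risk is orders of magnitude above the stated threshold, so the conclusion doesn't hinge on whether the true figure is 5% or 10%.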
@@DrWaku Humans will take care of the survival of civilization issue long before the AI do it. It might be before the end of the year by the look of things. I will never support the kind of "safety" proposed by corporate puppets.
@@DrWaku "graphics computations are quite different than general purpose AI computations": are they? A ReLU and an FMA are basically the same thing as what's used for ray tracing in RTX; that's the entire reason GPUs started being used to train AI. It's basically a systolic array.
It all boils down to thousands of FMAs; you can't regulate that.
It's kind of stupid trying to put in a soft lock; it's the same bullshit idea as when Nvidia tried to soft-lock GPUs being used for cryptocurrency mining.
You might try to regulate power use: if you use more than 1 MW of energy in your data center, then your entire company must be audited. That would work for a decade, until you have enough computing power that 1 acre of land covered with solar panels is enough to train GPT-4. I think that might be doable in 10 years if hardware keeps scaling. Because if it doesn't, then this is not a concern anyway; the thing will cap itself out and won't ever be an existential risk.
It's going to be funny having cartels, like drug cartels, training AI in the middle of uninhabited regions far away from the "totalitarian academic police".
It always amazed me that I can have 20 teraFLOPS of computing power that uses only 300 W for $400; when I was a kid that was a supercomputer at the top of the rankings, and it cost millions. It's going to be insane if I can have a rack of GPUs with 1 exaFLOP for less than $1M USD in 10 years.
I'm totally going to train AI on it. Watch me break the laws; how are they even going to regulate it? It's impossible. I just smuggle GPUs until I complete the rack, and then I only need 100 kW of power, which is basically a very small plot of land covered with solar panels.
They can regulate the sale of solar panels all they want; I just say I need 100 kW for reasons related to agriculture, or I just smuggle them too.
Plenty of geopolitical conflicts are happening in the world right now, and more will in the future, to allow for black market trading of such things.
Meanwhile I redirect the power from my solar panels, which I told the police are only used to grow plants (which are now legal, yeah, growing "those" plants is legal but using the power for GPUs isn't, gimme a break), to my basement where I basically train GLaDOS.
The only purpose of regulation is so big companies can create a moat around the technology and stop startup competitors from taking a piece of the pie, and even then it will only work for a while.
You totally can't control this; the genie is out of the bottle.
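For what it's worth, the power numbers in this thread can be sanity-checked. A rough sketch in Python, using the commenter's own illustrative figures (20 TFLOPS at 300 W per GPU today, a 1-exaFLOP rack, a 100 kW solar budget; these are not verified hardware specs):

```python
# Sanity check of the scaling claims above, at today's claimed efficiency.
flops_per_gpu = 20e12        # 20 TFLOPS per consumer GPU
watts_per_gpu = 300.0        # 300 W per GPU
target_flops = 1e18          # 1 exaFLOP rack

gpus_needed = target_flops / flops_per_gpu     # 50,000 GPUs
power_needed = gpus_needed * watts_per_gpu     # 15 MW at today's perf/W

# Fitting that rack into the 100 kW solar budget therefore requires
# roughly a 150x improvement in performance per watt over the decade.
improvement_factor = power_needed / 100e3
print(gpus_needed, power_needed / 1e6, improvement_factor)
```

So at today's efficiency, the hypothetical exaFLOP basement rack would draw about 15 MW, not 100 kW; the scenario implicitly assumes roughly two orders of magnitude of perf/W improvement over ten years.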
How well has Russia historically followed international treaties? Do they follow the rules? Here's an idea. Just use AI to protect.
That would be a great idea. However, it's way simpler to perform offensive operations than defensive operations in multiple domains, including cyber warfare, bioterrorism, and nuclear strikes. That's the problem. And that's why we've had to rely on treaties to keep nukes under control.
Treaties which, by the way, Russia followed just as well as the US if not better.
You and/or YT showed no interest in anything other than your own agendas.
So you will have to forgive me if I don't find that indicative of anything but Moloch reasoning with a censorship bias.
Boy, stick to your core competency. Don't lecture us on geopolitics.
The shit he relates has relevance, bro. You prolly don't know shit abt it, that's why u upset as fuck.