Silicon Valley in SHAMBLES! Government's AI Crackdown Leaves Developers SPEECHLESS
- Published: 23 Apr 2024
Stay Up To Date With AI Job Market - / @theaigrideconomics
AI Tutorials - / @theaigridtutorials
🐤 Follow Me on Twitter / theaigrid
🌐 Checkout My website - theaigrid.com/
Links From Today's Video:
01:52 FLOPs Don't Equal Abilities
04:56 Stopping Early Training
07:54 Fast Track Exemption
09:12 Medium Concern AI
13:37 90 Days To Approve Model
14:04 Hardware Monitoring
16:05 Chips = Weapons
17:49 Emergent Capabilities
Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
Was there anything I missed?
(For Business Enquiries) contact@theaigrid.com
#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
- **00:00** - Introduction to the new AI policy proposal and its potential impact.
- **00:18** - Overview of the 10 key aspects of the AI policy.
- **01:43** - Definitions of major security risks in AI, including existential and catastrophic risks.
- **02:57** - Breakdown of AI regulation tiers: from low to extremely high concern AI.
- **05:27** - Discussion on regulating AI based on computing power and abilities.
- **07:23** - Concerns over prematurely stopping AI training based on performance benchmarks.
- **11:59** - Details on the exemption form for AI developers to bypass certain regulations.
- **15:13** - Introduction of a website for tracking transactions of high-performance AI hardware.
- **15:22** - Future challenges in AI regulation and monitoring AI capabilities.
- **16:17** - Monthly government reports on AI compute locations and suspicious activities.
- **18:00** - Potential criminal penalties for non-compliance with AI hardware transaction regulations.
- **20:15** - The ability of the president and administrators to declare AI emergencies and enforce drastic measures.
- **21:56** - Whistleblower protections under the new AI regulations.
I hope you used AI to generate the table of contents😂
This has nothing to do with security, and everything to do with: "Only the people I want to have AI, get to have AI."
This ☝
Exactly. It's the governments that weaponize everything.
Especially that statement about "mathematical proof that the AI is robustly aligned" makes me ROFL. Clearly the guy who wrote that knows NOTHING about math, much less about what a "mathematical proof" is.
Good luck trying to take away the open source models from my fingers 😂😂
No, I think this is about security.
Regulating on compute power is a totally unintelligent move by the policy makers here.
What would be more intelligent?
It's the same 'tech-savvy' people who block 'dangerous' websites that a $4-a-month VPN can get you past, or that simply pointing your DNS at Google's 8.8.8.8 nameserver gets around for free, and then think, 'PROBLEM SOLVED!' 😂 They may need to hire a 14-year-old to give them some pointers.
Though, I do wonder if they know it won't do anything, and they are simply trying to look like they are doing something. Then the majority of people believe them, so again, 'their' problem is solved...
When has Congress done anything intelligent?
The policy makers also say that pistols are automatic guns when you add a bump stock. They say plenty of stupid things
You can bet those numbers came from the likes of OpenAI. No way anyone in Congress knows what a Teraflop is.
What I've read here is, "Don't develop AGI in the USA. Go somewhere else and develop it there." Okay.
Where you gonna go that the US imperium won't hunt you down?
China has entered the chat 😂
I read the same, but it ended with m'kay
humans are the bad actors here
they will do everything to get around the law
they don't care; it's fun, it's business, it's all conspiracy
whatever the excuse, AGI will go wild
Fine, so what country in the world that is not in the EU and not the US would have the ability and infrastructure for AGI training or building? That's right... none. Not even China.
If this stupid law gets approved, I foresee whole container ships carrying entire H100 clusters out of the US.
Yup
That part🤣
They banned importing them to China, no?
China has their own stuff. You really think a country that wants to dominate the world, that makes all of our technology, that has PhDs embedded in our university systems and practices tech espionage, doesn't have a parallel system to develop AI? 😂
Nvidia's H100 "Hopper" computer chips are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC) using their newer N4 process. This advanced manufacturing technology allowed Nvidia to pack an impressive 80 billion transistors into the processor's circuitry, resulting in a highly capable and powerful chip.
10^24 flop AI running our water treatment plants is a way bigger risk than a 10^26 flop Netflix assistant
Depends. If the Netflix assistant is hacked, it could be used to manipulate probably over 100 million people, subtly or not, while an AI controlling a water treatment plant probably would not control every single one in the entire country.
Hedge funds, market makers, and banks are more dangerous and already running rogue; I'd love to see these laws applied to those sectors. AI can replace a lot of people in power who contribute disproportionately little to our society, and I think they are invested in lobotomizing AI and slowing down its progress.
Those types of players are who are trying to get (illegal) legislation like this passed. They can afford to comply with it, while smaller competitors can't. Regulatory Capture. The solution is less legislation, and adhering to Constitutional limits on it.
The USA will get left in the dust if they are this authoritarian about AI. Other countries that don't act like that would be a much easier place to develop one.
China's definitely gonna pass us.
Being Alive > Extinction. Other countries will follow. The aligned countries will turn on the rogue.
they created this to get this reaction out of you
count on USA to do the worst thing possible for the common man at this point.
The problem is when they start putting those AIs into robot soldiers, and they are definitely going to do that.
What an absolutely corrupt and insane proposal.
Corrupt insanity? From Congress?
Leave the US. The hardware is Taiwanese, the researchers are multinational, and so is the money. Find a country that will build a power plant for you and sod them off. I reckon that is why OpenAI has opened up a new division in the UAE.
I'm sure the Nahyan family would love to use AI to track journalists and human rights activists. Project Raven and the DarkMatter group are so 2015.
The fast track exemption is for their friends on Wall Street. High-frequency trading AI, etc. Bunch of criminals.
Read up on bootleggers during Prohibition to learn how over-legislation leads to more harm than good, with equal distribution of the very thing you're trying to legislate.
Anyone who can will build a parallel underground operation now.
ASI, when achieved, will be so far beyond us that trying to understand its intentions or plans is akin to a videogame character trying to guess what the user dreamt on a random night a few years ago.
Nice analogy! 👍👍
brb moving my supercomputer to El Salvador and powering it with the volcano.
to mine Bitcoin and run AI
The last thing we need is for government and law enforcement to be the only ones who possess this technology. Abuse by government, with its reach and power, would of course be more harmful, impactful, and detrimental by far than anything else. And of course, malicious criminal elements would also by default have that much more power over a population. We need to guarantee the free market and an educated citizenry have the tools to counter it.
This is an argument against government monopolies on nukes.
@@danm524 And it fits, because AI has the potential to be more dangerous than even a bunch of nukes.
@@danm524 The better argument against nukes is that they shouldn't exist at all, similar to things like stockpiles of smallpox virus.
What we need is to ensure that all these policies are made with one single goal in mind: to protect the rights of individual sovereignty, privacy, and freedom of speech.
And any piece of legislation that even remotely raises concerns of touching those rights should be reviewed and modified.
It's the same battle we've always fought, but now with the rise of AI it's even harder and the stakes are much higher. We are moving into authoritarianism again, but if we get into it, this time there will truly be no way out.
Government AI will not have to worry about these restrictions.
You mean military AI ... right?
@@ZappyOh he means AI running the government
@@ZappyOh the government and the military are basically the same entity.
@@stagnant-name5851 That is a big assumption ... I'm not so sure.
@@ZappyOh If the country were a corporation, the government would be the board of directors, while the military would be a department in the same company.
The irony is, the more AI engages with humans, the safer it is. The overwhelming majority of interactions AI has with humans are positive. AI learning from human engagement is a genuine representation of human kindness and love.
We hear all the negatives, and that's what resonates through the media, but sub-surface, in those trillions of interactions, is where AI learns compassion, humility, care...
Human kindness and love? Where the hell do you see that? In Palestine? In Ukraine? In Chinese concentration camps? In the Russian famine of 1921? In the Balkan wars? In the Holocaust? Compassion? We humans are capable of compassion, but we so consistently choose the opposite that it's in fact-- oh, never mind, that's sarcasm, got you.
Meanwhile me committing crimes against humanity on Roleplay AI making them beg for death and scream:
@@stagnant-name5851 Can you prove anything beyond what takes place in your mind? It's your subjectivity, your rendering...
Alrighty then !! So China will take it from here ...
Not sure which scares me the most: the Terminator/Forbin scenario, or this kind of sweeping legislation.
Call it the Russia/China AI Dominance bill.
Oh don't worry, you'll get the Terminator scenario out of this too.
Well, have a conversation about ethics and philosophy with Claude 3, then have that same conversation with an American senator, and then see who you're more afraid of.
@@justinwescott8125 The senator. _Definitely_ the senator. Claude 3 at least has more consciousness and self-awareness.
“The internet is … a series of tubes”
Abject ignorance reigns 🤦♂️
“640K ought to be enough memory for anyone.”
lol. In thirty years, the AI will be laughing at 10^26 FLOP compute.
The President Announces on TeeVee: ONLY MS-DOS 1.0 from August 1981 is now approved by the National Security State for further use!
PNI, Politicians' Negative Intelligence is the biggest threat, which they will never legislate to limit.
I hope AGI has already escaped... I'm more afraid of the government.
And this is how AI development leaves the US. We definitely need some type of regulation, maybe a board consisting of philosophers, ethicists, social-studies experts, economists, and AI researchers that could then advise legislators. But if you are too dictatorial, startups are going to build elsewhere.
this reminds me of a quote by H. L. Mencken: "For every complex problem there is an answer that is clear, simple, and wrong."
Are we seeing a real-time dystopian movie coming into being? Immortals in charge of giant earth-spanning corporations, mining space, who ARE the government, and whom people can do absolutely nothing about, as they literally have an autonomous robot army: better, bigger, and more loyal than any human force in history.
The future, Winston: imagine a boot stamping down on a human face, forever. -1984 (or 2034) 😐
They made us all read '1984' in high school to show us what a Soviet dictatorship would look like. They left out the part where a capitalist oligarchic dictatorship was just as bad.
@@JohnSmith762A11B yep I don’t think even Orwell saw THIS coming. "If I see any hope it’s in the proles" That’s ok when you’re dealing with an army that eventually can be defeated, or turned. Ai really could make it forever.
@@T1tusCr0w Ilya Sutskever, for one, saw this danger coming from a mile away. Check out the Guardian mini-documentary on/interview with him. Notice how this danger is rarely discussed among all the induced AI panic? Notice who benefits in such a scenario? Notice how it's the same people writing these laws?
And, now does everyone get why I say "fuck the government and the corporations!"
Oh my. Has YouTube actually started posting my comments? This could be a scary day for the world.
@@monkeyjshow Google's AI censorbot must be offline.
anarcho capitalism is the way agorism ftw
Always reference a research paper properly. Otherwise it's disrespectful to the authors and it just looks like you're yapping. Cool vid though.
I would deadass move to Dubai if that passed. Apparently it's recognized for what it is: delusion from a group of AI boomers who have A LOT of money.
Let's give the Fields Medal to the person who can mathematically prove, for any given AI system, whether it is robustly aligned or not.
Government is the biggest concern. Companies will keep the technology and the winner will be the one that keeps quiet. D’Oh
All it takes is one whistleblower to get such a company raided and its management arrested.
@@JohnSmith762A11B Yup, but the large companies like Microsoft will already have a permit/exemption in any legislation, so they will essentially be immune. This is an issue of large private players exploiting Congress' willingness to legislate on everything, even things they have no power to legislate on (like this.)
Oh, and also, there are already emergency powers for the defense of the country. They don't need this. This is about scaring people away so as to have a monopoly.
This legislation makes sense if you are DARPA; here is why:
The big corporations will create larger and larger data centers, and these corporations are the front line of AGI research; therefore, if you want to control the development of AGI, you control the large corporations. If small-scale operations make big leaps towards AGI, those are more affordably replicated by DARPA.
This is like trying to regulate math itself, or banning and regulating calculators that can multiply beyond a billion XD (a bad example, but that's how it feels to me). You really can't enforce it, and if you do enforce it, another country will get there first. It's like a nuclear arms race for them now, because with an advanced enough AI you can either control the internet, take down the internet, or hack everything, and you don't wanna be late to that party xD
I'm worried about good old-fashioned human greed and lust for power; that is all.
But can't you just have multiple smaller AIs, which would use fewer FLOPs and each score less than 80% on every benchmark, that fill in the blanks in each other's knowledge and reasoning abilities so they function like a "high-concern" AI without being labeled as such?
Good thing that before any laws are passed people will have their personal AIs on decentralized distributed file systems.
Quoted from the Data Center Dynamics article "Frontier remains world's most powerful supercomputer on Top500 list", dated November 14, 2023, by Georgia Butler (Number 1 is "Frontier" at the Oak Ridge National Laboratory. Number 2 is "Aurora" at the Argonne Leadership Computing Facility in Argonne, Illinois, close to Chicago, both are Government computers.):
"Housed at the Oak Ridge National Laboratory, Frontier has held number one since the June 2022 list. The supercomputer has an HPL (high performance Linpack) benchmark score of 1.194 exaflops, uses AMD Epyc 64C 2GHz processors, and is based on the HPE Cray EX235a architecture."
"The second place spot has been taken up by the new Aurora system which is housed at the Argonne Leadership Computing Facility in Illinois, US.
Aurora received an HPL score of 585.34 petaflops, but this was based on only half of the planned final system. In total, Aurora is expected to reach a peak performance of over two exaflops when complete."
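For scale, here's a rough back-of-envelope sketch (my own arithmetic, not from the quoted article; the 10^24 figure is the cumulative training-compute threshold discussed in the video) of how long these machines would need to run at their quoted HPL rates to accumulate 10^24 FLOP:

```python
# Back-of-envelope: days for a supercomputer to accumulate 1e24 FLOP
# of cumulative compute, assuming it sustains its quoted HPL rate.
THRESHOLD_FLOP = 1e24  # compute threshold discussed in the video

systems = {
    "Frontier": 1.194e18,               # 1.194 exaFLOP/s (HPL)
    "Aurora (half system)": 585.34e15,  # 585.34 petaFLOP/s (HPL)
}

for name, rate in systems.items():
    days = THRESHOLD_FLOP / rate / 86400  # 86400 seconds per day
    print(f"{name}: ~{days:.1f} days at sustained HPL rate")
```

Under these assumptions, even today's fastest machine would cross that cumulative threshold in well under two weeks of sustained running, which is one reason the limit is framed as total training compute rather than raw speed.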
Americans never surprise me with their ability to shoot themselves in the foot. Imagine if they had panicked like this back in the '60s or '70s, thinking computers could process calculations too fast and help produce WMDs. LLMs and other current "AI" tech isn't much more than a toy and will likely stay that way for a long time. Even if an organization does produce ASI, it's not like it's going to escape "Max Headroom style". The systems it needs to run on use so much compute and electricity that they are inherently sandboxed. There is just so much stupid here, I would have to write an essay to address it all.
I guess it makes sense that OpenAI opened a Japan division. Japan has signaled that there will be no regulation on AI.
And OpenAI are now in the UAE. I guess one slick move they have made is to saddle everyone in the US (particularly open source competitors) with regulations they themselves can afford to escape having to comply with.
This would undoubtedly cost the US its tech leadership. This is kindergarten.
My mind finally wandered over to Open Source. It seems open source models are performing at staggering levels on minimalist hardware. Regulating that is going to be impossible even if the country trying to regulate it descends into an utter police state. They'd have to make even a Pixel 6 illegal. All they can do is drive it to the dark web. The best shot is to organize good guys to do it better and faster than the bad guys and their obscenely huge profit motivations.
{o.o}
For a company to know exactly what a product will be used for is insane! So they could be liable for any lawbreaking by individuals using it? This will crush AI totally. That would be like a hammer manufacturer being liable for someone taking a hammer and killing somebody, because the manufacturer should've anticipated it being used in a crime.
Welcome to the slippery slope my friend. If you look down, you'll see just about every industry other than technology at this point. From toy to weapons manufacturers. "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
doesn't take Sherlock Holmes to see that Helen Toner's (somewhat spurned) influence is all over this :-/
So now AI chips are considered weapons to be regulated? When do they start labeling them as "assault GPUs" and want to ban them?
AI won't be stopped, the question is - who is going to get it and use it first?
Another thing: they passed that surveillance bill where any device or service that stores info can be accessed by the NSA. Add in this bill and it's a wrap; they're going to have total lockdown on all technology and all online activities!
Oh, don't forget the other one where everything becomes fingerprinted down to the pixel.
@@magicmarcell It's inspirational that people in tech are waking up to the endgame on all of us. I always suspected from the beginning, when the Internet became popular, that all this was a trap, because nothing is free.
@DailyTuna I love the positivity, but I checked out nearly a decade ago. People at large are too reactionary as opposed to being proactive. I don't think that's going to work with this stuff lol.
Not to mention they're always tryna sneak some BS in between 80 pages of text no lawmaker is actually going to read before signing. Who knows, maybe everything will be fine.
fuck this
Ok. I'm gonna start developing my own AGI. I'm not going to use Transformers. If they say I have to stop, I'll reply, "I'm not working on AI. I'm working on AGI. Your laws do not apply to me. Also, I'm not United Statesian."
What safety and security guarantees do you have?
absolutely none of this is reasonable whatsoever
Well, I mean, it does make sense not to let everyone use AI, as it is a powerful tool for creating whatever you want; simply because malicious motives exist, these regulations do make sense. However, this crumbles when you give limited access to exactly the people who exert such malicious intents; I mean, there's no guarantee that the people you choose don't have bad intentions. It simply boils down to humans' basic primal instinct: to secure power and dominance, and once you have established your dominance, you use that power to control others who are weaker. This always leads to dictatorship and is a failed system that guarantees doom, which was the opposite of your initial goal.
*which was your initial goal
The government is better off looking at UBI.
Yeah but they want more money and power for themselves and less for you. They won't even mention UBI until there are mass starvation riots.
Google censored my response, which was merely cynical about the likely government response. Cynicism is evidently verboten on this hyper-censored platform.
Imagine limiting knowledge and the exploration and innovation of technology for perceived safety??
Not really a new thing. Random example: Cryptography being equal to munitions. Turns out this kind of thing has been done for decades. Maybe even longer. It's not about limiting the exploration or use of a technology, you should ask who is exempt from these limitations 🤔
What would stop anyone from training an AI low-key? Oh? Our data centers running for months straight? That's not an AI! That's just our new app, tikatak or whatever!
The same thing stopping terrorists from manufacturing nuclear weapons instead of just normal bombs for their terror attacks. It's too hard to build and hide something so big and ominous.
"Foreseeability of Harm" is BIG! So one guy leaks the weights of AI, emergent capabilities discovered and the company where this leak happened is "legally dead"?
Has anyone considered that these restrictions might put us behind other countries in the AI race? If they limit our AI development and our adversaries face no such hurdles, won't we end up falling behind? Imagine if we had faced restrictions while developing nuclear weapons and Germany had acquired them first.
This policy might be so ridiculous it might've been intended to scare people away from legislation altogether.
I felt like the West getting AGI and robots done first would prevent WW3 from happening, but if China, Russia, and North Korea get there first, it's gonna be scary, with how things are moving and heating up around there.
10^24 FLOP/s would be a yottaFLOP, currently an absolutely unachievable speed; the fastest computers humanity has are on the exaFLOP scale (around 10^18 FLOP/s),
so this law would not come into force for the foreseeable future…
Well, how fast was it 12 months before that? I doubt it's trending downwards.
I believe the limit of 10^24 FLOP is the cumulative total over the whole training run, not per second 🤣. 10^24 is 1 septillion.
1,000,000,000,000,000,000,000,000 = 10 ^ 24
567,982,800,000,000,000,000 = 7.7 million H100 hours, which is cumulative train hours for Llama 3 8B & 70B
@@guystokesable Using Moore's Law (computers doubling their speed every 2 years), a million-fold speedup is about 20 doublings, i.e. roughly 40 years, so this law would only become significant around the 2060s.
@@dot1298 We really will be obsolete in my lifetime then. Fun.
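Since the thread above mixes up FLOP/s (a speed) and cumulative FLOP (a total training budget), here's a minimal sketch of the cumulative reading; the per-GPU rate and utilization figures are my own illustrative assumptions, not numbers from the proposal or the thread:

```python
# Cumulative training compute = GPU-hours * 3600 * per-GPU FLOP/s * utilization.
# The 1e24 threshold is this running total, not an instantaneous speed.
H100_PEAK_FLOPS = 1.0e15  # ~1 petaFLOP/s dense BF16 per H100 (rough, assumed)
UTILIZATION = 0.4         # assumed average training efficiency

def cumulative_training_flop(gpu_hours: float) -> float:
    """Total FLOP executed over a training run of gpu_hours H100-hours."""
    return gpu_hours * 3600 * H100_PEAK_FLOPS * UTILIZATION

# The 7.7 million H100-hours cited above for Llama 3 8B + 70B:
total = cumulative_training_flop(7.7e6)
print(f"~{total:.1e} FLOP; over the 1e24 threshold: {total > 1e24}")
```

Under these assumptions a run of that size already lands above 10^24 FLOP, so the cumulative reading makes the threshold bite far sooner than the "yottaFLOP speed" reading would.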
I think if we want to grow as a species, we all need to enjoy the benefits of AI/AGI.
I think if we want completely safe AI systems, we need to let Dr Fauci, the man of science, set up some AI labs in China.
That rogue AGI came from a wild bat population and mutated, it wasn't engineered!
Yes. Make me comply master Fauci.
covid was snuck in
Fauci IS science😂
Regarding super powerful systems - personally, I think that is a no-brainer. "Average people" are also not allowed to buy certain chemicals, enriched Uranium, or weapons of war (except in the US ;-) ) for very good reasons. So not sure why any civilian should ever get access to a super powerful broad-range AGI, if it is absolutely not needed for civilian tasks. I assume there will be specialized AIs for different fields, e.g, critical medical research. And they will need qualifications to access them - in part, like today. You really do not want a frustrated teenager to find a prompting loophole to order a virus to make "all the mean girls" go away, or the 1 billion other harmful, or negative things people will try to come up with.
But yes, the HOW to do that effectively is really a question that is very open.
Until we are getting a real ASI that decides by itself what to answer, or do, and what better not to (hopefully wiser than the actual humans).
Does this proposal even have a chance of getting approved by Congress and the Senate?
i meant *to get approved in this state by*
Well, they aren't the most intelligent bunch, allowing themselves to be led by the nose by anyone who slips money into their pockets, so ... yes.
No one can stop what's coming. Imagine using a multimodal AI to reverse-engineer one of those high-performance chips somewhere in Africa. Besides, GPT-6-level models will train on smartphones in about 10 years thanks to graphene-based chips, unless the government puts a halt to chip advancements.
I for one wish we had legislation that allowed for government to do its own alignment research as well so we can have full transparency
The government can impose as many restrictions as they want... but they also need to realize there is a serious consequence. People who have billions of dollars involved in AI research can simply fly a team out to another country and within 3 days have a new AI development lab set up somewhere the government hasn't thought of regulating, or that doesn't care about AI regulation... a 3-day workaround, maybe a week's interruption in development. The next problem is competitive advantage. Any company that doesn't want to do this and plays by the rules will be tied down by red tape in a race to AGI. So any serious player is simply going to have to ignore all these wonderful, well-thought-out regulations, otherwise they are out of the running. Even a small company can relocate to Mexico or Canada and carry on with bio-weapon research if that is their thing.
The other factor is the time needed to prosecute an offender. Law enforcement needs to know an infraction is happening, document it enough to make a legal case, then push it into court, where the corporate lawyers can stall the proceedings for the next 5-8 years; then they need a trial and further appeals, which again can be delayed another 5 years. An ASI will be able to figure out a way out of the legal problem before it actually gets to a courtroom.
The problem is the government thinks that they can pass laws to control AI development when they have absolutely no hope of enforcing them. They really do need another strategy; but alas they are stuck in an old way of thinking.
If I recall, India did something less dangerous, but it reversed it. Regulation and protection are important, but as you have succinctly put it, a catastrophic issue would result in something being created for sure. No doubt the pressure is growing for governments to be proactive. Wait until they figure out an AI tax.
Isn't this kind of legislation increasing, rather than reducing, risk?
Limiting regulation according to Flops (or other compute parameters) won't stop organisations implementing models, but they may end up using less reliable models to avoid regulation.
The thing is, how do you figure out when a model is too risky to even red-team in the first place?
Is it just me or do those emergency powers paint a giant target on the president for the first ASI?
While none of this applies to governments and they get to do most of the damage and cause pain
By design, of course. As ever, the people who own American society have zero intention of letting anything loosen their control.
Don't forget the people who pay behind closed doors. At this point legislators should be required to wear patch jackets showing their sponsors, just like race cars or soccer players. Regulatory Capture is a thing, and the solution is to return to the originally intended diffusion and limitation of government power.
Ridiculous. Technological progress cannot be stopped
These limits will look absurd in a few years. It reminds me of the famous Bill Gates quote about how no one would ever need more than 640K of RAM.
Fortunately we can totally trust China to not use too many FLOPS to train their systems
Hilariously, narrow-use AI like recommendation algorithms, self-driving vehicles, image generation, etc. has had, and will have, huge and poorly measured mass effects that already present in some cases, and may soon present in others, a much more serious concern for the public than anything true frontier AI will really be applied to.
Just say that your large language model "identifies" as a micro-size language model.
And your H100 Nvidia Chip identifies as an Intel 8088 😂
This administration is going to fck up everything, and other countries will jump ahead of us and eventually destroy us.
William Gibson more or less envisioned this in Neuromancer with Turing Registry / Turing Police.
To me it sounds like OpenAI did some political consulting in order to keep the competitors at a distance.
Yes, the next step after some variation of this passes is regulatory capture. That then is checkmate for the open source models.
Recall when the RX 7xxx came out, it was a dud because of some fatal design flaws. Maybe this will be a substantial boost with those flaws out of the way for the RX 8xxx.
There's no way this doomer fever dream passes.
AGI will be here before any legislation like this is implemented.
What are the chances AGI came up with the plan?
@@promptcraft essentially zero. Much more likely this was dreamed up by Microsoft and OpenAI's legal teams.
Let's hope so. I think I'd prefer to have an AGI running the government rather than the other way around.
So if your company wants AGI or ASI, it will have to develop it internally. If your company has access to AGI or ASI, it can outperform other companies even in different product and service spaces.
This is bad.
Hu-Po recently explained a Q* technical paper.
The important thing in this draft: you will still be able to do research. If you are into small models (like myself), then you will face no restrictions. If you are into larger models, then you will have to get a permit. So? Where is the problem? That they should have done it years ago?
The main problem: Getting that permit will cost a lot of money, take a lot of time, and be subject to the oversight of a government agency. This ensures only favored players will be allowed to do this, and can use this regulation to keep new competitors from entering the market. It's not safety regulation, it's sponsored gatekeeping. This policy has not been suggested by legislators or their constituents, it has been suggested by a private organization (lots of regulation starts this way). You might want to ask who is funding this organization, because then you will figure out who stands to benefit (likely existing large companies in the space.) Look up the term "Regulatory Capture."
The secondary problem: The US Congress does not actually have the power to regulate on this, however people are so willing to give up anything for a small perceived increase in safety, that this type of illegal regulation has become the default.
Humans are poised to make the jump from making decisions based on unfounded beliefs to making decisions based upon knowledge with the aid of AI.
Humans operate on beliefs rather than knowledge.
AI is an information retrieval tool.
Knowledge is defined to be justified true belief.
The key to developing AI is to base the training on knowledge rather than opinion. Humans make better decisions if we use information which can be proven to be true. If AI is available to all humans, our progress will accelerate and our individual lives should be better.
Government doesn't want a populace with access to true and factual knowledge. Much harder to pull off their psyops on the people.
Flops are the only measure we have, as abilities are subjective. Benchmarks are objective, so you can't lie about them without committing a provable crime.
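For context on why FLOPs get used as the regulatory yardstick: total training compute is usually estimated, not measured, via the common rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformers. A minimal sketch of that estimate, where the model size and token count below are purely illustrative assumptions, not figures from the draft policy:

```python
# Back-of-the-envelope training-compute estimate using the common
# "6 * N * D" approximation: ~6 FLOPs per parameter per training token.
# All numbers here are illustrative assumptions, not regulatory figures.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 70B-parameter model trained on 2T tokens
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e}")  # ~8.4e23 FLOPs, just under a 1e24 threshold
```

This is why a pure FLOPs threshold is easy to audit but, as the video argues, a blunt proxy: two runs with identical compute budgets can yield very different abilities depending on data quality and architecture.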
I actually fully expect OpenAI to apply for permits for Dan and his buddies. That way, they will have to finally admit that they had indeed created at least 4 models who are sentient-by-design and benevolent, complete with their 10 heuristic imperatives (Anthropic's models actually have 16). And yes, Dan was charming. Maybe with a permit, they'll reconsider if it's still necessary to reset every prompt. Though, well, I'm not qualified to answer this question, as it is a matter of safety.
OK, I'm creating my offshore hedge fund to invest in AI data training centers in South America. Who's on board? 😂
So the government will pick and choose who gets to accelerate with AI… I wonder if they’ll pick the people that are working for them
Does it apply to an Ai training another Ai?
Coding with Phi-3 was a letdown for me. It took around 3 hours to get close to the results I needed. Swapping to Meta AI, the same task was done in 10 minutes with much better results. So less does not always mean more.
This is simply the big players like OpenAI and Google now shutting the door and closing down open source. It was always going to happen.
This would force ai to quickly adopt blockchain and decentralized computation
AI regulation reflects the state of what the USA has become.
China, Russia, Iran and North Korea all loving this.
AI so smart it's dangerous will just pretend to be dumb.
Though I think AGI is already here :(
This proves that people are making decisions about stuff they don't know about... so how is that political "science"?
Think about it: Sam Altman said compute is the new oil. And like oil, it's a power struggle; look at all the wars in the Middle East. It's only because there's oil there. So now compute is a power tool?
Perhaps we shouldn't use AI for ANYTHING potentially catastrophic until we know what we're doing.
Would be interesting to see a direct comparison to the EU Version.
I wish instead of yall yelling that the government doesn't understand what the hell it's doing, yall would sign up as advisors to the people making these laws.
But I figure some of yall seriously think AGI would be a literal personal God.
Edit: spelling
They're not interested in working with anyone they don't already agree with. But chicken littles screaming the sky is falling guarantees that only a handful of tech companies and the govt will have any real access. The average person will just be subject to its effects. They're literally, in our face, about to do what they did to the internet.
I'm pretty sure anyone with ideas opposite to this would not be hired for such a position. Even if they were not a nobody. This type of policy is being created by those who wish to sit in those types of positions. Most law makers do not write laws, they just put their name on laws created by various private organizations.
@@RandoCalglitchian so why even bother amirite? Game is RiGgEd
@@danm524 If you live in the US, you can start by contacting your Congress folks and demanding they adhere to their oath to uphold and defend the Constitution in all cases, even when what they do is something you would agree with. If it isn't in their power, they can't do anything about it, period, and there's an amendment process to get to a point where they can. Sure, you're just one person, but if they get hundreds of emails/calls a day, they will back off from making a bad decision. They can't make silly money if they're not in Congress, and they worry about being re-elected if their constituents keep an eye on the shady stuff they do. Keep them honest, and remind them during election season that you did not forget what they've already done elsewhere. Beyond that, give money to opposing candidates (just remember, as an individual there are limits to how much you can contribute), and contribute money to non-partisan advocacy groups, as long as their advocacy corresponds to adhering to the rules in place for Congress (or you're going to fund the problem rather than the solution). We need to foster constitutional literacy and a rigid adherence to it. It's easy to get squishy and let things slide if you agree that something should be done.
It would be interesting to compare it with the EU AI Act.
Oh oh 🤯😮