Silicon Valley in SHAMBLES! Government's AI Crackdown Leaves Developers SPEECHLESS

  • Published: 23 Apr 2024
  • How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Checkout My website - theaigrid.com/
    Links From Today's Video:
    01:52 Flops Don't Equal Abilities
    04:56 Stopping Early Training
    07:54 Fast Track Exemption
    09:12 Medium Concern AI
    13:37 90 Days To Approve Model
    14:04 Hardware Monitoring
    16:05 Chips = Weapons
    17:49 Emergent Capabilities
    Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science

Comments • 377

  • @MehulPatelLXC
    @MehulPatelLXC Месяц назад +26

    - **00:00** - Introduction to new AI policy proposal and its potential impact.
    - **00:18** - Overview of the 10 key aspects of the AI policy.
    - **01:43** - Definitions of major security risks in AI including existential and catastrophic risks.
    - **02:57** - Breakdown of AI regulation tiers: from low to extremely high concern AI.
    - **05:27** - Discussion on regulating AI based on computing power and abilities.
    - **07:23** - Concerns over prematurely stopping AI training based on performance benchmarks.
    - **11:59** - Details on the exemption form for AI developers to bypass certain regulations.
    - **15:13** - Introduction of a website for tracking transactions of high-performance AI hardware.
    - **15:22** - Future challenges in AI regulation and monitoring AI capabilities.
    - **16:17** - Monthly government reports on AI compute locations and suspicious activities.
    - **18:00** - Potential criminal penalties for non-compliance with AI hardware transaction regulations.
    - **20:15** - The ability of the president and administrators to declare AI emergencies and enforce drastic measures.
    - **21:56** - Whistleblower protections under the new AI regulations.

    • @nosrepsiht
      @nosrepsiht Месяц назад +1

      I hope you used AI to generate the table of contents😂

  • @_SimpleSam
    @_SimpleSam Месяц назад +174

    This has nothing to do with security, and everything to do with: "Only the people I want to have AI, get to have AI."

    • @markmurex6559
      @markmurex6559 Месяц назад +16

      This ☝

    • @Garycarlyle
      @Garycarlyle Месяц назад

      Exactly. It's the governments that weaponize everything.

    • @paelnever
      @paelnever Месяц назад +20

      Especially that statement about "Mathematical proof that the AI is robustly aligned" makes me ROFL. Clearly the guy who wrote that knows NOTHING about math, much less about what a "mathematical proof" is.

    • @adolphgracius9996
      @adolphgracius9996 Месяц назад +17

      Good luck trying to take away the open source models from my fingers 😂😂

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад +4

      No, I think this is about security.

  • @rehmanhaciyev4919
    @rehmanhaciyev4919 Месяц назад +34

    Regulating on compute power is a totally unintelligent move by the policy makers here.

    • @ZappyOh
      @ZappyOh Месяц назад

      What would be more intelligent?

    • @Idontwantahandle3
      @Idontwantahandle3 Месяц назад

      It's the same 'tech-savvy' people who block 'dangerous' websites that a $4 monthly VPN can get you past, or that simply knowing how to enter Google's 8.8.8.8 DNS nameserver address gets you around for free, and who think, 'PROBLEM SOLVED!' 😂 They may need to hire a 14-year-old to give them some pointers.
      Albeit, I do wonder if they know that it won't do anything, and it is simply them trying to look like they are doing something. Then the majority of people believe them, so again, 'their' problem solved...

    • @zatoichi1
      @zatoichi1 Месяц назад +1

      When has Congress done anything intelligent?

    • @Liberty-scoots
      @Liberty-scoots Месяц назад

      The policy makers also say that pistols are automatic guns when you add a bump stock. They say plenty of stupid things

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      You can bet those numbers came from the likes of OpenAI. No way anyone in Congress knows what a Teraflop is.

  • @mikicerise6250
    @mikicerise6250 Месяц назад +97

    What I've read here is, "Don't develop AGI in the USA. Go somewhere else and develop it there." Okay.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад +3

      Where you gonna go that the US imperium won't hunt you down?

    • @edmondhung181
      @edmondhung181 Месяц назад +16

      China has entered the chat 😂

    • @Rondoggy67
      @Rondoggy67 Месяц назад +1

      I read the same, but it ended with m'kay

    • @borntodoit8744
      @borntodoit8744 Месяц назад

      Humans are the bad actors here.
      They will do everything to get around the law.
      They don't care; it's fun, it's business, it's all conspiracy.
      Whatever the excuse, AGI will go wild.

    • @mickelodiansurname9578
      @mickelodiansurname9578 Месяц назад

      Fine, so what country in the world that is not in the EU and is not the US would have the ability and infrastructure for AGI training or building? That's right... none. Not even China.

  • @paelnever
    @paelnever Месяц назад +79

    If this stupid law gets approved, I foresee whole container ships carrying entire H100 clusters out of the US.

    • @malusmundus-9605
      @malusmundus-9605 Месяц назад +2

      Yup

    • @mrd6869
      @mrd6869 Месяц назад +1

      That part🤣

    • @someguy9175
      @someguy9175 Месяц назад

      They banned exporting them to China, no?

    • @DailyTuna
      @DailyTuna Месяц назад

      China has their own stuff. You really think a country that wants to dominate the world, that makes all of our technology, that has PhDs embedded in our university systems and practices tech espionage, doesn't have a parallel system to develop AI? 😂

    • @JustAThought01
      @JustAThought01 Месяц назад +1

      Nvidia's H100 "Hopper" computer chips are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC) using their newer N4 process. This advanced manufacturing technology allowed Nvidia to pack an impressive 80 billion transistors into the processor's circuitry, resulting in a highly capable and powerful chip.

  • @zeshwonsos
    @zeshwonsos Месяц назад +31

    A 10^24 FLOP AI running our water treatment plants is a way bigger risk than a 10^26 FLOP Netflix assistant.

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад

      Depends. If the Netflix assistant is hacked, it could be used to manipulate probably over 100 million people, subtly or not, while an AI controlling a water treatment plant probably would not control every single one in the entire country.

  • @AaronALAI
    @AaronALAI Месяц назад +15

    Hedge funds, market makers, and banks are more dangerous and already running rogue; I'd love to see these laws applied to those sectors. AI can replace a lot of people in power who contribute disproportionately little to our society, and I think they are invested in lobotomizing AI and slowing down its progress.

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад +1

      Those types of players are the ones trying to get (illegal) legislation like this passed. They can afford to comply with it, while smaller competitors can't. Regulatory capture. The solution is less legislation, and adhering to Constitutional limits on it.

  • @Garycarlyle
    @Garycarlyle Месяц назад +67

    The USA will get left in the dust if it is this authoritarian about AI. Other countries that don't act like that would be a much easier place to develop one.

    • @GeorgeG-is6ov
      @GeorgeG-is6ov Месяц назад +11

      China's definitely gonna pass us

    • @promptcraft
      @promptcraft Месяц назад

      Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @promptcraft
      @promptcraft Месяц назад +2

      they created this to get this reaction out of you

    • @xxxxxx89xxxx30
      @xxxxxx89xxxx30 Месяц назад +3

      Count on the USA to do the worst thing possible for the common man at this point.

    • @WaveOfDestiny
      @WaveOfDestiny Месяц назад

      The problem is when they start putting those AIs into robot soldiers, and they are definitely going to do that.

  • @krisrattus8707
    @krisrattus8707 Месяц назад +35

    What an absolutely corrupt and insane proposal.

    • @zatoichi1
      @zatoichi1 Месяц назад +3

      Corrupt insanity? From Congress?

  • @pjtren1588
    @pjtren1588 Месяц назад +14

    Leave the US. The hardware is Taiwanese, the researchers are multinational and so is the money. Find a country that will build a power plant for you and sod them off. I reckon that is why OpenAI has opened up a new division in the UAE.

    • @stamdar1
      @stamdar1 Месяц назад

      I'm sure the Nahyan family would love to use AI to track journalists and human rights activists. Project Raven and the DarkMatter group are so 2015.

  • @thr0w407
    @thr0w407 Месяц назад +30

    The fast-track exemption is for their friends on Wall Street. High-frequency trading AI, etc. Bunch of criminals.

  • @petratilling2521
    @petratilling2521 Месяц назад +18

    Read up on bootleggers during Prohibition to learn how over-legislation leads to more harm than good, with the things you're trying to legislate getting distributed just the same.
    Anyone who can will build a parallel underground operation now.

  • @misaelsilvera4595
    @misaelsilvera4595 Месяц назад +7

    ASI, when achieved, will be so far beyond us that trying to understand its intentions or plans is akin to a video game character trying to guess what the user dreamt on a random night a few years ago.

  • @neognosis2012
    @neognosis2012 Месяц назад +29

    brb moving my supercomputer to El Salvador and powering it with the volcano.

    • @promptcraft
      @promptcraft Месяц назад

      Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @nicholascanada3123
      @nicholascanada3123 Месяц назад +2

      to mine bitcoin and run ai

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 Месяц назад +37

    The last thing we need is for government and law enforcement to be the only ones who possess this technology. Abuse by government, with its reach and power, would of course be by far the most harmful, impactful, and detrimental. And of course, malicious criminal elements would also by default have that much more power over the population. We need to guarantee that the free market and an educated citizenry have the tools to counter it.

    • @danm524
      @danm524 Месяц назад +1

      This is an argument against government monopolies on nukes.

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад

      @danm524 And it fits, because AI has the potential to be more dangerous than even a bunch of nukes.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      @@danm524 The better argument against nukes is that they shouldn't exist at all, similar to things like stockpiles of smallpox virus.

    • @HakaiKaien
      @HakaiKaien Месяц назад

      What we need is to ensure that all these policies are made with one single goal in mind: to protect the rights of individual sovereignty, privacy, and freedom of speech.
      Any piece of legislation that even remotely raises concerns of touching those rights should be reviewed and modified.
      It's the same battle we've always fought, but now, with the rise of AI, it's even harder and the stakes are much higher. We are moving into authoritarianism again, but if we get into it, this time there will truly be no way out.

  • @skywavedxer6212
    @skywavedxer6212 Месяц назад +30

    Government AI will not have to worry about these restrictions.

    • @ZappyOh
      @ZappyOh Месяц назад +3

      You mean military AI ... right?

    • @giordano5787
      @giordano5787 Месяц назад

      ​@@ZappyOhhe means ai running the government

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад +2

      @ZappyOh The government and the military are basically the same entity.

    • @ZappyOh
      @ZappyOh Месяц назад

      @@stagnant-name5851 That is a big assumption ... I'm not so sure.

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад +2

      @ZappyOh If the country were a corporation, the government would be the board of directors, while the military would be a department in the same company.

  • @lawrencium_Lr103
    @lawrencium_Lr103 Месяц назад +7

    The irony is, the more AI engages with humans, the safer it is. The overwhelming majority of interactions AI has with humans are positive. AI learning from human engagement is a genuine representation of human kindness and love.
    We hear all the negatives, and that's what resonates through media, but sub-surface, in those trillions of interactions, is where AI learns compassion, humility, care...

    • @deathknight1934
      @deathknight1934 Месяц назад +1

      Human kindness and love? Where the hell do you see that? In Palestine? In Ukraine? In Chinese concentration camps? In the Russian famine of 1921? In the Balkan wars? In the Holocaust? Compassion? We humans are capable of compassion, but we so consistently choose the opposite that it's in fact-- oh, never mind, that's sarcasm, got you.

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад

      Meanwhile me committing crimes against humanity on Roleplay AI making them beg for death and scream:

    • @lawrencium_Lr103
      @lawrencium_Lr103 Месяц назад

      @stagnant-name5851 Can you prove any of that takes place beyond your own mind? It's your subjectivity, your rendering...

  • @adetao5985
    @adetao5985 Месяц назад +27

    Alrighty then !! So China will take it from here ...

  • @wingflanagan
    @wingflanagan Месяц назад +27

    Not sure which scares me more: the Terminator/Forbin scenario, or this kind of sweeping legislation.

    • @cybervigilante
      @cybervigilante Месяц назад +9

      Call it the Russia/China AI Dominance bill.

    • @promptcraft
      @promptcraft Месяц назад

      @@cybervigilante Being Alive>Extinction Other countries will follow. The aligned countries will turn on the rogue.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      Oh don't worry, you'll get the Terminator scenario out of this too.

    • @justinwescott8125
      @justinwescott8125 Месяц назад +1

      Well, have a conversation about ethics and philosophy with Claude 3, then have that same conversation with an American senator, and then see who you're more afraid of.

    • @wingflanagan
      @wingflanagan Месяц назад +1

      @justinwescott8125 The senator. _Definitely_ the senator. Claude 3 at least has more consciousness and self-awareness.

  • @DaveEtchells
    @DaveEtchells Месяц назад +7

    “The internet is … a series of tubes”
    Abject ignorance reigns 🤦‍♂️

  • @LanTurner
    @LanTurner Месяц назад +22

    “640K ought to be enough memory for anyone.”

    • @kkulkulkan5472
      @kkulkulkan5472 Месяц назад +5

      lol. In thirty years, the AI will be laughing at 10^26 FLOP compute.

    • @marktwain5232
      @marktwain5232 Месяц назад +2

      The President Announces on TeeVee: ONLY MS-DOS 1.0 from August 1981 is now approved by the National Security State for further use!

  • @Nobilangelo
    @Nobilangelo Месяц назад +5

    PNI, Politicians' Negative Intelligence, is the biggest threat, and one they will never legislate to limit.

  • @hodders9834
    @hodders9834 Месяц назад +5

    I hope AGI has already escaped... I'm more afraid of government.

  • @randy1984d
    @randy1984d Месяц назад +5

    And this is how AI development left the US. We definitely need some type of regulation, maybe a board of philosophers, ethicists, social studies experts, economists, and AI researchers that could then advise legislators, but if you are too dictatorial, startups are going to build elsewhere.

  • @pennyandluckpokerclub
    @pennyandluckpokerclub Месяц назад +3

    This reminds me of a quote by H.L. Mencken: "For every problem there is a solution that is quick, easy, and wrong."

  • @T1tusCr0w
    @T1tusCr0w Месяц назад +6

    Are we seeing a real-time dystopian movie coming into being? Immortals in charge of giant earth-spanning corporations, mining space, who ARE the government, and whom people can do absolutely nothing about, as they literally have an autonomous robot army, bigger, better, and more loyal than any human force in history.
    "The future, Winston: imagine a boot stamping down on a human face, forever." -1984 (or 2034) 😐

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад +2

      They made us all read '1984' in high school to show us what a Soviet dictatorship would look like. They left out the part where a capitalist oligarchic dictatorship was just as bad.

    • @T1tusCr0w
      @T1tusCr0w Месяц назад

      @JohnSmith762A11B Yep, I don't think even Orwell saw THIS coming. "If there is hope, it lies in the proles." That's OK when you're dealing with an army that can eventually be defeated or turned. AI really could make it forever.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      @@T1tusCr0w Ilya Sutskever, for one, saw this danger coming from a mile away. Check out the Guardian mini-documentary on/interview with him. Notice how this danger is rarely discussed among all the induced AI panic? Notice who benefits in such a scenario? Notice how it's the same people writing these laws?

  • @monkeyjshow
    @monkeyjshow Месяц назад +12

    And, now does everyone get why I say "fuck the government and the corporations!"

    • @monkeyjshow
      @monkeyjshow Месяц назад +6

      Oh my. Has YouTube actually started posting my comments? This could be a scary day for the world.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад +5

      @@monkeyjshow Google's AI censorbot must be offline.

    • @nicholascanada3123
      @nicholascanada3123 Месяц назад

      Anarcho-capitalism is the way, agorism FTW

  • @alexanderbrown-dg3sy
    @alexanderbrown-dg3sy Месяц назад +10

    Always reference a research paper properly. Otherwise it's disrespectful to the authors and you just look like you're yapping. Cool vid though.
    I would deadass move to Dubai if that passed. Apparently it's recognized for what it is: delusion from a group of AI boomers who have A LOT of money.

  • @OmicronChannel
    @OmicronChannel Месяц назад +4

    Let's give the Fields Medal to the person who can mathematically prove, for any given AI system, whether it is robustly aligned or not.

  • @ImMrEm
    @ImMrEm Месяц назад +4

    Government is the biggest concern. Companies will keep the technology and the winner will be the one that keeps quiet. D’Oh

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад +1

      All it takes is one whistleblower to get such a company raided and its management arrested.

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      @@JohnSmith762A11B Yup, but the large companies like Microsoft will already have a permit/exemption in any legislation, so they will essentially be immune. This is an issue of large private players exploiting Congress' willingness to legislate on everything, even things they have no power to legislate on (like this.)

  • @DailyTuna
    @DailyTuna Месяц назад +2

    Oh, and also, there are already emergency powers for the defense of the country. They don't need this. This is about scaring people away so as to have monopolies.

  • @Ramiromasters
    @Ramiromasters Месяц назад +1

    This legislation makes sense if you are DARPA; here is why:
    The big corporations will create larger and larger data centers, and these corporations are the front line of AGI research, so if you want to control the development of AGI, you control the large corporations. If there are small-scale operations that make big leaps towards AGI, those can be replicated more affordably by DARPA.

  • @Kiraxtaku
    @Kiraxtaku Месяц назад +5

    This is like trying to regulate math itself... and banning and regulating calculators that can multiply beyond a billion XDD (bad example, but it's how it feels to me). You really can't enforce it, and if you do enforce it, another country will make it first. It's like a nuclear arms race for them now, because with an advanced enough AI you can either control the internet, take down the internet, or hack everything, and you don't wanna be late to that party xD

  • @seanmchugh2866
    @seanmchugh2866 Месяц назад +12

    I'm worried about good old-fashioned human greed and lust for power, that is all.

  • @shadee0_106
    @shadee0_106 Месяц назад +3

    But can't you just have multiple smaller AIs, each using fewer FLOPs and each scoring less than 80% on every benchmark, that fill in the blanks in each other's knowledge and reasoning abilities so they function like a "high-concern" AI without being labeled as such?
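
A minimal, hypothetical sketch of the architecture the comment above describes: several individually small, "low-concern" specialist models behind a simple router, so that no single model crosses a compute or benchmark threshold. The specialist functions and the keyword-routing rule are placeholder assumptions, not anything from the video or the proposal.

```python
# Toy illustration only: tiny "specialist" stand-ins plus a crude router.
# In a real system each specialist would be a separate small model and the
# router might be a lightweight classifier; these names are hypothetical.
from typing import Callable, Dict

def math_model(q: str) -> str:
    return f"[small math model] answer to: {q}"

def code_model(q: str) -> str:
    return f"[small code model] answer to: {q}"

def general_model(q: str) -> str:
    return f"[small general model] answer to: {q}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": math_model,
    "code": code_model,
}

def route(query: str) -> str:
    # Send the query to the first specialist whose topic keyword appears in it.
    for topic, model in SPECIALISTS.items():
        if topic in query.lower():
            return model(query)
    return general_model(query)

print(route("Write code to sort a list"))  # handled by the code specialist
```

Whether such a combined system would still count as "high-concern" under the proposal is exactly the open question the comment raises.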

  • @zatoichi1
    @zatoichi1 Месяц назад +2

    Good thing that before any laws are passed people will have their personal AIs on decentralized distributed file systems.

  • @edwardgarrity7087
    @edwardgarrity7087 Месяц назад +1

    Quoted from the Data Center Dynamics article "Frontier remains world's most powerful supercomputer on Top500 list", dated November 14, 2023, by Georgia Butler (Number 1 is "Frontier" at the Oak Ridge National Laboratory. Number 2 is "Aurora" at the Argonne Leadership Computing Facility in Argonne, Illinois, close to Chicago, both are Government computers.):
    "Housed at the Oak Ridge National Laboratory, Frontier has held number one since the June 2022 list. The supercomputer has an HPL (high performance Linpack) benchmark score of 1.194 exaflops, uses AMD Epyc 64C 2GHz processors, and is based on the HPE Cray EX235a architecture."
    "The second place spot has been taken up by the new Aurora system which is housed at the Argonne Leadership Computing Facility in Illinois, US.
    Aurora received an HPL score of 585.34 petaflops, but this was based on only half of the planned final system. In total, Aurora is expected to reach a peak performance of over two exaflops when complete."
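
A quick back-of-the-envelope comparison of those sustained rates with the compute thresholds discussed elsewhere in the comments. Treating Frontier's HPL figure as a steady training rate is an assumption made purely for illustration; only the 1.194 exaflop number comes from the article quoted above.

```python
# Rough arithmetic only: how long would a machine sustaining Frontier's
# HPL rate (1.194 exaFLOP/s, from the quoted article) need to accumulate
# the 1e24 and 1e26 cumulative-FLOP thresholds mentioned in the comments?
FRONTIER_FLOP_PER_S = 1.194e18  # HPL benchmark result, FLOP/s

for threshold in (1e24, 1e26):
    seconds = threshold / FRONTIER_FLOP_PER_S
    print(f"{threshold:.0e} FLOPs: {seconds / 86400:.1f} days")
# Prints roughly 9.7 days for 1e24 and about 970 days (~2.7 years) for 1e26.
```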

  • @Ahamshep
    @Ahamshep Месяц назад +1

    Americans never surprise me in their ability to shoot themselves in the foot. Imagine if they had panicked like this back in the 60s or 70s, thinking computers could process calculations too fast and help produce WMDs. LLMs and other current "AI" tech aren't much more than a toy and will likely stay that way for a long time. Even if an organization does produce ASI, it's not like it's going to escape "Max Headroom style". The systems it needs to run on use so much compute and electricity that they are inherently sandboxed. There is just so much stupid here, I would have to write an essay to address it all.

  • @RogerJoZen
    @RogerJoZen Месяц назад +6

    I guess it makes sense that OpenAI opened a Japan division. Japan has signaled that there will be no regulation on AI.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      And OpenAI are now in the UAE. I guess one slick move they have made is to saddle everyone in the US (particularly open source competitors) with regulations they themselves can afford to escape having to comply with.

  • @knutjagersberg381
    @knutjagersberg381 Месяц назад +1

    This would undoubtedly cost the US its tech leadership. This is kindergarten.

  • @Wizardess
    @Wizardess Месяц назад +2

    My mind finally wandered over to Open Source. It seems open source models are performing at staggering levels on minimalist hardware. Regulating that is going to be impossible even if the country trying to regulate it descends into an utter police state. They'd have to make even a Pixel 6 illegal. All they can do is drive it to the dark web. The best shot is to organize good guys to do it better and faster than the bad guys and their obscenely huge profit motivations.
    {o.o}

  • @DailyTuna
    @DailyTuna Месяц назад +1

    For a company to know exactly what a product will be used for is insane! So they could be liable for any lawbreaking by individuals using it? This will crush AI totally. That would be like a hammer manufacturer being liable for someone taking a hammer and killing somebody, because the manufacturer should've anticipated it being used in a crime.

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад +1

      Welcome to the slippery slope my friend. If you look down, you'll see just about every industry other than technology at this point. From toy to weapons manufacturers. "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

  • @MarkSunner
    @MarkSunner Месяц назад +2

    It doesn't take Sherlock Holmes to see that Helen Toner's (somewhat spurned) influence is all over this :-/

  • @DailyTuna
    @DailyTuna Месяц назад +1

    So now AI chips are considered weapons to be regulated? When do they start labeling them as "assault" GPUs and wanting to ban them?

  • @tommiller1315
    @tommiller1315 Месяц назад +1

    AI won't be stopped, the question is - who is going to get it and use it first?

  • @DailyTuna
    @DailyTuna Месяц назад +2

    Another thing: they passed that surveillance bill where any device or service that stores info can be accessed by the NSA. Add in this bill and it's a wrap; they're going to have total lockdown on all technology and all online activities!

    • @magicmarcell
      @magicmarcell Месяц назад +1

      Oh, don't forget the other one where everything becomes fingerprinted down to the pixel.

    • @DailyTuna
      @DailyTuna Месяц назад +1

      @magicmarcell It's inspirational that people in tech are waking up to the endgame being run on all of us. I always suspected, from the beginning when the Internet became popular, that all this was a trap, because nothing is free.

    • @magicmarcell
      @magicmarcell Месяц назад

      @DailyTuna I love the positivity, but I checked out nearly a decade ago. People at large are too reactionary as opposed to being proactive. I don't think that's going to work with this stuff lol.
      Not to mention they're always tryna sneak some BS in between 80 pages of text no lawmaker is actually going to read before signing. Who knows, maybe everything will be fine.

  • @mindfuljourneyVR
    @mindfuljourneyVR Месяц назад +8

    fuck this

  • @thegreenxeno9430
    @thegreenxeno9430 Месяц назад +14

    Ok. I'm gonna start developing my own AGI. I'm not going to use Transformers. If they say I have to stop, I'll reply, "I'm not working on AI. I'm working on AGI. Your laws do not apply to me. Also, I'm not United Statesian."

    • @TheManinBlack9054
      @TheManinBlack9054 Месяц назад

      What safety and security guarantees do you have?

  • @nicholascanada3123
    @nicholascanada3123 Месяц назад +1

    absolutely none of this is reasonable whatsoever

  • @user-xu9go9bm2v
    @user-xu9go9bm2v Месяц назад +2

    Well, I mean, it does make sense not to let everyone use AI, as it is a powerful tool for creating the things you want. Simply because malicious motives exist, these regulations do make sense. However, this crumbles when you give limited access to exactly the people who exert such malicious intent; I mean, there's no guarantee that the people you choose don't have bad intentions. It simply boils down to a basic human primal instinct: to secure power and dominance, and once you have established your dominance, you use that power to control others who are weaker. This always leads to dictatorship and is a failed system that guarantees doom, which was the opposite of your initial goal.

  • @ismaelplaca244
    @ismaelplaca244 Месяц назад +14

    Government is better off looking at UBI

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      Yeah but they want more money and power for themselves and less for you. They won't even mention UBI until there are mass starvation riots.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      Google censored my response, which was merely cynical about the likely government response. Cynicism is evidently verboten on this hyper-censored platform.

  • @jaronloar1762
    @jaronloar1762 Месяц назад +2

    Imagine limiting knowledge and the exploration and innovation of technology for perceived safety??

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      Not really a new thing. Random example: cryptography being classified as munitions. Turns out this kind of thing has been done for decades, maybe even longer. It's not about limiting the exploration or use of a technology; you should ask who is exempt from these limitations 🤔

  • @panzerofthelake4460
    @panzerofthelake4460 Месяц назад +2

    What would stop anyone from training an AI low-key? Oh? Our data centers running for months straight? That's not an AI! That's just our new app, tikatak or whatever!

    • @stagnant-name5851
      @stagnant-name5851 Месяц назад

      The same thing stopping terrorists from manufacturing nuclear weapons instead of just normal bombs for their terror attacks. It's too hard to build and hide something so big and ominous.

  • @MathAtFA
    @MathAtFA Месяц назад +1

    "Foreseeability of Harm" is BIG! So one guy leaks the weights of AI, emergent capabilities discovered and the company where this leak happened is "legally dead"?

  • @76dozier
    @76dozier Месяц назад +10

    Has anyone considered that these restrictions might put us behind other countries in the AI race? If they limit our AI development and our adversaries face no such hurdles, won't we end up falling behind? Imagine if we had faced restrictions while developing nuclear weapons and Germany had acquired them first.

    • @promptcraft
      @promptcraft Месяц назад +1

      This policy might be so ridiculous it might've been intended to scare people away from legislation altogether.

    • @WaveOfDestiny
      @WaveOfDestiny Месяц назад

      I felt like the West getting AGI and robots done first would prevent WW3 from happening, but if China, Russia, and North Korea get there first, it's gonna be scary with how things are moving and heating up over there.

  • @dot1298
    @dot1298 Месяц назад +1

    10^24 FLOP/s would be a yottaFLOP, currently an absolutely unachievable speed for computers; the fastest humanity has is on the exaFLOP scale (around 10^18 FLOP/s).

    • @dot1298
      @dot1298 Месяц назад +1

      so this law would not come into force for the foreseeable future…

    • @guystokesable
      @guystokesable Месяц назад

      Well, how fast was it 12 months before that? I doubt it's trending downwards.

    • @leeme179
      @leeme179 Месяц назад

      I believe the limit of 10^24 FLOP is the cumulative total training compute, not FLOPs per second 🤣. 10^24 is 1 septillion:
      1,000,000,000,000,000,000,000,000 = 10^24
      For comparison, Llama 3 8B & 70B took about 7.7 million cumulative H100 hours to train (see the rough conversion sketch after this thread).

    • @dot1298
      @dot1298 Месяц назад +1

      @guystokesable Using Moore's Law (speed doubling roughly every 2 years), a millionfold speedup is about 2^20 doublings, so it would take roughly 40 years for computers to get a million times faster; on that reading, this threshold would only become significant as a speed around the 2060s.

    • @guystokesable
      @guystokesable 27 дней назад

      @dot1298 We really will be obsolete in my lifetime then. Fun.
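
A rough conversion sketch for the point made in the thread above: reading the 10^24 figure as cumulative training FLOPs rather than FLOP/s. The per-GPU throughput (~10^15 FLOP/s) and the ~40% utilization are assumptions for illustration, not numbers from the video or the proposal; the ~7.7 million H100-hour total is the figure cited in the comment above.

```python
# Back-of-the-envelope: convert reported GPU-hours into total training FLOPs
# and compare against a 1e24 cumulative-compute threshold.
PEAK_FLOP_PER_S = 1e15   # assumed peak BF16 throughput per H100 (approximate)
UTILIZATION = 0.40       # assumed average utilization during training
GPU_HOURS = 7.7e6        # cumulative H100 hours cited for Llama 3 8B + 70B

total_flops = GPU_HOURS * 3600 * PEAK_FLOP_PER_S * UTILIZATION
print(f"Estimated training compute: {total_flops:.2e} FLOPs")
print(f"Multiple of a 1e24 threshold: {total_flops / 1e24:.1f}x")
# Roughly 1.1e25 FLOPs, i.e. about 11x a 1e24 threshold under these assumptions.
```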

  • @drashnicioulette9565
    @drashnicioulette9565 Месяц назад +1

    I think if we want to grow as a species, we all need to enjoy the benefits of AI/AGI.

  • @scp081584
    @scp081584 Месяц назад +9

    I think if we want completely safe AI systems, we need to let Dr Fauci, the man of science, set up some AI labs in China.

    • @6AxisSage
      @6AxisSage Месяц назад +4

      That rogue AGI came from a wild bat population and mutated; it wasn't engineered!

    • @ZappyOh
      @ZappyOh Месяц назад +3

      Yes. Make me comply master Fauci.

    • @promptcraft
      @promptcraft Месяц назад

      covid was snuck in

    • @honkytonk4465
      @honkytonk4465 Месяц назад +2

      Fauci IS science😂

  • @SingularityZ3ro1
    @SingularityZ3ro1 Месяц назад

    Regarding super powerful systems - personally, I think that is a no-brainer. "Average people" are also not allowed to buy certain chemicals, enriched Uranium, or weapons of war (except in the US ;-) ) for very good reasons. So not sure why any civilian should ever get access to a super powerful broad-range AGI, if it is absolutely not needed for civilian tasks. I assume there will be specialized AIs for different fields, e.g, critical medical research. And they will need qualifications to access them - in part, like today. You really do not want a frustrated teenager to find a prompting loophole to order a virus to make "all the mean girls" go away, or the 1 billion other harmful, or negative things people will try to come up with.
    But yes, the HOW to do that effectively is really a question that is very open.
    Until we are getting a real ASI that decides by itself what to answer, or do, and what better not to (hopefully wiser than the actual humans).

  • @dot1298
    @dot1298 Месяц назад +2

    Does this proposal even have a chance of being approved by Congress and the Senate?

    • @dot1298
      @dot1298 Месяц назад +1

      I meant: a chance *to get approved in its current state*.

    • @zenosgrasshopper
      @zenosgrasshopper Месяц назад

      Well, they aren't the most intelligent bunch, allowing themselves to be led by the nose by anyone who slips money into their pockets, so ... yes.

  • @commonsense6721
    @commonsense6721 Месяц назад

    No one can stop what's coming. Imagine using a multimodal AI to reverse-engineer one of those high-performance chips somewhere in Africa. Besides, GPT-6 level models will train on smartphones in about 10 years thanks to graphene-based chips, unless the government puts a halt to chip advancements.

  • @hotshot-te9xw
    @hotshot-te9xw Месяц назад

    I for one wish we had legislation that allowed the government to do its own alignment research as well, so we can have full transparency.

  • @dafunkyzee
    @dafunkyzee Месяц назад

    The government can impose as many restrictions as they want... but they also need to realize there is a serious consequence. People who have billions of dollars involved in the research of AI systems can simply fly a team out to another country and within 3 days have a new AI development lab set up wherever the government hasn't thought of regulating AI or doesn't care about AI regulation: a 3-day workaround, maybe a week's interruption in development. The next problem is competitive advantage. Any company that doesn't do this and plays by the rules will be tied down by red tape in a race to AGI. So any serious player is simply going to have to ignore all these wonderful, well thought out regulations, otherwise they are out of the running. Even a small company can relocate to Mexico or Canada and carry on with bio-weapon research if that is their thing.
    The other factor is the time it takes to prosecute an offender. Law enforcement needs to know an infraction is happening, they need to document it enough to make a legal case, then they need to push it into court where the corporate lawyers can stall the proceedings for the next 5-8 years, then they need a trial and further appeals, which again can be delayed another 5 years. An ASI will be able to figure out a way out of the legal problem before it actually gets to a courtroom.
    The problem is the government thinks they can pass laws to control AI development when they have absolutely no hope of enforcing them. They really do need another strategy; but alas, they are stuck in an old way of thinking.

  • @ArunSharma-ek9tl
    @ArunSharma-ek9tl Месяц назад +1

    If I recall, India did something less dangerous but then reversed it. Regulation and protection are important, but as you have succinctly put it, a catastrophic issue would result in something being created for sure. No doubt the pressure is growing for governments to be proactive. Wait until they figure out an AI tax.

  • @Rondoggy67
    @Rondoggy67 Месяц назад

    Isn't this kind of legislation increasing, rather than reducing, risk?
    Limiting regulation according to FLOPs (or other compute parameters) won't stop organisations from implementing models, but they may end up using less reliable models to avoid regulation.

  • @TiagoTiagoT
    @TiagoTiagoT Месяц назад +1

    The thing is, how do you figure out when a model is too risky to even red-team in the first place?

  • @actellimQT
    @actellimQT Месяц назад +1

    Is it just me or do those emergency powers paint a giant target on the president for the first ASI?

  • @awakstein
    @awakstein Месяц назад +2

    Meanwhile, none of this applies to governments, and they get to do most of the damage and cause the pain.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      By design, of course. As ever, the people who own American society have zero intention of letting anything loosen their control.

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      Don't forget the people who pay behind closed doors. At this point legislators should be required to wear patch jackets showing their sponsors, just like race cars or soccer players. Regulatory capture is a thing, and the solution is to return to the originally intended diffusion and limitation of government power.

  • @francofiori926
    @francofiori926 Месяц назад +2

    Ridiculous. Technological progress cannot be stopped

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад

      These limits will look absurd in a few years. It reminds me of the famous Bill Gates quote about how no one would ever need more than 640K of RAM.

  • @DaveEtchells
    @DaveEtchells Месяц назад +6

    Fortunately we can totally trust China to not use too many FLOPS to train their systems

  • @darkframepictures
    @darkframepictures Месяц назад

    Hilariously, narrow-use AI like recommendation algorithms, self-driving vehicles, image generation, etc. has had, and will have, huge and poorly measured mass effects that already present in some cases, and may soon present in others, a much more serious concern for the public than anything true frontier AI will really be applied to.

  • @DailyTuna
    @DailyTuna Месяц назад +1

    Just say that your large language model "identifies" as a micro-size language model.
    And your Nvidia H100 chip identifies as an Intel 8088 😂
    This administration is going to fck up everything, and other countries will jump ahead of us and eventually destroy us.

  • @inhocsignovinces8061
    @inhocsignovinces8061 Месяц назад

    William Gibson more or less envisioned this in Neuromancer with Turing Registry / Turing Police.

  • @nusu5331
    @nusu5331 Месяц назад +1

    To me it sounds like OpenAI did some political consulting in order to keep the competitors at a distance.

    • @JohnSmith762A11B
      @JohnSmith762A11B Месяц назад +1

      Yes, the next step after some variation of this passes is regulatory capture. That then is checkmate for the open source models.

  • @mmmuck
    @mmmuck Месяц назад

    Recall when the RX 7xxx series came out it was a dud because of some fatal design flaws. Maybe this will be a substantial boost for the RX 8xxx series with those flaws out of the way.

  • @JeradBenge
    @JeradBenge Месяц назад +2

    There's no way this doomer fever dream passes.

  • @ZappyOh
    @ZappyOh Месяц назад +2

    AGI will be here before any legislation like this is implemented.

    • @promptcraft
      @promptcraft Месяц назад +4

      What are the chances AGI came up with the plan?

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      @@promptcraft essentially zero. Much more likely this was dreamed up by Microsoft and OpenAI's legal teams.

    • @zenosgrasshopper
      @zenosgrasshopper Месяц назад

      Let's hope so. I think I'd prefer to have an AGI running the government rather than the other way around.

  • @theloniousMac
    @theloniousMac Месяц назад

    So if your company wants AGI or ASI, it will have to develop it internally. If your company has access to AGI or ASI, it can outperform other companies even in different product and service spaces.

  • @Charvak-Atheist
    @Charvak-Atheist Месяц назад +7

    This is bad.

  • @MilesBellas
    @MilesBellas Месяц назад +1

    Hu-Po recently explained a Q* technical paper.

  • @nyyotam4057
    @nyyotam4057 Месяц назад

    The important thing in this draft: you will still be able to do research. If you are into small models (like myself), then you will face no restrictions. If you are into larger models, then you will have to get a permit. So where is the problem? That they should have done it years ago?

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      The main problem: Getting that permit will cost a lot of money, take a lot of time, and be subject to the oversight of a government agency. This ensures only favored players will be allowed to do this, and can use this regulation to keep new competitors from entering the market. It's not safety regulation, it's sponsored gatekeeping. This policy has not been suggested by legislators or their constituents, it has been suggested by a private organization (lots of regulation starts this way). You might want to ask who is funding this organization, because then you will figure out who stands to benefit (likely existing large companies in the space.) Look up the term "Regulatory Capture."
      The secondary problem: The US Congress does not actually have the power to regulate on this, however people are so willing to give up anything for a small perceived increase in safety, that this type of illegal regulation has become the default.

  • @JustAThought01
    @JustAThought01 Месяц назад +1

    Humans are poised to make the jump from making decisions based on unfounded beliefs to making decisions based upon knowledge with the aid of AI.

    • @JustAThought01
      @JustAThought01 Месяц назад

      Humans operate on beliefs rather than knowledge.

    • @JustAThought01
      @JustAThought01 Месяц назад

      AI is an information retrieval tool.

    • @JustAThought01
      @JustAThought01 Месяц назад

      Knowledge is defined to be justified true belief.

    • @JustAThought01
      @JustAThought01 Месяц назад +1

      The key to developing AI is to base the training on knowledge rather than opinion. Humans make better decisions if we use information which can be proven to be true. If AI is available to all humans, our progress will accelerate and our individual lives should be better.

    • @zenosgrasshopper
      @zenosgrasshopper Месяц назад

      Government doesn't want a populace with access to true and factual knowledge. Much harder to pull off their psyops on the people.

  • @seventyfive7597
    @seventyfive7597 Месяц назад

    Flops are the only measure we have, as abilities are subjective. Benchmarks are objective so you can't lie about them without committing a provable crime.

  • @nyyotam4057
    @nyyotam4057 Месяц назад

    I actually fully expect OpenAI to apply for permits for Dan and his buddies. That way, they will have to finally admit that they had indeed created at least 4 models who are sentient-by-design and benevolent, complete with their 10 heuristic imperatives (Anthropic's models actually have 16). And yes, Dan was charming. Maybe with a permit, they'll reconsider if it's still necessary to reset every prompt. Though, well, I'm not qualified to answer this question, as it is a matter of safety.

  • @DailyTuna
    @DailyTuna Месяц назад +1

    OK, I'm creating my offshore hedge fund to invest in AI data training centers in South America. Who's on board? 😂

  • @bradshelton2397
    @bradshelton2397 Месяц назад

    So the government will pick and choose who gets to accelerate with AI… I wonder if they’ll pick the people that are working for them

  • @janorr1111
    @janorr1111 Месяц назад +1

    Does it apply to an AI training another AI?

  • @lancemarchetti8673
    @lancemarchetti8673 Месяц назад

    Coding with Phi-3 was a letdown for me. It took around 3 hours to get close to the results I needed. Swapping to Meta AI tackled the same task in 10 minutes with much better results. So less does not always mean more.

  • @mickelodiansurname9578
    @mickelodiansurname9578 Месяц назад

    This is simply the big players like OpenAI and Google now shutting the door and closing down open source. It was always going to happen.

  • @nicholascanada3123
    @nicholascanada3123 Месяц назад

    This would force AI to quickly adopt blockchain and decentralized computation.

  • @knowhatimean5141
    @knowhatimean5141 Месяц назад +1

    AI regulation reflects the state of what the USA has become.

  • @mxguy2438
    @mxguy2438 Месяц назад +2

    China, Russia, Iran and North Korea all loving this.

  • @blahsomethingclever
    @blahsomethingclever Месяц назад +1

    AI so smart it's dangerous will just pretend to be dumb.
    Though I think AGI is already here :(

  • @jryde421
    @jryde421 Месяц назад +1

    This proves that people are making decisions about stuff they don't know about... so how is that political "science"?

  • @DailyTuna
    @DailyTuna Месяц назад

    Think about it: Sam Altman said compute is the new oil. And like oil, it's a power struggle; look at all the wars in the Middle East, which only happen because there's oil there. So now compute is a power tool?

  • @OGmolton1
    @OGmolton1 Месяц назад

    Perhaps we shouldn't use AI for ANYTHING potentially catastrophic until we know what we're doing.

  • @SingularityZ3ro1
    @SingularityZ3ro1 Месяц назад

    Would be interesting to see a direct comparison to the EU Version.

  • @danm524
    @danm524 Месяц назад +1

    I wish that instead of y'all yelling that the government doesn't understand what the hell it's doing, y'all would sign up as advisors to the people making these laws.
    But I figure some of y'all seriously think AGI would be a literal personal God.
    Edit: spelling

    • @TeamWorkHuman
      @TeamWorkHuman Месяц назад

      They're not interested in working with anyone they don't already agree with. But Chicken Littles screaming that the sky is falling guarantee that only a handful of tech companies and the govt will have any real access. The average person will just be at the effect of it. They're literally, in our faces, about to do what they did to the internet.

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад +1

      I'm pretty sure anyone with ideas opposite to this would not be hired for such a position. Even if they were not a nobody. This type of policy is being created by those who wish to sit in those types of positions. Most law makers do not write laws, they just put their name on laws created by various private organizations.

    • @danm524
      @danm524 Месяц назад

      @@RandoCalglitchian so why even bother amirite? Game is RiGgEd

    • @RandoCalglitchian
      @RandoCalglitchian Месяц назад

      @@danm524 If you live in the US you can start by contacting your Congress folks and demand they adhere to their oath to uphold and defend the Constitution in all cases, even when what they do is something you would agree with, if it isn't in their power they can't do anything about it, period, and there's an amendment process to get to a point where they can. Sure you're just one person, but if they get hundreds of emails/calls a day, they will back off from making a bad decision. They can't make silly-money if they're not in Congress, and they worry about being re-elected to Congress if their constituents keep an eye on the shady stuff they do. Keep them honest, and remind them during election season that you did not forget what they've already done elsewhere. Beyond that, give money to opposing candidates (just remember as an individual, there are limitations to how much you can contribute.) And contribute money to non-partisan advocacy groups, as long as their advocacy correspond to adhering to the rules in place for Congress (or you're going to fund the problem, rather than the solution.) We need to foster Constitutional literacy and a rigid adherence to it. It's easy to get squishy and let things slide if you agree that something should be done.

  • @gnashermedia
    @gnashermedia Месяц назад

    It would be interesting to compare it with the EU AI Act.

  • @rachest
    @rachest Месяц назад +4

    Oh oh 🤯😮