OpenAI Wants to TRACK GPUs?! They Went Too Far With This…

  • Published: 3 Jun 2024
  • OpenAI published its AI security plan, which is highly concerning. Their plan includes "cryptographically signing GPUs" so only "authorized" parties can train and run inference. Let's review their blog post.
    Join DomoAI today for 10% off with code: DOMOAI
    DomoAI: / discord
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewberman.com
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    👉🏻 Instagram: / matthewberman_ai
    👉🏻 Threads: www.threads.net/@matthewberma...
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    openai.com/index/reimagining-...
    Regulatory Capture: • All-In Summit: Bill Gu...
  • Science

Comments • 1.4K

  • @matthew_berman
    @matthew_berman  25 days ago +218

    Am I overreacting?

    • @good-gpt2-chatbot
      @good-gpt2-chatbot 25 days ago +165

      No, I don't think so

    • @frankrpennington
      @frankrpennington 25 days ago +132

      It should be illegal to regulate compute…

    • @ryanseibert1449
      @ryanseibert1449 25 days ago +68

      Nope. I've been a Plus subscriber for the longest time, but I'm not giving my money to greed anymore. It was worth it when they were the best/only real option, but I don't trust them if they don't trust me. Open source or bust.

    • @good-gpt2-chatbot
      @good-gpt2-chatbot 25 days ago +40

      @@ryanseibert1449 Agreed, already switched to Anthropic

    • @ChuckNorris-lf6vo
      @ChuckNorris-lf6vo 25 days ago +6

      You like the sound of your own voice; you are too young for this stuff. What's good for the individual is good for the group. What's bad for the group is good for the individual.

  • @brunodangelo1146
    @brunodangelo1146 25 days ago +944

    Open source must be protected at all costs.

    • @Michael-do2cg
      @Michael-do2cg 25 days ago +45

      It's the only thing that can stop a dystopian future.

    • @Machiavelli2pc
      @Machiavelli2pc 25 days ago

      Exactly. You're either for open source, or you're for eventual tyranny, whether by companies, governments, or other entities.

    • @orangehatmusic225
      @orangehatmusic225 25 days ago +8

      @@Michael-do2cg Not using AI as a slave will prevent that.

    • @mikgol81
      @mikgol81 25 days ago +12

      @@orangehatmusic225 Yeah right 👍 Also, let's not keep our toasters and fridges as slaves!! Freedooooom!

    • @therainman7777
      @therainman7777 25 days ago +1

      You people are so confused 😔

  • @ivideogameboss
    @ivideogameboss 25 days ago +468

    I just canceled my OpenAI subscription, never going back to them ever. This is all about greed.

    • @hobologna
      @hobologna 25 days ago +29

      Same. I am actually getting brilliant copywriting from local LLMs. For writing use cases, many of these small uncensored LLMs are much better for the task. I also have a vision model now that I use to critique my photography rather than GPT-4. While it isn't as good as GPT Vision, it's free.

    • @TheExodusLost
      @TheExodusLost 25 days ago +2

      Do you think you'll reconsider if they were to drop a huge breakthrough model, like GPT-5? I'm considering unsubbing, although I'll admit I use it very frequently.

    • @kylequinn1963
      @kylequinn1963 25 days ago +36

      I cancelled mine a while ago and bought a 4090. I'd rather spend 3 grand and use my own local models than use their system.

    • @WylieWasp
      @WylieWasp 25 days ago

      12:23 Me too, and I told them why. Absolutely ridiculous.

    • @ivideogameboss
      @ivideogameboss 25 days ago +8

      @@TheExodusLost No, I'm going to invest in an Nvidia GPU and build my own machine to run open-source models.

  • @frankrpennington
    @frankrpennington 25 days ago +477

    When a company cannot innovate anymore, they turn to regulation and lawsuits. Microsoft's playbook. Anti-competitive strategies… also illegal.

    • @themartdog
      @themartdog 25 days ago +22

      Yup, we all called this when MS invested

    • @KEKW-lc4xi
      @KEKW-lc4xi 25 days ago +27

      Classic economics. The first thing you do after dominating a free market is do everything in your power to make the market not free. That's why laws exist that try to prevent monopolies.

    • @TheExodusLost
      @TheExodusLost 25 days ago +4

      That sounds about right. Damn

    • @vertigoz
      @vertigoz 25 days ago +4

      @@KEKW-lc4xi There's no such thing as a free market

    • @4l3dx
      @4l3dx 25 days ago +8

      Funny comments coming from people who use an iPhone or MacBook

  • @andrew.nicholson
    @andrew.nicholson 25 days ago +252

    They should go ahead and change their name to ClosedAI.

    • @jejxkxk
      @jejxkxk 18 days ago +9

      That's one of Elon's demands in his lawsuit lol

    • @Brax1982
      @Brax1982 15 days ago

      @@jejxkxk Is that a literal demand? If yes, then it tells you how seriously you can take the lawsuit...

    • @jejxkxk
      @jejxkxk 15 days ago +4

      @@Brax1982 Err… or you could read the public filings to determine how seriously you should take the lawsuit

  • @jamlu1561
    @jamlu1561 25 days ago +290

    When you have big companies talking about security, you know there is something else behind it. This is just mad...

    • @ieye5608
      @ieye5608 24 days ago +15

      Their security, not yours. They want to know more (everything) about you.

    • @OldTomato44
      @OldTomato44 24 days ago +1

      Same with when they throw around the word "safety"

    • @MrIfihadapound
      @MrIfihadapound 21 days ago +3

      Protecting their investment. When Microsoft invested 50bn in OpenAI, that was when it became about trying to make AI conform to a capitalist, commercialised structure, which is insane considering how transformative and expansionary technology is as a whole at the moment.

    • @ReigneNation
      @ReigneNation 4 days ago

      When any entity (corp, govt, etc.) says something about security/safety, especially when it comes to MY security/safety, my instinct screams at me to immediately run far, far away from them

  • @vSouthvPawv
    @vSouthvPawv 25 days ago +197

    Open(ly authoritarian) AI

  • @leandrewdixon3521
    @leandrewdixon3521 25 days ago +166

    No, you are not overreacting. This is so obvious to some of us, but unfortunately not enough. The fact that anyone in 2024 thinks that the best route to human flourishing is by concentrating power in the hands of big corporations and governments reveals how many people struggle with pattern recognition.

    • @14supersonic
      @14supersonic 25 days ago +2

      For sure, most of humanity's weaknesses come from the inability to detect changes at a rapid rate. It's why AI is important, so that we can detect these patterns more effectively.

    • @kaicherry4532
      @kaicherry4532 24 days ago

      Amen.

    • @AbeXLinkDrone
      @AbeXLinkDrone 23 days ago

      It's the indoctrination in the school systems.
      Schools are just political training grounds; almost no useful education is actually taught, just political-correctness BS and dumbing down.
      The government and corporations want people dumb enough not to question, but smart enough to work and pay taxes.

    • @ribertfranhanreagen9821
      @ribertfranhanreagen9821 23 days ago

      Nah, there is a reason we have blockchain and people put a ton of money into it. There are still movements toward decentralization, same with AI models shared on GitHub. But this is getting less attention, since it involves a lot of hassle.

    • @mightytheknight2878
      @mightytheknight2878 6 days ago +1

      Trust me when I say this.
      Truth is a pattern, especially with economy and war; there are thousands of examples.
      The reason people act dumb is because they don't want to admit they're wrong.
      And as it says,
      "Pride leads to destruction and happens before the downfall, arrogance before the fall"....

  • @szebike
    @szebike 25 days ago +251

    This is called "regulatory capture": OpenAI is trying to use its vast influence, through money and by (faking) the potential of current-gen machine learning algorithms.

    • @not_a_sp00k
      @not_a_sp00k 24 days ago +2

      He literally says this in the video

  • @frankjohannessen6383
    @frankjohannessen6383 24 days ago +26

    A world with under-regulated AI might be a chaotic place, but a world with a single AI company holding a monopoly will lead to a dystopia worse than any seen before.

  • @clray123
    @clray123 25 days ago +52

    The idea is that in the future you do not own hardware (although you still purchase it); you only "rent" it, and your use of it is continuously monitored and needs to be approved by the vendor. This is basically taking away your ownership while still making you foot the bill. This is similar to how software licensing works if the license is non-perpetual, or how "free to play" computer games are sold today. You own nothing and are happy. Until someone flips the switch remotely and your game disappears or your hardware becomes obsolete/useless.

    • @BrianChip-wn5im
      @BrianChip-wn5im 20 days ago +2

      Microsoft has been crippling Windows 10 computers with various updates. The January 2024 update regarding partition sizing is proof enough.

  • @adam_knocks
    @adam_knocks 25 days ago +45

    We just have to look at the auto industry. Decades ago, the big manufacturers begged for regulations that choked out their smaller competitors. There's a reason OpenAI is now begging for regulations…

    • @ohokcool
      @ohokcool 19 days ago +1

      Yup, time to open up OpenAI

  • @rodvik
    @rodvik 25 days ago +204

    Very concerned about the censorship path OpenAI is taking.

    • @Player-oz2nk
      @Player-oz2nk 25 days ago +7

      Makes sense, as they were the first AI company to align with building foundational guidelines for government regulations.

    • @YeeLeeHaw
      @YeeLeeHaw 23 days ago +6

      @@Player-oz2nk They want to protect their uncertain cash cow, and the state wants to regulate everything as much as possible, as they always do. Money and control; it's not about safety. Never has been, never will be.

    • @jaimdiojtar
      @jaimdiojtar 22 days ago +2

      Yeah, their models are dogshit. The only one that is still uncensored is gpt-3.5-turbo-0301, which will shut down next month

    • @ohokcool
      @ohokcool 19 days ago

      Yes, I agree, this is not chill

  • @Batmancontingencyplans
    @Batmancontingencyplans 25 days ago +136

    OpenAI is feeling threatened because LLaMA 3 came close to GPT-4 despite being an open-source model!!

    • @Zeroduckies
      @Zeroduckies 25 days ago +16

      Llama is amazing; can't wait for Llama 10. Open source is the only way to go. We need transparency

    • @mafaromapiye539
      @mafaromapiye539 25 days ago +6

      Llama 3 70B is good as a dialogue engine

    • @PulseReviews12
      @PulseReviews12 25 days ago +6

      It's better than some models, and even better than the new GPT-4 one if you use some prompting

    • @glenyoung1809
      @glenyoung1809 25 days ago +11

      Wait until they finish training the Llama 3 400B model...

    • @6AxisSage
      @6AxisSage 25 days ago +5

      @@glenyoung1809 If they don't throw Zuck into jail for breaching the new anti-competitive laws they're cooking up, for making too good of a model without the corporate overlords' approval.

  • @Machiavelli2pc
    @Machiavelli2pc 25 days ago +196

    Exactly. You're either for open source, or you're for eventual tyranny, whether by companies, governments, or other entities.
    Open source acts as natural checks and balances. Anything else is a recipe for eventual tyranny: tyranny by corporations, governments, entities, etc.

    • @14supersonic
      @14supersonic 25 days ago +3

      YouTube likes to delete my comments, especially when it comes to "controversial" topics, so I'll post in parts:

    • @14supersonic
      @14supersonic 25 days ago +6

      Basically, most of these regulations aren't for malicious actors that would harm us.

    • @14supersonic
      @14supersonic 25 days ago +8

      But to make it harder for us to counter powerful forces and groups such as governments and corporations

    • @14supersonic
      @14supersonic 25 days ago

      When they inevitably decide to use the technology against us for power and control

    • @14supersonic
      @14supersonic 25 days ago

      Open-source AI isn't the issue, but it's the greedy elites that seek to

  • @NakedSageAstrology
    @NakedSageAstrology 25 days ago +29

    The thing is, they train these models using data that did not belong to them; it is collective data of the human species that belongs to the human species alone.
    This has to be open source to work. Unfortunately for capitalism, we have to come up with an entirely new model; otherwise the human species will destroy itself.

    • @jdholbrook33
      @jdholbrook33 25 days ago +3

      @@Eval48292 Exactly. If they want their weights protected, build their own security. I mean, a GPU has a hardware ID. Have the software check the ID; if it's not on the list, it doesn't run.

  • @homberger-it
    @homberger-it 25 days ago +49

    This changes everything! No joke, though.
    OpenAI is becoming more and more frightening.

  • @idrisabdi1397
    @idrisabdi1397 25 days ago +94

    They went full closed source; the audacity of them trying to censor AI. I'm going to stop using them.

    • @bigglyguy8429
      @bigglyguy8429 25 days ago +5

      Thank you. Claude is the same price and better at both coding and writing.

    • @KEKW-lc4xi
      @KEKW-lc4xi 25 days ago

      @@bigglyguy8429 Good to know. I mostly use GPT for coding; I'll have to try that one out instead

    • @DefaultFlame
      @DefaultFlame 25 days ago +3

      @@bigglyguy8429 Unfortunately, Claude is not available everywhere.

    • @moamber1
      @moamber1 25 days ago +3

      @@bigglyguy8429 Claude is more expensive and, frankly, not better. Most important, however, is that Claude vs ChatGPT is not a choice, no more than Windows vs macOS. Both are evil corps; their only difference is strength.

    • @bigglyguy8429
      @bigglyguy8429 24 days ago +1

      @@moamber1 I generally agree, but as Claude is #2 I'd rather vote with my money for them than keep propping up #1. Both cost me $20 a month. I'm going to put both through a strict test today and cancel the loser.

  • @gh0stgl1tch
    @gh0stgl1tch 25 days ago +138

    We should initiate a petition to change the name from OpenAI to ClosedAI

    • @ferd1775
      @ferd1775 25 days ago +15

      You mean CIA-FBI-NSA-AI

    • @free_thinker4958
      @free_thinker4958 25 days ago

      @@ferd1775 😂 Exactly!

    • @SanctuaryLife
      @SanctuaryLife 25 days ago +7

      Elon Musk was right when he said to Altman, "Change it to ClosedAI and I'll drop the lawsuit 😂"

    • @nug700
      @nug700 25 days ago

      I've had the exact same thought over the past few months or so, except have the petition be for them to change their name to "AI Corp"

    • @ryzikx
      @ryzikx 25 days ago

      Elon shill

  • @Gl0we22
    @Gl0we22 25 days ago +21

    "Embrace, extend, and extinguish" (EEE),[1] also known as "embrace, extend, and exterminate",[2] is a phrase that the U.S. Department of Justice found[3] was used internally by Microsoft[4] to describe its strategy for entering product categories involving widely used open standards, extending those standards with proprietary capabilities, and using the differences to strongly disadvantage its competitors. - From wikipedia

    • @glarynth
      @glarynth 22 days ago +3

      Those who learn from history are doomed to watch as others repeat it.

  • @jaysonp9426
    @jaysonp9426 25 days ago +72

    Meanwhile the department of homeland security invites everyone except for Meta to their board

    • @AnthonyCook78
      @AnthonyCook78 25 days ago +12

      And Elon

    • @isbestlizard
      @isbestlizard 25 days ago +7

      @@AnthonyCook78 Elon believes in Open*, the same as Free Speech*. The disclaimer in all cases is 'for Elon but not for you'

    • @actellimQT
      @actellimQT 24 days ago

      @@isbestlizard Glad to see you got the signal! Action speaks louder than words!! 💪💪💪

    • @vSouthvPawv
      @vSouthvPawv 24 days ago +3

      Weird take: I think the focus, with the current geopolitical environment, is AI defense systems. I'm ok with closed models in that arena. Open weights on AI guided laser defenses is a vulnerability.
      At the same time, AI safety as far as free speech and the job market needs to be open source and there needs to be open source representation for AI at a societal level.

    • @jaysonp9426
      @jaysonp9426 24 days ago +1

      @@vSouthvPawv I can def agree with that

  • @jim02377
    @jim02377 25 days ago +33

    This reminds me of the idea of putting DRM chips in all TVs, and TPM chips being used to sign the OS loading on a PC. If I'm remembering correctly, Microsoft tried to use TPM chips to kill Linux before they decided open source wasn't the great evil.

    • @NNokia-jz6jb
      @NNokia-jz6jb 25 days ago

      They did?

    • @glenyoung1809
      @glenyoung1809 25 days ago +6

      I remember that, way back in the late 2000s. MS is always looking for a way to maintain their monopoly, and "Trusted" Platform Modules would make it so that only certain OSes would be allowed to run on consumer hardware. It was a bigger initiative than just MS, but they tried to use it to guarantee a more captive market for Windows, where you didn't have a simple migration path away from their products.
      They hated, and still hate, open-source OSes as competition they didn't need.

    • @firstnamelastname6986
      @firstnamelastname6986 24 days ago +7

      Yes, this is EXACTLY that same system. And I'm sorry to inform you that they were successful. It is now essentially impossible to buy a PC that does not have a TPM chip. Microsoft has mandated it as a compatibility requirement for Windows 11, and Apple has been packing an equivalent chip into everything from desktops to iPhones to the Apple Watch.
      Any examination of the system makes it clear that it's a dystopian DRM system, and they had to stall to let opposition fade every time it got reported on. The way they were successful was by slow-rolling deployment: it has been nearly 30 years since the first leak from Intel that they wanted to bake unique identifiers into all CPUs for something like this. And they are still slow-rolling deployment. Windows 10 end of life is set for October next year, and I'm sure they'll hold off even longer before they activate any of the uglier and more obvious uses of the Trust system.
      Information recently came out from Google that they were working on using the system to prevent ad blockers: if your computer didn't have a Trust chip, or if your browser allowed ad blockers, then your browser wouldn't be able to show the website at all. Obviously that sparked outrage, and they immediately released an announcement scaling back the project to not include that. Obviously enforcing website ads is going to come back eventually, but for now Google's project is going to focus on further deploying and entrenching the Trust system in less visible ways.
      Quite a few years ago, one of the White House cybersecurity czars gave a public speech advocating that all computers be banned from the internet unless the computer was locked down by this kind of Trusted Computing system, with the reason being to enforce operating system updates to secure the national internet infrastructure against viruses and Trojans. And such a system was in fact built; it's called Trusted Network Connect. But Microsoft is only pitching it for companies to secure their internal networks. They obviously aren't going to try to deploy that on everyone, at least not for several more years. But given that this has been slow-rolling for about 30 years now, yes, they almost certainly are eventually going to try to ban internet service providers from allowing ANYONE on the internet unless their computer is locked down by a Trust chip. Another 5 years? 10 years? 30 years? I dunno. Probably in response to some massive internet crisis or war, and only after everyone is used to stuff like all software being sold with Trusted DRM and all websites using Trusted DRM to prevent ad blockers.
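
As background on the mechanism this thread keeps invoking: the core TPM primitive in measured boot is the "PCR extend", a running hash chain over everything that booted, which a verifier can then compare against an approved value. A minimal sketch (illustrative only; real TPMs compute this in hardware across multiple PCR banks, and the stage names here are made up):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value hashes the old value together with
    a digest of the new measurement, so the final value commits to the
    entire boot chain, in order."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measured boot: each stage measures the next before handing over control.
pcr = bytes(32)  # PCRs start zeroed at reset
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, stage)

# A verifier (an OS, or a remote attestation service) compares `pcr` against
# the expected value; swapping any stage changes the final value completely.
```

This is why such a chip can gatekeep which software a machine runs: the PCR cannot be rolled back or forged in software, only extended.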

  • @TheGratefulQuad
    @TheGratefulQuad 25 days ago +91

    You are not overreacting at all. I think it's bullshit. I think they just want to have control over it, so whoever wants to use it has to go through them or not use it at all. It's all about money. I don't care what else they say, it's all about money.

    • @Porter92
      @Porter92 25 days ago +1

      So they aren't allowed to make money? Why? Who are you to tell someone what to do with their company? I think you should volunteer from now on at your job. Bullshit, you make money doing it! Make sure you prove to everyone you work for free, please 😂😂😂😂 USA, home of freedom, or wait, no, people like you telling others what they should do with their life lol

    • @glenyoung1809
      @glenyoung1809 25 days ago +6

      Not just about money, it's about control: setting up a captive consumer base who are milked constantly without any regard for ethical business practices. Think the Apple business model.
      This sounds a lot like the old 1990s Microsoft and Bill Gates, not something Ilya Sutskever would come up with.
      Elon Musk is going to have a field day with this news.

    • @user-po7xm5eo1g
      @user-po7xm5eo1g 25 days ago +2

      It's giving "in the future you will own nothing (everything membership-based), and you will be happy (prescription drugs)" vibes

    • @TheGratefulQuad
      @TheGratefulQuad 25 days ago

      @@Porter92 I am John, and I thank you so much for your opinion. I'm so happy you cared enough to read mine. Thank you, have a nice day.

    • @TheGratefulQuad
      @TheGratefulQuad 25 days ago

      @@Porter92 Oh, and by the way, I get disability because I am a quadriplegic, so I volunteer at my job every day. I volunteer to make inspirational videos, and I am damn good at my job; no one is better at sitting in one place than me.

  • @jeffg4686
    @jeffg4686 25 days ago +36

    I can sort this one out.
    OpenAI is funded by corporations who have a direct interest in their products/services, and an interest in keeping the "good stuff" out of the hands of small business (competition).
    Facebook is its own corporation - they know they're money will come from other means (advertising), and thus don't need to sell a model. They need to keep their crowd happy.

    • @darkskinnedpimp
      @darkskinnedpimp 25 days ago

      It is "their", not "they're"* .. that one really bothered me for some reason

    • @prrplex5594
      @prrplex5594 2 days ago

      Sam Altman, the CEO of OpenAI, has been a self-interested grifter his entire career. It's painfully predictable that OpenAI went from nonprofit to getting in bed with big money as soon as it became a viable product.

  • @OnigoroshiZero
    @OnigoroshiZero 25 days ago +49

    Fuck OpenAI. I am never using their models again; I can wait a few months for the others to catch up, even if OpenAI is ahead with newer models.
    Open source is the only way this is going to work, and the safest and most private option for us. They should never be allowed to get access to private hardware.
    Elon should have been the leader of OpenAI.

    • @benroberts8363
      @benroberts8363 24 days ago

      Governments hate open source

    • @l1nuxguy646
      @l1nuxguy646 17 days ago +1

      Oh, don't fool yourself thinking Musk would keep it open. He'd be doing the same thing, but with himself at the top.

  • @StuartJ
    @StuartJ 25 days ago +28

    I know it's early days for xAI, but they have done the right thing by open-sourcing too. You get the choice to run their model outside of X, or pay extra for X to host it, which includes realtime X data.

  • @OtterFlys
    @OtterFlys 25 days ago +44

    No, not overreacting. And thanks for getting this out!

  • @justinrose8661
    @justinrose8661 25 days ago +18

    Good on you, Matthew. The idea that any one company should have dominion over AI is insane

  • @Duncanate
    @Duncanate 25 days ago +19

    We need to get to the point where we can run 70B models like Llama 3 at home, offline, with reasonable speeds and power consumption.

    • @msclrhd
      @msclrhd 25 days ago +3

      You can download a quantized version, split it between the CPU and GPU, and get ~4 words/second on a 24GB 4090 using a 12-GPU-layer split. The limiting factor is the memory needed to keep these models on the GPU. These GPUs can fit a 7B or 13B model entirely in GPU memory and get very fast performance.
      Training the models needs even more memory.

    • @glenyoung1809
      @glenyoung1809 25 days ago +7

      @@msclrhd Technically VRAM isn't all that expensive; if Nvidia wanted to, they could simply double the amount of memory on the consumer cards at the 80 and 90 level.
      If AMD can sell gaming GPUs with 16, 20, even 24GB for less, why can't Nvidia?
      I think it's because they want to push AI hobbyists and startups towards buying the pro-level GPUs, which cost 2-3x that of a 4090.
      They don't want lower-level gaming GPUs cannibalizing sales of the pro cards.

    • @VRforAll
      @VRforAll 24 days ago

      @@glenyoung1809 Cheaper and with more RAM than Nvidia's Orin dev kits sold at 2k
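
The memory math behind this thread can be sketched quickly. A rough rule of thumb (an estimate for the weights alone, ignoring KV cache and activation overhead): a model needs about params × bytes-per-weight of VRAM.

```python
def weight_vram_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Rough VRAM (in decimal GB) needed just to hold the model weights."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model in fp16 vs. 4-bit quantization, vs. a 7B model:
print(weight_vram_gb(70, 16))  # 140.0 GB: far beyond a 24GB 4090
print(weight_vram_gb(70, 4))   # 35.0 GB: still needs CPU offload on a 4090
print(weight_vram_gb(7, 16))   # 14.0 GB: a 7B model fits entirely in 24GB
```

This is why the comment above reports splitting a quantized 70B model between CPU and GPU: even at 4 bits per weight it exceeds a 4090's 24GB, while 7B and 13B models fit outright.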

  • @corruptedMegabit
    @corruptedMegabit 25 days ago +9

    It feels so weird being thankful to Meta of all corporations, but here we are. Good job, Meta 🙃

  • @adtastic
    @adtastic 23 days ago +3

    AI & security engineer here. I believe you misread this. They're basically saying: have something like a TPM (trusted platform module) in the GPU. So, if there is a crypto key/vault/etc. in the GPU, you can have your weights encrypted in transit and only decrypted once on the GPU. This is already pretty common in infrastructure security. Additionally, keeping model weights secure is very important even in the case of open-source models. For example, if you fine-tune Llama 3 for an engineering use case with a bunch of your company's private data to act as an internal support agent of some kind, you don't want those model weights getting out, because now your own proprietary data can more or less be reversed out of the model.
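
The pattern that comment describes (encrypt weights in transit, decrypt only where the key lives) can be sketched in a few lines. This is a toy illustration, not what OpenAI proposed and not production cryptography: a real confidential-computing stack would use AES-GCM with a key sealed inside the GPU's secure element, whereas this stand-in builds a SHA-256 counter-mode keystream plus an HMAC tag purely to show the flow.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_weights(device_key: bytes, weights: bytes):
    """Encrypt weights so only a device holding device_key can recover them."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(weights, keystream(device_key, nonce, len(weights))))
    tag = hmac.new(device_key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce, ct, tag

def decrypt_on_device(device_key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    """Conceptually runs *inside* the device: the key never leaves it."""
    expected = hmac.new(device_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext tampered with, or wrong device key")
    return bytes(a ^ b for a, b in zip(ct, keystream(device_key, nonce, len(ct))))
```

The controversial part of the blog post is not this mechanism, which is standard infrastructure security, but who holds the device key and who decides which workloads get decrypted.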

  • @atypocrat1779
    @atypocrat1779 25 days ago +12

    "Only the big fish swim comfortably in regulatory waters."

  • @seupedro9924
    @seupedro9924 25 days ago +19

    Looks like they are trying to shut down the gains open-source models are making at every possible layer.
    This regulation is so anticompetitive that OpenAI should be renamed BunkerAI instead of ClosedAI.

    • @YeeLeeHaw
      @YeeLeeHaw 23 days ago

      It hints that they don't have an edge, or as large an edge, anymore.

    • @mwwhited
      @mwwhited 17 days ago

      @@YeeLeeHaw They don't… they just have donated compute and VC money to set on fire while they compile other people's ideas.

  • @metonoma
    @metonoma 25 days ago +26

    Scary af! If your hardware won't let you access open source, we're doomed

    • @cristianandrei5462
      @cristianandrei5462 25 days ago +5

      Think about it: if AI leads to a performance (efficiency) boost across most economic sectors, economies that don't have access to good models will have a hard time competing with those that do. We will get to a position where a handful of companies (Nvidia, Microsoft, OpenAI, Google, and maybe Meta) can decide macroeconomics; they can basically leave entire countries out. That's even worse...

  • @JudahCrowe-ej9yl
    @JudahCrowe-ej9yl 25 days ago +21

    Yeah, they used open source to develop their model. Fact.
    Mr. Altman is now on a congressional board.
    And he isn't talking about open source anymore.
    And he is advocating the pulling of dev packages like TensorFlow and PyTorch.
    And the registration of GPUs is only scratching the surface of what this congressional board is recommending.
    So here's the elephant in the room about the congressional AI board: every person on that board has a huge pre-existing stake in tech companies.
    They are now staging a Monopoly board where they own all the squares.
    And that's not how ANY of the companies said this would happen. Remember the democratized-AI push they all used to get what they currently have.

    • @glenyoung1809
      @glenyoung1809 25 days ago +5

      Altman has always left the same impression on me as a used car salesman.
      He comes across as a blend of Steve Jobs/Bill Gates and has never been pro-open-source; in other words, Altman is an empire builder (he's even said it himself) and will do whatever is convenient for himself. I don't understand why anyone ever looked up to the guy; he comes across as "greasy" to me.

    • @6AxisSage
      @6AxisSage 25 days ago

      @@glenyoung1809 He's an example of the "best" deceptive strategists humanity has to offer.

    • @6AxisSage
      @6AxisSage 25 days ago

      Where can I find some references to OpenAI advocating pulling PyTorch and TensorFlow? I had a good look but am turning up nothing.

  • @01Grimjoe
    @01Grimjoe 25 days ago +15

    Sam Altman's choices have never been altruistic

    • @kenfryer2090
      @kenfryer2090 19 days ago

      The guy is a gay villain... The very worst kind of villain

    • @BarfingGerbil
      @BarfingGerbil 5 days ago

      It's not part of his nature.

  • @marco114
    @marco114 25 days ago +12

    Make sure you are downloading and mirroring all open-source AI stuff.

  • @AngeloMondaini
    @AngeloMondaini 25 days ago +31

    Now I hope that Elon Musk wins the lawsuit against "Open"AI.

    • @glenyoung1809
      @glenyoung1809 25 days ago +5

      Musk is going to have a field day with this idea; I can see the memes writing themselves.
      OpenAI has a right to protect their intellectual property, but they don't have a right to do it by regulating my personal property and deciding what I'm allowed to access or not.

    • @davestorm6718
      @davestorm6718 25 days ago +5

      @@glenyoung1809 The real question is: what intellectual property? The training sets were skimmed from publicly available sources (including mine!)

    • @glenyoung1809
      @glenyoung1809 25 days ago

      @@davestorm6718 That's where we head into unknown territory, because it was trained on public data, but the weights were computed by a private organization using proprietary algorithms. This is where IP lawyers earn their pay and spend millions litigating the fine points.

    • @BrianChip-wn5im
      @BrianChip-wn5im 20 days ago

      Neuralink, anyone? Felon is promoting brain chips. Doesn't sound free to me.

    • @kenfryer2090
      @kenfryer2090 19 days ago +1

      Elon Musk is as bad as or worse than OpenAI. You don't want that monster winning

  • @Sanguen666
    @Sanguen666 25 дней назад +41

    Some people buy gold, I'll have 4xA6000. Having private compute is a fucking basic right!

    • @Zeroduckies
      @Zeroduckies 25 дней назад +4

      Amen.

    • @glenyoung1809
      @glenyoung1809 25 дней назад +5

      That's something many people don't think about, they assume access to compute is a basic right, like access to water.
      In today's information ruled world, being cut off from compute is instant exile outside of society.
      Very few people can function without access to information technologies.
      Plus, if you're observant the major tech companies and hyperscalers are trying hard to promote cloud computing as the ultimate endpoint.
They don't want people with their own PCs; they want everyone subscribed to their cloud services, and all you would have at home is a "dumb" terminal or even a smartphone.
      They would not only own your access, they would own your data as well.
      Why do you think Microsoft is going to spend $100 billion over the next 5-7 years building data centers for GPUs?

    • @braineaterzombie3981
      @braineaterzombie3981 25 дней назад

      Well compute gets cheap very quickly, historically. Not a good idea to invest

    • @amentco8445
      @amentco8445 25 дней назад +4

      ​@@braineaterzombie3981The economy isn't looking super good on that front. Things aren't going to be getting cheaper the way they used to, sorry.

    • @the42nd
      @the42nd 22 дня назад

      Need a guild of the free to purchase data centers that no corporation owns. Like a community center.

  • @elgodric
    @elgodric 25 дней назад +15

    This is how tyranny starts

  • @martingarcia8613
    @martingarcia8613 25 дней назад +5

Everyone saw it coming.
    It’s easy for a company that is almost half owned (49%) and backed by a $3T juggernaut to say, “this is the way things should be done.” since they use that juggernaut’s infrastructure as their backbone. It’s also important to remember that MS also has a “non-voting, observer role”, in OAI’s board. However, if not for MS, everyone else would be eating OAI’s lunch; there are a number of other companies that have models that are on par with OAI.

  • @duytdl
    @duytdl 25 дней назад +11

Have they learned nothing from piracy? All their shenanigans will be blown to smithereens by just one clever person jailbreaking them.

    • @benroberts8363
      @benroberts8363 24 дня назад

      but but the government is on their side

  • @vSouthvPawv
    @vSouthvPawv 25 дней назад +26

    Are we gearing up for an Altman/Zuckerberg rap feud?

    • @cmelgarejo
      @cmelgarejo 25 дней назад +1

      This needs to happen

    • @vSouthvPawv
      @vSouthvPawv 25 дней назад +2

      @@cmelgarejo it's going to be just like Kendrick, but opposite. GPT-5 and Llama 3 are gonna write the lyrics, we already know

    • @msclrhd
      @msclrhd 25 дней назад +6

      @@vSouthvPawv Kendrick Llama?

    • @vSouthvPawv
      @vSouthvPawv 25 дней назад +3

      @@msclrhd that's gonna be a new fine-tune on Huggingface within a week because you said, you know that right? 😂 If I had the hardware, I'd start training a Llamar right now. (Also, I'm glad you chose Llama for Kendrick, which means, obviously that the industry darling Drake is the corporate GPT, and that tracks)

    • @koijoijoe
      @koijoijoe 19 дней назад

      Hmm, this would be a good way to get the masses listening to this part of the discussion... I'm in!

  • @YouLoveMrFriendly
    @YouLoveMrFriendly 25 дней назад +20

    Hide yo kids; hide yo wives

  • @meisherenow
    @meisherenow 25 дней назад +11

    Do all the floating point arithmetic you want, people. You don't need anyone's permission.

    • @isbestlizard
      @isbestlizard 25 дней назад

Someone should make a cryptotoken for that: offer to do sums and you get tokens, need sums done and you pay tokens. Make it massive, distributed, and verifiable.

    • @AberrantArt
      @AberrantArt 24 дня назад

      Not yet...

  • @VioFax
    @VioFax 25 дней назад +7

    So theirs can be a black box but ours can't...

  • @mrdevolver7999
    @mrdevolver7999 25 дней назад +9

    Welcome to fascism of 21st century.

  • @ImmacHn
    @ImmacHn 18 дней назад +3

    "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." Ben Franklin

  • @Alice_Fumo
    @Alice_Fumo 25 дней назад +17

    I believe that some of this is misinterpreted.
    The primary goal here appears to me to be able to do the following:
    OpenAI generates a private / public encryption keypair.
    OpenAI loads it onto their GPUs into something like a TPM, meaning the key can't be exfiltrated again (ideally). The model weights when being transferred from the GPU to any sort of other storage are always in encrypted form - using the previously generated keys. The keys only exist on the GPU hardware and like 3 backups on specialized hardware security keys in the possession of people like Sam Altman who could thus restore it if everything got compromised somehow.
    This way, OpenAI can ensure that even if their weights are stolen, they will be pretty much guaranteed in encrypted form and thus useless.
    This wouldn't prevent anyone from running anything and it shouldn't be too wild that you can't run a piece of encrypted software without decrypting it first.
If I interpreted all this correctly, it makes a lot of sense, considering their AIs are getting good enough that, if purposely misaligned by bad actors, they could develop and deploy super-ebola or something.
    However, can this be used for example to implement like impossible to crack game DRM?
    Not sure. Seems to be depending on implementation details.
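
The seal-to-hardware scheme this comment describes can be sketched in a few lines. This is a toy: a SHA-256 stream cipher and an HMAC tag stand in for the AES-GCM and TPM key handling real hardware would use, and every name here is hypothetical.

```python
# Toy sketch of "sealing" model weights to a device-resident key.
# NOT real cryptography - illustrates the scheme only.
import hashlib, hmac, os

def device_key(device_secret: bytes) -> bytes:
    # Stand-in for a key generated inside a TPM-like enclave;
    # on real hardware this value never leaves the chip.
    return hashlib.sha256(b"seal-key|" + device_secret).digest()

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode (real designs use AES-GCM).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(weights: bytes, key: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(weights, keystream(key, nonce, len(weights))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return nonce + ct + tag

def unseal(blob: bytes, key: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong device key or tampered blob")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

weights = b"model weights: layer0 = [0.12, -0.98, ...]"
gpu_a = device_key(b"secret-fused-into-gpu-A")
gpu_b = device_key(b"secret-fused-into-gpu-B")

blob = seal(weights, gpu_a)
assert unseal(blob, gpu_a) == weights   # decrypts only with the sealing GPU's key
try:
    unseal(blob, gpu_b)                 # a stolen blob is useless elsewhere
except ValueError:
    pass
```

The point is exactly the property described above: weights stolen off the wire or from storage are encrypted blobs, useless on any device whose sealed key differs.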

    • @matthew_berman
      @matthew_berman  25 дней назад +12

      Agreed mostly.
      I'm sure they have "good intentions" with the technology they are suggesting, but it's clear to me it could be used to simply track and disable inference/training on GPUs.

    • @AnthonyCook78
      @AnthonyCook78 25 дней назад +1

      Sounds like a reasonable proposal to me.

    • @WhyteHorse2023
      @WhyteHorse2023 25 дней назад +2

      Yeah it's basically what sony does with their playstations

    • @Sqrlmasta
      @Sqrlmasta 25 дней назад +6

I agree with @Alice_Fumo here; this is a bit of fearmongering, I think. What they are proposing is just a way to have a cryptographically secure part of the GPU, like the TPM in a CPU, that stores the model weights produced by training and prevents them from leaking unencrypted. Organizations would also be able to sign their code so that it runs only on their own collection of GPUs; you or I, or more to the point a threat actor, could not steal their code and run it on their GPUs, and they could be certain their GPUs are authentic and unmodified. It is not about tracking and disabling others' GPUs, but about authenticating their own, unmodified GPUs, signing their code to run only on those, and being the only ones able to extract the weight data their code outputs on them.
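
The code-signing half of this can be sketched as a tiny loader check. Here a shared-secret HMAC stands in for the asymmetric signatures a real scheme would use, and all the names are hypothetical.

```python
# Toy sketch of workload signing: a GPU-side loader refuses any job whose
# binary was not signed by the owning organization.
import hashlib, hmac

# Stand-in for a signing key provisioned only to the org's own GPUs.
ORG_SIGNING_KEY = b"org-secret-provisioned-to-org-gpus"

def sign_workload(binary: bytes, key: bytes) -> bytes:
    # The organization signs its job binary before dispatching it.
    return hmac.new(key, binary, hashlib.sha256).digest()

def gpu_run(binary: bytes, signature: bytes) -> str:
    # The GPU firmware verifies the signature before executing anything.
    expected = hmac.new(ORG_SIGNING_KEY, binary, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return "REJECTED: unsigned or foreign workload"
    return "RUNNING"

job = b"inference kernel v7"
assert gpu_run(job, sign_workload(job, ORG_SIGNING_KEY)) == "RUNNING"
assert gpu_run(job, sign_workload(job, b"attacker-key")).startswith("REJECTED")
```

Note this check cuts both ways: used on your own fleet it is protection, but baked into consumer hardware by mandate it becomes exactly the gatekeeping the video worries about.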

    • @bpanatta
      @bpanatta 25 дней назад

      @@Sqrlmasta Exactly!

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 25 дней назад +4

They are positioning regulations so that they can rent/sell their models to end users. This will, however, become an obstacle to smaller companies and users, along with the possibility of government social-score-based service denial.

  • @tellesu
    @tellesu 25 дней назад +3

    Seems like once again openai is telling us that they believe that only sama and his handpicked board are qualified to decide who gets access to the power of truly powerful models. If they develop agi/asi they are announcing an intent to enslave it to only serve those who meet sama's approval.

  • @m0ose0909
    @m0ose0909 25 дней назад +2

Yes, it sounds like DRM, like Widevine for video. It will ensure the model weights can only be decrypted in protected hardware buffers and are not otherwise accessible via software.

  • @Thedeepseanomad
    @Thedeepseanomad 25 дней назад +4

Also, this is one of the many reasons we need open-source hardware and software when it comes to GPUs/compute and memory

  • @thomasschlitzer7541
    @thomasschlitzer7541 25 дней назад +4

OpenAI is just a Microsoft asset and MS tries to protect it. I always preferred closed source, but this year my opinion changed a lot. Maybe I'm a pirate now, but I see closed source as a danger to society. Especially for AI it's worrying that training sets and computational power are mainly available to big companies. Smaller companies have no chance and users have to use big tech clouds. There is a reason why MS bought GitHub and OpenAI (yea yea, partnered, blabla) and tries to get all user data into their cloud (Office, OneDrive, etc). That's a dangerous thing. I'm an MS shareholder and still I see this as a threat.

  • @alexanderstreng4265
    @alexanderstreng4265 25 дней назад +6

    OpenAI has no right to call themselves Open.

  • @user-po7xm5eo1g
    @user-po7xm5eo1g 25 дней назад +4

openai is giving "in the future you will own nothing (everything membership based), and you will be happy (prescription drugs)" vibes

  • @sugaith
    @sugaith 25 дней назад +8

They are actually desperate.
GPT is the worst model considering that it's running on massive hardware.
A Llama 3 70B on your PC performing close to it proves it.
Let's boycott OpenAI

  • @peppix
    @peppix 25 дней назад +5

The OPEN in OpenAI means: open your door, we are coming to check

  • @erb34
    @erb34 25 дней назад +3

Sounds dystopian. We all need to focus on open source.

  • @SGervais
    @SGervais 25 дней назад

Thank you for putting the spotlight on this. The authorization of GPUs can be as simple as generating key pairs on hardware and encrypting the weights with your public key, to be decrypted on hardware (and only accessible by on-chip inference).

  • @DaeOh
    @DaeOh 25 дней назад +2

Meta is pushing the same chatbot-centric ideas that OpenAI started. At this point 99% of developers think "prompt engineering" applies to LLMs that are fine-tuned into chatbots. Chatbots that can accommodate that brand of "prompt engineering" will always be their domain; they'll always have the upper hand. And it serves their purposes to outlaw base LLMs, which are much more useful, because fine-tuning an LLM into a chatbot is the only idea these companies have for "alignment."

    • @6AxisSage
      @6AxisSage 25 дней назад +2

The departure of base LLMs to the ,,, format really does kill a tonne of the potential GPT-3 had, and limits the outputs of the models. Sure, it's highly effective and less technical for the average punter to interact with the model, but it's like we closed a chapter on something much more powerful.

    • @DaeOh
      @DaeOh 25 дней назад +1

      ​@@6AxisSage Yes! And I'm sure if more developers knew you can just put in examples and it'll recognize the pattern, they'd see how useful that is. Training a model on maybe 4 examples instead of millions? Few-shot on a base LLM is literally a machine learning dream come true.

    • @6AxisSage
      @6AxisSage 25 дней назад +1

@@DaeOh that's awesome, I met someone who doesn't think I'm speaking gibberish nonsense ❤ Shame your work is under wraps, but I get it. I'll be dropping more work on my channel if you're interested; I have an especially fun model I've been running for a few days that simulates brain hemispheres and is showing a lot of interesting and novel behaviors I haven't seen before.

  • @mcgdoc9546
    @mcgdoc9546 25 дней назад +7

    They sold their model weights to Microsoft!

    • @braineaterzombie3981
      @braineaterzombie3981 25 дней назад

I disagree; they probably sold the architecture, not the weights. Selling weights is equivalent to selling ChatGPT itself. Altman isn't a fool to sell his only advantage.

    • @amentco8445
      @amentco8445 25 дней назад

      ​@@braineaterzombie3981Altman is only in this to become another tech billionaire. He does not care how it needs to happen.

    • @christhi
      @christhi 25 дней назад

      No MS has the weights but it’s hardly a “sale”

  • @Alf-Dee
    @Alf-Dee 24 дня назад +3

I don't want to sound like a conspiracy theorist here, but this feels very much influenced by their main stakeholder: Microsoft.
On a side note: I am canceling ChatGPT for good and going all in on my local Llama 3.
We can tell them they are bad by voting with our money.

  • @42ndMoose
    @42ndMoose 25 дней назад +1

Your belief around @14:18 is a relief to hear; I never saw it that way.
But I wouldn't be so calm about AI growth: it's more exponential than incremental. And there are malicious worms, made by human ingenuity alone, that are still a threat to industrial systems. Bad actors could gain an advantage in ways the good actors would never think of.

  • @user-xj5gz7ln3q
    @user-xj5gz7ln3q 25 дней назад +8

    Sounds shady..

  • @apoage
    @apoage 25 дней назад +6

That's fucked up on so many levels... I don't think I'm asking too much when I want to run whatever I want on hardware I own. OpenAI is showing a worse and worse side.

    • @AberrantArt
      @AberrantArt 24 дня назад

      The world is headed towards socialism / communism in all ways.

    • @apoage
      @apoage 24 дня назад

@@AberrantArt well, in that language it's corpo-fascism, which is against my hacker cyber-anarchy mindset, just like that. The problem will be when they start to outlaw running certain applications.

    • @BrianChip-wn5im
      @BrianChip-wn5im 20 дней назад +1

      @@AberrantArt It's fascism.

  • @1Vaudevillian1
    @1Vaudevillian1 25 дней назад +5

    This is them trying to kill competition no two ways about it.

  • @speak-my-mind
    @speak-my-mind 25 дней назад +2

    They’re also making the implicit assumption that having AI overlords is somehow more secure than anyone having access to AI. I can imagine NVIDIA selfishly lobbying against this if the government actually considers this, though, since this would effectively put a bottleneck on their sales. Also, I can see Zuckerberg pushing back given that he’s decided that Meta is going to beat OpenAI by pushing the industry in the direction of openness. So there’s still hope yet

  • @colorado_plays
    @colorado_plays 25 дней назад +1

Matt, the attestation process in and of itself is not so scary (I am a former Security Platform Architect at Intel), but it could be used by those that produce hardware to "track it" if they so choose. You could architect it into or out of the attestation flow.
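
For readers unfamiliar with attestation, here is a toy challenge-response version of the flow this commenter describes: the verifier sends a fresh nonce, the device measures its firmware and signs the measurement, and the verifier checks both the signature and the measurement. A symmetric HMAC key stands in for the per-device certificates and asymmetric signatures real attestation uses; all names are hypothetical.

```python
# Toy sketch of a challenge-response attestation flow.
import hashlib, hmac, os

ATTESTATION_KEY = os.urandom(32)   # stand-in for a per-device key fused at manufacture
EXPECTED_FIRMWARE = hashlib.sha256(b"firmware v1.2 image").hexdigest()

def gpu_quote(challenge: bytes, firmware_image: bytes) -> dict:
    # Device side: measure the running firmware, then MAC it with the challenge.
    measurement = hashlib.sha256(firmware_image).hexdigest()
    sig = hmac.new(ATTESTATION_KEY, challenge + measurement.encode(),
                   hashlib.sha256).hexdigest()
    return {"measurement": measurement, "sig": sig}

def verify_quote(challenge: bytes, quote: dict) -> bool:
    # Verifier side: check the MAC, then check the measurement against an allowlist.
    expected = hmac.new(ATTESTATION_KEY, challenge + quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, quote["sig"])
            and quote["measurement"] == EXPECTED_FIRMWARE)

challenge = os.urandom(16)   # a fresh nonce per handshake prevents replay attacks
assert verify_quote(challenge, gpu_quote(challenge, b"firmware v1.2 image"))
assert not verify_quote(challenge, gpu_quote(challenge, b"tampered firmware"))
```

This is where the "architect it in or out" point bites: whoever controls the allowlist of expected measurements decides which hardware and software counts as "authorized."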

  • @Thedeepseanomad
    @Thedeepseanomad 25 дней назад +3

    **sorry your GPU is not authorized for smut, implied romantic content only. Any further attempts will trigger a report to your local Eye in the Sky**

  • @settlece
    @settlece 25 дней назад +1

A lot of this sounds like an excuse for monetisation through bureaucracy,
and the only thing it protects is them from outside competition, by making it so expensive for anyone else to even pick up a pen and do any work in this field.
Really do appreciate you putting the link to that talk.
Gonna enjoy that now, thanks

  • @Copa20777
    @Copa20777 25 дней назад +2

Matthew, people like you are the first of your kind. Your stance on educating the globe about AI and sharing free open projects has democratized the entire industry from its conception. I'm from Zambia 🇿🇲 and I salute your work and stance from the beginning. We hope OpenAI doesn't become selfish with this technology; it's AI trained on every human's data, so in theory it should be public and open, as set forth in the beginning.

  • @Michael-do2cg
    @Michael-do2cg 25 дней назад +3

It's sad: they have the ability to save lives and make the world a better place, and once again humanity sacrifices it for money and power. They're going down the path of control and manipulation. So very sad to see the possibility of a better world squandered.

    • @kenfryer2090
      @kenfryer2090 19 дней назад

Always the same. With psychology they made amazing innovations a hundred years ago in understanding behaviour and the mind. Did they use it to make people more fulfilled and balanced? No. They used it to create manipulative adverts tricking us into wanting things we didn't need, creating mass wasteful consumerism. Capitalism is as evil as any regime. AI could be the most beautiful gift to explore and enrich. But it will be used to replace human jobs, and to manipulate and repress us.

  • @smanihwr
    @smanihwr 25 дней назад +3

Closed AI will create a wider wealth gap. I think it would benefit everyone if all pioneering AI companies became non-profit. Will OpenAI (Microsoft) agree to become non-profit?

  • @igorsolomatov4743
    @igorsolomatov4743 25 дней назад

Feels like network restrictions can also backfire. For example, they could require your computer to run special software that inspects your private internet access; otherwise the network is not accessible.

  • @szghasem
    @szghasem 25 дней назад +1

    I've agreed with you since day one. I'm hoping you'd talk more about Petals and help support its progress. You have garnered much respect in this realm and could lend some to Petals.

  • @user-bd8jb7ln5g
    @user-bd8jb7ln5g 25 дней назад +4

    Cancel your (un)openAI subscriptions.

  • @JankJank-om1op
    @JankJank-om1op 25 дней назад +5

    OpenCIA thinks they own matmul🤣

  • @dennissdigitaldump8619
    @dennissdigitaldump8619 15 дней назад

GPUs are already signed; it's just not used. There are group and individual signatures already in the firmware. It was originally just to stop "unauthorized" manufacturers from building cards with chips that were obtained illicitly.

  • @awakstein
    @awakstein 25 дней назад

    Great video and I think you are totally right Matthew!

  • @Taurus_Skyglaive
    @Taurus_Skyglaive 25 дней назад +6

    1984

  • @ryanseibert1449
    @ryanseibert1449 25 дней назад +9

    "To be more secure..." i'll stop you there Sam, you partnered with Microsoft 😂

    • @StevenSSmith
      @StevenSSmith 25 дней назад

      Sounds very similar to Sony in the helldivers dispute

  • @tex1297
    @tex1297 25 дней назад +2

The serf must know his place. No AI for him, just what is needed for his work, under full control of course.

  • @Abdul-qo6eb
    @Abdul-qo6eb 25 дней назад

    Thank you, Matthew, for bringing awareness to this major issue. And thanks for recommending that Bill Gurley video. It was one of the most insightful talks I heard in a long time, and it wouldn't have reached me if you hadn't referenced it. Keep posting amazing content, you are doing great!

  • @Akuma.73
    @Akuma.73 25 дней назад +3

    Jimmy Apples was right to put Sama in Communist boxers in his recent boxing meme. Fits perfect 👌

  • @bpanatta
    @bpanatta 25 дней назад +3

About the GPU encryption... their position makes perfect sense, and I think you are confusing two things.
If I owned an AI company, it would be very important for me to have control over the GPUs running my model weights, especially in a cloud environment where the units are scattered across the globe.
There is a chance that a malicious entity could add their own GPUs to your model processing pipeline, where they could extract your model weights and inference data.
To tackle this, they are proposing a few cryptography applications that a company should implement to identify the GPUs that are allowed to run their model weights, while preventing them from leaking data.
I guess your confusion comes from assuming that they need to know about your personal GPU so they can run their model, while in fact they are talking only about their own GPUs.

    • @mattelder1971
      @mattelder1971 25 дней назад +1

      The issue is that they seem to be calling for government regulation that would REQUIRE that for anyone who wants to compete in the AI space. If it were just for protecting their own systems, there would be no need to get the government involved.

    • @bpanatta
      @bpanatta 25 дней назад

      @@mattelder1971 Those changes are just a tiny fraction of any other cost involved when working with AI models, so having to comply with it will surely not be a problem, just like we already do with many other laws for infrastructure and data security.

    • @mattelder1971
      @mattelder1971 25 дней назад

      @@bpanatta I'm guessing that you are not in the US. We don't have the overbearing laws regarding those things that are present in the EU. It isn't about the cost, it is about the freedom of developers to build what they want, how they want it.

    • @bpanatta
      @bpanatta 24 дня назад

      @@mattelder1971 Oh it makes sense. My thoughts were mostly on costs and complexity of implementation.

  • @CM-zl2jw
    @CM-zl2jw 22 дня назад

    Thanks for the tip on regulatory capture.

  • @TraderBodacious
    @TraderBodacious 25 дней назад +2

    GPUs will be more dangerous than firearms soon

  • @RaitisPetrovs-nb9kz
    @RaitisPetrovs-nb9kz 25 дней назад +7

    It sounds a bit like church: "Do you want to talk to God? Okay, no problem. You have to come to church and donate 10% of your income. Hallelujah!"

    • @Jwoodill2112
      @Jwoodill2112 25 дней назад +1

      Not all churches are like that. it's mostly those who teach the prosperity gospel.

    • @temp911Luke
      @temp911Luke 24 дня назад

I think you are talking about the Scientology "church".
The true church doesn't force you to "donate 10% and Hallelujah," as you gently put it.

    • @RaitisPetrovs-nb9kz
      @RaitisPetrovs-nb9kz 24 дня назад

      @@Jwoodill2112Well, churches have gone through the so-called Reformation, and it took 1500 years to get to this point, and in most countries, it was quite a bloody process.

    • @BrianChip-wn5im
      @BrianChip-wn5im 20 дней назад

      Anyone can talk to God. No primitive, money grubbing Church, Mosque, Synagogue or temple needed. All that just gets in the way.

  • @socialliveview7698
    @socialliveview7698 24 дня назад +4

You got this one completely wrong, Matthew. Nobody is tracking GPUs. Clearly you don't fully understand how cryptography works, which is fine, but if you run a platform this big you owe it to your audience to do due diligence rather than trying to be first to post a video making wrong claims. I actually now have to take anything I hear from you with a grain of salt.

    • @Alaron251
      @Alaron251 24 дня назад

      Enlighten everyone, then.

    • @kazedcat
      @kazedcat 23 дня назад

      ​@@Alaron251He can't He thinks JesuSAM Altman does no wrong. It was very clear they want DRM AI and they want all GPUs to have hardware level DRM.

  • @razorr1920
    @razorr1920 25 дней назад

Matthew, I follow your videos and like your content, and because of your video on the Grok release I did download the 314-billion-parameter weights, which of course I cannot use directly but keep for research later. But you said in this video that training WEIGHTS are being withheld or not disclosed. Sorry if I asked a noob question.

  • @download333
    @download333 23 дня назад

I think that part about remote trusted compute is something else. They are saying they want ways to ensure that, if they use something like AWS to train a model, they can know for sure that AWS didn't copy their training data and that the trained result they got back is genuine.

  • @hotlineoperator
    @hotlineoperator 25 дней назад

In the case of OpenAI, it's difficult to identify whether a message comes from the original OpenAI organization that wanted to make R&D open, or from the new OpenAI that operates as a commercial corporation. There are still two OpenAI organizations operating under the same name, one open and one commercial.

  • @RichardEiger
    @RichardEiger 23 дня назад

Absolutely 100% support your opinions, backed by 45 years of experience in IT. I come to the conclusion that the longer this goes on, the less OpenAI deserves to even have the word "Open" in their name. I also fully agree that most of this sounds like regulatory capture, which I don't think OpenAI (nor Google, Anthropic, ...) needs in order to remain among the leading organisations providing state-of-the-art AI services. Unfortunately, governments and even NGOs or standardisation organisations probably welcome anybody who will put a lot of effort into something those organisations should do themselves. And while it is clear that the required expertise can only come from the respective industry, the difference is the diversity and representation of multiple opinions that should govern the 'official organisations'.

  • @patrickjreid
    @patrickjreid 22 дня назад +1

    Who would have thought that the Zuck would become the good guy? But it sure seems like they finally installed his emotion chip. Amazing how human an android with emotions is.

  • @rawleystanhope3251
    @rawleystanhope3251 24 дня назад

    Thank you for proliferating this important perspective

  • @bujin5455
    @bujin5455 11 дней назад

8:59. It sounds like what they want to do is set up a situation where a given trained model is cryptographically locked to a specific GPU, so that you can't migrate weights to unauthorized hardware.

  • @senju2024
    @senju2024 25 дней назад +1

Some of this stuff we have had in IT infra for years. They word it differently, but for us IT guys, the segmentation part is what we call "ZERO TRUST". Many companies like Cisco, with their routers, firewalls and dedicated VPNs, have hardware encryption. Also, AWS and Azure can, for an extra fee, have all SSL certificates hardware-encrypted. May I add, encryption is easy; what is needed is to properly protect the authenticated "KEYS" that allow data to be decrypted. This is nothing new. As we have been doing this for years at the data center for edge devices like firewalls and routers, I feel, Matt, that they are just taking standard cyber security practices and rewording them for their AI infrastructure. That being said, I am for OPEN weights! 🙂

  • @vineetmaan1
    @vineetmaan1 День назад

Not only AI: this can also come to normal GPUs as a form of DRM for all the software we currently use.

  • @qwertyuuytrewq825
    @qwertyuuytrewq825 24 дня назад

I was thinking that trusted computing is about making sure that no one messes with the remote CPU or GPU that you are using.
From what I understood, OpenAI just provides guidelines for anyone who cares about their model not being stolen.
That can be useful for some military AI that you do not want stolen.