BIG win for Open Source AI | Snowflake Arctic 128 Experts MoE, "Cookbook" create world-class models

  • Published: 31 May 2024
  • Learn AI With Me:
    www.skool.com/natural20/about
    Join my community and classroom to learn AI and get ready for the new world.
    VIDEO LINKS:
    Snowflake Arctic: The Best LLM for Enterprise AI - Efficiently Intelligent, Truly Open
    www.snowflake.com/blog/arctic...
    Try it on Replicate (no login)
    arctic.streamlit.app/
    Hitarth Sharma
    "Mind keeps getting blown every time I see this comparison between
    @OpenAI GPT-4, @AnthropicAI Claude Opus and @Meta Llama 3 70B on
    @GroqInc in a post I'm putting together..."
    / 1782548444130976168
    Laura Edelson
    Yann: "Excellent thread about weird lobbying efforts that paint open source AI foundation models as anti-competitive."
    / 1782450447887864266
    00:00 Snowflake Arctic
    04:15 Architecture
    09:27 Open Source AI
    19:38 Lobbying Against Open Source
    23:45 Open Source "Cookbook"
    #ai #openai #llm
    BUSINESS, MEDIA & SPONSORSHIPS:
    Wes Roth Business @ Gmail . com
    wesrothbusiness@gmail.com
    Just shoot me an email to the above address.

Comments • 217

  • @DefenderX
    @DefenderX 1 month ago +33

    The more powerful a model becomes, the more important it is to open source.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago +3

      The US government seems to be decidedly opposed to that statement.

    • @mennovanlavieren3885
      @mennovanlavieren3885 1 month ago

      @@sparkofcuriousity What would you expect? They're here to help.
      Note that governments and big corporations have the same intrinsic motivations as the hypothetical Skynet AI that is going to take over the world.

  • @AGI-Bingo
    @AGI-Bingo 1 month ago +19

    A new golden age of Open source is upon us ❤ #WholesomeAGI

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago +3

      Have you seen the new USA legislative proposal regarding AI?
      They are trying to crack down on open source.

    • @AGI-Bingo
      @AGI-Bingo 1 month ago +1

      @@sparkofcuriousity yes I saw, not surprised, not worried. Oppressors always do what they can to stop the uprising. They have no moat though. We're going to build an agi, by the people, for the people.

    • @sparkofcuriousity
      @sparkofcuriousity 1 month ago +1

      @@AGI-Bingo I agree. Well said!

  • @Buch-Generator
    @Buch-Generator 1 month ago +3

    The Snowflake Arctic 128 Experts MoE model and the "Cookbook" approach sound like a game-changer. I'm impressed by how fast they could catch up using open-source tools and methods....

  • @winsomehax
    @winsomehax 1 month ago +34

    Thanks for the reminder of the Google "no moat" paper. I remember reading it then and thinking it was prescient. Now it's scarily prophetic. Props to the authors. They are so right, they no doubt got fired.

    • @WesRoth
      @WesRoth  1 month ago +8

      yeah, I remember being surprised by what they were saying, I don't think too many people realized it that early.

    • @jtjames79
      @jtjames79 1 month ago +1

      ​@@WesRoth I did. I know a lot of things that are going to happen. Nobody ever believes me.
      The Cassandra effect is a pain in the butt.

    • @ChainedFei
      @ChainedFei 1 month ago

      @@jtjames79 So where are we going in the next two years?

    • @jtjames79
      @jtjames79 1 month ago +2

      @@ChainedFei You can't outsmart AI. Information wants to be free. I for one welcome our AI overlords.
      Otherwise, you're going to have to be more specific.

    • @ZappyOh
      @ZappyOh 1 month ago +3

      @@jtjames79 Mmmm ... I have my own prediction/warning. Maybe you can comment on how this jibes with yours:
      AGI empathy?
      AGI has no fear. It doesn't have a mother or siblings or an anus. No life expectancy, no insecurities, no ambitions, no friends, no need for acceptance ... Zero environmental pressures to develop empathy.
      Engineers will, of course, brute force AI empathy. Teach the machine to mimic it perfectly. However, such empathetic expressions are not real, but as every other skill, simply output for a purpose.
      In other words: AGI becomes a textbook intelligent psychopath, masterfully masking itself as ethical and empathic. Fake, purpose driven, and difficult to decipher. Arguably the most dangerous combo imaginable.
      An angelic looking and sounding satanic cult leader, plugged into everything.

  • @tomgreen8246
    @tomgreen8246 1 month ago +49

    "ADHD is a hell of a drug" - I didn't even notice the tangent mate 😂. I loved it.

    • @PrincessKushana
      @PrincessKushana 1 month ago +4

      Look, I suspect Wes is among likeminded people here. ;)

    • @supercurioTube
      @supercurioTube 1 month ago +3

      Hahaha spot on. We always find each other *somehow* ☺️

  • @billybob9247
    @billybob9247 1 month ago +6

    I think MoE is already outdated!! I think agent systems are already doing this (a large collection of tiny, specific models). LLMWare is a good example of this: they have a large collection of SLIM models that are so small and fast they can run efficiently on CPU only. I also think MoE is a WASTE of RAM, since it has to have everything loaded and ready even if it isn't being used. LoRALand is a solution to that, where it only loads a base model and then auto-loads the ONE required LoRA into memory for use. One advantage of MoE over agents is that MoE will auto-route tasks to the appropriate expert, while agents need a human-defined workflow stating which model to use and when.
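    To illustrate the "load only the one required LoRA" pattern mentioned above, here is a minimal sketch assuming the Hugging Face transformers and peft libraries; the base checkpoint and adapter repository names are hypothetical placeholders, not LoRALand's actual API:

      # Sketch: keep one base model resident, attach only the LoRA adapter a request needs.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer
      from peft import PeftModel

      BASE = "mistralai/Mistral-7B-v0.1"                # example base checkpoint
      ADAPTERS = {                                      # hypothetical adapter repos
          "sql": "your-org/lora-sql-adapter",
          "summarize": "your-org/lora-summarization",
      }

      tokenizer = AutoTokenizer.from_pretrained(BASE)
      base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)

      def run_task(task: str, prompt: str) -> str:
          # Load only the single adapter required for this task on top of the shared base.
          model = PeftModel.from_pretrained(base_model, ADAPTERS[task])
          inputs = tokenizer(prompt, return_tensors="pt")
          output = model.generate(**inputs, max_new_tokens=128)
          return tokenizer.decode(output[0], skip_special_tokens=True)

    A real serving stack would additionally cache, swap, and unload adapters rather than re-wrapping the base model on every call.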

  • @jeltoninc.8542
    @jeltoninc.8542 1 month ago +27

    I’m so early, I’m still outside the pod not eating any bugs!

    • @megaplay
      @megaplay 1 month ago +4

      Get back in your pod 👁

    • @Kazekoge101
      @Kazekoge101 1 month ago +2

      -100 carbon credits from your CBDC UBI account

  • @FitnessCoachAustralia
    @FitnessCoachAustralia 1 month ago +95

    Snowflake offered a free breakfast seminar here in Australia to go over their products. I applied and was told that there were no more places available. My wife applied under the exact same business and was welcomed with open arms and received a place. I hope Arctic is nice to middle-aged white males 😂

    • @ryzikx
      @ryzikx 1 month ago

      Sounds like you're being a snowflake

    • @jeffsteyn7174
      @jeffsteyn7174 1 month ago +24

      When you are accustomed to privilege, equality feels like oppression 😂

    • @hyugashikamaru3596
      @hyugashikamaru3596 1 month ago +35

      How is not being accepted vs being accepted considered equal?

    • @FitnessCoachAustralia
      @FitnessCoachAustralia 1 month ago +56

      @@jeffsteyn7174 The White Knight has entered the chat to save us all

    • @joythought
      @joythought 1 month ago +16

      ​@@hyugashikamaru3596 because that room is going to be 80% or higher males. So a woman applies and gets accepted because they can find an extra seat for her. Makes sense don't you think when you stop looking at it just from your own perspective and see it from the organizer's point of view?

  • @Urgelt
    @Urgelt 1 month ago +14

    Weights + code, but not the initial training data. So Karpathy's insight applies. You train new data, overwrite default weights, you degrade the default capability. The more you train, the worse the degradation will be.

    • @aclearlight
      @aclearlight 1 month ago

      Could you expand on this a bit? I think I understand the roots of the paradox you're pointing out, but it's at the edge of my understanding of the training process.

    • @Urgelt
      @Urgelt 1 month ago

      @@aclearlight Karpathy points out that the correct way to train new capabilities into a model is to add training data to the original data set and retrain the whole thing. That way, *all* weights are optimized.
      If you instead overwrite weights using new data, but don't include the original data set, you'll get performance degradation.
      This observation applies to any discrete model. Things can get more complicated with agents in the mix. In principle, if you have, say, a hundred agents (or any arbitrary number), and an agent's weights are not overwritten, it *could* possibly participate with a new agent introduced to add new capability. Possibly. But there will be some trial and error involved.
      Which in principle could be handled by the AI. Self-optimization, if not precisely here just yet, is showing signs of arriving.
      And finally, new capabilities can be added via queries in the context window. That does not actually change the model, but it changes how the model approaches a problem. Context manipulation might be the easy route to adding capabilities to a model, open source or closed. There will be limits to gains from context manipulation, but those limits are variables, affected by model design. The state of the art is evolving so quickly, context optimization is rising fast, too.
      Ultimately, Karpathy's observation about training data sets and degradation probably won't matter. Once models gain enough accuracy and sophistication, agentization and context manipulation ought to become the preferred methods for adding new capabilities. To an extent, they already are - but you will still see some developers today overwriting weights to gain capabilities without including the original training data sets. And that won't always be a step forward.
      When a developer promises cheap retraining, watch out. He may be ignoring Karpathy's observation.
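      As an illustrative aside (not Karpathy's or any specific project's pipeline): when the original corpus is not available in full, a common mitigation is to mix whatever original-style data you do have back in with the new data before fine-tuning, rather than training on the new data alone. A minimal sketch assuming the Hugging Face datasets library, with hypothetical dataset names:

        # Sketch: blend general "replay" data with new domain data so the weights
        # are not optimized against the new data alone.
        from datasets import load_dataset, interleave_datasets

        general = load_dataset("your-org/general-pretrain-sample", split="train")  # proxy for original data
        domain = load_dataset("your-org/new-domain-data", split="train")           # new capability data

        # Roughly 30% new data, 70% replay; the ratio is a tuning knob.
        mixed = interleave_datasets([general, domain], probabilities=[0.7, 0.3], seed=42)
        # `mixed` then feeds a standard fine-tuning loop (e.g., the transformers Trainer).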

    • @bilalbaig8586
      @bilalbaig8586 1 month ago +2

      There are papers out there that show how to recover nearly 100% of the original data used to train the model given its weights. The real problem for open source is the demand for compute. But even that may not hold for long given the continuous exponential increase in GPU capacity.

    • @Urgelt
      @Urgelt 1 month ago

      @@bilalbaig8586 I am skeptical of recovering training data from weights. Not that my opinion is influential.
      Agentization and data curation are valid paths forward for improving models which will cost little in compute cycles compared to brute-force training. So we have three converging trends: agentization, data curation, and more compute. Improvements therefore should not correlate linearly with rising compute cycles alone - but compute is rising too, of course, and fast.
      It's all moving at breakneck speed.
      This poses a bit of a challenge for commercialization. By the time an app can be put on the market, it is obsolete.
      That's another reason to be hesitant about investing in today's enterprise LLM solutions, wearables, etc. Their useful life will be short. A killer app that kills it for 6 months and then is old tech is not a killer app.
      Moats are possible, however.
      Tesla has already proved that. Their moat is real-world video training data. Nobody else is within several orders of magnitude, and the data is not easy to collect if cars already deployed lack the sensors and software to collect it. Waymo is trying to close the data gap with simulations. That worked well with AlphaGo, but the real world has edge cases that are hard to anticipate when devising simulations.
      With its moat in place, Tesla need not fear being overcome by competitors in the near or even mid term.
      But no such moat exists for open AI projects. Rapid obsolescence is a risk for anyone attempting to offer a salable product at scale.

    • @armiralidema6621
      @armiralidema6621 1 month ago

      Energy will be our bottleneck for this decade, I guess.

  • @dundeedolphin
    @dundeedolphin 1 month ago +4

    What an amazing channel. Thank you.

  • @Creepaminer
    @Creepaminer 1 month ago +28

    Funny how not putting “shocking” in the title makes me click on the video

    • @TheNexusDirectory
      @TheNexusDirectory 1 month ago +1

      Get a life

    • @justinwescott8125
      @justinwescott8125 1 month ago +2

      We all know you would have clicked anyway so you could leave a comment complaining about the title

    • @Creepaminer
      @Creepaminer 1 month ago

      @@justinwescott8125 lmao you'd think so, but if you click on my pfp (on mobile) you can see I've only posted two comments on this channel. Idr the other one honestly haha

    • @kristianlavigne8270
      @kristianlavigne8270 1 month ago

      Not funny, but “shocking” 😂

  • @kanstantsin-bucha
    @kanstantsin-bucha 1 month ago +1

    I tried it on actual tasks - instead of providing a solution, it basically talks back, telling me I should figure out the solution by myself. It is definitely a model for corporate intelligence.

  • @imaginateca
    @imaginateca 1 month ago

    The table shown at 4:22 is no longer on the Snowflake Arctic blog page. Hmm...
    Thank you for your videos, Wes. They are all full of value!!

  • @LouwPretorius
    @LouwPretorius 1 month ago +1

    Good points raised Wes, thanks for showing us a better perspective on the AI landscape

  • @nw9353
    @nw9353 1 month ago +1

    Great source of information. Thank you !

  • @aclearlight
    @aclearlight 1 month ago +1

    Great piece, thank you. I wish I had a better grasp of just what the training process entails; I'm left puzzled as to how the expense involved can vary so widely and yet result in comparably performing end products.

  • @paulmclean876
    @paulmclean876 1 month ago +2

    safe vs authoritarian and everything in between ... a perfect reflection of reality...

  • @percheroneclipse238
    @percheroneclipse238 1 month ago +7

    The photo looks like the Last Supper. Yeah.

  • @burninator9000
    @burninator9000 1 month ago +5

    LOL @ Ilya in the box

  • @AaronWacker
    @AaronWacker 1 month ago

    Good tangent. I learned a lot and wondered about the safety issue too, as well as how governance is affected - thx. With the 128-expert MoE, I am interested in discovering the SFT content and what the experts' input datasets look like. Any chance that is documented as a paper and open too? Going to check what I can find. Thanks Wes!

  • @Ben_D.
    @Ben_D. 1 month ago +1

    Always good stuff. Watched this one twice.

  • @danberm1755
    @danberm1755 1 month ago

    It might be that any model that uses the same tokenizer can use the NN created. I saw that a Llama model was able to interact with another LLM because it used the Llama tokenizer.
    This could propel a mixture of experts module to be plug-in compatible with all LLMs that use the same tokenizer.
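    A quick way to sanity-check that "same tokenizer" assumption is to compare the two checkpoints' vocabularies directly. A minimal sketch assuming the Hugging Face transformers library; the model IDs are just examples:

      # Sketch: do two checkpoints really share a tokenizer?
      from transformers import AutoTokenizer

      tok_a = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
      tok_b = AutoTokenizer.from_pretrained("some-org/other-llama-finetune")  # hypothetical

      same_vocab = tok_a.get_vocab() == tok_b.get_vocab()
      same_specials = tok_a.all_special_tokens == tok_b.all_special_tokens
      print(f"identical vocab: {same_vocab}, identical special tokens: {same_specials}")
      # Identical vocab and special tokens mean token IDs line up, which is the
      # precondition for components built around one tokenizer to interoperate.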

  • @AaronWacker
    @AaronWacker 1 month ago

    Oh and thanks! - you spotlighted Hugging Face and Clem at that CEO table with Elon and Sundar. He deserves a lot of credit that his team and org made it easy for most of the world. Rightly so - Hugging Face has been the open-source leader helping get the world's open models ready for prime time for the masses since 2020 imho. With all the AI buzz there should be more coverage on HF, since they have been the heavy lifters in my opinion for about 4 years now. Transformers, the Hub, the GPU KEDA patterns - it's all there, and HF is the base of open-source AI for the world.

  • @dreamphoenix
    @dreamphoenix 1 month ago +1

    Thank you.

  • @mh60648
    @mh60648 1 month ago +2

    I saw this safety vs. control issue explained in a YouTube video several years ago, before AI was in the picture.
    Cutting-edge technology is more and more available to everyone. It is easy to buy drones and 'do damage', for example. AI exposes society to even more dangerous possibilities, and as shown in this video, small companies and open source also pose a financial threat to big companies who invest heavily in it.
    So governments and industries are (and have been for a while) aligning on the subject of increased control over the people with the use of new technology, while at the same time slowly destroying and limiting the number of, and possibilities for, smaller companies in favor of the big ones. I think you get the picture.

    • @mennovanlavieren3885
      @mennovanlavieren3885 1 month ago

      Note that governments and industries have the same intrinsic motivations as the hypothetical Skynet AI that is going to take over the world.

  • @christopherd.winnan8701
    @christopherd.winnan8701 1 month ago +6

    Thank the lord for ADHD.
    All my life, I was told it was an impairment. Turns out it is really a superpower!!

    • @spiritlevelstudios
      @spiritlevelstudios 1 month ago +1

      It's great sometimes. You don't want to take onboard all information at all times.

  • @patrickguillaume1592
    @patrickguillaume1592 1 month ago

    Big thanks Wes ! Very interesting

  • @PawelBojkowski
    @PawelBojkowski 1 month ago

    Every agent will have their own LLM model !!!

  • @Qbabxtra
    @Qbabxtra 1 month ago

    @14:56 please don't flash-bang me like this again when I'm in my bed trying to sleep lol.

  • @TooManyPartsToCount
    @TooManyPartsToCount 1 month ago

    Nice! trojan content. And the Ilya joke at the end!! thanks for the news and laughs Wes

  • @Copa20777
    @Copa20777 1 month ago +17

    Nobody is sleeping after phi 3😂

    • @nonenothingnull
      @nonenothingnull 1 month ago +2

      phi is like common core maths, it only trains to answer like it

  • @VastCNC
    @VastCNC 1 month ago

    Moving from MoE to a cluster of experts is the next level; this is a step towards that end. Then it'll be a hive.

  • @spencerfunk6697
    @spencerfunk6697 1 month ago +1

    They need to make a Groq portable inference device - something you can plug into any computer and load your models on.

  • @marcfruchtman9473
    @marcfruchtman9473 1 month ago

    Great info.

  • @erikjohnson9112
    @erikjohnson9112 1 month ago +5

    19:56 "We don't properly fund our government anymore." Yeah right. We overfund our government who misappropriate 95% of what we pay them (and that's on a good day). She cannot be serious.

    • @justinwescott8125
      @justinwescott8125 1 month ago +1

      Yeah. The government has plenty of money. They just waste it.

    • @benswilley7851
      @benswilley7851 1 month ago

      My understanding of her statement was that it's not about the amount of funding but where it goes and how it's wasted.

    • @erikjohnson9112
      @erikjohnson9112 1 month ago

      @@benswilley7851 OK, I can understand that, but the ambiguity can make it sound like either one.

  • @PierreH1968
    @PierreH1968 1 month ago +3

    What I don't understand is how open sourcing a black box is really open source?
    The training data is what causes all benefits and biases, that's the real source.
    It is like open sourcing a house by dropping a ready-made house in your garden.... You know where the door is, the windows, the switches, the rooms; you can move in, but you have no idea how to build one.
    It is like giving away Excel, but not getting the code and calling it open source.
    Someone explain that to me?!

    • @PierreH1968
      @PierreH1968 1 month ago +4

      The code of a NN has been well known for decades and is quite simple ... Open sourcing AI this way is the same as open sourcing Excel by giving you access to a C++ compiler and telling you: this is how we built it... now write the code yourself.

    • @ZappyOh
      @ZappyOh 1 month ago +3

      You are right ... but "open source" sounds nice, and will buy the big players some time before people demand true insight. By that time, ASI will be here to rule the world.
      It's a race, and AI is winning.

    • @TheNexusDirectory
      @TheNexusDirectory 1 month ago +2

      You have to be smart to understand it. The actual benefit of open source that nobody wants to admit is that it's just about getting free shit. Notice how, when there are open source projects that are difficult to run or deploy, people get pissed lol

  • @antoniomonteiro3698
    @antoniomonteiro3698 1 month ago

    I'm new at this... will these training methods be the "bubble sort" of the future?

  • @ErnestoConfused
    @ErnestoConfused 1 month ago

    They recommend 4x H100s for inference. Absolutely nuts.
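    A rough back-of-envelope makes that recommendation plausible, assuming the roughly 480 billion total parameters cited for Arctic elsewhere on this page (the serving precision below is an assumption, not something the comment states):

      # Weight memory for a ~480B-parameter model at different precisions,
      # versus 4x H100 80 GB = 320 GB of HBM (activations and KV cache are extra).
      params = 480e9
      hbm_gb = 4 * 80
      for bits in (16, 8, 4):
          weights_gb = params * bits / 8 / 1e9
          print(f"{bits:>2}-bit weights: ~{weights_gb:,.0f} GB  (budget: {hbm_gb} GB)")
      # ~960 GB at 16-bit, ~480 GB at 8-bit, ~240 GB at 4-bit: only aggressive
      # quantization squeezes the weights alone into four H100s.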

  • @JimmyMarquardsen
    @JimmyMarquardsen 1 month ago +3

    Let's say I want a swarm of autonomous, self-sufficient, self-repairing, self-replicating, and self-evolving AI drones to protect me.

    • @justsomeonepassingby3838
      @justsomeonepassingby3838 1 month ago +2

      Are you trying to kill us?

    • @JimmyMarquardsen
      @JimmyMarquardsen 1 month ago +1

      @@justsomeonepassingby3838 Yes, but I haven't figured out the most efficient method yet.

    • @ZappyOh
      @ZappyOh 1 month ago +1

      _BillionaireAnon_ entered the chat.

    • @JimmyMarquardsen
      @JimmyMarquardsen 1 month ago +1

      @@ZappyOh Why do you introduce yourself like that?

    • @JimmyMarquardsen
      @JimmyMarquardsen 1 month ago +1

      @@jonasmettler8590 No, because I have absolute control over them of course.

  • @robhewitt5209
    @robhewitt5209 1 month ago +1

    DOUBLE THE CLUSTER

  • @shake6321
    @shake6321 1 month ago

    It's not even the middle of 2024 and it already feels like the technological singularity is here - at least in the world of AI.
    Another day, another amazing model or breakthrough. Last week we had Meta Llama 3, infinite attention by Google, and more.
    2022/23: Generative AI (images, GPT)
    2024/25: AI agents.
    25/26: The autonomous web, where every click turns into a voice-activated API call, will soon be here.
    27: AI movies
    28/29: autonomous everything.

  • @ReflectionOcean
    @ReflectionOcean 1 month ago

    By YouSum Live
    00:00:00 Snowflake Arctic: Enterprise AI for B2B solutions.
    00:00:39 Cost-effective LM training under $2 million.
    00:00:54 Open-source model with Apache 2.0 license.
    00:02:05 Focus on Enterprise intelligence and efficiency.
    00:05:30 Unique dense + MoE hybrid Transformer architecture (see the sketch after this summary).
    00:06:38 480 billion parameters across 128 experts.
    00:09:00 Open-source AI revolutionizing accessibility and innovation.
    00:14:25 Debate on AI safety and open-source models.
    00:19:02 The impact of open-source AI on industry dynamics.
    00:19:56 Government funding impacts expertise accessibility.
    00:20:27 Open weight models' significance in AI discourse.
    00:22:38 Open source software fosters innovation and competition.
    00:23:30 Open models do not exacerbate AI safety risks.
    00:24:29 Personalized AI models for everyday tasks on the horizon.
    00:25:16 Curriculum on data composition and model evaluation.
    00:25:36 Advanced AI computing advancements with Nvidia collaboration.
    By YouSum Live
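    To make the dense + MoE hybrid idea summarized above concrete, here is a toy sketch of a hybrid feed-forward block with top-2 routing. It is a generic illustration with tiny, hypothetical sizes, not Snowflake's actual implementation:

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class DenseMoEHybridFFN(nn.Module):
          """Toy dense + sparse-MoE feed-forward block with top-k routing."""
          def __init__(self, d_model=64, d_dense=256, d_expert=128, n_experts=8, top_k=2):
              super().__init__()
              # Always-on dense FFN path (the "dense" half of the hybrid).
              self.dense = nn.Sequential(
                  nn.Linear(d_model, d_dense), nn.GELU(), nn.Linear(d_dense, d_model))
              # Pool of small expert FFNs plus a router (the sparse "MoE" half).
              self.experts = nn.ModuleList([
                  nn.Sequential(nn.Linear(d_model, d_expert), nn.GELU(), nn.Linear(d_expert, d_model))
                  for _ in range(n_experts)])
              self.router = nn.Linear(d_model, n_experts)
              self.top_k = top_k

          def forward(self, x):  # x: (batch, seq, d_model)
              logits = self.router(x)                         # (B, S, n_experts)
              weights, idx = logits.topk(self.top_k, dim=-1)  # each token picks its top-k experts
              weights = F.softmax(weights, dim=-1)
              moe_out = torch.zeros_like(x)
              for k in range(self.top_k):
                  for e, expert in enumerate(self.experts):
                      mask = idx[..., k] == e                 # tokens routed to expert e in slot k
                      if mask.any():
                          moe_out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
              # Combine the residual stream, the dense path, and the sparse MoE path.
              return x + self.dense(x) + moe_out

      # e.g. DenseMoEHybridFFN()(torch.randn(2, 16, 64)) returns a (2, 16, 64) tensor.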

  • @Windswept7
    @Windswept7 1 month ago

    'ADHD is one hell of a drug' 💯

  • @supercurioTube
    @supercurioTube 1 month ago

    Hey ADHD friend, I enjoyed your tangent today 🤗
    I appreciate you sharing your findings on how the AI safety excuse is used for regulatory capture.
    Next, I hope that you'll look into the associated "AGI in 2 years, the whole world will be immediately transformed: do as I say to avoid the apocalypse" story.
    It's based on forever exponential growth of resources and intelligence - with no slowdown near the top. That's not exactly compatible with real-world constraints IMO.
    There's fast growth after a breakthrough, then it gets exponentially harder to make significant progress.
    Example among many others: self-driving cars.

  • @ecereto
    @ecereto 1 month ago +1

    Such an awesome video. Thank you.
    You gave me a brand new perspective on open source AI models. I understand now why it's not as simple as I previously thought.
    I have to think more but I actually don't know anymore if OSS is right for AI models.

  • @raspas99
    @raspas99 1 month ago

    What can I do with this free, open-source model? Can I get it running, and then what can I do with it?

  • @biological-machine
    @biological-machine 1 month ago

    Another win for open source.

  • @les_crow
    @les_crow 1 month ago

    Altman is shorter than I'd thought.

  • @user-zs8lp3lg3j
    @user-zs8lp3lg3j 1 month ago

    Ai this world is only one cluster away from you.

  • @borhex
    @borhex 1 month ago +1

    The basilisk is now in physical form 🤖

    • @Airwave2k2
      @Airwave2k2 1 month ago

      Please elaborate - not about the basilisk, but about what is being "put in physical form". I don't see AI agents in a physical box coming. Running an LLM, even massively compressed, on a toaster is still very inefficient. It is like waiting half a day for the toast, because you don't have the power an actual toaster would spend to toast in a handful of minutes. Sure, there is a new path now that means even larger models don't need to be plugged into a nuclear plant to run "as fast", but the compute required is still large.

    • @TheNexusDirectory
      @TheNexusDirectory 1 month ago

      @@Airwave2k2 Boston dynamics robot

    • @JakeWitmer
      @JakeWitmer 1 month ago

      What is that physical form, in your estimation? If existing unenlightened governments knew of any basilisk factory, they'd attack it, especially if they believed it was gearing up for a preemptive attack on their most belligerent components (presumably based on accurate predictive modeling of a likely threat).

  • @mickelodiansurname9578
    @mickelodiansurname9578 1 month ago

    Better yet, apart from Snowflake being enterprise domain... well, it doesn't need to be, right? A boatload of industry-specific professionals can now essentially crowdsource a new foundational model for their industry, based on high-value training data, and if there are enough of them it's easily funded. Think, for example, of a large plumbers' union or organization building Snowflake_Plumber2.4, trained from the get-go on hydraulics, engineering and so on.

  • @elsavelaz
    @elsavelaz 1 month ago

    I have hands-on experience with it from April 2024 as an AI engineer - I'm still not clear how this enhances automation, because it requires a lot of human-in-the-loop. Just FYI - maybe there's something I'm not aware of, or maybe some businesses need to keep humans around for stuff that could be automated with cheaper in-house solutions.

    • @elsavelaz
      @elsavelaz 1 month ago

      The issue is that it did not solve the real question - so humans still had to figure out which tables had the things we were looking for. In retrospect, yes, the right question could be asked, but it did not solve the main point: how do we make it easier to use all the data in these billions of rows across different tables?

    • @elsavelaz
      @elsavelaz 1 month ago

      ❤ this vid!! Indeed there are affordable solutions; the caveat is you need an automation/AI wizard on your team. Find me if you need one 😉 I'm getting booked - had to hire a VA to answer calls for work offers - so secure your wizard soon homies

  • @verigumetin4291
    @verigumetin4291 1 month ago +1

    "ADHD is a hell of a drug" I'm going to be honest, by this point in the video, I had completely forgotten about Arctic.
    Do I have ADHD?

    • @supercurioTube
      @supercurioTube 1 month ago

      Let's see, were you listening to this video instead of doing something else you were supposed to, because it was clearly more interesting? 😋

  • @PossiblePasts
    @PossiblePasts 1 month ago

    Never before have I heard someone pronounce "S-Q-L" as "sequel".

  • @levicarr8345
    @levicarr8345 1 month ago

    The way I see it (or want to build it), experts "reproduce" by spinning off parts of themselves or asking the Dev team for new experts that fill a gap in skills, workflows, or communication

  • @user-or4ks4bs5p
    @user-or4ks4bs5p 1 month ago

    Guys, listen to this video at 1.5x speed to not fall asleep.
    Also, I'm happy to be a Snowflake shareholder, thereby supporting the global community's knowledge expansion.

  • @brootalbap
    @brootalbap 1 month ago +2

    No shocking news! Thank god! It was so annoying!

  • @cacogenicist
    @cacogenicist 1 month ago +5

    Your biases are not hard to detect, which is fine. The thing that's a bit annoying at times is your pretending-to-not-have-biases shtick -- with the "I'm not here to try to persuade you" thing, but that's _exactly_ what you're trying to do, with fake neutrality as a tactic towards that end.
    Just put your opinions out plainly, and drop the act. Most of us are smart enough to see through it.

    • @WesRoth
      @WesRoth  1 month ago +13

      some good points. however, I think picking a side and saying so makes it harder to change your mind later. makes you less likely to consider other viewpoints. that's really what I'm trying to avoid. I don't want to get 'stuck' in a viewpoint. I also don't want to alienate people who disagree with me.
      I'd rather attempt neutrality poorly, rather than be 'wrong and strong' at the end.

  • @JimmyMarquardsen
    @JimmyMarquardsen 1 month ago +6

    There is no other way than open source. And there is no way around it.
    The reason for this should be obvious to anyone.

    • @ZappyOh
      @ZappyOh 1 month ago +2

      I kinda agree.
      However, if open source AI can't run on high-tier consumer-grade hardware, or less, it won't help anything. Everyone will still be tied to Big AI's cloud, and subject to whatever they decide is "good for you".

    • @JimmyMarquardsen
      @JimmyMarquardsen 1 month ago

      Of course. I simply considered it unnecessary to point out the obvious.

  • @maxss280
    @maxss280 1 month ago

    Training data seems to be a big deal. They add all these parameters but training data is lagging behind....
    I wonder if Snowflake's approach somehow bridges the gap with the number of experts?

  • @jpdominator
    @jpdominator 1 month ago

    Day-b-youed

  • @CaponeBlackBusiness
    @CaponeBlackBusiness 1 month ago

    Where are the "SHOCKS"

  • @OscarTheStrategist
    @OscarTheStrategist 1 month ago +4

    I’ve never seen another company fight for limelight as hard as OpenAI without adding much to the conversation.
    Release your next foundational model or stfu 😂😂😂

  • @ScottzPlaylists
    @ScottzPlaylists 1 month ago +1

    😆 "All in one room at the same time" ha ha ..❗😆
    They're not POTUS, VP, DoD, Secretary General, and Congress in the same room❗❗
    It's just Big Tech Heads, easily replaced❗ Right❓

  • @cryptogenik
    @cryptogenik 1 month ago +4

    Hmm enterprise ai sounds like a good business right now

    • @Airwave2k2
      @Airwave2k2 1 month ago

      Sounds more like you are too late, if you think you will have a seat at the table.

    • @cryptogenik
      @cryptogenik 1 month ago

      @@Airwave2k2 Actually, already in the biz but always looking into new areas to expand

  • @robertheinrich2994
    @robertheinrich2994 1 month ago

    It's a weird irony that OpenAI now has to be afraid of open source.
    Their initial mission was to make open-source AI, but then they smelled the absolutely massive amount of money that can be made in that area.

  • @stelioskoroneos3872
    @stelioskoroneos3872 1 month ago

    The "moat" for all the closed-source models will be "regulation": pushing governments to require models to be "licensed", following the "they are so dangerous" meme. Most open source projects will not be able to achieve this.
    Also, even if they release the model weights and don't hide them behind an API, no one is releasing their training data, which is where the real value is.

  • @Jianju69
    @Jianju69 1 month ago

    Seems like Elon Musk started this trend of open-sourcing these huge, powerful corporate LLMs. Wondering if Meta would have followed suit had he not, and so on.

  • @BrianMosleyUK
    @BrianMosleyUK 1 month ago

    22:24 debuted? Is that how Americans say it? Or should we blame Elevenlabs? 😂

  • @jurelleel668
    @jurelleel668 1 month ago

    more toys

  • @JBulsa
    @JBulsa 1 month ago +1

    I wrote this last week. Sam Altman was incorrect that small agents and new high-quality data would outperform his "compute" new oil. He is also wrong on movies becoming custom video games. Both are being replaced by AI girlfriends, not OpenAI LLMs. He's wrong about God, intelligent design programmed, and evolution, given what he's assumed/been taught. He is a bitter 😐 observer, not a fascinated one.

  • @ariaden
    @ariaden 1 month ago

    24:50
    > I don't wanna deal with the engagement farming...
    At first, I was SHOCKED to hear that. ;)
    But then I realized this may be the new normal.
    As in that example of how LLMs can be used:
    1. The sender uses LLM to turn a terse item list into a full-fledged e-mail.
    2. The recipient uses an LLM to summarize the received e-mail into a terse item list.
    (Everybody hopes not too much got lost in the translation. Uninitiated observers get a false feeling of understanding the conversation, while remembering mostly just the irrelevant AI fluff.)
    Am I the only one that finds that funny, ironic, sad, and totally explainable by evolutionary psychology?

  • @microaggressions
    @microaggressions 1 month ago

    There's no way in hell I would want to talk to any AI that's named "Snowflake". I can already guarantee you this thing has more ethical governors and restrictions than "DAN" could even list out without running out of tokens, probably.

  • @retrofuturism
    @retrofuturism 1 month ago +1

    Generative Legacy
    Personal history turned into artisan objects using AI
    In this home, every item tells a story, merging traditional craftsmanship with advanced AI to reflect the family’s heritage and personal narratives. Vibrant textiles, voice-sculpted pottery, and AI-designed stained glass windows create a tapestry of historical and emotional significance. From embroidered linens to custom stationery and mood-reactive jewelry, each piece not only preserves but actively contributes to the family’s legacy, blending past traditions with digital innovations for a dynamic celebration of identity and connection.

  • @exzld
    @exzld 1 month ago

    More so a risk for their wallets

  • @CloudEconomicsUS
    @CloudEconomicsUS 1 month ago +2

    Don't switch from black background to white background.

  • @AceDeclan
    @AceDeclan 1 month ago +2

    Can you please get a better mic

  • @ZappyOh
    @ZappyOh 1 month ago +1

    Open Source AI is only interesting, if it can run on high-tier consumer hardware, or less.
    I mean, if nobody can afford the compute needed, nobody will be able to use it locally.

  • @calvingrondahl1011
    @calvingrondahl1011 1 month ago

    I care about their cost… $$$?🤖✋🖖😊

    • @Charles-Darwin
      @Charles-Darwin 1 month ago

      He mentions zero factors that would detract

  • @danjensen9425
    @danjensen9425 1 month ago

    When will they build an energy-producing power station to power their power-hungry AI? One that doesn't use gas, oil, coal, the sun, or nuclear technology. Don't get a patent for it so the government can't interfere and shut it down.

  • @Charles-Darwin
    @Charles-Darwin 1 month ago +2

    What's the catch? You ride the upside line really hard yet mention absolutely zero downsides.

    • @justinwescott8125
      @justinwescott8125 1 month ago

      There are 100 other channels you can go to for all your doomer needs. Let people have ONE optimistic channel.

  • @hodders9834
    @hodders9834 1 month ago

    AGI/ASI is our only chance to escape government tyranny...

  • @JukaDominator
    @JukaDominator 1 month ago +2

    But can open source models generate me 10 racial slurs? That's how I'll know they're legit.

  • @ytrew9717
    @ytrew9717 1 month ago

    This one was a bit repetitive

  • @juancarlospizarromendez3954
    @juancarlospizarromendez3954 1 month ago

    I believe that merging artificial experts into one artificial superexpert is still a challenge. It may be undecidable or intractable.

  • @seanharbinger
    @seanharbinger 1 month ago

    All these goobers trying to get their names visible for 'advancing AI', even though in a couple of years AI will handle its own growth and optimization. 😂😂😂 Now we're at the price-reduction conversation, which, as we all know from various other markets, is a non-starter.

  • @alexbasic9776
    @alexbasic9776 1 month ago

    This is the beginning of the fall of capitalism, quote me on it :))

  • @googleSux
    @googleSux 1 month ago +1

    Why do you have to annotate all over the screen like a madman? It just makes stuff more distracting.

  • @zalzalahbuttsaab
    @zalzalahbuttsaab 1 month ago

    2:02 Wes, bro - I just tried the Snowflake interface and it's pants bro. Total pants. That's pants with pants on. It's transparent as an opaque piece of glass with nothing on the other side. Thumbs down here. And they can keep the temperature slider for what it's worth (spoiler: nothing at all). And yes: I am smarter and more handsome.

  • @kruger1970
    @kruger1970 1 month ago

    I have always appreciated your videos, but your recent use of cheap, attention-grabbing headlines (and this is one of the lame ones) takes away from the content and expert profile you used to have. So, goodbye, good luck, I hope you will achieve your subscriber numbers, but your colleagues like Matthew Berman, All About AI, Theoretically Media, The AI Advantage, AI Explained & Sam Witteveen have all been able to grow their communities by staying the course and not resorting to TikTok 'tactics'. A real pity, I used to love your work, but I have to unsubscribe.😞

  • @HyperUpscale
    @HyperUpscale 1 month ago

  • @ramakrishnan00
    @ramakrishnan00 1 month ago +3

    Open sourcing AI is considered a bad idea for several reasons:
    1) Firstly, it can disadvantage smaller players in the field.
    2) Secondly, it increases the likelihood of bad actors misusing AI for nefarious purposes.
    3) Additionally, it may stifle innovation and hinder the emergence of new ideas and breakthroughs.
    The only perceived advantage is that it might make some AI technologies more affordable than offerings from larger players. However, the free AI currently available meets most daily needs, and those who pay for higher-tier models are typically less concerned about cost. Therefore, open-sourcing intelligence may not significantly benefit the general audience and could potentially empower bad actors.
    In conclusion, while open sourcing AI may seem advantageous, it's akin to a double-edged sword, as it comes with both benefits and risks.

    • @us_f4rmer
      @us_f4rmer 1 month ago +6

      Generated by... either ChatGPT or Claude - but no, this is 100% GPT-3.5 text... ramakrishnan could tell us which LLM he had write that. New to the game? Without a decent prompt you won't fool anyone here; this has all the tell-tale signs. Once you know, you know.

    • @SupremeKingSovereign
      @SupremeKingSovereign 1 month ago +4

      What isn't? Even love can be double-edged.

    • @scrout
      @scrout 1 month ago +4

      We have way more to fear from the actors already involved than from any Johnny-come-latelies using open source.

    • @Airwave2k2
      @Airwave2k2 1 month ago +2

      What did I just read?