How Developers Might Stop Worrying About AI Taking Software Jobs and Learn to Profit from LLMs

  • Published: Jun 3, 2024
  • Right now, the software industry is kind of stuck: no one wants to hire non-AI developers, because the hype claims all the jobs are about to get replaced by garbage like Devin.
    But new evidence indicates that the pace of AI growth is slowing down, and that two years of GitHub Copilot have been creating a 'Downward Pressure on Code Quality.'
    The future is never certain, but it looks like there's a path for the next few years to be a new boom in software development, and it might start soon.
    00:00 Intro
    03:49 Ezra Klein's interview of Anthropic's CEO from April 12th
    04:06 There are no Exponentials in the real world
    05:10 Research showing LLMs are reaching the point of diminishing returns
    06:28 Article on the stagnation of LLM growth
    06:57 Stanford AI Index Report
    07:18 Research showing that LLMs are running out of training data
    07:28 Research on "Model Collapse"
    08:13 Research showing AI reducing overall Code Quality
    09:04 Quick Recap
    09:27 Implications for Software Developers
    10:56 Parallels to 2008/2009 and the App boom
    11:44 How and when we might know
    12:07 Wrap up
    Papers and references from this video:
    The complexity of the human mind
    mindmatters.ai/2022/03/yes-th...
    www.psychologytoday.com/us/bl...
    Ezra Klein's interview of Anthropic's CEO
    www.nytimes.com/2024/04/12/po...
    3Blue1Brown Video on Logistic Curves and Exponentials
    • Exponential growth and...
    Research showing LLMs are reaching the point of diminishing returns
    garymarcus.substack.com/p/evi...
    arxiv.org/pdf/2403.05812
    paperswithcode.com/sota/multi...
    Research showing that Data Availability is likely the bottleneck
    en.wikipedia.org/wiki/Chinchi...
    www.lesswrong.com/posts/6Fpvc...
    www.alignmentforum.org/posts/...
    The 2024 Stanford AI Index Report with section on running out of data
    aiindex.stanford.edu/report/
    Research showing we're running out of LLM Training Data
    epochai.org/blog/will-we-run-...
    Research showing "Model Collapse" when training data contains LLM output
    arxiv.org/pdf/2305.17493
    Research showing AI Code Generation reducing code quality
    visualstudiomagazine.com/arti...
    visualstudiomagazine.com/Arti...
    www.gitclear.com/coding_on_co...
    Release Hype about GPT-5
    tech.co/news/gpt-5-preview-re...
  • Science

Comments • 690

  • @slmille4 · 29 days ago +505

    Ironically LLMs are taking software jobs not directly by writing code but rather by costing so much that there’s not much money left for other projects

    • @InternetOfBugs · 29 days ago +59

      There's definitely an aspect of that. There's only so much R&D investment money to go around, and LLMs are shoveling a lot of it into a furnace at the moment.

    • @aibutttickler · 29 days ago

      I work for a startup whose entire business model is centered around a tool built to analyze data using LLMs (GPT-4 to be exact). It costs less than $100/mo to run and it's extremely profitable. What BS are you spouting?

    • @InternetOfBugs · 28 days ago +30

      I'm talking about how "[private] funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion [in 2023]", plus $1.8 billion in 2023 US federal government AI spending (source aiindex.stanford.edu/report/). The losses include OpenAI, which lost $540 million last year (source www.theinformation.com/articles/openais-losses-doubled-to-540-million-as-it-developed-chatgpt), and Stability AI, which is burning $8 million/month and making only a fraction of that in revenue (source fortune.com/2023/11/29/stability-ai-sale-intel-ceo-resign/), plus a ton of other companies that don't report losses or don't break out their AI divisions, like Anthropic, Meta's AI division, etc.
      I have no doubt there are companies that are making some profit from LLMs, but I seriously doubt it's anywhere close to the $27 billion being spent on it (just in the US).

    • @lattehour · 21 days ago

      haha true but the cost will (eventually) get lower, that's a given

    • @InternetOfBugs · 20 days ago +1

      @@lattehour I certainly hope so!!

  • @michaelcoppinger786 · 29 days ago +454

    Your content is from a time of YouTube past, when people actually cared about delivering quality, original thoughts rather than algorithm-optimized drivel. So glad I stumbled across this channel

    • @user-kt5pm3je5f · 29 days ago +14

      I agree... I'm in my final year at uni and his perspectives have been really encouraging... it's so refreshing to listen to well-reasoned arguments

    • @InternetOfBugs · 29 days ago +77

      @michaelcoppinger786 I guess it makes sense that my content feels like it's from an earlier time. In case you hadn't noticed - so am I :-)
      LOL
      Thanks for the compliment. I appreciate it.

    • @michaeljay7949 · 28 days ago +1

      Well said

    • @Shiryd · 24 days ago +9

      ​@@InternetOfBugs love your energy though! :-) i've noticed some people also from "an earlier time" aren't as truly analytical as you are about the current state of software. on the contrary, they seem to become stagnant on what they already know and just don't like change (which doesn't make any sense with how software is taught nowadays)
      versus you, who clearly take the time to "go the extra thought" and come to a down-to-earth conclusion without coming off as arrogant or "wiseass" lol :p

    • @seriouscat2231 · 3 days ago

      I found this channel through Jonathan Blow and Casey Muratori but can't remember how exactly.

  • @xevious4142 · 29 days ago +308

    I've got a degree in biomedical engineering and have done computational neuroscience before. The number of clueless programmers out there talking about how AI is the same as the brain almost makes me regret switching to general software as a career. Thank you for mentioning this.

    • @InternetOfBugs · 29 days ago +51

      As one of those clueless programmers (at least when it comes to medical stuff - although I'm arguably clueless about far, far more), I appreciate your expert opinion.

    • @seventyfive7597 · 28 days ago +10

      You spread ignorance. I studied neuroscience and computer science in university and have experience with deep learning, and the systems are alike logically. The only difference is that biological neurons are continuous and LLMs are clock-gated, but the rate is so high that it might as well be continuous. Also, LLMs are NOT frozen if an online approach (online in the DL sense, not the internet sense) is applied, or if reinforcement (not RLHF) is applied. In general, LLMs are logically almost the same, with some advantages per synapse, but with 10X fewer "synapses" than a healthy young adult. What's holding back the number of synapses is energy budget, and that's about to change

    • @InternetOfBugs · 28 days ago +97

      @seventyfive7597 The structure of the neurons might be effectively the same, but the human brain is not just a very large collection of neurons connected at random. The overall systems are vastly different.
      Feel free to take it up with Simon Prince. His Book "Understanding Deep Learning" contradicts you. (Book: mitpress.mit.edu/9780262048644/understanding-deep-learning/)
      You might want to read it.
      I can't link to the relevant section of the book, but here's a condensed explanation from an interview with him on the "Machine Learning Street Talk" podcast:
      ruclips.net/video/sJXn4Cl4oww/видео.html
      Also, feel free to argue with Meta's Turing Award winning Chief A.I. Scientist:
      “The brain of a house cat has about...the equivalent of the number of parameters in an LLM... So maybe we are at the size of a cat. But why aren’t those systems as smart as a cat? ... A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning-actually much better than the biggest LLMs. That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans.”
      observer.com/2024/02/metas-a-i-chief-yann-lecun-explains-why-a-house-cat-is-smarter-than-the-best-a-i/

    • @xevious4142 · 28 days ago

      @@seventyfive7597 I'm glad I did the biomedical stuff before the CS stuff. Helps me have a healthy skepticism when people claim systems that have orders of magnitude more energy requirements than the brain are "logically equivalent". Clearly we're missing something or we could run LLMs on the energy density of bread like our brains do. I'll trust my meat thinker over these tools for now.

    • @ansidhe · 28 days ago

      @@InternetOfBugs Just going by intuition here, but if I were to guess, I would say that the human brain (as well as other biological brains, possibly including octopuses 😉) has many more auxiliary processes that the LLMs don't have. And I'm not even talking about the whole landscape of internal synergies among all the specialised centres of the brain. Even mere pruning (and dreaming, no less) is something that is only being researched and not yet applied as a standard process of maintaining LLMs.
      My recent candidate for the next hype is KANs (Kolmogorov-Arnold Networks), which apply serious mathematical analysis to transforming the conceptual space of neural networks. Essentially, transformer functions on each perceptron instead of weights and on/off switch functions. The first experimental results look promising, and if that turns out to be a breakthrough, my bet would be that we might have some real next-gen creation on our hands. Something that would be beyond the discussion of „human brain has vastly more neurons”... we would have better neurons than biological brains ever evolved to have. That would actually be scary…

  • @roid1510 · 16 days ago +75

    I don't understand why we're using LLMs for creative and programming tasks instead of administrative work. I feel like that's where they actually make sense.

    • @RaPiiDHUNT3R1 · 14 days ago +9

      Yeah we don't need video generators that cost $1m for 10 seconds, we need an excel AI assistant that does the spreadsheets & the schedule management.

    • @vedant.panchal · 13 days ago +6

      LLMs can make mistakes. They are unreliable. Totally not recommended to use in critical enterprises. Or administrative tools.

    • @pentachronic · 12 days ago +3

      They’re being used for everything. Don’t pigeon-hole LLMs. Contract writing, Legal interpretation, Website development, etc.

    • @LathropLdST · 9 days ago

      You will spill that drivel until an LLM-powered "payroll clerk" decides this month's salary for you is $7.

    • @crisvis8905 · 5 days ago

      I use it for administrative work. Copilot for Microsoft 365 is a crazy time saver. It's not perfect, but it's made my life so much easier.

  • @kokoinmars · 29 days ago +253

    Dude, your devin video made you my hero, sort of. Not because you debunked Devin, but because you broke down the entire process of approaching the problem and laying out the important points for the conversation that should follow. I love your videos and wish you the best. 🥰

    • @alonzoperez2470 · 29 days ago +2

      The Devin team is made up of gold-medal winners of coding and math championships.

    • @newbieguy2509 · 29 days ago +19

      @@alonzoperez2470 So? What's your point? Everyone knows they're LGMs or GMs on Codeforces. Are you trying to say that just because they're good at competitive coding, Devin should be real?

    • @alonzoperez2470 · 29 days ago +2

      @@newbieguy2509 It will most likely be the case. Perhaps Devin won't straight up replace every coder out there, but it will definitely automate a lot of tasks in the tech industry, and many people will definitely be replaced by this AI.

    • @plutack · 29 days ago +3

      @@alonzoperez2470 When it can't even do the one task they showcased in the demo? Anyway, safe to say it will get better though

    • @alonzoperez2470 · 29 days ago +2

      @@plutack Look, the project was unveiled like 2 months ago. In my opinion it will take at least 2 years or more to launch this project, if they aren't only chasing "$". Like I said, the team is composed of geniuses in the math and coding fields. I wouldn't take the project seriously if it weren't made up of individuals who are experts in the field. But yeah, only time holds the answer.

  • @stephanb.322 · 21 days ago +22

    This was a top-tier take.
    All my corporate clients are currently working on using LLMs exactly like you outlined: using them like any other regular API within existing products. (A minimal sketch of that pattern follows this thread.)
    None of them is using AI to generate code or planning to do so.

    • @michaelnurse9089 · 18 days ago +2

      Corporations were never going to be the ones to use it like this - too risky. Take a smaller business with less to lose and a bigger focus on cost and you will find that code is being written with AI.

    • @seriouscat2231 · 3 days ago

      @@michaelnurse9089, with all the downsides that come with it.

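  A minimal sketch of the wrap-the-LLM-like-any-other-API pattern described above, assuming the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment; the model name, prompt, and summarize_ticket helper are illustrative, not anything the commenters specified:

      # Feature code treats the LLM as just another external service.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def summarize_ticket(ticket_text: str) -> str:
          """Hypothetical product feature: one-sentence support-ticket summaries."""
          response = client.chat.completions.create(
              model="gpt-4o",  # assumption: whatever hosted model the product has vetted
              messages=[
                  {"role": "system",
                   "content": "Summarize the support ticket in one sentence."},
                  {"role": "user", "content": ticket_text},
              ],
              temperature=0.2,  # keep outputs stable in a product setting
          )
          return response.choices[0].message.content

  The point of the pattern: the rest of the product never knows an LLM is behind the function, so the model can be swapped out as vendors leapfrog each other.
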
  • @SeriousCat5000 · 29 days ago +74

    Great points! I think what a lot of non-programmers don't understand is that actually writing code is a very minor part of a developer's job, with the majority of time spent on "communication" tasks. So even if LLMs could write pristine code, they still wouldn't replace developers, since only developers know the context of what to ask for, how to verify it works, and how to implement it.

    • @TheExodusLost · 29 days ago +12

      Communication isn't really that crazy of a skill though; a LOT of people have communication skills. Coding is hard for many. To me it's the largest barrier to entry for normal people creating applications and solving software problems.

    • @BBkeeper · 29 days ago +15

      @@TheExodusLost Disagreed. A lot of people *can* communicate. A much smaller number can communicate *well*

    • @TheExodusLost · 29 days ago +6

      @@BBkeeper still a much higher number than decent coders.

    • @InternetOfBugs · 29 days ago +1

      @SeriousCat5000 Yep. I made a whole video on that here: ruclips.net/video/7-f7rPdj6tI/видео.html

    • @gaiustacitus4242 · 29 days ago

      @@TheExodusLost You are correct. Many organizations separate the systems analysis from the software development, assigning the analysis to personnel with experience in both the problem domain and software development, and the programming to a team of programmers who are given specific functional tasks to complete. In my experience, many developers should never be permitted to touch a keyboard, much less to write code.

  • @brukts3361 · 28 days ago +18

    I just want to say how happy I am that you have become a content creator. You've got such a fantastic and well experienced insight into these kinds of topics. Please keep making content - I've been sharing your videos with a bunch of people over the past few weeks!

  • @SR-cm2my · 29 days ago +4

    I've been following you since your first video. Thank you for bringing much needed nuance to this incredibly difficult conversation.
    As an immigrant with mediocre English language skills (verbal), I often find it difficult to communicate these exact notions about AI to my managers.
    It seems like everybody is riding high on the AI hype train. I've already implemented some amazing multi-modal search capabilities in our application using CLIP. AI is making a difference in my life already. Imagine, having to deal with an ElasticSearch cluster!
    I'm trying to push my company to build more grounded LLM-based solutions like these. I wish I were well connected enough to be an "AI" engineer like you said, judiciously implementing these amazing features in applications where there is a fit!

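  A rough sketch of the kind of CLIP-based multimodal search the comment above describes, assuming the sentence-transformers package and its public clip-ViT-B-32 checkpoint; the file paths and query are placeholders:

      # Embed images and a text query into CLIP's shared vector space,
      # then rank images by cosine similarity to the query.
      from PIL import Image
      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("clip-ViT-B-32")

      image_paths = ["products/chair.jpg", "products/lamp.jpg"]  # placeholder corpus
      image_embeddings = model.encode([Image.open(p) for p in image_paths])

      query_embedding = model.encode("a mid-century wooden chair")
      scores = util.cos_sim(query_embedding, image_embeddings)[0]

      best = int(scores.argmax())
      print(image_paths[best], float(scores[best]))

  In production the embeddings would live in a vector index rather than in memory, which is what makes this a plausible drop-in alternative to a dedicated search cluster for small corpora.
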
  • @truthssayer1980 · 29 days ago +63

    This needed to be said. When one studies how LLMs work, it's just statistics, calculus, linear algebra, and trigonometry (cosine, sine, etc.) all hacked together. It's extremely unlikely that this architecture can scale to a human brain. It's a modern miracle that it's scaled to its current capability. If it scales to human intelligence, then we will have much more to worry about than replacing jobs and programmers. We will need to reevaluate our complete understanding of what it means to be alive and conscious. Replacing jobs and the profit motive is an extremely small-minded view of such an achievement.

    • @paul_e123 · 28 days ago +1

      @@e1000sn Exactly. Well said.

    • @Leonhart_93 · 26 days ago +12

      @@e1000sn That's almost as reductive as saying everything is made from atoms. Yes, and?

    • @Leonhart_93 · 24 days ago

      @@TheManinBlack9054 There is no way you can make such a claim, because you have no way of proving it. Right now the only truly intelligent things are alive. Perhaps it's a property of biological matter; perhaps it's a property that comes with consciousness. But the main point is, so far it has never been shown otherwise.
      I don't care what OpenAI's or NVidia's CEOs claim to boost their stock value.

    • @arnavprakash7991 · 20 days ago

      This miracle and the question it poses are exactly why it's getting so much serious attention

    • @dekev7503 · 20 days ago +1

      @@TheManinBlack9054 At best AI is just a reflection of human knowledge. This is why LLMs cannot solve math (as opposed to regurgitating a solution they have seen, or following a set of instructions) or understand physics or basic contextual concepts.

  • @user-dy6ze5in5q · 28 days ago +3

    I immensely appreciate your clarity, sir. You have taken a creeping doubt out of my heart

  • @djsheets · 28 days ago

    Amazing channel, highly appreciated content. Keep up the great no-nonsense approach! No click-baiting, just some interesting food for thought. Big-up.

  • @teoblixt8825 · 29 days ago

    I appreciate this video so much, it's clear you've put a lot of thought and research into this and it's explained in such a digestible and understandable way. Thank you!

  • @flioink · 29 days ago +66

    "Converges on boring" - yes, I've noticed that in image generation apps. It's all the same colors and faces - it's predictable and boring.

    • @nigel-uno · 23 days ago +4

      Have you seen any custom models for Stable Diffusion? Tons of variety due to the variety of training data and no corporate DEI finetuning like that at Google creating black nazis.

    • @latt.qcd9221 · 19 days ago +1

      I haven't experienced that much with Stable Diffusion. Each model you use with it has a different look and feel to it, and you can always train on different styles or characters to get something different.

    • @truck.-kun. · 15 days ago

      He was talking specifically about LLMs and you're talking about image generation. We create tons of images (or use GANs), and experienced people train models that are super good at creating specific outputs (like varied human faces).
      There is a weird obsession with training on Asian faces, which makes all the results look like some kpop person; that I agree with.

    • @gz6x · 15 days ago

      @@truck.-kun. convergence on aesthetic. lol

    • @mechanicalmonk2020 · 15 days ago

      ​@@nigel-uno bOrInG iS wHeN bLaCk PeOpLe

  • @flexo9069 · 17 days ago

    Whoa man. Your channel just popped up this morning and I have to say I am really enjoying your content; it comes as a breath of fresh air in the current context.
    Thank you and I hope all is going well for you.

  • @opusdei1151 · 29 days ago +3

    Your sound has become really good. I think you've produced the best (solid, hype-free) review out there!

  • @TheGreenRedYellow · 24 days ago +7

    You are on point, I actually run into this issue of repeated answers with different LLM models. This AI hype will go away within 2024.
    Love your videos

    • @ccash3290 · 21 days ago +1

      The hype won't die anytime soon.
      We had a whole hype cycle over NFTs, which have no uses

    • @albertoarmando6711 · 18 days ago

      I don't think it will go away, but it will absorb less capital. Unless results shown are outstanding.

    • @Mimic_37 · 16 days ago

      I agree. I can already see the hype dying down as we speak

  • @RobShocks · 28 days ago

    I was surprised when I saw you only had 30k subscribers. Your videos are so rich and full of great insights. So refreshing to get a balanced view.

  • @marko8095 · 29 days ago +6

    Fascinating, I really like this presentation style, quick with data points. Thanks!

  • @bioman2007 · 24 days ago +2

    This channel is pure gold. Thank you sir, for your videos!

  • @rishab9082 · 24 days ago +2

    I always redirect or share link of your videos whenever my friends are demotivated because of the AI hype. Thanks for the researched quality content and your efforts.

  • @sebastianmacchi6802 · 2 days ago

    Glad your video popped up in my feed, you've just earned a new subscriber, great video, nice dissection

  • @pbkobold · 28 days ago +19

    Although I agree with your general thesis, we must be prepared for significantly better LLMs too. Exponential growth forever is impossible, but how has Moore's law held even as the Dennard scaling that originally drove it died? Stacking sigmoids. They get more expensive to stack, but things can look exponential for surprisingly long. (A toy illustration of sigmoid stacking follows this thread.) Some potential next sigmoids for LLMs: training far past Chinchilla optimality like Llama 3 (though this increases data hunger), and enabling tree-reasoning with an "evaluator" model like what people suspect the Q* thing is (though that's compute-expensive / demands inference-efficient models). There are interesting ideas to address data hunger as well, though I don't want to belabor the point.

    • @InternetOfBugs · 28 days ago +9

      It will certainly be interesting, and I'm looking forward to seeing what happens.
      And if it turns out I'm wrong, and we do get AGI in the next few years, that will be FASCINATING, although I have no idea what the societal or economic implications of that might be. I'm a functionalist, and I do believe humanity will get there, it just doesn't look to me, knowing what I know now, that it will happen in what's left of my lifetime.

    • @dirremoire · 23 days ago +2

      @@InternetOfBugs AGI is a red herring. We don't need AGI to radically transform society. What we have now is sufficient, or will be after one or two years at the most. But yeah, most of the denial stems from the economic uncertainty of an AI-driven future.

    • @traian118 · 22 days ago +1

      Q-learning, Markov decision models and such have been around for quite some time. They have been used successfully in finance and other fields. Just like with LLMs, people hear about them because they are new to the field, and are under the impression that OpenAI is revolutionizing anything. I used to deploy LLMs back in 2021, and Google has been using them for at least 5 years if not more. The original paper states exactly what the shortcomings are, shortcomings that have not been surpassed today. The people who manage to optimize these models, or smaller models in the future, are the ones who are going to make all the bucks. Large models like ChatGPT are not feasible from an economic perspective, but having even half of ChatGPT able to run locally would be a huge thing. I currently deploy a 30-billion-parameter model on 2 x 4090s. This is tiny compared to any LLM available for general use. If someone finds a way to make inference computationally efficient, just like gradient descent did for training, that will be huge. Until then: Devin is a reinforcement learning algo (Q-learning if you want; it comes from gaming and it's a pathfinding algo), and as you can see that one has real limitations too. Just because someone has a large computer to better iterate and get better models out there, like OpenAI does, does not mean they have come up with something new or revolutionary. They did come out with stable products, which are really difficult to build

    • @AhemedYu · 21 days ago

      Moore's law isn't exponential lol

    • @tear728 · 15 days ago

      Obviously, LLMs are limited by hardware. Does a linear increase in compute yield exponential results in LLMs? I'm not sure.
      I'm curious what the next sigmoid for hardware will look like. I suspect that will take many more years of research, perhaps a decade or more, before any meaningful progress is made. On top of that, costs will likely be too high for a market solution for a period of time as well. At least that is how all the previous sigmoids behaved; why would it be any different now?

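  A toy illustration of the sigmoid-stacking idea from the comment that opened this thread: each S-curve saturates, but if a new one arrives before the last one flattens, the sum can track an exponential for a long stretch. All constants here are made up:

      import numpy as np

      def sigmoid(t, midpoint, scale=1.0):
          return 1.0 / (1.0 + np.exp(-(t - midpoint) / scale))

      t = np.linspace(0, 30, 301)
      # Six successive S-curves; each arrives 5 "years" later and
      # contributes twice the capability of the one before it.
      stacked = sum((2.0 ** k) * sigmoid(t, midpoint=5 * k) for k in range(6))
      exponential = 2.0 ** (t / 5.0)  # reference curve, doubling every 5 years

      for year in (5, 10, 15, 20, 25):
          i = np.searchsorted(t, year)
          print(f"t={year:2d}  stacked={stacked[i]:8.1f}  exp={exponential[i]:8.1f}")
      # The two columns stay within a small factor of each other across the
      # whole window, even though every individual component saturates.
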
  • @jordanrowland2760 · 16 days ago

    This video earned a subscription from me. Thank you for mentioning the "Model Collapse", because I've been talking to people about this very same thing, but I was calling it "Generation Loss" coming from the audio production world where something is recorded to tape over and over and the artifacts and quality degradations start to compound. I'm glad to know there's an actual name for it!

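  The photocopy analogy is easy to reproduce in miniature. A bare-bones sketch of the effect (far simpler than the model-collapse paper's setup): fit a Gaussian to samples, sample from the fit, refit, and repeat. Averaged over many chains, the fitted spread decays toward zero, because each finite-sample refit slightly underestimates it:

      import numpy as np

      rng = np.random.default_rng(0)
      n_chains, n_samples, n_gens = 1000, 10, 50

      std = np.ones(n_chains)  # generation 0: every chain starts at N(0, 1)
      for gen in range(1, n_gens + 1):
          # Each "model" is refit to samples drawn from the previous model.
          samples = rng.normal(0.0, std[:, None], size=(n_chains, n_samples))
          std = samples.std(axis=1)
          if gen % 10 == 0:
              print(f"gen {gen:2d}: mean fitted std = {std.mean():.3f}")

  The tails (the "interesting" outputs) vanish first, which is the statistical version of the audio world's generation loss.
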
  • @ddude27 · 29 days ago +24

    Great video! I'm so happy someone in the YouTube space actually drops citations when discussing the topic at hand. The extremely ironic part of data quality decreasing is that in a capitalist system, information quality isn't the main focus of how to operate a business; making money is... I mean, the channel is called Internet of Bugs, which I feel includes the integrity of the data quality published on it.

    • @InternetOfBugs · 29 days ago +4

      Yep. The first video I published on this channel was about "information quality isn't the main focus of how to operate a business but making money is" : ruclips.net/video/hKqqU1J-WXk/видео.html

    • @gaiustacitus4242 · 29 days ago

      @@InternetOfBugs Absolutely correct. The most efficient organization (MEO) collects and stores only the minimum amount of information required to support revenue generation. Unfortunately, as a business grows, government regulations create requirements to collect and store data, which runs counter to efficient management and operations.
      More often than not, this extraneous information is not managed in accordance with a records-retention policy that would purge data collected for no purpose other than demonstrating regulatory compliance at the earliest opportunity. Maintaining these records has a cost that increases prices, decreases competitiveness, and negatively impacts profits.

    • @bornach · 24 days ago

      It was noticed by some that the word "delve" was rising in usage. One theory is that it has disproportionately higher probability of being output by OpenAI's LLMs, and people are either adopting this trait, or are just dumping ChatGPT output onto blogs, forums, social media without doing any editing of the content. I've found another possible tell of GPT pollution. Search for the exact term "46,449 bananas" and you'll find all manner of articles, resumes, company brochures and product descriptions that only seem to have one thing in common -- a fun fact about the height of Mount Everest randomly inserted into them. The Internet is not just full of bugs; it is rapidly filling up with bananas! 😂

  • @jomoho3919 · 29 days ago

    Great to see someone who can still think clearly and present ideas in a clear and lucid way.

  • @Peilerman321 · 12 days ago

    Changed from my TV's YouTube to my phone's just to comment on this video, which I rarely do.
    You brought up a few really good points and it was refreshing to see someone taking a more critical stance about the evolution of AI. Usually most people (me included) tend to get carried away too easily by the hype train and promises these AI companies make.
    Thanks for challenging these popular opinions and for this thought provoking video!

  • @davidnrose2135 · 15 days ago

    Great insight, subbed to support this kind of researched and well supported content. Keep putting this stuff out!

  • @aleksandartomic9048 · 28 days ago +2

    Man, your videos are so high quality and insightful.
    I wonder why you don’t have at least 100k+ subs yet, oh wait….

  • @vinipaivas · 14 days ago

    Thank you so much for this video. You came in with an original, well-grounded idea and opinion. It's so good to watch someone being realistic and not blindly jumping on the hype wagon. I will keep a close eye on GPT-5 and what you mentioned about what we can expect of it.

  • @suryavirkapur · 29 days ago

    Your videos would be excellent reading but thank god they are videos.
    Amazing work! 👍

  • @demidaniel9253 · 16 days ago +1

    100% agree. I'm a SWE getting a Masters in InfoSys with a concentration in AI, and this is exactly what I've been saying for the past semester: AI-generated content is ironically poisoning its own well of already limited data. I can't wait for people to get over the hype so that we can focus on using the existing tools to make some useful and awesome new and innovative tech

  • @noornasri5753 · 4 days ago

    This is the first channel I have been grateful for. Just good content, digestible and well planned.

  • @LubulaAfritech · 14 days ago

    This is fantastic. Well explained and very realistic.
    The hype is very dangerous; I love how you logically broke through it with facts and papers. Absolutely fantastic, you have my subscription

  • @buckets3628 · 14 days ago

    I offer you my Respect. You seem like a genuine intellectual. Hope you get the chance to be heard by many.

  • @inDefEE · 7 days ago

    thanks for so eloquently explaining what I've been debating my co-workers about for months

  • @DanielXavierDosReis · 29 days ago

    I mean.... I really need to thank you for all the effort you put into creating this video... It helped me digest a lot of the lies big corps want us to believe and gave me such relief. Really appreciate it, man! Keep it up!

    • @InternetOfBugs · 29 days ago +1

      I appreciate you saying so. Thanks for subscribing.

  • @pcwalter7567 · 21 days ago

    Keep up the good content. Thanks for not chasing YouTube hype. Your content is actually worth something.

  • @gabriellang7998 · 19 days ago +4

    There is an AI bubble coming sometime in the next 3 years. If past trends are anything to go by, we will then have less than a decade before actually useful and cost-efficient AI assistants become commonplace.
    Make sure to properly rebalance your investment portfolios to profit from the burst.
    Anyway, the management quest to replace expensive developers and testers with cheap AI is far from over and will affect hiring decisions even as the bubble is bursting, but I can't wait for future youtube videos with collapse stories from ColdFusion and similar channels.
    If you are still planning to be in programming for the next ten years, you only have to survive the next 3, so it may be a good moment to pause job changes and start learning something interesting. After that, we are likely to have a ton of bad code to fix or, better yet, rebuild from scratch. I intend to ask double for that job :)

  • @coolepizza · 29 days ago +4

    Your content is just so nice. I hate the current social media tech culture that just overhypes everything. It still impresses me how much you can influence people's opinions with marketing (like Devin and basically every other AI startup).

  • @mettaursp309 · 29 days ago +1

    Gotta say these videos are a breath of fresh air. The VC centric hype over the past 2 years has felt suffocating & it feels great hearing these counterpoints against it. These videos have been really enjoyable to watch.

    • @CodingAfterThirty · 28 days ago +1

      The crazy part is that this hype cycle also encourages other VCs, founders, and managers to jump on the hype train.
      My marketing manager keeps encouraging us to use AI for everything.
      Lol. I want to jump off a building whenever a technical blog post that I think will give me the answer I need starts with "in the world of web development."
      I see a lot of companies abusing AI to automatically generate technical blog posts or documents that improve their websites' SEO but are garbled when it comes to value and content.
      And god forbid someone uses the solution in their code base.

  • @marcomow · 19 days ago

    amazing summary: clear, schematic, sensible. instantly subscribed

  • @RishabhKhare-su4dz · 29 days ago +2

    Glad to hear someone sensible after a long time. In this time of AI hype, it is easy to lose hope. Great video!

  • @EcomCarl · 15 days ago +1

    Excellent article! It’s crucial for entrepreneurs to focus on tangible applications of existing technologies to create value today, rather than chasing elusive future capabilities. 🚀

  • @kots9718 · 18 days ago

    usually never comment on videos but this video was so fucking incredible. Well researched, coherent and hopeful. Thank you.

  • @KeithSimmons · 28 days ago

    Excellent work, keep these in depth interesting and sober videos coming!

  • @lutherquick165 · 20 days ago

    Your videos are friggen awesome. I watch most of them. Your channel should have 1000x more followers... Thanks for the great videos. Respectfully

  • @albertoarmando6711 · 18 days ago

    Good video. I'm not worried about the field in general; there will be software engineering roles, and I don't think AI will replace programmers. What worries me is that nobody knows where this is going, or what steps we will need to take to adapt (or retire). Because, like it or not, this is going to affect the way we work. Uncharted territory for us to explore. And we will learn.
    I'm an independent contractor (mostly Javascript and Python). Let me tell you that until September last year, I didn't have to look for jobs; jobs came to me. The market is changing and there's no clear path to follow. But overall, I'm optimistic.

  • @CodingPhase · 21 days ago +1

    I agree 100% with you. Great channel, much needed on YouTube

  • @geneanthony3421 · 12 days ago +1

    My concern with AI is the same concern I had with outsourcing jobs. New developers need to start somewhere, and if you are new (L1), it can take years to get to an L10 (if ever). AI will replace skill levels L1-L4. Eventually, when the good people leave, you will have a lack of skilled talent, because they never got those low-end jobs that might have gotten them from an L1 to an L5. A lot of other people will decide the barrier to entry is too high, or that they'll just get replaced by something new once they gain those skills. Eventually there's no talent, and they blame kids for not being interested in technology.

  • @HunterMayer · 29 days ago

    This day and age calls for caution splashed with a dash of optimism! Thanks for sharing your thoughts on this, it plagues me daily where we're going and how to leverage it. I also find your thoughts insightful, and it doesn't hurt that it resonates with my own experiences.
    🤔🤯 🤤

  • @LeonidasRaghav · 14 days ago

    Thanks for this video. I think it was made before GPT-4o so has that changed any of your opinions? I think that doesn't affect software engineers directly but does seem like it could have a big impact on other industries e.g customer service.

    • @InternetOfBugs · 7 days ago

      I don't pay much attention to employment in the customer service industry, so I don't feel comfortable speculating, but I wouldn't be surprised.
      As for GPT-4o, I'm working on a video about that now

  • @efesozen3503 · 27 days ago +1

    incredible quality, thank you for sharing your opinions

  • @Not_Even_Wrong · 29 days ago

    You're folding together a lot of good ideas/info here, thanks for the video!

  • @jazsouf · 24 days ago

    Great video! I really liked the analogy with the developer environment in 2008 with mobile apps. Could you share some resources on how to start applying the current LLM models into existing internet products?

  • @testolog · 28 days ago

    From my point of view, after processing batches of data, LLMs fill up with the entropy of the data. An LLM is just a vector machine that calculates the perfect next vector based on an "average" score (to simplify). That means the most interesting parts will disappear, because of the entropy of the data and the normal distribution. The thing is, LLMs that hit the right level of entropy (working at a z-score of 0, so to speak) will be capable of replacing a lot of people in white shirts, which will have a significant impact on the industry and cut into it. I'm sure people don't understand, but generally speaking an LLM is like
    putting hot water on one side and cold on the other: the LLM becomes the line between the two sides. But with LLMs everywhere, we move into a stage where no more hot or cold water is being generated, entropy increases, and in the end we get a situation where the LLM has averaged everything. I have more explanation for this, because I have thought about it for years and years. But I'm kind of a dumb person, so I'm just scared my 12 years in IT will crumble to dust )

  • @LurkingAround · 27 days ago

    Hello there. Do you have any thoughts on KANs (Kolmogorov-Arnold Networks)
    and their use in LLMs? (Edited 19 hours later)

    • @InternetOfBugs · 26 days ago +1

      Nope. Never heard of them. I'll look into them and get back to you.
      I really appreciate learning about new things from the folks in my comments. Thanks very much.

  • @erkmenesen · 3 days ago

    Amazing delivery! You, sir, just got a subscriber.

  • @alexanderbluhm8841 · 17 days ago

    Very interesting, thank you for sharing your thoughts. I think companies are using LLM capabilities already. The fact that new models are coming out doesn't change anything: solutions are designed to be model-flexible so they can upgrade to those new models in the future

  • @future62 · 29 days ago

    Love how you got to the heart of the matter. Value creation is all about problem solving. AI is just another tool in the box to be used for that purpose.

  • @danny5534 · 29 days ago

    Great video. I think the focus is always on hype, but there are lots of companies that are already focused on applying LLMs to real world problems.

    • @InternetOfBugs · 29 days ago

      It feels to me anecdotally (although I don't have any good data) that the amount of effort being spent by companies on applying LLMs is a tiny, tiny fraction of all the effort being spent on LLMs/"AI". I hope that ratio shifts (or has already shifted, and I haven't seen it yet).

    • @bornach · 24 days ago

      A lot of it seems like tech companies throwing spaghetti at the wall hoping something will stick. A few months ago Amazon had an LLM (ChatGPT) hooked up to their review search bar; it wasn't obvious they had done this until people started prompting it to write limericks and Python scripts. What was that all about?

  • @mattklapman · 1 day ago

    The industry always likes to add a layer to the stack. The agent layer sits on top, between the UI and the "user", and will be LLM-terrific.
    Then all the layers below continue with under-investment as usual while they bit-rot

  • @mike110111 · 15 days ago

    This is great. A relief. It's really freaking me out, how quickly things seem to be progressing. It's kind of nuts if you extrapolate, this machine that you talk to and it does cognitively what any person can do ... scary stuff. That's my entire livelihood. Let's hope you're right and there are limits to this growth. If not ... brave new world, at least for me ...

  • @muhammadasiffarooqi7672 · 14 days ago

    Subscribed. Your thoughts and reasons are so authentic.

  • @StartupAnalytics · 15 days ago +1

    Thank you, this is one of the best and most honest videos!

    • @StartupAnalytics · 15 days ago

      On the point that models trained on LLM output over and over again produce less variety in the dataset and convergence of outputs, as you mentioned: something similar has been seen with recommendation engines, where over time the recommendation-based feed converges to a cluster of content, not allowing for variety in output.

  • @MaxMustermann-vu8ir · 21 days ago

    Your videos are absolutely spot on.

  • @davidwells4969 · 15 days ago

    Very well put. Insightful points

  • @bobsoup2319 · 21 days ago +1

    Just for the MMLU benchmark: it is impossible to keep growing linearly, because it's a score out of 100 and we're already at like 90

  • @faisalk.7520 · 29 days ago +3

    Sora is getting too crazy. I can't tell if this video is ai generated or not /s
    Great video as always!

  • @mfpears · 23 hours ago

    Thanks for being smart and saying things that make sense. Seriously, there's too much content on this topic that's vapid. But I do think that it will only take a couple more years for another fundamental breakthrough. Btw I also studied physics

  • @andythedishwasher1117 · 28 days ago

    I'm a little confused/concerned about how they're acquiring that data on how much generated code is edited or discarded. When I use Copilot in my IDE, tab through one of its completions, then edit what it generated, are they somehow counting that as part of the usage data described in the EULA? If so, that's pretty concerning since it would imply Microsoft keeps track of every completion in my IDE and thus has access to most of the private IP I develop there.

    • @InternetOfBugs · 28 days ago

      No. They define it as:
      "Code churn \-\- the percentage of lines that are reverted or updated less than two weeks after being authored"
      So they're counting changes to lines of code in the two weeks after those lines are initially committed to the repo.

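  For what it's worth, the GitClear definition quoted in the reply above is easy to compute once line authorship and modification dates are known. A back-of-the-envelope sketch with made-up records (a real tool would mine git history rather than editor telemetry):

      from datetime import date, timedelta

      # (authored_on, first_changed_on or None) for each committed line.
      lines = [
          (date(2024, 5, 1), date(2024, 5, 6)),   # edited 5 days later  -> churn
          (date(2024, 5, 1), None),                # never touched again  -> fine
          (date(2024, 5, 2), date(2024, 5, 30)),   # edited after 4 weeks -> fine
          (date(2024, 5, 3), date(2024, 5, 10)),   # edited 7 days later  -> churn
      ]

      WINDOW = timedelta(weeks=2)
      churned = sum(1 for authored, changed in lines
                    if changed is not None and changed - authored < WINDOW)
      print(f"code churn: {churned / len(lines):.0%}")  # -> 50%
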
  • @blendedplanet · 2 hours ago

    NVidia's NIM seems to be the right idea: instead of leaving the sweet spot, focus on quality, optimal results within a niche domain, then paste the agents together so the collective "machine" likewise has its own sweet spot. This is literally the same problem we have with humans. Their knowledge, experience, and emotional makeup become the limiting factor on getting stuff done or the quality of the team experience. So, clearly the AI breakdowns you're describing are a thing, but it's a hiccup in the path we are headed down. Old-school coders like me have to get on the bus or get left behind, and that's kinda sad, but it's also kinda cool.

  • @Glotaku · 8 days ago +1

    This is honestly how I learned to stop worrying and love the bomb

  • @jeramiehendricks2799 · 20 days ago

    I watched this video and immediately subscribed. Great content.

  • @paulsingh11 · 22 days ago

    Do you think a Software Engineer/Architect with knowledge of Accounting would be more desirable than one without it?
    With this layoff/AI thing going on, I got a job as an ERP developer at a manufacturing company. It's Accounting-heavy, so I'm wondering if it's worth learning Accounting at a "mid" level?

  • @_Lumiere_ · 22 days ago

    These are very interesting thoughts and findings. I don't know if they are accurate, only time will tell, but I have many misgivings and uncertainties about AI, and these points fit in very well. Btw, I didn't know you had degrees in physics? May I ask what specific degrees you have and how you ended up in software?

  • @WeirdInfoTV · 16 days ago

    Thank you for sharing an unbiased view of AI development

  • @palomarAI · 27 days ago

    Great points. At the same time, I wonder if synthetic data can supply new momentum... however, it's sort of intuitive that synthetic data has its own limitations, basically by definition.

    • @InternetOfBugs · 27 days ago

      The best treatment I've seen for synthetic data is from arxiv.org/pdf/2404.05090 and it indicates that 20% Synthetic is mostly ok, 50% Synthetic still collapses, just less quickly.
      Unless that paper turns out to have glaring errors, without a complete breakthrough, the best ratio we're going to get by adding synthetic data is not going to buy us much time.

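  A crude toy version of that ratio effect, reusing the Gaussian-refitting collapse sketch from earlier in these comments. Note it is much simpler than the cited paper's setup: here fresh real data arrives every generation, so partial synthetic mixes merely lose variance rather than fully collapsing, while 100% synthetic collapses outright. All constants are made up:

      import numpy as np

      rng = np.random.default_rng(0)
      n_chains, n_samples, n_gens = 1000, 10, 50

      for synthetic_frac in (0.2, 0.5, 1.0):
          n_synth = int(n_samples * synthetic_frac)
          std = np.ones(n_chains)
          for _ in range(n_gens):
              # Blend the previous model's output with fresh "real" data (std 1.0).
              synth = rng.normal(0.0, std[:, None], size=(n_chains, n_synth))
              real = rng.normal(0.0, 1.0, size=(n_chains, n_samples - n_synth))
              std = np.concatenate([synth, real], axis=1).std(axis=1)
          print(f"{synthetic_frac:.0%} synthetic -> mean fitted std "
                f"after {n_gens} gens: {std.mean():.3f}")
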
  • @mattymattffs · 27 days ago

    Great video. As much as I do think AI is the future, as an assistant, I love these videos from the skeptic's perspective. It keeps us all grounded

    • @InternetOfBugs · 26 days ago

      I also think AI is the future. I just haven't seen any evidence that it's going to be revolutionary any time soon.
      I'm guessing that in the next 3-5 years, it will have something similar to the impact that smartphones had. Which is much bigger than nothing, but not a societally disruptive shift.

  • @robertmaracine3126 · 18 days ago

    Happy to see this channel grow

  • @zmdeadelius · 18 days ago

    Such a great talk. Thanks for sharing.

  • @SwingingInTheHood · 7 days ago

    I've been working on my AI application for over a year now. Each LLM improvement rolled out by OpenAI, Google, and Anthropic has only served to make my application better. Honestly, the only things I need from GPT-5 are speed and a larger context window. And reduced cost. For mine, and I would imagine most business applications, GPT-4 and its equivalents will do just fine for the next few years.

    • @InternetOfBugs · 7 days ago

      Has 4o not gotten fast enough? It doesn't seem to me to be any better in terms of capability, but I'm SO impressed with its speed.

    • @SwingingInTheHood · 7 days ago

      @@InternetOfBugs 4o is pretty fast. As is Gemini Flash. And the input token limits are amazing -- 1M in the case of Gemini Pro 1.5. My issue is that the output context window limits for all of these have not changed: 4K to 8K tokens. And I forgot to mention that reduced pricing is always a bonus. My larger point being that I agree with your conclusion: Start developing now! As the models get better, so will your applications.

  • @bobsoup2319 · 21 days ago

    Also, Llama 3 disproved the Chinchilla paper. It was trained beyond what anyone thought was the reasonable cut-off, yet it performs much better than the previous model from like 8 months before

    • @InternetOfBugs · 20 days ago

      Congrats! You're the 8th consecutive person in these comments to quote that headline while having no clue what it actually means.
      What Llama 3 found was that adding more data tokens to a given amount of compute scales up log-linearly. The IMPORTANT finding from Chinchilla was not that you can't improve results by throwing more DATA at a given amount of COMPUTE. The finding from Chinchilla is that you can't improve results by throwing more *COMPUTE* at a given amount of *DATA*, so the amount of quality training data available is still the limiting factor.

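  For reference, the functional form behind that reply is Chinchilla's fitted loss decomposition (Hoffmann et al., 2022), with constants approximately as published:

      % L = pretraining loss, N = parameter count, D = training tokens
      \[
        L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
        \qquad E \approx 1.69,\ A \approx 406.4,\ B \approx 410.7,\
        \alpha \approx 0.34,\ \beta \approx 0.28
      \]
      % With D held fixed, no amount of extra compute (larger N) can push the
      % loss below the floor E + B / D^{\beta}, which is why data, not compute,
      % ends up as the binding constraint.
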
  • @AIWorks2040 · 23 days ago

    Could you explain local LLMs to us, and how we can fine-tune and customize them?

  • @witar. · 14 days ago

    That's actually an amazing analysis. Thanks.

  • @SweepAndZone · 15 days ago

    Very well-thought-out video. Love it!

  • @owenwexler7214 · 26 days ago +2

    I so want to believe this isn’t just copium. We’ll have to see 🙏🏻

  • @b3p · 12 days ago

    Nice one. Might be worth diving deeper on synthetic data, as I believe that's the reason Llama 3 is so strong at its size (8B / 70B models, 15T-token corpus). Meta didn't mention what portion of that 15T is synthetic, but I imagine a fair amount.
    I'm curious whether the "convergence / photocopy chain" problem can be mitigated by increasing the diversity of sampling parameters and methods of sampling (dynamic temperature, min_p instead of top_p, etc). (A toy sketch of top_p vs min_p follows this thread.) If you consider what percentage of all generative AI output is created through rather vanilla sampling with quantized-to-hell variants of GPT-3 and GPT-4, there is some hope for diversity and further improvement if open-weight models and creative sampling eat away at OpenAI's share

    • @InternetOfBugs · 7 days ago

      The best treatment I've seen for synthetic data is from arxiv.org/pdf/2404.05090 and it indicates that 20% Synthetic is mostly ok, 50% Synthetic still collapses, just less quickly.
      Unless that paper turns out to have glaring errors, without a complete breakthrough, the best ratio we're going to get by adding synthetic data is not going to buy us much time.

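  A toy sketch of the two truncation rules named in the comment that opened this thread, applied to a made-up next-token distribution. top_p keeps the smallest set of tokens whose cumulative probability reaches p; min_p (as popularized by local-inference stacks) keeps every token whose probability is at least min_p times the most likely token's:

      import numpy as np

      probs = np.array([0.50, 0.20, 0.12, 0.08, 0.05, 0.03, 0.02])  # sorted desc

      def top_p_mask(p, top_p=0.9):
          cum = np.cumsum(p)
          keep = cum <= top_p
          # Include the token that crosses the threshold.
          keep[min(np.searchsorted(cum, top_p), p.size - 1)] = True
          return keep

      def min_p_mask(p, min_p=0.1):
          return p >= min_p * p.max()

      for name, mask in (("top_p", top_p_mask(probs)), ("min_p", min_p_mask(probs))):
          kept = probs[mask]
          print(name, "keeps", int(mask.sum()), "tokens ->",
                np.round(kept / kept.sum(), 3))

  The practical difference: min_p's cutoff scales with the model's confidence, so it leaves more of the tail intact on flat distributions, which is the diversity argument being made above.
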
  • @blaked6226 · 18 days ago

    I like a lot of the points you made, and I don't think AI is going to take anyone's job anytime soon. What devs should be concerned about is the people using AI taking the jobs of those who don't.
    AI without a human

  • @samysamy5051 · 29 days ago +2

    Great video. This might also explain why these companies are pushing harder on generative AI for images and videos: there's more to do there than with generative text.

    • @InternetOfBugs · 29 days ago

      That's a possibility. I haven't really been following the economics of Art/Image/Video generation. It's not my area of expertise. But there's a lot going on over there...

    • @carultch · 28 days ago +1

      AI for images and videos is stolen intellectual property, and is the ultimate betrayal of the artists who made that content with their authentic talent.

    • @InternetOfBugs · 28 days ago +1

      @carultch Is that not also true of, say, book authors?
      (Not trying to imply artists shouldn't be compensated, I think they absolutely should, but it seems to me - and I could be missing something - that lots of creators are being stolen from, not just artists).

    • @carultch · 28 days ago

      @@InternetOfBugs Book authors too. All kinds of people with creative professions.

    • @carultch · 28 days ago

      @@InternetOfBugs Another issue I find, is that many times, the original author is trying to clear up a misconception on the original webpage. As is expected, the author might start by restating the misconception in the opening paragraph. And then in the next few paragraphs, the author would clear it up with the actual substance of their work.
      Guess what makes search result summaries: the misconception, stated as fact.

  • @afai264 · 28 days ago

    Another great video. This is sort of my current thinking too: LLMs will help developers build things faster and automate aspects of the dev process, but won't fully replace a developer any time soon. But who knows; it'll be interesting to see if GPT-5 makes a dramatic leap forward. Isn't there a view that quantum computing techniques could result in another dramatic advancement in capability? (btw I liked the Hill Street Blues outro statement, whether it was deliberate or not!)

    • @InternetOfBugs · 27 days ago +1

      The HSB reference is absolutely deliberate. When talking to early-career developers venturing into the scary world of their early dev careers, I often feel like a grizzled old, bald Sergeant hoping they don't take my advice too seriously (or out of context) and get themselves in trouble with it.

    • @afai264 · 26 days ago

      @@InternetOfBugs can you use the HSB intro for your next video intro - I'm sure your GenX following will appreciate it!

    • @InternetOfBugs · 25 days ago +1

      @@afai264 Hmmm. I wouldn't want to just drop that in to an unrelated video as a "memberberry." That would feel lazy to me, but let me see if I can figure out a way to work it into a connection with some topic in the a future video.

  • @ivospironello6451 · 28 days ago

    It's crucial that we entrepreneurs create tools for gathering more and more real-world data. The limited resource is definitely going to be data, and as we know, the performance of a model depends on the quality of its data

  • @andresgomez7264 · 15 days ago

    Great video. Love the content 👍

  • @briandavidgregory · 16 days ago +1

    This video was published before the GPT-4o demo videos. Have you had a chance to eval this model?

  • @moeenuddinhashim5069 · 13 days ago +1

    What's your take on GPT-4o? I think it is vastly more capable than 3 and 4 - and definitely more "wrappable"

  • @taterrhead · 18 days ago +1

    an example of that 'photo-copy' effect is how the UIs of the internet (especially mobile) have gotten so tremendously boring and mundane from years of optimizing on the same problem

    • @InternetOfBugs · 18 days ago +1

      It's also the tools. Apple has killed (starting with the iOS 7 "flat design", and doubling down with their new SwiftUI library) the interesting interfaces we used to have on the iPhone.
      I could do a whole video on the decline of the mobile ecosystem, and my theory about the consequences of Apple's obsession with secrecy.
      I should put that on the list.

  • @troymann5115
    @troymann5115 20 дней назад

    This was excellent. I left the ML field recently because all of the vendors were promising AGI, but it was obvious that they didn't have that technology. My friends and I are bracing ourselves for the moment when the market discovers that most AI/ML models hallucinate because they were poorly designed. However, I do not think this is a win for the leetcode kiddies either. At some point there will be a role combining BA and DS that can gather requirements and define them mathematically, making both an AI and a human developer more productive.

  • @CitsVariants
    @CitsVariants 10 дней назад

    Thank you

  • @GodMeowMix
    @GodMeowMix 15 дней назад

    Such a good video. This is just the kind of reassurance I needed from all this AI growth hype.

  • @jamessullenriot
    @jamessullenriot 29 дней назад +7

    Give up on the hype cycle? Then what would Sam Altman and the other CEOs do, if not go on podcast after podcast acting as if they (the CEOs) are knee-deep in code building AGI themselves 😂