We're 'at least a decade away' from solving AI, says NYU Professor Gary Marcus

  • Published: 21 Sep 2024
  • Gary Marcus, New York University professor emeritus, joins 'Squawk on the Street' to discuss artificial intelligence implications, the future of generative AI, investor decisions, and more.

Comments • 114

  • @twyscape 4 months ago +14

    Finally, somebody who tells it like it is. AI is the future, but we are trying to run before we can even stand up.

    • @reedriter2 3 months ago

      Could have fooled me. I am currently using AI in my job every day. You listen to this guy at your own risk.

    • @venkiperni3911 1 month ago

      @@reedriter2 Tell me a use case other than as a Google search substitute.

    • @reedriter2 1 month ago

      @@venkiperni3911 I currently use AI to create Excel and Outlook macros, summarize and abstract large documents, and create custom training courses related to my field, suited to my own learning style. The ability for someone like me to write macros is an especially big efficiency improvement.

  • @Zach-o6v 4 months ago +10

    He's correct, and if you look at all the newest models being released, it's a race for faster AI and more ways to combine existing technology, but the models aren't necessarily getting a lot smarter. They are a game changer for certain things, like responding to customers and basic writing, and they can improve accuracy somewhat, but it simply isn't anywhere comparable to human creativity and critical thinking.

  • @haroldpierre1726 4 months ago +5

    I agree with his assessment that AI is overhyped, with one exception. I believe the current level of investment in AI is necessary to address the hallucination issue, improve the software's power efficiency, and identify more relevant consumer applications. My business relies on AI for many of my business and client interactions, and it has likely replaced one full-time employee that I would have otherwise hired. However, I must supervise the results it generates for me because the errors and hallucinations can be extremely frustrating. I do not use AI for any critical client interactions. I can't take the chance with the hallucinations.

    • @Teting7484f 4 months ago +2

      They are putting money into LLM development and compute. I wish it went into research; they can't even agree it's an issue.

    • @haroldpierre1726 4 months ago

      @@Teting7484f I have to assume they are doing research. OpenAI is full of AI researchers. The hallucinations are a big problem. But there is another situation people aren't talking about. They are using LLMs for robotics. Can you imagine hallucinations in a machine that can physically harm humans??? Yeah, they have to fix this before humanoid robots start running free.

    • @nosam1998 4 months ago

      @@Teting7484f Agreed and it's a catch-22. If they admit that this IS an issue, then the stocks will fall, and money will dry up for spending on "AI".

  • @sirus312 3 months ago +2

    Finally, someone who is not hyping.

  • @tonyb7275 4 months ago +5

    And there it is, AI reality check

  • @tackthekack1 4 months ago +2

    In my business I am constantly pitched by SaaS companies that use "AI". I always ask them: "walk me through real-world applications, with data logs, etc., to prove the efficacy of your expensive 'AI' service." None have ever taken me up on it. Just like crypto, this bubble will pop, and in 5-10 years the real companies that leverage this technology will emerge.

  • @HardKore5250 4 months ago +2

    Artificial intelligence was cited for 800 job cuts in April, the highest total since Challenger first tracked job cuts for this reason in May 2023, when 3,900 cuts were attributed to it. Since then, companies have cut 5,430 jobs due to AI replacing workers.

    • @joseriffo 2 months ago +2

      How can you ensure that they have been replaced by "AI" and not by expert systems or automation?

  • @DG-2323 4 months ago +8

    If you view the advancement of generative computing (probably what “AI” really currently means) as just the introduction of LLMs, something only possible because it USES generative computing, then he’s probably pretty accurate, if not a bit negative, on how companies will use them (LLMs) to produce profit in the next ~5 years. But if you recognize that generative computing is an entirely new technology with vastly untapped capabilities, as we are only in the first few years of its introduction into our world, then it seems a bit silly to look only at LLMs to assess the impact the technology will have on our economy/society. If anything, LLMs should be instructional, preparing us for how big the leaps now possible with the underlying technology are, but we have no idea what else will be made possible and how quickly it can now happen. This is the real reason why companies are investing so heavily: it’s not that they want to make a bunch of competing GPTs, they want to discover the next application of generative computing.

  • @yunusbarna6380 3 months ago +1

    CNBC people are not happy with what he is saying.

  • @Cool-gk8mc 1 month ago

    AI is not AGI yet, but it does solve a ton of problems. It's not the HoloLens. It is great for analysis, writing, brainstorming, and much more. Really fantastic tool. It's not a solution in search of a problem. Literally everyone I know uses it. Nobody I know uses driverless cars. Never compare by analogy.

  • @1983krizz 4 months ago +5

    “It is a solution in search of a problem.” Just what I have been thinking over the last 1.5 years.

    • @kyleolson9636 4 months ago

      I don't think it is a very accurate point, though, because in this case the problems are clear but the solution is simply insufficient. Being able to generate sales and marketing content, respond to customer service requests, write software code, etc are all pretty clear problems. The question is whether current AI technology will be up to the challenge.

    • @Cool-gk8mc 1 month ago

      Everyone I know uses it for analysis and much, much more. It solves more problems than any tool out there. It's also being used in medicine, and that is just getting started.

  • @EasyAIForAll 4 months ago +3

    The journey of a thousand innovations begins with a single line of code. Here's to embracing the uncertainty and marveling at the surprises AI has in store for us, whether it's a decade away or just around the corner.

  • @GrumpDog 4 months ago +8

    Oh for.. Do NOT have Gary Marcus on as some kind of AI expert. He is nothing but a CLUELESS doubter.
    I cannot think of another skeptic, who's been as consistently wrong about AI over the last few years, as him.

    • @szebike 4 months ago +2

      He is on point with his criticisms, though. Even the newest, shiniest language models still output garbage as if it were true. Like he said, you can use them for small-scale things, but they are not reliable enough for any truly transformative changes.

    • @nosam1998 4 months ago

      @GrumpDog Username checks out :)

    • @Easternromanfan 2 months ago

      He sold a machine learning company to Uber bro

  • @DynamicUnreal 4 months ago +5

    Most people don’t expect AGI overnight. The development of A.I. will be gradual, but it will be the most rapidly developing “gradual” we’ve ever seen. And as it was almost impossible to predict something like Facebook or Uber with the advent of the internet, there are many future applications of A.I. that will exist within a few years that are hard to imagine now.
    People have to remember that today is likely *the worst that A.I. will ever be* and it’s pretty good already.

  • @reedriter2 3 months ago

    Listen to Gary Marcus at your own risk.

  • @godblessCL 4 months ago +2

    A lot of hype, too much energy for a flat landing. They are on the wrong path to AGI.

  • @g0d182 4 months ago +9

    Self-driving cars have also been in operation in Arizona since 2017.
    Does Gary actually do any research anymore?

    • @michaelyoungr 4 months ago +8

      Sure, driverless cars exist. So do flying cars and quantum computing. The issue is whether these things are ready to go mainstream, which they are not.

    • @Teting7484f 4 months ago

      So do jetpacks, lol; that doesn't mean they are real. There are Indians driving them in the background.

    • @pavlinpetkov8984 4 months ago +1

      Can you buy a self-driving Tesla or Merc?

    • @whisperingeye4968 4 months ago +1

      Lol at pretending these things are currently being used by the majority of the population. They're not; they're niche, super expensive, and don't work correctly.

    • @GrumpDog 4 months ago +2

      No, he really does not. He's turned into a blind skeptic and has already been wrong about AI over the last couple of years more times than I can remember.

  • @wohola 4 months ago +2

    Gary Marcus is a psychology professor, which means this guy studies pseudoscience for a living. Yes, we should listen to him for investment and financial advice. lol

  • @HardKore5250 4 months ago

    Addressing AI hallucinations is an important challenge that can potentially be mitigated to a large degree using current generative AI models and techniques, without necessarily requiring artificial general intelligence (AGI).
    Generative AI models like large language models and diffusion models have already demonstrated an ability to generate highly coherent and relevant text, images, etc. when properly trained on curated datasets. With improved training data filtering, retrieval augmentation, constitutional training objectives, and techniques like rejection sampling, we may be able to significantly reduce hallucinations from generative models.
    That said, AGI that has a deeper, more general, and multi-modal understanding of the world could potentially solve hallucinations more definitively by having a unified world model to draw from. An AGI system may be less prone to simple pattern completion errors that lead to hallucinations.
    However, AGI is still a grand challenge with immense unsolved problems around generalization, reasoning, grounding in reality, and avoiding broader failures beyond just hallucinations. So while AGI could be the ultimate solution, making continued progress with current generative AI to detect and mitigate hallucinations is likely the most viable path forward in the near-to-medium term.
    In summary - while AGI may represent the most complete solution by gaining a comprehensive understanding, we can likely make significant strides in reducing AI hallucinations using enhanced generative AI models and techniques without necessarily solving the full AGI challenge first. But both avenues of research are important going forward.

    • @szebike 4 months ago +1

      I don't like the term "hallucinations"; it's plainly false information output by the algorithm, which is an absolute no-go for any serious endeavor. Any professional would vastly prefer a system that could honestly tell you "I don't know" instead of outputting nonsense at any given time. In my opinion, these systems are good for getting general information about well-documented fields, as a starting point when learning what to search for on the web, but you can never truly trust anything they output yet. Those false outputs are a fundamental feature, not a bug, of predictive systems. We currently don't have any openly known approaches that remotely tackle this problem on a fundamental level (using other LLMs to check the output just creates a new set of problems, as does adding more training data or web checking). The main issue is that these companies get vast amounts of money and burn it fast, presenting one (false) advance after another to satisfy investors, instead of using those funds to methodically explore new approaches in a long-term, sustainable way (a new GPT version every 5 years, etc.), creating completely new sets of training data curated and double-checked by highly educated personnel instead of the untrained and vastly underpaid Kenyan workers OpenAI used (google it; it's true, unfortunately)...
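One of the mitigation techniques named in the thread above, rejection sampling, can be sketched in a few lines. This is a toy illustration, not any vendor's actual API: `generate` and `verify` are stand-in functions, and the candidate pool and documents are invented for the example. The idea is to draw several candidate answers and keep only those a verifier can ground in retrieved source text.

```python
def generate(prompt, seed):
    # Stand-in for an LLM call: a fixed pool of candidate answers,
    # one picked per "sample" (purely illustrative).
    candidates = ["Paris", "Lyon", "Paris", "Paris", "Marseille"]
    return candidates[seed % len(candidates)]

def verify(answer, source_docs):
    # Stand-in verifier: accept only answers that appear verbatim in
    # the retrieved documents (real systems use a scoring model).
    return any(answer in doc for doc in source_docs)

def rejection_sample(prompt, source_docs, n=5):
    """Draw n candidates; reject any the verifier cannot ground."""
    candidates = [generate(prompt, i) for i in range(n)]
    return [c for c in candidates if verify(c, source_docs)]

docs = ["The capital of France is Paris."]
print(rejection_sample("What is the capital of France?", docs))
# → ['Paris', 'Paris', 'Paris']
```

The ungrounded candidates ("Lyon", "Marseille") are filtered out; the trade-off is extra compute per answer and dependence on the verifier's own reliability.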

  • @MichaelForbes-d4p 4 months ago +2

    I see the point he is making, but it lacks some nuance. It's true that hallucinations will prevent A.I. from taking on certain types of responsibilities, but they won't prevent A.I. from doing most human jobs. After all, we all make mistakes. I'm sure it will be possible to install checks and balances that bring overall capability up to that of most groups of people.

  • @donkeychan491 4 months ago +5

    Gary is almost 100% likely to be proven correct - the points he makes are basically irrefutable.

    • @DG-2323 4 months ago +2

      I agree, but I also think there’s another answer in between what he’s saying here: all this investment into what is essentially a new Industrial Revolution over the next 10-20 years won’t be simply to make LLMs profitable (although 2-4 companies probably will), it will be to exploit the newly found technological capabilities LLMs use, via generative computing, to find many more opportunities and evolutions of the core technology that originally made all this possible. This is why Nvidia is still the king here: they are building the picks and shovels of what’s to come. We’ve seen it in their last big keynote with robotics and other partnerships, but I think it’s still unknown how fully this generative computing advancement will be applied in our economy. To believe it’s just LLMs is to have a very near-term view of where human society is advancing.

    • @GrumpDog 4 months ago +4

      After he's been proven wrong time and time again over the last 3 years? I don't think so. Nothing about his points is irrefutable; they're arbitrary and ignore quite obvious alternatives and context.
      We will almost certainly have AGI in the next few years, NOT "decades away". I honestly cannot remember how many times I've said "told ya so" over the last 5 years to people like him who told me AI wouldn't be able to do something.

    • @jarivuorinen3878 4 months ago +2

      @@GrumpDog He is right that the hallucination problem with LLMs remains unsolved; this is a fact as far as we publicly know, and investors know it.
      The hallucination problem seems to emerge from the transformer architecture itself, and scaling up the size of the neural network doesn't make it go away. Something else must be done, maybe even the architecture modified. The quality of the training data also matters for LLMs, and that quality isn't always perfect. These are unsolved problems.
      About the timescale of progress, stock prices, and that sort of thing, Gary may very well be wrong.

    • @szebike 4 months ago +1

      @@jarivuorinen3878 Absolutely right. The architecture itself produces those false outputs; it's a fundamental feature. To the algorithm, your words, sentences, and even context are simply numbers in an equation. The output makes perfect sense from a mathematical standpoint, and it's "true" if you solve the equation like a computer using numbers. But from a factual standpoint it can be complete garbage, because a context is not just a series of numbers in an equation.
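The "numbers in an equation" point above can be made concrete with a toy next-token sampler. The vocabulary and logit values here are invented for illustration: because the softmax assigns every token non-zero probability, a sampler will sometimes emit a factually wrong token even when the correct one is the single most likely choice.

```python
import math
import random

def softmax(logits):
    # Convert raw model scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["0", "1", "2"]    # toy vocabulary of next tokens
logits = [1.8, 2.0, 0.5]   # hypothetical model scores; "1" is the correct token
probs = softmax(logits)

random.seed(0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(1000)]
wrong = sum(1 for s in samples if s != "1")
print(f"P(correct) = {probs[1]:.2f}, wrong answers in 1000 samples: {wrong}")
```

Even with the correct token most likely (P ≈ 0.49 here), roughly half the sampled outputs are wrong. Greedy decoding avoids this toy failure mode but does not change the underlying distribution the model learned.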

  • @HardKore5250 4 months ago

    No, I would not agree that driverless cars have failed overall. While there have certainly been challenges and setbacks in the development and deployment of autonomous vehicle technology, significant progress has been made and driverless vehicles are very much an active area of research and development.
    Some key points about the current state of driverless car technology:
    Companies like Waymo, Cruise, Tesla, and others have autonomous vehicles operating in limited geographic areas and conditions, providing ride services to the public.
    Advanced driver assistance systems (ADAS) with increasing automated capabilities are becoming more common in new vehicle models from traditional automakers.
    Investment and research into driverless technology remains robust from automakers, tech companies, and startups alike.
    Regulatory bodies are actively working on developing frameworks to eventually enable broader deployment of fully self-driving vehicles.
    Technical hurdles remain, especially around operating in extreme conditions, edge cases, and achieving the ultra-high safety levels required for full autonomy everywhere.
    So while the road to full Level 5 autonomy has been longer and more challenging than initially predicted by some, driverless vehicles are very much an active pursuit that has already achieved significant real-world milestones, even if broad consumer deployment is still years away.
    Unless Gary Marcus made these comments very recently, I would be somewhat skeptical of a blanket characterization that "driverless cars failed." The technology is still rapidly iterating and advancing, despite the admitted difficulty of the challenge. More nuance is likely required in discussing the progress made so far.

    • @mikezooper 3 months ago +1

      The problem is intelligence isn’t about throwing huge data at a basic AI. Humans act far more intelligently with far less data. People need to focus on why that is.

  • @HardKore5250 4 months ago +1

    Gary will be disappointed or happy

  • @cmiguel268 4 months ago

    Nothing is perfect, Gary. Humans hallucinate as well. This guy doesn't like AI. Period. The systems may hallucinate, but they speed up work.

  • @g0d182 4 months ago +7

    😮😮 Gary hasn't evaluated GPT-4o yet; notice he hasn't mentioned it here.

  • @ML1.0 4 months ago +1

    I agree so much... the hype is real. Probably another WeWork situation.

  • @nikboyz1 4 months ago

    This is the guy people should follow, not Elon Musk or Sam Altman.

  • @manubhatt3 4 months ago +1

    There are certain questions whose answers are hard to find, but once you have them, they are very easy to cross-check or validate.
    And there are certain cases where the cost of an error (given its low frequency) is not very high.
    These are the only use cases of present-day AI that I see.

    • @agentxyz 2 months ago

      A modern-day Lord Kelvin.

  • @seanh3697 4 months ago

    exponential growth

  • @klwback 4 months ago +4

    Always a Debbie Downer somewhere. If companies didn't invest in the technology, it wouldn't exist at all and we wouldn't be having this conversation.

    • @andresmarchena6362 4 months ago

      I think that’s what Debbie Downer means. I think he’s just saying that it’s not there yet, even if he’s giving narrated responses to questions.

    • @donkeychan491 4 months ago

      That's the role of entrepreneurs in a capitalist economy: on average they lose money so as a group they end up subsidizing the rest of the economy based on their unrealistic expectations of large profits.

  • @DashCamera 4 months ago +1

    same as the valuations 😂

  • @miraculixxs 4 months ago +1

    This has been completely predictable.

  • @bernardteo773 4 months ago

    Agree, current AI is just efficiency in compilation and composition. Yup, superficial, but mass demand isn't stringent about quality deliverables. 😂

    • @princeofyahudah 4 months ago

      Silicon transistor scaling is going to kick the market in the ass 😂

  • @moderncontemplative 3 months ago

    He makes a very good point about hallucinations not being resolved, which is a major part of the problem with current AI models. But I want to point out that, just by scaling up, models will become more capable and smart. Responsible and sensible humans will continue to use AI as powerful assistants, not just mono-tasking tools. We see this already via GPT-4o, Claude Opus, Pi AI, Gemini, etc. Thus hallucinations are no more of a problem for AI than they are for human minds. Our minds are the source of its hallucinations: we create the data it consumes and reorganizes into new iterations, and we easily profess false information just to be responsive, just as AI does.

  • @TheGaryinWales 4 months ago +1

    I think where people go wrong with AI (and this guy is doing it too) is that they are waiting until it's at genius level before it's deemed 'ready'. There are a lot of people doing work that AI can now do; we don't need to wait until it's 100%, because people are way, way below 100%. It just needs to be better than some people, which it already is.

    • @tubesteaknyouri 3 months ago

      LLMs perform poorly at basic tasks. Consider this example using Meta's Llama 3:
      You
      For this entire comment, count the number of e's. Be sure to break down the count for each word.
      Meta AI
      Here is the count of e's for each word:
      "For" - 0
      "this" - 0
      "entire" - 2
      "comment" - 1
      "count" - 0
      "the" - 0
      "number" - 0
      "of" - 0
      "e's" - 1
      "Be" - 0
      "sure" - 1
      "to" - 0
      "break" - 1
      "down" - 0
      "the" - 0
      "count" - 0
      "for" - 0
      "each" - 1
      "word" - 0
      Total count of e's: 7

    • @tubesteaknyouri 3 months ago

      Billions of dollars wasted on something that cannot reliably count.

    • @TheGaryinWales 3 months ago

      @tubesteaknyouri If you ask 100 people to perform this same task, some will get it wrong. I use AI every day to enhance my workflow; yes, we check the data, but we find that for what we do, it's very accurate.

    • @TheGaryinWales 3 months ago

      @tubesteaknyouri There are plenty of people in the world who cannot reliably count.

    • @tubesteaknyouri 3 months ago

      ​@@TheGaryinWales, counting letters should be easy for an AI. Yet, it returns a different number each time, almost none of them correct. I would like to see some data if you want to claim it is on par with human performance.
      The point of the counting task is to illustrate a fundamental limitation of the architecture of LLMs: it selects words from a distribution, leading to logical errors based on the statistical relationships between words. The numbers 0 and 1 are close in embedding space, but selecting 0 as the number of e's in the word "number" will lead to an error. What makes LLM's particularly opaque is that the statistical distributions are modulated by the context window. Good luck understanding its behavior at a useful level of detail.
      In your work, you still find utility in LLMs even though you have to vet the output. In other fields, that is not the case: vetting the output defeats the purpose. In other cases, if a person lacks the expertise to vet the output, they will be working with output containing an unknown number of errors. This is the problem Dr. Marcus was pointing out. LLMs are unreliable, and therefore have very limited use cases. He was not claiming that it needs to operate at human or superhuman levels to be useful; rather, he was pointing out that it needs to be reliable, and its limitations and abilities need to be clearly defined.
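For reference, the count the model was asked to perform in the thread above is trivial for ordinary deterministic code, which is exactly the contrast being drawn: character counting is not a language-modeling task.

```python
text = ("For this entire comment, count the number of e's. "
        "Be sure to break down the count for each word.")

# Per-word breakdown, mirroring what the model was asked to produce.
per_word = [(word, word.count("e")) for word in text.split()]

total = text.count("e")
print(total)  # → 11, not the 7 the model reported
```

The per-word counts sum to the same total, with no variance between runs; the model's quoted answer gets several individual words wrong (e.g. "the" and "number" each contain one e).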

  • @60spf 4 months ago

    If you can't do, teach.

    • @szebike 4 months ago

      If you truly want to learn something , teach.

  • @amarkmanpeters 4 months ago +6

    This guy should stay in academia

    • @R4dr1ar 4 months ago +5

      Why? Because you can't tolerate opposing opinions? Because your stocks would go down if true?

    • @Teting7484f 4 months ago +2

      Bag holder

  • @beto8493 4 months ago +1

    AI is good enough to be enthusiastic about. This is not a jetpack or FSD.

  • @jw999 4 months ago

    I am under the impression that NYU professors hate modern elite/fashionable stuff... Scott G vs. the social contract, now this guy vs. AI... And they both make good points.

  • @luizhenriqueribeiro9585 4 months ago +1

    Hinton, the Godfather, knows this guy is a clown! He hasn't made a single breakthrough; Gary just wants some attention...

  • @dfv671 4 months ago +2

    We may not have solved AI yet but AI is good enough to replace humans on most tasks.

    • @whisperingeye4968 4 months ago

      Lmao, no it’s not. If it were, greedy companies would have done it by now. Unemployment is still at record lows.

    • @dfv671 4 months ago +1

      @@whisperingeye4968 Low unemployment is due to fast food and hotel jobs. AI is replacing tech workers, who are having a hard time finding jobs!

    • @docjoei2224 4 months ago

      Speak for yourself; no way could AI beat me.

    • @Easternromanfan 2 months ago

      Until you find out it made up 50 percent of its task

  • @eeeee49976 4 months ago

    Arrogant finance people

  • @VeganCheeseburger 4 months ago

    Marcus is a goofball

  • @g0d182 4 months ago

    Those who follow Gary are liable to lose jobs, and some have lost jobs already.
    Citing edge cases and ignoring success cases is bound to be a recipe for disaster.
    Jobs have already been lost. Gary is comfortable these days as a contrarian who contributes little more than nitpicking, but many aren't so lucky economically.

    • @szebike 4 months ago

      To be honest, many tech companies hired to inflate their value without needing to, and they also hired because for some reason they thought the boom from when everyone was forced to stay at home two years ago would continue forever. Additionally, those AIs do an inferior job, but many companies are fine with lower quality that is faster and cheaper. His criticism about the architecture is on point and correct. No coder who does more than gap filling will lose their job in the near term. Use those chatbots to learn and improve your knowledge base, and you can secure new revenue sources or improve your skill set.

  • @beelikehoney 4 months ago

    His description of large language models matches the human brain, yet he still denies it.