Is AI Actually Useful?

  • Published: 24 Dec 2024

Comments • 1.7K

  • @PBoyle
    @PBoyle 10 months ago +68

    Get Magical AI for free and save 7 hours every week: getmagical.com/patrick

    • @Fx_-
      @Fx_- 10 months ago +4

      You are looking at LLMs commercialized.
      Look at transformers repurposed: for example, instead of guessing the next word the way an LLM does, some have been tested to predict the next step in the evolution of chemical compounds, and the same can be applied to DNA, etc.
      Also look at large action models. I'm working to put together a universal UI and chat interface with an LLM and domain-specific vision models, as well as backend chatflow-like control over navigation.

    • @DJVARAO
      @DJVARAO 10 months ago

      Dear Patrick,
      Thank you for your usual high-level take on complex subjects. I use AI on a daily basis for writing emails, since it does a terrific job with grammar. I also use a new AI tool (Perplexity) for finding quick, referenced, basic info on new subjects. I have been very familiar with machine learning since 2000, when my professional career started. The best way to frame the scope and capabilities of any ML model is to understand that they are great interpolators but very bad extrapolators. As expected, you won't get a cat's face from an AI trained on human faces. Language models are trickier because people think they are more capable, but you clearly pointed out some of their limitations. ChatGPT can write a poem in the style of Whitman (because, for example, it was trained on his works and on works by critics and other writers), but it dramatically fails at writing a simple short story in the style of a Latin American Nobel laureate, because it lacks that information. And since it has to give an answer, it hallucinates with conviction.
      Our company leverages ML models to develop the next generation of drug discovery using new technologies, including AI. Thanks to this approach, we can perform precision physics for proteins 10,000 times faster than the best conventional molecular-modeling competitor on the market.
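
The "great interpolators, bad extrapolators" point above can be sketched with a toy model of my own (a NumPy polynomial fit standing in for any flexible ML model; none of this is from the comment itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Train on x in [0, 5]: the model only ever sees this range.
x_train = np.linspace(0, 5, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.size)

# Fit a degree-7 polynomial, a stand-in for any flexible ML model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

# Interpolation: points inside the training range.
x_in = np.linspace(0.5, 4.5, 100)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Extrapolation: points just outside the training range.
x_out = np.linspace(6, 8, 100)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max error inside training range:  {err_in:.3f}")
print(f"max error outside training range: {err_out:.3f}")
```

Inside the training range the fit tracks the true function closely; a short distance outside it, the polynomial diverges and the error grows by orders of magnitude, which is the cat-face-from-human-faces failure in miniature.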

    • @alexkirrmann8534
      @alexkirrmann8534 10 months ago

      I hate that everyone thinks AI is a thing; it's NOT AI. It's just a program that does a specific task. I don't see the point of any of this. Generative? What makes it generative? Can I cut out a lot of its code and it will regenerate? Even the term AI is misleading; I don't see the difference between AI and, say, any computer. Morons, all of you. It's a buzzword; I would short all of these companies, because eventually we are going to realize it's all nonsense. It's like BITCOIN: everyone's ignorance is being used to take money from morons with a product that already exists. This is a scam, and the simple fact that no one is calling them out is almost identical to cryptocurrency and the other tech scams out there. Anyone with a little understanding of computers and coding could tell you this is nothing.

    • @latergator915
      @latergator915 10 months ago +6

      But is AI actually useful?

    • @Fulminin
      @Fulminin 10 months ago +8

      I actually had to click the link because I couldn't tell whether it was a joke or not.

  • @antoinepageau8336
    @antoinepageau8336 10 months ago +1303

    You can tell this channel is 100% powered by AI, the SIM presenter never blinks.

    • @davidc1878
      @davidc1878 10 months ago +104

      LOL Can't be Google's AI though, as the presenter is white.

    • @soilcredibility
      @soilcredibility 10 months ago +62

      If this channel were AI-generated, we would get a lot fewer updates on rap news.

    • @2rx_bni
      @2rx_bni 10 months ago +9

      @davidc1878 I screamed 😂

    • @k54dhKJFGiht
      @k54dhKJFGiht 10 months ago +37

      He outsourced that to Blinkist! Ba dum tss!

    • @karmaandkerosene_music
      @karmaandkerosene_music 10 months ago +29

      I thought Patrick's right side was paralyzed for almost two years. Seriously, I never saw him move it on camera.

  • @luckylanno
    @luckylanno 10 months ago +1002

    My experience matches the study. I am a senior software engineer, so I write a lot of software and write a lot of documentation about that software. Usually the AI generated code only represents the common use case, which is little better than what I could get out of the typical documentation for an open source project, for example. It's a little bit easier to use the AI as a search engine, but if the problem goes outside of the boundaries of the most common use case even a little bit, I'm on my own. Basically, it quickly writes code that I would have just copied and pasted from an example anyway, but I still have to do the hard parts myself.
    It's a little better for generating documentation, but typically I have to do so many edits to fix errors or get the focus or tone right that the time savings shrink dramatically.
    I'm a little worried that AI is generally only going to give a best case 50% productivity boost to most, while the market seems to be assuming a productivity revolution... I'm worried for my 401k, I mean.

    • @stevebezfamilnii2069
      @stevebezfamilnii2069 10 months ago +127

      The biggest problem is that AI is prone to bullshitting, and if you don't review its work the errors pile up. I tried running some engineering tests with it and got roughly 60% correct answers, which is far from spectacular.
      The only real use I see is that AI can generate a lot of unique text that nobody will read, which is actually quite a chunk of work for many people.

    • @jimbocho660
      @jimbocho660 10 months ago +7

      @stevebezfamilnii2069 Did you use a fine-tuned LLM or a RAG LLM? I was led to believe RAG LLMs did very well on Q&A tasks.

    • @Maxkraft19
      @Maxkraft19 10 months ago +56

      This has been my experience as well. The tools work great for someone who knows what they're doing. But if you blindly follow the AI, you might delete part of an SQL database.

    • @tohafi
      @tohafi 10 months ago +36

      Yeah, it feels like a big bubble to me. Might blow up the tech sector (again 😮‍💨)...

    • @tomlxyz
      @tomlxyz 10 months ago +41

      To me it seems like it's best at writing boilerplate-style code, which shouldn't exist in the first place in a good design.

  • @caty863
    @caty863 10 months ago +426

    I recently wasted two days trying to diagnose and fix a bug in my script, thanks to over-reliance on AI. In the end, I gave up and decided to consult the API's documentation and scroll through user forums. I immediately got all the answers I needed. I will never think of these AI tools the same way again.

    • @petersuvara
      @petersuvara 10 months ago +8

      I made videos on my TechSuvara channel explaining the exact same situation! It's a real concern.

    • @DEBO5
      @DEBO5 10 months ago +12

      There's a learning curve. Always cross-reference with the docs. I don't really ask it to produce code from scratch; rather, I ask it to provide guidance on where to start and background explanations. It's a pretty decent refactoring tool as well, and can catch some poor programming practices that you wouldn't have caught yourself.

    • @mandisaw
      @mandisaw 10 months ago +25

      Thing is, it's worse than a doc-search tool, because it hallucinates. Even with a user forum, you'll get some illuminating back-and-forth assessing possible variants & gotchas. So many bugs don't bite you until you're already in production 😢

    • @ImmaterialDigression
      @ImmaterialDigression 10 months ago +10

      Using AI for anything interfacing with APIs is really tricky, because more often than not it doesn't know what the current API state is. If you want a bash script, it's going to be great; if you want to use an API that has had a major version change in the last year, it's going to be fairly useless.

    • @M1and5M
      @M1and5M 10 months ago +2

      If there is documentation, why didn't you upload the documentation to GPT and then prompt it?

  • @KathyClysm
    @KathyClysm 10 months ago +346

    I work in software marketing, and quite frankly, as excited as everyone was about the new developments in the beginning, we've essentially stopped using any "AI" or LLMs.
    For coding, all an LLM can give you is basically the most common code you'd find in any library anyway, and if I ask it to code something longer or more complex, LLMs tend to cause more problems than they solve: because they don't actually "understand", they have no concept of contingency or continuity, so for example they switch how they refer to a specific variable mid-code. Ultimately, for the 30 minutes it saves me over coding from scratch or just looking it up in our documentation, I spend 50 minutes bug-fixing the LLM code.
    Same with user documentation: the LLM texts have no concept of terminological consistency, so they keep adding synonyms for terms that have a fixed definition in our company terminology, etc.
    And for the marketing part, you'd think LLMs would be useful for generating the generic fluff text you need just to fill a website, but because the output of an LLM is, by definition, the most common sequence of sentences and paragraphs in the data set it was trained on, you end up with marketing fluff so incredibly boring, bland and lacking in uniqueness that it's not even useful as fluff. The only use we've found for it so far is automated e-mail replies, which we previously handled via a run-of-the-mill productivity tool.
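
The variable-switching failure described above can be shown with a made-up miniature (hypothetical names of my own, not the commenter's code): a function that introduces a value under one name and then refers to it under another, the kind of inconsistency a careful reader only catches at review or runtime.

```python
# Hypothetical miniature of the failure mode: the quantity is introduced
# as total_revenue, then referenced mid-function as totalRevenue.
# Python surfaces this as a NameError when the function runs.
def summarize_orders(orders):
    total_revenue = sum(o["price"] for o in orders)
    # ...further down, the same quantity reappears under a new name:
    return {"count": len(orders), "revenue": totalRevenue}  # NameError

try:
    summarize_orders([{"price": 10.0}, {"price": 5.5}])
except NameError as exc:
    print("bug of the kind described above:", exc)
```

Python at least fails loudly here; in a dynamically-checked template language or a long config file, the same inconsistency can silently produce wrong output, which is where the 50 minutes of bug-fixing goes.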

    • @KindredPlagiarist
      @KindredPlagiarist 10 months ago +56

      It's funny. I'm a novelist and when AI got big certain writers I know were all about using it for characterization and plot. It turned out that AI can't really CONVEY character or at least not less hamfistedly than a teenager writing fanfic. Similarly it can plot a book but the plot is always derivative. Essentially it's a machine that churns out bad writing and if you try to edit what it writes, you quickly realize that it's easier just to write it yourself. Some use cases like condensing paragraphs can be helpful but that's about it. And you can always condense a paragraph yourself if you're past your first couple years of writing. My friend who writes SEO optimized copy for large companies, though, uses it all the time.

    • @MrkBO8
      @MrkBO8 10 months ago +23

      LLM in reality stands for "limited language model": the AI does not understand the why of things, and has no concept of the physical real world. It would not understand that because it's raining some behaviours ought to change; it can say something is wet, but it does not understand the concept of a road becoming slippery because of rain. People understand that traction is lower in the rain and speed needs to be reduced in corners; it would also not understand that visibility is reduced, or how a child in the back seat can be a distraction. This is because it relies on words to learn; it cannot experience the "I am about to die" feeling a person gets approaching a sharp turn in the wet on a cliff top.

    • @KathyClysm
      @KathyClysm 10 months ago +30

      @KindredPlagiarist It's pretty much just... fancy paragraph-long predictive text based on the most common words/ideas/phrases in a given context, so you will always end up with something that has been done so often that even the LLM has realised it's a common trope. If you want your marketing to be successful and stand out from the crowd, it's just not good enough. Plus, in our experience, anyone who actually reads the fluff text can tell pretty quickly if it's AI-generated, and usually has a negative reaction to that: our feedback has shown customers are almost offended at the thought that they weren't considered "important enough for a human to sit down and write something creative". So it's just not worth it.

    • @yuglesstube
      @yuglesstube 10 months ago +6

      It's improving rapidly.

    • @tundeuk
      @tundeuk 10 months ago

      @MrkBO8 Tesla FSD

  • @germansnowman
    @germansnowman 10 months ago +1491

    My favourite quote regarding Large Language Models: “The I in LLM stands for Intelligence.”

    • @yds6268
      @yds6268 10 months ago +30

      Lmao

    • @nekogami87
      @nekogami87 10 months ago +30

      gonna steal that one :D

    • @jimbojimbo6873
      @jimbojimbo6873 10 months ago +74

      They just spit out a shit ton of content based on predicted behaviour. There is no 'thinking' or intelligence involved per se. It's just throwing a billion things at a wall: the more something sticks, the more it will spit that out.

    • @yds6268
      @yds6268 10 months ago +34

      @jimbojimbo6873 Yeah, if you study a little linear algebra and network theory, those LLMs will be completely demystified.
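
As a sketch of the "demystified" view in this thread, here is a toy next-token predictor of my own devising (vastly simplified, and not anything from the comments): bigram counts turned into a probability distribution with a softmax, the same move an LLM's final layer makes over its logits.

```python
import numpy as np
from collections import Counter, defaultdict

# Toy "training data".
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def softmax(scores):
    # Numerically stable softmax: exponentiate scores, normalize to sum 1.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def next_token_distribution(prev):
    """Turn raw counts into a probability distribution over next tokens."""
    words = list(follows[prev])
    scores = np.log(np.array([follows[prev][w] for w in words], dtype=float))
    return dict(zip(words, softmax(scores)))

dist = next_token_distribution("the")
# In this corpus "the" is followed by cat (2x), mat (1x), rat (1x),
# so "cat" gets the highest probability.
print(dist)
```

A real LLM replaces the count table with a learned function of the whole context, but the output step is the same: scores in, softmax out, sample the next token; no step in the pipeline is "thinking".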

    • @emmanuelbeaucage4461
      @emmanuelbeaucage4461 10 months ago +4

      Spat my Five Alive out laughing!

  • @jalliartturi
    @jalliartturi 10 months ago +331

    I play with AI in SEO. What's interesting is that, due to the error rate and generic content, it's actually quicker to write by hand than to have AI do it and then fix all the mistakes it makes.

    • @RogueReplicant
      @RogueReplicant 10 months ago +22

      Ikr, and then A.I. will regurgitate your PAINSTAKING ORIGINAL RESEARCH and pass it off to some dufus as "A.I.-generated", lol

    • @panamahub
      @panamahub 10 months ago +1

      same here

    • @magfal
      @magfal 10 months ago +22

      This applies to coding too.
      Unless what you write is of generic quality and you're reinventing the wheel for the 6000th time.

    • @personzorz
      @personzorz 10 months ago +9

      As someone involved in SEO, you are part of the problem and are destroying the internet. Quit your job. You provide negative net value.

    • @Toberumono
      @Toberumono 10 months ago +7

      @magfal I actually just tried asking an AI to do my most recent assignment (I gave it the details that I had at the time I got the assignment).
      It tried to teach me how to add an event listener in plain JavaScript. Which, admittedly, is a massive improvement over my last experience: that time, I asked it a basic PHP question and got a response back for Python (and no, the code wasn't related to my question either).
      (Point being, I'm glad to know I'm not alone.)

  • @devonglide1830
    @devonglide1830 10 months ago +159

    I'm not knocking AI; I use it quite a bit. But my general feeling (based on how I use it) is that all it's doing is quickly scraping the top 10 Google pages and summarizing them for me. Like I said, in my field that is very, very handy and saves me time, because I don't need to skim pages and blogs to find answers. On the other hand, it's never offered an answer or solution that would make me think it's done anything remotely original.

    • @TheReferrer72
      @TheReferrer72 10 months ago +3

      Few things humans do are original, so you are not saying much.

    • @MisterFoxton
      @MisterFoxton 10 months ago +13

      "Few" is infinitely better than "zero".

    • @carlpanzram7081
      @carlpanzram7081 10 months ago +3

      Not yet.
      The growth in AI's abilities that we have witnessed in the last 3 years has me convinced it's going to replace you within a decade, tops.
      It's never going to get worse; it's only ever going to increase in power. Eventually it's going to be far more intelligent than any human, and by that point your position will be impossible to defend.
      We will be a bunch of comparatively stupid apes led by super-intelligent AI; it's basically inevitable.

    • @devonglide1830
      @devonglide1830 8 months ago

      @carlpanzram7081 For me, that would be great. I'll be retired by then, and if AI is able to solve all my health ailments, provide me with eldercare, shore up the environmental problems we're facing, free up leisure time for the world, and produce all our goods: great!
      Unfortunately, I don't see it happening. The current AI is nothing special in the grand scheme of things. Sure, it's revolutionary in a similar way to what Google was decades ago, but when you look at the world post-Google versus pre-Google, realistically the overall well-being and health of the world hasn't advanced much (some might argue it's even regressed in many domains).
      AI isn't AI (currently); it's a handy tool, just as a spell checker is. However, even current spell checking and auto-complete leave much to be desired, so the idea that AI is going to replace humans in the next 10 years is pretty fanciful thinking, in my opinion.

    • @dre6289
      @dre6289 8 months ago

      @carlpanzram7081 I don't think you know how things actually work.

  • @wbmc1
    @wbmc1 10 months ago +84

    As a scientist, generative AI is (at the moment) very limited in its usefulness. Because it doesn't really 'understand' novel situations, it isn't helpful at planning experiments or studies. The most useful area is in summarizing reports or helping with writing. But even there you have to be careful that the AI isn't missing the major thrust of papers or publications (as it can often fixate on certain things, or misinterpret them).
    Non-generative machine learning has been a tool used for years, though. We use it pretty routinely to help correct for errors in sequencing, for instance, and for assessing the accuracy of variant calls in genetics. I'm of the same belief that, while a useful tool, it is one of a dozen tools in a worker's toolbox -- it doesn't replace the worker.

    • @epicfiend1999
      @epicfiend1999 10 months ago +9

      The problem is that companies will rush to replace workers with AI to be 'more efficient' and then either
      A. never realize that they made their services worse, or
      B. after realizing, hope that enough of the industry does the same that they can get away with it.
      Either this will lead to a re-adjustment period in the market, or everything will just get worse. Either way, it will never be how it was; as AI tools get better, we will never return to pre-AI, even with large-scale industry rollback after mass adoption.

    • @mandisaw
      @mandisaw 10 months ago +8

      Journals are likely also struggling to contain the influx of mis-cited and poorly-written submissions. There was already an issue with fake journals and subpar papers being accepted, this is just going to make things a lot worse for academic research.

    • @coryc9040
      @coryc9040 10 months ago

      @mandisaw It depends. There was tons of junk research out there before AI, because the desirable metric for publication was quantity over quality. I could imagine scientific organizations building AI only on high-quality publications and creating models that could do most of the heavy lifting for peer review.

    • @joejones9520
      @joejones9520 10 months ago +2

      There is no conceivable job or task that AI can't eventually do better than a human; i.e., all new jobs created by AI will be doable by AI better than by a human. This tech revolution is profoundly different from all the ones before it; in fact, there is no comparison.

    • @wandilekhumalo7062
      @wandilekhumalo7062 10 months ago +1

      @joejones9520 Hi friend. As someone who works in the field of AI, I admire your enthusiasm, but I must caution it at the same time: what you describe is in the realm of AGI, something we are currently very, very far from. My hope for the current versions of LLMs is that they show us how outdated our economic systems are, but in terms of solving our biggest problems, such as climate change, curing cancer, and renewable energy, we need vastly different approaches. A paradigm shift in the way we build AI, perhaps? The current consensus is more power, but I'm not convinced by this approach...

  • @kulls13
    @kulls13 10 months ago +141

    I work in a manufacturing shop and I've used AI to quickly create code to complete certain tasks. We don't have any developers on site obviously and some of our coding needs are fairly simple. AI has allowed me to create simple programs to complete a repetitive task without needing a programmer.

    • @mikeynth7919
      @mikeynth7919 10 months ago +6

      I was wondering a bit about that. The AI seemed to be good at grabbing things that are predictable such as a chess game like Go, or writing basic code for things that have been done before, but moving out into something less cut-and-dried it again fails. With the consultant study, I kind of thought that each consultant is an individual with preferences based on individual education and experiences. Sure, the AI can help clear up basic stuff (what assistants and aides are for) but coming to a conclusion when different consultants may come to different ones all honestly? Yeah - no.

    • @KevinJDildonik
      @KevinJDildonik 10 months ago

      If you're too lazy to hire a college intern, and nobody notices you pasting sensitive information into a third-party website, then yeah, AI is for you. Also, enjoy next month when your whole system goes down due to a crypto scam, because Jesus Christ, your security is garbage.

    • @TheManinBlack9054
      @TheManinBlack9054 10 months ago +14

      @mikeynth7919 "things that are predictable such as a chess game like Go": I don't think you understand how Go is played. It's absolutely not like chess; it's far more complex and far less predictable. That's like comparing tic-tac-toe with chess.

    • @ivok9846
      @ivok9846 10 months ago +2

      @TheManinBlack9054 Both chess and Go are useless to humans, along with the machines that play them, the humans who made those single-purpose machines, and the humans who play them, if that's the only thing they do.

    • @carlpanzram7081
      @carlpanzram7081 10 months ago +8

      @ivok9846 Absolutely brainless take. You not only fail to understand the nature of games, you also miss the point of the value of cognitive work.
      The same capacity that allows us to play chess enables us to plan any task or abstract process in the future.
      If AI can beat any human at chess and Go, how long will it take until it can beat any human at any task?

  • @SH-ly1uy
    @SH-ly1uy 10 months ago +101

    The first serious video I've seen on the topic. So much better than all these sales bros going "AI is going to change the world within the next 2 years. Hire me and I'll tell you how."

    • @TheManinBlack9054
      @TheManinBlack9054 10 months ago +4

      But these tech bros are right: AI is incredibly powerful and is only going to become more powerful.

    • @bugostare
      @bugostare 10 months ago +12

      @TheManinBlack9054 No, what you're thinking of is the power CONSUMPTION of so-called "AI", and that is something to worry about, but you've got it backwards.

    • @100c0c
      @100c0c 10 months ago

      @bugostare It will for industrial jobs. It just needs to be as good as the average worker, or much cheaper and only a bit more error-prone. China is already automating its rail/track construction with AI.

    • @bugostare
      @bugostare 10 months ago

      @100c0c China? The same country where new bridges and skyscrapers crumble, and tunnels flood with thousands of people in them?
      The CCP is totally corrupt and incompetent, and pretty much everything they say is a lie...
      Regardless, "AI" is a scam as well, as it simply doesn't exist, and machine learning is absolutely not intelligent in any way; it is effectively a data-analysis bot.
      Whether companies or governments choose to use it to replace jobs has nothing to do with how good it is at anything; it just shows management and government incompetence.

    • @Fuckthis0341
      @Fuckthis0341 10 months ago +6

      Just like the last big thing, algorithms: everyone wants to hype something big and vague but usually can't name specifics. And if they do, experts in those fields quickly see how it's not going to replace people. Algorithmic automation caused tons of losses in real estate and insurance. In my organization, they hired back all the people replaced by algorithms because the losses were unsustainable.

  • @lelik0911
    @lelik0911 10 months ago +133

    I appreciate the boldness of consulting firms offering predictions on the future of a nascent technology, as though they have any more insight than we do.

    • @M-dv1yj
      @M-dv1yj 10 months ago

      Why?

    • @lelik0911
      @lelik0911 10 months ago

      The McKinseys et al. have a long history of unsuccessful navel-gazing about the future; it's not uncommon for these predictions to be off by a few orders of magnitude. Search for McKinsey's market sizing of mobile phones. Or IBM and cloud tech. Or the early bullishness on the internet.
      The future is highly uncertain and the path of technology is unclear. Even where the benefits are clear, the Gartner Hype Cycle advises caution in predicting the pace of adoption or the ultimate end state of a technology's utility. We're dealing with an unbounded problem.
      Maybe generative AI can replace 70% of tasks done by knowledge workers. Maybe it won't. Maybe it'll lead to significant productivity improvements. Maybe it won't. Maybe it'll displace labour. Maybe it'll increase the demand for labour. In future, all these questions will be resolved, and hindsight bias will lead us to believe that a certain narrative was neatly laid out. Until then, McKinsey et al. are in the same place as the rest of us: trying to forecast the direction of the wind from the flap of a butterfly's wings.

    • @take2762
      @take2762 10 months ago

      @M-dv1yj I'm pretty sure the OP is being sarcastic.

    • @gamewarrior010
      @gamewarrior010 10 months ago +31

      @M-dv1yj In the 1980s, McKinsey estimated the total addressable market for mobile phones at around 900,000 units globally. They really don't have that much deeper an understanding than the average viewer of this channel.

    • @TheManinBlack9054
      @TheManinBlack9054 10 months ago +10

      @gamewarrior010 No, they do. Just because they made a mistake doesn't mean that some random ignorant bloke has the same level of expertise as them.

  • @Zachary-Daiquiri
    @Zachary-Daiquiri 10 months ago +306

    TL;DR: AI helps with things it's good at and hurts with things it's bad at. The problem is that it isn't really clear what AI is good or bad at.

    • @coonhound_pharoah
      @coonhound_pharoah 10 months ago +12

      It's good at creative writing for things like descriptions of architecture or scenes, and for writing cheesy character speeches for use in my D&D sessions. The art generators are great for making maps and character portraits. Just don't expect the LLM to do the heavy lifting of designing a campaign or anything.

    • @tomlxyz
      @tomlxyz 10 months ago +22

      It's currently also only good in combination with a user who's also good in that area

    • @andybaldman
      @andybaldman 10 months ago +13

      So, just like humans then.

    • @Teting7484f
      @Teting7484f 10 months ago +10

      No, it will sometimes get things correct when the training set matches the input; when it doesn't, it will likely be incorrect.
      It cannot fact-check itself. Ask me a question on geology and I'll say "I don't know" or "let me look at a book or Google it".

    • @papalegba6796
      @papalegba6796 10 months ago +28

      It's really, REALLY good at lying.

  • @LockFarm
    @LockFarm 10 months ago +531

    Hard not to notice that the definition of a high-end consultancy job requiring top students from elite universities is "come up with an idea for a drink" or "come up with an idea for a shoe". Yet the people who have the actual technical knowledge to make the drink or build the shoe don't get a look-in. If we compared the respective salaries, I'm willing to bet that the Apprentice extras earn double or more what the people who actually do the work earn. So when we hear that these corporate experts might be put out of a job by AI... my sympathy is strangely absent.

    • @yds6268
      @yds6268 10 months ago +82

      Thank you for pointing that out. The "idea guys" seem to be valued more than the actual engineers who have to design the stuff, which is often impossible to manage, given the MBAs' lack of technical knowledge.

    • @arpadkovacs2116
      @arpadkovacs2116 10 months ago +57

      Tech companies have the same issue: as MBAs took over from scientists and engineers, they all seemed to decline eventually.

    • @tomlxyz
      @tomlxyz 10 months ago +20

      The thing is that a good drink isn't necessarily a drink that makes you a lot of money. Nowadays a lot of demand is created by everything but the product itself (lifestyle, etc.).

    • @cameronhoglan
      @cameronhoglan 10 months ago +10

      It's not what you know, it's who you know... Business has always been like that.

    • @LockFarm
      @LockFarm 10 months ago

      @tomlxyz For sure; so if the drinks makers can do "everything but the product itself" with AI, why bother employing expensive consultants?

  • @robincray116
    @robincray116 10 months ago +28

    I asked ChatGPT some basic engineering questions, and I can safely say that ChatGPT is a very knowledgeable first-year engineering student, at best.
    The problem, I think, is that the bulk of engineering knowledge is still found in esoteric textbooks, engineering standards behind paywalls, and word of mouth between engineers. It also doesn't help that engineering documentation is often a company secret, for obvious reasons.

    • @traumateaminternational4732
      @traumateaminternational4732 7 months ago +2

      I'm an accounting major, and I can confirm the same in our field. I'm only a junior, but I could tell that more recent versions of ChatGPT were misinterpreting concepts like gross margin. Not at all surprising that the same is true in engineering.

    • @lehast
      @lehast 4 months ago

      My young padawan, you don't want to use AI to lead a team or architect something...
      Pro tip: in its current state, you want to use it so you don't waste time building things you already know how to do and that don't represent a technical challenge, so you can allocate more time to thinking and architecting.

  • @ricks5756
    @ricks5756 10 months ago +234

    Just a side note: commercially available freelance art projects are starting to become harder to find.
    Illustrators, concept artists, and background artists are losing a lot of paying work, in my experience.

    • @lovisericachii4503
      @lovisericachii4503 10 months ago +23

      Well... with how things are in Hollywood... those mofos deserve to be replaced by AI.

    • @yuglesstube
      @yuglesstube 10 months ago +11

      Look at Sora, the video AI. Quite scary. A studio expansion was cancelled when the owner saw Sora.

    • @Studeb
      @Studeb 10 months ago +104

      @lovisericachii4503 Well, well, well, what have we here: an anti-woke person overjoyed at lost jobs in the creative industry. We'll see how long it is before you too lose your job, because nobody is safe here.

    • @HanSolo__
      @HanSolo__ 10 months ago +40

      There is also a visible glut of, and fatigue with, this AI gunk on lots of internet platforms. It's disgusting to see everything around become more and more "meh", to put it mildly.
      YouTube already saw it coming, as they target and strike channels made entirely with AI models; only real camera footage of a live person is left alone.

    • @Meitti
      @Meitti 10 months ago +9

      It's a bit of a trade-off. Short freelance gigs illustrating or graphic-designing simple ads are gone, but a clever artist can also use AI to speed up some of the process and create ads faster.

  • @WorldinRooView
    @WorldinRooView 10 months ago +121

    The skill moat you mention at the end is my gravest concern. I've been at my job for 13 years, and it's the expertise I gained over those years that makes me valuable to my employer.
    Now with tasks outsourced, either overseas via remote work or through AI for the small and annoying things, you can't learn how the system works by pushing through the annoying parts yourself. That is how humans learn efficiency, and perhaps discover new methods not thought of by the prior generation.
    Over the past few years, I feel like my workplace is falling backwards more than forwards. I can't fully work with the people I'm supposed to delegate to due to the time zone difference, so if they don't get to an urgent task, I have to do it.
    Lately I feel this "AI" thing is a salesperson selling a bag of magic beans, hoping for some deus ex machina to save us from our grudging tasks and to sell the customers 'a solution'. But in the end the "AI" is merely office workers analyzing data under grueling deadlines, not unlike the Wizard of Oz being just a man behind a curtain.
    The humans will do the work, but the machine will get the credit.

    • @epicfiend1999
      @epicfiend1999 10 months ago +2

      Well said.

    • @robertruffo2134
      @robertruffo2134 10 months ago +2

      @@epicfiend1999 Very well said

    • @j3i2i2yl7
      @j3i2i2yl7 10 months ago +11

      It seems to me that some upper managers are inclined to think of employees three or more levels down from them as interchangeable, and that type of manager will be very enthusiastic about adopting AI.

    • @KevinJDildonik
      @KevinJDildonik 10 months ago +8

      100%. I've had employers in the banking industry talk about replacing everyone with AI. Remind me again of the legality of pasting people's private banking information into a random web form. Oh yeah, it's a felony. Small detail.

    • @TheManinBlack9054
      @TheManinBlack9054 10 months ago +1

      With all due respect, you do not understand how powerful and intelligent AI is going to get. It's not a hammer, it's a woodworker with a hammer.

  • @guyswartwood3924
    @guyswartwood3924 10 months ago +24

    As a software engineer, I use Copilot to assist in making software. I do find it helpful, but as it stands, I cannot trust it to write good software. I generally find its answers wrong about 35% of the time on the more complex questions, which are the ones I'm asking it in the first place. A feature I really like about Copilot is that it sees the other code files I'm looking at and offers helpful suggestions for the next line I'm writing. Right now I don't feel like my job is threatened by AI, but who knows about the future...

    • @dough-pizza
      @dough-pizza 10 months ago +1

      I was recently working with a code base where my task involved translating some C++ enums to Java enums. I thought Copilot would be able to do this easily, since those enums were quite lengthy. Oh how wrong I was.....
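      For what it's worth, the mechanical core of that task is small enough to script by hand. Here's a minimal sketch (my own illustration, not Copilot output; it assumes flat `enum Name { A, B, C };` declarations and deliberately gives up on anything fancier, such as scoped enums or explicit values, which would need a Java constructor):

```python
import re

def cpp_enum_to_java(cpp_src: str) -> str:
    """Translate a flat C++ `enum Name { A, B, C };` into a Java enum.

    Deliberately minimal: scoped enums (`enum class`), explicit values,
    comments, and attributes are out of scope.
    """
    m = re.search(r"enum\s+(\w+)\s*\{([^}]*)\}", cpp_src)
    if m is None:
        raise ValueError("no flat enum declaration found")
    name, body = m.group(1), m.group(2)
    idents = [p.strip() for p in body.split(",") if p.strip()]
    if any("=" in ident for ident in idents):
        # Explicit enumerator values need a Java constructor; bail out rather than guess.
        raise ValueError("explicit enumerator values not supported")
    members = ",\n    ".join(idents)
    return f"public enum {name} {{\n    {members}\n}}"

print(cpp_enum_to_java("enum Color { RED, GREEN, BLUE };"))
```

      The point being: for a long but regular enum, a fifteen-line script is more trustworthy than an LLM precisely because it fails loudly instead of hallucinating members.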

    • @wandilekhumalo7062
      @wandilekhumalo7062 10 months ago +1

      How do you deal with the security concerns? Surely giving a large conglomerate access to your codebase is dangerous, right?

    • @Zoltan1251
      @Zoltan1251 10 months ago +3

      I am in finance, so accounting basically. A normal person on the street would expect accountants to be replaced first. Now look at that: actual artists and even software engineers are able to use AI, while it cannot even do simple accounting tasks. What a world we live in.

    • @Saliferous
      @Saliferous 9 months ago

      @@wandilekhumalo7062 That's my concern. Everyone is jumping in, but these companies have shown that they don't believe copyright or safety or privacy is a thing. If you create something with AI, what's the assurance that they aren't using your code to train their models and basically stealing your trade secrets. What use are these tools if you can't copyright anything they make? And everyone is able to copy your results.

  • @mwwhited
    @mwwhited 10 months ago +19

    Part of my role is to examine technology to make sure my fellow developers and our clients are well informed and using the right tools. So far my personal experiments with AI show results similar to these studies. The models are okay at easy, highly repetitive and duplicative work but not very good at highly skilled/technical work. They are good at making things up, or at doing things that have been done hundreds of times in their training data, but they struggle with creative work, and it's nearly impossible to prevent the "hallucinations" where the models fabricate something when they don't actually know the answer.

    • @Insideoutcest
      @Insideoutcest 10 months ago

      Easily the worst part of it is the error checking. Because this is just a sophisticated grammar tree, there are no higher frames of reference for understanding the completed product as it was conceived a priori. It is what I would call a "win-more" tool: a tool that works only so far as you could supplant it entirely. That is not helpful, and if anything it wastes my time. I can parse information more intelligently than ChatGPT, and cutting through the minutiae actually gets easier, not harder, as you become an actualized troubleshooter/creator.

  • @jasonosunkoya
    @jasonosunkoya 10 months ago +55

    Software engineer here..... using LLMs to write code is like having a junior whom you constantly have to go back and tell, "no, that's not the solution". It's good at l33t code, though, because there are soooo many already-done solutions on GitHub that have been used to train the model. Where it completely sucks is specialised enterprise code. So my job feels safe for a good while. Until an LLM actually learns logic and reasoning, I'm not worried at all.

    • @Raletia
      @Raletia 10 months ago +8

      An LLM is never going to do logic and reasoning; all it does is predict which letter comes next in a "string". That's literally it. Sure, that's really simplifying, but in the end the fact remains: an LLM doesn't "understand" anything at all, it has no concept of anything, it's not using logic to solve any problems, it's simply predicting what letter comes next in a string. We'll need MUCH more sophisticated tools to actually do logic and reasoning. There's also the computing-power problem: our best machines would struggle to simulate more neurons than an insect, or maybe at best a very small animal. That's a whole other problem to solve.
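      The "predict the next letter" idea can be shown in its crudest possible form with a character bigram counter. This is a toy of my own for illustration; real LLMs predict over subword tokens with a neural network rather than raw counts, but the loop of "count patterns in training text, emit the likeliest continuation" is the same shape:

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each character, which character follows it in the text."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts: dict, ch: str):
    """Most frequent successor of ch seen in training, or None if unseen."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the theory then thereafter")
print(predict_next(model, "t"))  # 'h': it followed 't' most often in training
```

      No rule of grammar or meaning lives anywhere in this program; it only knows frequencies, which is the (heavily simplified) point the comment is making.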

    • @dontbeafool
      @dontbeafool 10 months ago +2

      As a Python amateur, ChatGPT allowed me to create code that generated millions in revenue for the firm. I could not do that before. I can prototype things before hiring devs to implement them. I can automate things that would've taken me months.

    • @jasonosunkoya
      @jasonosunkoya 10 months ago +4

      @@dontbeafool A dev could also have written that Python code easily enough, then.

    • @dontbeafool
      @dontbeafool 10 months ago +1

      @@jasonosunkoya Indeed. But our devs are busy enough building complex systems. Why waste time and money having them build small things?

    • @carlpanzram7081
      @carlpanzram7081 10 months ago +2

      @@Raletia Define "real understanding".
      If AI plays better chess than the best human chess player, beating them 100% of the time, what does that mean to you?
      AI can teach the best chess players new and novel chess strategies. How is that not a clear demonstration of understanding and reasoning?
      How is Sora not demonstrating that AI at least partly understands the visual aspects of the world, and can therefore estimate a whole bunch of physics?

  • @IllIl
    @IllIl 10 months ago +14

    Absolutely fascinating video! Thanks, Patrick. A lot of what you mentioned resonates with what I intuitively gleaned from having used LLMs personally and at work. "The jagged frontier" is such an excellent way of talking about LLM capabilities. And it's only through trial and error that one gets to suss out where those frontiers lie. Less experienced workers may get the biggest boost, but also have the greatest risk of blindly using incorrect outputs.

  • @XYZ-ft4hw
    @XYZ-ft4hw 10 months ago +12

    Excellent overview.
    I love GPT for writing emails. Saves maybe a few hours a week.
    Beyond that... it's easier to Google or look up source material than to double-check whether its output is accurate.
    The confusion is the subtlety of how the models actually work versus what people imagine they are doing. Stephen Wolfram has the most intuitive technical explanation I have seen, on his blog.

    • @ivok9846
      @ivok9846 10 months ago

      date/subject line of that blog?

  • @Tudor_Rusan
    @Tudor_Rusan 10 months ago +77

    I'm a medical translator, and because I'm a fast typist I prefer translating from scratch to post-editing machine translations.
    Sometimes they are frighteningly smart, but it's a bit like the world's smartest two-year-old. You can't rely on it, especially for sensitive documents where you need humans in the loop.

    • @joejones9520
      @joejones9520 10 months ago +4

      It will rapidly improve... your comment may seem hopelessly dated even within a year.

    • @Tudor_Rusan
      @Tudor_Rusan 10 months ago +21

      @@joejones9520 I'll take my chances. They're useful tools, but should never be left unsupervised with sensitive information.
      A bit like self-driving vehicles: road cars are still a no-no, and aircraft still have pilots despite their tasks being mostly automated. Then you have farm vehicles in low-risk areas that can be automated.

    • @Srednicki123
      @Srednicki123 10 months ago +15

      @@joejones9520 How do you know? Maybe your blind optimism will seem hopelessly naive in one year.

    • @fnorgen
      @fnorgen 10 months ago

      ​@@joejones9520 See, the problem is that although it is certain that they will improve, it's hard to tell how quickly they'll improve in specific ways. For example, we can't make a proper robotic lawyer until we find a way to get a drastically lower hallucination rate, which might require a drastically different training strategy or network structure. It doesn't look like scale alone can solve every problem.
      There's also the possibility that AI capability might stagnate somewhat for years with only modest practical improvements, before some new technique is discovered which suddenly makes them drastically better in the span of a few months. One problem these days is for example that it's really hard to get high quality training data in the quantities that these models need to learn properly.
      Hell, in some ways more modern AI are arguably inferior to more primitive ones. I've found myself preferring to work with outdated Stable Diffusion 1.5 based models rather than the more modern SDXL. The old models make more blunders obviously, and aren't as good at following specific instructions, but I find they're way better at spitting out a wide range of possible outputs for any given input. They're also better at combining seemingly mismatched image elements in interesting ways. I just find them way more creative generally, adding all kinds of fun details unprompted, for better and worse. The new ones tend to just go with the most generic options unless prompted otherwise, requiring much more specific prompts to yield interesting results. At least this was the case the last time I played around with image generation.
      Basically, it's really hard to predict a timeline for when AI will get certain capabilities. Research tends to get spectacularly stuck on tasks that were expected to be easy, while seemingly impossible tasks suddenly become very possible. That's how we've suddenly ended up in a world with quite a lot of AI-generated artwork, but few robo-taxis.

    • @joejones9520
      @joejones9520 10 months ago

      @@Srednicki123 "May" means I don't know, idiot.

  • @Lithilic
    @Lithilic 10 months ago +8

    I have started using an AI-assisted search tool for research sourcing in my work. It is useful for finding answers to questions that are difficult to locate by querying search terms in a database alone; however, I've found that you need to be familiar enough with the subject matter you are researching to properly screen its responses, which can be wrong, or whose certainty can be overstated.

  • @alexanderclaylavin
    @alexanderclaylavin 10 months ago +9

    I asked the Microsoft AI search bar that I found one day on my desktop a very arcane question. It got the answer 90% correct, and the incorrect part would fool someone who did not know otherwise.

  • @MrPDLopez
    @MrPDLopez 10 months ago +6

    Thank you Patrick! I am happy to say I recognized some of my own ideas about AI as you spelled them out for all of us. I cannot use AI in my workplace because of safety and confidentiality policies, maybe when an internal knowledge base (off-bounds to everyone else) can be coupled with an LLM I may get the opportunity to use it for work. Otherwise it has been a fun ride when I use AI at home to learn prompt engineering

  • @aL3891_
    @aL3891_ 10 months ago +27

    It can be but in a much _much_ narrower scope than most people think.
    Also pretty baller move to have an ai company sponsor this video

  • @Kyrieru
    @Kyrieru 10 months ago +15

    I'm an indie game dev, and it's currently not possible to replace most tasks with AI (sounds, art, animation, coding, design). The results are too random and lack coherence across works, and any desire for specificity in style or execution makes AI worthless. The prompts for art are too broad to describe low-level but important things.
    I'm hoping AI gets some bigger development in terms of tools that help artists. For example, using AI to create digital brushes that perfectly mimic paint rather than mimicking "artwork". Mimicking artwork is not useful, but mimicking paint is.

    • @mandisaw
      @mandisaw 10 months ago +6

      Adobe has had the tools/scripting API to do dynamic brushes (and a lot more) for ages. Similarly a lot of the AI use-cases I've heard in game-dev circles are often things that Unity (and presumably Unreal) already can do, or that are still better done by humans.
      I think folks - indies & large companies alike - are looking for that "make it cheap+fast+good" solution, and there's just no such thing.

  • @stribika0
    @stribika0 10 months ago +33

    It's so good at adding the expected bullshit to my emails. It can come up with clearly horrible options so that management feels like they had a choice, it can pretend the good option was their idea, it can completely automate bikeshedding, etc. It's awesome.

  • @lomotil3370
    @lomotil3370 10 months ago +3

    🎯 Key Takeaways for quick navigation:
    00:00 *Generative AI tipping point.*
    01:49 *AI's surprising capabilities and failures.*
    03:44 *Generative AI's ease of use.*
    04:36 *McKinsey: Generative AI impact by 2030.*
    05:55 *Harvard study on AI's impact.*
    08:15 *AI improves productivity and quality.*
    13:33 *AI benefits lower-skilled workers.*
    17:38 *Successful AI use strategies.*
    18:57 *AI output validation importance.*
    21:08 *AI's impact on employment.*
    22:32 *Future challenges and questions.*
    Made with HARPA AI

  • @admthrawnuru
    @admthrawnuru 10 months ago +4

    For context, I'm a materials scientist. I've used AI a lot personally and a little professionally. I've found that "idea generation" is overblown; if you use it enough, you start to see very repetitive trends.
    Professionally, I've found two main uses:
    1. It's good, though not always accurate, at identifying concept terms from descriptions. For example, it was able to tell me the law for a linear relationship between two phenomena that I couldn't find in the literature until I knew the right term. It's sometimes inaccurate, but if you just Google the terms it gives you, this can be very useful, because in research not knowing the right search term can slow down literature searches significantly.
    2. Editing and summarizing. For fun, I've tried having it actually generate sections of a paper I was writing, and even with a lot of prompting and examples it was pretty bad.
    That said, so far I'm unaware of any LLM that's been trained specifically for these tasks. Integrating web or database searches, or else just focusing on content accuracy during training, might solve these issues in a few years.
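    The search-integration idea in that last point (these days usually called retrieval-augmented generation) can be sketched without any model at all: pick the snippet that best matches the question and prepend it to the prompt, so the model answers from sourced text rather than from memory. A stdlib-only toy of my own, with made-up documents and a deliberately crude word-overlap relevance score:

```python
def overlap_score(query: str, doc: str) -> int:
    """Crude relevance: number of distinct words shared by query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, docs: list) -> str:
    """Prepend the best-matching snippet so the model answers from it."""
    best = max(docs, key=lambda d: overlap_score(query, d))
    return f"Answer using only this source:\n{best}\n\nQuestion: {query}"

docs = [
    "Hooke's law states that stress is proportional to strain in the elastic region.",
    "Fick's law describes diffusion flux driven by a concentration gradient.",
]
print(build_prompt("which law relates stress and strain", docs))
```

    Production systems swap the word-overlap score for embedding similarity over a real corpus, but the structure — retrieve, then stuff the context into the prompt — is the same, and it directly targets the "right search term" problem the comment describes.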

  • @poornoodle9851
    @poornoodle9851 10 months ago +17

    AI is like a very efficient but very unreliable employee: very fast on simple tasks, but liable to cause more problems for everyone else when it makes mistakes on complex things.

    • @sesam2k998
      @sesam2k998 10 months ago +3

      It will keep making the same mistakes over and over. A person probably won't.

  • @besnico
    @besnico 9 months ago

    I have watched countless videos of yours, being initially driven to your channel by a video from the Plain Bagel guy. While I love all your finance/money/business content, it continues to blow my mind how your ability to research topics allows you to go well beyond the norm of financial analysis - you are giving me (and I'm sure many others watching these) a lot of value, whilst being entertaining. Please keep doing what you're doing!!

  • @Ringofire280
    @Ringofire280 10 months ago +45

    How do I square the results of this study with the fact that consulting firms don't actually offer real value to firms contracting them regardless of AI usage?

    • @jimbojimbo6873
      @jimbojimbo6873 10 months ago +17

      Consultancies are just massive marketing machines that latch onto the latest trend to sell a bit of work on stuff they have no expertise or product in.
      I’m a consultant

    • @philallen7626
      @philallen7626 10 months ago +18

      As far as I can tell, consultants are just arse covering for management. Management can take credit if things go well, but if they go badly, just blame the consultants.

    • @phonyalias7574
      @phonyalias7574 10 months ago +8

      @@jimbojimbo6873 Not so sure about that. The main value is it gives management cover, because it's this outside "independent" agency that agrees with implementing something, and tries to do it. If it works out, management gets credit, and if it doesn't the consultancy takes the blame.

    • @doncapo732
      @doncapo732 10 months ago +4

      Deloitte at our company...🤣

    • @amicaaranearum
      @amicaaranearum 10 months ago +1

      Management consulting is mostly a BS job performed by 20-somethings with little actual business experience, so it doesn’t surprise me that AI was helpful to them.

  • @henson2k
    @henson2k 10 months ago +6

    In IT, junior positions are already heavily affected by layoffs, and AI just makes it harder for new people to get into the industry.

  • @Keiranful
    @Keiranful 10 months ago +9

    In business development I use gpt to get me started on writing texts that will then be heavily edited, or as a research tool to point me in the direction of the information I seek.

  • @yokothespacewhale
    @yokothespacewhale 10 months ago +34

    Ok I'll bite.
    Speaking strictly about the work setting: I have tried to use the assistant in Microsoft's Databricks as a replacement for googling obscure functions and methods, etc. It will often give me code that doesn't do what I want while clearly understanding what I want it to do (from viewing my own code alone), and will actually give me functions that will not work in Databricks, even after I send it the resulting error message as input.
    In short, at the moment at least, it would have been an awesome tool for junior-level me a few years ago as a smarter search algorithm. But even then, it's still very much the "I'm Feeling Lucky" Google button.

    • @mandisaw
      @mandisaw 10 months ago +4

      Tried using the pre-Copilot tool in Visual Studio to generate doc headers for some 3rd-party libraries. It misinterpreted what the classes did, and in some methods mixed in parameters that didn't exist. Also couldn't handle/understand overrides - no consistency. It's more trouble than just reading/writing docs myself.

    • @aravindpallippara1577
      @aravindpallippara1577 10 months ago +1

      @@mandisaw Yep, mixing in arguments for functions/methods where they don't exist, or removing them when they do, has been a common issue with GitHub Copilot in my experience.
      It also occasionally just writes out Turkish phrases for me, out of the blue.

    • @mandisaw
      @mandisaw 10 months ago +1

      @@aravindpallippara1577 Maybe they should sell it as an "immersive" language tutor 😄

  • @DuckOverboard
    @DuckOverboard 10 months ago +17

    I've tried a few of them. The marginal positive use they have is outweighed by the serious downsides that are immediately apparent. Just like the lawyer who saved time by having it write his brief and then had his tail handed to him because the LLM invented legal precedent whole-cloth, it's not as much of a blessing as the collective wisdom would have you believe.

    • @Custodian123
      @Custodian123 10 months ago

      So nothing new then? Nothing more than natural selection, fools will use the tools poorly and the rest will benefit.
      The stove is hot.

    • @mandisaw
      @mandisaw 10 months ago

      More than one lawyer! We've had two Federal cases in NYS and one Canadian case, that I'm aware of. "Saving" 2hrs gets you 2yrs' suspension 😮

  • @breauseph
    @breauseph 10 months ago +4

    I work in a complex, data-driven part of media and have to train creative employees on technical concepts. I'm currently setting up training materials for a new employer, and I've found that ChatGPT helps a lot with the copy for training decks and glossaries. One of my favorite prompts has become "Explain [concept] to me as if I were a 20-year-old who's not very technically proficient," for example, and most of the time ChatGPT does pretty well with it. I can explain these concepts fluently to technical people, but deconstructing jargon for creatives takes a lot of effort and I'm happy to have the help. That being said, I *always* edit, because it has gotten things wrong or made assumptions that aren't in line with my team's perspective/philosophy. I also use it for Sheets and SQL formulas, but also have to test and edit because it's not always exactly right or particularly efficient. So, very much in line with what these studies found.

  • @tragicslip
    @tragicslip 10 months ago +26

    I asked Copilot about an obscure novel and its characters. It made up a story and character details using the title and names provided, while correctly identifying the author of the real novel.

  • @PBoyle
    @PBoyle  10 months ago +11

    Thanks to our growing list of Patreon Sponsors and Channel Members for supporting the channel. www.patreon.com/PatrickBoyleOnFinance : Paul Rohrbaugh, Douglas Caldwell, Greg Blake, Michal Lacko, Dougald Middleton, David O'Connor, Douglas Caldwell, Carsten Baukrowitz, hyunjung Kim, Robert Wave, Jason Young, Ness Jung, Ben Brown, yourcheapdate, Dorothy Watson, Michael A Mayo, Chris Deister, Fredrick Saupe, Winston Wolfe, Adrian, Aaron Rose, Greg Thatcher, Chris Nicholls, Stephen, Joshua Rosenthal, Corgi, Adi, Alex C, maRiano polidoRi, Joe Del Vicario, Marcio Andreazzi, Stefan Alexander, Stefan Penner, Scott Guthery, Peter Bočan, Luis Carmona, Keith Elkin, Claire Walsh, Marek Novák, Richard Stagg, Stephen Mortimer, Heinrich, Edgar De Sola, Sprite_tm, Wade Hobbs, Julie, Gregory Mahoney, Tom, Andre Michel, MrLuigi1138, sugarfrosted, Justin Sublette, Stephen Walker, Daniel Soderberg, John Tran, Noel Kurth, Alex Do, Simon Crosby, Gary Yrag, Mattia Midali, Dominique Buri, Sebastian, Charles, C.J. Christie, Daniel, David Schirrmacher, Ultramagic, Tim Jamison, Deborah R. 
Moore, Sam Freed,Mike Farmwald, DaFlesh, Michael Wilson, Peter Weiden, Adam Stickney, Agatha DeStories, Suzy Maclay, scott johnson, Brian K Lee, Jonathan Metter, freebird, Alexander E F, Forrest Mobley, Matthew Colter, lee beville, Fernanda Alario, William j Murphy, Atanas Atanasov, Maximiliano Rios, WhiskeyTuesday, Callum McLean, Christopher Lesner, Ivo Stoicov, William Ching, Georgios Kontogiannis, Arvid, Dru Hill, Todd Gross, D F CICU, michael briggs, JAG, Pjotr Bekkering, Jason Harner, Nesh Hassan, Brainless, Ziad Azam, Ed, Artiom Casapu, Eric Holloman, ML, Meee, Carlos Arellano, Paul McCourt, Simon Bone, Richard Hagen, joel köykkä, Alan Medina, Chris Rock, Vik, Fly Girl, james brummel, Jessie Chiu, M G, Olivier Goemans, Martin Dráb, Boris Badinoff, John Way, eliott, Bill Walsh, Stephen Fotos, Brian McCullough, Sarah, Jonathan Horn, steel, Izidor Vetrih, Brian W Bush, James Hoctor, Eduardo, Jay T, Claude Chevroulet, Davíð Örn Jóhannesson, storm, Janusz Wieczorek, D Vidot, Christopher Boersma, Stephan Prinz, Norman A. 
Letterman, georgejr, Keanu Thierolf, Jeffrey, Matthew Berry, pawel irisik, Daniel Ralea, Chris Davey, Michael Jones, Alfred, Ekaterina Lukyanets, Scott Gardner, Viktor Nilsson, Martin Esser, Paul Hilscher, Eric, Larry, Nam Nguyen, Lukas Braszus, hyeora,Swain Gant, Kirk Naylor-Vane, Earnest Williams, Subliminal Transformation, Kurt Mueller, KoolJBlack, MrDietsam, Saaientist, Shaun Alexander, Angelo Rauseo, Bo Grünberger, Henk S, Okke, Michael Chow, TheGabornator, Andrew Backer, Olivia Ney, Zachary Tu, Andrew Price, Alexandre Mah, Jean-Philippe Lemoussu, Gautham Chandra, Heather Meeker, John Martin, Daniel Taylor, Nishil, Nigel Knight, gavin, Arjun K.S, Louis Görtz, Jordan Millar, Molly Carr,Joshua, Shaun Deanesh, Eric Bowden, Felix Goroncy, helter_seltzer, Zhngy, lazypikachu23, Compuart, Tom Eccles, AT, Adgn, STEPHEN INGRAM, Jeremy King, Clement Schoepfer, M, A M, Benjamin, waziam, Deb-Deb, Dave Jones, Julien Leveille, Piotr Kłos, Chan Mun Kay, Kirandeep Kaur, Reagan Glazier, Jacob Warbrick, David Kavanagh, Kalimero, Omer Secer, Yura Vladimirovich, Alexander List, korede oguntuga, Thomas Foster, Zoe Nolan, Mihai, Bolutife Ogunsuyi, Hong Phuc Luong, Old Ulysses, Kerry McClain Paye Mann, Rolf-Are Åbotsvik, Erik Johansson, Nay Lin Tun, Genji, Tom Sinnott, Sean Wheeler, Tom, Артем Мельников, Matthew Loos, Jaroslav Tupý, The Collier Report, Sola F, Rick Thor, Denis R, jugakalpa das, vicco55, vasan krish, DataLog, Johanes Sugiharto, Mark Pascarella, Gregory Gleason, Browning Mank, lulu minator, Mario Stemmann, Christopher Leigh, Michael Bascom, heathen99, Taivo Hiielaid, TheLunarBear, Scott Guthery, Irmantas Joksas, Leopoldo Silva, Henri Morse, Tiger, Angie at Work, francois meunier, Greg Thatcher, justine waje, Chris Deister, Peng Kuan Soh, Justin Subtle, John Spenceley, Gary Manotoc, Mauricio Villalobos B, Max Kaye, Serene Cynic, Yan Babitski, faraz arabi, Marcos Cuellar, Jay Hart, Petteri Korhonen, Safira Wibawa, Matthew Twomey, Adi Shafir, Dablo Escobud, Vivian Pang, 
Ian Sinclair, doug ritchie, Rod Whelan, Bob Wang, George O, Zephyral, Stefano Angioletti, Sam Searle, Travis Glanzer, Hazman Elias, Alex Sss, saylesma, Jennifer Settle, Anh Minh, Dan Sellers, David H Heinrich, Chris Chia, David Hay, Sandro, Leona, Yan Dubin, Genji, Brian Shaw, neil mclure, Francis Torok, Jeff Page, Stephen Heiner, Tucker Leavitt, Peter, Tadas Šubonis, Adam, Antonio, Patrick Alexander, Greg L, Paul Roland Carlos Garcia Cabral, NotThatDan, Diarmuid Kelly, Juanita Lantini, hb, Martin, Julius Schulte, Yixuan Zheng, Greater Fool, Katja K, neosama, Shivani N, HoneyBadger, Hamish Ivey-Law, Ed, Richárd Nagyfi, griffll8, First & Last, Oliver Sun and Yoshinao Kumaga

    • @PsRohrbaugh
      @PsRohrbaugh 10 months ago +2

      I'm so proud to be your #1 Patreon. It's a small price to pay for the value of your content.

  • @murfelpurf5556
    @murfelpurf5556 10 months ago +8

    My concern is the long-term skillset of human teams: skills will atrophy over time through lack of use. AI may truly reduce long-term productivity in exchange for short-term gains.

    • @franug
      @franug 10 months ago +1

      I fear this too!

    • @TomTomicMic
      @TomTomicMic 10 months ago +1

      Computer says no!?!

  • @hws888
    @hws888 10 months ago +1

    Thanks for actually providing links to the papers. This is so rare among YT people 😀

  • @jerryburg6564
    @jerryburg6564 10 months ago +4

    I tried to get ChatGPT to write brief narratives for inspection reports. The target audience was insurance underwriters. The AI could write a narrative when provided information from the inspector, but it always sounded like a real estate pitch. I could never get the result expressed properly and finally gave up. It was faster to write it myself because I didn’t need to rewrite the resulting text.

  • @davidedelson9061
    @davidedelson9061 10 months ago +17

    The thing I have found AI most helpful for is in reducing time spent on tasks you cannot opt to not do, but are both time consuming and relatively low ROI. Not everything you do in a workday requires high precision or competence, and if you can get through that stuff faster, to more effectively prioritize your labor that *is* high ROI, that's a win, imho. The most obvious one among my colleagues is in the automation of writing one's quarterly performance review, which is something you can easily spend one or more full workdays on, which has relatively little value to anyone, unless you are in that moment attempting to get a significant level up. Otherwise, it's just a tax that can be more easily paid by the AI than by you.

    • @phonyalias7574
      @phonyalias7574 10 months ago +3

      This just becomes an arms race though, with AI to write your performance review and AI to judge your review. Essentially it's AI judging AI output, much like the current employment market where AI makes your linked in profile, AI writes your resume and cover letter, AI acts as the first hiring screen to let resumes through a filter.

    • @creedolala6918
      @creedolala6918 10 months ago +2

      My limited experience confirms this. I use AI background removal for some product photos. I have the skill to do it in Photoshop, but the AI website does it in 2 seconds instead of 2 to 5 minutes. ChatGPT also assisted with a script for uploading the images.
      So far it hasn't made my particular job obsolete, it's just made it a little easier.

    • @amicaaranearum
      @amicaaranearum 10 months ago +1

      This is exactly how I use it: for simple tasks that have low stakes, low importance, and low value.

    • @TeamSprocket
      @TeamSprocket 7 months ago +1

      This is an argument for removing a process rather than for spending time and money automating it.

  • @captainfatfoot2176
    @captainfatfoot2176 10 months ago +20

    AI seems potentially useful for employees who are already knowledgeable, but I have to wonder whether it will stunt the growth of employees.

    • @Omniryu
      @Omniryu 10 months ago +9

      It definitely will. There's no need for a junior to grow if they can already punch up, which (as noted in the video) will also keep them from understanding what's good or bad.

    • @mandisaw
      @mandisaw 10 months ago +8

      Already seeing it with students and aspiring/junior programmers. People are lazy at their core, and while that's a great motivator for innovation, collaboration, and optimization, it also means a lot of folks will use a faulty crutch (or cheat!) instead of doing the difficult work of learning. I'm really worried about how much crappy code is gonna make its way into public-facing and mission-critical systems 😞

    • @Omniryu
      @Omniryu 10 months ago +3

      @@mandisaw I feel like it depends on whether wherever they work allows AI. I hope companies are wise enough not to allow AI for coding tests. As an artist, I'm also seeing it from time to time, mostly from people who couldn't draw that well and are now using it. A lot of "look at this cool thing I did" without the ability to actually see everything that's wrong with it or why it's not good. They're missing out on basic art direction and the ability to self-critique.

    • @mandisaw
      @mandisaw 10 months ago +3

      @@Omniryu A few higher-ed and workplace surveys have already come out showing that a significant percentage of students and workers are using genAI, or would continue to, even if their school or workplace explicitly forbade it. Last summer, there was a massive dip in ChatGPT usage corresponding roughly with the summer school holidays.
      Even school, organization, and business leaders haven't really coalesced around a stance regarding genAI usage, and most haven't released guidance on this stuff (if they've drafted any at all).

    • @mandisaw
      @mandisaw 10 months ago +4

      @@Omniryu As for the lack of self-awareness, that's exactly the problem, on both sides. The folks who are below-the-bar in a field can't assess whether the AI use is helping or harming their skill-growth. And the "audience" presumably will become accustomed to vaguely lousy products, maybe complaining occasionally, but never able to put their finger on just why it doesn't 'click'.

  • @Gerberbaby922
    @Gerberbaby922 9 months ago +1

    Having an AI powered software service for your video sponsor was an interesting choice.

  • @9NZ4
    @9NZ4 10 months ago +7

    It's unclear how exactly performance was measured. What does a 17% boost mean? Did they complete tasks faster? If it's about quality, then how is it measured?

    • @Docs4
      @Docs4 8 months ago

      I just got access to Copilot two months ago, and I've been testing use cases. Well, let me tell ya, I haven't found one. I already automated my boring tasks with VBA and Power Apps. All my e-mails are already automatic in nature. So it does not help me at all; I am at peak efficiency already. OK, well, I did find one use case: making e-mails to higher management more 'ass licker type', ya know what I mean. But that doesn't boost productivity, it just makes the tone of rough e-mails 'nicer' to narcissist management types.

  • @jorgerangel2390
    @jorgerangel2390 9 months ago +2

    Tech lead here with 6 years of experience making software. I use Copilot and have been using it since it launched. It makes me faster when developing, but as it is, or even 10 times more powerful, I do not see it developing software on its own.

  • @Stan-b3v
    @Stan-b3v 10 months ago +8

    The real and unsolvable problem with AI is the inability to interrogate and understand the results it generates.

    • @CapedBojji
      @CapedBojji 7 months ago

      Prove to me you understand what you just wrote

    • @Stan-b3v
      @Stan-b3v 7 months ago

      @@CapedBojji I'm curious to know what for you would constitute "proof", and why you are interested in me, specifically, providing it.
      That said, the ability to "interrogate and understand the results" requires a volume of data processing that is beyond human capacity. That being so, one can never be certain that information in support of a produced result does or does not exist.
      And, if when you do try to interrogate said results you can and do find data which supports them, you don't know whether that data was generated by an AI or collected and processed by people.

    • @unvergebeneid
      @unvergebeneid 7 months ago

      "Unsolvable" is a strong word. It is an already existing technique to feed a model's output back into the model. A model can in fact find fault with its own output. It can also improve on its own output or decide which of several output variants is best. Far from being unsolvable, models interrogating their own output and iterating over it is one of the more heavily researched avenues towards better results rn.

    • @Stan-b3v
      @Stan-b3v 7 months ago

      @@unvergebeneid It is a strong word, and it is correctly used.
      Your conjecture about the models getting better at interrogating themselves simply illustrates my point that humans cannot.

    • @unvergebeneid
      @unvergebeneid 7 months ago

      @@Stan-b3v oh, I completely misunderstood you. I thought you were talking about AI not being able to interrogate its own results! You're talking about interpretability!
      Well, I mean that's also an active area of research with some surprising progress. That being said, while I still find "unsolvable" to be too strong a word, along with all of safety research being in its infancy, this is something that worries me a lot.

  • @coolbd777
    @coolbd777 10 months ago +1

    Patrick's video:
    ✍️🤓☝️
    Patricks outro:
    🤘😎🎸

  • @Dan.50
    @Dan.50 10 months ago +28

    In the real world, "AI" translates to "give me government grants then have the media tell everyone I'm a genius."

    • @butwhytharum
      @butwhytharum 10 months ago +2

      Buzzwords drive attention.

    • @j3i2i2yl7
      @j3i2i2yl7 10 months ago

      So I'll just update our business plan. It's simple: just find "blockchain" and replace it with "AI" throughout.

  • @janzalud216
    @janzalud216 10 months ago +2

    this is so insanely interesting! Thank you Patrick!!

  • @makemoremusicnow
    @makemoremusicnow 10 months ago +126

    AI is spam generation at an industrial scale and in every medium known to man (text, code, images, audio, video, etc.)
    The bust after this overhyped boom will be spectacular.

    • @andybaldman
      @andybaldman 10 months ago +2

      As long as they aren’t able to self-improve it before that. And you can bet they’re trying.

    • @cameronhoglan
      @cameronhoglan 10 months ago +25

      This 100%. AI tech is fun and all, but it makes scams far worse.

    • @tomlxyz
      @tomlxyz 10 months ago +23

      The output is often so generic, and one fear of mine is that everything in the future will be even more generic.

    • @Custodian123
      @Custodian123 10 months ago

      You're using it wrong. No tool works on its own; it needs a brain using it.
      These tools are here to stay.

    • @Omniryu
      @Omniryu 10 months ago +19

      Scamming is the only industry that's seen a boom from AI lol. Everything else is just overhype and speculation.

  • @brazenzebra9581
    @brazenzebra9581 10 months ago +1

    So, a layman's short paraphrased summary.
    AI usage showed:
    1- Overall improved speed and quality of some tasks.
    2- Lower creativity and deviation between the ideas created; more generic, less original. Though high-quality, generic.
    3- Overall less accuracy on objective tasks requiring critical thinking and analysis; using it can push individuals into a state of lower accuracy.
    4- Began closing the gap between the quality of work from lower-skilled workers and that of their higher-skilled peers.
    5- AI still actively hallucinates information, even fabricating scholarly research as often as 60-80% of the time.
    My reflections remain the same. In the fields most affected by AI, originality is necessary for success, and AI consistently produces generic similarities to the point of cliché. As is its design. The risk is to neutralize the value of effort, not the value of ideas.
    Ideas, and quality thought, will remain powerful as always. But the gap between the unskilled and the skilled will evaporate, and skill alone will no longer buy you anything; only the uniqueness of one's mind will. So, by and large, it won't be incredibly different from now: thoughtless sameness will pervade and fall, and those daring and determined enough to avoid AI will find themselves like fine art among stick figures.
    The human touch will not just be marketable, but a commodity people will pay a premium for.

  • @greenockscatman
    @greenockscatman 10 months ago +5

    I saw someone on YouTube do a tutorial about how to use AI to help with share-price analysis. He never fed it any recent price data, however, so his AI helper hallucinated a bunch of nonsense every time. Like using a magic 8-ball for your trading insight.

  • @timjenkins7075
    @timjenkins7075 10 months ago

    I’ve been waiting all week for another video. Thanks!

  • @erinfindsen4953
    @erinfindsen4953 10 months ago +12

    20:11 The Air Canada chatbot did not make up the answer. The erroneous answer was on Air Canada's website.

    • @zucchinigreen
      @zucchinigreen 10 months ago +1

      Uh oh someone definitely messed up by never updating the website.

    • @hello.claude
      @hello.claude 10 months ago +1

      According to the reporting I’ve read, the information on Air Canada’s website was correct. It was the chatbot that gave the erroneous answer. This was reportedly the basis for Air Canada’s defensive argument in court.

  • @easygreasy3989
    @easygreasy3989 10 months ago

    Such a good set, set up and delivery. Thanks for the value ❤

  • @ddhurry4168
    @ddhurry4168 10 months ago +5

    Air Canada was recently ruled liable for bad advice that its AI chatbot gave in response to customer questions. The company had argued that the chatbot was an entirely separate entity.

    • @greebj
      @greebj 10 months ago +3

      The really interesting case will arise after they introduce a condition of use disclaimer "all responses are for information purposes only and you acknowledge by using the chatbot that you cannot rely on the correctness or truthfulness of any output, which is not a substitute for the wording of the text of the relevant policy or one of our employees"
      Because we all know how many people routinely do not read T&Cs

    • @ddhurry4168
      @ddhurry4168 10 months ago +10

      @greebj The court basically ruled that there is no reason a consumer would judge one part of the website accurate and another part untrustworthy. So if they want to use a chatbot as a website feature, they are liable for what it says.

    • @creepersonspeed5490
      @creepersonspeed5490 10 months ago

      @@greebj You can use RAG to improve outputs, but my question is: it's meant to help users find information, and if the information it finds isn't accurate, it doesn't find information... meaning... now your users get stuck in chatbot loops and get pissed off, or get the wrong information... which doesn't help you as a business. Just test your fucking software, damnit...

  • @Posiman
    @Posiman 10 months ago +1

    If there is one job that can be replaced by language models, it's business consultants.
    Because "Act really importantly and authoritatively while talking absolute BS" is the one thing they are really good at.

  • @enduser8410
    @enduser8410 10 months ago +5

    The problem we have with AI is that it's only good in specific, narrow applications. When I was in university, my CS professors were trying to change the name of the AI courses to Machine Learning (ML), as in their view AI had not yet been achieved. To call it AI, these algorithms would need to achieve 'general intelligence'.

    • @jameshughes3014
      @jameshughes3014 10 months ago

      Forgive me for being pedantic, but Pac-Man had AI.
      Eliza had it in the '60s.
      I think AI just means 'fake intelligence', the same way artificial plants means 'fake plants'. I feel like if everyone understood that, people would be less confused about generative models. I think lots of everyday people think it is AGI, and that has not yet been achieved.

  • @FidesAla
    @FidesAla 9 months ago +1

    The AI recognizes patterns of words, but the concept of the words having meanings at all does not “occur” to it. I put “occur” in quotes because *nothing* occurs to it. It does not “understand” anything. It does not perform the function of “understanding” or “thinking” at all, just copying. And that’s why the threat isn’t that it will take your job. The threat is that people will trust military, medical, etc. things to it thinking that it understands.

  • @curie3938
    @curie3938 10 months ago +17

    I think I recently spoke to an AI generated customer service rep from India, it perfectly replicated the same confusing, unintelligible live persons I have spoken to in the past, heavy accent and all.

    • @RogueReplicant
      @RogueReplicant 10 months ago +2

      Ikr, but the Indians are claiming to be "at the forefront of A.I.", lol

    • @n9o
      @n9o 7 months ago

      Regarding customer support, I really find it incredibly annoying that nowadays you can't easily get actual humans for support anymore. First you have to talk to some AI chatbot trying to link you to FAQs that you probably already read. Only after you tell it 3 times that your question wasn't answered does it link you to the actual support hotline, and you have to start the conversation from scratch.

  • @TiredBush
    @TiredBush 10 months ago

    2:27 My way of looking at it is: Deepmind is, as it suggests, the deeply analytical thinker, while Gemini is the creative aspect of thought, much like how we love to divide these two when discussing cognition in modern media. Perhaps combining the two models in some meaningful way is what we need for a holistic AI.

  • @richdobbs6595
    @richdobbs6595 10 months ago +19

    I would bet that the improvements in AI will lead to further progression into industrial feudalism. If a lower skilled worker can still get the job done, it will be easier to use favoritism and non-job performance issues like loyalty and conformance in selecting employees to hire and retain.

    • @paulmcgreevy3011
      @paulmcgreevy3011 10 months ago +1

      Your favourite employee is usually your most productive. However if you choose to retain an employee you like over one you don’t like then that’s a reasonable choice since you probably think that person will be better for the business overall.

    • @richdobbs6595
      @richdobbs6595 10 months ago

      @@paulmcgreevy3011 Sure, but it sucks if you are trying to compete based on straight-forward job performance. If you have to be royalty to be king, that is sort of the essence of feudalism. Since you can define best for the business with any number of objective functions, that is pretty much a null statement.

  • @dominic.h.3363
    @dominic.h.3363 4 months ago +1

    16:52 The real takeaway here is that AI can speak with such confidence, it can even fool professionals that it isn't mistaken.

  • @bigeteum
    @bigeteum 10 months ago +4

    I do use LLMs, but what I found is that they are mostly good for boilerplate knowledge. I use them like a sophisticated Google search. For example, I make a lot of data graphics. I don't know all the plotting code, but I can ask the AI for a starter. After that, customization is really precarious with AI. TBH, I don't think LLMs can solve this; they would need to understand the graphics packages and the functions under the hood to do good customization.

  • @nat9521
    @nat9521 9 months ago +1

    From my experience, current AI models tend to be most useful for more mundane tasks, e.g. OpenAI's Whisper Large for audio transcription, various OCR models which are vastly more accurate than their predecessors, and machine translation (which has also been around for a long time, but whose current incarnations represent a vast improvement). The more 'flashy' applications of AI, such as LLMs, can be useful as well, as they do on occasion yield significant time savings compared to a traditional search engine, but only if the user is not already well versed in the topic of the query. As such, they seem to make information more accessible to a wider audience, although care must be taken not to fall victim to model hallucinations, requiring independent verification of all output.

  • @Darkskindiplo
    @Darkskindiplo 10 months ago +12

    I am in chemical manufacturing and CGPT is incredibly helpful for figuring out components in chemical formulas during product development. It saves me tons of time. Also extremely helpful for my other environmental business.

    • @mutthie
      @mutthie 10 months ago +3

      Interesting. I had really bad and even dangerous experiences using GPTs to create protocol outlines. I think the issue was GPT's inability to do simple calculations. But the formatting was all right and saved me some time.

  • @teuruti55
    @teuruti55 10 months ago

    I've used AI to teach myself basic languages like SQL and M, and I've been able to write functional scripts for my company at my job. It's able to communicate with me and answer questions as I need. It's a really good teacher because it never just answers your problems; it gives you hints.

  • @Istandby666
    @Istandby666 10 months ago +16

    They did the Tic Tac Toe calculations in the 80's.
    The AI (Joshua) came to the conclusion the only winning move is to not play the game.
    **Wargames**
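The perfect-play result that Joshua "discovers" in the film is easy to check with a short minimax sketch. This is a toy script, not anything from the video; the board encoding and function names are my own:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    # Score from X's perspective: +1 X wins, -1 O wins, 0 draw.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = board[:i] + player + board[i + 1:]
            scores.append(minimax(nxt, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

# Perfect play from the empty board is a draw:
# "the only winning move is not to play".
print(minimax(' ' * 9, 'X'))  # 0
```

With memoization the full game tree is small enough to solve instantly, which is why even 1980s hardware could do it.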

    • @johntaggart979
      @johntaggart979 10 months ago +2

      "Would you like to play a game?"

    • @Istandby666
      @Istandby666 10 months ago

      @@johntaggart979
      How about a game of Global Thermonuclear War?

    • @user-xl5kd6il6c
      @user-xl5kd6il6c 10 months ago

      The issue is that we are calling machine learning "AI". These models are just architectures full of parameters; they're part of the field of AI, but they aren't AI.
      What "AI" originally meant is basically what's now called "AGI", and these models that just predict the next token using statistics aren't it.

    • @Istandby666
      @Istandby666 10 months ago

      @@user-xl5kd6il6c
      We are on the ground floor. The possibilities are what make this interesting.

  • @SebastianSkadisson
    @SebastianSkadisson 7 months ago

    While my colleagues and I always include LLMs where it makes sense from our perspective, on an operational level we tend to use the proven scientific AI models at the more abstract layers of our systems rather than the more popular ones, which in general we only use directly at the surface/interface. The scientific models tend to be more reliable, but they also require a much better understanding of the inner workings of neural-network AI to implement correctly. ("Scientific models" means mathematical models generally used in data analysis, e.g. Gaussian naive Bayes.)
    And me personally, I love to toy around with DecisionTrees, DecisionForests and RandomForests, because to me, being a programmer, these models represent the way a programmer solves a problem. You throw a bunch of variables at the programmer and show them the desired output until they eventually get it right. Once these algorithms become reliable, which takes a lot of training, they are immediately more flexible than the usually more rigid classical algorithms. But that's just me playing around; we never wrote production code that way. I just think that if my profession were ever replaced, right now, that's exactly how.
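A decision tree really is learned if/else logic, much as described in that comment. As a minimal sketch (the data, feature names, and labels below are all invented for illustration), a one-level "decision stump" picks the single question that best separates the training labels:

```python
from collections import Counter

def majority(labels):
    # Most common label in a list.
    return Counter(labels).most_common(1)[0][0]

def stump(rows, labels):
    # rows: list of dicts mapping feature name -> value. Pick the feature
    # whose per-value majority vote misclassifies the fewest training rows.
    best = None
    for feature in rows[0]:
        groups = {}  # feature value -> labels seen with that value
        for row, label in zip(rows, labels):
            groups.setdefault(row[feature], []).append(label)
        errors = sum(len(g) - Counter(g).most_common(1)[0][1]
                     for g in groups.values())
        if best is None or errors < best[0]:
            best = (errors, feature, {v: majority(g) for v, g in groups.items()})
    _, feature, table = best
    default = majority(labels)
    # The learned "tree": one question, with a lookup table of answers.
    return lambda row: table.get(row[feature], default)

# Invented toy data: should this build be deployed?
rows = [
    {"tests_pass": "yes", "friday": "no"},
    {"tests_pass": "yes", "friday": "yes"},
    {"tests_pass": "no",  "friday": "no"},
    {"tests_pass": "no",  "friday": "yes"},
]
labels = ["deploy", "deploy", "hold", "hold"]

predict = stump(rows, labels)
print(predict({"tests_pass": "yes", "friday": "yes"}))  # deploy
```

Here the stump learns that `tests_pass` alone separates the labels perfectly, so it ignores `friday`; a real decision tree (or forest) just stacks and averages many such learned splits.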

  • @Phantom-mk4kp
    @Phantom-mk4kp 10 months ago +5

    I gave ChatGPT a simple task involving gear ratio and torque. After ridiculous results, and my prompting that it had made a mistake, each time followed by "I apologise, you are correct", I gave up after four attempts.

    • @mandisaw
      @mandisaw 10 months ago +1

      It's closer to autocomplete than a calculator. Could probably stumble (or hallucinate) its way through some basic HS Physics textbook examples, but anything tougher is out, and the math would all be wrong anyway.

    • @greebj
      @greebj 10 months ago +3

      I asked about cofactors for a thyroid enzyme, it missed one, I asked it to find a paper outlining the missing nutrient as a cofactor, then whether the original list should have included it, it admitted the original response was incomplete, I immediately asked the original question again and it repeated the original (incomplete) answer. 😂

  • @Bobbysf92
    @Bobbysf92 10 months ago +1

    Patrick, this is one of the most informative and to the point pieces of content on ai I've seen lately. Great job!

  • @gregtomamichel973
    @gregtomamichel973 10 months ago +3

    As an owner of an automotive workshop, I feel pretty relaxed about AI. 😉 Seriously, it's interesting that this "revolution" affects a different part of the workforce to previous large technological advances.

    • @Lisekplhehe
      @Lisekplhehe 10 months ago

      Yeah, a lot of precise movements that aren't replicable, as different vehicles are built differently. You're pretty safe for a long time. It would take super-precise robotics and machinery (which would also have to be maintained) and somehow mixing AI into all of this. It could be useful as a knowledge base, but I think it would not have enough data to do so reliably.

    • @dillonbussard9576
      @dillonbussard9576 10 months ago

      I own a motorcycle repair shop. This is exactly how I feel. I use AI to help work on bikes when I'm stumped, but it mostly just spits out a few Google responses.

  • @oneman7094
    @oneman7094 10 months ago +1

    On the generic-results study, you are most likely referring to "RLHF mode collapse". Modern LLMs go through several stages.
    The first is general pretraining, where the model tries to guess the next word across the internet. This is a highly varied task, and the model does not place all of its bets on one word but spreads them across many words.
    At this stage the model is "creative": if you sample probabilistically among its guesses, you get varied output.
    The last stage is RLHF, where, after the model generates something, you tell it whether the output is OK or not. This is where we inject ethical considerations and tell it in which format to answer.
    But this also destroys the model's creativity. It learns to put all of its bets on one word, and when you sample probabilistically you just get that word, or one general mood/answer.
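A toy sketch of that collapse (the words and probabilities below are invented for illustration): sampling from a spread-out next-word distribution gives varied output, while a post-RLHF-style distribution that piles nearly all its probability on one word keeps returning the same thing. Shannon entropy quantifies the difference:

```python
import math
import random
from collections import Counter

def entropy(dist):
    # Shannon entropy in bits; higher means sampling is more varied.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def sample(dist, rng):
    # Draw one word in proportion to its probability.
    r = rng.random()
    cum = 0.0
    for word, p in dist.items():
        cum += p
        if r <= cum:
            return word
    return word  # guard against floating-point rounding

# Invented next-word distributions after "The weather today is ..."
pretrained = {"sunny": 0.30, "awful": 0.25, "unpredictable": 0.25, "glorious": 0.20}
post_rlhf  = {"sunny": 0.97, "awful": 0.01, "unpredictable": 0.01, "glorious": 0.01}

rng = random.Random(0)
print(Counter(sample(pretrained, rng) for _ in range(50)))  # a real mix of words
print(Counter(sample(post_rlhf, rng) for _ in range(50)))   # dominated by "sunny"
print(round(entropy(pretrained), 2), round(entropy(post_rlhf), 2))  # ~1.99 vs ~0.24
```

Both distributions sum to 1, but the second has collapsed onto a single high-probability answer, which is the "generic mood" the comment describes.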

  • @marc_aussie
    @marc_aussie 10 months ago +26

    The fact that neither Patrick nor anyone in the comments can provide a personal example of using AI for anything remotely close to groundbreakingly useful tells me everything about its current usefulness.

    • @jameshughes3014
      @jameshughes3014 10 months ago +5

      There are examples, just not for generative AI. I'm a little concerned that when this bubble bursts, people will lump all AI into this generative model dumpster fire. Machine learning and AI are and have been incredibly useful in lots of ways that don't have anything to do with writing stories or generating lifeless pictures

    • @GalaKrond-b7k
      @GalaKrond-b7k 10 months ago +3

      AI has already created several new medicines and is set to revolutionize the field entirely, little coper piggy kek

    • @martianhighminder4539
      @martianhighminder4539 10 months ago +3

      ​@@jameshughes3014"AI winter" was first coined as a term in 1984 to describe the developmental ebbs and flows in AI interest, research progress, and funding over time. It wouldn't be unusual to have the field hit a wall again, but I do believe enough foundational work has been established to keep interest high for quite a while.
      I do expect there might be a goal shift from trying to finagle together a general purpose AI that can handle all human complexity equally, though, and more focus on optimizing for specific tasks and knowledge niches.

    • @marc_aussie
      @marc_aussie 10 months ago +2

      @@jameshughes3014for example?

    • @jameshughes3014
      @jameshughes3014 10 months ago +4

      @@marc_aussie Advances in the discovery of new medicines, new materials, and physics. Finding new planets. Developing and checking microchips. Anything that requires processing large amounts of data. P.S. I forgot my favorite: robots. Soon we will actually have robots that can help the less abled, thanks to AI. It's been a huge boon for them.

  • @TheElectronPusher
    @TheElectronPusher 10 months ago +2

    Pat, we're going to need you to start a fashion blog.

  • @jolly-rancher
    @jolly-rancher 9 months ago +3

    Nice blazer Patrick

  • @PastaEngineer
    @PastaEngineer 10 months ago +2

    That title got me ready to rage-comment lol, but the video is very accurate. AI is useful if you know how to use it. If you expect to prompt just once and get the correct answer, no. It requires the same skill that one uses when spending 30-60 minutes writing a very important email. You need to format your request in the exact manner required to optimize its context memory; it takes several prompts, requires iteration, and you must acknowledge and work around its weaknesses.
    I made a custom bot capable of running external actions to acquire multiple data sources to fill its context memory, and then using an external web request to access custom instructions for how to process the context based on user keywords.
    It's going to make one hell of a good tabletop adventure assistant, but could it also be useful in the job field for all those people whose main role is writing VBA to generate reports? Probably. I think we will see some really neat, useful tools, even if that use is entertainment.
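The pattern described in that comment (keyword-triggered external actions that fill the model's context before the real call) can be sketched roughly like this. Every function, keyword, and data source here is a made-up stand-in, not the commenter's actual bot:

```python
def fetch_monster_stats(name):
    # Stand-in for an external lookup (an API call, a database, a wiki...).
    return f"[stats for {name}: HP 40, AC 15]"

def fetch_rules(topic):
    # Stand-in for fetching custom instructions from a web request.
    return f"[house rules for {topic}: roll with advantage indoors]"

# Keyword -> action that returns extra context for the model.
ACTIONS = {
    "goblin":  lambda: fetch_monster_stats("goblin"),
    "stealth": lambda: fetch_rules("stealth"),
}

def build_prompt(user_message):
    # Gather context from every action whose keyword appears in the message,
    # then prepend it so the (hypothetical) model sees it before the question.
    context = [run() for key, run in ACTIONS.items()
               if key in user_message.lower()]
    return "\n".join(context + [f"User: {user_message}"])

print(build_prompt("Can my rogue use Stealth to sneak past the goblin?"))
```

The model itself never "fetches" anything in this design; the surrounding code decides what to retrieve and stuffs it into the prompt, which is why the keyword matching has to be right.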

  • @JM-wm6he
    @JM-wm6he 10 months ago +6

    I've been using AI to learn coding but even I as a beginner frequently spot very trivial errors in the code it provides.

    • @jameshughes3014
      @jameshughes3014 10 months ago +1

      I'm honestly so glad I learned to code before generative AI was a thing. It must be so frustrating to try to find bugs without knowing what to look for. But that is gonna end up making you a better coder, I think; you're developing the core thinking skills that really matter.

  • @agentsnorlson7913
    @agentsnorlson7913 10 months ago +3

    Radiographer/sonographer here. AI is apparently quite good at assisting diagnosis in medical imaging studies. As for the enthusiastic claims that AI will permeate all areas of radiography practice, though, I genuinely haven't seen much evidence come through yet.
    Mainly I've seen the concrete example of AI predicting where to place the callipers to measure fetal growth when we take the pictures, which can then be adjusted (it's quite good, but honestly it saves us about a minute or less per study, and it really isn't hard).

    • @franug
      @franug 10 months ago +1

      I remember reading about how "computers" (they didn't talk about AI back then) would replace radiologists like, a decade ago, and you guys are still here lol

    • @agentsnorlson7913
      @agentsnorlson7913 10 months ago

      @franug Luckily/unluckily, I'm one of the techs that take the pictures, so I'm feeling pretty safe. We've survived staffing lay-off panics with the move from film to far more efficient imaging methods (a few minutes to develop a film vs. it's there in literal seconds AND can be adjusted), and I think that was a bigger threat. Wasn't around for that, mind you; I'm experienced but not that experienced.
      The actual reporting side is going to be VERY interesting. Radiologists here in Australia are about mid six figures and well up, so there's a LOT of incentive to save money in this area. I went to a lecture on AI quite a while ago, and it was found that it was actually more accurate than the radiologists (likely because the latter can suffer from recency bias). Of course the College of Radiologists isn't going to give up, and I'm thinking the likely outcome will be AI being used as an adjunct to suggest the presence or absence of disease, which the radiologist will agree or disagree with. This would improve efficiency if nothing else, and probably be at least no worse in terms of accuracy than we have now.
      Interesting times.

  • @chesthoIe
    @chesthoIe 10 months ago +18

    AI: Solves Vesuvian burnt scrolls
    Patrick Boyle: Is AI Actually Useful?
    AI: Cries a single tear, explodes

  • @Zenobeus
    @Zenobeus 10 months ago +1

    Before I watch this video, I'm going to answer the title: yes, very much so. After watching the video: lovely video.

  • @jextra1313
    @jextra1313 10 months ago +4

    AI is just going to make us dumber. Imagine a people that haven't needed to generate ideas with their own mind for 20 generations.

  • @sakdab
    @sakdab 10 months ago +1

    I use GPT-4 around 5-10 times a day, for hobbies, questions, and work. Whenever Google isn't useful, GPT-4 usually is. I work as a programmer, and I am usually smarter than GPT-4. GPT-4 is absolutely awful at understanding complex code with a lot of custom functions. However, it is super impressive at making a new quick/easy function for me. It writes functions for me (no more than 30-ish lines), and if there are any errors, I fix them. Much faster than typing everything out. Super good at small, simple tasks; terrible at big, complex ones.

  • @eubalenaglacialis
    @eubalenaglacialis 10 months ago +8

    I am a biologist and I use ChatGPT to write simple code all the time. I also build neural networks to solve some very complex problems in biology. I definitely think AI is revolutionizing the field.

  • @MrRjsnowden
    @MrRjsnowden 3 months ago

    They are like old PCs: the main way to work with them is via a command line. Without a good, correctly formed request, you don't get what you want out of them. The LLM will not go out and seek a solution to a problem, because it has no way to look for a problem. You have to know how to apply the models to daily functions, and as soon as you deviate from the standard, the model will likely fail or hallucinate and produce a poor or incorrect outcome.

  • @coldspring22
    @coldspring22 10 months ago +9

    As an IT professional, I find ChatGPT and similar generative AI most useful as a partner for in-depth discussion of topics that are elusive in a Google search. While generative AI often gives wrong answers, it does provide direct answers on topics which Google search cannot seem to interpret correctly, and it often brings up new tangents and new facets of the topic that you hadn't thought of before. So generative AI for me is a winner as a learning tool and for developing an in-depth understanding of topics for which there isn't much clear documentation to be found via Google search.

    • @tomlxyz
      @tomlxyz 10 months ago +6

      While that's useful it's far less than promised in all the hype and that can't hold up with all the valuations of AI companies

    • @Snaperkid
      @Snaperkid 10 months ago +2

      Except it doesn't give answers or information at all. All it's trying to do is tell you what it thinks you want to hear. That includes repeating your own wrong information back to you as if it were authoritative.

  • @ikotsus2448
    @ikotsus2448 10 months ago +15

    -Is AI actually useful?
    (humans)
    -Are humans actually useful?
    (the billionaires controlling AI)
    -Are these billionaires actually useful?
    (AI)
    edit: Patrick is exhibiting survivorship bias here. An extinction event is always preceded by non-extinction events. The rest of the video was excellent.

    • @AngelicRequiemX
      @AngelicRequiemX 10 months ago

      Can't wait for the day to happen...it's going to be a reckoning.

  • @Posiman
    @Posiman 10 months ago +7

    I recently asked three different language models (ChatGPT, Gemini and Copilot) whether there is a =LAMBDA() function in Microsoft's DAX query language.
    All three of them told me it exists and described in great detail how to use it and which use cases it was suitable for. But all of them refused to provide a link to the official documentation discussing it. Which was a good call, because the function does not actually exist in DAX.

  • @OdysseusIthaca
    @OdysseusIthaca 10 месяцев назад

    I'm an engineer. ChatGPT is like a powerful assistant that goes off and gets stoned about every 10 minutes: it comes up with some off-the-wall answer on how to build a function, then comes up with a brilliant solution for another problem. So you're basically alternating between making fast progress on a project and slamming into a brick wall. There's no doubt, though, that used effectively it can be a real force multiplier. I would say I'm 2-3 times faster with it.

  • @SG-nb9go
    @SG-nb9go 10 месяцев назад +7

    It’s useful only for simple things as an assistant, but not at all for aerospace engineering expertise. I can tell it gives out wrong answers even to basic questions.

    • @yds6268
      @yds6268 10 месяцев назад +2

      You can, but I bet the CEOs can't

    • @andybaldman
      @andybaldman 10 месяцев назад

      The only thing required to change that is time.

    • @papalegba6796
      @papalegba6796 10 месяцев назад +2

      Same here, it is useless for anything practical. Actively dangerous in fact.

  • @vbywrde
    @vbywrde 10 месяцев назад +1

    There are several aspects of working with AI that need to be taken into account:
    1) AI is notoriously unreliable when it comes to presenting facts.
    2) Those who use AI for work need to use it nimbly, hopping between tools and sometimes using one tool to validate the work of another, working razzle-dazzle to get to the needed result.
    3) AI can be expensive (if you use the GPT-4 API for inference, for example).
    4) The entire industry is in Wild West mode, and things are not only changing constantly but improving as they go.
    5) There are huge ethics questions around the use of AI, so there is a political storm brewing over who can use these tools, and for what purposes.
    6) AI does not reason well, and there are simple logic problems that stump it completely.
    7) AI will not tell you up front whether the answer it gives is a hallucination; it will present it as if it were factually correct, unless you counter with statements that reveal its errors, in which case it will apologize (depending on the model and a few other factors) and try to present a better answer.
    8) AI models remember absolutely nothing at all. They do not store data and have no idea what was said prior to the latest input. Any memory the AI seems to have is being managed by the code that interacts with the model. This may seem like a subtle point to some, so let me phrase it this way: interacting with the AI does not make it smarter or make it know you better. If it seems to, that is because the company hosting the model records what you say and what the AI responds, and then feeds that back to the model on each exchange you have with it. The underlying model does not change at all during or because of these exchanges.
    9) New and better models are coming out all the time, but they are incredibly expensive to produce, and the bigger they are (meaning the more capable), the more expensive they are. GPT-4, for example, cost $61 million to create. They are also very expensive to run inference on (send questions to), so the era of free AI is likely to be short.
    10) LLMs (Large Language Models) are good for some things but not others. They are great at dealing with language, but not great at dealing with logic, math, or facts. Use them for what they are good at and you'll find them useful; try to use them for what they are not good at and you will be frustrated. It is up to the person to learn what to use them for, and how. LLMs are a tool, not a miracle, despite what the Hype-Masters of the industry would have people believe.
    All that said, I usually use AI in my job as a programmer analyst by going to Bing Chat (which uses GPT-4 on the back end) and asking it to help me write SQL queries. It is pretty good at this and can handle most requests if they do not exceed a certain, vague level of complexity. It does save me time when I use it in razzle-dazzle mode. It is also often wrong, so I cannot have it do my job, and my boss couldn't fire me and have the AI do my job. At least not yet. I also use it to help me write position papers for my company, usually on the subject of AI.
    I think that over time, AI will integrate into civilization and become ubiquitous in the same way that computers did. If we want to be successful in the future, we need to learn how to use AI and keep up with its advancements as they evolve. Just my take on it. YMMV.
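    The point about models storing nothing between calls can be sketched in a few lines. This is a toy illustration, not any real API: `fake_llm` is a hypothetical stand-in for an inference call, and the `ChatSession` class plays the role of the hosting company's code that re-sends the whole transcript on every turn.

```python
# Toy sketch of how chat "memory" works: the model itself is stateless,
# so the client code re-sends the entire transcript on every turn.
# fake_llm is a hypothetical stand-in for a real inference call.

def fake_llm(prompt: str) -> str:
    # A real model would generate text from the prompt; here we just
    # report how much context it was handed.
    return f"(reply generated from {len(prompt)} chars of context)"

class ChatSession:
    def __init__(self):
        self.history = []  # stored by the client app, NOT by the model

    def send(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # Each call concatenates the full history into one prompt,
        # which is why the model only *appears* to remember you.
        prompt = "\n".join(self.history)
        reply = fake_llm(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply

session = ChatSession()
first = session.send("Hello")
second = session.send("What did I just say?")
# The second prompt is longer because it carries the whole transcript;
# nothing inside fake_llm changed between the two calls.
```

    Swap in a real API call for `fake_llm` and the structure is the same: the "memory" lives entirely in `self.history` on the client side.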

    • @Leonhart_93
      @Leonhart_93 10 месяцев назад +1

      Nice and insightful; I share those observations as well. I have seen too many programmers paralyzed by the anxiety that AIs will steal their jobs soon. But if they are decent at their jobs, that day is not coming soon.

  • @hydrohasspoken6227
    @hydrohasspoken6227 10 месяцев назад +7

    I am an experienced medical doctor. I use GPT-4 heavily to discuss complex medical cases.
    It is my understanding that it has probably reached medical expert level. It hardly ever goes wrong.

  • @GettingMyLifeTogether2024
    @GettingMyLifeTogether2024 10 месяцев назад +11

    I asked a friend who studied AI about the recent AI craze; he said that the breakthroughs in AI are mostly due to better processes.

    • @tomlxyz
      @tomlxyz 10 месяцев назад +4

      I've heard that they can now just throw more stuff at the wall and see what sticks, but actual understanding doesn't seem to have improved to the same degree. That's why AI seemingly randomly fails at one thing but not at a similar thing.

    • @gibbogle
      @gibbogle 10 месяцев назад +1

      Processors, i.e. CPUs?

    • @Omniryu
      @Omniryu 10 месяцев назад

      I think people forget that these breakthroughs are new, not AI itself. I remember seeing AI painting tools years ago, but they could only do landscapes. Much of this has been in development for a decade.

    • @user-xl5kd6il6c
      @user-xl5kd6il6c 10 месяцев назад

      @@Omniryu More than that. I'm an AI engineer and we use techniques from the 60s all the time. Vector DBs, for example: all of the concepts behind them are 50 years old.
      What we have now is more processing power in GPUs. The difficulty in understanding what the models are doing is that the architectures are very simple; the magic comes from the parameters that were trained.
      The parameters number in the billions; to a developer this is a black box, and it's treated as such for the most part.
      People don't seem to understand that these models have no code, for the most part. It's just a box that takes an input and produces an output.
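      The point above can be made concrete with a toy example: the "code" of a neural layer is a few lines of arithmetic, and everything the model "knows" lives in the numbers. The sizes, weights, and `tanh` choice here are illustrative, not taken from any real model.

```python
import math
import random

# Toy illustration: the architecture is trivially simple; the behavior
# comes entirely from the numeric parameters. Here the weights are
# random stand-ins; in a real model they come from training.

random.seed(0)

def make_layer(n_in, n_out):
    # One weight per (input, output) pair; a real LLM has billions.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x):
    # A matrix-vector multiply plus a nonlinearity (tanh here) is the
    # entire "code" of a layer: input in, output out, nothing else.
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in layer]

params = make_layer(4, 3)
output = forward(params, [0.5, -0.2, 0.1, 0.9])
print(len(output))
```

      Stack many such layers and swap the random weights for trained ones, and you have the black box the comment describes: the interesting behavior is in `params`, not in `forward`.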

  • @chestodor4161
    @chestodor4161 10 месяцев назад +5

    I find LLM AI most useful for translating text into other languages or making summaries of documents and other text-based information.

    • @TeamSprocket
      @TeamSprocket 7 месяцев назад

      How do you know it's accurate when you don't have any validation of it?

  • @rpere008
    @rpere008 10 месяцев назад +1

    In my opinion the future developments around AI might focus on the balance between applicability and accuracy. i.e. a happy medium between generic AI tools with mediocre accuracy and highly specialized AI tools with limited applications outside of their field. As a translator I like the convenience of machine translation for generic texts that I can copy-edit later, and I like the convenience of computer-aided translation supported by a good quality termbase and translation memory for more specialised texts; I don't know of any translation tool that combines applicability and accuracy optimally.

  • @madJesterful
    @madJesterful 10 месяцев назад +5

    It's worth noting that the findings also appear as though they may reflect the bias of the entity doing the research. Remember that management consultants are going to find that they have the answers for your business, and that those answers are not to keep doing whatever you have been doing - they're cutting staff 10%.
    We want to look hip and show industry we are good with AI, so it's got to come out ahead, and it sure would be nice if the best AI usage involved training we can provide, wouldn't it? Oh look, it does!
    And I'm not saying the findings are "wrong"; you just have to look at the blind spots: will many of these 'ideas' turn out to be based on product ideas that already existed and would cause a lot of legal or competitive problems? Patrick comments on a finding hinting that they may be, because they were all a lot less diverse, presumably reflecting the training data.

    • @jxh02
      @jxh02 10 месяцев назад +1

      I would love to see their "quality" metric for this kind of work. Getting metrics right is hard. Even the blithe assertions about people's incentives in the experiment design are suspect. Getting incentives right is also hard.