What are Transformers (Machine Learning Model)?

  • Published: 10 Mar 2022
  • Learn more about Transformers → ibm.biz/ML-Transformers
    Learn more about AI → ibm.biz/more-about-ai
    Check out IBM Watson → ibm.biz/more-about-watson
    Transformers? In this case, we're talking about a machine learning model, and in this video Martin Keen explains what transformers are, what they're good for, and maybe ... what they're not so good at.
    Download a free AI ebook → ibm.biz/ai-ebook-free
    Read about the Journey to AI → ibm.biz/ai-journey-blog
    Get started for free on IBM Cloud → ibm.biz/Bdf7QA
    Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
    #AI #Software #ITModernization

Comments • 138

  • @command.terminal
    @command.terminal 5 months ago +12

    In our graduation years we used to learn about something called a codec, as in coder-decoder (something like modem for modulation-demodulation, or balun for balanced-unbalanced, in the domain of communication technology). So as far as I can understand from the video, transformers are just a fancy and advanced name for a codec, one that functions at a much bigger capitalistic scale.

  • @ChatGPt2001
    @ChatGPt2001 1 month ago +8

    Transformers are a type of machine learning model used primarily for natural language processing (NLP) tasks. They have revolutionized the field of NLP due to their ability to handle long-range dependencies and capture complex linguistic patterns. Here are key points about transformers:
    1. **Attention Mechanism**: Transformers use an attention mechanism that allows them to weigh the importance of different words or tokens in a sequence when processing input data. This mechanism enables the model to focus on relevant information while ignoring irrelevant or redundant parts.
    2. **Self-Attention**: In a transformer model, self-attention refers to the process of computing attention scores between all pairs of words or tokens in an input sequence. This mechanism allows the model to capture dependencies between words regardless of their positions in the sequence.
    3. **Multi-Head Attention**: Transformers often employ multi-head attention, where multiple attention heads operate in parallel. Each attention head learns different aspects of the input data, enhancing the model's ability to extract meaningful information.
    4. **Encoder-Decoder Architecture**: Transformers typically consist of an encoder-decoder architecture. The encoder processes the input sequence, while the decoder generates the output sequence. This architecture is commonly used in tasks like machine translation and text generation.
    5. **Positional Encoding**: Since transformers do not inherently understand the order of tokens in a sequence like recurrent neural networks (RNNs), they use positional encoding to provide information about token positions. This allows the model to consider sequence order during processing.
    6. **Transformer Blocks**: A transformer model is composed of multiple transformer blocks stacked together. Each block contains layers such as self-attention layers, feedforward layers, and normalization layers. The repetition of these blocks enables the model to learn hierarchical representations of the input data.
    7. **BERT and GPT**: Two popular transformer-based models are BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pretrained Transformer). BERT is designed for tasks like sentiment analysis and question answering, while GPT focuses on generating human-like text.
    Transformers have significantly advanced the capabilities of NLP models, leading to breakthroughs in areas such as language translation, text summarization, sentiment analysis, and dialogue systems.
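    The self-attention step in points 1 and 2 can be written in a few lines. Below is a minimal NumPy sketch, purely for illustration: the sequence length, model width, and random weight matrices are made-up assumptions, not values from the video or from any real trained model.
    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projection matrices."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project into queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])     # attention score for every token pair
        weights = softmax(scores, axis=-1)          # each row sums to 1
        return weights @ V                          # weighted mix of value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 16                        # e.g. a 5-token sentence (assumed sizes)
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)      # -> (5, 16)
    ```
    Multi-head attention (point 3) simply runs several such projections in parallel and concatenates the results.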

    • @user-jl5gj4mv1z
      @user-jl5gj4mv1z 25 days ago

      you put great effort into writing this

    • @ShivammWarambhey
      @ShivammWarambhey 11 days ago

      Chatgpt generated

    • @hightechhippie
      @hightechhippie 7 days ago

      Thanks. I'm going to start selling AI services and systems if I can. I already work with tech, so I'm all in on AI now that I know what it can do. I'm going to have a personal robot to hang out with and watch, and when I get old it will be my bodyguard

  • @claudiamariariveraguevara7376
    @claudiamariariveraguevara7376 7 months ago +1

    Thank you for your enthusiasm and explanation, by far the best

  • @jaimeeduardo159
    @jaimeeduardo159 1 year ago +45

    Banana joke GPT-4:
    Sure, here's a banana joke for you:
    Why did the banana go to the doctor?
    Because it wasn't peeling very well!

    • @evaar440
      @evaar440 1 year ago +2

      Good transformer 🤣

  • @amarnamarpan
    @amarnamarpan 10 months ago +27

    Dr. Ashish Vaswani is a pioneer and nobody is talking about him. He is a scientist from Google Brain and the first author of the paper that introduced TRANSFORMERS, and that is the backbone of all other recent models.

    • @user-uv2sy5je4z
      @user-uv2sy5je4z 8 months ago

      Agreed

    • @AK-ex5md
      @AK-ex5md 3 months ago

      He should be documenting his work like our guy, and making interesting vids.
      Hope it happens.

  • @ms.barrio4402
    @ms.barrio4402 1 year ago +17

    I really love your videos as they are really friendly to understand. Really grateful for the high quality of the synthesis of key messages on AI/ML/DL. I am a medical doctor and biomedical researcher. I can see the great potential of using the different techniques to further develop a bunch of areas, for example: economic evaluations based on modeling (using a combination of approaches in the sensitivity analysis to find out the internal consistency of the predictions…to gain internal validity as a cornerstone to have external validity). So, looking forward to learning more through your channel.
    Thank you, again for sharing good quality knowledge.
    L.

    • @ms.barrio4402
      @ms.barrio4402 1 year ago

      Congratulations on all the team work! I will keep learning more. Thank you all, Leslie.

  • @hassanjaved906
    @hassanjaved906 1 year ago

    I like to see the energy which you put into it. Thanks for this.

  • @ArchieLuxtonGB
    @ArchieLuxtonGB 2 years ago +6

    Hi Martin from the Homebrew Challenge! ML and beer clearly go hand in hand!

  • @user-kc8qb8qf7r
    @user-kc8qb8qf7r 5 months ago

    Thank you for your video, it's really easy to understand.

  • @GregHint
    @GregHint 11 months ago +4

    What a great way to introduce the topic. First 4 seconds made me laugh out loud. Well done (and the rest of the video as well)

  • @goldencinder7650
    @goldencinder7650 1 year ago +1

    I have been more than blown away by the unfathomable exponential growth from just increasing transformers and a few weights lol

  • @ilhamije
    @ilhamije 1 year ago +1

    Thank you!

  • @yasmincohen-sason3325
    @yasmincohen-sason3325 1 year ago

    This was great!!!

  • @garfocarro
    @garfocarro 1 year ago +68

    is the fact that he is able to write text mirrored incredible or is there a simple trick here?

    • @IBMTechnology
      @IBMTechnology  1 year ago +97

      There is a trick. Hint: he's not left handed.

    • @vaibhavthalanki6317
      @vaibhavthalanki6317 1 year ago +28

      it's flipped and rotated, done through editing

    • @leihejun844
      @leihejun844 1 year ago +3

      @@IBMTechnology yeah, I thought he couldn't be left handed.

    • @leihejun844
      @leihejun844 1 year ago

      @@vaibhavthalanki6317 it's not a glass, it's a mirror I think.

    • @somehhakarima5408
      @somehhakarima5408 1 year ago

      @@IBMTechnology thought he was left handed

  • @didyouknowamazingfacts2790
    @didyouknowamazingfacts2790 1 year ago +38

    The Transformer technology is the reason why you see AI everywhere.

  • @nikhilranka9660
    @nikhilranka9660 11 months ago +4

    Thanks for this video - a simple and concise introduction to transformers.
    Do large language models really possess reasoning capabilities? Or does the way they operate just make it seem so?

  • @robb1324
    @robb1324 1 year ago +71

    Perhaps the AI made the banana joke as a subtle way to tell us humans that we are a cruel species that mash anything we come across. The AI finds it funny because the banana would rather cross the road and take on the high likelihood of being mashed violently by a vehicle to avoid the certain mashing by humans. Perhaps the AI identified with the banana 🤔

    • @st0a
      @st0a 1 year ago +15

      Next level empathy: thinking about a banana's perception of reality 🧠

    • @drewsteinman1898
      @drewsteinman1898 1 year ago

      Q

    • @zainkhalid5393
      @zainkhalid5393 1 year ago +3

      You guys are overthinking it. 😁

    • @gohardorgohome6693
      @gohardorgohome6693 1 year ago

      that's how I interpreted it too - like yeah, the AI knows the banana doesn't want to be mashed by a car, neither do I

    • @l4l01234
      @l4l01234 1 year ago +1

      No, you’re definitely overthinking it. The AI doesn’t think anything because it is incapable of such context like “we are a cruel species that mash anything we come across”. Unless you specifically input that in the prompt, it has no mechanism to even conceive of the phrase.

  • @hightechhippie
    @hightechhippie 7 days ago

    so starting at about 4:10, when he explains the difference between classical algorithms versus a generalized pre-trained transformer model using an attention mechanism - could this be described as a typical PC processor compared to a quantum computer? I understand superposition on the quantum side, and both are a set of one-versus-many calculations? It's imitating thinking in the AI model, whereas with the quantum PC, well, I don't think we know, except it goes and comes back?

  • @zackmertz3214
    @zackmertz3214 1 year ago +6

    Great video! I'm stumped on how you made this. Did you really write backwards? Can you reveal your magic trick?

    • @JoshWalshMusic
      @JoshWalshMusic 1 year ago +7

      You write it naturally and then flip the video when editing.

    • @AK-ex5md
      @AK-ex5md 3 months ago

      Exactly what's going on in my mind lmao

  • @albertkwan4261
    @albertkwan4261 1 year ago

    This is the pinnacle performance of training.

  • @daniel_tenner
    @daniel_tenner 2 months ago

    “Before too long, they might even be able to come up with jokes that are actually funny.”
    2 years later, here’s the banana joke ChatGPT 4 (already 1y old) came up with for me.
    > Why did the banana go to the doctor?
    > Because it wasn't peeling well!
    I think we can call that a win.

  • @udayvadecha2973
    @udayvadecha2973 3 months ago

    You are mirror writing, Great skill🤩

  • @noahwilliams8996
    @noahwilliams8996 1 year ago +4

    How does the transformer take something of variable length (like a sentence) and shove it into a neural network (which requires a fixed number of inputs)?

    • @anushka.narsima
      @anushka.narsima 11 months ago +1

      Generic NNs take only fixed inputs, but this is one of the specialties of these types of models! RNNs (the older models used for NLP) were created back in the 80s to address mainly this issue, along with memory being important for sequences. LSTMs and now transformers came in to solve the issues with RNNs.
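      To make the fixed-input point concrete, here is a minimal sketch (Python, with made-up token IDs) of the usual workaround in transformer pipelines: pad every sentence in a batch to the same length and pass a mask so attention ignores the padding positions. All names and values here are assumptions for illustration only.
      ```python
      import numpy as np

      sentences = [[12, 7, 99], [4, 8, 15, 16, 23]]   # two sentences of different lengths
      PAD = 0
      max_len = max(len(s) for s in sentences)

      batch = np.full((len(sentences), max_len), PAD)
      mask = np.zeros((len(sentences), max_len), dtype=bool)
      for i, s in enumerate(sentences):
          batch[i, :len(s)] = s                       # copy the real tokens
          mask[i, :len(s)] = True                     # True = real token, False = padding

      print(batch)   # (2, 5) matrix of token IDs, padded with 0
      print(mask)    # passed to attention so padded positions get ~zero weight
      ```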

  • @steriowang
    @steriowang 2 months ago

    Actually, I'm interested in the handwriting presentation style. How is it made?

  • @AbdulRahman-tj3wc
    @AbdulRahman-tj3wc 8 months ago

    Are encoders and decoders both RNNs? Plz clear my doubt.

  • @hobonickel840
    @hobonickel840 1 year ago +2

    Does this mean they can fix my adhd?
    I don't quite know why but all this transformer tech helps me understand my own glitched mind better

  • @jonasgk86
    @jonasgk86 1 year ago +4

    Lol, I find the banana joke funny :)

  • @ramielkady938
    @ramielkady938 2 months ago

    Things are judged by their appearance. And this video looks way way better than it actually is. That explains the views.

  • @punk3900
    @punk3900 1 month ago

    This was prophetic. I wonder whether at that time you realized that Transformer would revolutionize the world.

  • @1HARVEN1
    @1HARVEN1 1 year ago +2

    Hey it's the guy from the beer channel...

  • @tahmeed702
    @tahmeed702 8 months ago

    Need an explanation for GRU, BERT, LSTM

  • @ibrahemahmed6399
    @ibrahemahmed6399 2 months ago

    I think he writes on the glass normally and the camera gets it backwards, so they flip it in editing so that the written words are shown normally.

  • @Damodharanjay
    @Damodharanjay 11 months ago +1

    Aged like a wine!

  • @sabahshams1582
    @sabahshams1582 2 months ago

    Hi, what does an autoregressive language model mean?

  • @thirtydays1982
    @thirtydays1982 1 year ago +1

    how do I use transformers on a new language pair?

  • @user-il9vr9oe7b
    @user-il9vr9oe7b 24 days ago

    How do you get loads of loss on a neural network in given ways for analytics?

  • @zzador
    @zzador 1 year ago +1

    Transformers: More than meets the eye...

  • @raghavendrasooda5368
    @raghavendrasooda5368 7 months ago

    Sir, will you give me a research topic in transformers?

  • @anatolydyatlov963
    @anatolydyatlov963 3 months ago

    How are you able to write a mirror image of the words so effortlessly? :O

  • @markadyash
    @markadyash 2 years ago +2

    how can a text algorithm (transformer) work in the image domain, like a vision transformer over CNNs?

    • @ChocolateMilkCultLeader
      @ChocolateMilkCultLeader 2 years ago +1

      Transformers are being used in many ways. For example, you could take a bunch of vectors (representing image features extracted from convolutions) and feed them into transformers to decode as text. This gives you a lot of power combining the NLP and computer vision domains.

    • @strongsyedaa7378
      @strongsyedaa7378 1 year ago

      @@ChocolateMilkCultLeader
      Generic features or specific?

    • @ChocolateMilkCultLeader
      @ChocolateMilkCultLeader 1 year ago

      @@strongsyedaa7378 what do you mean?

  • @SciFiFactory
    @SciFiFactory 1 month ago

    So is it like ... a layered, parallelized autoencoder?

  • @EarningsApps
    @EarningsApps 1 year ago +1

    can we use transformers over spacy for NER?

  • @tartariazo5237
    @tartariazo5237 11 months ago

    IBM: Next-Level Tech explained.
    Chat: How does he write backwards on that invisible board?

    • @IBMTechnology
      @IBMTechnology  11 months ago +1

      See ibm.biz/write-backwards

  • @BigAsciiHappyStar
    @BigAsciiHappyStar 3 months ago

    Why did the attention mechanism NOT cross the road? Because it was paralyzed!😜😁
    BTW did I hear that part correctly near the end of the video?

  • @Jack_o3654
    @Jack_o3654 2 months ago

    I just have to say it
    TRANSFORMERS, MORE THAN MEETS THE EYE!

  • @zvxcxczv
    @zvxcxczv 1 year ago

    this dude can write in reverse. so awesome

    • @andrewnorris5415
      @andrewnorris5415 1 year ago

      ha. it looks the right way around to him. The final image is inverted in the video we see. Fun trick.

  • @festusbojangles7027
    @festusbojangles7027 1 year ago +5

    the joke was just too deep for your puny mind to get

  • @sudarshinirasa6913
    @sudarshinirasa6913 2 years ago +2

    Can we use this method to detect outliers in time series data

    • @TheShawMustGoOn
      @TheShawMustGoOn 2 years ago +1

      While you can use Transformers for Time Series, I'm not sure why you'd want some network architecture to look for outliers instead of regularizing it and let the network learn to ignore those during optimization.

    • @coffle1
      @coffle1 1 year ago +2

      Transformers are a bit overkill for anomaly detection. A lot of the time, more traditional methods might perform better and faster (especially if the resources for training the models are constrained, like not having dedicated chips or having an insufficient amount of training data).

  • @sohambhattacharjee951
    @sohambhattacharjee951 9 months ago +1

    Now it can indeed write funny banana jokes!!

  • @saatvikmangal7994
    @saatvikmangal7994 4 months ago

    Latest update on banana humor of AI
    Why did the banana go to the doctor?
    Because it wasn't peeling well! - GPT 3.5 11th January 2024, 23:06 IST

  • @Bond-zj2ku
    @Bond-zj2ku 2 months ago

    I was searching for Transformer in machine learning, and in my mind I had those same Transformers in there, and the video starts with the same.

  • @MrofficialC
    @MrofficialC 6 months ago

    You do realize the joke about the chicken crossing the road is a suicide joke right? He wanted to get to the other side?

  • @tuapuikia
    @tuapuikia 1 year ago

    Where can I summon an Autobot?

  • @calvink.4511
    @calvink.4511 10 months ago +1

    They've got better jokes now. 😂

  • @sang-suangam9772
    @sang-suangam9772 2 years ago +3

    the banana … skidded …

    • @normacenva
      @normacenva 2 years ago +2

      it wanted to split

  • @michaelcharlesthearchangel
    @michaelcharlesthearchangel 1 year ago

    I don't like people ripping Me off, whether IBM or Google.

  • @emirsahin7167
    @emirsahin7167 3 months ago

    Is he writing in reverse so we can see it correctly?

  • @samahirrao
    @samahirrao 2 months ago

    Indian SMEs might be able to create this and become a unicorn. Easily.

  • @norbertfeurle7905
    @norbertfeurle7905 1 year ago

    Do I get this right, that a transformer is a special case of a state machine, which is designed to learn, or update its weights, on demand, and is still general enough to cover most data? Wouldn't an FPGA be optimal to implement such a state machine in flip-flops, so that you can generate at 100 MHz?

    • @nestorlopez7071
      @nestorlopez7071 1 year ago +1

      It really all boils down to performing matrix multiplications. GPUs are best at that. An FPGA can be a GPU if it wants to (:

  • @dagreatcow
    @dagreatcow 1 year ago +3

    Optimus Prime

  • @MikeHowles
    @MikeHowles 1 year ago

    I came here to understand how on earth he writes backwards or what camera trickery I am obviously missing, LOL.

    • @IBMTechnology
      @IBMTechnology  11 months ago +1

      See ibm.biz/write-backwards

    • @MikeHowles
      @MikeHowles 11 months ago

      @@IBMTechnology LOL thanks!!! I suppose it shouldn't surprise me there is a video about that. Very cool and elegant technique.

  • @rongarza9488
    @rongarza9488 5 months ago

    Correct me if I'm wrong but it seems that translating a document would require a human doing Quality Control right before publishing. Transformers are impressive in how close they come to mimicking humans but they seem to be The Great Pretenders. Now, how does that QC step get implemented in real time?

  • @ZelForShort
    @ZelForShort 1 year ago +4

    In reference to the summary-of-an-article example, how does that work? How does the program know to summarize the article and not continue it?
    Also, how do you go from language processing to playing chess or other games or functions?

    • @damianliew5243
      @damianliew5243 1 year ago +3

      I'm not a machine learning expert so I can't verify the validity of this answer, but from my POV I think these questions about "how the program... instead of..." generally depend on
      1. The actual architecture of the model (in this case, a transformer)
      2. The input data it's based upon (text vs maybe piece type and board position labels for a chessboard)
      3. The output data it's trying to predict (predict a summary text vs predict the next words in an article)
      Because such supervised/semi-supervised learning models learn off labelled data (to a certain extent, for semi-supervised learning), all the model is really doing is mapping an input to an output. Think of it like a maths graph (which is actually exactly what it is); given a dataset with many points, you'd want to find a "best fit" line that models the rough trend accurately without over- or underfitting. Machine learning models do this but on many axes (due to the use of vectors, some with just an insane number of dimensions).
      Of course there are many other things like hyperparameters, activation functions, loss functions, and nuances specific to each model architecture, but hopefully this gives you a good understanding of ML in general.
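      A toy version of that "best fit line over labelled points" picture, sketched in NumPy with made-up numbers and one input dimension instead of the thousands a transformer uses:
      ```python
      import numpy as np

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # inputs (labelled examples)
      y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])      # target outputs
      slope, intercept = np.polyfit(x, y, deg=1)   # "training": fit y ~= slope*x + intercept
      print(round(slope, 2), round(intercept, 2))  # the two learned "weights" of this toy model
      ```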

    • @xerxel69
      @xerxel69 1 year ago +2

      A summary is a continuation of the text in that case. Consider a webpage on the internet which has an article, and then at the bottom of the page it says, "here is a summary of the key points we learned above" and goes on to summarise. This is an example of the kind of content the AI is trained on. So as long as you do some prompt engineering, you can ask your question in such a way that the answer comes from completing the text! It's like magic! 🙂
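      As a concrete (hypothetical) illustration of that prompt-engineering idea, summarisation can be framed as plain text continuation, e.g. appending a cue like "TL;DR:" and letting an autoregressive model keep writing:
      ```python
      # Sketch only: the model call is left abstract because APIs differ.
      article = "Transformers are sequence models built around an attention mechanism. ..."
      prompt = article + "\n\nTL;DR:"
      # summary = language_model.generate(prompt)   # the model simply continues the text
      ```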

    • @andrewnorris5415
      @andrewnorris5415 1 year ago

      @@xerxel69 Yeah, articles do often contain a summary section at the end. Or parts of an essay say, "To summarise so far". Not sure if it can learn this totally unsupervised. My guess is summaries are a popular feature - so they will train it specifically to look for them and learn from them in a focused way. Not sure though.

  • @sohailpatel7549
    @sohailpatel7549 8 months ago

    Instead of the content I started thinking about how this guy is writing in the opposite direction 😭😂😂 Is this some AI trick or fr?!

  • @watherby29
    @watherby29 11 months ago

    And with this simple idea the civilization ends. No, kidding, the AI will be so smart, it will leave us alone as we will be like bugs to it.

  • @ChocolateMilkCultLeader
    @ChocolateMilkCultLeader 2 years ago +1

    Are you guys open to guest speakers?

  • @exploradorexplorador7404
    @exploradorexplorador7404 1 year ago

    The banana joke is an instance of an “anti-joke”… just like the chicken joke.

  • @animalfrendo
    @animalfrendo 1 year ago

    But how does the human write backwards?

  • @danhetherington1335
    @danhetherington1335 2 months ago

    I don't think the joke was that bad. Picture Meatwad from Aqua Teen Hunger Force, but very pale beige.

  • @randomcheese1719
    @randomcheese1719 1 month ago

    it doesn't "come up" with a thing, it regurgitates what it's learned. It's nothing but a copy machine and is being made out to be much more than it really is by all the AI hype machine artists.

  • @talhaeneskoksal4893
    @talhaeneskoksal4893 1 year ago +1

    Why do they always translate an English sentence to French in every video that explains transformers :D

  • @user-jl5gj4mv1z
    @user-jl5gj4mv1z 25 days ago

    I didn't get it

  • @valentingorrin4541
    @valentingorrin4541 5 months ago

    I can't concentrate, I can't understand how he manages to write backwards

  • @vincent_hall
    @vincent_hall 1 year ago +1

    Well, jokes are hard.
    Kids take several years to learn how to be funny.

  • @amudhanbakthavathsalu5308
    @amudhanbakthavathsalu5308 4 months ago

    not very descriptive... it is for those who are already studying sequencing, encoder-decoder, etc. in depth.

  • @robertweekes5783
    @robertweekes5783 1 year ago

    The joke would’ve worked if it was a potato. Pretty close though.

  • @davejones542
    @davejones542 4 months ago

    ask it why did the potato cross the road

  • @curtisnewton895
    @curtisnewton895 1 year ago +2

    ok but how about a more detailed explanation?

  • @roodrigato
    @roodrigato 7 months ago

    wait, does this guy write backwards?

  • @jayseph9121
    @jayseph9121 7 months ago

    are you writing backwards in real time? because if so..... 🤯

    • @IBMTechnology
      @IBMTechnology  7 months ago

      See ibm.biz/write-backwards

    • @jayseph9121
      @jayseph9121 7 months ago

      @@IBMTechnology one of the few times in my life I wish to be lied to 😂

  • @dabrowsa
    @dabrowsa 3 months ago

    Did I miss something? This didn't seem to give any clue as to how transformers actually work.

  • @quantarank
    @quantarank 10 months ago

    Your skills in writing backwards were really distracting.

    • @IBMTechnology
      @IBMTechnology  10 months ago

      See ibm.biz/write-backwards for how it's done

  • @carlowood9834
    @carlowood9834 11 months ago

    You didn't really explain anything.

  • @blkscreen15
    @blkscreen15 4 months ago

    didn't find it helpful to conceptually understand transformers

  • @zbeast
    @zbeast 2 months ago

    To reach the other bunch. - ChatGPT 3.5