LSTM is dead. Long Live Transformers!

  • Published: 1 Oct 2024

Comments • 297

  • @vamseesriharsha2312
    @vamseesriharsha2312 4 years ago +67

    Good to see Adam Driver working on transformers 😁

  • @FernandoWittmann
    @FernandoWittmann 4 years ago +358

    That's one of the best deep-learning presentations I've seen in a while! It not only introduces transformers but also gives an overview of other NLP strategies, activation functions, and best practices for using optimizers. Thank you!!

    • @ahmadmoussa3771
      @ahmadmoussa3771 4 years ago +5

      I second this! The talk was such a joy to listen to

    • @aashnavaid6918
      @aashnavaid6918 4 years ago +2

      in about 30 minutes!!!!

    • @jackholloway7516
      @jackholloway7516 2 years ago

      :¥£€€’

    • @jbnunn
      @jbnunn 1 year ago

      Agree -- I've watched half a dozen videos on transformers in the past 2 days, I wish I'd started with Leo's.

  • @sanjivgautam9063
    @sanjivgautam9063 4 years ago +232

    For anyone feeling overwhelmed, that's completely reasonable, as this video is just a 28-minute recap for experienced machine learning practitioners, and a lot of them are just spamming the top comments with "This is by far the best video", "Everything is clear with this single video" and so on.

    • @adamgm84
      @adamgm84 4 years ago +23

      Sounds like it is my lucky day then, for me to jump from noob to semi-non-noob by gathering thinking patterns from more-advanced individuals. I will fill in the swiss cheese holes of crystallized intelligence later by extrapolating out from my current fluid intelligence level... or something like that. Sorry I'll see myself out.

    • @svily0
      @svily0 4 years ago +7

      I was about to make a remark about the presenter speaking like a machine gun at the start. I can't even follow such a pace even in my native language, on a lazy Sunday afternoon with a drink in my hand. Who cares what you say if no one manages to understand it??? Easy, easy boy... slow down, no one cares how fast you can speak, what matters is what you are able to explain. (so the others understand it).

    • @ВиталийБуланенков
      @ВиталийБуланенков 4 years ago +12

      @@svily0 >I can't even follow such a pace even in my native language
      maybe that's the issue?

    • @svily0
      @svily0 4 years ago +2

      @@ВиталийБуланенков Well, it could well be, but on the flip side I have a master's degree. It could not be just that. ;)

    • @Nathan0A
      @Nathan0A 4 years ago +6

      This is by far the best comment, Everything is clear after reading this single comment! Thank you all

  • @JeffCaplan313
    @JeffCaplan313 1 year ago +11

    Transformers seem overly prone to recency bias.

  • @richardosuala9739
    @richardosuala9739 4 years ago +45

    Thank you for this concise and well-rounded talk! The pseudocode example was awesome!

  • @ajitkirpekar4251
    @ajitkirpekar4251 3 years ago +24

    It's hard to overstate just how much this topic has transformed (and is still transforming) the industry. As others have said, understanding it is not easy because there are a bunch of components that don't seem to align with one another, and overall the architecture is such a departure from the most traditional things you are taught. I myself have wrangled with it for a while and it's still difficult to fully grasp. Like any hard problem, you have to bang your head against it for a while before it clicks.

  • @monikathornton8790
    @monikathornton8790 4 years ago +53

    Great talk. It's always thrilling to see someone who actually knows what they're supposedly presenting.

  • @Scranny
    @Scranny 4 years ago +14

    12:56 The review of the pseudocode of the attention mechanism was what finally helped me understand it (specifically the meaning of the Q, K, V vectors), which is what other videos were lacking. In the second outer for loop, I still don't fully understand why it loops over the length of the input sequence. The output can be of a different length, no? Maybe this is an error. Also, I think he didn't mention the masking of the remaining output at each step so the model doesn't "cheat".

    • @Splish_Splash
      @Splish_Splash 1 year ago

      For every word we compute its query, key, and value vectors, so we need to loop over the sequence.
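
      A minimal NumPy sketch of that loop (hypothetical names; single head, unbatched, simplified). Every output position i scores every input position j, which is why both loops run over the same sequence length; in the decoder, a causal mask would additionally zero the weights for j > i so the model can't "cheat":

          import numpy as np

          def self_attention(X, Wq, Wk, Wv):
              """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) learned projections."""
              n = X.shape[0]
              Q, K, V = X @ Wq, X @ Wk, X @ Wv      # one query/key/value per token
              out = np.zeros((n, V.shape[1]))
              for i in range(n):                    # for every output position...
                  scores = np.array([Q[i] @ K[j] for j in range(n)])  # ...score every input position
                  w = np.exp(scores / np.sqrt(K.shape[1]))
                  w /= w.sum()                      # softmax over the whole sequence
                  out[i] = w @ V                    # weighted average of value vectors
              return out                            # same length as the input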

  • @ax5344
    @ax5344 3 years ago +2

    1. At 10:17 the speaker says all we need is the encoder part for a classification problem. Is this true? What about BERT: when we use BERT encodings for classification, say sentiment analysis, is it only the encoder part that has been doing the work?
    2. At 12:25 the slide is really clear in explaining relevance[i,j], but the example is translation, so clearly it is not only about the "encoder part". In the encoder, how is relevance[i,j] computed? What is the difference between key and value? It seems they are all values of the input vector. Aren't they the same in the encoder?
    Thank you!

    • @trevorclark2186
      @trevorclark2186 2 years ago

      Good question... key and value seem symmetric. I was expecting symmetry in a self-attention model, but I can't quite understand how this works with the key/value analogy.
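
      One way to see why nothing forces symmetry: in the encoder, queries, keys, and values are all computed from the same input tokens, but through three separately learned projection matrices. A rough sketch (made-up sizes and names):

          import numpy as np

          rng = np.random.default_rng(0)
          d_model, d_head, seq_len = 8, 4, 5
          X = rng.normal(size=(seq_len, d_model))      # encoder input: one row per token

          Wq = rng.normal(size=(d_model, d_head))      # three *different* learned projections
          Wk = rng.normal(size=(d_model, d_head))      # of the very same tokens
          Wv = rng.normal(size=(d_model, d_head))

          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          relevance = Q @ K.T                          # relevance[i, j] = q_i . k_j
          print(np.allclose(relevance, relevance.T))   # False: Wq != Wk, so no symmetry is imposed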

  • @SideOfHustle
    @SideOfHustle 10 months ago +2

    Are there really half a million of you out there who understand this?

  • @cliffrosen5180
    @cliffrosen5180 1 year ago +1

    Wonderfully clear and precise presentation. One thing that tripped me up, though, is this formula at 4 minutes in:
    H_{i+1} = A(H_i, x_i)
    Seems this should rather be:
    H_{i+1} = A(H_i, x_{i+1})
    which might be more intuitively written as:
    H_i = A(H_{i-1}, x_i)
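
    In code form, the corrected recurrence looks like this (a toy sketch with a made-up tanh cell, not the talk's exact notation): the state after step i has absorbed inputs x_1 through x_i.

        import numpy as np

        def run_rnn(cell, h0, xs):
            """h_i = cell(h_{i-1}, x_i), applied left to right over the sequence."""
            h = h0
            for x in xs:                   # x_1, x_2, ..., x_n
                h = cell(h, x)
            return h                       # final hidden state h_n

        rng = np.random.default_rng(0)
        W, U = rng.normal(size=(3, 3)), rng.normal(size=(3, 2))
        cell = lambda h, x: np.tanh(W @ h + U @ x)                  # minimal vanilla RNN cell
        h_n = run_rnn(cell, np.zeros(3), rng.normal(size=(6, 2)))   # 6 steps of 2-dim input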

  • @lmao4982
    @lmao4982 1 year ago +5

    This is like 90% of what I remember from my NLP course with all the uncertainty cleared up, thanks!

  • @Achrononmaster
    @Achrononmaster 4 years ago +2

    You folks need to look into asymptotics and Padé approximant methods, or, for functions of many variables as ANNs are, the generalized Canterbury approximants. There is not yet a rigorous development in information-theoretic terms, but Padé summations (essentially repeated-fraction representations) are known to yield rapid convergence to correct limits for divergent Taylor series in non-converging regions of the complex plane. What this boils down to is that you only need a fairly small number of iterations to get very accurate results if you only require approximations. To my knowledge this sort of method is not being used in deep learning, but it has been used by physicists in perturbation theory. I think you will find it extremely powerful in deep learning. Padé (or Canterbury) summation methods, when generalized, are a way of extracting information from incomplete data. So if you use a neural net to get the first few approximants, and assume they are modelling an analytically continued function, then you have a series (the node activation summation) you can Padé-sum and extract more information than you'd be able to otherwise.

  • @ProfessionalTycoons
    @ProfessionalTycoons 4 years ago +25

    RIP LSTM 2019, she/he/it/they would be remembered by....

    • @mohammaduzair608
      @mohammaduzair608 4 years ago +3

      Not everyone will get this

    • @dineshnagumothu5792
      @dineshnagumothu5792 4 years ago +4

      Still, LSTM works better with long texts. It has its own use cases.

    • @mateuszanuszewski69
      @mateuszanuszewski69 4 years ago

      @@dineshnagumothu5792 you obviously didn't get it. it is "DEAD", lol. RIP LSTM.

  • @georgejo7905
    @georgejo7905 4 years ago +17

    Interesting, this looks a lot like my signals class: how to implement various filters on a DSP.

  • @rohitdhankar360
    @rohitdhankar360 1 year ago +1

    @10:30 - Attention is all you need -- Multi Head Attention Mechanism --

  • @TheBilly
    @TheBilly 3 years ago +1

    Keep the microphone to your mouth so hbpffpfpfph because that's really annoying right in pfpppbhbhffff of a sentence.

  • @riesler3041
    @riesler3041 3 years ago +2

    Presentation: perfect
    Explanation: perfect
    me (every 10 mins): " but that belt tho... ehh PERFECT!"

  • @BcomingHIM
    @BcomingHIM 4 years ago +52

    All I want is his level of humility and knowledge.

    • @pazmiki77
      @pazmiki77 4 years ago +4

      Don't just want it, make it happen then. You could literally do this.

    • @pi5549
      @pi5549 3 years ago

      Find the humility to get your head down and acquire the knowledge. Let the universe do the rest.

  • @evennot
    @evennot 4 years ago +13

    I was trying to use a similar super-low-frequency sine trick for audio sample classification (to give the network more clues about attack/sustain/release positioning). I never knew that one can use several of those at different phases. Such a simple and beautiful trick.
    The presentation is awesome.

  • @JagdeepSandhuSJC
    @JagdeepSandhuSJC 3 years ago +13

    Leo is an excellent professor. He explains difficult concepts in an easy-to-understand way.

  • @Kevin_Kennelly
    @Kevin_Kennelly 1 year ago

    When using acronyms, it is not good to LRTD. And do not ever GLERD. People won't understand the SMARG.
    .
    It does help if you ETFM (Explain the Fu*king Meaning) as you write.

  • @_RMSG_
    @_RMSG_ 1 year ago +2

    I love this presentation.
    It doesn't assume that the audience knows far more than is necessary, goes through explanations of the relevant parts of transformers, notes shortcomings, etc.
    Best slideshow I've seen this year, and it's from over 3 years ago.

  • @lukebitton3694
    @lukebitton3694 4 years ago +4

    I've always wondered how standard ReLUs can provide non-trivial learning if they are essentially linear for positive values. I know that with standard linear activation functions any deep network can be reduced to a single-layer transformation. Is it the discontinuity at zero that stops this being the case for ReLU?

    • @lucast2212
      @lucast2212 4 years ago +9

      Exactly. Think of it like this. A matrix-vector multiplication is a linear transformation. That means it rotates and scales its input vector. That is why you can write two of these operations as a single one (A_matrix * B_matrix * C_vec = D_matrix * C_vec), and also why you can add scalar multiplications in between (which is what a linear activation would do, and is just a scaling operation on the vector). But if you only scale some of the entries of the vector (ReLU), that does not work anymore.
      If you take a pen, rotating and scaling it preserves your pen, but if you want to scale only parts of it, you have to break it.
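
      A quick numeric check of that argument (made-up matrices): stacked linear layers collapse into one matrix, and ReLU escapes that only because the 0-or-1 "scaling" it applies to each coordinate depends on the input's sign.

          import numpy as np

          rng = np.random.default_rng(0)
          A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
          x = rng.normal(size=4)

          # Two linear layers are exactly one linear layer (D = A @ B):
          print(np.allclose(A @ (B @ x), (A @ B) @ x))     # True

          # ReLU acts like a diagonal matrix of 0s and 1s, but *which* diagonal
          # depends on the sign pattern of its input -- that input-dependence is
          # the nonlinearity that prevents the collapse.
          relu = lambda v: np.maximum(v, 0.0)
          h = B @ x
          gate = np.diag((h > 0).astype(float))
          print(np.allclose(relu(h), gate @ h))            # True, but `gate` changes with x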

    • @lukebitton3694
      @lukebitton3694 4 years ago +1

      @@lucast2212 Cheers! good explanation, thanks.

  • @maciej2320
    @maciej2320 8 months ago +1

    Four years ago! Shocking.

  • @boogaloobomber9889
    @boogaloobomber9889 3 years ago +2

    At 4:15 shouldn't it be
    H[i+1] = A(H[i] ; x[i+1]) ?

    • @LeoDirac
      @LeoDirac 3 years ago +1

      YES! Ooops, sorry about that. Good catch.

  • @BartoszBielecki
    @BartoszBielecki 1 year ago +3

    The world deserves more lectures like this one. I don't need examples of how to tune a U-Net, but rather an overview of this huge research space and the ideas underneath each group.

  • @Lumcoin
    @Lumcoin 3 years ago

    (Sorry for the lack of technical terms.) I didn't completely get how transformers work with regard to positional information: isn't X_in the information from the previous hidden layer? That is not enough for the network, because the input embeddings lack any temporal/positional information, right? But why not just add one new linear temporal value to the embeddings instead of many sine waves at different scales?

  • @謝其宏-p3z
    @謝其宏-p3z 4 years ago +24

    This video is incredibly good. It keeps things short and clear enough. Would you allow me to add a Chinese translation?

    • @kamisama3099
      @kamisama3099 4 years ago +1

      If you have translated it into Chinese, please let me know and give me the link, thank you

    • @seattleapplieddeeplearning
      @seattleapplieddeeplearning  4 years ago

      That would be great! I don't know of any YouTube feature to delegate that permission, but if there is one, let us know how. 谢谢你的帮助! (Thanks for your help!)

  • @oguzhanercan4701
    @oguzhanercan4701 2 days ago

    LSTMs dead? Yep... the simple parity-bit problem cannot be solved by transformers, but it can by LSTMs :)

  • @rp88imxoimxo27
    @rp88imxoimxo27 3 years ago +1

    Nice video but forced to watch on 2x speed trying not to fall asleep

  • @gmbueno
    @gmbueno 4 years ago +1

    9:07, could you please explain what you mean by "Needs specific labelled dataset for every task"?
    I literally just trained an LSTM network (a char-RNN based on Karpathy's, github.com/sherjilozair/char-rnn-tensorflow) by just giving it unlabeled text.

    • @LeoDirac
      @LeoDirac 4 years ago

      That sentence isn't accurate, especially out of context. You can always train LSTM's unsupervised like you did. But the point I'm explaining is that "transfer learning never really worked" - which is to say you usually can't use a pre-trained model on a new problem.

  • @ismaila3347
    @ismaila3347 4 years ago +8

    This finally made it clear to me why RNNs were introduced! Thanks for sharing.

  • @23232323rdurian
    @23232323rdurian 1 year ago

    The Eng:French matrix/diagram from 11:35 shows attention between an English and a French vector. But that would involve both the ENCODing and DECODing... how they interact.
    Whereas the speaker is discussing *only* the internals of the ATTENTION mechanism in the encoder at this point.
    I'd really like to see a similar matrix/diagram illustrating the use of attention WITHIN the ENCODing pass... it wouldn't involve French at all at this point, cuz the ENCODER hasn't even gotten to the shared representation yet... the machine version of the input that comes AFTER the ENCODE, but BEFORE the DECODE...
    ==> and you're not alone, I see this same vagueness elsewhere in other accounts of Transformer processing...
    ==> but then, most likely I just misunderstand...

  • @thetruereality2
    @thetruereality2 3 years ago +1

    7:25 Can you explain to me what he means by two hidden states?

    • @LeoDirac
      @LeoDirac 3 years ago

      Literally it means that at each time step, there are two different state vectors passed from one LSTM cell to the next in the time sequence. What they each do or how they are distinct is not entirely clear to me. But structurally, the top one in the diagram (usually called C) acts like a ResNet in that new information is only added to it at each time step, making the gradient path simpler, and training easier. The bottom one (usually called h) is more like a vanilla RNN, responding quickly and directly to the input at that time step. So it's probably reasonable to think of them as representing slower & faster moving changes in the state - capturing interactions that are either closer together in the inputs or stretch over longer ranges.
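
      The standard textbook update makes the two paths visible (a sketch with hypothetical weight names, one time step): c is only gated and added to, while h is recomputed from the current input.

          import numpy as np

          sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

          def lstm_step(x, h, c, W, U, b):
              """One step. x: (X,), h and c: (H,), W: (4H, X), U: (4H, H), b: (4H,)."""
              i, f, o, g = np.split(W @ x + U @ h + b, 4)   # input/forget/output gates + candidate
              i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
              c_new = f * c + i * g            # "C": additive, ResNet-like path through time
              h_new = o * np.tanh(c_new)       # "h": reacts directly to the current input
              return h_new, c_new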

  • @laykefindley6604
    @laykefindley6604 4 years ago +1

    Ha ha, is the constitution spam. Subtle.

  • @Jirayu.Kaewprateep
    @Jirayu.Kaewprateep 2 years ago

    📺💬 Yui krub I give you ice cream 🍦 when we plot sine wave in the word sentiment we still see some relationship that can be converted into word sequences in the sentence.
    🐑💬 It is possible and what you to do with the time domain when input is in bunches of frequencies with the time-related relationship.
    🥺💬 I hope they can mixed together with embedding or shuffling but remain the information within the same set of the inputs.
    🐑💬 You plot the Sigmoid function, Tanh and reLU and yes you can do a direct compares the estimated values within the same time domain.
    📺💬 Now give me some see what me dress like ⁉️ 👧💬 There are many points one significant see is low precisions network machine when execution with less precision but high accuracy.
    📺💬 Words CNN it can do some tasks better for di-grams tri-grams tasks it is working as CNN layer. 🐑💬 That is meaning we can add label or additional data into it ⁉️
    👧💬 Do you mean the scores, good, bad or some properties you earn from other networks or training with concatenated layers ⁉️
    🧸💬 You cannot copies and separated each parts when they are working.

  • @axe863
    @axe863 10 months ago

    My greatest successes have come from blending traditional time-series modeling with transformers, like wavelet-denoised ARTFIMA + TFT.

  • @jeffg4686
    @jeffg4686 7 months ago

    Is relevance just how often a word appears in the input?
    Never mind, I looked it up.
    The answer is similarity of tokens in the embedding - ones with higher similarity get more relevance.

  • @kampkrieger
    @kampkrieger 1 year ago

    Typical example of a bad lecture. He only shows stuff without introducing or explaining what he is showing (what is that graph about? what do the axes or the arrows mean?), talks about it, and goes on to the next slide.

  • @sarmadys
    @sarmadys 3 years ago

    The only useful part was the self-attention, which you ruined... I couldn't understand anything from your descriptions.

  • @DoctorMGL
    @DoctorMGL 1 year ago

    I came here for the "Transformers movie" and ended up watching something I didn't understand s*t of;
    the whole video was like an alien language to me.

  • @TruthOfZ0
    @TruthOfZ0 1 year ago

    Well, the problem is that you are using AI to make it learn, wasting time and resources, rather than using machine learning as an optimizer, which is a better use of neural networks!
    What I mean is that most don't get when you are supposed to use a neural network as an AI that learns from data, and when to use a neural network for machine learning as an optimizer!!
    You need an engineer for that, not a PhD IT professor xD. Stop wasting your time and hire more engineers!!!

  • @BoersenCrashKurs
    @BoersenCrashKurs 3 years ago +1

    I want to use transformers for time-series analysis where the dataset includes individual-specific effects. What do I do? In this case, is the only possibility to match the batch size to the length of each individual's data?

    • @LeoDirac
      @LeoDirac 3 years ago

      No, batch and time will be different tensor dimensions. If your dataset has 17 features, and the length is 100 time steps, then your input tensor might be 32x100x17 with a batch size of 32.
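
      For instance, a batch built to those numbers might look like this (illustrative only):

          import numpy as np

          batch_size, seq_len, n_features = 32, 100, 17
          # 32 independent series per batch, each with 100 time steps of 17 features.
          batch = np.zeros((batch_size, seq_len, n_features))
          print(batch.shape)           # (32, 100, 17): batch and time are separate axes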

  • @morgengabe1
    @morgengabe1 1 year ago

    If only he'd discovered this before thinking up WeWork.

  • @DeltonMyalil
    @DeltonMyalil 8 months ago +1

    This aged like fine wine.

  • @Davourflave
    @Davourflave 4 years ago +5

    Very nice recap of transformers and what sets them apart from RNNs! Just one little remark: you are not doing things in N^2 for the transformer, since you fixed your N to be at most some sequence length.
    You can now set this N to be a much bigger number, as GPUs have been highly optimized to do the corresponding multiplications. However, for long sequence lengths, the quadratic nature of an all-to-all comparison is going to be an issue nonetheless.

  • @hailking5588
    @hailking5588 2 years ago +1

    Does anybody know why transfer learning never really worked with LSTMs? Any links or papers on that?

    • @LeoDirac
      @LeoDirac 2 years ago +1

      I've never read any papers about this - just my personal experience and talking to colleagues. If I had to guess, I'd say it's related to the fact that LSTM's are really tough to train. Which is not surprising if you think about them as incredibly deep networks (depth = sequence length) but the weights are re-used at every layer. Those few parameters get re-used for a lot of things. Transfer learning necessarily means being able to _quickly_ retrain a network on a new task. But training is never fast with an LSTM. That's just my speculation though.

  • @MikeAirforce111
    @MikeAirforce111 4 years ago +1

    "LSTM is just like ResNet" ..... MAYBE... Just maybe.... it's the other way around :-D

    • @himanshuagarwal4673
      @himanshuagarwal4673 4 years ago

      Maybe the people who designed ResNet can tell us whether they were thinking of LSTMs when designing the network.

  • @BlockDesignz
    @BlockDesignz 4 years ago +1

    This is brilliant.

  • @bruce-livealifewewillremem2663
    @bruce-livealifewewillremem2663 4 years ago +1

    Dude, can you share your PPT or PDF? Thanks in advance!

  • @SuilujChannel
    @SuilujChannel 4 years ago +5

    Question regarding 26:27:
    if I plan on analyzing time-series sensor data, should I stick with LSTMs, or is the transformer model a good choice for time-series data?

    • @isaacgroen3692
      @isaacgroen3692 4 years ago +4

      I could use an answer to this question as well

    • @akhileshrai4176
      @akhileshrai4176 4 years ago

      @@isaacgroen3692 Damn I have the same question

    • @abdulazeez7971
      @abdulazeez7971 4 years ago +8

      You need to use LSTMs for time series.
      Because in transformers it's all about attention, i.e. positional intelligence, which has to be learnt.
      Whereas in time series it's all about the trends and patterns, which requires the model to remember a complete sequence of data points.

    • @SuilujChannel
      @SuilujChannel 4 years ago +1

      @@abdulazeez7971 thanks for the info :)

    • @Jason-jk1zo
      @Jason-jk1zo 4 years ago +8

      The primary advantages of the transformer are attention and positional encoding, which are quite useful for translation because grammar differences between languages may reorder the input and output words. But time-series sensor data is not reordered (comparing output with input)! An RNN such as an LSTM is a suitable choice for analyzing such data.

  • @vijayabhaskar-j
    @vijayabhaskar-j 4 years ago +4

    Uploaded a month ago but has just 150 views and just 24 subs? WTH?

    • @vsiegel
      @vsiegel 4 years ago +2

      @@vothka205 But ML uses cats and dogs too!

  • @joneskiller8
    @joneskiller8 9 months ago +1

    I need that belt.

  • @Johnathanaa7
    @Johnathanaa7 4 years ago +13

    Best transformer presentation I’ve seen hands down. Nice job!

  • @briancase6180
    @briancase6180 3 years ago +2

    Thanks for this! It gets to the heart of the matter quickly and in an easy to grasp way. Excellent.

  • @johnnyBrwn
    @johnnyBrwn 1 year ago

    This is such a rich talk. He should definitely change the title. I've searched far and wide for a lucid explanation of LSTMs - this is the best online, but it doesn't seem that way because of the odd title.

  • @maloukemallouke9735
    @maloukemallouke9735 3 years ago

    Thanks so much for the video. Can I ask if anyone knows where I can find a pre-trained model to identify numbers from 0 to 100 in an image? Not handwritten specifically, and they can be at any position in the image.
    Thanks in advance.

  • @ruevers
    @ruevers 4 years ago

    Almost always, these videos on YouTube are a waste of time: just talking and no real examples or practical stuff. All the same, too much talk, nothing real. If it had some theory, OK, but it doesn't even have that.

  • @ahmetgunes4095
    @ahmetgunes4095 4 years ago

    The presentation is good, but the presenter makes too many unnecessary jokes and murmurs too much. It is difficult to follow without pausing, because attention is all I need and this kind of presenting disturbs it.

  • @alvinko9257
    @alvinko9257 3 years ago +1

    What a nutcase

  • @beire1569
    @beire1569 1 year ago

    ooooh I so want to see a documentary about this ==> @25:20

  • @felipevaldes7679
    @felipevaldes7679 1 year ago

    Leo Dirac: Can't pretrain on large corpus
    Sam Altman: Hold my beer...

    • @LeoDirac
      @LeoDirac 8 months ago

      While I appreciate the association, what did I say to imply you can't pretrain on a large corpus? In the summary "Key Advantages of Transformers" I wrote "Can be trained on unsupervised text; all the world's text data is now valid training data."

  • @GoogleUser-ee8ro
    @GoogleUser-ee8ro 1 year ago

    This beautiful talk is from before OpenAI's GPT; the world badly needs an update.

    • @JohnNy-ni9np
      @JohnNy-ni9np 1 year ago

      Unfortunately OpenAI is closed source by now; people cannot openly talk about its internal structure anymore.

  • @Handelsbilanzdefizit
    @Handelsbilanzdefizit 3 years ago

    I'll train my transformer with the comments below ^^

  • @gauravkantrod1205
    @gauravkantrod1205 4 years ago +1

    Amazing talk. It would be of great help if you could post a link to the documents.

  • @mongojrttv
    @mongojrttv 3 years ago

    I was curious about machine learning and feel like I'm getting a lesson on how to speak in hieroglyphs.

  • @arparwan
    @arparwan 3 years ago

    Good summary of the RNN models. This video is not for newbies though.

  • @MokhlesBouzaien
    @MokhlesBouzaien 4 years ago +6

    11:29 was that French? Nice explanation tho!

    • @NkThor
      @NkThor 4 years ago

      Yeah he's reading the translation on the left side.

    • @LeoDirac
      @LeoDirac 4 years ago +1

      @@NkThor *badly* reading the translation

    • @nineteenfortyeight6762
      @nineteenfortyeight6762 4 years ago

      @@LeoDirac thanks for giving us a facet on which not to feel inferior :)

  • @Stopinvadingmyhardware
    @Stopinvadingmyhardware 1 year ago

    No, don’t care about them.

  • @dabbopabblo
    @dabbopabblo 3 years ago

    None of that could have made sense and I wouldn't know.

  • @giannagiavelli5098
    @giannagiavelli5098 3 years ago

    wrong, not at all like word2vec.

  • @suryamilenial2072
    @suryamilenial2072 3 years ago

    How do you implement transformers and LSTMs in C++?

  • @SanataniAryavrat
    @SanataniAryavrat 4 years ago

    Wow... that was a quick summary of all the NN research of the past many decades...

  • @terjeoseberg990
    @terjeoseberg990 1 year ago

    Did anyone try scaling the matrices so that the eigenvalue is exactly 1?

    • @seattleapplieddeeplearning
      @seattleapplieddeeplearning  1 year ago +1

      (Leo here - sorry if you see this twice, but YT is blocking comments from my account for some reason.)
      Yes! My favorite paper on this topic is from Bengio's group which uses Unitary weight matrices, which are complex-valued, but constrained to have their eigenvalues exactly as 1. arxiv.org/abs/1511.06464 A simpler approach is to just initialize the weight-matrices with real-valued orthonormal matrices, a good summary at smerity.com/articles/2016/orthogonal_init.html
      But overall I think the key thing is that not long after these ideas were being explored, Transformers came along, which are simpler, more robust, and have plenty of other advantages. Critically IMHO, the training depth doesn't scale by the sequence length, which makes convergence much simpler.
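
      A tiny sketch of the simpler idea, orthonormal initialization (illustrative, not from the paper): an orthogonal matrix has all eigenvalue magnitudes equal to 1, so repeatedly applying it neither explodes nor shrinks the state.

          import numpy as np

          def orthogonal_init(n, rng):
              """n x n orthogonal matrix from the QR decomposition of a random matrix."""
              q, r = np.linalg.qr(rng.normal(size=(n, n)))
              return q * np.sign(np.diag(r))       # sign fix so the result is well distributed

          rng = np.random.default_rng(0)
          W = orthogonal_init(64, rng)
          print(np.allclose(W.T @ W, np.eye(64)))                  # True: orthonormal columns
          print(np.abs(np.linalg.eigvals(W)).min().round(6),
                np.abs(np.linalg.eigvals(W)).max().round(6))       # both 1.0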

    • @terjeoseberg990
      @terjeoseberg990 1 year ago +1

      @@seattleapplieddeeplearning, Thanks.

  • @AmeerulIslam
    @AmeerulIslam 4 years ago +4

    I stopped after about 5 minutes... too much for me at this stage!

  • @ziruiliu3998
    @ziruiliu3998 1 year ago

    Supposing I am using a net to approximate a real-world physics ODE with time-series data, is the transformer still the best choice in this case?

    • @seattleapplieddeeplearning
      @seattleapplieddeeplearning  1 year ago

      I'm not sure. I have barely read any papers on this kind of modeling. I will say that a wonderful property of transformers is that they can learn to analyze arbitrary dimensional inputs - it's easy to create positional encodings for 1D inputs (sequence), or 2D (image), or 3D, 4D, 5D, etc. Some physics modeling scenarios will want this kind of input. If your inputs are purely 1D, you could use older NN architectures, but in 2023 there are very few situations where I'd choose an LSTM over a transformer. (e.g. if you need an extremely long time horizon.) -Leo

    • @ziruiliu3998
      @ziruiliu3998 1 year ago

      @@seattleapplieddeeplearning Thanks for your reply, this really helps me.

  • @musicphilebd9862
    @musicphilebd9862 1 year ago

    Schmidhuba comin to get ya !

  • @johnoboyle3097
    @johnoboyle3097 2 years ago

    Any chance this guy is related to Paul Dirac?

  • @ElSenorEls
    @ElSenorEls 2 years ago

    "Then you call fit and that's it"

  • @stevelam5898
    @stevelam5898 1 year ago

    I had a tutorial a few hours ago on how to build an LSTM network using TF only; it left me feeling completely stupid. Thank you for showing there is a better way.

  • @ChrisHalden007
    @ChrisHalden007 1 year ago

    Great video. Thanks

  • @sarab9644
    @sarab9644 3 years ago +1

    Excellent presentation! Perfect!

  • @mikiallen7733
    @mikiallen7733 4 years ago +1

    Does multi-headed attention + positional encoding work as well as or better than a plain vanilla LSTM on numeric input (float or integer) vectors/tensors?
    Your input is highly appreciated.

    • @anoop5611
      @anoop5611 3 years ago

      Not an expert here, but the way attention works is closely tied to the way nearby words are relevant to each other: for example, a pronoun and its corresponding noun. Multi-headed attention would identify more such abstract relationships between words in a window. So if the numeric input sequence has a set of consistent relationships among all its members, then attention would help embed more relational info into the input data, so that processing it becomes easier when honouring this relational info.

  • @driziiD
    @driziiD 1 year ago

    very impressive presentation. thank you.

  • @lucusekali5767
    @lucusekali5767 2 years ago

    I didn't know that Zlatan also teaches deep learning.

  • @lukaznidarsic2838
    @lukaznidarsic2838 4 years ago

    Me, writing my bachelor's thesis partly on LSTMs: FUCK

    • @LeoDirac
      @LeoDirac 4 years ago

      LSTMs still have a very important place in deep learning. Just not for NLP.

  • @tastyw0rm
    @tastyw0rm 1 year ago

    This was more than meets the eye

  • @Alex-gc2vo
    @Alex-gc2vo 4 years ago

    I've never understood the use of sin and cos for positional encoding. Just giving it a linear function would also have carried positional information: 0.2 > 0.1, so it must be after 0.1.

    • @LeoDirac
      @LeoDirac 4 years ago +3

      You are correct - a simple linear function would give the neural net all the positional encoding it needs, and it could figure out all the subsequent relationships from there. But many of those useful relationships would require several/many layers of nonlinear transformations for the NN to figure out -- e.g. if you need to learn a detector like "0.2 < x-y < 0.25" that's necessarily going to take at least two layers simply because each ReLU can only do so much work. Instead, the sin & cos encode more information that we're pretty sure is going to be useful, and thus save the NN the effort of figuring this stuff out itself. That is, the sin/cos encoding make arbitrary-distance positional comparison relationships instantly linearly separable in a single layer, and thus in a sense it "pre-trains" the net for what it would have to learn itself if you just gave it a simple linear positional encoding. HTH.
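
      A minimal sketch of that sin/cos encoding in the style of "Attention Is All You Need" (assumed shapes, not the talk's exact code): each position gets sines and cosines at geometrically spaced frequencies, and the result is added to the token embeddings.

          import numpy as np

          def positional_encoding(seq_len, d_model):
              pos = np.arange(seq_len)[:, None]                 # (seq_len, 1)
              k = np.arange(d_model // 2)[None, :]              # frequency index
              freqs = 1.0 / (10000.0 ** (2 * k / d_model))      # geometric spread of wavelengths
              pe = np.zeros((seq_len, d_model))
              pe[:, 0::2] = np.sin(pos * freqs)                 # even dims: sine
              pe[:, 1::2] = np.cos(pos * freqs)                 # odd dims: cosine
              return pe

          pe = positional_encoding(seq_len=50, d_model=16)      # added to embeddings, not concatenated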

  • @xruan6582
    @xruan6582 4 years ago +1

    20:00 If I multiply the output by a small scaling factor λ₁ (e.g. 0.01) before feeding it to the activation function, the sigmoid will be sensitive to the difference between, say, 5 and 50. Similarly, if I multiply the sigmoid output by another scaling factor λ₂ (e.g. 100), I can get an activated output ranging between 0 and 100. Is that a better solution than ReLU, which has no cap at all?

    • @LeoDirac
      @LeoDirac 4 years ago +1

      The problem with that approach is that in the very middle of the range the sigmoid is almost entirely linear - for input near zero, the output is 0.5 + x/4. And neural networks need nonlinearity in the activation to achieve their expressiveness. Linear algebra tells us that if you have a series of linear layers they can always and exactly be compressed down to a single linear layer, which we know isn't a very powerful neural net.

    • @xruan6582
      @xruan6582 4 years ago

      @@LeoDirac ReLU is linear from 0 to ∞

    • @LeoDirac
      @LeoDirac 4 years ago

      @@xruan6582 Right! That's the funny thing about ReLU - it either "does nothing" (leaves the input the same) or it "outputs nothing" (zero). But by sometimes doing one and sometimes doing the other, it is effectively making a logic decision for every neuron based on the input value, and that's enough computational power to build arbitrarily complex functions. If you want to follow the biological analogy, you can fairly accurately say that each neuron in a ReLU net is firing or not, depending on whether the weighted sum of its inputs exceeds some threshold (either zero, or the bias if your layer has bias). And then a cool thing about ReLU is that they can fire weakly or strongly.
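
      A quick numeric illustration of the near-linearity point, using the λ values proposed above (a sketch): over the whole ±50 input range, λ₂·sigmoid(λ₁·x) stays within about 0.25 of the straight line λ₂/2 + λ₂λ₁x/4, so stacking such layers adds almost no useful nonlinearity.

          import numpy as np

          sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
          lam1, lam2 = 0.01, 100.0

          x = np.linspace(-50, 50, 1001)            # covers inputs like 5 and 50
          scaled = lam2 * sigmoid(lam1 * x)         # the proposed capped activation
          tangent = lam2 / 2 + lam2 * lam1 * x / 4  # its linearization at 0
          print(np.max(np.abs(scaled - tangent)))   # ~0.25 on a 0..100 output scale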

  • @progamer1196
    @progamer1196 4 years ago

    At 5:40, I think "determinant" is the right word instead of "eigenvalue"...

  • @lukeno4143
    @lukeno4143 4 years ago +1

    Still trying to do what a 10-year-old can do. AGI is safe for now.

    • @herp_derpingson
      @herp_derpingson 4 years ago

      Last time I checked, 10-year-olds can't beat world champions in chess, Go, or StarCraft.

    • @NicheAsQuiche
      @NicheAsQuiche 4 years ago

      It depends on what the task is, but basically, yeah. The biggest problem for AI atm is doing new stuff; it's terrible at doing stuff it hasn't done/seen almost exactly before.

  • @danielschoch9604
    @danielschoch9604 1 year ago

    Linear algebra of variable dimensions? Fock Spaces. Known for 90 years. en.wikipedia.org/wiki/Fock_space

  • @cafeinomano_
    @cafeinomano_ 1 year ago

    Best Transformer explanation ever.

  • @ramibishara5887
    @ramibishara5887 4 years ago +1

    Where can I find the presentation doc of this talk, amigos? Thanks.

  • @lucyairapetian407
    @lucyairapetian407 4 years ago +1

    Great talk, had to watch at 1.25x though.

    • @thusi87
      @thusi87 4 years ago

      He already talks as if he's on steroids :D Can't imagine I'd understand anything he says at 1.25x lol

    • @LeoDirac
      @LeoDirac 3 years ago

      Totally! I always listen to people talking at 1.25x to 1.5x if I can. Humans are much better at parsing language quickly than generating it. And I was umming and awwing a lot which lowers the information density.

  • @dgabri3le
    @dgabri3le 3 years ago

    Thanks! Really good compare/contrasting.

  • @Rhannmah
    @Rhannmah 3 years ago +2

    6:41 hahaha this is GODLIKE! The fact that Schmidhuber is on there makes the joke even better!

  • @snippletrap
    @snippletrap 4 years ago +2

    I use Python as "pseudocode" in presentations too. Much more intuitive than the ALGOL style that has been standard for so long.

  • @DavidWhite679
    @DavidWhite679 4 years ago +9

    This helped me a ton to understand the basics. Thanks!