Geoffrey Hinton: The Foundations of Deep Learning

  • Published: Dec 23, 2024

Comments • 106

  • @blek1987 11 months ago +6

    What a luxury to be able to watch this on-demand, from anywhere, and for free!

  • @kentrader2489 4 years ago +16

    This was a great presentation

  • @atifadib 4 years ago +26

    Man he has a great sense of humour

  • @mangoldm 1 year ago +1

    Are hidden Markov models as relevant as Ray Kurzweil suggested?

  • @Tothefutureand 1 year ago

    13:13 this picture of an RNN is the best picture of RNNs I've ever seen

  • @klausunscharferelation6805 1 year ago

    About the singularity: as Ray Kurzweil says, when the whole universe becomes a computer, what does it compute, even though the purpose for computing has already disappeared?

  • @vtrandal 3 years ago +1

    At 8:59 some comments about RIM. What is RIM?

    • @bcbcbcbcbc 1 year ago

      Research In Motion, the original name of BlackBerry, which used to make smartphones.

  • @shivakumarcd 6 years ago +94

    "so if you're obsessed with there only being one correct answer and with being able to prove you've got it, backpropagation is not for you, nor is LIFE"

    • @loopuleasa 6 years ago +5

      classic Geoffrey

    • @wishall007 6 years ago +11

      He hasn't lost the British sarcasm

    • @shubhampandey1997shreeg 4 years ago +2

      Actually, with that line Prof. Hinton is referring to the time when no one believed in his line of research; those decades taught him a great deal of patience until the technology caught up with his ideas.

    • @shivscd 3 years ago

      @Ernest Alonzo WOW... Do you think people who watch these videos fall for that?

    • @RyckmanApps 1 year ago

      That was the best of his one-liners and zingers.

  • @dripcode2600 1 year ago

    Always been fascinated with computer neural networks. Exciting times!

  • @neerajsingh-xf3rp 1 year ago +1

    deeply insightful

  • @kuldeepsingh876 1 year ago +1

    Geoffrey Hinton sir has super clarity in his thoughts. I love it. He does not beat around the bush.

    • @madamedellaporte4214 1 year ago

      He should be put through the courts for playing with the future of humanity, of us all, so he can make a big stash of money while we watch our children have no future. That is what happens with mad scientists who let their vanity take hold of them.

  • @PaulScotti 3 years ago +1

    Could someone clarify? He explained that backprop is better than "mutation" because backprop is parallelized vs. serialized, but his explanation doesn't convey why backprop is achievable in parallel.

    • @brendawilliams8062 1 year ago

      Maybe if you have a neg. and a positive direction then you’d be best covering the known ground. I don’t know either.

    • @panda5088 1 year ago +1

      Because of the way the vector math works, you can calculate the difference in the result that a change to any one weight or bias would cause. Therefore you can adjust the result by manipulating multiple weights or biases at the same time. Here's a pretty good video explaining back prop more in-depth: ruclips.net/video/Ilg3gGewQ5U/видео.html
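The reply above can be made concrete with a minimal sketch (mine, not from the talk; all names are made up): one backward pass over a linear model yields the gradient for every weight at once, so all weights are adjusted simultaneously, unlike mutation-style search, which perturbs one weight at a time.

```python
import numpy as np

# Toy linear model trained by gradient descent. The key point:
# a single backward computation (here, one matrix product) gives
# d(loss)/dw for ALL weights, so every weight is updated in parallel.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # 100 samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                      # targets from a known weight vector

w = np.zeros(3)                     # all weights start at zero
for _ in range(500):
    err = X @ w - y                 # forward pass: prediction error
    grad = 2 * X.T @ err / len(X)   # one pass gives the gradient for all 3 weights
    w -= 0.1 * grad                 # simultaneous update of every weight

print(np.round(w, 3))               # converges toward true_w
```

A mutation-style search would instead tweak one weight, re-run the whole forward pass to see if the loss improved, and repeat, which is why Hinton calls it serial.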

  • @xiaochengjin6478 5 years ago +3

    watching such an amazing video at 5 am

  • @Alp09111 5 years ago +5

    loved this, thank you!

    • @ElevateTechCA 4 years ago +1

      No, thank YOU for checking us out!

  • @bensonmwaura9494 6 years ago +1

    A tensor for paradigm shifts.

  • @mohammedalmukhtar5428 5 years ago +7

    He is a genius. Period.

    • @holmerlike8395 3 years ago +1

      And you must be a complete moron for believing such a thing!
      Or perhaps you're simply riding the gravy train... Keep it going, buddy!

  • @j.wilmararcila4150 1 year ago

    Thank you, doc. What a beautiful world you move in!

  • @Gabcikovo 1 year ago +1

    11:27 Tomáš Mikolov just smiled :))

  • @caonima323 6 years ago +3

    What is the date of this speech?

    • @ElevateTechCA 6 years ago +6

      This speech took place on September 13, 2017 at Elevate TechFest.

    • @jj-ry8xv 6 years ago +2

      1998

    • @caonima323 6 years ago

      ethan chow you e d

  • @nguyenngocly1484 4 years ago

    You can make inside-out neural networks with fixed weighted sums (dot products) and adjustable (parametric) activation functions. Rather than the other way around.
    Then you are free to use high speed fast transforms for the fixed dot products.
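One reading of the comment above, sketched minimally (my interpretation, with made-up names): the dot products are fixed (here a random projection stands in for a fast transform) and only the per-unit activation parameters would be trained.

```python
import numpy as np

# "Inside-out" network sketch: the weight matrix W is FIXED and never
# trained; each unit instead has a trainable parametric activation
# (here, separate slopes for positive and negative inputs).
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4))         # fixed weights (could be a fast transform)

def act(z, pos_slope, neg_slope):
    # parametric activation: trainable slope on each side of zero
    return np.where(z > 0, pos_slope * z, neg_slope * z)

pos = np.ones(8)                     # trainable parameters per unit
neg = 0.1 * np.ones(8)               # (gradient descent would update these)

x = rng.normal(size=4)
h = act(W @ x, pos, neg)             # fixed dot product, adjustable nonlinearity
print(h.shape)
```

Because W never changes, the forward pass could use a fixed fast transform (e.g. an FFT-like structure) instead of a dense matrix multiply, which is the speed advantage the comment points at.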

  • @powertester5596 1 year ago

    This is extraordinary!

  • @paulmitchell5349 5 years ago +4

    Knowledge is accessible data. Intelligence is the ability to infer.

    • @Bmens71 5 years ago

      paul mitchell I like this quote

  • @TheCriticsAreRaving 5 years ago +16

    17:16 Hinton flips off the audience

  • @TheHamoodz 6 years ago +17

    How can someone be intelligent enough to invent neural networks and entertaining enough to give a presentation like that? It's almost unfair lol.

    • @ArtOfTheProblem 5 years ago +3

      And you forgot how clearly he explained the key concepts: maybe 300 pages of a typical book on the subject are compressed into about 20 phrases in this talk.

    • @shubhamp.4155 5 years ago

      He did not invent neural networks

    • @prashanthadepu3013 4 years ago +2

      @@shubhamp.4155 but he brought many of them to life and completely redesigned them

    • @mateuszanuszewski69 4 years ago

      It is just math, and nothing else. So anyone who understands math at an academic level can easily learn how neural networks work and how to manipulate the notation to get different types. And I am not talking about "learn neural networks in 5 minutes" videos where people show you how to program a model in TensorFlow.

  • @dearheart2 1 year ago +3

    RNNs are fun. I made an RNN for translation (words in, "thoughts" (RNN) worked out) over the breaks in one week in high school in the early '80s. It could learn new words and structures and did an OK translation. I just made it for fun, like all the NN and AI work at that time.

  • @Gabcikovo 1 year ago +1

    9:37 💜

  • @Sp8e 1 year ago

    I like this guy.

  • @HerrWortel 4 years ago +3

    Hinton is giving us the finger at 17:15 xD

  • @paulmitchell5349 5 years ago +2

    Learning is not necessarily intelligence. More rapid learning means twigging the concept and applying it to new problems.

  • @cowardmildredj.7250 4 years ago +2

    loved this, thank you!

  • @dearheart2 1 year ago +1

    I used evolutionary algorithms for training NNs in the '80s. They worked well even then. Limited hardware, and I had to program everything myself, but I got a general AI that could be used for almost anything.

  • @rkara2 6 years ago +1

    Process or singularity is the heart of AI.
    Alan Turing clearly defined this in his 1936 paper on computable numbers.
    Input > Process > Output (Turing complete expression or circular)
    Process (Turing incomplete expression or non-circular)
    !DA

  • @dannyiskandar 6 years ago

    amazing ..thank you

  • @Free_Ya_Mind 1 year ago

    In a nutshell, an artificial neural network is just a parametric composite function trained using the chain rule of derivatives.
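That one-liner can be checked numerically with a tiny sketch (my toy example, not from the talk): y = w2 * tanh(w1 * x) is a parametric composite function, and the chain rule gives the gradient used for training.

```python
import math

# A one-unit "network": the composite function y = w2 * tanh(w1 * x).
def forward(x, w1, w2):
    return w2 * math.tanh(w1 * x)

# Chain rule: dy/dw1 = w2 * tanh'(w1*x) * x, with tanh' = 1 - tanh^2.
def grad_w1(x, w1, w2):
    t = math.tanh(w1 * x)
    return w2 * (1 - t * t) * x

x, w1, w2 = 0.7, 1.3, -0.5
analytic = grad_w1(x, w1, w2)

# Central finite difference as an independent check of the chain rule.
eps = 1e-6
numeric = (forward(x, w1 + eps, w2) - forward(x, w1 - eps, w2)) / (2 * eps)
print(abs(analytic - numeric) < 1e-8)   # the two gradients agree
```

Backpropagation is exactly this chain-rule computation applied layer by layer through a much deeper composite function.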

  • @jmf3210 1 year ago +1

    Cool it with the 1960s slide presentation... it definitely needs a producer.

  • @janaenae1338 1 year ago

    I really love you!!!❤

    • @loopuleasa 1 year ago +1

      this guy was waaaay ahead of his time
      he casually defined what a thought is, and that thought has never left my head since I heard it 5 years ago

  • @samdavisok 1 year ago

    25:00

  • @zakali92 5 years ago +3

    Absolutely fascinating for a dummy like me

  • @nbme-answers 5 years ago

    14:47 symbols go in, symbols come out, but in the middle it can't be "symbols"!

  • @fallhdesls5226 4 years ago

    Smart! Long life!

  • @bingeltube 6 years ago

    Recommendable

  • @obsiyoutube4828 5 years ago

    Smart! Long life!

  • @damoonrobatian9371 3 years ago +1

    Geoff himself doesn't even know how it works! It just works!!! What kind of scientist could be satisfied with this type of "reasoning"? HYPE

  • @bopeng9504 6 years ago

    Great job Geoff. Pray that God would bless your life!

  • @mateuszanuszewski69 4 years ago

    omg so many ads

  • @morthim 4 years ago

    If you are going to call people stupid, you shouldn't contradict yourself and agree with the people you insult.

  • @u2naru 1 year ago

    This guy now says that AI is dangerous, after decades of leading AI projects. Why now and not at the time this presentation was given?

    • @yadayada111986786 1 year ago

      He says himself that he was surprised by the pace of AI development. He thought these dangers would come much later and more slowly.

  • @ancestralrocha7709 5 years ago +4

    Some basic shit? Did I hear correctly?

  • @leo.budimir 5 years ago +1

    Damn, he looks like Palpatine here

  • @mutterich5290 3 years ago

    LM21 what's up

  • @dr.mikeybee 6 years ago +1

    Professor Hinton is amazing, but there are too many of these NN for dummies lectures. They're starting to clog the space. It's a shame he didn't say anything really interesting here.

    • @mtoman 6 years ago

      True, finding advanced talks is getting pretty hard.

    • @pd.dataframe2833 5 years ago +6

      For advanced knowledge you don't come to YouTube... you fucking read research papers

    • @atomskreigns8071 5 years ago +2

      Sivaram Karanam And even that's not true; you can search any ML topic on RUclips and get a lecture as advanced as you like, or a whole series of them

    • @SameenIslam 5 years ago

      Did you not read the title of this video? It clearly says "foundations"

    • @OptionGal 4 years ago

      @@pd.dataframe2833, not entirely true. Some of the best research papers are published and presented at symposiums such as this one. We are all very lucky that there are videos of these podium presentations so that we may also be inspired to learn.

  • @janaenae1338 1 year ago

    ❤️🧡💛💚💙💜💙💚💛🧡❤️🧡💛
    We are ALL the SAME PERSON experiencing life in a BUNCH
    OF DIFFERENT BODIES!!! which
    means that EVERY PERSON
    that you meet, is really just YOU...
    LIVING IN ANOTHER BODY!!
    you see..you are INTERACTING with YOURSELF at ALL TIMES!!! & once you understand this,you can achieve unity!
    I and my father are one
    Love Thy Neighbor as Thyself
    💜💙💚💛🧡❤️🧡💛💚💙💜💙💚

    • @robertolupot2497 1 year ago

      Interesting. So how does the death of one body affect the living body?

  • @TomerBenDavid 4 years ago

    Wow :)

  • @zzbeasley 6 years ago +1

    Is a thought more than an image? Think about it.

  • @JazzyGinger1 10 months ago

    Hello🦩🦩🦩,
    God the Father loves you so much that He sent Holy, Sinless Jesus (His Holy Son) to earth to be born of a virgin. Then He grew up and died on a cross for our sins. He was in the tomb for 3 days, then Father God raised Holy and Sinless Jesus Christ (Y'shua) to life! He appeared to people and went back to Heaven. We must receive Sinless Jesus sincerely to be God's child.
    John 1:12 says, "But as many as received him, to them gave he power to become the sons of God, even to them that believe on his name."

    Will you receive Christ sincerely?

  • @bobtarmac1828 1 year ago

    Today, losing your job to AI agents is unacceptable. AI job loss is here. So are AI weapons. Can we please find a way to cease AI / GPT, or begin pausing AI before it's too late?

  • @sakathvenu 5 years ago +1

  • @AnwarAdnan-e5w 1 month ago

    And now Dr. Hinton has resigned from Google to alert people about the dangers of AI. This reminds me of Oppenheimer.

  • @m3n4cE6 1 year ago

    look at them, fatal wound/mark on their forehead

  • @yvespetit 2 years ago

    A spiteful professor who thinks his tinkering with computers does good for humanity! Talk about speech recognition: the machine voice announcing a caller on our phone system is always wrong.

  • @thiemtranthi7760 1 year ago

    Ok hinton them ok😂

  • @DanOneOne 1 year ago

    thank God I decided not to be a PhD student...

  • @rajaasim8229 6 years ago +2

    Who dislikes Hinton?

    • @MrCmon113 5 years ago

      Well, he shat on symbolists during the talk.