Heroes of Deep Learning: Andrew Ng interviews Geoffrey Hinton

  • Published: Aug 7, 2017

Comments • 124

  • @aliasad8342
    @aliasad8342 5 years ago +157

    Hinton's Advice:
    #1 - Read the literature, but don't read too much of it.
    #2 - For creative researchers, I think what you want to do is read a little bit of the literature and notice something that you think everybody is doing wrong. You're contrarian in that sense: you look at it and it just doesn't feel right, and then you figure out how to do it right. And when people tell you that's no good, just keep at it. I have a very good principle for helping people keep at it, which is: either your intuitions are good or they're not. If your intuitions are good, you should follow them and you will eventually be successful. If your intuitions are not good, it doesn't matter what you do.
    #3 - Never stop programming!
    #4 - I think one should read enough to start building intuitions, then trust your intuitions and go for it. And don't be too worried if everybody else says it's nonsense.
    #5 - If you think it's a really good idea and other people tell you it's complete nonsense, then you know you're really onto something.
    #6 - One good piece of advice for new grad students: see if you can find an adviser who has beliefs similar to yours, because if you work on stuff your adviser feels deeply about, you'll get a lot of good advice and time from your adviser.

    • @prathyusha5393
      @prathyusha5393 4 years ago +3

      The 2nd one!

    • @Gabcikovo
      @Gabcikovo 1 year ago +1

      #2 👏 #TomášMikolov

    • @Gabcikovo
      @Gabcikovo 1 year ago +2

      29:59 this is where Hinton actually says that

    • @wk4240
      @wk4240 1 year ago

      Good principles to follow, regardless of what you are doing.

  • @dbiswas
    @dbiswas 6 years ago +113

    I attended one of his lectures at Washington University. He is extremely humble and a great speaker. He is the Einstein of our generation. I am really looking forward to his upcoming paper on capsule theory. God! It gives me a feeling like standing in a queue for the latest iPhone. LOL!

  • @thesuryapolisetty
    @thesuryapolisetty 2 years ago +20

    00:30 How Hinton's fascination with the brain led him to explore AI
    3:33 The story behind his seminal 1986 paper on backpropagation that he wrote with David Rumelhart
    8:16 The invention Hinton is still most excited about
    12:55 Hinton's work on ReLU activations
    15:27 Hinton's thoughts on the relationship between the brain and backpropagation
    19:09 Hinton's work on dealing with multiple time scales
    20:43 Hinton's ongoing research on capsules
    25:35 How Hinton's understanding of AI has changed over the decades
    29:40 Hinton's advice for someone who wants to break into AI
    37:15 Hinton's thoughts on the paradigm shift in AI

  • @ismailelezi
    @ismailelezi 6 years ago +38

    Even Andrew looks super excited in the video; imagine the viewers. :)

  • @saraths6184
    @saraths6184 6 years ago +23

    Two legends talking about a domain they helped build. No surprise it turned out so informative. Thank you for making this video.

  • @doryds
    @doryds 6 years ago +11

    Utterly fascinating. Professor Hinton is inspiring.

  • @reoext
    @reoext 6 years ago +3

    This is very interesting in terms of Prof. Hinton's historical perspective on his interest in, and development of, many algorithms. Also, Prof. Ng's contribution to making everyone's dreams a reality, from learning ML to applying it in their respective jobs.
    Thanks to both!!

  • @alexandeap
    @alexandeap 4 years ago +1

    Dear Andrew, first of all I wanted to thank you and the entire Coursera team for the great academic contribution you give us. Secondly, I wanted to ask you to add subtitles wherever possible so that Spanish and South American speakers can enjoy it 100%. Finally, I ask you to include Geoffrey Hinton's neural networks course, since searching for it in Coursera's search engine does not find it. I would be grateful if you corrected this error, because when I tried to search for the courses offered by the University of Toronto, I did not have success either. Thanks again to you and the Coursera team for the great innovative academic contribution and scientific and technological research.

  • @Trackman2007
    @Trackman2007 6 years ago +68

    "Finally Andrew Ng bought a decent mic!" - nah, I have to correct myself. At around 27:00 and onwards he returns to the toilet/pillow-filtered sound. That's just Andrew Ng's trademark!

  • @woolfel
    @woolfel 6 years ago +2

    Thanks for this excellent interview. I'm totally geeking out!

  • @jsfnnyc
    @jsfnnyc 8 months ago

    Best research advice ever!! "Read the literature, but not too much of it."

  • @senthilvs1723
    @senthilvs1723 6 years ago +1

    Thanks, Andrew Ng for the interviews.

  • @morebaie3412
    @morebaie3412 5 years ago +1

    This interview is highly insightful and helpful for deep learners, recommended watching!

  • @Joe-yr1em
    @Joe-yr1em 5 years ago

    Thank you for this wonderful interview.

  • @unoqualsiasi7341
    @unoqualsiasi7341 6 years ago

    Thanks for the amazing interview and for sharing!

  • @yuwuxiong1165
    @yuwuxiong1165 6 years ago

    Excellent interview... and very good advice from Hinton!

  • @aq1q
    @aq1q 6 years ago +12

    Geoffrey Hinton and Andrew Ng, ML titans!

  • @jung8935
    @jung8935 6 years ago +8

    Great interview, but it's so deep. I wish I could understand more of it...

  • @georgemaratos1122
    @georgemaratos1122 5 years ago

    thank you so much for doing this interview

  • @mdougf
    @mdougf 5 years ago +3

    Thanks for this interview, Andrew! Hey, fellow learners, if anyone is interested in joining my weekly Machine Learning Research Paper reading and discussion group, let me know!

  • @xbronn
    @xbronn 6 years ago +15

    Huge thanks to Andrew and Geoffrey for this interview!

  • @jindagi_ka_safar
    @jindagi_ka_safar 5 years ago

    Thanks for introducing us to the heroes of DL like Geoffrey

  • @azr_sd
    @azr_sd 4 years ago +8

    Andrew looks at Hinton the way we all look at Andrew when learning from him. Huge respect for everyone who contributed to AI. :)

  • @bobcrunch
    @bobcrunch 6 years ago +7

    Three years ago I took Ng's Machine Learning Coursera class and it really got me hooked on the subject. Then in the spring of 2017 I took Hinton's Neural Networks for Machine Learning Coursera class and it was a big step in my understanding the subject. Hinton's class goes 15 weeks and is a little intimidating in both depth and breadth. I have the math background so that really helped. I guess the bottom line is that if you're going to study the subject, study math through partial differential equations.

    • @bobcrunch
      @bobcrunch 6 years ago

      Most of the backpropagation is just add-subtract-multiply, but the tricky step is to calculate the updated weights using gradient descent.
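
A minimal sketch of that update step, as a hypothetical one-weight linear model with a squared loss (illustrative only, not taken from Hinton's course):

```python
import numpy as np

# Toy data: learn y = 2x from three points
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([[2.0], [4.0], [6.0]])

W = np.zeros((1, 1))   # single weight
lr = 0.05              # learning rate

for _ in range(500):
    y_hat = X @ W                      # forward pass: just multiplies and adds
    grad = X.T @ (y_hat - y) / len(X)  # backprop for squared loss: still just arithmetic
    W -= lr * grad                     # the "tricky step": the gradient descent update

print(float(W[0, 0]))  # converges to approximately 2.0
```

The update rule `W -= lr * grad` is the whole of gradient descent; everything before it is the add-subtract-multiply bookkeeping the comment describes.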

  • @MilanAndric
    @MilanAndric 1 year ago

    Favorite quote, a little after minute 30: "Either your intuitions are good or they're not. If they're good, you should follow them and eventually you will be successful. If they're not good, it doesn't matter what you do. There's no point in not trusting your intuitions."

  • @CHECK3R5
    @CHECK3R5 6 years ago +13

    38:00 onwards, brilliant philosophy of AI from this legend.

  • @harleyswick5449
    @harleyswick5449 6 years ago +4

    This was great. Love hearing his more far-out ideas.

  • @wajdanali1354
    @wajdanali1354 6 years ago

    Honoured to listen to this discussion, subhanallah.

  • @Global_Pivot
    @Global_Pivot 4 years ago +2

    "When you have something you think is a good idea and other people think it's complete nonsense, then you know you are onto something."

  • @falak88
    @falak88 6 years ago +3

    Fascinating !

  • @briancase9527
    @briancase9527 11 months ago

    I so agree with Hinton: have an idea and go for it. I took this approach with something other than AI, and it also worked. What do I mean? I mean that even though my idea wasn't revolutionary or totally worthwhile, I LEARNED A LOT just by going for it and programming the heck out of it. The practical experience I gained served me well, very well, in my first jobs. Remember: your purpose is to learn, and you can do that following your own intuition, which is fun, or following someone else's, which is less fun.

  • @brishtiteveja
    @brishtiteveja 5 years ago +1

    I love Andrew so much. :)

  • @LuisGuillermoRestrepoRivas
    @LuisGuillermoRestrepoRivas 6 years ago +5

    Interesting and informative. Thanks.
    But, on the last part: I believe that the symbolic approach to AI should not be totally abandoned by researchers. One reason, by no means the only one, is that it makes AI less of a "black box" than neural networks.

    • @tommygunhunter
      @tommygunhunter 5 years ago

      Symbolic... total baloney! Symbols, and indeed mathematics in general, are emergent entities of the brain. If AI is to replicate the workings of the brain, it has to dig deeper: be more subatomic, less molecular, as therein lies the route to a multitasking, general intelligence.

  • @mehedihasanbijoy6609
    @mehedihasanbijoy6609 4 years ago

    Listening to Geoff is something really fancy and super exciting.

  • @NattapongPUN
    @NattapongPUN 6 years ago +2

    Legend!

  • @GreatUnwashedMass
    @GreatUnwashedMass 6 years ago +54

    This guy's intuitions put everyone else's to shame.

    • @w.morillo
      @w.morillo 6 years ago

      GreatUnwashedMass 0

    • @corywiedenbeck1562
      @corywiedenbeck1562 4 years ago

      Nice and humble atheist

    • @runvnc208
      @runvnc208 3 years ago

      It's also his clarity of writing, I think. His papers are easier for me to understand than a lot of others'.

  • @ehfo
    @ehfo 6 years ago +1

    I wish someone would post links to all the papers mentioned in the interview.

  • @dixingxu
    @dixingxu 6 years ago +2

    NEVER STOP PROGRAMMING!! dope

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago +2

    Amazing

  • @KangZhang
    @KangZhang 6 years ago +1

    Never stop programming !

  • @koendejonghe1555
    @koendejonghe1555 6 years ago +23

    At 32:08 : Never stop programming!

    • @brandomiranda6703
      @brandomiranda6703 6 years ago +1

      That advice wasn't clear to me... is the advice to never stop programming because, if a "bad student" can't make the idea work, then as long as you haven't lost your edge in programming you can make it work yourself?

    • @falcon20243
      @falcon20243 6 years ago +3

      You only understand the tiny details when you can write a program to implement the whole thing yourself.

    • @brishtiteveja
      @brishtiteveja 5 years ago

      And stop copy-pasting; make mistakes and fix the bugs, I guess!! :)

  • @Al.Mo.
    @Al.Mo. 6 years ago +1

    BTW, that auto-generated caption is creepily accurate.

  • @sajanrai9047
    @sajanrai9047 6 years ago +2

    Godfather 🙌🙌🙌

  • @KulvinderSingh-pm7cr
    @KulvinderSingh-pm7cr 6 years ago

    Enlightened !!

  • @thegamechanger7157
    @thegamechanger7157 2 years ago

    Yes, I learned from his Coursera tutorial on data science and technology.

  • @PaulHigginbothamSr
    @PaulHigginbothamSr 11 months ago

    I think the difference between wake and sleep is that sleep is the testing phase and waking is the operative phase of learning.

  • @zeus1082
    @zeus1082 6 years ago +2

    The legend looks so innocent

  • @ziyiguo9296
    @ziyiguo9296 6 years ago +4

    "The most beautiful one is the work I did with Terrence Sejnowski on Boltzmann machines."

  • @hiauoe
    @hiauoe 6 years ago +1

    Does anybody know if anything was published about his stacked auto-encoders backprop idea?
    Would love to hear more about it.

    • @chadwick3593
      @chadwick3593 6 years ago

      Anton, he's talking about layer-wise pretraining.

  • @brandomiranda6703
    @brandomiranda6703 6 years ago +15

    Funny but important(?): "You either have good intuitions or you don't. If you have good intuitions you'll eventually be successful, but if you have bad intuitions, no matter what you do it will suck, so follow your intuitions" (a paraphrase of Hinton's research advice).
    What do people think?

    • @brandomiranda6703
      @brandomiranda6703 6 years ago

      "Don't give up (but in a smart way)" seems like better advice, and I agree.
      Do you know where he borrowed it from?

    • @allenwang3331
      @allenwang3331 6 years ago +7

      I think his intuition comes from his experience in diverse disciplines. He mentioned that he tried physics, physiology, psychology and philosophy during his undergrad. Intuition isn't some magical ability you're born with. It builds as you get exposed to information - similar to how variation in training data is beneficial to a good machine learning model.

    • @daviddav2845
      @daviddav2845 6 years ago

      If you live long enough, bad becomes good. The trouble is we have a short life span... so if you suspect you have bad intuition, look deeper into your heart to find a better one, one you can use to explain everything that happens to you and around the world. Then you should arrive at some decent intuition.

  • @MattRiddell
    @MattRiddell 2 years ago

    Wow - the capsule concept is pretty close to the Thousand Brains idea!

  • @markgao11
    @markgao11 6 years ago +1

    Very inspiring interview.
    AI is changing the world fundamentally anyway.

  • @VineetBhatawadekar
    @VineetBhatawadekar 5 years ago

    Legend.

  • @godbennett
    @godbennett 6 years ago

    Based on Hinton's descriptions of capsules, is it possible he has overlooked manifolds?
    It is precisely the behaviour of manifolds that allows particular factors to be learned: solutions or sub-manifolds (i.e., latent vectors on the states of particular concepts in the input space; some latent z entailed by some factor distribution: {position, scale…}) are observed to lie in local patches of the global manifold. For example, pixels in the neighbourhood of some other pixels may signify transformations of that same pixel, while other neighbourhoods may be disentangled from the sampled latent vectors of the aforesaid pixel altogether (i.e., other pixels' data and their transformation data are separable from the events of the pixel discussed above).

    • @AhmedKachkach
      @AhmedKachkach 6 years ago

      Hinton surely did not overlook manifolds.
      The "routing by agreement" bit is where the main difference lies. The way traditional CNNs learn means that they do not generalise as well as capsule nets: a lot of the information is lost or duplicated. Capsule nets, with their focus on "concepts", are biased toward learning more reusable concepts with different properties.
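
For concreteness, routing by agreement can be sketched roughly like this: a loose NumPy paraphrase of the dynamic-routing loop from Sabour, Frosst and Hinton's capsules paper, where the shapes, the iteration count, and the random inputs are all illustrative assumptions:

```python
import numpy as np

def squash(v, axis=-1):
    # Capsule non-linearity: short vectors shrink toward 0, long ones toward unit length
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + 1e-9)

def route_by_agreement(u_hat, n_iters=3):
    # u_hat[i, j]: lower capsule i's predicted output for higher capsule j,
    # shape (n_lower, n_higher, dim); returns higher-capsule outputs (n_higher, dim)
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))                         # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over higher capsules
        s = (c[..., None] * u_hat).sum(axis=0)                # coupling-weighted sum of predictions
        v = squash(s)                                         # candidate higher-capsule outputs
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement (dot product) raises the logit
    return v

rng = np.random.default_rng(0)
v = route_by_agreement(rng.normal(size=(8, 4, 16)))
print(v.shape)  # (4, 16)
```

The key design point is the last line of the loop: lower capsules whose predictions agree with a higher capsule's current output get their coupling to it strengthened, which is exactly the "routing by agreement" the comment contrasts with pooling in ordinary CNNs.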

  • @onamixt
    @onamixt 8 months ago +1

    I watched the video as a part of Deep Learning Specialization. Sadly, it's way way over my head to comprehend much of what was said in the video.

  • @Gabcikovo
    @Gabcikovo 1 year ago +1

    35:08 our relationship to computers has changed.. instead of programming them, we show them, and they figure it out

  • @ashiningworld
    @ashiningworld 6 years ago

    Top ten anime crossovers

  • @Level6
    @Level6 3 years ago +1

    Hinton's advice:
    #1 - Read the literature, but don't read too much of it.
    #2 - For creative researchers: when people say it's no good, just keep going. And I have a very good principle for helping people keep at it. That is, either your intuitions are good or they're not. If your intuitions are good, you should follow them and you will eventually succeed. If your intuitions are not good, it doesn't matter what you do.
    #3 - Never stop programming!
    #4 - I think you should read enough to start building intuitions, then trust your intuitions and go for it. And don't worry too much even if other people say it's nonsense.
    #5 - If you think it's a really good idea and other people say it's complete nonsense, then you know you're really onto something.
    #6 - One good piece of advice for new grad students: see if you can find an adviser whose beliefs are similar to yours, because if you work on things your adviser feels deeply about, you'll get a lot of good advice and time.

  • @chaidaro
    @chaidaro 6 years ago +1

    God has spoken.

  • @a.gholiha6884
    @a.gholiha6884 6 years ago

    Nice!

  • @Gabcikovo
    @Gabcikovo 1 year ago +1

    38:10 a thought is just a great big vector of neural activity

    • @Gabcikovo
      @Gabcikovo 1 year ago +1

      38:19 People who thought that thoughts were symbolic expressions made a huge mistake. What comes in is a string of words, and what comes out is a string of words, and because of that, strings of words are the obvious way to represent things, so they assumed that what must be in between was a string of words or something like it. Hinton thinks there's nothing like a string of words in between; he thinks treating thought as some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels :))

  • @vinodkumar-pv8qz
    @vinodkumar-pv8qz 6 years ago +5

    YouTube has only one like button :(

  • @sivaprasadml6582
    @sivaprasadml6582 5 years ago

    @ 8:34 what was Geoffrey mentioning? Which algorithm?

  • @mahdiamrollahi8456
    @mahdiamrollahi8456 1 year ago

    Great

  • @Gabcikovo
    @Gabcikovo 1 year ago

    18:00 in 2007 they ignored Hinton, and Bengio picked it up later on

  • @surkewrasoul4711
    @surkewrasoul4711 10 months ago

    Hey Andrew, do you still accept donations by any chance? I am hoping for 720p videos from now on.

  • @vovos00
    @vovos00 6 years ago +1

    10:49

  • @evankim4096
    @evankim4096 4 years ago

    WOW, I can't believe Hinton et al. came up with an algorithm for STDP decades before neuroscientists came up with the concept... He is talking about an algorithm from the late 1980s, while STDP arrived on the scene in the early-to-mid 2000s.

  • @Gabcikovo
    @Gabcikovo 1 year ago

    0:19 Godfather 😎

  • @johningham1880
    @johningham1880 4 years ago +1

    I wish my neurones could remember what they were doing when they pop out of a recursive call...

  • @asutoshmittapalli
    @asutoshmittapalli 3 years ago +1

    All rise for the Godfather

  • @godbennett
    @godbennett 6 years ago

    Hinton: "Thoughts are great big vectors, and big vectors have causal powers, they cause other big vectors"
    Me: "Thought Curvature" paper : www.academia.edu/25733790/Causal_Neural_Paradox_Thought_Curvature_Aptly_the_transient_naive_hypothesis

  • @AlinNemet
    @AlinNemet 6 years ago +2

    No question about AI and the singularity!? Surely he would've had a super funny answer :))

    • @mukuste
      @mukuste 6 years ago +1

      The singularity is a pure sci-fi idea that actual AI researchers have no time for.

    • @AlinNemet
      @AlinNemet 6 years ago

      Oh yeah!? Then how come Elon Musk, Stephen Hawking, Sam Harris, and even f**g Bill Gates seriously talk about it as an existential threat!? Surely they know something the rest of us don't :))

    • @mukuste
      @mukuste 6 years ago +9

      None of them are AI researchers. They're laypeople on this issue.

    • @AlinNemet
      @AlinNemet 6 years ago

      mukuste, yeah, it's ridiculous... though AGI is a very interesting topic. As Neil deGrasse Tyson noted, we are far from being on the path to anything like that happening... so till then, AI is just another tool to help us live better.

    • @AlinNemet
      @AlinNemet 6 years ago

      indeed

  • @salvatortermination4681
    @salvatortermination4681 6 years ago

    It's an alien language to me right now, but it's still great haha

  • @haoshidi
    @haoshidi 3 years ago

    Concretely

  • @dileepjayamal9968
    @dileepjayamal9968 5 years ago +1

    There are still 17 dislikes; I don't know why... maybe outliers...

  • @billykotsos4642
    @billykotsos4642 5 years ago

    Andrew "I see" Ng

  • @wk4240
    @wk4240 1 year ago +1

    I seriously doubt Geoffrey Hinton considers himself a hero; more like Dr. Frankenstein now. He's doing his part to spread the word about the dangers of reliance on AI.

  • @almostbutnotentirelyunreas166
    @almostbutnotentirelyunreas166 6 years ago

    YESSS! A technical tour-de-force! The Godfather!
    Gotta love the consistent elephant in the room though. AI inevitability: Human intelligence becomes obsolete for any purpose beyond the banal. Luckily, the onset of autonomous AGI (consciousness optional) solves this permanently. ;-[
    Intelligence is THE differentiator on earth; let's relentlessly pursue / design Humanity's successor as Apex Predator. GAN indeed. GONE, more likely. Just keep kicking the can down the road, you clever hand-wringers.
    Learning leads to Knowledge, Knowledge is Power, Power corrupts, absolute Power corrupts absolutely. A clearly sub-optimal general (Human) outcome, driven by an optimising, self-adjusting, closed-loop feed-back system. Brilliantly myopic.

  • @jamesking2439
    @jamesking2439 6 years ago

    The reflections of the screen in his glasses look like googly eyes.

    • @tommygunhunter
      @tommygunhunter 5 years ago

      Google eyes! Connected to Google Brain :)

  • @sujoyparikh5362
    @sujoyparikh5362 2 years ago

    Absolutely love this, but Andrew, please stop saying "I see" constantly.