Deep Learning - Computerphile

  • Published: 27 Nov 2024

Comments • 362

  • @rachael3533
    @rachael3533 6 years ago +467

    Let's all take a moment to recognize he's explaining complex math and computing in a second language

    • @rosspidoto
      @rosspidoto 5 years ago +54

      Native English speakers constantly take this for granted.

    • @noahwhipkey6262
      @noahwhipkey6262 5 years ago +11

      @@rosspidoto Most people speak English nowadays. It's kinda the de facto standard of technology.

    • @rosspidoto
      @rosspidoto 5 years ago +7

      @@noahwhipkey6262 And what would your native language be then?

    • @noahwhipkey6262
      @noahwhipkey6262 5 years ago +1

      @@rosspidoto Hmmm. Take a guess ;)

    • @vikranttyagiRN
      @vikranttyagiRN 5 years ago +9

      Yup, exactly what I was thinking. Not an easy task at all

  • @tylerwaite8444
    @tylerwaite8444 8 years ago +19

    This makes much more sense than "a neural network is just a combination of inputs and outputs with a hidden layer in between" that you get from a lot of other videos. Awesome!!!

  • @iLLt0m
    @iLLt0m 8 years ago +185

    Machine learning courses always with the damn house prices lol.

    • @Jorge-ub3we
      @Jorge-ub3we 6 years ago +3

      ikr

    • @WhoForgot2Flush
      @WhoForgot2Flush 6 years ago +4

      right?

    • @MubashirullahD
      @MubashirullahD 5 years ago

      hahaha

    • @hattrickster33
      @hattrickster33 4 years ago

      @@MubashirullahD you know?

    • @MubashirullahD
      @MubashirullahD 4 years ago +2

      @@hattrickster33 It is in week 1 of Coursera's Machine Learning Course taught by Andrew Ng. Best course out there. It is natural others would use the same example.

  • @boenrobot
    @boenrobot 8 years ago +203

    Damn... I didn't notice the whole "right" thing until I read the comments (halfway through the video)... and suddenly, I can't unhear it (during the other half).

    • @djdedan
      @djdedan 8 years ago +1

      +boenrobot I know! Now I can't un-hear it, either! ARGH!

    • @Che8t
      @Che8t 8 years ago +3

      +DjDedan
      right?

    • @ozdergekko
      @ozdergekko 8 years ago +1

      +boenrobot -- it literally made me stop watching. Am I left? (yes, I am. Very)

    • @flecksstudio725
      @flecksstudio725 7 years ago +1

      you *could say* they *more or less* made you pay attention to it, *right*?

    • @sisakhoza4739
      @sisakhoza4739 7 years ago +1

      Just fell into the same trap

  • @LILLJE
    @LILLJE 8 years ago +359

    The camera guy is called Wright, right?

    • @Gehr96
      @Gehr96 8 years ago +23

      +LILLJE I think it's rightly written Rite, right?

    • @fightocondria
      @fightocondria 8 years ago +1

      +Gehr96 ->

    • @Squidward1314
      @Squidward1314 8 years ago +24

      Wow! At first I didn't notice it at all. But he says it after EVERY sentence!

    • @blockchaaain
      @blockchaaain 8 years ago +20

      Hahaha, I love that he changes it to "yeah" when he has to end a sentence on "wrong".

    • @ToyotomiHideyoshiGre
      @ToyotomiHideyoshiGre 8 years ago +1

      Wright?
      Phoenix Wright!!

  • @Sneezy103
    @Sneezy103 8 years ago +235

    Please go deeper into this subject, it's really interesting!

    • @112BALAGE112
      @112BALAGE112 8 years ago +28

      +David T no pun intended

    • @judgeomega
      @judgeomega 8 years ago +2

      +David T The most interesting subject to ever exist in the entire universe: intelligence.

    • @INLF
      @INLF 8 years ago +6

      If you're really interested in the subject, go watch Andrew Ng's machine learning course on Coursera, and for an even deeper understanding, Geoffrey Hinton's course "Neural Networks for Machine Learning".
      The AI explanations on this channel are really bad...

    • @sanjacobo84
      @sanjacobo84 8 years ago +3

      +INLF Great video! I started the Coursera machine learning course four weeks ago. Really cool.

    • @Vulcapyro
      @Vulcapyro 8 years ago +4

      +David T +INLF I highly recommend the CS231n course at Stanford for convnets if you have some background in machine learning. They have been making incredible efforts towards disseminating the material and have _excellent_ notes as well as lecture recordings.

  • @NikiHerl
    @NikiHerl 8 years ago +115

    His English takes a little getting used to, but I actually feel like I understand neural networks now :D

    • @certee
      @certee 8 years ago +23

      I think we've all become real estate agents

    • @BurnabyAlex
      @BurnabyAlex 8 years ago +10

      +Niki Herl varry yabbles.

    • @recklessroges
      @recklessroges 8 years ago

      +Niki Herl I agree. How is this applied to code? Is it better to use a functional language or an imperative one? So many questions, and can machine learning teach us the answers to those?

    • @Toschez
      @Toschez 8 years ago +2

      +Niki Herl Right?

    • @nicolasroux6999
      @nicolasroux6999 8 years ago +2

      +Reckless Roges Better yet: use a multi-paradigm language. It ought to be fast, too. Which is why most machine learning libraries are written in C++ for efficiency, sometimes with Python bindings for ease of use.

  • @andik70
    @andik70 8 years ago +23

    x_1 = location, x_2 = location, x_3 = location

    • @inin-id6bw
      @inin-id6bw 6 years ago +1

      better than all the "right" comments combined

  • @TheAnimystro
    @TheAnimystro 8 years ago +25

    Your video about AlphaGo intrigued me so much that I have been researching machine learning since, and compared to the other videos and technical explanations I have seen and read, this video is by far the easiest to understand and is the best at actually explaining how it works. It's just a shame that I had already invested at least 5 hours of my time to understand what you have said in only 11 minutes (though I did get to see some neuroevolution networks fight each other gladiator style, so I guess there are positives and negatives hehe)

    • @omarshoura6327
      @omarshoura6327 2 years ago +2

      Where are you now in your research, out of curiosity?

  • @panzach
    @panzach 8 years ago +4

    Thank you, I think it's the first time I've seen someone try to explain multilayer perceptrons intuitively.

  • @NdamuleloNemakh
    @NdamuleloNemakh 6 years ago +2

    This is the best overview of what deep learning is

  • @54m0h7
    @54m0h7 8 years ago +2

    This reminds me of an Excel sheet I made for work that estimates the cost of building a custom air handler. I formatted it to print on 8.5x11, and it was 20 pages when done. Each page was inputs/calcs for a different part, like Supply Fans, Cooling Coils, Humidifiers, etc. So by saying 'yes', you want a humidifier, it would not just add the cost of the humidifier and installation labour, but also add some length to the unit, which meant more framing cost, and added to the weight of the unit in both areas.

  • @tobiasztopczewski8089
    @tobiasztopczewski8089 8 years ago +56

    at the start: "Maybe it gets slightly mafia".
    (mathy)

  • @woksin
    @woksin 8 years ago +165

    Right

    • @Adraria8
      @Adraria8 8 years ago +2

      Left

    • @blahblah9941
      @blahblah9941 8 years ago +1

      or middle....

    • @woksin
      @woksin 8 years ago

      true reality Right

    • @LaughterOnWater
      @LaughterOnWater 8 years ago

      Is this better or worse than "You know" or "Like" as a punctuative crutch?

    • @l3p3
      @l3p3 8 years ago

      Right?

  • @bakmanthetitan
    @bakmanthetitan 8 years ago +95

    Excellent video, I love this topic! Definitely do more on machine learning.

  • @finlaymcewan
    @finlaymcewan 8 years ago +21

    Google's system that can identify what an image contains uses a neural network that is over 30 layers deep!

    • @Shortninja66
      @Shortninja66 8 years ago +6

      Yeah that's cool, but can you link a source?

    • @finlaymcewan
      @finlaymcewan 8 years ago

      +Vulcapyro that's incredible

    • @michaelpound9891
      @michaelpound9891 8 years ago +1

      +Shortninja66 It's called GoogLeNet; if you search for it, the PDF is easily found. GoogLeNet, VGG and others are some of the top contenders at the moment for large-scale classification. Microsoft's Deep Residual Learning is another effort that looks to be the best yet, they even tried 1000 layers!

    • @Shortninja66
      @Shortninja66 8 years ago

      Michael Pound Yeah what Microsoft has tried is pretty cool

    • @med5032
      @med5032 5 years ago

      What is up with YouTube comments and never sourcing anything?

  • @calinmcosma
    @calinmcosma 8 years ago +15

    Nice video. I want to hear more from this guy.
    Also, as a C.S. student, I would like some materials from these professors. I've watched a lot of videos from Computerphile and I would really like some kind of references to interesting/favorite books that the professors recommend. I love the videos, but sometimes I like to go deeper into the subject. And yes, I do have internet, and I do know how to search for things, but I also like reading books, I like collecting C.S. books, so I don't mind stacking some more in my collection.
    Thank you, and keep up the good work.

  • @g.livne98
    @g.livne98 8 years ago +6

    1) I so regret reading all the comments on the "right" thing, because it was very hard to listen after doing so.
    2) Artificial neural networks work mainly by learning from examples: using a database of examples where the thing you want to achieve is done correctly, you give your learning network a "grade" for how close it gets to the correct value on the examples you train it on, so that it learns the problem and can tell you the value for examples you don't know the answer to.
    I was pretty sure he wasn't clear enough about that, but maybe I missed something because of the "right" thing...

  • @Phagocytosis
    @Phagocytosis 8 years ago +9

    This video actually gives a very good and clear explanation of the concepts. Excellent job!

  • @mrbatweed
    @mrbatweed 8 years ago +21

    Probably a bit more planning needed for a video attempting to teach at the intended level... A bit of deep teaching needed to go with the deep learning.

    • @karlkastor
      @karlkastor 8 years ago +3

      +Mr Batweed Yeah, it seemed like he was making it up as he went. "Oh, I've not talked about overfitting yet, let's throw this in."

  • @AwesomeCrackDealer
    @AwesomeCrackDealer 8 years ago +15

    But how do you actually do it, like the programming?

    • @alextotheroh8624
      @alextotheroh8624 8 years ago +2

      +Fuvity Weka is a tool worth looking into. Python has machine learning libraries and so does R. You can also do some interesting things with Amazon Machine Learning, but I can't remember if it's accessible under the free tier. When I tried it after Amazon Machine Learning first came out, it was free. Python and R will require lots of research and understanding while Weka and Amazon Machine Learning are more user friendly.

    • @SuviTuuliAllan
      @SuviTuuliAllan 8 years ago +1

      +Fuvity I'm in the process of getting drunk so that I can work on a JavaScript library for doing deep learning on the GPU. Do you want me to write a tutorial as well? I'll add some nice ponies in it for you!

    • @AwesomeCrackDealer
      @AwesomeCrackDealer 8 years ago

      Thanks, everyone! I'll check everything out, but I'm just a beginner programmer. I only know C and data structures, and am learning Java and OOP

    • @bool.
      @bool. 8 years ago +4

      +Fuvity Start with your input values. For every combination of input values, create a node in your first "layer" of your program. Each node has some function which takes its inputs, modifies them based on some internal weights, and then combines them somehow, returning the result. Repeat this, using the outputs of each layer as the inputs for the next layer for as many layers as you want your algorithm to have (being careful not to use too many to "over-fit" your algorithm).
      When it comes to training your algorithm, all you need to change is the weights in each of the nodes. Figuring out HOW to find the best weights is the hard part, and I don't think I could describe the ways to do it here.
      Ideally, you'll end up with combinations of inputs with no meaning being given very low weights when their values are used, and those which do mean things will be weighted more strongly.
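
      A minimal NumPy sketch of the forward pass described above (the sigmoid squashing function, the layer sizes and the random weights here are illustrative assumptions, not anything specified in the comment):

      import numpy as np

      def layer(x, weights, biases):
          # One "layer" of nodes: a weighted sum of inputs plus a bias,
          # squashed by a non-linearity (sigmoid here).
          return 1.0 / (1.0 + np.exp(-(weights @ x + biases)))

      rng = np.random.default_rng(0)
      x = np.array([120.0, 3.0, 2.0])  # hypothetical inputs: size, bedrooms, bathrooms
      w1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # hidden layer of 4 nodes
      w2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # output layer of 1 node
      print(layer(layer(x, w1, b1), w2, b2))  # untrained prediction; training tunes the weights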

    • @beegieb4210
      @beegieb4210 8 years ago +5

      +Fuvity TensorFlow, Keras, and Lasagne are excellent starting tools.
      I recently built a recurrent neural network chatbot with ~1,000,000 parameters using a GPU on Amazon Web Services using TensorFlow. It's actually pretty straightforward, since it takes care of differentiation for you.
      That's the hardest bit in building neural networks: computing gradients efficiently and accurately. Thankfully TensorFlow takes care of that for you :)

  • @ibosnfs1997
    @ibosnfs1997 8 years ago +226

    RightMan

    • @iviasterzox22
      @iviasterzox22 8 years ago +3

      +ibrahim özcan Right? You tell me!

    • @ibosnfs1997
      @ibosnfs1997 8 years ago +9

      IVIaster Zox Right! Right?

    • @petebenedetti435
      @petebenedetti435 8 years ago

      +ondsinet _ (nik man) right

    • @fransezomer
      @fransezomer 8 years ago

      +ibrahim özcan dude....lolololololoolooololoollllll....

    • @andrew1257
      @andrew1257 6 years ago

      DAMN YOU NOW I CAN'T UNHEAR IT

  • @dries2965
    @dries2965 8 years ago +2

    I like the intuitive explanation that a neural network can approximate any function to arbitrary precision. The more layers (the deeper the network), the better the approximation. Couple that with the backpropagation algorithm, and you see: if there is a function between the input and output, then given enough training data, you can approximate it.
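
    To make that concrete, here is a minimal one-hidden-layer NumPy sketch that learns y = sin(x) by backpropagation (an illustration of the comment's point; the layer width, learning rate and step count are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-3, 3, 200).reshape(-1, 1)  # inputs
    y = np.sin(x)                               # target function to approximate

    w1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)  # 16 tanh units
    w2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)   # linear output

    lr = 0.05
    for step in range(5000):
        h = np.tanh(x @ w1 + b1)          # forward pass
        pred = h @ w2 + b2
        err = pred - y                    # backward pass: gradients of squared error
        g_w2 = h.T @ err / len(x)
        g_b2 = err.mean(axis=0)
        g_h = (err @ w2.T) * (1 - h**2)   # chain rule through the tanh layer
        g_w1 = x.T @ g_h / len(x)
        g_b1 = g_h.mean(axis=0)
        for p, g in ((w1, g_w1), (b1, g_b1), (w2, g_w2), (b2, g_b2)):
            p -= lr * g                   # gradient descent step
    print(np.abs(pred - y).max())         # approximation error shrinks with training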

  • @this_is_mac
    @this_is_mac 1 year ago

    Wow, I didn't understand the reasoning behind the hidden layers till now!!!

  • @mattlm64
    @mattlm64 8 years ago +81

    Left.

  • @descreieratu2005
    @descreieratu2005 8 years ago +20

    left?

  • @sau002
    @sau002 7 years ago

    I think you are on the right track. Please continue this series with real life examples.

  • @Diggnuts
    @Diggnuts 8 years ago +11

    Regarding the subject and how I thought it was explained in this video, I tend to believe this production was a bit of a Parker Square....

    • @Diggnuts
      @Diggnuts 8 years ago

      Google User
      Nah, I'll just remove your petulant tripe and block you.

    • @therflash
      @therflash 8 years ago +1

      +Diggnuts stop making parker square a thing!!!

    • @Diggnuts
      @Diggnuts 8 years ago

      therflash Perhaps I will, but that will just make my attempt at it ... a Parker square..

  • @jrobinson2095
    @jrobinson2095 4 years ago +1

    Nice explanation, it's cool to think about how the combination of different data can get so complicated

  • @selbstwaerts
    @selbstwaerts 7 years ago +1

    I rightfully think he's right, right?
    The video even ENDS with "right" :D
    Besides this observation: Splendid explanation. Thanks.

  • @LarlemMagic
    @LarlemMagic 8 years ago +3

    I had been looking for someone to explain how to build a deep learning algorithm; thank you so much, I *understand* now!

  • @Yorgarazgreece
    @Yorgarazgreece 8 years ago +5

    You almost pulled two 360s with all those rights :P

  • @Carrosive
    @Carrosive 8 years ago +6

    Complicated, right?

  • @henrypostulart
    @henrypostulart 8 years ago +6

    His verbal, right? Tics, right? Are massively, right? Annoying, right?

  • @Dima-rj7bv
    @Dima-rj7bv 2 years ago +1

    Such a beautiful, step-by-step explanation of a complicated concept. I love it!

  • @dhanshreea
    @dhanshreea 8 years ago +2

    Thank you for this video, I finally understand what all the technical jargon about dependencies between intermediate layers means. Thank you!

    • @KatsarovDesign
      @KatsarovDesign 8 years ago

      Can you explain it please? I also have trouble understanding this...

    • @dhanshreea
      @dhanshreea 8 years ago

      What can you not understand?

    • @KatsarovDesign
      @KatsarovDesign 8 years ago

      Dhanshree Arora The dependencies between the layers

  • @jamesqiu6715
    @jamesqiu6715 7 years ago +1

    The result is right there on your left, sir!

  • @TheSidder1
    @TheSidder1 8 years ago +8

    IMAGINE him as your GPS voice:
    "In 100 meters turn left, right."
    "Now turn left, right"

  • @oooBASTIooo
    @oooBASTIooo 8 years ago +6

    I think he's right.

  • @i.p.knightly149
    @i.p.knightly149 6 years ago +3

    Homework: develop an algorithm to calculate how many times he says 'right' in one video.
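
    A tongue-in-cheek solution, assuming you have a plain-text transcript of the video (the file name here is made up):

    # Count how many times "right" occurs in the transcript.
    with open("deep_learning_transcript.txt") as f:
        words = f.read().lower().split()
    print(sum(w.strip(".,?!") == "right" for w in words))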

  •  8 years ago +19

    Should have instead been three videos: machine learning, neural networks, and only then deep neural networks (aka deep learning). As it stands, this video fails to explain the topic adequately.

    • @Red_Salmond
      @Red_Salmond 6 years ago +1

      You do better then, mister know-it-all.

  • @wnh79
    @wnh79 6 years ago

    This is a very simple and clear explanation of how NNs work.

  • @TechyBen
    @TechyBen 8 years ago +7

    Ah, so it's extra granularity in the detail it can find.

    • @nicolasroux6999
      @nicolasroux6999 8 years ago +7

      +TechyBen The amazing part, as the guy pointed out, is that you (as the coder/user) don't even have to understand, or know the existence of, the underlying concepts. The neural net just deduces them if it can, given sufficient learning material and a proper layout.

  • @ByteBitTV
    @ByteBitTV 8 years ago +3

    RIGHT?

  • @LeandroR99
    @LeandroR99 7 years ago

    I hadn't noticed the repeated "right" until I read the comments. The content and explanation are more interesting than the accent and verbal tics.

  • @smileyball
    @smileyball 8 years ago

    How about a discussion about generative adversarial networks? They're fairly intuitive to explain and computerphile viewers will probably enjoy it.

  • @fxtech-art8242
    @fxtech-art8242 1 year ago +1

    Best explanation I have seen so far!

  • @endod8708
    @endod8708 7 years ago

    After reading the comments I cannot unhear it. I am not the only one, right?

  • @karlkastor
    @karlkastor 8 years ago

    Good video! One slight criticism on the graphics: typically neural networks are fully connected, meaning every neuron is connected to every neuron on the layer before it.

    • @compuholic82
      @compuholic82 8 years ago

      +Karl Kastor Traditionally, yes. But the generation of deep neural networks that has been making news lately is different. They are mainly convolutional neural networks, which only have a small receptive field with regard to the previous layer. Also, neighboring neurons share their connection weights, which mathematically represents a convolution.
      The last layer(s) however - which typically are the actual classifier - are fully connected again.
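
      A small NumPy illustration of that weight sharing (a sketch, not from the comment): every output position applies the same 3-tap kernel to its local receptive field, which is exactly a 1-D convolution.

      import numpy as np

      signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      kernel = np.array([0.25, 0.5, 0.25])  # one shared set of 3 weights

      # Each output "neuron" i sees only signal[i:i+3] and uses the SAME weights.
      out = np.array([signal[i:i+3] @ kernel for i in range(len(signal) - 2)])
      print(out)
      # np.convolve flips the kernel, so reverse it to get the same sliding dot product.
      print(np.convolve(signal, kernel[::-1], mode="valid"))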

    • @karlkastor
      @karlkastor 8 years ago

      compuholic82
      I know how ConvNets work, but I think a vanilla/fully connected network would be a better first example.

  • @gohorseriding
    @gohorseriding 8 years ago +4

    I'm left feeling kind of confused. How does the deep learning algorithm correct for overfitting? Is the algorithm trying to make the most parsimonious configuration? And do parameter configurations get penalised according to their complexity? Or am I missing the point?

    • @compuholic82
      @compuholic82 8 years ago +1

      +Tom Swinfield Good question. The guy in the video left out an important part. While he is right that there is no reason to stop at the second layer, there is a reason why deep learning has only just caught on (aside from the computational resources needed). If you start to make a fully connected network deeper you will (a) very likely overfit and (b) have trouble finding a good initialization for your weights.
      The new generation of deep networks are convolutional networks, which restrict the connectivity between neurons in the first layers. Each neuron only has a small receptive field towards the neurons of the previous layer. This helps to regularize the model.
      This probably wouldn't work very well with the example he provided, since there is no reason why neighboring input values should be correlated with each other (this is the problem when using hand-crafted features). But those convolutional structures work great with real-world data like images or sound. If you look at a pixel in an image you can usually predict the value of the neighboring pixel pretty well.

  • @nycsaba
    @nycsaba 8 years ago +1

    Nice channel, but that pen is driving me crazy. Could you please filter out that scratchy noise of the pen from the audio in future videos pls :(

  • @l3p3
    @l3p3 8 years ago +3

    Why does he end so many sentences with "right"? Is he asking, or stating, that the sentence is right?

    • @Froggeh92
      @Froggeh92 8 years ago +3

      It's a punctuating crutch that a lot of professors use to make sure the listener is following what they said.

  • @TonyHammitt
    @TonyHammitt 8 years ago +3

    Really a great explanation with a wonderfully relatable example. Very nicely done!

  • @rabbitpiet7182
    @rabbitpiet7182 6 years ago

    ‘But what the current hotness is on any particular site is kinda “i don’t know” and always will be’

  • @javierguerrero9486
    @javierguerrero9486 8 years ago

    I feel like the video started off late; it seems they had talked about this topic more in depth but didn't begin recording until deep in the conversation.

  • @user-ny7sg9mz1v
    @user-ny7sg9mz1v 4 years ago

    Wow, I never thought of it that way. What happens in the hidden layers is quite similar to factor analysis

  • @willwong4804
    @willwong4804 6 years ago

    This is essentially structural equation modelling (SEM). CFA to be specific. With some level of automation and a fancier name.

  • @jpclk1204
    @jpclk1204 6 years ago

    A right deep learning algorithm may say that the right title for this video may be "Between right's - ComputerRight"

  • @lewisheriz
    @lewisheriz 2 years ago

    3:23 He says 'capture fake correlations', not 'cut through a thick of relations'. Very important point, so thought it worth clarifying. Clearly captioning AI needs more training data for Spanish accents in English ; )

  • @Nono-de3zi
    @Nono-de3zi 1 year ago

    Something that is not very clear in this video (and many similar ones) is that weights can be *negative*. So we can encode (A AND B) OR ((A AND C) AND *NOT* B). That would be really relevant for house values.

  • @vkulanthaivel
    @vkulanthaivel 8 years ago

    It says '125 years of age' but the oldest person was 122! (In the pideo) - Pideo means Picture video.

  • @MultiPaulinator
    @MultiPaulinator 8 years ago

    So, the tl;dw of it is that deep learning is basically regular machine learning put through a recursion loop.

  • @unverifiedbiotic
    @unverifiedbiotic 8 years ago +4

    Guys, use noise reduction for the audio, it's not that hard.

    • @deltagamma1442
      @deltagamma1442 8 years ago +1

      +Michał Pawlak I'm no cyborg.

    • @aajjeee
      @aajjeee 8 years ago

      That's way too complicated for a machine to do

    • @unverifiedbiotic
      @unverifiedbiotic 8 years ago +1

      Don't understand those comments, is it some kind of recurring joke on this channel? You only need to import the audio recorded with the video into Audacity (free software), find a moment in the track where the guy doesn't talk and select at least a second or two of the "silence". You select the noise reduction option from a dropdown list, adjust the parameters and go back to the track. Select a bit of the track where the guy talks and go back to noise reduction - you can then click preview in order to determine if the default parameters for noise reduction work well for the audio. If you're not satisfied, you tweak the three settings in order to get rid of the static noise and artifacts. You then go back again, select the entire audio track, go back to noise reduction, set the desired parameters again (they reset after you exit) and apply them. Export the audio and replace the original audio with the new track in your video editor. Problem solved, no static from a shitty integrated mic, no droning sound from the air conditioning, no noise from cars passing by outside, and the speaker doesn't have to sound "robotic" or "tinny" after you do this; you just need to put a little effort into it. I don't understand why people don't do this with every video, while they take their time to make animations and whatnot.

    • @aajjeee
      @aajjeee 8 years ago

      Michał Pawlak It's ironic because your comment makes it seem hard when it isn't; it's not a recurring joke, but it is a joke

    • @unverifiedbiotic
      @unverifiedbiotic 8 years ago

      Ok, if you've got a better way to introduce someone to the concept then go right ahead.

  • @charak100able
    @charak100able 6 years ago

    Excellent explanation! Finally I understand how to interpret the intermediate layers. Thanks!

  • @dh4817
    @dh4817 8 years ago +54

    At minute 4:06 I counted 25 rights, rght?

    • @zoranhacker
      @zoranhacker 8 years ago +9

      right

    • @Tortoise7597
      @Tortoise7597 8 years ago +1

      +Diego A Hernàndez right

    • @DrSpooglemon
      @DrSpooglemon 7 years ago

      I did notice a couple of yeahs...

    • @joelcoll4034
      @joelcoll4034 7 years ago

      Diego A Hernàndez right

    • @stewiegriffin6503
      @stewiegriffin6503 7 years ago +14

      He said it only 84 times. You are welcome:
      0:08 0:31 0:51 1:16 1:21 1:28 1:33 1:40 1:44 1:51 1:59 2:02 2:04 2:17 2:21 2:32 2:38 2:40 2:43 3:02 3:08 3:10 3:14 3:18 3:25 3:32 3:35 3:43 3:46 3:48 3:50 3:53 4:03 4:10 4:14 4:26 4:33 4:50 4:52 5:02 5:06 5:13 5:17 5:46 5:55 6:00 6:04 6:07 6:16 6:34 6:39 6:43 6:47 6:57 7:04 7:08 7:30 7:34 7:35 7:46 7:48 7:52 7:55 8:06 8:15 8:27 8:38 8:43 8:56 9:03 9:13 9:18 9:20 9:26 9:31 9:51 9:54 10:08 10:11 10:15 10:23 10:27 10:29 10:35 10:43

  • @KohrAh
    @KohrAh 8 years ago +4

    It's not one of the best videos, right?

  • @sau002
    @sau002 7 years ago

    I like the dialogue-based approach.

  • @StealthMaster123
    @StealthMaster123 8 years ago

    Didn't notice that he always says right until I read the comments. Thank you guys....

  • @1cy3
    @1cy3 8 years ago

    3:39 Though it sounds like he's saying "overfeeding", he actually means "overfitting".

  • @jessicap2589
    @jessicap2589 8 years ago

    I feel like I use this channel to practice mindfulness.

  • @ToyotomiHideyoshiGre
    @ToyotomiHideyoshiGre 8 years ago

    I recommend "Preferred Networks".

  • @xenathcytrin202
    @xenathcytrin202 6 years ago

    Now what would happen if you took an output and had it affect something further down the tree?
    I wonder if that kind of feedback loop could be useful in some way.

  • @smzaccaria
    @smzaccaria 8 years ago

    When you say "active" in a hierarchy do you mean marking something as "is" or "is not" by using binary variables 1's and 0's?

  • @js_models
    @js_models 4 years ago

    Overfitting might give you an answer that is wrong right?

  • @shipper611
    @shipper611 8 years ago +2

    Super interesting. I'm wondering: what is the difference between deep learning and structural equation modeling?? Mathematically it seems really similar, doesn't it?

    • @Vulcapyro
      @Vulcapyro 8 years ago

      +shipper611 SEM is so different that I'm not even quite sure how to start explaining how they're different lol

    • @shipper611
      @shipper611 8 years ago

      okay, but isn't it also basically a regression with intermediate values?

    • @Vulcapyro
      @Vulcapyro 8 years ago +1

      shipper611
      You can use neural networks to perform regression, but using them for something like regression could be pretty overkill due to the difference in complexity of the problem versus the representational power of nets. You can definitely find elements of regression techniques scattered about though, particularly considering a single neuron often looks like it performs some high-dimensional linear regression with a non-linear function applied afterwards.

  • @ethanpet113
    @ethanpet113 8 years ago

    Just how much tractor feed do you guys have over at Nottingham?

  • @MichaelKondrashin
    @MichaelKondrashin 8 years ago

    So how does deep learning differ from neural networks?! From the description it is the same thing.

  • @Frexican54
    @Frexican54 8 years ago +1

    So it's a really complicated multiple linear regression, basically?

    • @offchan
      @offchan 8 years ago +2

      No. If a network contains only weighted sums of inputs, like linear regression does, then it will just be one big linear function no matter how deep your network is.
      We need a non-linear component inside the network, and that is called the activation function.
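
      A quick NumPy check of that claim (an illustrative sketch): stacking two purely linear layers collapses into a single linear layer, so depth buys nothing without a non-linearity.

      import numpy as np

      rng = np.random.default_rng(2)
      w1, w2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
      x = rng.normal(size=3)

      deep = w2 @ (w1 @ x)            # two linear layers, no activation
      flat = (w2 @ w1) @ x            # one pre-multiplied linear layer
      print(np.allclose(deep, flat))  # True: the "deep" network is still linear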

    • @knowlen
      @knowlen 8 years ago

      Close, it's more like multinomial logistic regression, but with nonlinear activations on the hidden layers. It's still basically just regression though, and if you can understand linear regression you won't have trouble implementing this.
    • @Frexican54
      @Frexican54 8 years ago

      Nick Knowles yeah I took a class on linear and time series regression.

    • @knowlen
      @knowlen 8 years ago +1

      Yeah, time series regression is like another level above this imo. If you want to do everything with neural nets, this is kind of the breakdown: use LSTM networks for time series, convolutional networks for image recognition, and deep ANNs for classification/prediction. Then there's Deep Q Networks (super popular in the research community at the moment), which are used for general artificial intelligence. The progression of machine learning curriculum right now (at least at my university) is usually a path from linear reg -> logistic reg -> ANN, with maybe some support vector machines (they beat ANNs when data is less abundant but high-dimensional) and clustering stuff thrown in for diversity.
  • @casperTheBird
    @casperTheBird 5 years ago

    So are the groupings of variables decided by people or by the AI machine?

  • @apenasmeucanal5984
    @apenasmeucanal5984 8 years ago +18

    #NiceShirt

    • @Adraria8
      @Adraria8 8 years ago +2

      Hissssss!

    • @DasIllu
      @DasIllu 8 years ago

      Made to screw with the discrete cosine transformation ;-)

  • @jacobdawson2109
    @jacobdawson2109 8 years ago

    The way it's described, it sounds like each node is a linear combination of its inputs. In which case, the output node is just some polynomial of the input nodes. To me, it seems like some non-linear combination of nodes would be needed to make the network truly useful, or am I missing something?

  • @federook78
    @federook78 1 year ago

    Yes, it's very clear. That it is a second language, that is.

  • @ominous450
    @ominous450 6 years ago

    Reminds me of many of my TA’s in University

  • @rickhernandez8301
    @rickhernandez8301 8 years ago

    If anyone here has studied computer science, you know exactly why he says "right" every time.

  • @juliank7408
    @juliank7408 1 year ago

    Thanks for this intuitive explanation!

  • @danielgrace7887
    @danielgrace7887 8 years ago

    If only a linear amount of a variable is fed into the network, then how can the resulting value go down and then back up? Wouldn't that require a quadratic function?

  • @ViktorLox
    @ViktorLox 8 years ago

    Right?

  • @ddbrosnahan
    @ddbrosnahan 8 years ago

    How long until our genetic information is uploaded and computers determine our genetic value?

  • @frull3
    @frull3 8 years ago +6

    Perfect drinking game: every time he says right... SHOT!

  • @waxwingvain
    @waxwingvain 8 years ago +2

    This guy is the Ben Affleck of deep learning.

  • @therflash
    @therflash 8 years ago

    The beginning of the video is misleading. Sean asks if you just sum it up to get a big number, but that's not what happens. You sum the values Zx*Ax; that means you multiply each feature Zx by some (initially random) value Ax and sum THAT, so you end up with some predicted price for each of your examples. Then, you look at how far this prediction is from reality, and that gives you the error value for that example. Then you average your error values. Then you slightly increase/decrease value A1 in a way that will decrease the average error. Then you do the same for A2. Then A3, A4, A5, etc. Knowing the derivatives of your average error with respect to each Ax will tell you which direction each Ax has to be moved in order to decrease the average error, so you need to compute those before you actually start moving the Ax's. After you move all the Ax's by a tiny bit, you make new predictions, and those should be closer to reality. You calculate new errors and a new average error; this time your average error is hopefully smaller. And again, you calculate the derivatives of the average error with respect to each Ax and move each Ax in a way that will decrease the average error. You keep doing this over and over till your predictions are reasonably close to reality, or you fail miserably and realize that you need to square/multiply/log/scale/add/remove/.../.. some of your features, buy more computers, drink more coffee, get more data, dance during the full moon, sacrifice a goat, etc...
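
    A bare-bones NumPy version of the loop described above (an illustrative sketch; the synthetic feature matrix and the learning rate are made up): each weight Ax is nudged along the negative gradient of the average squared error.

    import numpy as np

    rng = np.random.default_rng(3)
    Z = rng.normal(size=(100, 5))   # 100 example houses, 5 features each
    true_A = np.array([3.0, -1.0, 2.0, 0.5, 4.0])
    prices = Z @ true_A             # the "reality" the model should recover

    A = rng.normal(size=5)          # initially random weights
    lr = 0.1
    for step in range(500):
        pred = Z @ A                # predicted price for each example
        err = pred - prices         # how far each prediction is from reality
        grad = Z.T @ err / len(Z)   # derivative of the average squared error
        A -= lr * grad              # move each Ax to decrease that error
    print(np.round(A - true_A, 4))  # ~zero: the weights have been recovered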

    • @Vulcapyro
      @Vulcapyro 8 years ago

      +therflash Yeah, he didn't really even go into the basics of how this works. He uses the example of regression because that's something he knows, but I don't think this video actually got much across besides maybe why hidden layers _might_ be a good idea. High-level videos are fine, but if you're actually trying to present the subject to learners, it's important to have something more Brailsford-like in its clarity and straightforwardness of delivery, even if the content is slowly paced.

  • @RBYW1234
    @RBYW1234 4 years ago

    I've got 4 tabs up, they're all deep learning, and I don't wanna exit them....

  • @berwyn9652
    @berwyn9652 11 months ago

    I wish I could like this video more than once.

  • @billschannel1116
    @billschannel1116 8 years ago

    I was interested in explaining this to a friend, but I didn't know where you can buy dot-matrix sheet-fed paper. Does anyone know if COBOL programmer's paper will do?

  • @feastures
    @feastures 8 years ago

    You can't get a house price by adding these factors, you need multiplication instead. Can't believe this is the basis of this video.

    • @jamiealkire6238
      @jamiealkire6238 8 years ago

      In that case I should find a home with zero bathrooms going for $0. Then I'll just pay for a bathroom to be installed and I'll save a bundle!

    • @feastures
      @feastures 8 years ago

      +Jamie Alkire You can't give a bathroom a fixed additional price, nor age, district, building material, ... The guy's premise is totally wrong. Do you see it?

  • @philorkill
    @philorkill 8 years ago +2

    More confusing than clarifying.

  • @tylermassey5431
    @tylermassey5431 5 years ago

    They are missing two very important data points: the buyer and the seller

  • @MrXperx
    @MrXperx 8 years ago

    We can do a principal component analysis and reduce the extra variables right?

    • @tinyman392
      @tinyman392 8 years ago

      Depends on the algorithm. Some neural nets actually do feature selection implicitly. For example, a convolutional neural network will select its own features to feed into a standard, fully-connected neural network. A decision tree also does its own feature selection by using a greedy algorithm for learning. It depends on the algorithm class chosen and the inputs. Obviously you can still do your own beforehand to give the learning algorithm a "hint" (I use that term a little loosely).
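
      A minimal NumPy sketch of the PCA idea from the parent comment (illustrative only): project centered data onto its top principal components to drop redundant variables before training.

      import numpy as np

      rng = np.random.default_rng(4)
      X = rng.normal(size=(200, 6))                      # 200 samples, 6 features
      X[:, 5] = X[:, 0] + 0.01 * rng.normal(size=200)    # feature 5 duplicates feature 0

      Xc = X - X.mean(axis=0)                            # center the data
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes are rows of Vt
      k = 5                                              # keep the top-k components
      X_reduced = Xc @ Vt[:k].T                          # 6 correlated inputs -> 5
      print(X_reduced.shape)                             # (200, 5)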

  • @MushookieMan
    @MushookieMan 4 years ago

    If the relationship between these variables is linear, the structure could be flattened into only two layers, i.e. with algebraic substitution. I am left confused about what mathematical relationship exists between nodes in a real neural network.

  • @mattmatty1234
    @mattmatty1234 8 years ago

    What is an example of a bias term from his house explanation?