Deep Learning Basics: Introduction and Overview

  • Published: Jan 7, 2025

Comments • 893

  • @lexfridman
    @lexfridman  6 years ago +2075

    First lecture in the 2019 deep learning series! It's humbling to have the opportunity to teach at MIT and exciting to be part of the AI community. Thank you all for the support and great discussions over the past few years. It's been an amazing ride.

    • @sonjoydas7911
      @sonjoydas7911 6 years ago +20

      You are awesome, sir!

    • @TK-ke3nv
      @TK-ke3nv 6 years ago +4

      I was waiting for it 😬🤗

    • @colouredlaundry1165
      @colouredlaundry1165 6 years ago +5

      Go Go Lex!!! This is Awesome! Best way to start this year

    • @theaichannel242
      @theaichannel242 6 years ago +14

      This is the best AI talk I have seen; I'm looking forward to developing my skills. I have so many ideas for tackling some of the harder questions, and some issues I've noticed in training models and data gathering, which I think are currently flawed.

    • @lexfridman
      @lexfridman  6 years ago +44

      @Mohit Sharma We're releasing tutorials on our GitHub repo: github.com/lexfridman/mit-deep-learning

  • @b1ueberrycheesecake
    @b1ueberrycheesecake 4 years ago +821

    0:48 Deep Learning Basics Summary
    5:00 Visualization of 3% of the neurons and 0.001% of the synapses in the brain
    6:26 History of Deep Learning Ideas and Milestones
    9:13 History of DL Tools
    11:36 TensorFlow in One Slide
    13:32 Deep Learning is Representation Learning
    16:05 Why Deep Learning? Scalable Machine Learning
    17:10 Gartner Hype Cycle
    18:18 Why Not Deep Learning?
    21:59 Challenges of Deep Learning
    29:20 Deep Learning from Human and Machine
    30:00 Data Augmentation
    31:36 Deep Learning: Training and Testing
    32:10 How Neural Networks Learn: Backpropagation
    32:28 Regression vs Classification
    32:54 Multi Class vs. Multi Label
    33:13 What can we do with Deep Learning?
    33:45 Neuron: Biological Inspiration for computation
    34:14 Biological and Artificial Neural Networks + Biological Inspiration for Computation
    35:55 Neuron: Forward Pass
    36:40 Combining Neurons in Hidden Layers: The "Emergent" Power to Approximate
    37:37 Neural Networks and Parallelism
    38:00 Compute Hardware
    38:27 Activation Functions
    39:00 Backpropagation
    40:07 Learning is an Optimization Problem
    41:34 Overfitting and Regularization
    42:58 Regularization: Early Stopping
    44:04 Normalization
    44:32 Convolutional Neural Networks: Image Classification
    47:52 Object Detection / Localization
    50:03 Semantic Segmentation
    51:27 Transfer Learning
    52:27 Autoencoders
    55:05 Generative Adversarial Networks (GANs)
    57:03 Word Embeddings (Word2Vec)
    58:58 Recurrent Neural Networks
    59:49 Long Short-Term Memory (LSTM) Networks: Pick what to forget and what to remember
    1:00:15 Bidirectional RNN
    1:00:50 Encoder-Decoder Architecture
    1:01:38 Attention
    1:02:10 AutoML and Neural Architecture Search (NASNet)
    1:04:40 Deep Reinforcement Learning
    1:06:00 Toward Artificial General Intelligence

    • @TheBlundert4ker
      @TheBlundert4ker 4 years ago +8

      Thank you

    • @flatcurve6465
      @flatcurve6465 4 years ago +11

      You're doing god's work

    • @LadyCoyKoi
      @LadyCoyKoi 4 years ago +6

      You are awesome... May many great things come into your life.

    • @720cinema8
      @720cinema8 4 years ago +11

      It was quite nice of you to take the time so we could save some :). A selfless creature, indeed!

    • @maximilianobue7460
      @maximilianobue7460 4 years ago +1

      Oliver Woods No, his friend is; however, he is allowed to read his slides and present the lecture, as he holds a degree in the liberal arts

  • @shadowcoder887
    @shadowcoder887 2 years ago +202

    3 years later... he never would have guessed he'd be best buds with Joe Rogan and David Goggins and interview Ye and others. Crazy

    • @sandigoletic7204
      @sandigoletic7204 2 years ago +9

      Shows you that discipline, being a real human with a heart, and grind will get you to your goals. I am too dumb for this video.

    • @49erman2
      @49erman2 2 years ago +1

      For reals!

    • @justinking5964
      @justinking5964 2 years ago +2

      Can AI be used to predict lottery Pick 3? I have a whole unique method that needs deep learning aid.

    • @justinking5964
      @justinking5964 2 years ago +1

      @@dyfrigshandy Thanks, though I don't know what it is. I have been researching it for a decade. I wanna share with people with the same hobby.

    • @aurelianspodarec2629
      @aurelianspodarec2629 2 years ago +1

      @@sandigoletic7204 And still scared to post an interview with Andrew Tate :d

  • @BruceW779
    @BruceW779 1 year ago +59

    This might be 4 years old, but it is still incredibly helpful in understanding the current state of ML and ANNs. Thank you, Lex.

  • @abrar4466
    @abrar4466 4 years ago +185

    I slept listening to you this morning and saw my mom reading deep learning books in my dream.

    • @webdev8284
      @webdev8284 4 years ago +1

      Lmfaoooo 😂😂

    • @yasinsharif3928
      @yasinsharif3928 3 years ago +9

      Your unconscious is telling you to learn

    • @crbradbury8282
      @crbradbury8282 3 years ago

      TMI. A bit too TMI.

    • @axea4554
      @axea4554 3 years ago

      Whoa

    • @danielg3924
      @danielg3924 3 years ago +2

      This means the genes on your mother's side are pushing you to learn, improve, overcome. She is saying "you, my son, are the future of intelligence in the universe... for good... or for ill" [ominous music intensifies]

  • @franktfrisby
    @franktfrisby 4 years ago +356

    I really admire the work that Lex is doing, both at MIT and on his podcast!

  • @maceovikasmr569
    @maceovikasmr569 5 years ago +2041

    When she says “go deeper” but you’re all out of PowerPoint slides

  • @heyitsbruno
    @heyitsbruno 1 year ago +9

    Watching this in 2023, after the advancements of generative pretrained models, is mind-blowing. Things have advanced so much in 4 years.

  • @pratcus
    @pratcus 2 years ago +6

    Lex, you are amazing as a lecturer and a fine example of a loving human. Your voice is so deep, assertive, and clear to the audience.
    You're handsome, with a good attitude and body language, and can easily connect with people. I pray God blesses you and your family because we need you. Congrats, man.

  • @SquidElvis
    @SquidElvis 3 years ago +11

    So talented, this guy should make his own podcast

  • @eshwarprasad2541
    @eshwarprasad2541 5 years ago +54

    Thank you so much, Lex. This will help us a lot. It will help students who can't afford paid online courses and have no one in the neighbourhood to teach them.

  • @fusuyreds1236
    @fusuyreds1236 1 year ago +25

    Electrical and computer engineering student here who does Jiu Jitsu as well. You can imagine how big a fan I am of Lex. So cool to see him actually going into the technicalities of his work.

  • @matthewwalsh7813
    @matthewwalsh7813 3 years ago +84

    This lecture is awesome and really inspiring. I've been a fan for years now, Lex, and I'm really happy to see your success. I just wanted to point out that I believe your analysis of "One Shot Learning" re: human bipedal locomotion might be a little off base. The learning and development process that leads to bipedalism is characterized by a list of precursors like crawling, sitting up, and standing up. This process usually takes between 1 and 2 years. That time (and the hundreds if not thousands of reps that come with it) is needed to build, from the ground up, both the requisite muscular strength and the requisite neural pathways for these coordinations to be possible. The process can be accelerated through coordination-specific training on the part of the parents (which occurs quite often). Errors that occur in this process lead to hardcore biomechanical problems down the road (e.g. requiring knee replacement at 55). Bipedalism is pretty complex, and is way harder than quadrupedalism, which would fall more within the scope of your one-shot learning claim.

    • @lesschinskee
      @lesschinskee 1 year ago +1

      Loved your post.
      Let your child crawl to build their core strength before you worry that they aren't standing yet.
      Putting diapers/nappies on a crawling child is similar to hobbling a horse.
      Think about it.
      The longer they crawl, the better they will be able to walk.
      Obviously, letting them crawl longer without a massive chunk of material forcing misaligned muscular development is a huge inconvenience to the caregiver.
      Prioritise your goals.

  • @ofviv
    @ofviv 2 years ago +29

    I don't exactly know why, but I am so proud of him.
    Both as a human and as a person who still puts effort into not letting knowledge become a source of cynicism. There's something about not giving up on love and other intellectually ridiculed concepts such as kindness. There's something pure about it.
    And for that purity, I am so proud of him.

  • @Lee-xb7lb
    @Lee-xb7lb 6 years ago +157

    Thank you for sharing this on YouTube. This is what gives me hope in today's world. The walls that surround knowledge are coming down. Go team PEOPLE.

  • @Rahul-tg9gj
    @Rahul-tg9gj 6 years ago +38

    Superb lecture. The guy speaks as if he sells dreams. Great confidence and knowledge.

  • @arsh2489
    @arsh2489 11 months ago +1

    Important Elements
    9:58
    Simple Python neural network digit-classification model --> 87% accuracy (a minimal code sketch follows this comment)
    Step 1: Import the necessary libraries (TensorFlow)
    Step 2: Import the dataset for the model
    Step 3: Define the layers of the neural network classifier (drawn number --> classified number) --> use TensorFlow to run data through the network (input layer, hidden layer, output layer)
    Step 4: Train the model over epochs (the number of passes the data makes through the network to increase the model's accuracy; model.fit)
    Step 5: Evaluate the trained model (display test accuracy)
    Step 6: Actually use the model to predict what is in an image (in this case, which number the user wrote)
    16:02
    Ability to remove the input of human experts:
    * Closer examination of raw data without human feature extraction
    * Doesn't require a human step before classification
    22:02
    Supervised learning:
    31:35

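    A minimal code sketch of the six steps in the comment above, assuming TensorFlow 2.x with Keras and the MNIST handwritten-digit dataset (the dataset choice, layer sizes, and epoch count are illustrative assumptions, not taken from the lecture):

        # Step 1: import the necessary library
        import tensorflow as tf

        # Step 2: load the dataset (MNIST digits, assumed) and scale pixels to [0, 1]
        (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
        x_train, x_test = x_train / 255.0, x_test / 255.0

        # Step 3: input layer (flattened 28x28 image), one hidden layer, output layer
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),  # one unit per digit
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Step 4: train over a few epochs (passes through the training data)
        model.fit(x_train, y_train, epochs=5)

        # Step 5: evaluate the trained model on held-out test data
        test_loss, test_acc = model.evaluate(x_test, y_test)
        print("test accuracy:", test_acc)

        # Step 6: predict which digit is in a single image
        probs = model.predict(x_test[:1])
        print("predicted digit:", probs.argmax())
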
  • @dennishuang3498
    @dennishuang3498 6 years ago +95

    Lex is a really admirable professor, applying academia to solve real-world problems through engineering. Kudos!

  • @avichalsharma3856
    @avichalsharma3856 6 years ago +4

    English is not my first language, but your voice is clear and your pronunciation easy to understand. Keep up the good work.

    • @michaelwalshaw8305
      @michaelwalshaw8305 5 years ago

      Piggybacking on Mr. Sharma's comment: "English is my first language, but your voice is clear and pronunciation easy to understand. Keep up the good work."

  • @alexiscao8749
    @alexiscao8749 5 years ago +35

    I have never heard a technical course delivered so poetically!

  • @josephbrocato6693
    @josephbrocato6693 10 months ago +1

    Lex Fridman is an absolute fucking winner. A winner doing winner things. Is there a human being on earth who doesn't like the guy? What an awesome blessing of a human being. We need more.

    • @madmen1986
      @madmen1986 9 months ago

      facts

    • @maxonwax6172
      @maxonwax6172 8 months ago

      Sam Hyde doesn't like Fridman, and he's somehow right.

  • @arminthaller7284
    @arminthaller7284 2 years ago +1

    I am very fond of the interviews in your podcast. Born in 1961, I started my academic career studying computer science. I was one of those guys who chose the subject because I could perform well above average with little effort. Now I am a lowly Catholic priest still interested in all kinds of science. Had I stayed in the computing business, I would have specialized in data modeling, data mining, and data visualization. The lesson raised some philosophical questions with practical consequences that I would like to research in your line of work.
    1. The philosophical issue arises with the very definition of an information bit representing a yes/no answer to a given question. The most important thing in the whole computation/data business is to select the right questions and a well-enough-working way to answer them. I think it wasn't Bill Gates' abilities as a programmer that made him successful, but the set of questions he wanted to provide solutions to.
    This is where I draw the line separating human intelligence from artificial intelligence. Human intelligence is about selecting the right questions. Once that is done and there is some relation to computable empirical data, I think AI will outperform human efforts as it develops. I always expected AI to become superior in games like chess or poker, because those games are inherently digital, based on a restricted set of predefined questions (i.e. 'Is there a white queen on square e1?').
    Training an AI somehow expands the limits I assumed as given. The training of an AI creates a layer of abstraction, something I previously saw as purely human.
    2. If I were to research AI, I would try to visualize those abstractions. I would implement functions like 'Draw many different cats' if the AI is trained to recognize cats, or 'Draw many different pictures similar to cats and equally similar to dogs'.
    Then I would try to understand what the AI perceives as cat-like and whether there is better recognition when repeating the learning with the AI-generated examples.
    Has someone already tried this strategy?
    Did it work?
    3. I am a fan of Gregory Bateson's theory of 'binocular learning'.
    So, when researching autonomous driving, I would experimentally use two cams with two AIs interacting like the two sides of a human brain and try to evaluate whether I implemented something that generates the added knowledge Bateson describes as the result of comparing different descriptions of the same thing.
    If successful in generating Bateson's additional value, I would try to understand whether there is a general difference between humans and AIs when generating deeper understanding by using that method, probably based on Bateson's levels of learning.
    Has anyone done research like this?
    What were the results?
    I guess using multiple input devices (i.e. stereoscopic cameras, or combining cameras at different electromagnetic wavelengths) will greatly improve the reliability of AI's results, while using multiple interconnected AIs will mainly improve the researcher's theoretical understanding by 'listening' to AIs 'discussing' their abstractions.
    It's fine if the answers to my questions don't go far beyond 'yes' or 'no', because I never invested any time in understanding neural networks or AI.

  • @devilisahomo
    @devilisahomo 5 years ago +326

    "welcome everyone to 2019, it's really good to see everybody here"
    Time travellers?

    • @diegothaumaturgo
      @diegothaumaturgo 4 years ago +2

      HAHAHA good point.

    • @LadyCoyKoi
      @LadyCoyKoi 4 years ago +14

      I'm going to say "Welcome everyone to 2021... you survived Covid-19 and Trump's incompetence."

    • @devilisahomo
      @devilisahomo 4 years ago +11

      @@LadyCoyKoi
      Trump saved America.
      God bless Trump

    • @c1dv1c1ous
      @c1dv1c1ous 4 years ago +7

      We're all time travelers. I've never met anyone stranded in one moment in time.

    • @ciarfah
      @ciarfah 4 years ago +6

      @@c1dv1c1ous You've never been to one of my lectures then

  • @idanwekhai
    @idanwekhai 6 years ago +8

    I have school exams to study for... but this video is more exciting to watch.

  • @eni4ever
    @eni4ever 6 years ago +6

    Amazing talk! Thank you, Lex! What an exciting time to be alive...

  • @souravsahoo1582
    @souravsahoo1582 4 years ago +2

    You know what, Lex will revolutionize the world... a great scientist and a fluent speaker. It's always a pleasure to listen to Lex 😍😍

  • @muhammedpektas7169
    @muhammedpektas7169 6 years ago +21

    Thanks, Lex, for sharing. Now I can follow this training from Turkey. I wish you success. Good work.

  • @alexmyers3716
    @alexmyers3716 2 years ago +4

    It's interesting watching this lecture at the end of 2022 and seeing just how many problems deep learning has solved since this video was released. At 27:43, we've already reached art and book writing, and are well on our way to a few others. And yet self-driving hasn't advanced much at all.

    • @koho
      @koho 1 year ago

      Well, don't go by Tesla. Self-driving has advanced a lot, even since this lecture. Veritasium has a great video on this.

  • @samhvidberg5612
    @samhvidberg5612 2 years ago +3

    It’s super helpful to know how AI systems work, even though I don’t work in tech. It also helps me feel relieved to know that AI is still very far from becoming sentient. I didn’t realise just how amazing the human brain is in comparison.

  • @toastersman217
    @toastersman217 3 years ago +6

    This guy should start a podcast. I am sure it would be popular.

  • @ArseniyCat
    @ArseniyCat 1 year ago +1

    Thank you for your honesty, Dr. Fridman. Brilliant and thought-provoking to those who can ask questions to answer.

  • @Flameandfireclan
    @Flameandfireclan 3 years ago +2

    I would pay this man $$$$ just to keep pumping out lectures weekly

  • @peacock8730
    @peacock8730 4 years ago +60

    A great introduction lecture! Full of "fruit"; I learned a lot in just an hour. Thanks a lot for sharing!

  • @BenjaminGolding
    @BenjaminGolding 5 years ago +10

    This is a great rundown of the general DL basics. Really good lecture

  • @IfadArdinx
    @IfadArdinx 2 years ago +3

    This lecturer has a good voice. He should start a podcast or something

  • @efleishermedia
    @efleishermedia 1 year ago +2

    This is amazing, Lex! Superb FREE content, so be the cat and let curiosity kill it, over and over again, loving every secret.

  • @floridaLise
    @floridaLise 2 years ago

    "Many times I've wondered how much there is to know" You are an impressive human Mr. Fridman. You saved the best for Last 1:04:41 (hungry cats)

  • @MrQasqyr
    @MrQasqyr 6 years ago +29

    Thank you so much, Mr. Lex Fridman, for contributing and sharing your lectures!

  • @cjphanson
    @cjphanson 3 years ago +10

    Amazing lecture. Lex, you are a legend. Thank you. This runs at 1.25x really well too (for the busy minds out there...)

  • @Mrfunkysheep
    @Mrfunkysheep 2 years ago +1

    The way you say course 6.S094 makes you sound like an awesome robot professor, Lex!

  • @lisamuir4261
    @lisamuir4261 6 months ago +1

    Had no idea Lex gave lectures. Multitasker

    • @elvisvan
      @elvisvan 2 months ago

      I'm as surprised as you are; dude's a genuine intellectual.

  • @AlbertBrucelee
    @AlbertBrucelee 5 years ago +4

    Thank you so much, Lex! Those of us from all over the world who can't afford to go to MIT can learn the same things your students learn!

  • @KISHORENEDUMARAN
    @KISHORENEDUMARAN 4 years ago +6

    "All kinds of problems are now in digital form" man, that was deep!

  • @ahmsokhbu4913
    @ahmsokhbu4913 2 years ago +3

    Dope lecture. Good coverage. I love the hidden point that performance depends on smaller batch sizes, which to me means higher sample rates. Data is capital.

  • @akkp5810
    @akkp5810 2 years ago +1

    Great explanation. This is the first lecture I am able to understand very easily. The way of explaining is mesmerizing.

  • @kprabhakar975
    @kprabhakar975 6 years ago +10

    Thank you very much, Professor. It is really fulfilling to listen to you. I think at the age of 64 I will be able to work and ask good questions.

    • @ankitkeshav2669
      @ankitkeshav2669 6 years ago +1

      I really appreciate your urge to learn which even I at 20 have lost a bit.

    • @kprabhakar975
      @kprabhakar975 6 years ago

      @@ankitkeshav2669 Thank you Ankit.

  • @tiago.ramos.
    @tiago.ramos. 3 years ago +2

    The best part was the honesty about the possible secondary effects that deep learning might have... nonetheless, we should definitely go ahead with artificial intelligence, never forgetting that the C language is always there if we need to take a step back :)

  • @hearstzhang3881
    @hearstzhang3881 5 years ago

    Thanks for sharing. My daughter is the Frenchwoman at MIT, majoring in Computational and Systems Biology.

  • @Makiverem-kv6oe
    @Makiverem-kv6oe 1 year ago +1

    Those slides… Man, I wish our lecturers put that much effort into compiling the slideshows that they’re in fact going to teach from for multiple years.

  • @stevenrogersfineart4224
    @stevenrogersfineart4224 3 years ago +3

    I wish I could watch an entire course by Lex :)

    • @alexb3617
      @alexb3617 1 year ago

      I was wondering about the same, but I guess that's not available online.

  • @CodingBrainTeaser
    @CodingBrainTeaser 2 years ago +1

    Nice! Really, Lex is doing a great job. Lex's podcasts are very nice; I listen to them every week. I suggest you should also watch ..... you will get an amazing experience with Lex ........ :)

  • @tommyhuffman7499
    @tommyhuffman7499 2 years ago +1

    Been a fan of your podcast for a while. Really puts you in a whole new light to see you teach. You really seem in your element teaching!

  • @jacobhunwick1588
    @jacobhunwick1588 4 years ago +1

    Lex, you are so old school; it's great.

  • @spirit6221
    @spirit6221 2 years ago +2

    Good to see you teach... a teacher who is a continuous learner.

  • @78Gdam
    @78Gdam 1 year ago

    I've been listening to Lex's podcast for a while; this is the first time I have audited one of his courses. He is starting to remind me of the Carl Sagan of our age.

  • @ahmsokhbu4913
    @ahmsokhbu4913 2 years ago

    Beginner > Hazard > Expert. Love it!

  • @wthxrsh
    @wthxrsh 2 years ago +1

    Every piece of content you put out is a gem, Lex!

  • @danielsoares2479
    @danielsoares2479 6 years ago +12

    What a clear explanation! That is a real professor!

  • @jehriko7525
    @jehriko7525 1 year ago

    Lex Fridman has become a great inspiration to me.

  • @jeremyzimmerman5603
    @jeremyzimmerman5603 3 years ago +1

    The mark of a master is that he/she makes the complicated simple... not simplistic... but simple enough for the uneducated to be able to appreciate the major points. Thank you, Lex.
    Also, someone who I assume is not Lex drew me into a strange WhatsApp conversation that I terminated because the language was cryptic and not at all characteristic of Lex. You might change your YouTube password... me, recommending that an MIT faculty member change his password. Just trying to help preserve your brand equity and the trust we place in you.

  • @pwnangel12
    @pwnangel12 6 years ago +24

    Thank you for being such an amazing source of information and learning.

  • @MinhPham-uw1eb
    @MinhPham-uw1eb 5 years ago +2

    I didn't know that agent 41 teaches machine learning. :)) This man is not just a professor; he is a popular figure, a celebrity for young people to look up to.

  • @ZaneMcFate
    @ZaneMcFate 5 years ago +11

    This is an extremely useful resource; thank you for sharing this!

  • @KeepingUp_withAI
    @KeepingUp_withAI 6 years ago +2

    Thank you, Lex, for all your contributions and for sharing so much on YouTube. My life would not be the same without your podcast series.

  • @Esranurkaygin
    @Esranurkaygin 1 year ago +1

    It's pretty amazing to see how excited "nervous" he is to give this lecture, just as much as most of us are to learn this topic. :D

  • @nikteshy9131
    @nikteshy9131 2 years ago

    Thank you very much ))) Lex Fridman and MIT
    Thank you very much, Lex Fridman and MIT, for the lecture ))))
    from Russia )))

  • @richardsager9867
    @richardsager9867 4 years ago +10

    I get a weird feeling when I hear Lex talk. There's something that binds deep learning, media programming, and the overall takeover of a free-thinking society. The way they collect data will not change. The population will change to make it easier for them to collect data and keep control.

  • @alexanderd8398
    @alexanderd8398 5 years ago

    That's cool. A free lecture from MIT on YouTube. Very high quality. Thanks.

  • @funnyguyjohnson
    @funnyguyjohnson 1 year ago +1

    I've been studying and getting certifications in Prompt Engineering, Mathematics, Coding, Data Science, Open Artificial Intelligence, Machine Learning, Deep Learning, and Neural Networks for a few years now. I can't find a job anywhere. When I'm in an interview and talk about the cost-saving benefits and increase in productivity from using Artificial Intelligence and automation, they usually end the interview right away and send a Dear John letter saying they went with another candidate.

    • @ShitWrangler
      @ShitWrangler 1 year ago

      My speculation is there are so many baby boomers in North America drawing a pension or social security that automation simply can't pay for it. So our GDP per capita is gonna suffer badly as we struggle to automate and achieve more efficiency and productivity in order to prop up THAT generation.

  • @lasredchris
    @lasredchris 5 years ago +1

    Based on input parameters. Supervised: predict apartment price (see the sketch below)
    Supervised learning -> unsupervised learning
    Humans can learn from very few examples
    Machines need thousands/millions of examples

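    A minimal sketch of the supervised example in the comment above (predicting an apartment price from input parameters); the features, prices, and choice of a linear least-squares model are made-up assumptions for illustration:

        import numpy as np

        # Hypothetical labeled data: [square meters, rooms] -> observed price
        X = np.array([[50, 1], [70, 2], [90, 3], [120, 4]], dtype=float)
        y = np.array([150_000, 210_000, 270_000, 360_000], dtype=float)

        # Supervised learning: fit weights (plus a bias term) by least squares
        A = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
        w, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Predict the price of an unseen 80 m^2, 2-room apartment
        print(np.array([80.0, 2.0, 1.0]) @ w)
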
  • @eaf888
    @eaf888 2 years ago

    OMG, the way the planets were moving explains retrogrades. I always wondered... why would a planet go backwards? I have so many questions now (16:02 - Why Deep Learning (and Why Not))

  • @Lunsterful
    @Lunsterful 6 years ago +7

    Awesome. Now to spend 30 years learning coding, math, statistics, linguistics, philosophy and then become an expert in a problem domain. At least I won't be bored.

    • @thefool733
      @thefool733 6 years ago +3

      no neuroscience?

    • @Lunsterful
      @Lunsterful 6 years ago +6

      @@thefool733 That too. I wish I had two lifetimes.

  • @jmchil4887
    @jmchil4887 2 years ago

    Lex is the best. Sad I never had a teacher like him.

  • @rfernand2
    @rfernand2 6 years ago +1

    A tour de force in the selection, organization, and presentation of an overview of Deep Learning. I really enjoyed it - thanks for doing this and making it freely available to everyone!

  • @SriNiVi
    @SriNiVi 5 years ago +11

    One of the most precise lectures since my Engineering school times. Would love to hear more from you.

  • @Heeroyui752
    @Heeroyui752 5 years ago +4

    I understand nothing about machine learning or the field of artificial intelligence in general. But even I could understand what Lex was saying. Very well done!

  • @sa.8208
    @sa.8208 3 years ago +1

    15:43... maybe we got it wrong... I see the E8 lattice synchronization.

  • @jeremiahbarlow1924
    @jeremiahbarlow1924 6 years ago

    I just noticed that at approximately 39:00 there are definite lines in your forehead when the explanation started to get deep and you were reaching with your soul for how to explain. ;-) Thank you for your efforts in this course.

  • @MultiMediumArts
    @MultiMediumArts 9 months ago

    I had no idea that you are/were a professor, and a great one at that.. thanks for sharing this video

  • @WTHFX
    @WTHFX 4 years ago

    Watching this at 2X is actually very enjoyable.

  • @LMTN13
    @LMTN13 4 years ago +5

    Lex seems to really enjoy teaching; looks like a happy dude :)

  • @adruvitpandit5816
    @adruvitpandit5816 4 years ago

    Not a deeply technical talk, but it does cover what exists out there to learn in data science and AI.

  • @naartarnegol9448
    @naartarnegol9448 4 years ago +1

    ...Lex seems to be an angel, amazing person!

  • @scentilatingone2148
    @scentilatingone2148 3 years ago +1

    It's cool to see you in your element, Lex!

  • @oknoobcom
    @oknoobcom 6 years ago +29

    Great resource, Lex. Thank you for sharing. Keep them coming :)

  • @Saed7630
    @Saed7630 6 years ago +16

    Clean, clear and realistic lecture!

  • @brotherlui5956
    @brotherlui5956 6 years ago +28

    IMHO the best lecture to watch in January 2019.

  • @fordmeisef9661
    @fordmeisef9661 3 years ago

    Simple as possible, but no simpler. I like that.

  • @PerceptiveAnarchist
    @PerceptiveAnarchist 2 years ago +14

    Great video; thanks for this, Lex.

  • @modestas112911
    @modestas112911 1 year ago

    Wow, what a guy. Thank you for sharing this video. A very well-put-together and engaging lecture.

  • @toth1982
    @toth1982 1 year ago

    17:45 "We are at the peak of inflated expectations." Well, 3-4 years later, the expectations are much higher.
    Not that this was easy to predict; it is just interesting to see how things turned out.

  • @seefore5409
    @seefore5409 4 years ago +1

    Hard Part:
    Good Questions + Good Data
    ...I felt that

  • @vankoutedar
    @vankoutedar 2 years ago +11

    This is a very interesting lecture; thank you so much for making it available to a wider audience. Are the other lectures in the series also available online?

  • @usuyaktom3069
    @usuyaktom3069 6 years ago +2

    I'm so excited to join this class!

  • @pauldacus4590
    @pauldacus4590 5 years ago +6

    37:13 "A neural network with a single hidden layer can approximate any (arbitrary) function"
    Is this true? Can it approximate a function where an input is squared, cubed, etc? Or a sine fn?
    Seems like it would depend on the activation function a lot, it seems like it wouldn't be true with a Relu activation.
    I honestly don't know if this is true, so just asking...

    • @bL4ckGeniu5
      @bL4ckGeniu5 5 years ago +2

      It does not state the number of neurons in the layer... if it is "one-dimensional", meaning one input value maps to one output value, it should work out, but I am not that much of an expert.

    • @jesselopes5196
      @jesselopes5196 4 years ago +1

      Yes, this is the computational difference between connectionist nets and Rosenblatt's (two-layer) perceptron. Basically, a perceptron can only handle linear functions (where the argument's exponent is 1), but adding a hidden layer allows the network to compute nonlinear functions (hence the input can be squared, cubed, etc.). So Lex is right (and this fact is older than Lex himself).

    • @tulasijamun3234
      @tulasijamun3234 4 years ago

      A neural network with a single hidden layer can approximate the function of a NAND gate (and hence approximate any (arbitrary) function).

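    Regarding the question in this thread: a quick empirical check (not a proof) is to fit a sine function with a single hidden layer. The sketch below uses 50 random tanh hidden units with a least-squares readout instead of backpropagation training; that shortcut, and all the sizes, are assumptions chosen for brevity. With ReLU units the same construction also works, though more units may be needed; the theorem guarantees an approximation exists, not that training will find it.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # inputs
        y = np.sin(x).ravel()                               # target: a sine function

        # Single hidden layer: 50 tanh units with random input weights and biases
        W = rng.normal(scale=2.0, size=(1, 50))
        b = rng.normal(scale=2.0, size=50)
        hidden = np.tanh(x @ W + b)  # hidden activations, shape (200, 50)

        # Fit only the output weights by least squares
        w_out, *_ = np.linalg.lstsq(hidden, y, rcond=None)
        error = np.abs(hidden @ w_out - y).max()
        print("max abs error:", error)  # small error -> sine is well approximated
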
  • @zhbw315
    @zhbw315 6 years ago +5

    It's awesome to see the evolution and new skills of deep learning in this course!