First lecture in the 2019 deep learning series! It's humbling to have the opportunity to teach at MIT and exciting to be part of the AI community. Thank you all for the support and great discussions over the past few years. It's been an amazing ride.
You are Awesome, sir!
I was waiting for it 😬🤗
Go Go Lex!!! This is Awesome! Best way to start this year
This is the best AI talk I have seen; I'm looking forward to developing my skills. I have so many ideas for tackling some of the harder questions, as well as some issues I've noticed in model training and data gathering that I think are currently flawed.
@Mohit Sharma We're releasing tutorials on our GitHub repo: github.com/lexfridman/mit-deep-learning
0:48 Deep Learning Basics Summary
5:00 Visualization of 3% of the neurons and 0.001% of the synapses in the brain
6:26 History of Deep Learning Ideas and Milestones
9:13 History of DL Tools
11:36 TensorFlow in One Slide
13:32 Deep Learning is Representation Learning
16:05 Why Deep Learning? Scalable Machine Learning
17:10 Gartner Hype Cycle
18:18 Why Not Deep Learning?
21:59 Challenges of Deep Learning
29:20 Deep Learning from Human and Machine
30:00 Data Augmentation
31:36 Deep Learning: Training and Testing
32:10 How Neural Networks Learn: Backpropagation
32:28 Regression vs. Classification
32:54 Multi-Class vs. Multi-Label
33:13 What can we do with Deep Learning?
33:45 Neuron: Biological Inspiration for computation
34:14 Biological and Artificial Neural Networks + Biological Inspiration for Computation
35:55 Neuron: Forward Pass
36:40 Combining Neurons in Hidden Layers: The "Emergent" Power to Approximate
37:37 Neural Networks and Parallelism
38:00 Compute Hardware
38:27 Activation Functions
39:00 Backpropagation
40:07 Learning is an Optimization Problem
41:34 Overfitting and Regularization
42:58 Regularization: Early Stopping
44:04 Normalization
44:32 Convolutional Neural Networks: Image Classification
47:52 Object Detection / Localization
50:03 Semantic Segmentation
51:27 Transfer Learning
52:27 Autoencoders
55:05 Generative Adversarial Networks (GANs)
57:03 Word Embeddings (Word2Vec)
58:58 Recurrent Neural Networks
59:49 Long Short-Term Memory (LSTM) Networks: Pick what to forget and what to remember
1:00:15 Bidirectional RNN
1:00:50 Encoder-Decoder Architecture
1:01:38 Attention
1:02:10 AutoML and Neural Architecture Search (NASNet)
1:04:40 Deep Reinforcement Learning
1:06:00 Toward Artificial General Intelligence
Thank you
You're doing God's work
You are awesome... May many great things go into your life.
It was quite nice of him to take the time so we could save some :). A selfless creature, indeed!
Oliver Woods No, his friend is; however, he is allowed to read his slides and present the lecture, as he holds a degree in the liberal arts.
3 years later... he never would have guessed he would be best buds with Joe Rogan and David Goggins, and would interview Ye and others. Crazy.
Shows you that discipline, being a real human with a heart, and grind will get you to your goals. I am too dumb for this video.
For reals!
Can AI be used to predict the lottery Pick 3? I have a whole unique method that needs deep learning's aid.
@@dyfrigshandy Thanks, though I don't know what it is. I have been researching it for a decade. I want to share with people with the same hobby.
@@sandigoletic7204 And still scared to post an interview with Andrew Tate :D
This might be 4 years old but it is still incredibly helpful in understanding the current state of ML and ANN. Thank you Lex.
I slept listening to you this morning and saw my mom reading deep learning books in my dream.
Lmfaoooo 😂😂
Your unconscious is telling you to learn
TMI. A bit too TMI.
Whoa
This means the genes on your mother's side are pushing you to learn, improve, overcome. She is saying "you, my son, are the future of intelligence in the universe... for good... or for ill" [ominous music intensifies]
I really admire the work that Lex is doing both at MIT and his podcast!
then you are a dummy
then you are a clown@@mkballer4502
When she says “go deeper” but you’re all out of PowerPoint slides
So true
You Made my Day!
@@Adriano70911 stfu
😜
Watching this in 2023, after the advancements of generative pretrained models, is mind-blowing. Things have advanced so much in 4 years.
Lex, you are amazing as a lecturer and a fine example of a loving human. Your voice is deep, assertive, and clear to the audience.
You're handsome, with a good attitude and body language, and you can easily connect with people. I pray God blesses you and your family, because we need you. Congrats, man.
So talented, this guy should make his own podcast
Thank you so much, Lex. This will help us a lot. It will help students who can't afford paid online courses and have no one in the neighbourhood to teach them.
Electrical and computer engineering student here who's doing Jiu Jitsu as well. You can imagine how big a fan I am of Lex. So cool to see him actually going into the technicalities of his work.
Same here lol
He’s just like me fr! Headass
Real
@@wrestlingscience deadass no cap
@@wrestlingsciencelol
This lecture is awesome and really inspiring. I've been a fan for years now, Lex, and I'm really happy to see your success. I just wanted to point out that I believe your analysis of "One Shot Learning" re: human bipedal locomotion might be a little off base. The learning and development process that leads to bipedalism is characterized by a list of precursors like crawling, sitting up, and standing up. This process usually takes between 1 and 2 years. This time (and the hundreds if not thousands of reps that come with it) is needed to build from the ground up both the requisite muscular strength and the requisite neural pathways for these coordinations to be possible. The process can be accelerated through coordination-specific training on the part of the parents (which occurs quite often). Errors that occur in this process lead to hardcore biomechanical problems down the road (e.g. requiring knee replacement at 55). Bipedalism is pretty complex, and is way harder than quadrupedalism, which would fall more into the scope of your one-shot learning claim.
Loved your post.
Let your child crawl to build their core strength before you worry that they aren’t standing yet.
Putting diapers/nappies on a crawling child is similar to hobbling a horse.
Think about it.
The longer they crawl the better they will be able to walk.
Obviously, letting them crawl longer, without a massive chunk of material forcing misaligned muscular development, is a huge inconvenience to the caregiver.
Prioritise your goals.
I don't exactly know why, but I am so proud of him.
Both as a human and as a person who still puts in the effort not to let knowledge become a source of cynicism. There's something about not giving up on love and other intellectually ridiculed concepts such as kindness. There's something pure about it.
And for that purity, I am so proud of him.
Thank you for sharing this on RUclips. This is what gives me hope in today's world. The walls that surround knowledge are coming down. Go team PEOPLE.
100%
Superb lecture. The guy speaks as if he sells dreams. Great confidence and knowledge.
Important Elements
9:58
Simple Python neural network digit-classification model --> 87% accuracy (a code sketch follows below)
Step 1: Import necessary libraries (TensorFlow)
Step 2: Import the dataset for the model
Step 3: Define the layers of the neural network classifier (drawn digit --> classified number) --> use TensorFlow to run data through the network (input layer, hidden layer, output layer)
Step 4: Train the model over a number of epochs (passes of the data through the network to increase the model's accuracy; model.fit)
Step 5: Evaluate the model after training (display test accuracy)
Step 6: Actually use the model to predict what is in an image (in this case, which digit the user wrote)
16:02
Ability to Remove Input of Human Experts:
* Closer examination of raw data without human feature extraction
* Doesn't require a human step before classification
22:02
Supervised Learning:
31:35
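For anyone following along with these notes, here's a minimal runnable sketch of the six steps above, based on the standard tf.keras MNIST example (my own approximation, not necessarily the exact code from the lecture's tutorial; the layer sizes and resulting accuracy are assumptions):

```python
import numpy as np
import tensorflow as tf  # Step 1: import the necessary library

# Step 2: import the dataset and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Step 3: define the layers (input -> hidden -> output over the 10 digit classes)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # input layer
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer
    tf.keras.layers.Dense(10, activation='softmax'),  # output layer
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Step 4: train for a number of epochs (passes over the training data)
model.fit(x_train, y_train, epochs=5)

# Step 5: evaluate the trained model on held-out test data
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

# Step 6: use the model to predict the digit in a single image
print('Predicted digit:', np.argmax(model.predict(x_test[:1])))
```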
Lex is a really admirable professor, applying academia to solve real-world problems in engineering ways. Kudos!
like pretty much all engineers? LOL
He can’t speak properly
He isn't a professor
He is not a professor.
Oh really?
English is not my first language but your voice is clear and pronunciation easy to understand. Keep up the good work.
Piggybacking on Mr. Sherma's comment: "English is my first language, but your voice is clear and pronunciation isn't too hard to understand. Keep up the good work"
"English is my first language, but your voice is clear and pronunciation easy to understand. Keep up the good work"
I have never heard a technical course so poetic!
Lex Fridman is an absolute fucking winner. A winner doing winner things. Is there a human being on earth who doesn't like the guy? What an awesome blessing of a human being. We need more.
facts
Sam Hyde doesn't like Fridman, and he's somehow right.
I am very fond of the interviews in your podcast. Born in 1961, I started my academic career studying computer science. I was one of those guys who chose the subject because I could perform well above average with little effort. Now I am a lowly Catholic priest, still interested in all kinds of science. Had I stayed in the computing business, I would have specialized in data modeling, data mining, and data visualization. The lesson raised some philosophical questions with practical consequences that I would like to research in your line of work.
1. The philosophical issue arises with the very definition of an information bit representing a yes/no answer to a given question. The most important thing in the whole computation/data business is to select the right questions and a well-enough-working way to answer them. I think it wasn't Bill Gates's abilities as a programmer that made him successful, but the set of questions he wanted to provide solutions for.
This is where I draw the line separating human intelligence from artificial intelligence. Human intelligence is about selecting the right questions. Once that is done and there is some relation to computable empirical data, I think AI will outperform human efforts as it develops. I always expected AI to become superior in games like chess or poker, because those games are inherently digital, based on a restricted set of predefined questions (e.g. 'Is there a white queen on square e1?').
Training an AI somehow expands the limits I assumed as given. The training of an AI creates a layer of abstraction, something I previously saw as purely human.
2. If I were to research AI, I would try to visualize those abstractions. I would implement functions like 'Draw many different cats' if the AI is trained to recognize cats, or 'Draw many different pictures similar to cats and equally similar to dogs.'
Then I would try to understand what the AI perceives as cat-like, and whether recognition improves when repeating the learning with the AI-generated examples.
Has someone already tried this strategy?
Did it work?
3. I am a fan of Gregory Bateson's theory of 'binocular learning'.
So, when researching autonomous driving, I would experimentally use two cameras with two AIs interacting like the two sides of a human brain, and try to evaluate whether I had implemented something that generates the added knowledge Bateson describes as the result of comparing different descriptions of the same thing.
If I successfully generated Bateson's additional value, I would try to understand whether there is a general difference between humans and AIs when generating deeper understanding by that method, probably based on Bateson's levels of learning.
Has anyone done research like this?
What were the results?
I guess using multiple input devices (e.g. stereoscopic cameras, or combining cameras at different electromagnetic wavelengths) will greatly improve the reliability of an AI's results, while using multiple interconnected AIs will mainly improve the researcher's theoretical understanding, by 'listening' to the AIs 'discussing' their abstractions.
It's fine if the answers to my questions don't go far beyond 'yes' or 'no', because I never invested any time in understanding neural networks or AI.
"welcome everyone to 2019, it's really good to see everybody here"
Time travellers?
HAHAHA good point.
I'm going to say "Welcome everyone to 2021... you survived Covid-19 and Trump's incompetency."
@@LadyCoyKoi Trump saved America.
God bless Trump
We're all time travelers. I've never met anyone stranded to one moment in time.
@@c1dv1c1ous You've never been to one of my lectures then
I have school exams to study for... but this video is more exciting to watch.
Amazing talk! Thank you, Lex! What an exciting time to be alive...
You know what, Lex will revolutionize the world... a great scientist and a fluent speaker. It's always a pleasure to listen to Lex 😍😍
Thanks, Lex, for sharing this, so I can follow this training from Turkey. I wish you success. Good work.
Me too
You're the man, boss.
It's interesting watching this lecture at the end of 2022, and seeing just how many problems deep learning has solved since this video was released. At 27:43, we've already reached Art and Book Writing, and are well on our way to a few others. And yet self driving hasn't advanced much at all.
Well, don't go by Tesla. Self driving has advanced a lot even since this lecture. Veritasium has a great video on this.
It’s super helpful to know how AI systems work, even though I don’t work in tech. It also helps me feel relieved to know that AI is still very far from becoming sentient. I didn’t realise just how amazing the human brain is in comparison.
This guy should start a podcast. I am sure it would be popular.
Thank you for your honesty, Dr. Fridman. Brilliant and thought-provoking to those who can ask questions to answer.
I would pay this man $$$$ just to keep pumping out lectures weekly
A great introductory lecture! Full of "fruit"; I learned a lot in just an hour. Thanks a lot for sharing!
Loyalty's
Business
@@dezziepierce4769 ??
Curious to know what you learned from this
@@coop4476 And how. That's what we want to find out.
This is a great rundown of the general DL basics. Really good lecture
This lecturer has a good voice. He should start a podcast or something
This is amazing, Lex! Superb FREE content, so be the cat and let curiosity kill it, over and over again, loving every second.
"Many times I've wondered how much there is to know" You are an impressive human Mr. Fridman. You saved the best for Last 1:04:41 (hungry cats)
Thank you so much, Mr. Lex Fridman, for contributing and sharing your lectures!
Amazing lecture. Lex, you are a legend. Thank you .This runs at x1.25 really well too (for the busy minds out there....)
Thanks for the tip. Worked well :)
The way you said course 6.S094 makes you sound like an awesome robot professor, Lex!
Had no idea Lex gave lectures. Multitasker
I'm as surprised as you are; dude's a genuine intellectual.
Thank you so much, Lex! Those of us from all over the world who can't afford to go to MIT can learn the same things your students learn!
"All kinds of problems are now in digital form" man, that was deep!
Dope lecture. Good coverage. I love the hidden point that performance depends on smaller batch sizes, which means higher sample rates (to me). Data is capital.
Great explanation. This is the first lecture that I am able to understand very easily. The way of explaining is mesmerizing.
Thank you very much, Professor. It is really fulfilling to listen to you. I think at the age of 64 I will be able to work and ask good questions.
I really appreciate your urge to learn, which even I, at 20, have lost a bit.
@@ankitkeshav2669 Thank you Ankit.
The best part was the honesty about the possible secondary effects that deep learning might have... nonetheless, we should definitely go ahead with artificial intelligence, never forgetting that the C language is always there if we need to take a step back :)
Thanks for sharing. My daughter is the Frenchwoman at MIT majoring in Computational and Systems Biology.
Those slides… Man, I wish our lecturers put that much effort into compiling the slideshows that they’re in fact going to teach from for multiple years.
I wish I could watch an entire course by Lex :)
I was wondering about the same, but I guess that's not available online.
Nice! Really, Lex is doing a great job. Lex's podcasts are very nice; I listen to them every week. I suggest you should also watch ..... you will get an amazing experience with Lex ........ :)
Been a fan of your podcast for a while. Really puts you in a whole new light to see you teach. You really seem in your element teaching!
Lex you are so old school it's great
Good to see you teach... a teacher who is a continuous learner.
I've been listening to Lex's podcast for a while, this is the first time I have audited one of his courses. I think he is starting to remind me of the Carl Sagan of our age.
Beginner > Hazard > Expert. Love it!
Every piece of content you put out is a gem, Lex!
What a clear explanation! That is a real professor!
Lex Fridman has become a great inspiration to me.
The mark of a master is that he/she makes the complicated simple ... not simplistic ... but simple enough for the uneducated to be able to appreciate the major points. Thank you, Lex.
Also, someone who I assume is not Lex drew me into a strange WhatsApp conversation that I terminated because the language was cryptic and not at all characteristic of Lex. You might change your RUclips password ... me recommending that an MIT faculty member change his password. Just trying to help preserve your brand equity and the trust we place in you.
Thank you for being such an amazing source of information and learning.
I didn't know that Agent 41 teaches machine learning. :)) This man is not just a professor; he is a popular figure, a celebrity for young people to admire.
This is an extremely useful resource; thank you for sharing this!
Thank you, Lex, for all your contributions and for sharing so much on RUclips. My life would not be the same without your podcast series.
It's pretty amazing to see how excited ("nervous") he is to do this lecture, just as much as most of us are to learn this topic. :D
Thank you very much ))) Lex Fridman and MIT
Thank you very much, Lex Fridman and MIT, for the lecture ))))
from Russia )))
I get a weird feeling when I hear Lex talk. There's something that binds deep learning, media programming, and the overall takeover of a free-thinking society. The way they collect data will not change; the population will change to make it easier for them to collect data and have control.
That's cool. A free lecture from MIT on YouTube. Very high quality. Thanks.
I've been studying and getting certifications in Prompt Engineering, Mathematics, Coding, Data Science, Open Artificial Intelligence, Machine Learning, Deep Learning, and Neural Networks for a few years now. I can't find a job anywhere. When I'm in an interview and talk about the cost-saving benefits and increase in productivity from using artificial intelligence and automation, they usually end the interview right away and send a Dear John letter saying they went with another candidate.
My speculation is there's so many baby boomers in North America that are drawing a pension or social security and automation simply can't pay for it. So our GDP per capita is gonna suffer badly as we struggle to automate and achieve more efficiency and productivity in order to prop up THAT generation.
Supervised learning: predict, e.g., an apartment price based on input parameters (toy sketch below).
Supervised learning -> unsupervised learning
Humans can learn from very few examples.
Machines need thousands/millions of examples.
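To make those notes concrete, here is a toy sketch of the supervised apartment-price example (my own, with made-up numbers, using scikit-learn for brevity; not code from the lecture):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical labeled examples: features are [square meters, rooms],
# labels are prices. Supervised learning fits the feature -> label mapping.
X = np.array([[30, 1], [45, 2], [60, 2], [80, 3]])
y = np.array([130_000, 200_000, 260_000, 350_000])

model = LinearRegression().fit(X, y)        # learn from the labeled examples
print(model.predict(np.array([[70, 3]])))   # predict the price of a new apartment
```

With only four examples a linear model can still fit this toy data; a deep network would typically need the thousands/millions of examples the notes mention.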
omg. the way the planets were moving explains retrogrades, i always wondered... why would the planet go backwards... I have so many questions now (16:02 - Why deep learning (and why not))
Awesome. Now to spend 30 years learning coding, math, statistics, linguistics, philosophy and then become an expert in a problem domain. At least I won't be bored.
no neuroscience?
@@thefool733 That too. I wish I had two lifetimes.
Lex is the best. Sad I never had a teacher like him.
A tour de force in the selection, organization, and presentation of an overview of Deep Learning. I really enjoyed it - thanks for doing this and making it freely available to everyone!
One of the most precise lectures since my Engineering school times. Would love to hear more from you.
wow!
I understand nothing about machine learning or the field of artificial intelligence in general. But even I could understand what Lex was saying. Very well done!
15:43.. maybe we got it wrong... i see the E8 Lattice synchronization
I just noticed, at approximately 39:00, that there are definite lines in your forehead when the explanation started to get deep and you were reaching with your soul for how to explain. ;-) Thank you for your efforts in this course.
I had no idea that you are/were a professor, and a great one at that.. thanks for sharing this video
Watching this at 2X is actually very enjoyable.
Ben Shapiro lecture
Lex seems to really enjoy teaching; looks like a happy dude :)
Not a deeply technical talk, but it does cover what exists out there to learn in data science and AI.
...Lex seems to be an angel, amazing person!
It's cool to see you in your element Lex!
Great resource Lex. Thank you for sharing. Keep them coming :)
Clean, clear and realistic lecture!
imho the best lecture to watch in january 2019
Agreed
Simple as possible, but no simpler. I like that.
Great video, thanks for this Lex
"welcome everyone to 2019, it's really good to see everybody here"
Time travellers?
"welcome everyone to 2019, it's really good to see everybody here"
Time travellers?
Wow, what a guy. Thank you for sharing this video. A very well put together and engaging lecture.
17:45 "We are at the peak of inflated expectations." Well, 3-4 years later the expectations are much higher.
Not that this was easy to predict, it is just interesting to see how things turned out.
Hard Part:
Good Questions + Good Data
...I felt that
This is a very interesting lecture; thank you so much for making it available to a wider audience. Are the other lectures of the series also available online?
I'm so excited to join this class!
37:13 "A neural network with a single hidden layer can approximate any (arbitrary) function"
Is this true? Can it approximate a function where an input is squared, cubed, etc? Or a sine fn?
Seems like it would depend a lot on the activation function; it seems like it wouldn't be true with a ReLU activation.
I honestly don't know if this is true, so just asking...
It does not state the number of neurons in the layer... if it is "one-dimensional", meaning one input value maps to one output value, it should work out, but I am not that much of an expert.
Yes, this is the computational difference between connectionist nets and Rosenblatt's (two-layer) perceptron. Basically, a perceptron can only handle linearly separable functions, but adding a hidden layer with a nonlinear activation allows the network to approximate nonlinear functions (hence the input can be squared, cubed, etc.). So Lex is right (and this fact is older than Lex himself).
A neural network with a single hidden layer can compute a NAND gate, and since NAND is functionally complete, it can compute any Boolean function; for continuous functions, the corresponding result is the universal approximation theorem.
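For the curious, here is a quick empirical check (my own sketch, not from the lecture) of the single-hidden-layer claim: a layer of tanh units fitting sin(x) on one period. The universal approximation theorem also covers ReLU, which approximates smooth functions piecewise-linearly given enough hidden units:

```python
import numpy as np
import tensorflow as tf

# Target: y = sin(x) on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 1024).reshape(-1, 1).astype('float32')
y = np.sin(x)

# One hidden layer, linear output
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='tanh', input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, epochs=200, batch_size=32, verbose=0)

print('MSE:', model.evaluate(x, y, verbose=0))  # should be close to zero
```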
It's awesome to learn about the evolution and new skills of deep learning in this course!