I'm excited about recent developments in NLP, deep RL, speeding up training/inference, big GANs, powerful DL frameworks, and real-world application of DL in driving 370k+ Tesla HW2 cars! What else would you like to see covered?
Healthcare
Hey Lex! Could you do a lecture that reviews state of the art in privacy-preserving methods?
Would be cool to get your perspective on potential applications of federated learning, homomorphic encryption, and blockchain technologies towards models that enable secure, value-oriented distributed learning over decentralized data.
Would be awesome to have Andrew Trask as a guest.
www.openmined.org
E.g. github.com/OpenMined/PySyft/tree/master/examples/tutorials
ruclips.net/channel/UCsONJ5EfiYYfShT4VL7Bt9g
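For anyone who wants a feel for the core idea before digging into those tutorials, here's a minimal, framework-free sketch of federated averaging. To be clear, this is the concept only, not the PySyft API: clients train on data that never leaves them and share only weights.

```python
# Minimal federated-averaging sketch (the concept only, not the PySyft API).
# Each "client" trains on its own private data; only weights are shared.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One SGD step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Three clients, each holding private data that never leaves the client.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w_global = np.zeros(3)

for _ in range(50):
    # Each client starts from the current global model and trains locally.
    local_weights = [local_sgd_step(w_global.copy(), X, y) for X, y in clients]
    # The server averages weights; it never sees the raw data.
    w_global = np.mean(local_weights, axis=0)

print("global weights after federated averaging:", w_global)
```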
@Lex Fridman Healthcare
Any in-depth walkthroughs of how to transfer the learning in these breakthroughs to apply to our own datasets? Maybe this isn't the right thread to ask.
Yes! There are several programs at each step of the operating chain (databases, preloaded libraries, modeling, etc.) that accomplish the same operation but with different strengths... Maybe what we could use is a walkthrough of the pieces of a typical operating chain: Python vs. Java is one choice at the platform level, a Python NLP library vs. the Stanford NLP library is one for preloaded algorithm sets in language processing, etc...
I'm looking at this as an executive who knows machine learning is as decentralized an opportunity as the dot-com boom, because someone will make a tool out of the technique. Learning about spaCy and gensim was informative, but these are data-science problems to be solved by researchers; I only expect to incorporate these tools into my Frankenstein monster. I could still use a treasure map. I can't afford to start a data department just yet, but I'd bet big that my aim is attainable now or within two years.
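To make one link of that chain concrete, here's a minimal spaCy sketch; it assumes the small English model has been installed first (pip install spacy, then python -m spacy download en_core_web_sm):

```python
# Minimal spaCy pipeline sketch: tokenization, POS tags, named entities.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tesla shipped Autopilot updates to 370,000 cars last year.")

for token in doc:
    print(token.text, token.pos_)   # each token with its part-of-speech tag

for ent in doc.ents:
    print(ent.text, ent.label_)     # named entities, e.g. ORG, CARDINAL
```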
It could be very insightful to go back to this talk in a couple of years' time and see how these ideas developed.
It's been two years. Here's a reminder.
This is the most valuable thing that I saw in 2019.
This is fantastic. I am trying to create a storytelling system using LSTMs and a corpus of self-written works. Thank you for this, Mr. Fridman.
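In case it helps anyone building something similar, a minimal character-level sketch of that idea in Keras might look like the following. The corpus and model sizes here are toy placeholders, purely illustrative:

```python
# Toy character-level LSTM for next-character prediction / generation.
import numpy as np
import tensorflow as tf

corpus = "once upon a time there was a model that told stories. " * 20
chars = sorted(set(corpus))
char2idx = {c: i for i, c in enumerate(chars)}

seq_len = 10
X = np.array([[char2idx[c] for c in corpus[i:i + seq_len]]
              for i in range(len(corpus) - seq_len)])
y = np.array([char2idx[corpus[i + seq_len]]
              for i in range(len(corpus) - seq_len)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# Generate: repeatedly predict the next character from the last seq_len chars.
text = corpus[:seq_len]
for _ in range(40):
    window = np.array([[char2idx[c] for c in text[-seq_len:]]])
    probs = model.predict(window, verbose=0)[0]
    text += chars[int(np.argmax(probs))]
print(text)
```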
Lex's summary is so true:
Stochastic gradient descent and backpropagation are still the backbones of current state-of-the-art AI techniques. Therefore, we need new innovations to see leaps in the field.
Thank you Lex!
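For anyone newer to the field, both workhorses fit in a few lines. A toy sketch, with backpropagation reduced to the chain rule on a squared loss and SGD as the update rule:

```python
# Backprop computes the gradient; stochastic gradient descent applies it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0]) + 1.0            # ground-truth linear function

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    i = rng.integers(len(X))                   # "stochastic": one random sample
    err = X[i] @ w + b - y[i]                  # forward pass
    grad_w, grad_b = 2 * err * X[i], 2 * err   # backprop (chain rule)
    w, b = w - lr * grad_w, b - lr * grad_b    # gradient-descent update

print(w, b)   # approaches [2, -3] and 1
```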
This guy has the same voice as Jesse Pinkman from Breaking Bad.
And he looks like him in a way
Thanks a ton Lex! You're one of the guys who brought me from Mech. Engineering to AI :)
and ofc the money
@amritaanshnarain7524 I don't get it. Is it bad to earn money?
Awesome, thanks
The intro is invaluable and very helpful
Great review of recent advances in deep learning! Would be great to see a similar review of current (immediate) challenges, e.g. limited numerical extrapolation abilities, multi-task learning, etc.
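The numerical-extrapolation limitation is easy to demonstrate yourself. In this small sketch, an MLP fits a linear function well inside its training range and fails far outside it (tanh activations are used so the failure is stark; a ReLU net would extrapolate linearly instead):

```python
# An MLP interpolates y = 2x well but fails to extrapolate beyond its training range.
import numpy as np
from sklearn.neural_network import MLPRegressor

X_train = np.linspace(0, 1, 200).reshape(-1, 1)
y_train = 2 * X_train.ravel()

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)

print(mlp.predict([[0.5]]))   # close to 1.0: interpolation works
print(mlp.predict([[10.0]]))  # saturates near the training range, far from 20
```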
Thanks for the lecture. Also, well edited (erasing the pauses). Funny to see you transform into the Men in Blue look compared to when you started the lecture two years ago. Looking good!
So funny how Lex refers to the fast.ai folks as renegade researchers :-D
You can sort of get that vibe from the lectures. One cool moment was when Jeremy demonstrated the 3-minute DawnBench result in the middle of a lecture.
@manideep lanka 27:30
They also mentioned that academics “don’t give a shit” about structured data in deep learning, and they talk like they really are renegade outsiders. Btw: “don’t give a shit” was the literal phrase used. 😀
40:40 Is it just me, or is Lex slowly getting more excited as he talks about OpenAI & DOTA 2?
Anyway, great work and thank you
Didn't understand much, but I'm excited. Moving forward.
Very happy to see the prosperity of deep learning. I hope I can unlock the full potential of DL in the field of computational advertising.
I was absolutely shocked by how brilliant these approaches to deep learning were. I'm absolutely excited to see what we can come up with next.
Thank you Lex!! Always great to learn from you.
Amazing content Lex, always very informative to help filter the firehose of research papers.
It would be great if you added the links from the presentation to the description!
If you go to deeplearning.mit.edu/ and click on the slides links (e.g. www.dropbox.com/s/v3rq3895r05xick/deep_learning_state_of_the_art.pdf?dl=0), you get the slides, and you can easily click on the links in them ^_^
Thanks Lex!
Awesome Lex, are you planning on doing the same for 2020?
Awesome video. Are there good links to videos on other developments not mentioned, e.g. in healthcare and agriculture?
Super talk, keep up the good work.
AdaNet is very interesting! That is very good material on progressive learning for data-scarce situations! Also multiple classification in rounds... and one on confusion for fine-grained classification without augmentation!
I meant there is!!
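For anyone curious about the progressive idea without reading the paper, here's a rough plain-scikit-learn sketch of the grow-and-keep-it-if-it-helps loop. This is the concept only, not the AdaNet API (AdaNet grows subnetworks, not trees, and uses a complexity-regularized objective):

```python
# Progressive ensemble growth: add a candidate, keep it only if it improves
# validation loss. A loose analogy to AdaNet's adaptive structure search.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

ensemble = []

def ensemble_pred(X):
    if not ensemble:
        return np.zeros(len(X))
    return np.mean([m.predict(X) for m in ensemble], axis=0)

best_val = np.mean((ensemble_pred(X_val) - y_val) ** 2)
for depth in (1, 2, 3, 4, 5):            # candidates of growing capacity
    cand = DecisionTreeRegressor(max_depth=depth).fit(X_tr, y_tr)
    ensemble.append(cand)
    val = np.mean((ensemble_pred(X_val) - y_val) ** 2)
    if val < best_val:
        best_val = val                   # keep: the candidate helped
    else:
        ensemble.pop()                   # reject: it didn't help

print(len(ensemble), "members kept, val MSE:", best_val)
```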
So AutoAugment is about augmenting with the worst possible inputs? Like you apply the augmentation ops that are hardest to recognize correctly, thus forcing the network to learn them better?
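Close in spirit, though as I understand the paper, AutoAugment searches for augmentation policies that maximize validation accuracy rather than explicitly picking the hardest inputs. A rough sketch of that search loop (the paper uses an RL controller; plain random search is shown here, and the helper names are illustrative):

```python
# Illustrative policy-search loop in the spirit of AutoAugment.
import random

AUG_OPS = ["rotate", "shear_x", "translate_y", "invert", "contrast"]

def sample_policy():
    """A policy: a few (op, probability, magnitude) triples."""
    return [(random.choice(AUG_OPS), random.random(), random.randint(1, 9))
            for _ in range(2)]

def train_and_validate(policy):
    """Stub: train a small proxy model with this policy and return its
    validation accuracy. Replaced by a placeholder score in this sketch."""
    return random.random()

best_policy, best_acc = None, -1.0
for _ in range(20):                      # the paper samples far more policies
    policy = sample_policy()
    acc = train_and_validate(policy)
    if acc > best_acc:
        best_policy, best_acc = policy, acc

print("best policy found:", best_policy)
```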
thanks lex
Is there a way to solve the need for so much data?
Not really, if you're starting from scratch. Maybe someday someone will release a pretrained general-purpose model. This sort of idea seems to be something the DeepMind team is chasing down; they're very interested in general intelligence.
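The practical middle ground today is transfer learning from a pretrained backbone, so far less of your own data is needed. A short torchvision sketch (API as of ~2019; newer releases take a weights= argument instead of pretrained=True):

```python
# Fine-tune only a new head on top of an ImageNet-pretrained backbone.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)        # features learned on ImageNet

for p in model.parameters():                    # freeze the pretrained backbone
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 10)  # new head for 10 classes

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```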
awesome !!!
lex is my favourite mossad agent 😂
Does anyone know what technology Amazon Textract uses?
Would you please make a detailed video lecture on computer vision and algorithms for real-time tracking applications?
🤓👍🏻 thanks
Magnificent!
This is a survey class.
31:53 Baam!!!!!!
This is attempting computational expression! It inhibits natural inherent algorithmic processing. The human is an emotional cognitive being. Whether it be locution or calculation, induction/deduction is based on emotional cognitive aptitude, and therewith conflicts with the attempt at mimicking sterile computation and natural anatomization. I have watched from afar and Chomsky is aware who I am... ( ^V^ ) If humans continue to use technology without comprehending that tech creates an ersatz structure, then humanity will fall deeper and deeper into a form of psychosis. Love to ALL
No nonsense AI from Lex
If this lecture is over my head and I need a little more knowledge about the fundamental concepts where should I look for that?
I really like this series; it helped me understand: ruclips.net/video/aircAruvnKk/видео.html
I only completed watching this video because I think he is charming.
I think he speaks faster than before.
I see a hand-waving review of acronyms and no step-by-step tracing of algorithms. No substance. Next!
Considering the material that needed to be covered and the time available to cover it, especially for something meant to be an overview/review and not a how-to, I'm not sure why you would expect it to get into the nitty-gritty that is already covered elsewhere.
First