Deep Learning to Solve Challenging Problems (Google I/O'19)
- Published: 2 Oct 2024
- This talk will highlight some of Google Brain’s research and computer systems, with an eye toward how they can be used to solve challenging problems, and will relate them to the National Academy of Engineering's Grand Engineering Challenges for the 21st Century, including the use of machine learning for healthcare, robotics, and engineering the tools of scientific discovery. He will also cover how machine learning is transforming many aspects of our computing hardware and software systems.
Watch more #io19 here: Inspiration at Google I/O 2019 Playlist → goo.gle/2LkBwCF
TensorFlow at Google I/O 2019 Playlist → bit.ly/2GW7ZJM
Google I/O 2019 All Sessions Playlist → goo.gle/io19al...
Learn more on the I/O Website → google.com/io
Subscribe to the TensorFlow Channel → bit.ly/TensorF...
Get started at → www.tensorflow...
Speaker: Jeff Dean
machine learning is taking machine learning experts' jobs. :-)
The slide at 4:45 with the grand engineering challenges for the 21st century helped a lot. I often get overwhelmed or confused by all the projects and applications coming out of the tech world. Many of them don't make sense. This slide gave me a good framework for making sense of what these technologies are trying to solve.
Good presentation. 😊
I wonder whose agenda that is?
My sincere greetings and many thanks. Thank you!
I always had this idea that AI should be smart enough to determine which models should be tried when it is given a data file. It should be able to run an initial analysis to classify the nature of the file and predict the intended use of the file. Based on this analysis, it should be able to find the best fit among the existing models. And if it could not find one, it should be able to create a new one. I guess Google has already put my idea into practice.
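The pipeline this comment imagines (inspect the data, infer the task, pick a model family) can be sketched in a few lines. This is a toy illustration, not a real Google or AutoML API; the function names (`infer_task`, `suggest_model`) and the thresholds are made up for the example.

```python
# Toy sketch of "look at the data, then pick a model" model selection.
# Names and heuristics here are illustrative assumptions, not a real API.

def infer_task(target_values):
    """Guess the prediction task from the target column alone."""
    distinct = set(target_values)
    if all(isinstance(v, str) for v in distinct):
        return "classification"
    # Few distinct numeric values suggest class labels; many suggest regression.
    return "classification" if len(distinct) <= 10 else "regression"

def suggest_model(target_values, n_rows):
    """Very rough heuristic mapping (task, data size) -> model family."""
    task = infer_task(target_values)
    if n_rows < 1_000:
        family = "gradient-boosted trees"
    else:
        family = "neural network (AutoML search)"
    return task, family
```

In a real system the "create a new one" branch is where an AutoML search (as discussed later in the talk) would kick in.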
Outline:
- Restore & improve urban infrastructure (combining vision and robotics for grasping tasks; self-supervised imitation learning)
- Advance health informatics (predicting properties of molecules)
- Engineer the tools of scientific discovery (TensorFlow and its applications)
- Some pieces of work and how they fit together / bigger models, but sparsely activated (sparsely gated mixture-of-experts layer, MoE)
- AutoML: automated machine learning, "learning to learn" (Cloud AutoML)
- Special computation properties of deep learning (reduced precision; a handful of specific operations)
- More at 36:49
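The sparsely gated mixture-of-experts layer mentioned in this outline can be sketched with NumPy. This is a minimal illustration of the routing idea only (the sizes, the top-k choice, and the absence of load-balancing losses are simplifying assumptions, not the talk's actual implementation):

```python
import numpy as np

# Minimal sketch of a sparsely gated mixture-of-experts (MoE) layer:
# a gating network scores all experts, but only the top-k experts
# actually run, so compute grows sub-linearly with model size.
rng = np.random.default_rng(0)
d_in, d_out, n_experts, k = 8, 4, 16, 2

experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts))

def moe_forward(x):
    """Route input x to the top-k experts chosen by the gating network."""
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]        # indices of the k largest gate scores
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts only
    # Only k of the n_experts matrices are multiplied: sparse activation.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d_in))
```

Here 14 of the 16 expert matrices are never touched on this forward pass, which is the point of the "bigger models, but sparsely activated" slide.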
Very informative and insightful talk. Thank you Google for sharing it with us.
@8:30 the same technique used by Naruto when he was training with his many clones :)
Regarding AutoML: over time there would seem to be an ever-increasing corpus of models. Humans, being limited creatures who tend to have the same problems, might not actually need a ‘fresh’ model trained every time their brain perceives a problem that needs solving; that solution probably already exists and has been solved. Rather, it might be faster (and much less energy-intensive) to simply archive these models with a set of useful metadata, so that a Google search can find the model that solves the problem. Metadata selection and assignment to individual models could be automated after they are designed by AutoML; the metadata can be considered the ‘label’ for the model.

This metadata could also be used to ‘explain’ to a user ‘why’ the machine selected a particular model/algorithm. In addition, the machine would be able to engage the user in a ‘conversation’ as it ‘asks for metadata’. The user would perceive this discourse as questions about the dataset/problem they have, while the machine builds an information tree to sift and sort its vast library of models. This also addresses the human problem where the user often starts by choosing the wrong approach to the problem, or just as often uses the ‘cooked spaghetti’ approach to model selection: throw them all against the wall of the problem and see what sticks.
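The archive-and-search idea in this comment boils down to a metadata index over trained models. A minimal sketch, where every model name and tag below is invented purely for illustration:

```python
# Illustrative sketch of an archive of trained models indexed by
# metadata, queried instead of training a fresh model each time.
# All model names and tag values here are made up.

MODEL_ARCHIVE = [
    {"name": "retina-dr-v2", "task": "classification",
     "domain": "medical-imaging", "input": "fundus-photo"},
    {"name": "traffic-flow-lstm", "task": "forecasting",
     "domain": "urban-infrastructure", "input": "sensor-series"},
]

def find_models(**query):
    """Return archived models whose metadata matches every query field."""
    return [m for m in MODEL_ARCHIVE
            if all(m.get(k) == v for k, v in query.items())]

hits = find_models(domain="medical-imaging")
```

The ‘conversation’ the comment describes would then just be the system asking for one more metadata field whenever `find_models` still returns more than one candidate.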
This is awe inspiring I love it thank you Google
Recommendable!
Best talk of #io2019
Jeff ! We need TPUv4!
As an engineering student who so far knows only basic C/C++ coding, I found this talk excellent, easy to understand, and greatly informative!
Thank you very much, this is inspiring and eye opening as to what can be done!
This is good for robotics and computer vision applications.
Are these slides available anywhere?
Expect more developers to share the trained AI models into the Model Play.
"Humans that you plop on the carpet in your living room" _stuff data scientists say_
Thank you a lot for the talk given. The idea of ML automation sounds great.
11:19 wrong India map
I see this map in many places outside India. We need to get it modified at the source.
I train my models on GpuClub com and don't worry about maintaining these huge machines. No investment is the best investment...
Regarding automobiles: we built the auto interface for humans. We leveraged our built-in sensors (eyes and ears) and designed a bipartite system: vehicle and road. But why are we now trying to shoehorn AI into that human-centric system? If we were designing a system from scratch for AI and machines, would we build it the same way? Would it not make sense to build telemetry into the road, making the road more intelligent and letting it direct vehicles more directly? Do we need vehicles that can go where there are no roads? This would reduce the cost of complex and hackable vehicle-based systems.
"I wanted to call it the Arm Pit, but I was overruled." Lost it. 😂
M(x,y)dx+N(x,y)dy=0
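For anyone puzzled by the equation above: it is the general differential form of a first-order ODE. The standard textbook facts (added here for context, not from the talk) are the exactness criterion and the resulting solution:

```latex
M(x,y)\,dx + N(x,y)\,dy = 0
\quad\text{is exact} \iff
\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}.
```

When it is exact, there exists a potential $F(x,y)$ with $\partial F/\partial x = M$ and $\partial F/\partial y = N$, and the solutions are the level curves $F(x,y) = C$; otherwise one looks for an integrating factor $\mu(x,y)$ that makes $\mu M\,dx + \mu N\,dy = 0$ exact.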
With reference to scientific learning: when you have a lot of data, but no data at the particular point in parameter hyperspace that you are interested in, what do you do? Extrapolating the model will result in bias and loss of accuracy. Experiments on real-world systems seem unavoidable, and each experimental data point is often very expensive. The interaction between machine-learning modeling and the planning and execution of experiments seems to be a new and very interesting research area.
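The experiment-planning loop this comment envisions can be sketched with a deliberately simple criterion: among candidate settings, run the experiment farthest (in parameter space) from anything already measured, i.e. where model extrapolation is least trustworthy. The max-min-distance rule below is a crude stand-in for real uncertainty estimates (e.g. from a Gaussian process):

```python
# Sketch: choose the next experiment at the candidate point farthest
# from all already-measured points (max-min distance), a toy proxy
# for "where the model is extrapolating the most".

def next_experiment(measured, candidates):
    """Return the candidate whose nearest measured point is farthest away."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(candidates,
               key=lambda c: min(dist(c, m) for m in measured))

measured = [(0.0, 0.0), (1.0, 0.0)]
candidates = [(0.5, 0.1), (3.0, 3.0), (1.1, 0.2)]
chosen = next_experiment(measured, candidates)
```

Each new measurement is appended to `measured` and the loop repeats, interleaving modeling with experiment design exactly as the comment suggests.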
It just seems like an extension of using computers to crunch numbers and do linear algebra beyond what humans could manage in a lifetime, but on a scale heretofore unseen. As he said, the idea of machine learning has been around for a long time. I'm sure we all remember the famous line from T2: "My CPU is a neural net processor, a learning computer".
Waymo, really?
How can it solve problems, when Humanity is at stake now.
Amazing talk!
that tree design behind him is pretty cool
Interesting and insightful.
Nice classes, sir. I want to attend these classes.
Very insightful.
Great info, thank you for sharing!
Is there a way to get those slides?
Lidar lol
Superb thank you for uploading
can I access the power of v3 tpu pod on google cloud platform?
sure, feel free to use all of them
amazing talk
Thank you!
cool
Please explain how developing artificial intelligence solutions for subsurface data analysis in oil and gas exploration and production is 'socially beneficial'.