Subject: A Heartfelt Thank You for the Inspiration
Dear Martin,
To you and your team at IBM,
Thank you for your insightful and thought-provoking presentation on AI Inference. In just ten minutes, you’ve given us a spark of inspiration that feels as if we’ve just discovered something fundamental, like the wheel or fire, while others continue hunting mammoths.
Your explanation of training and inference has opened our eyes not just to the technical intricacies of AI, but to the broader possibilities of how human thinking and technology can evolve in ways we hadn’t quite imagined before. The clarity and depth of your message left us with a sense of excitement and wonder about the future of AI and its potential to shape human development.
While we may be far from fully understanding the true power of this knowledge, we feel we’ve just begun to glimpse what lies ahead. Your words have encouraged us to think differently, and for that, we are truly grateful. Thank you for lighting a path that, though still in its early stages, holds so much promise.
With sincere thanks and admiration,
Alonso de la Garza.
Thanks from Chile
Wow, this is so amazing. I was just reading about this today because I’ve been finding it difficult to understand the concept of inference as it relates to ML.
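In case it helps anyone else who found the training/inference split confusing, here is a minimal sketch in Python. It uses scikit-learn's LogisticRegression purely as an arbitrary example model (not anything from the video): training fits the model's parameters once on labeled data, and inference is just applying the already-fitted model to new inputs.

```python
# Minimal sketch of training vs. inference (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Training: the expensive phase that fits the model's parameters to data.
X_train, y_train = make_classification(n_samples=1_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

# Inference: applying the fitted model to new, unseen inputs.
X_new, _ = make_classification(n_samples=5, n_features=20, random_state=1)
print(model.predict(X_new))         # predicted class labels
print(model.predict_proba(X_new))   # class probabilities
```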
I realize that useful AI models will spend most of their time being used (inferencing) rather than being trained. But it still seems that the resources consumed for training are more than significant, especially since models are often trained many times. It would be informative to see some data or ratios, especially since I've heard that the biggest models can consume staggering amounts of power and time.
These IBM vids are great, thank you all.
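On the training-vs-inference cost question above, here is a rough back-of-the-envelope sketch. Every number in it is a made-up placeholder (not a measured figure from IBM or anyone else); the point is just the shape of the comparison: training is a large, occasional cost, while inference cost scales with the number of requests served.

```python
# Back-of-the-envelope training vs. inference energy comparison.
# ALL figures below are hypothetical placeholders, not real measurements.

TRAINING_ENERGY_KWH = 1_000_000.0   # assumed one-off cost of a single training run
RUNS_PER_YEAR = 4                   # assumed retraining frequency
ENERGY_PER_REQUEST_KWH = 0.002      # assumed cost of serving one inference request
REQUESTS_PER_YEAR = 5_000_000_000   # assumed yearly inference volume

training_total = TRAINING_ENERGY_KWH * RUNS_PER_YEAR
inference_total = ENERGY_PER_REQUEST_KWH * REQUESTS_PER_YEAR

print(f"Training  : {training_total:,.0f} kWh/year")
print(f"Inference : {inference_total:,.0f} kWh/year")
print(f"Inference/training ratio: {inference_total / training_total:.1f}x")

# Break-even: how many inference requests it takes for cumulative
# inference energy to match one training run.
break_even = TRAINING_ENERGY_KWH / ENERGY_PER_REQUEST_KWH
print(f"Requests to match one training run: {break_even:,.0f}")
```

With these placeholder inputs, inference dominates over a year, but the ratio flips entirely depending on how often the model is retrained and how much traffic it actually serves, which is why real measured data would be so interesting.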
Great explanation. Thank you for demystifying this critically important topic.
Is it true that all the IBM researchers can write in reverse?
And Outlook needs a spam detector the most!
Looks like you wrote "tor" instead of "detector". I think my brain is turning into AI, lol.