That stage design, wow.
I am so happy to see your growing platform. I have been a fan since I took a probability class with you at Google. Keep it up!
Totally agree! Thunking kills a creative problem solver!
Thank you for giving beautiful minds like yourself the credit they deserve, and also for clarifying that the AI transition is a goal worth pursuing. By the way, you are an amazing teacher, speaker, and human being. Peace
Also, if you don't have control over the process, or over the actual data, how can you be the final decision maker? In huge organizations, many processes, including their methodologies and assumptions, become part of an automated system built by experts over many, many years. No one understands the entire process, or how changes in external conditions, or anywhere in the process, impact the outcomes. The system becomes rigid because no one can problem-solve or make decisions by inspecting, modifying, or questioning the process. Humans are effectively forced out of decision making: in the effort to remove repetitive tasks, we ended up taking whatever the automated process spits out and executing on it.
Cassie, your deep dive into the nuances of 'thinking' and 'thunking' truly added a new layer to my understanding — enlightening, to say the least! On Duchamp's Fountain, it prompts a playful pondering: was it a masterstroke of 'thinking' or perhaps a tongue-in-cheek 'stinking'? 😊 Really appreciate the insights you bring!
Very insightful talk; it improved my view of the topic. Thanks!
Good thinking!
wow.. just wow!
Very insightful
Brilliant.
Was this all AI-generated? Well done.
LLMs have a penchant for suggesting support (subsidies) for renewable energy. But a fee charged to industries that extract natural resources, emit pollution or destroy habitat will shrink or close the worst-offending players. A higher fee will mean faster reduction of harm. If we adjust the fee to whatever level necessary to produce the effect we want (what is deemed permissible by average opinion), we will not need any other policy aimed at managing environmental impacts. (Government subsidies are inefficient and divisive. AI should not be promoting inefficient and divisive policy.)
What if the AI can set its own goal and choose its own dataset? What would motivate an AI to set goals for itself? What has motivated humans to set goals?
See the end of the video, same answer. If you build a system that you allow to replicate, then you built more than one system. Do it responsibly!
What if I weren't born on Earth?
Have you guys watched AI learn how to walk? It's mostly brute force, it takes a lot of time, and the results are mostly hilarious. Yeah, I don't think AI will take over anytime soon, but maybe far in the future.
Alan Kay.
People are missing your shining, heartwarming digital presence! Then again, it would be great if you just settled down and had a great life :) You should be a mom.
I guess this is probably a speech for a general audience, so I understand why you are sending these kinds of simplified messages. I agree that whoever creates something is responsible for what that thing is going to do, especially if it has some sort of automation. And that's OK.
BUT, I find it a very strong position to say that AI will never do thinking. We are just at the beginning; the AI systems we have today are mostly, as you say, "optimize this goal on this dataset." I doubt it is going to stay that way forever. Even though there is no fully general artificial intelligence on Earth (that I am aware of), I would not bet against someone, somewhere, at some point re-creating what natural intelligence does: setting its own objectives, reproducing, staying alive, etc. The result of all of this will be some kind of new form of artificial life with thinking and creativity.
So, in conclusion, you are making a distinction between thinking and thunking, and that's fine for the random person walking down the street in 2023. But if we dig deeper into what "thinking" really is, do you have a scientific definition so that your new word really makes sense?
Judging by the video title, I don't think I like this. AI doing the thinking for us could really hurt or even destroy humanity. Only computer scientists who work in the field of AI can think like this — their own little bubble.
Try actually watching the video
I don't know about the arguments in this presentation. Yes, art history shows us that artists build on each other's work, and there are many papers that document the fallings-out between artists over copying each other's ideas. So gen AI is just speeding up what has been done in the past, except we can't put a name to the non-human artist (or we can, as Cassie puts it). The second argument is that we shouldn't be letting AI think, but without going into the evolution of ML training: as humans, we use heuristics — we don't always go back to first principles — which means that with time we will come to depend on these subjective AI models. The one point I agree with is that we have to hold these subjective people to account. But it isn't the mathematicians (MLOps teams) alone; we should hold the entire organisation's C-suite accountable. These are behemoth companies and their impact is huge, so when I hear a presentation like this I pause: what is the agenda?
I don't know who the intended audience was for this presentation, but the arguments need refining.
Very valid points. I'm surprised these points haven't been taken into consideration. I guess the presentation is targeted toward a general audience like me. Even so, I question the points in this video based on human psychology, the environment needed for "creativity", etc.
How can you learn to "think" if you don't practice "thunking" first? Creativity needs a strong, well-learned foundation; otherwise anyone can call themselves an artist by taking a photo of a urinal. Now real artists are jobless. Look at how many ballet companies and opera singers are surviving: the very few still operating rely on donations. Also, while I'm highly creative and can't stand "thunking" tasks, there are many people who ENJOY doing mundane tasks. Typically they take low-paying jobs, and those are the jobs getting automated first. Unlike machines, humans are diverse, and not everyone wants to, or should, be thinking about the future of mankind.
Are you AI?