[00:15] Introduction of Andrew Ng and Mehran Sahami, highlighting their contributions to AI and computer science education.
[02:13] Mehran Sahami's significant influence on Stanford's CS program and student enrollment.
[06:13] Transformation of coding education through AI, enhancing accessibility and efficiency.
[08:13] Discussion on optimal integration points for AI coding tools in the education system.
[11:53] Advantages of employing design patterns for robust and maintainable code.
[13:48] Necessity for students to comprehend AI-generated code through foundational knowledge.
[17:10] Critical role of foundational skills in maximizing AI's potential in coding.
[18:48] AI's assistance in complex software development decisions while retaining human oversight.
[22:05] Impact of foundational concepts on effective coding practices.
[23:36] Importance of abstraction layers in enhancing computer science and AI performance.
[26:53] AI's role in automating and augmenting job tasks, advocating for widespread coding literacy.
[28:34] Coding as precise problem-solving, with Python bridging natural language and programming.
[31:48] Empowerment through programming for systematic problem-solving, aided by generative AI.
[33:26] Coding education fostering transferable problem-solving frameworks.
[36:40] Emphasis on understanding the social ramifications of AI and coding for responsible innovation.
[38:19] Importance of responsible AI practices in coding education, focusing on tangible ethical issues.
[41:43] AI's value creation through real-world applications across various sectors.
[43:29] Leveraging domain expertise to enhance AI-driven problem-solving.
[46:56] AI's rapid influence on industries and the necessity for skilled professionals.
[48:35] AI as a catalyst for productivity enhancements, contingent on human decision-making.
[52:05] AI's role in making technology more accessible and fostering a low-code development environment.
[53:44] Overview of courses by Andrew Ng and Mehran Sahami, with additional resources provided.
Thanks mate
Andrew Ng, [08:55]: "In addition to training people for today's jobs, we also sometimes want to make sure people have the skills to do the jobs that will be around 2, 3, 4, 10, 20 years from now."
Me: Start by reading Dario Amodei's "Machines of Loving Grace" to understand the limits of AI and help yourself make a more informed decision.
Paxton Hehmeyer, [26:31]: "Don't think about jobs, think about the tasks that you're doing and how that will influence the tasks."
Andrew Ng, [27:56]: "Take a job, break it down to the tasks, see what tasks AI can automate or augment."
Me: Take a job, break it down to the tasks, and further break it down to the abilities required for the task.
Then see whether AI has already mastered the set of abilities required for that task.
Mehran Sahami, [48:36]: "Is AI going to create jobs, or is it going to have people be unemployed? Is it going to destroy jobs?"
Mehran Sahami, [48:47]: "AI's neither going to create or destroy jobs."
Me: AI is going to destroy jobs.
Just because there are currently more jobs that humans could fill, if their labor made economic sense, does not change this fact.
There is a finite set of abilities that humans have.
Each time one of those abilities is automated by AI, the tasks that depend entirely on the set of automated abilities are destroyed.
It follows that jobs consisting only of such tasks are destroyed.
Mehran Sahami, [48:48]: "AI's neither going to create or destroy jobs. You are going to create or destroy jobs."
Mehran Sahami, [49:20]: "Do you go after 20% more product, more features, more stuff that you build, or do you have a 20% smaller labor force in your engineering staff? That's a human decision."
Me: That's a market decision, not a human decision.
A public company CEO's job is to maximize profits for the shareholders; if this person goes against the market, this person will soon cease to be CEO.
A private company CEO may choose to go against the market, but this creates opportunities in the market for other companies, which will in turn apply pressure to that private company's CEO.
With a hypothetical example barring the creation of new markets:
If the workforce becomes 20% more efficient due to AI while demand for the company's goods and services stays the same, then to maximize profits, about 17% of the workforce will be eliminated (1 - 1/1.2 ≈ 16.7%).
Then, if the savings are passed on to consumers, thereby raising demand, the workforce may see a corrective increase, but it will in all likelihood remain smaller than the original workforce.
With enough foresight the corrective swings can be avoided, but that still doesn't change the fact that there will be a net smaller workforce overall.
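The arithmetic behind that hypothetical can be sketched in a few lines of Python. This is a toy calculation, not a forecast, and the 20% efficiency figure is just the number assumed in the discussion:

```python
# Toy calculation: if AI makes each worker 20% more productive and demand
# is unchanged, what fraction of the workforce is still needed to produce
# the same output?
efficiency_gain = 0.20  # assumed productivity boost from AI

# Output scales as workers * (1 + gain), so producing the same output
# requires only 1 / (1 + gain) of the original workforce.
needed_fraction = 1 / (1 + efficiency_gain)
eliminated_fraction = 1 - needed_fraction

print(f"workforce retained:   {needed_fraction:.1%}")    # about 83.3%
print(f"workforce eliminated: {eliminated_fraction:.1%}")  # about 16.7%
```

That eliminated fraction (roughly one sixth, ~17%) is where the figure in the comment above comes from.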
Excellent discussion. The key is that students will be able to refine their programming skills using tools like Copilot, and that classical machine learning may take a back seat with all these models around.
This discussion on ethics is timely. With AI transforming industries at such a rapid pace, embedding ethics in education feels like a safeguard against unintended consequences.
Keeping our focus on the problems we want to solve with technology is a good insight.
On the point that AI models trained on code make better models, I agree.
In the early GPT releases, some people said that "guessing the next word" wouldn't help solve real-world problems, and posted sample puzzles. I took one such puzzle, prompted GPT to generate Python code (I helped a little with syntax editing, yes, cheated), and asked the AI to run the code and explain the result in English; it came out great. Some applications have started doing this.
Besides, a software program is step-by-step reasoning/instructions for solving a problem, so an AI model could directly benefit from training on code (assuming code and natural language can be related to each other), because other natural-language sources, such as literature, publications, and social media posts, would not have enough step-by-step instructions.
Andrew says he will teach his kids to code and that everyone should learn as well; Jensen Huang says don't teach your kids to code. Predicting the skills needed for tomorrow's jobs is just hard :)
Great interview. Quality.
We need to understand the models better, especially their defects, errors, etc. We need more tests, transparency in the training data, and a safety- and rule-based approach to data mining.
Great
Congratulations 🎉🎉🎉
Why Some Might Say Ng "Sucks" at AI Research Progress lol
1. Lack of Cutting-Edge Innovation: Compared to researchers like Geoffrey Hinton, Yann LeCun, or Yoshua Bengio, who are directly responsible for foundational breakthroughs (e.g., backpropagation, convolutional neural networks, etc.), Ng might seem less innovative.
2. Focus on Practicality Over Exploration: Ng often emphasizes applying existing techniques to real-world problems rather than chasing new, speculative ideas. This might make his work feel less exciting to those in cutting-edge research circles.
3. Slow Adoption of Emerging Trends: Critics argue that Ng’s courses and public lectures sometimes lag behind the forefront of research (e.g., his initial dismissal of reasoning abilities in LLMs or relatively slow embrace of transformers).
Yes, I am Patil BA from India. These are global issues and let us understand and contribute to a better society.
🙏
Are American companies able to sponsor tenured professors in China, in the same way that Tencent does in the US?
Merci
Anyone from India 🇮🇳
From Pakistan
Not from India but your neighbouring country Nepal 🙆🏻♀️👾
@@Whyyyuhik I love Nepal ❤️
@@bikrombarman12345 well in that case you should come and visit Nepal and enjoy the true beauty of nature 😉💗
No