Lex Fridman Podcast full episode: ruclips.net/video/oFfVt3S51T4/видео.html
Thank you for listening ❤ Check out our sponsors: lexfridman.com/sponsors/cv8075-sa
See below for guest bio, links, and ways to give feedback, submit questions, or contact Lex.

*GUEST BIO:*
Aman Sanger, Arvid Lunnemark, Michael Truell, and Sualeh Asif are the creators of Cursor, a popular code editor that specializes in AI-assisted programming.

*CONTACT LEX:*
- Feedback: lexfridman.com/survey
- AMA (submit questions, videos, or call in): lexfridman.com/ama
- Hiring (join our team): lexfridman.com/hiring
- Other ways to get in touch: lexfridman.com/contact

*EPISODE LINKS:*
- Cursor Website: cursor.com
- Cursor on X: x.com/cursor_ai
- Anysphere Website: anysphere.inc/
- Aman's X: x.com/amanrsanger
- Aman's Website: amansanger.com/
- Arvid's X: x.com/ArVID220u
- Arvid's Website: arvid.xyz/
- Michael's Website: mntruell.com/
- Michael's LinkedIn: bit.ly/3zIDkPN
- Sualeh's X: x.com/sualehasif996
- Sualeh's Website: sualehasif.me/

*SPONSORS:*
To support this podcast, check out our sponsors & get discounts:
- Encord: AI tooling for annotation & data management. lexfridman.com/s/encord-cv8075-sa
- MasterClass: Online classes from world-class experts. lexfridman.com/s/masterclass-cv8075-sa
- Shopify: Sell stuff online. lexfridman.com/s/shopify-cv8075-sa
- NetSuite: Business management software. lexfridman.com/s/netsuite-cv8075-sa
- AG1: All-in-one daily nutrition drinks. lexfridman.com/s/ag1-cv8075-sa

*PODCAST LINKS:*
- Podcast Website: lexfridman.com/podcast
- Apple Podcasts: apple.co/2lwqZIr
- Spotify: spoti.fi/2nEwCF8
- RSS: lexfridman.com/feed/podcast/
- Podcast Playlist: ruclips.net/p/PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: ruclips.net/user/lexclips

*SOCIAL LINKS:*
- X: x.com/lexfridman
- Instagram: instagram.com/lexfridman
- TikTok: tiktok.com/@lexfridman
- LinkedIn: linkedin.com/in/lexfridman
- Facebook: facebook.com/lexfridman
- Patreon: patreon.com/lexfridman
- Telegram: t.me/lexfridman
- Reddit: reddit.com/r/lexfridman
Let's be honest: ChatGPT, Cursor, and Claude are pretty good, but they won't replace anyone in the near future. I work with them every day, and most of the time they fall short. Also, who is writing stories or drawing images? 99% of people aren't interested in reading, writing, or drawing.
I am a web developer (working mainly with JavaScript and PHP) and I use ChatGPT almost every day at work. From my own experience, AI is helpful, but in most cases it doesn't give me a working solution right away. Often I have to modify the proposed code, or spend quite a lot of time explaining to the AI how to change it. I have also had cases where I spent many hours trying to get the right answer, but it just kept me in a loop of non-working solutions no matter what I asked. Sometimes I tell it that a solution doesn't work; it admits something is wrong, but then keeps providing a similar answer that doesn't solve anything. So in my opinion, at least currently, AI is not capable of replacing programmers, but let's see how much better it gets in the future. Also, as things stand, you absolutely have to be familiar with the programming workflow itself if you want to get decent solutions from AI.
This entire conversation is irrelevant. The microsecond AI becomes capable of replacing you, there will not be time for a conversation about whether or not it can replace you. Of course AI cannot currently replace your position, because if it could, you would be fired literally immediately.
Dev here. I completely agree. One more caveat: even with AI, it still needs someone to operate it. We will still need devs who understand the context, the problem, and the desired functionality. You can't write a prompt if you don't understand the problem, and you need a solid enough understanding to implement the solution and judge which solutions are viable, reliable, and effective.
Most web development work is not terribly complicated. With the amount of time you spent prompting the AI, if you had the knowledge and expertise, you could have written your solution in less time, and it would likely be more maintainable as well.
@MrMcWitt Agreed. And no matter how advanced they get, we are still decades away from society being comfortable letting these things do the work without strict human oversight.
They seem hesitant to answer the question. What is implied is what matters: they speak of productivity and efficiency gains, which will result in fewer jobs overall. In the past, a significant number of jobs amounted to little more than writing boilerplate code, and people earned a living doing that. Ninja programmers are not as common as they think.
I think they know there aren't that many ninjas... They're just trying to win the capitalism game, like everyone in this generation has been brainwashed to do.
I've been using Cursor for some time. Lots of bugs and issues are not being addressed... Seeing this "team", I understand why. If AI is replacing humans, why can't this "team" just have Cursor fix Cursor?
@2:07 "As long as humans are actually the ones designing the software, specifying what they want built, and it's not just a company being run entirely by AIs, we think you'll really want the human in the driver's seat." Oh my sweet summer child....
I highly doubt that programming will die out anytime soon, since you would need an ungodly amount of computing power, and the code won't be optimized for specific tasks for a long time.
In simple words: AGI. And even if AGI happens, there will still be a need for AGI engineers, who will be like doctors for AI. AI will surely replace the type of programmers who do web development and simple app development, but I am not talking about that kind of programming.
As a programmer who graduated in the year 2000, I think we're going to lose the fundamental logic skills needed to critically debug the base code used to build these models. When there are bugs in the base code, simply coding around the so-called "bugs" and leaving them there creates an infinite number of zero-days, which the AGIs will themselves exploit.
I'm still uncertain whether AI will replace developers, but I do know OpenAI will be replacing Cursor in the near future. No idea why they went all in on building a platform-reliant product when I'm sure they themselves know the platform owner can and will usurp it when they see fit.
OpenAI's search has failed to replace Perplexity so far, which is the only precedent I can think of. Plus, OpenAI's platform will be locked into OpenAI's models, and there's no guarantee they will grab the lead. Sonnet is best for programming right now, o1-preview is too expensive/slow, and Anthropic will probably leapfrog them with opus-3-5. I wouldn't be so sure of this. I think you're imagining that OpenAI will train a specialized coding AI using RL, maybe? Could happen, could not. I do think it will happen eventually, because an AI that codes super well is a necessary condition of the singularity (a tight self-improvement loop), and I suppose OpenAI will be the one to gather the extremely expensive dataset needed for that type of RL. Not too soon, though, and it will herald the final years of capitalism as we know it anyway, so I don't really see the downside for these kids.
@iverbrnstad791 Their product is the base for most of those a(p)I products; their own cost will always be lower than the competition's. Basing your product on someone else's platform and just making a good UI isn't a good idea.
Just look at drafting as an example of the future. 50 years ago, it took HUNDREDS of drafters in a massive office working as a team to do complex engineering. Revisions were slow, and complexity was expensive. Now that same level of design work can be done by one skilled CAD operator in a matter of hours. Sure, drafting isn't dead as a skill; it has just become 100x faster, so it can be done with fewer people, which reduces overhead and increases quality. To think that anything different will happen with programming is ridiculous.
Sure, in the short-to-medium term. But eventually (even in 10-20 years) these AI systems will very likely have human-level or superior critical thinking and other intelligence-based capabilities. At that point, there is no comparison to any past tech revolution across industries, sectors, or nations, because in the past it was all about creating better tools for humans to use. Now it's all about creating better thinkers than humans, which inevitably makes humans replaceable. These guys are focused on better tool creation, which has its place in the market for now. But the big players are focused on AGI and beyond, and human-made tools will become largely irrelevant.
LLMs are effectively extreme implementations of statistics on languages and the characters within them. If programming is outsourced to an LLM, that effectively puts the quality of the code produced at the middle of the bell curve (assuming we can get the LLM to the point where it can "understand" a problem given only a set of prompts). Do we really want the quality of our production code to become statistically average? Can you imagine if the average person writing code (not the average software engineer, but including those just learning or seeing if they like it) pushed something to Google's production code? My hot take: if companies jump too quickly to LLMs writing their code, they will quickly see diminished quality and may not recover. If they adopt it slowly but still use it to write code, it will take SWE jobs for a few years, then boom the SWE job market, because there will be huge demand for expert SWEs to fix all of the accumulated automated tech debt. If they adopt LLMs as a tool for SWEs to leverage, then we might see a boom in SWE productivity, which would still just create a need for more SWEs to keep up with market expansion.
I agree with this statement! But you forgot one small detail: programming here in the U.S. is already outsourced, and they are all training models. Why do you think a lot of tech jobs today are outsourced to India? You have to understand that the only limitation here is compute energy; our U.S.-based data centers and the power it takes to train premium models from OpenAI and Anthropic are in fact key. The only "average" code there will be is the code LLMs are able to process most efficiently and with the best results. Not saying there will be one, but heck, there may be a new type of code in the future, maybe something like stenography but for programming. Who knows... but one thing humans have always done is keep advancing from each generation to the next, and that will never stop or slow down.
@WeylandLabs What you describe reflects the same poor understanding of LLM structure and statistics that is common among AI-hype advocates. Speculating that researchers could simply develop a new programming language in which AI models would write code shows that you fail to understand how LLMs are trained. For AI models to produce meaningful output relevant to a field, researchers must first train them on datasets consisting of the relevant material. If researchers developed a new programming language for LLMs, they would then have to curate a dataset with which to train the models. This concept of training models on relevant datasets is foundational to the field of AI research. Beyond the issue of simply creating *a* dataset is the issue of the dataset's *quality*. The dataset of coding examples in this new programming language must also contain code for complicated algorithms developed by researchers in other fields. Should this hypothetical dataset lack these numerous algorithms, along with appropriate context for their use, the effort will result in a sub-par dataset compared to existing datasets of code. As a solution, these hypothetical (and naive) researchers could develop a transpiler to convert the code in existing databases into their new "LLM-specific language"; however, this would, at best, result in code that inherits the coding paradigms of the original programming languages, which would undermine the goal of producing a new programming language in the first place. All this is secondary to the primary misunderstanding AI-hype enthusiasts have about LLMs: that AI "thinks" like us. The current LLM structure does not involve a series of reasoning steps before producing an answer. This is why many feel AI-generated text or images are uncanny or "off": the words come without consideration of how they should tie back to an author's intentions, motives, or anything else.
Instead, a calculation to find the most likely next word in the sequence determines the text it produces. The result has profound utility, but that utility has inherent limitations which require consideration when implementing LLMs as a solution to a problem. Researchers have attempted to develop approaches that augment or replace the current LLM structure, but these continue to produce unimpressive results. The datasets used to train LLMs (books, research articles, audio/video transcripts, and internet posts) consist of text that is the product of reasoning; they do not contain the reasoning that produced the text. For the same reason, even text resulting from little or no thought fails to help develop chain-of-thought reasoning in LLMs. Some suspect that, by training a complementary model to manage the series of reasoning steps for an LLM, they might produce a system that can overcome this limitation. Unfortunately, without data that details the thought patterns for various problems, researchers must curate those thought patterns themselves. As different problems require (sometimes vastly) different thought patterns, this approach suffers from a limited scope of effectiveness. Those who push the AI hype point to certain models' performance on competitive math problems as evidence that researchers have solved this problem. This advancement may seem to prove that researchers have found the solution, but at most it shows that they have developed an approach for solving a specific type of problem in a specific field. Further undermining the sentiment that researchers have found "the solution" is the fact that math has the unique characteristic of expecting problem solvers to show their work: one can find countless explanations and step-by-step procedures for finding the answers to a wide variety of math problems.
Additionally, researchers can programmatically modify the values used in these problems and their explanations to easily create more training data with which to train the models. All of this ignores the largest issue in advancing AI research with the current approach: since the public gained access to a certain chatbot, publicly available text increasingly consists of its output, and training a model on its own output results in a trend toward the median, as described by the original commenter. Without an unprecedented advance in AI research, the delivered product will not meet the high expectations of the AI-hype train, and people will grow disappointed. This is exceptionally unfortunate, because these models have produced unprecedented utility for the market, and their value will only continue to add to the economy.
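The "most likely next word" mechanism this thread keeps coming back to can be sketched in a few lines of Python. This is a toy illustration only, not how any production model is implemented; the tokens and scores below are made up for the example:

```python
import math

def softmax(logits):
    """Convert raw per-token scores into a probability distribution."""
    m = max(logits.values())  # subtract the max for numeric stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def next_token(logits):
    """Greedy decoding: pick the single most probable next token."""
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Invented scores a model might assign after the prefix "the cat sat on the"
logits = {"mat": 3.1, "dog": 0.4, "sofa": 2.2}
print(next_token(logits))  # prints "mat"
```

A real model computes the logits with billions of parameters and usually samples from the distribution rather than taking the maximum, but the core step is this same score-then-pick calculation: no separate reasoning trace is produced along the way.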
@IsaiahDavis-bu2in When you try to discredit someone, make sure you don't use an LLM-prompted rebuttal. The way you used your language is very unoriginal, and it's common for LLMs to generate text like that. The first part was good, but try harder next time: use words like "clear" and "concise" when trying to enhance a prompt; otherwise you get a polished gem every time, and as humans we are not that good. You put in a bit of effort to prompt it; make sure you put in the same effort to understand it. As an AI-hype guy, learn a bit more about "detailed linguistics" in programming. But if you have anything real to add as a human, please let me know. I'll be here all day...
Large language models will not, I am almost certain, be able to replace programmers in their current form, because they (the LLMs) so often produce answers that look convincing but are actually false and/or ineffective when scrutinised; and it takes an expert to differentiate between a plausible-seeming but false answer to some coding problem and one that is actually correct. It also takes a highly specialized body of knowledge to know what to do with the code once an LLM has generated it; the average person would not have a clue how to structure a file system, create a virtual environment, or how an operating system executes a particular program. I could be wrong, but I don't see it happening anytime soon.
It'll be like admins/secretaries. Before, you needed one per manager; now, with automation tools, you need one for many managers. A senior engineer in a code-review role with AI programming agents will be able to support way more of what a company wants to do.
@romanemcdougal It's possible, certainly; but I'm not sure how much further LLMs can be improved, as, fundamentally, they are still determining the most likely solution to a given problem based on the available data. I struggle to see how that process could ever reliably produce effective and sophisticated code; but I guess we will see.
Programmers will become AI operators in the future, iterating more or less as follows: produce code from specs, test/verify, fix/optimise, and test again until you reach production-ready code. Competent AI operators will be able to do the work that previously took a few devs on a team; the exact reduction will depend on the factor by which dev time shrinks when AI is used. Consequently, organizations will have to create a stream of these AI operators with a growth pathway from junior to tech-lead level. Seeing as demand for devs will drop as AI replaces devs and shrinks team sizes, organizations will also have to incentivize these positions correctly, to ensure people still want to go study and are not afraid of being unemployed after completing their studies.

No matter what people say, it is going to get to a place where it reduces the number of devs required for any and every software project. It will also make the job more about verification and optimization than creation, and it will potentially do the same all along the food chain: BAs, PMs, testers.

Now, the timeframe is the key part. LLMs will improve more than enough in the next 5 years for operators to get decent enough code from them to work toward production-ready. It will, however, take organizations anywhere from the same 5 up to 15-20 years to completely move to the new way of working. People will say they will get left behind; well yeah, lol, that happens even today. Some orgs are stuck in the mid-2000s with their entire dev and product development lifecycle, and they still operate, still have a market; some of them are quite large and successful too, operating in niches where the competition is low.
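The spec → generate → test → fix loop described above can be sketched as a small harness. Everything here is hypothetical: `generate` and `run_tests` are stand-ins for an LLM call and a real test suite, not actual APIs:

```python
def ai_operator_loop(spec, generate, run_tests, max_rounds=5):
    """Iterate: generate code from a spec, run tests, feed failures back."""
    code, feedback = None, None
    for attempt in range(1, max_rounds + 1):
        code = generate(spec, previous=code, feedback=feedback)
        ok, feedback = run_tests(code)
        if ok:
            return code, attempt  # "production ready" per this harness
    raise RuntimeError("no passing version; hand off to a human dev")

# Stand-ins: a "model" that only gets it right after seeing test feedback,
# and a "test suite" that checks the generated snippet.
def fake_generate(spec, previous, feedback):
    return "add = lambda a, b: a + b" if feedback else "add = lambda a, b: a - b"

def fake_tests(code):
    ok = "a + b" in code
    return ok, None if ok else "add(2, 2) returned 0, expected 4"

code, attempts = ai_operator_loop("an add function", fake_generate, fake_tests)
print(attempts)  # prints 2: the first draft failed, the corrected one passed
```

The operator's job in this picture is exactly what the comment predicts: writing the spec, judging the test results, and deciding when to escalate to a human, rather than typing the code itself.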
AI operators, prompt engineers... it's like reducing a person to a "computer user" when there are endless ways to do the job. LLMs are not an exact tool: they don't know when they're wrong, and they give different answers from time to time. So rely on your skills and knowledge, and improve them with AI tools. For some reason, a lot of people ask experts to predict the future when we simply don't know.
I think in this case adoption will be quite rapid, as long as the output is sufficiently good. Simply due to ease of implementation and the very obvious value proposition. But then again, that's 5 years. Who knows what capabilities these AI systems will have in 10, 15, 20 years. May not even be LLMs, could be based on some superior architecture that's being researched as I type this comment. Not to mention improvements to self-learning algos and free access to the internet at inference. Software development (like many other services) may become as accessible as the internet is today...
What they're saying makes sense. People want control over what they're making, and AI doesn't really understand the human experience and senses. I'll be surprised if programmers get replaced anytime soon. The jobs will get easier and the wages might go down due to lower demand... but yeah, nobody really knows what's going to happen.
I guess the problem is applying LeetCode-style standards to mid-sized and small companies. Sure, Google needs the best way to store its algorithms; a small business certainly doesn't.
Some comments here remind me of what graphic designers were saying a few years ago. They were confident that generative AI wouldn’t be able to create anything useful at a professional level. Now, generative AI is helping graphic designers in various ways and may soon replace many of them. Programming will take longer to be affected, but it seems to be heading in the same direction. If the next generation of LLMs makes a leap similar to the one from GPT-3 to GPT-4, things could become truly interesting.
Outsourcing and AI will leave CS grads without job opportunities. It's happening right now: I'm a senior from Colombia and I work for a US company. My salary is half that of a junior in the US, and it's still good money for me.
Nah, programs just get more and more complex. The introduction of ASP was more significant than LLMs. It can barely help with basic stuff in the enterprise codebase at my job. People who think companies are just going to let AI run loose on their codebase aren't very bright, or are blinded by hype.
The chat style of building apps will already be great with just current LLM technology once we build a system that can counter-prompt you, the engineer, with tons of clarifying questions before coding.
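In its simplest form, such a counter-prompting system might just check the spec for required details before generating anything. This is a minimal sketch; the field names and message wording are invented for illustration:

```python
# Details the system insists on before it will start writing code.
REQUIRED_FIELDS = ["input format", "output format", "error handling"]

def missing_details(spec):
    """Return the required fields the user's spec has not covered yet."""
    return [f for f in REQUIRED_FIELDS if f not in spec]

def counter_prompt(spec):
    """Ask a clarifying question if the spec is incomplete, else approve."""
    gaps = missing_details(spec)
    if gaps:
        return f"Before I write code: please specify {', '.join(gaps)}."
    return None  # spec is complete; safe to start generating

spec = {"input format": "CSV of orders"}
print(counter_prompt(spec))
# prints: Before I write code: please specify output format, error handling.
```

A real system would use the LLM itself to detect gaps rather than a fixed checklist, but the loop is the same: refuse to generate until the engineer has answered the clarifying questions.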
I think they’re trying to imitate him to sound more profound.. it’s not a natural way of speaking conversationally. Elon Musk also speaks with similar cadence.
@prakashsekar2583 It's called PR; they are advised to talk as if they're thinking deeply about stuff all the time. It makes them sound like an authority; the subject doesn't matter. What matters, for PR purposes, is how they come across.
I don't see how these youngsters can appreciate the complexities of real-world IT systems. Naturally, they can't. They use terms like "engineering department", but in reality they don't really understand either of those two words. Engineering in the real world is precisely *not* about code, but about humans collaborating to make systems work. The way they talk about this is very revealing, but also very common among young folks.
The number of errors in the code will reach unprecedented levels. Programmers will just sit and correct what the machines wrote, because the machines received wrong instructions about what they had to do. 😂😂😂
@virgiliustancu9293 I saw some data showing that as the amount of AI-generated code in the real world increased, the number of bugs increased significantly as well. Too many naive people think that programming is simple and easy to generate now. It's going to be a fun time.
My take is: LLMs are trained on data, and new concepts and technologies pop up every year. Would companies really sit back and retrain a model on new data every year?
The funny thing to me about programmers and software developers is that they believe AI will be able to figure out how to complete 90% of labor tasks, but don't believe it will be able to write software without them.
Nobody believes that AI will be able to complete 90% of labor tasks. How would that even work? You would need sentient humanoid robots for that. Robots are only good at specific, repetitive, specialized tasks.
"There are two ways of going about programming: you think really hard and carefully upfront about the best possible way to do it, but I much prefer to just jump in and iterate really quickly." OH BOY, that's the recipe for a disaster. At least we're going to make a lot of money fixing security bugs in the future, because the AIs will be specialized in writing buggy software, having been trained to follow that standard.
With AI, there will always be a place for exceptional humans, but that has been the case in any industry that gets automated. That isn't the issue; the issue is what happens to everyone else.
The answer is never. I have spent quite a lot of time with these models, and they often get stuck in a local minimum, so you end up spending a lot of time just fixing that. You would have to spend a gargantuan amount of money and compute (beyond current levels) to fix the "getting stuck in a local minimum" problem, and it just won't ever be economical. At best these models will be really good at auto-complete. Anything else is just hype or delusion.
I disagree; look outward 10-15 years, as intelligence is on a double exponential curve. Don't look at now; look at the future. It will become better, faster, cheaper, and safer than all programmers. You will see.
@deeedledeee But to say it will "never" replace programmers is a bit short-sighted. As long as it keeps getting better, whatever you ask it, it will solve, and it will get really good at reading your mind.
I highly doubt that. The problem is that the amount of context required to understand which solution is applicable to the problem at hand is impossible to provide to an AI. You can't even give a human being enough context to do that in the first few attempts. It's one of the last jobs that would ever be replaced by AI.
@trentirvin2008 I'm going to wait 2.5 years to reply to this. You, me, and everyone you know will have zero purpose in 5 years. Just waiting for retrenchment packages to be drawn up, and then not to exist. It's time to eat the rich. 😂
@trentirvin2008 Of course, it isn't the best yet. But remember, it wasn't even a thing before 2022, give or take, so it will be immeasurably better in the coming few years. Honestly, not a swipe at anyone; just a fair assessment. My engineering work will surely be automated away too. I'm under no illusion as to my self-importance. Just prepare, is all I am saying.
@WhatIsRealAnymore It was a thing before 2022; the AI models we see now have been developed over the past 30 years, and even the CEOs of these companies and their head researchers have acknowledged AI is in a plateau and will be for years to come. We were only exposed to the leap in advancement because it finally became somewhat useful. Don't fall for marketing hype trains.
Furthermore, if it doesn't completely replace developers but the 100K+ salaries are gone, what would be the incentive to program if you get paid the same as at Walmart? The value of developers lies in the difficulty of what they do.
@Softcushion That's a stupid comment; there are many well-paid jobs that people choose to pursue because of the monetary incentives: doctors, lawyers, dentists.
AI will replace literally every job imaginable eventually. Programmers, financial advisors, accountants, and anything involving objective numbers or values will be replaced first.
I don't think AI will ever replace programmers. AI is super helpful in giving a code template. But you still need a programmer to customize the code template which caters to a particular use case.
People are still going to need to learn how to program to not only use this tool effectively, but also to review and audit the code being produced. Having critical code infrastructure being produced by AI without a serious level of audit would be a huge mistake.
The kids are high on their own supply. Programmers will be devastated in the next 15 years. Yes, some developers will still be required, but most will not be needed. These people are training their replacement.
Unless there's a major breakthrough in AI, this current iteration with the limitations of LLMs will definitely not replace programmers. It's an assistant, but non-programmers will not go far with it and will hit a wall very quickly. The more complex the code is, the less they understand whether what the AI is producing makes any sense. Companies that fall for the hype and lay off engineers will need to hire them back to fix the mess AI will create.
😂😂😂😂 Alright bro, but if you were an experienced engineer, you wouldn't say that. Yes, for simple codebases and obvious logic it will replace some, but we all know AI fails to write efficient code and is horrible with codebases that have complex logic.
Whether it's them or someone else, the goal will be to reduce costs and increase efficiency, which will inevitably replace a lot of jobs. Most of the devs I work with don't seem to comprehend that AI will continue to improve, possibly exponentially. Just last year, the majority of my coworkers brushed it off as useless.
The short answer is yes... "programmer" will become a set of tasks rather than a formal job; it will be part of the daily responsibilities of whoever interacts with the AI, probably for 15 dollars an hour, lol.
As a species, we consistently overestimate our intelligence, significance, and uniqueness. We believe ourselves to be more capable than we truly are, often inflating the complexity or importance of our actions, when in reality, they may be far less challenging or remarkable than we imagine.
I feel like programmers are so efficient at automating other people out of jobs that the day programming is taken over by AI, 99% of corporate jobs will also be automated.
That's not really how this is going to play out. Business people won't have to describe what they need to a bot in terms of software. Sure, doing that would provide the lift that a bot is faster and cheaper at building software, but when you have a bot capable enough to build and deploy software, it is not much of a stretch to an AI that does what that software would do, without having to describe a "system design" to it and build separate software. These AI systems will simply monitor all business communications and data (structured and unstructured), proactively offer insights and information, and reactively spit out whatever you request of them. Over time, companies will worry less and less about structuring data and how UIs should be laid out, and will simply work alongside the AI. So yes, developers will largely go away, unfortunately, but it won't be because AI can build better software faster and cheaper. It will be because with AI you won't need that older paradigm of business systems.
My advice would be: keep learning programming as you use AI to write code. Continue getting better at coding. Then you will actually use these AI tools better and increase your productivity.
For sure, around 95% of programming tasks for basic (web) app development can be automated in the short term. For special cases, such as developing libraries and specialised software, it will take longer to get to 95%, but we will likely get there within the next 5-10 years.
"all the [new] things that draw people to programming, like building things really fast". Dude, absolutely not. Nobody is drawn to programming when they look at today's tech stacks and see 500 different languages/frameworks/cloud services/libraries; it drives them away. New technology helps people build things faster, but absolutely not people just starting out who have no clue how to untangle the mess.
Did calculators kill mathematicians? Can AI kill programmers? No! It makes programming easier and faster. I had designed a piece of software that would have taken me 5 years to build, but with this new "calculator" it took me 6 months.
I think you are a little confused about "programming" vs building stuff. I think programming is a means to an end for the majority of "programmers". It's not like dancing, where people just love dancing, it's more like brick laying or architecture. People want beautiful houses so they need to learn these things. If people did not want beautiful houses they would not exist, it's a craft not an art.
Programming is a tool to go from idea to product. I am not a programmer, but I am already able to create a useful software product. I think many more people will be able to create amazing things, which will lead to a more diverse and rich software ecosystem. Science and technology will progress even more rapidly!
You're probably able to create a useful todo-list app. But you are not able to create a big enterprise system with various components talking to each other via different networks and messaging systems etc. As some comments already mentioned, you need to have a profound understanding of how software works, how networking works, how concurrency works to verify that the AI doesn't hallucinate. And I don't see the average-non-programmer-joe ever doing that.
The problem lies in how our system is set up. AI will automate away many jobs, maybe all jobs, eventually. At that point there's little use for products: the people who own the AI (assuming it stays under human control) won't have any reason to sell anything to the plebs, and they won't have a reason to share the AI either, since the machine can do everything. That means the only things that matter are land and resources.
@@iverbrnstad791 Unless there's a major breakthrough in AI, this current iteration of AI with the limitations of LLMs will definitely not replace programmers
No, manual labor won't be replaced. And even if it is, for how long? Oil is not infinite, nor are other fossil fuels, and without them you can't maintain other forms of energy production. AI, drones, and robots are going to be a thing, but only for a very short period before it all dies out.
Nobody here knows what they're talking about. Sounds like a bunch of teenagers who have to submit a report on something way out of their element, with a bunch of elementary kids admiring them because they use big words and have facial hair. Reporters are experts in reporting. An interview with someone with 10 years of experience in their career is kind of underwhelming. Dunno, talk with someone who started programming before personal computers came out.
Such a poor analogy. The human intervention needed in the virtual world and the physical world are on different spectrums; virtual automation is the basis for physical-world automation and humanoids, which is still going to take some time. But we are not talking about the complete elimination of humans here. Human intent is going to be there in every field; it's just that 100 people's worth of intervention is going to be reduced to 20, so jobs will become a super-niche skill instead of conglomerates fulfilling their social responsibility of providing jobs. In short, the power tools you're talking about will soon be operated by another power tool, which we call humanoids or robots.
@@harshalsharma2518 Power tools aren't intelligent. AI can be assumed to be intelligent at some level. This is categorically different from a power tool, smh.
Have any of you actually tried to use AI for coding? I have been for the last few months and it never works. I've yet to get a working project from AI; there are always errors and functionality problems, and no matter how much time I spend I can never seem to get a final product. AI still kind of sucks. I've also tried many models, and many seem to have short- and long-term memory loss. They aren't taking over the world anytime soon, trust me.
I think they missed the point. Yes, from a programmer's perspective AI will be "fun", but when people ask whether there will be jobs, or whether they should go into some field, they want to know: will AI take my job? The answer is yes, lol. Idk if they are purposefully avoiding that answer, or only thinking from the programmer side and not the business side, which ultimately, without government intervention, will wipe out a large portion of jobs, as we have already seen.
What about the joy of programming? Is it like riding a bike without using the handlebars? What about creativity and innovation? This AI thing is killing the human right brain in the long term.
I really enjoyed these clips, and these young men are indeed brilliant, but I was keeping track: the Cursor team almost never directly answered the main question asked in the video title. For a moment I thought I was watching a political interview.
For those interested in programming: please do it as a hobby. In the next 5 years the job market is going to be filled with unemployed programmers who have a lot more experience than you, the newbie. Honestly, reading the programmers' comments here, I see tremendous BIAS. A human's pride. Not a single one can admit that it went from an incoherent chatbot about 2.5 years ago to a fairly useful programming tool. Where will you be in another 2.5 years? 😂😂 Prepare to eat the rich.
Let's be honest: ChatGPT, Cursor AI, and Claude are pretty good, but they won't replace anyone in the near future. I work with them every day and most of the time they suck. Also, who is writing stories or drawing images? 99% of people aren't interested in reading, writing, or drawing.
15 year olds talking about the old days of programming.
😂😂😂
😂😂😂😂
Yeah. Dudes weren't there.
These kids know more than you
@@AustinBritt Being in the right place at the right time has so much to do with it.
I am a web developer (working mainly with JavaScript and PHP) and I use ChatGPT almost every day in my work. From my own experience I can say that AI is helpful, but in most cases it doesn't provide me with a working solution right away. Often I have to modify the proposed code or spend quite a lot of time explaining to the AI how to change it. I have also had cases where I spent many hours trying to get the right answer, but it just kept me in a loop of non-working solutions no matter what I asked. Sometimes I tell it that the solution doesn't work; it admits that something is wrong, but then just keeps providing a similar answer that doesn't solve anything.
So in my opinion, at least currently, AI is not capable of replacing programmers, but let's see how much better it gets in the future.
Also, in the current situation you absolutely have to be familiar with the programming workflow itself if you want to receive decent solutions from AI.
This entire conversation is irrelevant. The microsecond AI becomes capable of replacing you, there will not be time for a conversation about whether or not it can. Of course AI cannot currently replace your position, because if it could, you would be fired literally immediately.
It’s only going to get better and better
Dev here. I completely agree. One more caveat: even with AI, someone still needs to operate it. We will still need devs to understand the context, the problem, and the desired functionality. You can't create a prompt if you don't understand the problem, and you need a solid enough understanding to implement the solution and to know which solutions are viable, reliable, and effective.
Most web development work is not terribly complicated. In the amount of time you spent prompting the AI, someone with the knowledge/expertise would have written the solution themselves, and likely made it more maintainable as well.
@@MrMcWitt Agreed. And no matter how advanced they get, we are still decades away from society being comfortable letting these things do the work without strict human oversight.
"Back in 2013 programming had so much cruft ..." ROTFL! This is genuinely cute and freaking hilarious at the same time.
And wrong - the cruft has since built up into whole layers of sticky, ugly mold.
2013 was pretty smooth. I started in 2003 and I'm still young. In 2003 it was Notepad and IE.
He meant before AI bots, the ChatGPT era.
@user-kt5hx6hl7m how young exactly?
Time to invest in social skills
Exactly, communication skills will be more essential in the future. Even listening and empathizing
Funniest comment ive seen today, thx man
Why would you invest more time in social skills while leverage from technology keeps increasing... Seems like a terrible life strategy
@Wubwub772 The fact that you didn't understand that it was a joke shows how deficient we are getting in social skills
Ah fuck
They seem hesitant to answer the question. What is implied is what matters: they speak of productivity and efficiency gains, which will result in fewer jobs overall. In the past, a significant number of jobs were not much more than writing boilerplate code, and people "earned a living" doing that. Ninja programmers are not as common as they think.
Or, the efficiency gains could lead to more demand, since building software becomes less costly
@@aresstavropoulos916 yes, more demand for AI, not humans
I think they know that there aren't that many ninjas... They're just trying to win the capitalism game, like all of this generation has been brainwashed into doing
It's literally never going to happen. If 1 guy can do 10 people's jobs, 9 people are losing their jobs. @@aresstavropoulos916
Unemployment is really really low though. So clearly less boilerplate isn't a problem.
Such a silly question... When AI is smart enough to replace programmers, it will be smart enough to replace every digital job in existence
And it's a question of when, not if.
Unless we off ourselves in a full-scale nuclear war or some other catastrophic event that destroys civilization.
Nuclear war won't happen. AI can definitely code.
@@sevendoubleodex Nuclear war has an estimated risk of about 1% every year.
@@ManicMindTrick that’s an estimated lie.
@@sevendoubleodex Seriously, what's the logic behind it? Other than the usual mutual destruction?
I've been using Cursor for some time. Lots of bugs and issues not being addressed... Seeing this "team", I understand why. If AI is replacing humans, why can't this "team" just have Cursor fix Cursor?
@@helgeh they do important(tm) work 😉
@2:07 "As long as humans are actually the ones designing the software specifying what they want to be built and it's not just a company being run by all AI's we think you'll really want the human in the driver seat." Oh my sweet summer child....
I highly doubt that programming will die out for a long time, since you would need an ungodly amount of computing power, and the code won't be optimized for specific tasks for a long time.
In simple words: AGI. And even if AGI happens, there will still be a need for AGI engineers, who will be like doctors for AI. AI will surely replace the type of programmers who do web development and simple app development, but I am not talking about that kind of programming.
As a programmer who graduated in the year 2000, I think we're going to lose the fundamental logic skills needed to critically debug the base code used to build these models. When there are bugs in the base code, simply coding around the so-called "bugs" and leaving them there creates an infinite number of zero-days, which the AIs will themselves exploit.
Still uncertain as to whether AI will replace developers, but I do know OpenAI will be replacing Cursor in the near future. No idea why they went all in on developing a platform-reliant product; I'm sure they themselves know the platform can and will just usurp it when they deem fit.
bc ycombinator and ai buzzwords 😊
OpenAI's search has failed to replace Perplexity so far, which is the only precedent I can think of. Plus, OpenAI's platform will be locked into OpenAI's models, and there's no guarantee they will grab the lead. Sonnet is best for programming right now, o1-preview is too expensive/slow, and Anthropic will probably leapfrog them with Opus 3.5. I wouldn't be so sure of this. I think you're imagining that OpenAI will train a specialized coding AI using RL, maybe? Could happen, could not. I do think it will happen eventually, because an AI that codes super well is a necessary condition for the singularity (a tight self-improvement loop), and I suppose OpenAI will be the one to gather the extremely expensive dataset needed for that type of RL. Not too soon, though, and that will herald the final years of capitalism as we know it anyway, so I don't see the downside really for these kids.
Agree.
Not convinced it will be OpenAI, they lost many of their greatest engineers, so it wouldn't surprise me if they fall off.
@@iverbrnstad791 Their product is the base for most of those a(p)I products, so their own cost will always be lower than the competition's. Basing your product on a platform and making a good UI isn't a good idea.
Just look at drafting as an example of the future. 50 years ago, it took HUNDREDS of drafters in a massive office working as a team to do complex engineering. Revisions were slow, and complexity was expensive. Now, that same level of design work can be done by one skilled CAD operator in a matter of hours. Sure, "drafting" isn't dead as a skill; it has just become 100x faster, so it can be done with fewer people, which reduces overhead and increases quality. To think that anything different will happen with programming is ridiculous.
exactly
@@WildcatWarrior15 I agree; however, it won't work the way these people think
I agree with this statement but to say it’s ridiculous to think anything will happen differently with programming is ridiculous lol
Sure, in the short-medium term. But eventually (even 10-20 years) these AI systems will very likely have human-level or superior critical thinking and other intelligence-based capabilities. At that point, there is no comparison to any past tech revolution across industries, sectors, nations, etc. Because in the past, it was all about creating better tools for humans to use. Now, it's all about creating better thinkers than humans, which inevitably makes humans replaceable. These guys are focused on better tool creation, which has its place in the market for now. But the big players are focused on AGI and beyond, and human-made tools will become largely irrelevant.
@@mambaASI perhaps, probably not. This has been the promise since 1965
LLMs are effectively extreme implementations of statistics on languages and the characters within the language. If programming is outsourced to an LLM, that is effectively making the quality of the code produced on the middle of the bell curve - assuming we can get the LLM to the point it can "understand" a problem only given a set of prompts. Do we really want the quality of our production code to become statistically average code? Can you imagine if the average person writing code - not the average software engineer, but to include those just learning and/or seeing if they like it - pushed something to Google's production code?
My hot take: if companies jump too quickly to LLMs writing their code, they will quickly see diminished quality and may not recover. If they adopt slowly but still use LLMs to write code, it will take SWE jobs for a few years, and then the SWE job market will boom, because there will be huge demand for expert SWEs to fix all of the automated tech debt that has accumulated. If they adopt LLMs as a tool for SWEs to leverage, then we might see a boom in SWE productivity, which would still create a need for more SWEs to keep up with market expansion.
💯
💯
Agree with this statement!
But you forgot one small detail: programming here in the U.S. is already outsourced. And they are all "training" models; why do you think a lot of tech jobs today are outsourced to India?
You have to understand that the only limitation here is compute energy; our U.S.-based data centers and the power it takes to train premium models from OpenAI and Anthropic are in fact key. The only "average" code there will be is the code LLMs are able to process more efficiently and with better results. Not saying there will be one, but heck, there may be a new type of code in the future, maybe something like stenography but for programming. Who knows... but one thing humans have always done is keep advancing from one generation to the next, and that will never stop or slow down.
@@WeylandLabs What you describe reflects the same poor understanding of LLM structure and statistics common in the AI hype advocates. Speculating that researchers could simply develop a new programming language in which AI models would write code shows that you fail to understand how LLMs are trained. For AI models to produce meaningful output relevant to a field, researchers must first train these models using datasets consisting of the relevant material. If researchers developed a new programming language for LLMs, they must then curate a dataset with which they will then train models. This concept of training models on relevant datasets is foundational to the field of AI research. Beyond the issue of simply creating *a* dataset, exists the issue of the dataset's *quality*. The dataset of coding examples in this new programming language must also contain code for complicated algorithms developed by researchers in other fields. Should this hypothetical dataset lack these numerous algorithms along with appropriate context for their use, the efforts will result in a sub-par dataset when compared to existing datasets of code. As a solution, these hypothetical--and naive--researchers could develop a transpiler to convert the code in existing databases into their new "LLM-specific language"; however, this would, at best, result in code that inherits the coding paradigms of the original programming languages, which would undermine the goals of producing a new programming language in the first place.
All this is secondary to the primary misunderstanding AI-hype enthusiasts have about LLMs: that AI "thinks" like us. The current LLM structure does not involve a series of reasoning steps before producing an answer. This is why many feel AI-generated text or images seem uncanny or "off"; words come not after consideration of how they should tie back to an author's intentions, motives, or anything else. Instead, a calculation to find the most likely next word in the sequence determines the text it produces. The result has profound utility, but this utility has inherent limitations that require consideration when implementing LLMs as a solution to a problem.
Researchers have attempted to develop approaches to augment or replace the current LLM structure, but continue to produce unimpressive results. The datasets used to train LLMs, books, research articles, audio/video transcripts, and internet posts, consist of text that is the product of reasoning; they do not contain the reasoning that resulted in the text. For the same reason, even text resulting from little or no thought fails to provide benefit for developing chain-of-thought reasoning in LLMs. Some suspect that, by training a complementary model to manage the series of reasoning for an LLM, they might produce a system that can overcome this limitation. Unfortunately, without data that details the thought patterns for various problems, researchers must curate thought patterns themselves. As different problems require--sometimes vastly--different thought patterns, this approach suffers from a limited scope of effectiveness.
Those who push the AI-hype point to certain models' performance on competitive math problems as evidence that researchers have solved this problem. This advancement may seem to prove that researchers have found the solution, but at most it shows that they have developed an approach for solving a specific type of problem in a specific field. Further undermining the sentiment that researchers have found "the solution" is the fact that math has the unique characteristic of requiring/expecting problem solvers to show their work. One can find countless explanations and step-by-step procedures for finding the answers to a wide variety of math problems. Additionally, researchers can programmatically modify the values used in these problems and their explanations to easily create more training data with which they can train the models.
This all ignores the largest issue in advancing AI research with the current approach: since the public has had access to a certain chatbot, publicly-available text now increasingly consists of its output. Training a model on its own output results in a trend to the median, as described by the original commenter.
Without an unprecedented advance in AI research, the delivered product will not meet the high expectations of the AI-hype train, and people will grow disappointed. This is exceptionally unfortunate, as these models have produced unprecedented utility for the market, and their value will only continue to add to the economy.
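To make the "most likely next word" point above concrete, here is a toy sketch of greedy next-token generation. The bigram probability table is invented purely for illustration; real LLMs use neural networks over tokens, not lookup tables, but the core loop (pick the highest-probability continuation, append, repeat) is the same shape:

```python
# Toy sketch of "calculate the most likely next word" generation.
# BIGRAM_PROBS is a made-up illustration, not real model weights.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(start: str, max_words: int = 4) -> list[str]:
    """Greedily pick the highest-probability next word at each step."""
    words = [start]
    for _ in range(max_words):
        options = BIGRAM_PROBS.get(words[-1])
        if not options:
            break  # no known continuation: stop generating
        # No reasoning step here: just an argmax over the probability table.
        words.append(max(options, key=options.get))
    return words

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

Note there is no step where the "author's intention" enters; the output is determined entirely by the statistics baked into the table, which is the limitation the comment above describes.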
@IsaiahDavis-bu2in When you try to discredit someone, make sure you don't use an LLM-generated rebuttal. The way you used your language is very unoriginal, and it's common for LLMs to generate text like that. The first part was good, but try harder next time: use words like "clear" and "concise" when enhancing a prompt, otherwise you get a polished gem every time, and as humans we are not that good. If you put in the effort to prompt it, make sure you put in the same effort to understand it. As our AI-hype guy, learn a bit more about "detailed linguistics" in programming.
But if you have anything real add as a human, please let me know.
I'll be here all day...
Large language models will not, I am almost certain, be able to replace programmers in their current form, because they (the LLMs) so often produce answers that look convincing but that are actually false and/or ineffective when scrutinised; and it takes an expert to differentiate between a plausible-seeming but false answer to some coding problem and one that is actually correct. It also takes a highly specialized body of knowledge to know what to do with the code once an LLM has generated it; the average person would not have a clue about how to structure a file system, create a virtual environment, or how an operating system executes a particular program. I could be wrong, but I don't see it happening anytime soon.
It'll be like admins/secretaries. Before, you needed one per manager; now, with automation tools, you need one for many managers. A senior engineer in a code-review role with AI programming agents will be able to support way more of what a company wants to do.
You are assuming LLMs are not going to evolve, that they will stay in their current form. They will make them better, and programmers will be replaced.
@@rickymort135 I totally agree with that.
@@romanemcdougal It's possible, certainly; but I'm not sure how much further LLMs can be improved, as, fundamentally, they are still going to be determining the most likely solution to a given problem based on the available data. I struggle to see how that process could ever reliably produce effective and sophisticated code; but I guess we will see.
Give it about 10-15 years. However, the concern is giving up too much control to something that could create a terminator like scenario.
When will it be possible to automatically translate design concepts into fully functional software without any coding?
Programmers will become AI operators in the future, iterating more or less as follows: produce code from specs, test/verify, fix/optimise, and test again until you reach production-ready code. Competent AI operators will be able to do the work that previously took several devs on a team. The exact reduction will depend on the factor by which dev time shrinks when AI is used.
Consequently, organizations will have to create a stream of these AI operators with a growth pathway from junior to tech-lead level. Since demand for devs will drop as AI reduces team sizes when coupled with operators, organizations will also have to incentivize these positions correctly, to ensure people want to go study and are not afraid of being unemployed after completing their studies. No matter what people say, it is going to get to a place where it reduces the number of devs required for any and every software project. It will also make the job more about verification and optimization than creation, and potentially do the same all along the food chain: BAs, PMs, testers. Now the timeframe is the key part: LLMs will improve more than enough in the next 5 years for operators to get decent enough code to work with until it is production-ready. It will, however, take organizations around the same 5, and up to 15-20, years to completely move to the new way of working. People will say some will get left behind; well yeah, lol, that happens even today. Some orgs are stuck in the mid-2000s with their entire dev lifecycle and product development lifecycle. They still operate, they still have a market, and some of them are quite large and successful too; they operate in niches where the competition is low, etc.
AI operators, prompt engineers... it's like simplifying a person to a "computer user" when there are endless ways to do the job. LLMs are not an exact tool: they don't know when they're wrong and give different answers from time to time. So rely on your skills and knowledge, and improve them with AI tools. For some reason a lot of people ask experts to predict the future when we simply don't know.
I think in this case adoption will be quite rapid, as long as the output is sufficiently good. Simply due to ease of implementation and the very obvious value proposition. But then again, that's 5 years. Who knows what capabilities these AI systems will have in 10, 15, 20 years. May not even be LLMs, could be based on some superior architecture that's being researched as I type this comment. Not to mention improvements to self-learning algos and free access to the internet at inference. Software development (like many other services) may become as accessible as the internet is today...
Whenever AI takes its turn at being on-call, then you’ll have me interested.
Will just use serverless
Let's do another interview of these guys in 20 years, see how their perspective has changed.
Make it 10. They’ll be proven completely wrong by then.
Feel like they barely built anything but prototypes. Just the most advanced of their class.
What they're saying makes sense. People want control over what they're making, and AI doesn't really understand the human experience and senses. I'll be surprised if programmers get replaced anytime soon. The jobs will get easier and wages might go down due to lower demand... but yeah, nobody really knows what's going to happen.
Good one, AI will kill us all way before then
So do we still need the leetcode-based interview process at tech companies?
@@sunnycse117 “let me chatgpt the answer while i explain which marginalized groups i belong to”
I guess the problem is using leetcode at mid-size and small companies. Sure, Google needs the best way to store and search its data; a small business doesn't, for sure.
Some comments here remind me of what graphic designers were saying a few years ago. They were confident that generative AI wouldn’t be able to create anything useful at a professional level. Now, generative AI is helping graphic designers in various ways and may soon replace many of them. Programming will take longer to be affected, but it seems to be heading in the same direction. If the next generation of LLMs makes a leap similar to the one from GPT-3 to GPT-4, things could become truly interesting.
Outsourcing and AI will leave CS grads without job opportunities. It's happening right now: I'm a senior from Colombia and I work for a US company; my salary is half that of a junior in the US, and it's still good money for me.
Cursor built my Chinese Chess app for me in under 30 seconds. Sure, I have to go in and tweak it, but it did most of the dirty work.
Programming will become a commodity and the industry will shrink 90% over the next 10 years!
what will shrink it
We don't have enough compute
Do you have a crystal ball
Nah, programs just get more and more complex. The introduction of ASP is more significant than LLMs. It can barely help with basic stuff in the enterprise codebase at my job. People that think companies are just going to let AI run loose on their codebase aren't very bright or are blinded by hype.
@@navroze92 lol.
The chat style of building apps will already be great with just current LLM technology once we make a system that can counter-prompt you, the engineer, with tons of clarifying questions before coding.
Cursor is just a wrapper over openai, right
lol basically, it's a wrapper over multiple LLMs
openai is just another nvidia wrapper
nvidia is just another silicon wrapper
@@vimalvnair999 Silicon is just another atom wrapper
just a wrapper, lol
why does his speaking cadence remind me of sam altman
I think they’re trying to imitate him to sound more profound.. it’s not a natural way of speaking conversationally. Elon Musk also speaks with similar cadence.
@prakashsekar2583 It's called PR; they are coached to talk like they're thinking deeply about stuff all the time. It makes them sound like an authority; the subject doesn't matter. What matters, for PR purposes, is how they come across.
I am learning AI/ML. Should I continue or drop out? Please answer.
Continue
I don't see how these youngsters can appreciate the complexities of real-world IT systems. Naturally they can't. They use terms like "engineering department", but in reality they don't really understand either of those two words. Engineering in the real world is precisely *not* about code; it is about humans collaborating to make systems work. The way they talk about this is very revealing, but also very common for young folks.
The amount of errors in the code will reach unprecedented levels.
Programmers will just sit and correct what the machines wrote, because the machines received wrong instructions about what they had to do. 😂😂😂
Probably because the business analysts don't provide specific use cases, which is the issue today.
@@virgiliustancu9293 I saw some data showing that as the amount of AI-generated code in the real world increased, the number of bugs increased significantly as well. Too many naive people think that programming is now simple and easy to generate. It's going to be a fun time.
The amount of code written by LLMs is directly proportional to the number of zero-days
My take is: LLMs are trained on data, and new concepts and technologies pop up every year. Would companies really sit back and wait to retrain a model on new data every year?
The funny thing to me about programmers and software developers is that they believe AI will be able to complete 90% of labor tasks, but don't believe it will be able to write software without them.
not sure what programmers you know stupid enough to believe that
nice made up stats… generalized about programmers, then specified their personal beliefs
Nobody believes that AI will be able to complete 90% of labor tasks. How would that even work, you would need sentient humanoid robots for that.
Robots are only good at specific repetitive specialized tasks.
"There are 2 ways of going about programming: you think really hard and carefully upfront about the best possible way to do it, but I much prefer to just jump in and iterate really quickly"
OH BOY, that's a recipe for disaster. At least we're going to make a lot of money fixing security bugs in the future, because the AIs will be specialized in writing buggy software if they follow that standard.
With AI, there will always be a place for exceptional humans, but that has been the case with any industry that gets automated. That isn't the issue, the issue is what happens to the rest.
2:12 why he sounds like a mix of Zuckerberg and Kim Kardashian LOL
The answer is never. I have spent quite a lot of time with these models and they often get stuck in a local minimum, and you end up spending a lot of time just fixing that. You would have to spend a gargantuan amount of money and compute (beyond current levels) to fix the "getting stuck in a local minimum" problem, and it just won't ever be economical. At best these models will be really good at auto-complete. Anything else is just hype or delusion.
I disagree. Look out 10-15 years: intelligence is on a double-exponential curve. Don't look at now, look at the future. It will become better, faster, cheaper, and safer than all programmers. You will see.
@@malindrome9055 agree!
@@malindrome9055 I can agree with the idea that 15 years from now we will have better AI, of course.
@@deeedledeee but to say it will "never" replace programmers is a bit short-sighted. As long as it keeps getting better, whatever you ask it, it will solve, and it will get really good at reading your mind.
Casual programming or basic software development, probably. But the fields of Computer Engineering, SE and CS, that’s a different story.
I highly doubt that. The problem lies in the amount of context required to understand what solution is applicable to the problem at hand is impossible to provide to an AI. You can’t even give a human being enough context to do that on the first few attempts. It’s one of the last jobs that would ever be replaced by AI
@@trentirvin2008 I'm going to wait 2.5 years to reply to this. You, me and everyone you know will have zero purpose in 5 years. Just waiting for retrenchment packages to be drawn up and then not to exist. It's time to eat the rich. 😂
@@WhatIsRealAnymore lol ever tried to use an AI to code?
@@trentirvin2008 of course. It isn't the best yet. But remember it wasn't even a thing before 2022, give or take. So it will be immeasurably better in the coming few years. Honestly, not a swipe at anyone, just a fair assessment. My engineering work will surely be automated away too. I'm under no illusion as to my self-importance. Just prepare is all I am saying.
@@WhatIsRealAnymore it was a thing before 2022; the AI models we see now have been developed over the past 30 years, and even the CEOs of these companies and their head researchers have acknowledged AI is in a plateau and will be for years to come. We were only exposed to the leap in advancement because it finally became somewhat useful. Don't fall for marketing hype trains.
AI will entirely replace programmers. To say otherwise is just gaslighting developers to avoid fear and panic.
Furthermore, even if it does not completely replace developers, if the 100K+ salaries are gone, what would be the incentive to program if you get paid the same as at Walmart? The value of developers lies in the difficulty of what they do.
If you're an engineer for the money you shouldn't have become an engineer.
@@Softcushion that's a stupid comment; there are many well-paid jobs that people choose to pursue because of the monetary incentives: doctors, lawyers, dentists
I wish I had the confidence to post something this stupid
@@michaelh13 Denial is a great form of stupidity, go ahead!
They think they’re gods for forking VS code and wrapping it with Claude
he's absolutely nailed that bullshit slick talking Silicon Valley is so famous for.
AI will replace literally every job imaginable eventually. Programmers, financial advisors, accountants, and anything involving objective numbers or values will be replaced first.
I'm pretty sure that ginger is a hybrid alien
Bruh!!!!!!!!!!!!!!
I don't think AI will ever replace programmers. AI is super helpful in giving a code template. But you still need a programmer to customize the code template which caters to a particular use case.
Someone should tell OpenAI about child labor laws.
People are still going to need to learn how to program to not only use this tool effectively, but also to review and audit the code being produced. Having critical code infrastructure being produced by AI without a serious level of audit would be a huge mistake.
The kids are high on their own supply. Programmers will be devastated in the next 15 years. Yes, some developers will still be required, but most will not be needed. These people are training their replacement.
They're not just training their replacement, but the replacement of almost everyone in a service role.
Unless there's a major breakthrough in AI, this current iteration of AI with the limitations of LLMs will definitely not replace programmers. It's an assistant, but non-programmers will not go far with it and will hit a wall very quickly.
The more complex the code is, the less they understand if what the AI is producing makes any sense.
Companies that fall into the hype and layoff engineers will need to hire them back to fix the mess AI will create
just use vs code and be happy
Short answer 'Yes' AI will replace programmers within a few years
No, it won't be able to do that...
😂😂😂😂 Alright bro, but if you were an experienced engineer you wouldn't say that. Yes, for simple codebases and obvious logic it will replace some, but we all know AI fails to write efficient code and is horrible in codebases with complex logic.
@@hwapyongedouard soon you will see this happening.
Said some “Hassan”
@@roman78hold my beer, Romani
Whether it's them or someone else, the goal will be to reduce costs and increase efficiency, which will inevitably replace a lot of jobs. Most of the devs I work with don't seem to comprehend that AI will continue to improve, possibly exponentially. Just last year, the majority of my coworkers brushed it off as useless.
Short answer is yes... "programmer" will be a set of tasks, not a formal job; it will be part of the daily responsibilities of someone who interacts with the AI, probably for 15 dollars an hour lol
As a species, we consistently overestimate our intelligence, significance, and uniqueness. We believe ourselves to be more capable than we truly are, often inflating the complexity or importance of our actions, when in reality, they may be far less challenging or remarkable than we imagine.
Yes
I feel like programmers are so efficient at automating other people out of jobs, the day programming is taken over by AI, 99% of Corporate jobs will also be automated.
That's not really how this is going to play out. Business people won't have to describe what they need to a bot in terms of software. Sure, doing that would provide the lift that the bot is faster and cheaper at building software, but when you have a bot that capable (able to build and deploy software), it is not much of a stretch to an AI that does what that software would do, without your having to describe a "system design" to it and build separate software. These AI systems will simply monitor all business communications and data (structured and unstructured), proactively offer insights and information, and reactively spit out whatever you request of them. Over time companies will worry less and less about structuring data and how UIs should be laid out, and will instead simply work alongside the AI. So yes, developers will largely go away, unfortunately, but it won't be because AI can build better software faster and cheaper. It will be because with AI you won't need the older paradigm of business systems.
Basic concepts of programming are still necessary with AI around. Imagine if you can't validate an AI answer. Scary right.
My advice would be: keep learning programming as you use AI to write code. Keep getting better at coding. You will actually use these AI tools better and increase your productivity.
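One hedged sketch of what "validating an AI answer" can look like in practice: run the generated code against test cases you wrote yourself before trusting it. `validate_generated_function` and the sample `generated_code` below are hypothetical illustrations, not a real tool.

```python
# Minimal sketch of validating AI-generated code: never trust it until it
# passes checks you wrote yourself. `generated_code` stands in for whatever
# the model produced.

def validate_generated_function(source, func_name, cases):
    """Exec the generated source and check it against (args, expected) cases."""
    namespace = {}
    exec(source, namespace)  # caution: only run code you have actually read
    func = namespace[func_name]
    # Collect every case where the generated function gives a wrong answer.
    return [(args, expected, func(*args))
            for args, expected in cases
            if func(*args) != expected]

generated_code = "def add(a, b):\n    return a + b\n"
failures = validate_generated_function(
    generated_code, "add", [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
)
print(failures)  # an empty list means every case passed
```

The point is the workflow, not the helper: if you can't write the `cases` list yourself, you can't tell whether the AI's answer is right.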
The question is when will Cursor get replaced. OpenAI or some other big player are probably gonna get involved.
Programming was already fun before AI. Is it going to be even more fun now? The future of programming looks bright
Would the abstraction stack maybe even contain levels deeper than human coding languages, a language level underneath that is AI optimized?
For sure around ~95% of programming tasks can shortly be automated for basic (web) app development.
For special cases such as developing libraries and specialised software it will take longer to get to 95% but we will likely get there within the next 5-10 years
I think sooner. The amount of resources invested into it is unreal
"all the [new] things that draw people to programming, like building things really fast". Dude, absolutely not. Nobody is drawn to programming when they look at today's tech stacks and see 500 different languages/frameworks/cloud services/libraries; it drives them away. New technology helps people build things faster, but absolutely not people just starting out who have no clue how to untangle the mess.
At the end, they seem like they can't wait any longer to get out of their chairs, just like students after a long class.
"AI will probably most likely lead to the end of the world but in the meantime there will be great companies." ~Sam Altman, 2023
Did calculators kill mathematicians? Will AI kill programmers? No! It makes programming easier and faster. I had designed a piece of software that would have taken me 5 years to build, but with this new "calculator" it took me 6 months.
YES....That is the end game replace the majority of current state programmers
"the future of programming is going to be fun". 90% of programmers will lose their jobs. How much fun will they have being plunged into poverty?
Yup, the whole conversation is just weird.
These kids are not answering the question properly. Either they don’t understand the question or they don’t know the answer.
LOL 2013... back in the old days
Artificial intelligence will not replace programmers. AI will replace humans.
Most Programmers are humans so everyone will get replaced
I think you are a little confused about "programming" vs building stuff. I think programming is a means to an end for the majority of "programmers". It's not like dancing, where people just love dancing, it's more like brick laying or architecture. People want beautiful houses so they need to learn these things. If people did not want beautiful houses they would not exist, it's a craft not an art.
No one is answering his questions straight, they are just beating around the bush
Programming is a tool to go from idea to product. I am not a programmer, but I am already able to create a useful software product. I think many more people will be able to create amazing things, which will lead to a more diverse and rich software ecosystem. Science and technology will progress even more rapidly!
You're probably able to create a useful todo-list app. But you are not able to create a big enterprise system with various components talking to each other via different networks and messaging systems etc. As some comments already mentioned, you need to have a profound understanding of how software works, how networking works, how concurrency works to verify that the AI doesn't hallucinate. And I don't see the average-non-programmer-joe ever doing that.
@@zerberus1097 we'll get there when we solve the hallucination and context limit problems.
@@kai_s1985 That's one way to look at it. Another is - you just became a programmer 😎
The problem lies in how our system is set up. AI will automate away many jobs, maybe all jobs, eventually. At that point there's little use for products, the people owning AI(assuming it stays under human control) won't have any reason to sell anything to pleb, they won't have a reason to share AI either, as the machine can do everything, meaning the only things that matter is land and resources.
@@iverbrnstad791 Unless there's a major breakthrough in AI, this current iteration of AI with the limitations of LLMs will definitely not replace programmers
Engineering is becoming a problem of defining what you want rather than designing and building what you want.
Short answer: no of course not
It should be said that they are interdependent in the future.
Ai will replace all jobs eventually....not just programming
No, manual labor won't be replaced. And even if it is, for how long? Oil is not infinite, and neither are other fossil fuels. Without them you can't maintain other forms of energy production. AI, drones, and robots are going to be a thing, but a thing of a very short period before it all dies out.
Will calculators replace mathematicians? Will word processors replace lawyers? Come on, it's a tool that helps you work faster!!!
these guys look like literal harkonnen creatures
Nobody here knows what they're talking about. It sounds like a bunch of teenagers who have to submit a report on something way out of their element, and then a bunch of elementary kids admiring them because they use big words and have facial hair.
Reporters are experts in reporting. An interview with someone with 10 years of experience in their career is kind of underwhelming. Dunno, talk with someone who started programming before personal computers came out.
anyone who thinks ai won't replace the majority of programmers is in denial
Saving 8:30 to show my gf on exactly how my brain works
The way he talks sounds like he read some Frege/Wittgenstein
What he's trying to say is: of course, but we're hoping not.
I'll just save you all 12 minutes. No. No, AI is not going to replace programmers. It's like asking if power tools will replace construction workers.
Such a poor analogy. The human intervention needed in the virtual world and the physical world are on different spectrums; virtual automation is the basis for physical-world automation and humanoids, which is still going to take some time. But we're not talking about the complete elimination of humans here: intent will still be needed in every field. It's just that the work of 100 people is going to be reduced to 20, so the work becomes a super-niche skill instead of conglomerates fulfilling their social responsibility of providing jobs.
In short, the power tools you're talking about are soon going to be operated by another power tool, which we call humanoids or robots.
They're also just selling their product.
@@harshalsharma2518 dawg what in the yap
If you code, you know the truth is it will eventually replace us
@@harshalsharma2518 Power tools aren't intelligent. AI can be assumed to be intelligent at some level. This is categorically different from a power tool, smh
Somehow...I'm feeling fear, by just listening to this....
This is not normal...
Have any of you actually tried to use AI for coding? I've been trying for the last few months and it never works. I've yet to get a working project from AI; there are always errors and functionality problems, and no matter how much time I spend I can never seem to get a final product. AI still kind of sucks. I've tried many models, and many seem to have short- and long-term memory loss. They aren't taking over the world anytime soon, trust me.
I think they lost the point. Yes, from a programmer's perspective AI will be "fun", but when people ask whether there will be jobs, or whether they should go into some field, they want to know: will AI take my job? The answer is yes lol. Idk if they are purposefully avoiding that answer or only thinking from the programmer side and not the business side, which ultimately, without government intervention, will wipe out a large portion, as we have already seen.
It might replace coders. Not engineers, at least not in the next 20-30 years
What about the joy of programming? Is it like riding a bike without using the handlebars? What about creativity and innovation? This AI thing is killing the human right brain in the long term.
Have these guys ever spoken to a business person? No way they will ever build a technical system, not in English not in anything else.
I'm too dumb to understand what they are saying lol
They would probably feel the same if you talked about the specifics of your profession.
I really enjoyed these clips and these young men are indeed brilliant, but I was keeping track, the Cursor team almost never answered directly the main question asked in the video title. I thought for a moment I was watching a political interview.
For those interested in programming, please do it as a hobby. In the next 5 years the job market is going to be filled with unemployed programmers with a lot more experience than you, the newbie. Honestly, reading the programmers in these comments I see tremendous BIAS. A human's pride. Not a single one can admit that it went from an incoherent chatbot about 2.5 years ago to a fairly useful programming tool. Where will you be in another 2.5 years? 😂😂 Prepare to eat the rich.
Just hyping their startup for valuation for exit and retirement. Move on.
So much nerd. Love it. ✌🏽
Programming is already natural language! Anyone here using binary??
Chat GPT just proves people don’t actually read anymore.
Yes, Cursor is nice and juniors love it, but honestly it's not going to solve the real problems any time soon.
I like how they never give a proper answer to a single question 😂 It's like asking politicians about the immigration problem.