I have to wonder how this can be more effective than writing code yourself. If I have to add more words and explanation to my "prompt engineering", then where's the benefit? I expect you still need the same level of skill to write the Go as you would need to get the correct answer out of ChatGPT. How do you learn all the things you need to make a good prompt and/or program if you're not out there writing code?
This is definitely what I'm wondering. There have been countless times that I'm writing a prompt and stop halfway to say fuck it, because you have to be so explicit to be reasonably sure the output will do what you're trying to do and safely handle exceptions. Having to be that explicit is basically like having a junior developer tag along for everything, except you can only speak to them through text -- it's just not ideal
It just shifts the workload somewhere else. Instead of having to think about the program and the language-specific stuff, you spend all that time thinking about how you're gonna explain it to the AI. It's more infuriating imo, but maybe some people feel they're faster when doing it (doesn't mean they are, but the right mental space still helps).
Shifting the workload to explaining the job to be done is actually really interesting, and kind of reminds me of rubber duck debugging. I personally hate using generative AI for actually writing code (the exception being Copilot autocompletes), but now I'm thinking about how many problems developers have that actually stem from them not having a firm understanding of the problem itself or what a solution might look like. I can see the benefit of cementing your explanation (and understanding) of a problem through that feedback loop.
I've been a developer for 25 years, and writing code is the easiest part of my job. I don't see AI replacing us anytime soon.
An OpenAI ambassador says that ChatGPT generates her code. Yeah, no conflict of interest at all...
Really interesting discussion. Love the straightforwardness of the responses and questions. I am still wondering:
1. How do you manage/persist context when generating code? Typically, for medium-sized backends that are developed iteratively, how do you keep previous context to generate the next code fragment?
2. How descriptive should prompts be in such cases?
3. How much is productivity hurt when context switching between browser and editor?
The timestamps are for the Low Level Learning episode...
thanks for pointing it out! just corrected :)
@@backendbanterfm great, thanks!
If each company wanted their own in-house AI models with a large enough context window to fit their repositories with millions of lines of code and all the git history, wouldn't that take an insane amount of energy and money? I can't see that happening any time soon, if at all. It just looks too expensive, and only big tech would be able to afford something like that.
Study transfer learning boi
The problem with AI code is that you still need to know the code to inspect it. And you can’t effectively learn to read without also writing. Eventually it might be a C/assembly kind of situation, but… it’s definitely not that good yet, and it’s a little dubious whether we will get there soon. We’ll see.
Natalie has a lot of interesting perspectives. Enjoyed this interview
Great talk, Natalie's takes on AI are the most reasonable ones I've heard.
Super interesting point about hackers taking ownership of the hallucinated libraries. Is that something you've seen in the wild?
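To picture what that attack looks like: a model suggests an import for a package that doesn't exist, and someone later registers that name on a registry with malicious code. A minimal sanity check, assuming Go and the public module proxy; the module path in the sketch is purely hypothetical, not something from the episode:

    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    // Ask the public Go module proxy whether an AI-suggested module actually
    // exists before depending on it. A miss here can mean the name was
    // hallucinated, which is exactly the kind of name a squatter could
    // register later with malicious code.
    func main() {
        module := "github.com/example/fancyjson" // hypothetical AI-suggested import
        if len(os.Args) > 1 {
            module = os.Args[1]
        }
        // Module paths containing uppercase letters need case-encoding for the
        // proxy; this sketch assumes an all-lowercase path.
        resp, err := http.Get("https://proxy.golang.org/" + module + "/@v/list")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            fmt.Printf("%s: unknown to the proxy (HTTP %d), double-check before importing\n", module, resp.StatusCode)
            return
        }
        fmt.Printf("%s: published versions exist, but still check who owns it\n", module)
    }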
idk about experienced devs, but for a beginner ChatGPT is just great. Much quicker to ask GPT how to code something than to google it
Next week a recommendation for sqlc. Looks interesting.
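For anyone curious ahead of that episode: sqlc generates type-safe Go from annotated SQL. A rough sketch of the shape it takes, with hypothetical package, database, and query names loosely following the sqlc tutorial (it only compiles once sqlc has generated the code):

    // queries.sql, the input sqlc reads (names here are hypothetical):
    //
    //     -- name: GetAuthor :one
    //     SELECT id, name, bio FROM authors WHERE id = $1;
    //
    // Running `sqlc generate` turns that into a type-safe Go method, used below.
    package main

    import (
        "context"
        "database/sql"
        "log"

        _ "github.com/lib/pq"

        "example.com/app/internal/db" // hypothetical package sqlc generates into
    )

    func main() {
        conn, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        q := db.New(conn)                                   // constructor sqlc generates
        author, err := q.GetAuthor(context.Background(), 1) // generated from the query above
        if err != nil {
            log.Fatal(err)
        }
        log.Println(author.Name)
    }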
A tensor is like a rank-3 vector but with some geometry constraints under curvilinear coordinates; that's not relevant for AI, though, where even the addition and multiplication can be replaced by the weaker constraint of monotonicity, given the sigmoid non-linearity.
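For anyone following along, the geometry constraint being referenced is the standard transformation law under a change of coordinates, written out here for a rank-3 contravariant tensor (textbook notation, nothing specific to the episode):

    % Transformation under a change of (possibly curvilinear) coordinates x -> x',
    % summing over the repeated indices a, b, c:
    T'^{\,ijk} = \frac{\partial x'^{\,i}}{\partial x^{a}}
                 \frac{\partial x'^{\,j}}{\partial x^{b}}
                 \frac{\partial x'^{\,k}}{\partial x^{c}} \, T^{abc}
    % An ML "tensor" is just a multi-dimensional array; nothing requires it to
    % transform this way.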
I think our job will look pretty much the same in 5 years.
13:44 I also build 90% iteratively with GPT. DevOps though, not dev.
99% here