I consider myself an intermediate programmer and I don't even use AI for the majority of code I write. Sometimes I just let the AI suggest a possible solution when I'm really stuck but I just use that idea to write the code myself. It's a good practice to try to tackle the problem yourself first.
I have almost never used AI in a professional setting tbh. And if I do, it's usually for troubleshooting an issue with a package or some obscure error. LLMs are honestly a pretty good alternative to search engines/documentation in some cases, but are useless for solving real problems imo
You know what this point in time reminds me of? When suddenly companies decided to offshore and contract out dev to obscure body shops, only to realise that the cost of the rework, the quality issues and the customer blowback far exceeded the expected $$ savings. LLMs are fantastic but just not super ready for the big time where the results matter.
Absolutely. Look at the top companies in the S&P 500: Google, Amazon, and so on. They all hire SW devs whom they pay $400k. There is a reason they don't instead hire Indians in Bombay at $20k. The SW companies that will top the stock market in the coming years are unlikely to be those relying on ChatGPT to do the "innovation" for them. It will be those companies that hire good engineers and pay them well, just like Google and Meta have been doing for the past decade. These good engineers are the ones who don't need to constantly ask ChatGPT to give them the answer to their problems.
I have been using Claude to code and it is great when you know exactly what to do. But that is exactly the hard part of programming. Basically I know what to code and I just guide the LLM to write it for me. Stuff like when I have to take a list of data points and plot their histograms in a matplotlib grid. The LLM is way faster than me. But if you're solving problems you don't know exactly how to solve, which is mostly what we do, you cannot possibly give the LLM the correct instructions. Yes, you can go back and forth and shape the code it generates by giving it more context, but you need to know how to code for that, so then why not just do it? You're gonna do it faster as well.
I've got the same experience. For my day job, 95% of the code I need to write I already have a pretty good idea of what I need, so Claude does a great job with my instructions. E.g. I need to write an iOS form-filling app which will upload JSON to an AWS endpoint. Stuff like that it's great at.
IMO the HOT take is just a shit one. It's the "I'm a bit better dev than I think" take. It's the "I envy AI, because that's edgy and gets me views" take. I simply can't find the power in me to write 300 lines of code when I can just prompt it in a sentence or two. I'd rather fix the few mistakes and go back and forth. Shit, sometimes I just can't even bother to think how to begin writing some out-of-the-box UI/UX functionality, but I can easily explain it in plain English. That's when it's at least 5X. I have no issues admitting that my brain freezes sometimes and I'm lazy. I don't intend to be a youtuber or a Netflix nerd. I like money, drugs and women.
That's why, if I were to use some LLM, I would use it for generating boilerplate code, and if I noticed that I need that boilerplate multiple times (I guess you could notice via your prompt history), I would ask it to turn one version into a generic version and then use that one.
@@Huey-ec1 I mentioned an example where that is not the case. Instead of going and checking the API for matplotlib subplots, I can just ask the LLM to do it. The code is easy to verify but cumbersome to write. Perfect use case for LLMs.
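For readers who haven't done that particular bit of boilerplate, here is a minimal sketch of "histograms in a matplotlib grid"; the 2x2 layout and the random placeholder data are illustrative assumptions, not the commenter's actual task:

```python
# Minimal sketch: plot a list of datasets as histograms in a matplotlib grid.
# The 2x2 layout and random placeholder data are assumptions for illustration.
import matplotlib.pyplot as plt
import numpy as np

datasets = [np.random.randn(1000) for _ in range(4)]  # stand-in for the real data points

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for i, (ax, data) in enumerate(zip(axes.flat, datasets)):
    ax.hist(data, bins=30)
    ax.set_title(f"series {i}")
fig.tight_layout()
plt.show()
```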
I work as a backend dev with Node and Nest.js. The last couple of weeks all I do is fucking CRUD, so I just put the fields into Copilot chat and say "generate me new CRUD endpoints based on my other endpoints with these fields" and it generates all the functionality, starting from the ORM schema, DTOs, controllers, services, logging, Swagger docs🙃.
Funny when people want AI to be a Swiss Army knife. Just like before, it's better at some things than others. Like video recommendations... Or music playlists... And code that it saw a million examples of.
Most of the reason why that works is because of the huge adoption of those platforms and tons of code availability. As soon as it’s in a realm that doesn’t have huge data sources available, it’s all hallucinations.
3 years ago (no ChatGPT back then), when I did my databases course in college, the professor didn't want all the mock data we used to be gibberish created by Python scripts. I literally had to sit down for 3 days and write realistic mock data BY HAND to test the database. With an LLM that would take 1 minute.
@@awmy3109 Not when the data gets complicated. Let's say you want to write a realistic-looking description for some made-up research paper or something like that.
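For the simple end of that spectrum, a library like Faker already produces realistic-looking rows without an LLM; a minimal sketch is below (the field names and row count are my assumptions), and it admittedly doesn't cover the "plausible research-paper description" case mentioned above:

```python
# Minimal sketch of generating realistic-looking mock rows with Faker.
# Field names and row count are illustrative assumptions.
from faker import Faker

fake = Faker()
rows = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address().replace("\n", ", "),
        "signup_date": fake.date_between(start_date="-2y", end_date="today").isoformat(),
    }
    for _ in range(10)
]
for row in rows:
    print(row)
```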
I used chat gippity often, until I spent over an hour and a half trying to get it to give me code that wouldn't fail the tests. Out of anger, I deleted all of its code and started from scratch. I finished in 20ish min. Turns out I was lazy and didn't want to think it through. NOW I'll use it mostly to explain why or what something is doing. Basically documentation. It's just faster than having to track something down, usually.
Thanks for this. I went and watched his video because I think that's the right thing to do. I couldn't agree more. I'm not a UI programmer, but you can see that even for the simple stuff you need to understand what goes into making a component behave how you want, and the AI has no idea about what it does wrong until you tell it.
LLMs can't figure out if something exists or not. They *will use* some similar looking content from another part of the training data if what you ask doesn't exist in the relevant part of the training set.
Yeah, some kicks in the chins would be super cool. Some people are so overhyped about AI, being "10x" devs, but I don't see them developing new JavaScript frameworks at 10x speed...
The most effective hint on LLM usage I can think of is: LLMs suck at details (confabulations aka hallucinations) but are good at high-level structure, e.g. "draft me a TOC for a specific type of document", "what components should such and such system have", or "enumerate various types and subtypes of X in the context of Y".
I can’t help but feel like this is yet another case of engineers thinking “we’ll release this to the world and they’ll use it in exactly the limited way we intend it to be used” only for the marketing department to say “this is the most important invention in the world and everyone must use it for everything.”
Quite a few of those engineers at Anthropic and OpenAI buy into the hype (though not fully), from what I've seen. I think it's a cultural in-group/out-group thing at that level.
@@Happyduderawr Now that you mention it, my own company is building an internal AI product based on Claude that they will not stop gushing about. I always thought they were just trying to justify all the money being spent on training the customized bits of the model, but maybe they really think they’re changing the company.
I really like using Claude to write serialization code. I work with a bunch of large APIs where a single response could be 100 lines of JSON, and all of it is necessary business data, but translating the resulting snippet into a Rust struct that's correctly annotated with Serde attributes is the WORST, most manual task I've ever had to do. Now I can put the blob into Claude, and it'll spit back out a Rust struct that's 99% ready to compile.
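Since the code sketches in this thread are in Python, here is the same "JSON blob to typed object" chore illustrated with dataclasses; the Rust version described above would instead derive Serde's Deserialize on the struct. The Customer/Order fields are made-up placeholders, not the commenter's actual API:

```python
# Minimal sketch of the "JSON blob -> typed object" chore in Python.
# Customer/Order field names are illustrative assumptions, not a real API.
import json
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    total_cents: int

@dataclass
class Customer:
    id: str
    name: str
    orders: list

def parse_customer(blob: str) -> Customer:
    raw = json.loads(blob)
    return Customer(
        id=raw["id"],
        name=raw["name"],
        orders=[Order(id=o["id"], total_cents=o["total_cents"]) for o in raw["orders"]],
    )

blob = '{"id": "c1", "name": "Ada", "orders": [{"id": "o1", "total_cents": 1999}]}'
print(parse_customer(blob))
```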
For short-form refactoring it's incredibly useful. Had to create a bunch of objects with slightly different attributes. Gave Claude the data file and format for objects and had it finished in 25 seconds when it would've taken me 15-20 minutes to type out by hand. People are just focusing on use cases that will never be optimal. EDIT: Obviously you should ALWAYS check the output it gives you. Just want to make sure that's clear.
@@jamess.2491 this is a great place for unit tests. AI is great for reducing manual labor, but got dam it's gonna have hand-written tests around it cuz I don't trust it further than I could throw the datacenter that trained it
The OP kind of gives away that he's a bad coder by isolating "UI components" as something Claude made him a lot faster at. UI work outs a lot of bad devs because making a component that *looks right* isn't hard, but getting something that *feels right* when you're interacting with it or changing state is where all the work goes and LLMs are terrible at that part.
Prompt: I have a tower defense game using a list of bools for pathing. I want to add an ice tower that slows creeps. What's the simplest way to modify my existing system to accommodate this? Please provide the most straightforward solution first, then mention any alternatives if relevant.

Sonnet 3.5: The simplest way to modify your existing system to accommodate an ice tower that slows creeps is to introduce a "slow factor" for each tile on the path. Here's a straightforward approach:
1. Replace your list of bools with a list of floats:
- 1.0 represents normal speed
- Values between 0 and 1 represent slowed tiles
- 0 (or negative values) can still represent impassable tiles if needed
2. When a creep moves, multiply its speed by the slow factor of the tile it's on.

This solution is simple and requires minimal changes to your existing system. It allows for easy stacking of slow effects (just use the minimum slow factor) and is computationally efficient.

Alternatives:
1. Creep-based slowing: Instead of modifying the path, apply slow effects directly to creeps. This is more flexible but might be more complex to implement and manage.
2. Separate slow layer: Keep your boolean path and add a separate list for slow factors. This keeps the original pathing intact but requires checking two lists.
3. Tile objects: Replace the list with tile objects that have properties like "passable" and "slow_factor". This is more extensible for future features but requires more significant changes to your current system.
4. Event-based system: Use an event system where towers emit "slow" events and creeps listen and respond to these events. This is very flexible but more complex to set up.

The simple approach provided first should work well for most tower defense games. If you need more complex interactions or are planning to add many different types of effects, you might want to consider one of the alternative approaches.
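A minimal sketch of the first approach Sonnet describes (a list of floats, with creep speed multiplied by the tile's slow factor); the names and numbers are illustrative assumptions, not generated code:

```python
# Minimal sketch of the "list of floats" pathing approach, assuming a 1-D path.
# BASE_SPEED, slow_factors and Creep are illustrative names.
BASE_SPEED = 2.0

# 1.0 = normal tile, 0.5 = tile slowed by an ice tower, 0.0 = impassable
slow_factors = [1.0, 1.0, 0.5, 0.5, 1.0, 0.0]

class Creep:
    def __init__(self):
        self.position = 0.0  # progress along the path, measured in tiles

    def step(self, dt: float) -> None:
        tile = min(int(self.position), len(slow_factors) - 1)
        self.position += BASE_SPEED * slow_factors[tile] * dt

creep = Creep()
for _ in range(10):
    creep.step(dt=0.1)
print(round(creep.position, 2))  # how far the creep got after 10 ticks
```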
LLMs are good when:
1) You're learning (you can't fully trust it and it forces you to question everything)
2) You actually know what you're doing, but you're trying a new language or something
3) You just want someone friendly to talk to.
Heavily agree. I'm pretty much a beginner at everything programming, but I've been using ChatGPT to give me practice exercises for C++ and so far it's been pretty helpful and forces me to learn.
I've been using GitHub Copilot as an open source maintainer since it became available and I genuinely do not understand how people are misusing it. I already work with code every day, so I don't need to look up *how* to do things; I can just tell it what I want it to do and it'll spit out the code that does what I tell it to, and I can quickly eyeball it to see if it's right and make adjustments if I know it could be done better. I feel like all these people using it and getting bad results are getting them because they aren't telling it what they want it to produce; they're being vague or asking open-ended questions that it doesn't have the necessary context to answer.
Which is why it's amazing for university students! You're always learning, and you always sort of know what you want but not exactly how. LLMs bridge the gap really well.
Ehhhh, I'd have to disagree for when you're first learning, such as a uni student. Part of the learning period is failing on your own REALLY BAD. Those make the best lessons. Spending hours trying to do something yourself, only to find out the answer was trivial. Or just grinding something that simply isn't easily solved without years of experience. Asking an AI will just hold your hand through it. The difference between a professor teaching a student and an AI is massive. First off, a professor with real-world experience is invaluable. They know when to hold someone's hand and when to let go. When to give a tidbit and when not to. AI leaves those decisions in the hands of the learner. Most people will choose the easy way out. The ones who don't will begin to stand out in coming years.
I'm mainly using LLMs to get information / summaries from written texts. The summaries are not that great, but they do provide one point of view on the original texts. For example, I prepare a lot when studying from the Bible, working through much of the exegetical texts, so LLMs help to get a view of the historical and/or cultural context for the text I'm looking into. They are pretty limited in giving concrete new opinions (novelty), but they are pretty good at munching away at large documents with historical / cultural / geo-political data and formulating valid summaries related to a thing that I'm interested in. But for programming, most of the job Copilot is doing for me is smart auto-completes and refactorings + name suggestions based on context (which is, again, based on the surrounding code, and helps me with naming more so than anything else).
Claude (like the other LLMs) starts off looking really smart, and then slowly evolves into a total crash dummy as you progress. It starts butchering its own code and forgetting what it wrote until it completely unwinds. Then it starts apologizing for its noobness. With that said -IF- you implement the design patterns yourself, and use it to handle the heavy lifting on redundant tasks, it's a great tool to help a real developer become more productive.
5:25 Literally 10x faster with LLM is obviously easily obtainable if you have bad enough skill issues. I think even 100x might be possible if your own skills are bad enough.
You have to use a tool for its intended purpose, just because the word 'programming' is in the subject, doesn't mean you are doing the same thing. The whole thing about developing is, by the way, reducing complex problems into isolated parts that are individually simple. At least, that's my view on programming, separate concerns and isolate dependencies. Also, I'm good at solving problems, I'm bad at remembering all the minor details of each language, especially as developing is only 10% of my work. Who cares if I can concatenate strings with &, +, ., && or whatever.
Finally I got it. It's not Y Combinator, it's HYPE Combinator, which means it combines hype with tech, which might lead to some good outcomes, because there is no better ingredient than hype to get people to use your product.
11:25 You could print the ASCII table one time and look it up there. Or even just do a one line snippet of Python to print the ASCII value of a character. That has to be one of the least useful examples of proompting ever.
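For reference, the one-line version of that in plain Python (nothing assumed beyond the standard library):

```python
print(ord("m"))  # ASCII/Unicode code point of a character -> 109
print(chr(65))   # and the other direction -> 'A'
```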
@@rostcraft Same kind of comment made for that. A more useful case would be Socratic debates over why one kind of business decision makes sense over another according to different personas, only for the human to decide for themselves what to bring to the table in a real conversation about high-level decisions with co-workers.
Wait so your response to needing the answer to a random question is to have already answered the question in the past and just look back at that answer.... You do know why this isn't smart?
@@TheKastellan You do know that the terminal codes are such a common question that they're part of something called a STANDARD, right? You know, the ISO 6429 and ECMA 48 standards. You can even find other people who have dealt with the same thing before, in a simpler way and in actual code, on sites such as RosettaCode, RIGHT? That's such a silly strawman fail you just committed. 🤦♀️
Yeah, for example, something like interview questions can often be answered with LLMs, but when I try it on anything that is not something you should kinda memorize, it is pretty good. But if you try to feed it something, it often even adds mistakes back after you tell it to remove them (or just responds with the same code).
Same. Been trying out Claude and ChatGPT more recently, just to search up simple definitions of terms I haven't come across before, and code snippets for simple functions. That's about all the use I could get from LLMs so far.
For non-expert programmers, I think it's common to start with pseudocode and then temporary code before finalizing code. This way of coding in 'drafts' seems like it would benefit from AI. It does the work of creating the first draft for you. Better programmers just start coding; they don't need the pseudocode, so it wastes time. This is putting aside the issue of knowing the code. It's hard for me to jump in on someone else's project because I have to learn about it first. However, for these folks, the pseudocode is the tedious part, and so it feels like they get to the better part faster.
I asked Claude to make me an SVG image for a sprite and then I learned how to edit SVGs for the first time to fix it. I do think Claude is a game changer, because I went from not doing something in a neat way to being able to technically and creatively fix it.
I use it mostly to teach me things I'm borderline on. E.g. I never learned JS but can read it well. I have written stuff in Python and had it convert it, so I can see how it does it in JS. Also, it's like a little dude that RTFMs everything, so it's a one-stop shop for how to use most libs.
The NeetCode website uses the X6 graph from the AntV library. I have some experience with it, as I had to use it at work and also contributed to the open-source workflow library Elsa, which also uses it, and the only thing you need is the configuration for it to look like that. But I bet the AI didn't even use the same library; it just found a graph lib and used it.
You are looking at this completely backwards. This is not meant for someone with 15 years of experience at Netflix. It's meant for low-skilled Indian workers, and then eventually to work on its own. The goal is not to have the best code ever written in the history of man, or the best of any product a robot or AI is going to make. The goal is to get workers for $0.10 an hour. If you think companies are going to stop with that goal now that they've had a taste... we must be living in different worlds. I think the truth is experienced programmers don't need AI, as it's basically a document lookup tool. But going from never writing a line of code to making an app and publishing it on the Google Play store in less than a week... it's a powerful zero-to-minimum-viable-product tool.
The point NeetCode makes contradicts this. The product that the AI comes up with is not only ugly, but unmaintainable. If there were additional things to be added, you’d either have to untangle the entire mess the AI gives to you, patiently guide the AI by the hand, or rewrite it from scratch. All of those options are huge time wasters, and removes the possible gains.
@@MrDgf97 I think the answer is they're just not going to maintain it, just shit it out into the market and not support it, like the code equivalent of ewaste.
The only reason I use GPT is to tell me the syntax of something. I am new to programming as a whole, since I started seriously just a year ago. AI helps me write my ideas in the language I am using, and it's honestly bad at it lol; it makes things much more complicated, and it couldn't solve some small issues that I was able to fix immediately once I took a closer look at my code. Now I just use it for syntax and to brainstorm through a problem, because writing down the problem helps me understand it.
The literal unfixable problem with LLMs is that when it comes to the subjective, they basically output the mean of the bell curve distribution to determine the response. You're getting the average quality of content from the web, not the exceptional code. If you assume the responses are great responses, then you either have average code or below-average in general. If you want to use them for the subjective, always assume there is a much better solution in most instances.
This is simply a matter of having enough parameters. If you have trillions of parameters, then the LLM will be able to change its answer based off particular 4-grams or 5-grams that occur sub-0.0000001% of the time in the dataset, which means rare n-grams certainly have an effect. This has never been an issue for LLMs, and simple TF-IDF scoring solved that problem back in the early 2010s IIRC.
I just use the models to clear my doubts. For example, if I don't understand what recursion is, I ask the models until I do. And at one point, I did that and it worked. I use it to help myself, not to work instead of myself.
Exactly. GPT is a decent improvement on search engines; it does a better job at collecting the specific info you want and phrasing it in different ways. That's what it's good at. The problem is the people who believe it will spawn general AI in 6 months 😂 it is not conscious and doesn't think. Just a better search engine.
Good, it's great at presenting the aggregation of what it has been trained on. It is like getting multiple results at once, tailored to how you asked for it to be formatted.
To me there is definitely some level of prompting skill, but there is also a good dose of empty hype. The answer IMO is that a lot of people are way better at prompting LLMs than we are, but also that not all projects are the same for an LLM. More than difficulty, it's how "typical" the problem is that matters. If the problem is within the latent space of the training data (the problem isn't in the training data, but its nature is somewhere between problems that are), prompting can get you the solution. You could have an LLM solve a problem 3 times more difficult than another it failed at, simply by a difference in prompting skill. But I doubt you could say the same if the problem is truly unique. It would be totally outside the latent space, unpredictable. If a model knows everything about software programming and you give it a hardware programming problem, it can't solve it no matter how well you prompt or how advanced the model is.

I recently got into image generation, and that's the best comparison to me. There are people out there that can get a great result in maybe 5-6 hours of prompting, which is way less time than producing the same result by hand. But at the same time, the greatest results you see are the results of multiple days of refining. I have never felt the gap as much as when I installed ComfyUI (one of the best tools to run image generation locally, with a node "programming" kind of system), spent maybe 50 hours in total, and had yet to produce a single flawless image. I used the tricks I could find, all the knowledge I could google, but there was a huge gap between reproducing someone's result and producing my own stuff and having it still be flawless.
I'm curious. Why did you say "Kamala Harris of UI's" rather than Donald Trump there? Trump has been an uncanny facsimile of a human for decades. Everything I have heard from Harris recently has sounded very reasonable.
Thanks for covering the hype (now we know from a professional, so it's 100%). But... I use it for coding some simple stuff. Of course it needs some code review and fixing, dodging hallucinations, etc... yet it gets me going - I use it in a "code NOW, DO it!" way to get out of my comfort zone.
This misses the forest for the trees. No, the current LLMs are not going to replace you or even make you 10x more productive. That said, it's still a major breakthrough, and we could be only a few algorithmic breakthroughs away from that being a reality. It's better to prepare for a world in which that is the case instead of getting so defensive. It might be 10 years, or it might be one year, but I do strongly believe it is coming.
LLMs help me with syntax. At most I ask it to provide alternatives. I'm not a seasoned programmer in lang x or y, but well versed in computational thinking. So, I agree, LLMs perform pretty well when dealing with very scoped and isolated coding tasks, aka snippets.
Personally I think floats are more flexible, since you could easily use the array as a multiplier for the speed of the entity by simply multiplying the array element by the base speed. Additionally, you go from 0 to 1, and 1 is much easier to type than the max unsigned int. (The following is a wild guess.) Besides, the entity speed is probably of type float anyway, and (afaik) multiplying a float by another float is generally faster than multiplying it by an int.
I use ChatGPT or Gemini. I come from a mechanical and aerospace background. I got into IoT and embedded programming about 2 years ago. LLMs really helped me with snippets of code. I was able to ask it to write better functions faster than I could, obviously because it's a skill issue, but I am sure I am already getting to a point where it's wasting my time more than helping me. So I am back to reading documentation. But it helped me through my learning phase for sure. You are still solving the actual problem by yourself though.
They just aren't using it right. GPT is hands down the best tool ever. I don't use it for strictly "writing my code" - I use it for getting answers, like comparing approaches, understanding framework internals, finding the cause of bugs or why something isn't working the way I expect it to. It's way faster than searching SO, reading GitHub issues, etc.
Usually the prompting is divided into two outcomes: either you write a giant paragraph that basically contains the answer in between the words, and get an error because your session timed out and now you have to rewrite the whole paragraph (usually by this time you have figured out the answer yourself), or you see the answer yourself. In both cases the LLM spits out dumb shit that looks legit (it learnt from the best: keyboard/internet warriors from the land of 4chan). There are exceptions to this rule when you ask an LLM something general/basic. (This is why, in general, giant LLMs are a waste of time and resources: they are so general they struggle to be specific and concise. What is more intuitive, resource-effective, and energy-efficient is splitting them into multiple experts who have the right assumptions and the right biases familiar to experts in the field.)
I don't use Copilot. Anyway, I asked LLMs different questions in my area of expertise, and while the answers seem to be OK on the surface, they are always shallow and wrong in the details.
I don't think you really understand the essence of why some people like AI for development. Using AI for development is like having a team of knowledgeable and enthusiastic interns who come back within 1 minute with every assignment you give them. But they never check or test their code. This is why people who are used to giving other people coding assignments tend to be enthusiastic about AI: they get the same as before, but they can iterate much faster and don't have to explain everything up front, with the team forgetting half of it anyway. And you have to check the code anyway.

For engineers, it can be bad, because it's like some of your code goes back to the intern level if you treat it as a code-writing assistant. As an engineer, you should only give coding jobs to AI that you would give to a very knowledgeable intern, and give enough context. Claude is not actually good; what Claude.ai does is use a metacognitive pre-prompt, which results in fewer mistakes when you don't give it enough context.

I'm 10x myself for some projects, but for others (where it's more engineering than architecting), it might actually be slower, as some rookie mistakes are introduced, and as an engineer it is very difficult for me to keep monitoring and evaluating all the code changes the AI generates in mere minutes. It would reduce my job to some kind of QA/tester, and that's not what I want to do if I want to code.
So it's mostly for people who are not good at working in a team? Creating basically a fake team of sorts? Maybe that explains why I feel no need to get AI help, I work solo for 99% of the time.
8:00 It is not being bad at prompting; it is that by the time I figure out the magic words to tell the LLM to write correct code, I can write the correct code myself. So, if it is 10x for someone, then they just can't write code fast enough :/
I have argued with people who adamantly proclaim that LLMs significantly improve their coding job, and that LLMs can write all the things they need. Meanwhile, I can't get them to update an HTML table properly. The people who claim this are the people who are going to be replaced. If the LLM already knows everything about your job, legitimately, be scared.
I've been building RoR applications for 3 years. When I switched jobs, I was tasked with building a small WPF application in C#. I decided to make an experiment and use ChatGPT for the whole development process. I got so frustrated that I decided to take an 8h crash course on WPF and C# to do it myself. As Prime says, LLMs are nice for building documentation and snippets, but I don't think they are close to replacing a good software engineer. At least not this decade.
LLM's have been helpful to me in terms of research and finding documentation. Sometimes, if I have a big idea and don't know where to start, I'll use one to spit out a block of code for the base idea to get the creative juices flowing, but so far that's been its main purpose in my workflow. Does anyone have any thoughts on these use cases?
So, when you're good enough, all problems that make it to you are hard. Therefore sure, many people can claim that LLMs are "solving their issues", but my issues aren't even easy to form into written form, because there's just so much to think through. By the time I had typed it clearly enough for LLM to understand, I'd have already figured it out myself.
Remember people, LLMs are just the effects of statistics: you basically "try" some statistical stuff and expect good code to come out. Think, people, you HAVE a brain; learn, become better.
The golden line in this has to be “people underestimate how good they are at solving problems”. This conversation becomes much shorter when we remind ourselves of this❤
GPT can't even help me with excel issues lol. I was playing around with it, tried to give it the most simple example problem i could think of, and it was nonstop circular references.
I like copilot for some generated code but mostly for JSDoc completion. It sucks at generating tests which I wish it was better at because I don’t like writing tests.
Regarding Copilot, I find it super useful for writing code against the AWS API, because that for sure is faster than checking the docs, as the AWS API is not very consistent between services. But of course you still need to know what you want to do.
I've been using Gemini recently. It doesn't speed up my work but it makes it more comfortable. I still need to do a thorough code review of everything it spits out.
I started programming about 18 months ago. I have a different specialism, and I’m building a tool to help enable best practice for that specialism - and it is too niche/too time intensive/too expensive for an engineer to learn the field in order to build the tool. Since I started programming in the era of AI, I can say with absolute confidence that it has not improved my ability to code, or write the stuff I need, or even pointed me in the right direction - and it has absolutely slowed things down. It’s a shame, because the real achievement within this space for “no coding experience founders” is just how quickly you can actually learn - the barrier for entry is lower than it has ever been.
I was going to say the same thing about having to refactor the whole thing from scratch. It's important to get outraged at ridiculous stuff. Honestly, the process of AI "getting better" with an industry-wide awareness is about collectively merging the concepts of what it does well as it gets better, with the concepts of when, where, and how to outsource it to perform other tasks. It's similar to learning how to leverage skilled and unskilled people. It's also just what happens with most technologies as they mature.
I wore the AKG 712K for years and the Q701 before that... You seem to really like them. I suggest you try "The Composer" by Austrian Audio. All the AKG engineers are gone and went to Austrian Audio. I listened to a good number of great headphones at CanJam NYC and I walked out buying their Composer headphone. My ears were used to the 712K, and it was just that, but improved in all the ways it needed to be. I strongly recommend you give it a listen. It's a better version of the AKG 712K, but with actual bass. They can run without an amp, but the amp really gives it good bass. Either way, good videos.
Your take seems pretty accurate - ChatCCP really would make me 10x more productive because I have no clue wtf I'm doing. If you're already familiar with programming, the language, the APIs, then just sitting down and writing yourself would be way faster.
Luke was spitting fire. UI for many apps is a 0.1x problem, but somehow we've made it really complicated. People working on game engines are doing some real hardcore programming in which AI can't do shit. UI, on the other hand - the fact that we are still hand-cranking it out in many instances is so backwards, such a 0.1x problem.
Junior dev here: But you said it yourself, it's really good at answering documentation questions; that makes me work 10x faster because I get my information faster. It doesn't solve real problems, but it can do research really fast for me. It is also pretty decent at explaining legacy code line by line, which can be really useful if you are getting into a new language. Sometimes I was able to surprise senior devs simply by being able to write SQL that was way out of my league. For example, I reduced 800 lines of legacy C++ to about 78 lines simply by using a query. The thing is, without GPT I would never have been able to form that query, because I am kinda new to SQL. But I knew what I wanted to be able to do.
With the "10X from AI" thing, I think that there's some really niche cases where I've gotten massive benefits, but mostly I agree that if you're 10X overall, you were shitty before. The last time I was like "That was awesome" with AI helping me code, I asked chatGPT to write me a personal profile that was styled like a Terminal, and it did a pretty good job in like 20 seconds, which made a massive difference for that particular thing.
My first conversation impressed me. I'm working on something to parse a complex binary file with structures and modules where fields are defined by tables which have multiple fields themselves. I had already done work using Python to parse the complex 2100-page PDF which defines the file structure and produce markdown files that are both human readable and have values and markers I could later parse and use to generate code. Each triple heading ### had the structure code and name like "### Structure Code 1234 - Customer", I had a text line with `Total Bytes: 156`, and a bulleted list of tables with the table identifier (i.e. 18 is the table id that defines a name with first and last properties), name, and byte offset and size in brackets like `* 18 - Customer Name [12:40]`.

My conversation went like this: "Create code to parse markdown and create json" - generated Python code to parse generic markdown. "Change that to javascript" - did that. "Use this as the sample markdown" (providing the first two structures in my file) - this is where it was like magic. I ran the code it provided and it generated great JSON. Each triple heading became an object like `{ type: "structure", code: "1234", name: "Customer", totalBytes: 156, fields: [..., { id: 18, name: "Customer Name", start: 12, length: 40 }, ...] }`.

Probably would have taken me at least 20 minutes to do that myself. 10x can be real for the parts it can handle.
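A rough idea of what such a generated parser plausibly looks like, reconstructed in Python from the markdown format described above; the regexes and output shape are my assumptions, not the actual generated code:

```python
# Hypothetical reconstruction of a parser for the markdown format described
# above: "### Structure Code 1234 - Customer", "Total Bytes: 156", and
# bullets like "* 18 - Customer Name [12:40]". Not the commenter's real code.
import json
import re

HEADING = re.compile(r"^### Structure Code (\d+) - (.+)$")
TOTAL = re.compile(r"^Total Bytes: (\d+)$")
FIELD = re.compile(r"^\* (\d+) - (.+) \[(\d+):(\d+)\]$")

def parse(markdown: str) -> list:
    structures, current = [], None
    for line in markdown.splitlines():
        line = line.strip()
        if m := HEADING.match(line):
            current = {"type": "structure", "code": m[1], "name": m[2], "fields": []}
            structures.append(current)
        elif current and (m := TOTAL.match(line)):
            current["totalBytes"] = int(m[1])
        elif current and (m := FIELD.match(line)):
            current["fields"].append(
                {"id": int(m[1]), "name": m[2], "start": int(m[3]), "length": int(m[4])}
            )
    return structures

sample = """### Structure Code 1234 - Customer
Total Bytes: 156
* 18 - Customer Name [12:40]
"""
print(json.dumps(parse(sample), indent=2))
```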
So if I understood it correctly, the enemies in your TD move on a 2D bool/float array. This seems really wasteful. Couldn't you just make a Vec of points and have the enemies move from point to point? Then the ice tower could just check if any enemies are within a certain distance and slow them down. EDIT: This could also make for a fun optimization challenge.
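A minimal sketch of that waypoint idea, to contrast with the per-tile approach above; the path, tower range, and slow values are illustrative assumptions (and a real game would reset or expire the slow each tick):

```python
# Minimal sketch: waypoint movement plus a proximity-based ice-tower slow.
# path, radius, and slow values are illustrative assumptions.
import math

path = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]  # waypoints the creeps walk along

class Creep:
    def __init__(self):
        self.x, self.y = path[0]
        self.next_wp = 1
        self.base_speed = 2.0
        self.slow = 1.0  # 1.0 = normal, 0.5 = slowed by an ice tower

    def step(self, dt: float) -> None:
        if self.next_wp >= len(path):
            return
        tx, ty = path[self.next_wp]
        dx, dy = tx - self.x, ty - self.y
        dist = math.hypot(dx, dy)
        move = self.base_speed * self.slow * dt
        if move >= dist:
            self.x, self.y, self.next_wp = tx, ty, self.next_wp + 1
        else:
            self.x += dx / dist * move
            self.y += dy / dist * move

def apply_ice_tower(creeps, tower_xy, radius=2.0, slow=0.5):
    # Slow every creep within the tower's radius.
    for c in creeps:
        if math.hypot(c.x - tower_xy[0], c.y - tower_xy[1]) <= radius:
            c.slow = min(c.slow, slow)

creeps = [Creep()]
for _ in range(30):
    apply_ice_tower(creeps, (5.0, 0.0))
    creeps[0].step(dt=0.1)
print(round(creeps[0].x, 2), round(creeps[0].y, 2))
```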
AI is crazy for those who don't know how to code, but not for writing code. It helps you quickly figure out what you need for things. I could spend 20 minutes googling around for whatever tool or method that I may not even know the name of, or I could prompt some AI about my use case or what I'm trying to do, and normally get a few options with upsides and downsides in a short, digestible manner.
I think the usefulness of AI tools for programming depends on the level of the user and their experience with a particular language. If you have been using a language for 10 years on the kind of problem you've been solving for 10 years, then chances are AI won't help. But when I say I use it constantly, it's more so for learning than actually coding for me, if that makes sense.
I figured out what LLMs are best at and I can never go back. More times than I can count, I have some data in some format and need a way to get it into another format. ChatGPT saves me so much time doing this. Everything else it does for me has been mostly frustrating.
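To make the "one format into another" chore concrete, a minimal sketch of the kind of conversion being described, CSV to JSON with only the standard library; the sample data and column names are placeholders:

```python
# Minimal sketch: convert CSV rows to JSON using only the standard library.
# csv_text stands in for whatever source file the real task would read.
import csv
import io
import json

csv_text = "id,name\n1,Ada\n2,Grace\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))
```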
Prime's ability to take short form videos and make them into long form content needs to be studied
He can produce infinite content
It's called ADHD
It's called slop, the optimal form of content
That’s every react streamer
You should watch Asmongold
AI allows you to 10x 2% of your workload
Only if your workload is mostly coding / writing prose. Most of the time work is just asking the right questions and AI can’t help a lot with that.
@@tianlechen it can't code. Anyone who tells you it can doesn't work on real-world projects.
@@ConernicusRex for boilerplate, proof-of-concept, and throwaway projects Claude absolutely can code. It would just be nuts to trust it for production code.
@@bobnull your subjunior skillset is showing.
@@bobnull claude is so good man.
Referring to Claude Sonnet as Kai Cenat is one of my favorite things ever
On Y Combinator at 0:58 -
"Every. Single. Time."
You got that right.
For now, at least for me, LLMs only work as a substitute for a search engine when I'm too lazy to open the documentation. Other than that, they usually get in the way instead of helping.
Seems like they aren't useful for you at all. That must mean you're a real chad programmer and won't be replaced by AI ❤
For me, cht.sh does a better job. The only use I got was using it to expand my bullet points into paragraphs, since I suck at writing.
Sometimes, it's more convenient to search Stack Overflow instead of reaching for AI.
It's only worthwhile for medium-complexity tasks that are sufficiently specialized that a search engine won't produce an exact answer.
For easy things, the error rate is too high to compete with search. Search engines can even add better than ChatGPT.
For hard things, it's too hard to understand what it got wrong and then correctly infer what it should've done instead.
Same. If anything, LLMs improved my googlefu. Slightly replacing searching in some small cases.
Even with that, they're only reliable for popular projects. I asked ChatGPT about SoftEther stuff and it got very basic things wrong (which I immediately confirmed via the actual documentation) because there isn't as much data for it to learn from.
LLMs are a $20-a-month rubber duck. They are like talking to a coworker who is on an entirely different project, so they don't really have any context for your issues, but once in a while they make a suggestion that does somehow help you solve something when you're stuck.
Then you're stuck again cause you don't know where to go from there
@@maxzapom😂
That honestly might be the most convincing pitch for LLMs I've heard so far. I _might_ pay for that.
This
Right. LLMs are basically Clippy with an oversized fandom.
The whole "it looks kinda good, but I still have to re-write it from scratch" issue is something I'm really familiar with from my previous job as a translator. It wasn't unusual for clients to send something they'd already run through machine translation, expecting a big discount since "it's already most of the way there." Like, no, dude. I still have to do all the same work regardless.
I'm no expert, but I have a feeling that this applies to generative AI as a whole. If we assume our goal in this field is to create some sort of AGI that can do anything, then we have hit a dead end. LLMs are good at pretending to be intelligent, but they're simply not. All they do is generate text; they can't think beyond predicting the next word in the sentence, which is definitely not enough for even the most basic tasks, let alone programming. If LLMs are supposed to be a step towards AGI, then the next best step is to scrap this whole idea of LLMs being "intelligent" and start from scratch. It's like being in a maze: LLMs got us physically close to the exit, but in reality it's just a dead end, and the only way to get to the exit from there is to go all the way back to the start and plan out a path in a completely different direction.
Programming is literally translating the language of unspoken needs of people into the language a computer understands.
What kind of translations did you do, English language to other spoken languages, or programming language conversions like Fortran to Rust?
Yeah, and 2 years later, what's the situation? The translator job is disappearing at a quick pace. LLMs are just too good to justify hiring human translators, in a lot of areas.
The same is going to happen with devs.
@@maxave7448 In principle, "predict the next token" is a completely general way to solve any problem, because you can always go e.g. "given the list of cities located at (x, y), a Hamiltonian cycle of shortest length that passes through them is..." and let the LLM fill it out. But of course, very few people have bothered to examine whether it's an _efficient_ way to solve problems, and it's not looking good.
We're missing complexity theory for machine learning.
The less boilerplate in your program, the less LLMs can even TRY to do. I do a math- and analysis-heavy CLI application with virtually no fat at all and extremely particular purpose-made data structures. I've tried to see if an LLM can produce even part of what I do, and it's a total wash. The best I can do is ask it for some sample math and sources so I can start writing my own code. LLMs are great at producing JavaScript boilerplate for massive frameworks that are all on the way out anyways. That's about it.
Have you tried giving it a piece of your code as context (and telling it what it does, why, etc)? It probably won't work, but the more context they have, the better responses they tend to produce
Big agree but from an embedded perspective. It is useless for most of my applications.
A non-tech colleague of mine committed some obvious GPT-generated code and a register address was changed. It took me 2 hours to find out what the fuck was wrong with it.
It looks good at a glance but is utter shit with details and details matter.
Agreed. I have yet to find an LLM that can give me a *sensible* attempt at an implementation of a kd-tree, let alone a working one.
You're doing it wrong. Give it a set of constraints, a sandbox to play with, a lot of tools (such as SMT solvers, proof assistants, data analysis tools, computer algebra systems, etc.) and then it'll converge on a very optimised and dense design, matching what the best human developers are capable of.
@@aculleon2901 So a "non-tech colleague of yours" is allowed to commit to the code base... You might want to ask GPTs if that sounds about right.
A bunch of us older guys are going to be like that gritty retired cop in the film that gets pulled back into work for one last case, a few days before we're supposed to finally move to our Palm Beach condo to enjoy our golden years after 12 years of being the irrelevant old dinosaur in the precinct who solved crimes the old-fashioned way. But when the newer force - who were taught to solve crimes using clever prompts to AI - found ChatGPT broken, after the servers were hacked by a foreign state actor, a prolific serial killer emerged, gripping the city in terror, and now we're the only ones who can stop them before it's too late.
YEP! As long as it pays well I welcome this LOL
So we are all Morgan Freeman in seven? Get your classical music playlist ready for a night of research at the library!
@@mhedden033 More of a Sylvester Stallone in Demolition Man.
One good use case for LLMs I found is expanding my knowledge about a topic. I just ask what the possible solutions to a scientific problem are (very general, like: how can I compute the spectral radius of a matrix?) and then go to different resources for precise info.
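For that concrete example, one of the answers an LLM would likely point to is the two-line NumPy approach (the spectral radius is the largest absolute eigenvalue); the matrix here is just a placeholder:

```python
# Spectral radius = max absolute eigenvalue. The matrix A is a placeholder.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
spectral_radius = max(abs(np.linalg.eigvals(A)))
print(spectral_radius)  # -> 3.0 for this matrix
```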
If you ask it about things you already have expertise in you will find things which are subtly wrong. That continues being true in other domains, you just won’t be able to identify them. Using it like this will slowly introduce things into your thinking which sound right but are slightly wrong.
@@notusingmyrealnamegoogle6232 That's why you go on to different resources after getting an overview answer. You never accept what the LLM says as true, but you can use its explanation to find keywords that you didn't know to search for. LLMs don't turn known unknowns into known knowns, but they can turn unknown unknowns into known unknowns.
@@notusingmyrealnamegoogle6232 How do people who know AI hallucinates ask it technical questions in good faith?
@@notusingmyrealnamegoogle6232 Well, he is saying he uses LLMs to get an overview of a topic, and then move on to other sources for precise info. IMO this is completely fine, and will not "slowly introduce things into [his] thinking which sound right but are slightly wrong". He's not relying on the LLMs for the deep dive, and once he moves on to more trustworthy sources, any false information should be easy to identify.
On the other hand, I 100% agree with you that relying exclusively on LLMs for learning a subject is dangerous.
@@carl-henrikkristoffersen2313 It is much, much more likely to cause subtly wrong ideas to pile up than simply reading the Wikipedia article on the topic. LLMs are trained on that, but have only statistical weights for different probabilities of next tokens. It's a compression of the web's info, and that compression inherently loses fidelity.
"LLM take" I'm going to use that when people saying things to me that make no sense.
Some people genuinely lead with "according to chatgpt", trying to act helpful. Life's short, so thanks for the heads-up to ignore your wall of text.
could we get the recording date in the description? It would be nice for context, especially in fast moving topics (like Crowdstrike) or when talking about the state of Prime's projects
ruclips.net/video/U_cSLPv34xk/видео.html&lc=UgwNcCKYI6bp1e_PBnZ4AaABAg
W suggestion.
Was this recorded a while ago or something?
W suggestion
👍
Jesus, who calls themselves a technical founder?
people that have time to pad themself on the shoulder and have a huge ego
Someone who is not a technical founder
@@SXsoft99pat*
You are either a Jobs or a Woz
It is very rare to find someone that is both
Non technical founder
"Agent Orange Level Snake Oil", need to remember that one!!
lmao noted
I will say, I was able to quickly use Claude to iterate a React front end for a search API backend; it was really good and brought days down to hours (maybe because I suck at front end). This was NOT production but a DEMO UI to illustrate what the API can do.
The problem lies in having to translate a very grey, open to interpretation natural language (like English) into a black and white, logically sound formal language (programming language). It'll never work. If you have to describe a mild problem with 10 paragraphs of english in order to ensure all the nitty gritty details are accurate, you're better off writing the code yourself.
In my opinion that's just not a good use case, not sure why people are focusing so heavily on it. Use it for what it's good at: boilerplate generation, documentation summarization, explanation, and syntactical translation.
That reminds me of a programming anecdote: a guy comes up to a programmer and says “in the future, we will have a way to describe an app to a machine to make it”, then the programmer replies “yes, we have that already - it's called code”
This reminds me of a friend who thinks we won't be using keyboards in the future, because we can just talk or think.
Like wth? Would you rather explain your code verbally? Like that's going to be easier lmao
It's okay to give it what can be described in one or two sentences, since that results in code snippets - it's feasible to examine snippets for validity, etc. AI makes mistakes, or IOW, 80% correctness doesn't fly.
If I used it to generate large amounts of code with lots of details going on, then I would have to spend considerable time vetting the solution. Plus the code snippets I've seen generated are not production-ready in their quality, so even the snippets I have to significantly re-write to get them to where they could be acceptable to put into a real production program. The value is in saving the time of not having to do Internet searches just to come up with an illustration for doing some rather specific thing. But I have cases where the AI strikes out and I still have to do the searching/research.
My hypothesis is this will lead to a stock market crash once everyone realizes the promises made will never (at least regarding LLMs) come to fruition. I've been saying this since late last year. I've seen so many people fooled by the smoke and mirrors (including an employee at Google working on this tech who called it sentient! Remember that?!).
At the end of the day, it tells me who's really good and who's not. Those who claim it speeds up their productivity by a decent amount are likely not that great programmers to begin with.
Wall Street fell for it, and we'll all pay for their stupidity again. I might be overly pessimistic, but I've been predicting this for a while, and I hope I'm wrong.
I consider myself an intermediate programmer and I don't even use AI for the majority of code I write. Sometimes I just let the AI suggest a possible solution when I'm really stuck but I just use that idea to write the code myself. It's a good practice to try to tackle the problem yourself first.
I have almost never used AI in a professional setting tbh. And if I do, it's usually for troubleshooting an issue with a package or some obscure error. LLMs are honestly a pretty good alternative to search engines/documentation in some cases, but are useless for solving real problems imo
"Why would you give a speed increase to someone unable to do it in the first place" is my new favorite phrase.
You know what this point in time reminds me of? When suddenly companies decided to offshore and contract out dev to obscure body shops, only to realise that the cost of the work redo, quality issues and customer blowback far exceeded the expected $$ savings. LLMs are fantastic but just not super ready for the big time where the results matter.
Absolutely. Look at the top companies in the S&P 500 - Google, Amazon, and the like. They all hire SW devs whom they pay $400k. There is a reason they don't instead hire Indians in Bombay at $20k. The SW companies that will top the stock market in the coming years are unlikely to be those companies relying on ChatGPT to do the "innovation" for them. It will be those companies who hire good engineers and pay them well, just like Google and Meta have been doing for the past decade. These good engineers are engineers who don't need to constantly ask ChatGPT to give them the answer to their problems.
I have been using Claude to code and it is great when you know exactly what to do. But that is exactly the hard part of programming.
Basically I know what to code and I just guide the LLM to write it for me. Stuff like when I have to take a list of data points and plot their histograms in a matplotlib grid. The LLM is way faster than me.
But if you're solving problems you don't know exactly how to solve, which is mostly what we do, you cannot possibly give the LLM the correct instructions. Yes you can go back and forth and shape the code it generates giving it more context, but you need to know how to code for that, then why not just do it, you're gonna do it faster as well.
I've got the same experience. For my day job, 95% of the code I need to write I have a pretty good idea of what I need, so Claude does a great job with my instructions. E.g. I need to write an iOS form-filling app which will upload JSON to an AWS endpoint. Stuff like that it's great at.
IMO the HOT take is just a shit one. It's the "I'm a bit better dev than I think" take. It's the "I envy AI, because that's edgy and gets me views" take.
I simply can't find the power in me to write 300 lines of code when I can just prompt it in a sentence or two. I'd rather fix the few mistakes and go back and forth. Shit, sometimes I just can't even bother to think how to begin writing some out-of-the-box UI/UX functionality, but I can easily explain it in plain English. That's when it's at least 5X. I have no issues admitting that my brain freezes sometimes and I'm lazy.
I don't intend to be a youtuber or a Netflix nerd. I like money, drugs and women.
that's why if I would use some LLM, I would use it for code generation of boilerplate code
and if I notice that I need that boilerplate multiple times (I guess you could notice via your prompt history), I would ask it to turn one version into a generic version and then use that one
You know what's even better when you know exactly what you want to do? The actual programming language.
@@Huey-ec1 I mentioned an example where that is not the case. Instead of going and checking the API for matplotlib subplots, I can just ask the LLM to do it. The code is easy to verify but cumbersome to write. Perfect use case for LLMs.
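For anyone curious, the kind of thing being described is roughly this (a minimal sketch with made-up data, not the commenter's actual code):

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical data: a few series to plot as histograms in one grid.
datasets = {f"series {i}": np.random.normal(loc=i, scale=1.0, size=500) for i in range(6)}

fig, axes = plt.subplots(2, 3, figsize=(12, 6))
for ax, (name, values) in zip(axes.flat, datasets.items()):
    ax.hist(values, bins=30)
    ax.set_title(name)
fig.tight_layout()
plt.show()
```

Easy to eyeball for correctness, tedious to type - which is exactly the niche being described.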
I work as a backend dev with Node and Nest.js. The last couple of weeks all I do is fucking CRUD, so I just put the fields into Copilot chat and say "generate me new CRUD endpoints based on my other endpoints with these fields" and it generates all the functionality, starting from the ORM schema, DTOs, controllers, services, logging, Swagger docs🙃.
Funny when people want AI to be a Swiss Army knife. Just like before, it's better at some things than others. Like video recommendations... Or music playlists... And code that it saw a million examples of.
Most of the reason why that works is because of the huge adoption of those platforms and tons of code availability. As soon as it’s in a realm that doesn’t have huge data sources available, it’s all hallucinations.
3 years ago (no ChatGPT back then), when I did my databases course in college, the professor didn't want all the mock data we used to be gibberish created by Python scripts. I literally had to sit down for 3 days and write realistic mock data BY HAND to test the database. With an LLM that would take 1 minute.
Nothing AI about that. There were tools for that before AI.
@@awmy3109 Not when the data gets complicated. Let's say you want to write a realistic-looking description for some made-up research paper or something like that.
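For what it's worth, the pre-AI tools being referred to are things like the Faker library for Python - a rough sketch (the customer schema here is made up):

```python
from faker import Faker

fake = Faker()

# Hypothetical customer rows for seeding a test database.
rows = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address().replace("\n", ", "),
        "signed_up": fake.date_between(start_date="-2y").isoformat(),
        "bio": fake.paragraph(nb_sentences=3),
    }
    for _ in range(5)
]

for row in rows:
    print(row)
```

That covers names, addresses, and filler text fine; the "realistic description of a made-up research paper" case is where the template approach runs out.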
I used chat gippity often, until I spent over an hour and a half trying to get it to give me code that wouldn't fail the tests. Out of anger, I deleted all of its code and started from scratch. I finished in 20ish min. Turns out I was lazy and didn't want to think it through. NOW I'll use it mostly to explain why or what something is doing - basically documentation. It's just faster than having to track something down, usually.
Thanks for this. I went and watched his video because I think that's the right thing to do. I couldn't agree more. I'm not a UI programmer, but even with the simple stuff you can see you need to understand what goes into making a component behave how you want, and the AI has no idea about what it does wrong until you tell it.
On the documentation stuff, I feel like it's still bad when it hallucinates, which makes you waste so much time when it turns out the thing just doesn't exist.
I thought I was the only one who spent a lot of time trying to debug AI code just to find out it doesn't exist
LLMs can't figure out if something exists or not. They *will use* some similar looking content from another part of the training data if what you ask doesn't exist in the relevant part of the training set.
Constantly hallucinating, then fixing its hallucinations with new hallucinations
Yeah some kicks in the chins would be super cool, some people are so overhyped by AI, being "10x" devs, but I don't see them developing new Javascript frameworks at 10x speed...
"AI is a better search engine"... Then why is Google so much worse?
@@KevinJDildonik that's a very good point lmao
The most effective hint on LLM usage I can think of is:
LLMs suck at details (confabulations aka hallucinations) but are good at high-level structure, e.g. "draft me a TOC of a specific type of document" or "what components should such and such system have" or "enumerate various types and subtypes of X in the context of Y"
23:26 AI inbreeding? Finally, Habsburg Code 😂
Ha! Nice.
I can’t help but feel like this is yet another case of engineers thinking “we’ll release this to the world and they’ll use it in exactly the limited way we intend it to be used” only for the marketing department to say “this is the most important invention in the world and everyone must use it for everything.”
Quite a few of those engineers at Anthropic and OpenAI buy into the hype (though not fully), from what I've seen. I think it's a cultural ingroup/outgroup thing at that level.
@@Happyduderawr Now that you mention it, my own company is building an internal AI product based on Claude that they will not stop gushing about. I always thought they were just trying to justify all the money being spent on training the customized bits of the model, but maybe they really think they’re changing the company.
when you make a space-ship out of lego set and think now you can go build a rocket to colonise mars
I really like using Claude to write serialization code. I work with a bunch of large APIs in my work where a single response could be 100 lines of JSON, and all of it is necessary business data, but translating the resulting snippet into a Rust struct that's correctly annotated with Serde attributes is the WORST, most manual task I've ever had to do. Now I can put the blob into Claude, and it'll spit back out a Rust struct that's 99% ready to compile.
For short-form refactoring it's incredibly useful. Had to create a bunch of objects with slightly different attributes. Gave Claude the data file and format for objects and had it finished in 25 seconds when it would've taken me 15-20 minutes to type out by hand. People are just focusing on use cases that will never be optimal.
EDIT: Obviously you should ALWAYS check the output it gives you. Just want to make sure that's clear.
Yeah, I used ChatGPT to auto implement enums and structs from the table-structured documentation of an API
@@jamess.2491 this is a great place for unit tests. AI is great for reducing manual labor, but got dam it's gonna have hand-written tests around it cuz I don't trust it further than I could throw the datacenter that trained it
That's a really good use case for using AI.
That's a really good use of LLMs, but it could have been (at least mostly) automated even without machine learning.
The OP kind of gives away that he's a bad coder by isolating "UI components" as something Claude made him a lot faster at. UI work outs a lot of bad devs because making a component that *looks right* isn't hard, but getting something that *feels right* when you're interacting with it or changing state is where all the work goes and LLMs are terrible at that part.
Prompt: I have a tower defense game using a list of bools for pathing. I want to add an ice tower that slows creeps. What's the simplest way to modify my existing system to accommodate this? Please provide the most straightforward solution first, then mention any alternatives if relevant.
Sonnet 3.5:
The simplest way to modify your existing system to accommodate an ice tower that slows creeps is to introduce a "slow factor" for each tile on the path. Here's a straightforward approach:
1. Replace your list of bools with a list of floats:
- 1.0 represents normal speed
- Values between 0 and 1 represent slowed tiles
- 0 (or negative values) can still represent impassable tiles if needed
2. When a creep moves, multiply its speed by the slow factor of the tile it's on.
This solution is simple and requires minimal changes to your existing system. It allows for easy stacking of slow effects (just use the minimum slow factor) and is computationally efficient.
Alternatives:
1. Creep-based slowing: Instead of modifying the path, apply slow effects directly to creeps. This is more flexible but might be more complex to implement and manage.
2. Separate slow layer: Keep your boolean path and add a separate list for slow factors. This keeps the original pathing intact but requires checking two lists.
3. Tile objects: Replace the list with tile objects that have properties like "passable" and "slow_factor". This is more extensible for future features but requires more significant changes to your current system.
4. Event-based system: Use an event system where towers emit "slow" events and creeps listen and respond to these events. This is very flexible but more complex to set up.
The simple approach provided first should work well for most tower defense games. If you need more complex interactions or are planning to add many different types of effects, you might want to consider one of the alternative approaches.
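Sketching out option 1 from the response above, just to make the idea concrete (Python here rather than whatever the game actually uses; the grid size, radius, and factor values are made up):

```python
# The path grid stores floats instead of bools:
# 1.0 = normal speed, values in (0, 1) = slowed, 0.0 = impassable.
GRID_W, GRID_H = 10, 8
speed_factor = [[1.0] * GRID_W for _ in range(GRID_H)]

def apply_ice_tower(grid, cx, cy, radius, factor):
    """Stack slow effects by keeping the minimum factor per tile in range."""
    for y in range(max(0, cy - radius), min(GRID_H, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(GRID_W, cx + radius + 1)):
            grid[y][x] = min(grid[y][x], factor)

def creep_speed(base_speed, grid, x, y):
    """A creep's effective speed is its base speed times its tile's factor."""
    return base_speed * grid[y][x]

apply_ice_tower(speed_factor, 4, 3, radius=1, factor=0.5)
print(creep_speed(2.0, speed_factor, 4, 3))  # 1.0 - slowed to half speed
```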
LLMs are good when:
1) You're learning (You can't fully trust it and it forces you to question everything)
2) You actually know what you're doing, but you're trying a new language or something
3) You just want someone friendly to talk to.
Heavily agree, I’m pretty much a beginner to everything programming but I’ve been using ChatGPT to give me practice exercises for C++ and so far it’s been pretty helpful and forces me to learn
I've been using GitHub Copilot as an open source maintainer since it became available and I genuinely do not understand how people are misusing it. I already work with code every day so I don't need to look up *how* to do things; I can just tell it what I want it to do and it'll spit out the code that does what I tell it to, and I can quickly eyeball it to see if it's right and make adjustments if I know it could be done better. I feel like all these people using it and getting bad results are because they aren't telling it what they want it to produce; they're being vague or asking open-ended questions that it doesn't have the necessary context to answer.
Which is why it's amazing for university students! You're always learning, and you always sort of know what you want but not exactly how. LLMs bridge the gap really well.
Ehhhh, I'd have to disagree for when you're first learning, such as a uni student. Part of the learning period is failing on your own REALLY BAD. Those make the best lessons.
Spending hours trying to do something yourself, only to find out the answer was trivial. Or just grinding something that simply isn't easily solved without years of experience. Asking an AI will just hold your hand through it. The difference between a professor teaching a student and an AI is massive. First off, a professor with real-world experience is invaluable. They know when to hold someone's hand and when to let go. When to give a tidbit and when not to. AI leaves those decisions in the hands of the learner. Most people will choose the easy way out. The ones who don't will begin to stand out in coming years.
I'm mainly using LLMs to get information / summaries from written texts. The summaries are not that great, but they do provide one point of view on the original texts. For example, I prepare a lot when studying the Bible, working through much of the exegetical material, so LLMs help me get a view of the historical and/or cultural context for the text I'm looking into. They are pretty limited in giving concrete new opinions (novelty), but they are pretty good at munching away at large documents with historical / cultural / geo-political data and formulating valid summaries related to the thing I'm interested in.
But for programming, most of the job Copilot is doing for me is smart auto-completes and refactorings + name suggestions based on context (which, again, is based on the surrounding code, and helps me with naming more so than anything else).
This is the top comment. No idea why.
No it isnt
we will never know
what can be, unburdened by what has been.
Claude (like the other LLMs) starts off looking really smart, and then slowly evolves into a total crash dummy as you progress. It starts butchering its own code and forgetting what it wrote until it completely unwinds. Then it starts apologizing for its noobness. With that said -IF- you implement the design patterns yourself, and use it to handle the heavy lifting on redundant tasks, it's a great tool to help a real developer become more productive.
Neet gained himself a subscriber
I like your idea for having a movement speed modifier float for each field on the map. I'm borrowing that idea for sure.
The only use I have left for LLMs is writing shell and bash magic
5:25 Literally 10x faster with LLM is obviously easily obtainable if you have bad enough skill issues. I think even 100x might be possible if your own skills are bad enough.
You have to use a tool for its intended purpose, just because the word 'programming' is in the subject, doesn't mean you are doing the same thing.
The whole thing about developing is, by the way, reducing complex problems into isolated parts that are individually simple. At least, that's my view on programming, separate concerns and isolate dependencies.
Also, I'm good at solving problems, I'm bad at remembering all the minor details of each language, especially as developing is only 10% of my work. Who cares if I can concatenate strings with &, +, ., && or whatever.
Finally I got it.
It's not Y Combinator, it's HYPE Combinator, which means it combines hype with tech, which might lead to some good outcomes because there is no better ingredient than hype to get people to use your product.
11:25 You could print the ASCII table one time and look it up there.
Or even just do a one line snippet of Python to print the ASCII value of a character. That has to be one of the least useful examples of proompting ever.
he meant not default ASCII but the colored symbols thing
@@rostcraft Same kind of comment made for that.
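Both versions of that lookup are pretty much one-liners anyway - a quick sketch (the red/reset codes below are standard ECMA-48/ANSI sequences; the rest is just Python builtins):

```python
# Plain ASCII lookup: no prompt needed.
print(ord("A"))     # 65
print(chr(0x2588))  # '█'

# The "colored symbols" case is just ANSI (ECMA-48) escape sequences.
RED, RESET = "\033[31m", "\033[0m"
print(f"{RED}error:{RESET} something went wrong")
```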
A more useful case would be socratic debates over why one kind of business decision makes sense over another according to different personas only for the human to decide for themselves what to bring to the table in a real conversation about high-level decisions with co-workers.
Wait so your response to needing the answer to a random question is to have already answered the question in the past and just look back at that answer....
You do know why this isn't smart?
@@TheKastellan You do know that these terminal codes are such a common question that they're part of something called a STANDARD, right?
You know, the ISO 6429 and ECMA 48 standards. You can even find other people who have dealt with the same thing before in a simpler way and in actual code such as RosettaCode, RIGHT?
That's such a silly strawman fail you just committed. 🤦♀️
I mean that's kind of his point: it wasn't that useful.
omg ThePrimeTime reacted to me on live television, appreciate it boss 🫡
You are the other Boss 😊🎉
😂 😂 😂
Yeah, for example something like interview questions can often be answered with LLMs - anything you're kinda supposed to memorize, it's pretty good at. But if you try to feed it something else, it often even adds mistakes back after you tell it to remove them (or just responds with the same code).
Same. Been trying out Claude and Chatgpt more recently just to search up simple definitions of terms I haven't come across before, and code snippets for simple functions. That's about all the use I could get from LLMs so far.
34:11 "wdym, i'm a plumber" lol
For non-expert programmers, I think it's common to start with pseudocode and then temporary code before finalizing code.
This way of coding in 'drafts' seems like it would benefit from AI. It does the work of creating the first draft for you.
Better programmers just start coding; they don't need the pseudocode, so it wastes time.
This is putting aside the issue of knowing the code. It's hard for me to jump in on someone else's project because I have to learn about it first.
However, for these folks, the pseudocode is the tedious part and so it feels like they get to the better part faster.
Y combinator is not to be trusted
I've used Sonnet (and Haiku) at work and it would constantly get the color of what we were analyzing wrong. Probably 40% of the time.
I asked Claude to make me an SVG image for a sprite, and then I learned how to edit SVGs for the first time to fix it.
I do think Claude is a game changer, because I went from not doing something in a neat way to being able to technically and creatively fix it.
I use it mostly to teach me things I'm borderline on. E.g. I never learned JS but can read it well. I have written stuff in Python and had it convert it so I can see how it does it in JS. Also it's like a little dude that RTFM for everything, so it's a one-stop shop for how to use most libs.
the neetagen
The NeetCode website uses the X6 graph from the AntV library. I have some experience with it, as I had to use it at work and also when contributing to the open source workflow library Elsa, which also uses it, and the only thing you need is the configuration for it to look like that. But I bet the AI didn't even use the same library; it just found a graph lib and used it.
You are looking at this completely backwards.
This is not meant for someone with 15 years of experience at Netflix.
It's meant for low skilled Indian workers and then eventually to work on its own.
The goal is not to have the best code ever written in the history of man,
or the best version of any product a robot or AI is going to make.
The goal is to get workers for $0.10 an hour.
If you think companies are going to stop with that goal now that they've had a taste...
We must be living in a different world.
I think the truth is experienced programmers don't need AI, as it's basically a document lookup tool.
But going from never writing a line of code to making an app and publishing it on the Google Play store in less than a week...
It's a powerful zero-to-minimum-viable-product tool.
I believe you are correct. This is the 30,000-foot view I'm seeing as well.
Sounds like the next step of enshittification to me. Is "coded in India" going to be the new "made in China" ?
@@5h4ndtif you've ever had to work in a codebase developed by Indians, it already was the "Made in China" of the software world
The point NeetCode makes contradicts this. The product that the AI comes up with is not only ugly, but unmaintainable. If there were additional things to be added, you’d either have to untangle the entire mess the AI gives to you, patiently guide the AI by the hand, or rewrite it from scratch. All of those options are huge time wasters, and removes the possible gains.
@@MrDgf97 I think the answer is they're just not going to maintain it, just shit it out into the market and not support it, like the code equivalent of ewaste.
The only reason I use GPT is to tell me the syntax of something. I am new to programming as a whole, since I started seriously just a year ago. AI helps me write my ideas in the language I am using, and it's honestly bad at it lol; it makes things much more complicated, and it couldn't solve some small issues that, when I took a closer look at my code, I was able to fix immediately. Now I just use it for syntax and to brainstorm through a problem, because writing down the problem helps me understand it.
You realize Sam Altman was the previous President (I said CEO first, but he was President) of Y-Combinator, right? lol
He was president, not CEO.
The literal unfixable problem with LLMs is that when it comes to the subjective, they basically output the bell curve distribution mean to determine the response. You're getting the average quality of content from the web, not the exceptional code. If you assume the responses are great responses, then you either have average code or below average in general. If you want to use them for the subjective, always assume there is a much better solution in most instances.
This is simply a matter of having enough parameters. If you have trillions of parameters, then the LLM will be able to change its answer based off particular 4-grams or 5-grams that occur sub 0.0000001% of the time in the dataset. Which means rare n-grams certainly have an effect. This has never been an issue for LLMs, and simple TF-IDF scoring solved that problem back in the early 2010s IIRC.
I just use the models to clear my doubts. For example, if I don't understand what recursion is, I ask the models until I do. And at one point, I did that and it worked. I use it to help myself, not to work instead of myself.
Exactly. GPT is a decent improvement on search engines; it does a better job at collecting the specific info you want and phrasing it in different ways. That's what it's good at.
The problem is the people who believe it will spawn general AI in 6 months 😂 it is not conscious and doesn't think. Just a better search engine.
Why is search so much worse now, if AI is so good at search?
Good, it's great at presenting the aggregation of what it has been trained on. It is like getting multiple results at once, tailored to how you asked for it to be formatted.
@@KevinJDildonik LLMs are not exactly ai since intelligence isn't there. That aside, they're good at generating text, not search.
To me there is definitely some level of prompting skill, but there is also a good dose of empty hype.
The answer IMO is that a lot of people are way better at prompting LLM than we are, but also that not all projects are the same for an LLM. More than difficulty, it's how "typical" the problem is that matters. If the problem is within the latent space of the training data (the problem isn't in the training data, but its nature is somewhere between problems that are), prompting can get you the solution. You could have a LLM solve a problem 3 times more difficult than another it failed to simply by a difference in prompting skill. But I doubt you could say the same if the problem is truly unique. It would be totally outside the latent space, unpredictable. If a model knows everything about software programming and you give it a hardware programming problem, it can't solve it no matter how well you prompt or how advanced the model is.
I recently got into image generation, and that's the best comparison to me.
There are people out there that can get a great result in maybe 5-6 hours of prompting, which is way less time than producing the same result by hand. But at the same time, the greatest results you see are the results of multiple days of refining.
I have never felt the gap as much as when I installed comfyui (one of the best tools to run image generation locally with a node "programming" kind of system), spent maybe 50 hours in total, and had yet to have a single flawless image. I used the tricks I could find, all knowledge I could google, but there was a huge gap between reproducing someone's result and producing my own stuff and it still being flawless.
I'm curious. Why did you say "Kamala Harris of UI's" rather than Donald Trump there? Trump has been an uncanny facsimile of a human for decades. Everything I have heard from Harris recently has sounded very reasonable.
Thanks for covering the hype (now we know from a professional, so it's 100%). But... I use it for coding some simple stuff; of course it needs some code review and fixing, dodging hallucinations, etc... yet it gets me going - I use it in a "code NOW, DO it!" way to get out of my comfort zone.
This misses the forest for the trees. No the current LLMs are not going to replace you or even make you 10x more productive. That said, it's still a major breakthrough and we could only be a few algorithmic breakthroughs from that being a reality. It's better to prepare for a world in which that is the case instead of getting so defensive. It might be 10 years, or it might be one year but I do strongly believe it is coming.
LLMs help me with syntax. At most I ask it to provide alternatives. I'm not a seasoned programmer in lang X or Y, but I'm well versed in computational thinking. So, I agree, LLMs perform pretty well when dealing with very scoped and isolated coding tasks, aka snippets.
2:57 wouldn't an unsigned int array make the maths easier and still give plenty of fidelity for varying the speed?
Personally I think floats are more flexible, since you could easily use the array as a multiplier for the speed of the entity by simply multiplying the array element by the base speed. Additionally, you go from 0 to 1, and 1 is much easier to type than the max unsigned int. (The following is a wild guess.) Besides, the entity speed is probably of type float anyway, and (afaik) multiplying a float by another float is generally faster than multiplying it by an int.
I use ChatGPT or Gemini. I come from a mechanical and aerospace background. I got into IoT and embedded programming about 2 years ago. LLMs really helped me with snippets of code. I was able to ask it to write better functions faster than I could, obviously because it's a skill issue, but I am sure I am already getting to a point where it's wasting my time more than helping me. So I am back to reading documentation. But it helped me through my learning phase for sure.
You are still solving the actual problem by yourself though.
They just aren't using it right. GPT is hands down the best tool ever. I don't use it strictly for "writing my code" - I use it for getting answers, like comparing approaches, understanding framework internals, finding the causes of bugs, or why something isn't working the way I expect it to work. It's way faster than searching SO, reading GitHub issues, etc.
Are you an AI?
Usually the prompting divides into two outcomes: either you write a giant paragraph that basically contains the answer in between the words and then get an error because your session timed out, so now you have to rewrite the whole paragraph (usually by this time you have figured out the answer yourself), or you simply see the answer yourself while writing it. In both cases the LLM spits out dumb shit that looks legit (it learnt from the best: keyboard/internet warriors from the land of 4chan). There are exceptions to this rule when you ask the LLM something general/basic.
(This is why, in general, giant LLMs are a waste of time and resources: they are so general they struggle to be specific and concise. What is more intuitive, resource-effective, and energy-efficient is splitting them into multiple experts who have the right assumptions and the right biases familiar to experts in the field.)
The Sonnetagen
I don't use the copilot. Anyway, I asked LLMs different questions in my area of expertise, and while the answers seem to be ok on the surface, they are always shallow and wrong in details.
I don't think you really understand the essence of why some people like AI for development. Using AI for development is like having a team of knowledgeable and enthusiastic interns who come back within 1 minute with every assignment you give them. But they never check or test their code. This is why people who are used to giving other people coding assignments tend to be enthusiastic about AI: they get the same as before, but they can iterate much faster and don't have to explain everything up front, with the team forgetting half of it anyway. And you have to check the code anyway. For engineers, it can be bad, because it's like some of your code goes back to the intern level if you treat it as a code-writing assistant. As an engineer, you should only give coding jobs to AI that you would give to a very knowledgeable intern, and give enough context.
Claude is not actually that good; what Claude.ai does is use a metacognitive pre-prompt, which results in fewer mistakes when you don't give it enough context.
I'm 10x myself for some projects, but for others (where it's more engineering than architecting), it might actually be slower, as some rookie mistakes get introduced, and as an engineer it is very difficult for me to keep monitoring and evaluating all the code changes the AI generates in mere minutes. It would reduce my job to some kind of QA/tester, and that's not what I want to do if I want to code.
So it's mostly for people who are not good at working in a team? Creating basically a fake team of sorts? Maybe that explains why I feel no need to get AI help, I work solo for 99% of the time.
Best vid on LLMs. I think this one summarized the points from many of your previous snippets/vids on the topic better.
The tech influencers on RUclips have become so desperate that they are now reacting to reaction videos. What a downfall. Sigh!
8:00 It is not bad at prompting, it is that by the time I figure out the magic words to tell LLM to write correct code, I can write correct code. So, if it is 10x for someone then they just can't write code fast enough :/
"Kamala Harris of UIs" 😂
This is such an evocative expression!
UI salad.
Actually AI did a lot in the early 2010s, Image recognition exploded starting from 2012 when they started using neural nets
AI has been a part of people's everyday lives for a lot longer than they think.
The unexpected Kamala Harris burns 😂😂😂
I have argued with people who adamantly proclaim that LLMs significantly improve their coding job, and that LLMs can write all the things they need. Meanwhile, I can't get them to update an HTML table properly. The people who claim this are the people who are going to be replaced. If the LLM already knows everything about your job, legitimately, be scared.
I've been building RoR applications for 3 years. When I switched jobs, I was tasked with building a small WPF application in C#. I decided to make an experiment and use ChatGPT for the whole development process. I got so frustrated that I decided to take an 8h crash course on WPF and C# to do it myself. As Prime says, LLMs are nice for building documentation and snippets; I don't think they are close to replacing a good software engineer. At least not this decade.
LLM's have been helpful to me in terms of research and finding documentation. Sometimes, if I have a big idea and don't know where to start, I'll use one to spit out a block of code for the base idea to get the creative juices flowing, but so far that's been its main purpose in my workflow. Does anyone have any thoughts on these use cases?
So, when you're good enough, all problems that make it to you are hard. Therefore sure, many people can claim that LLMs are "solving their issues", but my issues aren't even easy to form into written form, because there's just so much to think through. By the time I had typed it clearly enough for LLM to understand, I'd have already figured it out myself.
Remember people, LLMs are just a product of statistics; you basically "try" some statistical stuff and expect good code to come out. Think, people, you HAVE a brain - learn, become better.
I used to wish to go back to the Renaissance or some other medieval age.
Now I wish to go back to the internet dark ages
The golden line in this has to be “people underestimate how good they are at solving problems”.
This conversation becomes much shorter when we remind ourselves of this❤
GPT can't even help me with excel issues lol. I was playing around with it, tried to give it the most simple example problem i could think of, and it was nonstop circular references.
I like copilot for some generated code but mostly for JSDoc completion. It sucks at generating tests which I wish it was better at because I don’t like writing tests.
Regarding Copilot, I find it super useful for writing code against the AWS API, because that for sure is faster than checking the docs, as the AWS API is not very consistent between services. But of course you still need to know what you want to do.
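As a concrete example of what "code against the AWS API" tends to look like (a hedged sketch using boto3; the bucket name and prefix are made up, and it assumes credentials are already configured):

```python
import boto3

# List objects under a prefix in an S3 bucket and print their keys and sizes.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="logs/2024/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Copilot is handy for recalling the exact parameter names, but you still need to know that a listing was what you wanted in the first place.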
I've been using Gemini recently. It doesn't speed up my work but it makes it more comfortable. I still need to do a thorough code review of everything it spits out.
I started programming about 18 months ago. I have a different specialism, and I’m building a tool to help enable best practice for that specialism - and it is too niche/too time intensive/too expensive for an engineer to learn the field in order to build the tool.
Since I started programming in the era of AI, I can say with absolute confidence that it has not improved my ability to code, or write the stuff I need, or even pointed me in the right direction - and it has absolutely slowed things down. It’s a shame, because the real achievement within this space for “no coding experience founders” is just how quickly you can actually learn - the barrier for entry is lower than it has ever been.
I was going to say the same thing about having to refactor the whole thing from scratch. It's important to get outraged at ridiculous stuff.
Honestly, the process of AI "getting better" with an industry-wide awareness is about collectively merging the concepts of what it does well as it gets better, with the concepts of when, where, and how to outsource it to perform other tasks. It's similar to learning how to leverage skilled and unskilled people. It's also just what happens with most technologies as they mature.
I wore the AKG712K for years and the Q701 before that... You seem to really like them. I suggest you try "The Composer" by Austrian Audio. All the AKG engineers are gone and went to Austrian Audio. I listened to a good number of great headphones at CanJam NYC and I walked out buying their Composer headphone. My ears were used to the 712K and it was just that, but improved in all the ways it needed to be. I strongly recommend you give it a listen. It's a better version of the AKG712K, but with actual bass. They can run without an amp, but the amp really gives it good bass. Either way, good videos.
Your take seems pretty accurate - ChatCCP really would make me 10x more productive because I have no clue wtf I'm doing. If you're already familiar with programming, the language, the APIs, then just sitting down and writing yourself would be way faster.
Luke was spitting fire, UI for many apps are 0.1x problems but somehow we've made it really complicated. People working on game engines are doing some real hardcore programming in which ai can't do shit. UI on the other hand, the fact we are still hand cranking them out in many instances is so backwards, such a 0.1x problem.
Junior dev here: But you said it yourself, it's really good at answering documentation questions; that makes me work 10x faster because I get my information faster. It doesn't solve real problems, but it can research really fast for me. It is also pretty decent at explaining legacy code line by line, which can be really useful if you are getting into a new language. Sometimes I was able to surprise senior devs simply by being able to write SQL that was way out of my league. For example, I reduced 800 lines of legacy C++ to about 78 lines simply by using a query. The thing is, without GPT I would have never been able to form that query, because I am kinda new to SQL. But I knew what I wanted to be able to do.
With the "10X from AI" thing, I think that there's some really niche cases where I've gotten massive benefits, but mostly I agree that if you're 10X overall, you were shitty before.
The last time I was like "That was awesome" with AI helping me code, I asked chatGPT to write me a personal profile that was styled like a Terminal, and it did a pretty good job in like 20 seconds, which made a massive difference for that particular thing.
My first conversation impressed me. I'm working on something to parse a complex binary file with structures and modules where fields are defined by tables which have multiple fields themselves. I had already done work using Python to parse the complex 2100-page PDF which defines the file structure and produce markdown files that are both human-readable and contain values and markers I could later parse and use to generate code. Each triple heading ### had the structure code and name like "### Structure Code 1234 - Customer", I had a text line with `Total Bytes: 156`, and a bulleted list of tables with the table identifier (i.e. 18 is the table id that defines a name with first and last properties), name, and byte offset and size in brackets like `* 18 - Customer Name [12:40]`. My conversation went like this: "Create code to parse markdown and create json" - generated Python code to parse generic markdown. "Change that to javascript" - did that. "Use this as the sample markdown" (providing the first two structures in my file) - this is where it was like magic. I ran the code it provided and it generated great JSON. Each triple heading became an object like `{ type: "structure", code: "1234", name: "Customer", totalBytes: 156, fields: [..., { id: 18, name: "Customer Name", start: 12, length: 40 }, ...] }`. Probably would have taken me at least 20 minutes to do that myself. 10x can be real for the parts it can handle.
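For anyone wanting to picture the task, here is a rough reconstruction of that parser in Python (not the commenter's actual code - the heading, byte-count, and bullet formats are taken from the description above):

```python
import json
import re

# Patterns matching the markdown conventions described above.
HEADING = re.compile(r"^### Structure Code (\d+) - (.+)$")
TOTAL = re.compile(r"^Total Bytes:\s*(\d+)$")
FIELD = re.compile(r"^\* (\d+) - (.+) \[(\d+):(\d+)\]$")

def parse(markdown: str) -> list[dict]:
    structures, current = [], None
    for line in markdown.splitlines():
        line = line.strip()
        if m := HEADING.match(line):
            current = {"type": "structure", "code": m.group(1), "name": m.group(2),
                       "totalBytes": None, "fields": []}
            structures.append(current)
        elif current and (m := TOTAL.match(line)):
            current["totalBytes"] = int(m.group(1))
        elif current and (m := FIELD.match(line)):
            current["fields"].append({"id": int(m.group(1)), "name": m.group(2),
                                      "start": int(m.group(3)), "length": int(m.group(4))})
    return structures

sample = """### Structure Code 1234 - Customer
Total Bytes: 156
* 18 - Customer Name [12:40]"""
print(json.dumps(parse(sample), indent=2))
```

Straightforward once the format is pinned down, but typing it out by hand is exactly the kind of grind being described.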
So if I understood it correctly, the enemies in your TD move on a 2D bool/float array. This seems really wasteful. Couldn't you just make a Vec of waypoints and have the enemies move from point to point? Then the ice tower could just check if any enemies are within a certain distance and slow them down.
EDIT:
This could also make for a fun optimization challenge.
AI is crazy for those who don't know how to code, but not for writing code. It helps you quickly figure out what you need for things. I could spend 20 minutes googling around for whatever tool or method that I may not even know the name of, or I could prompt some AI about the use case or what I'm trying to do and normally get a few options, with upsides and downsides, in a short digestible manner.
I think the usefulness of AI tools for programming depends on the level of the user and their experience with a particular language. If you have been using a language for 10 years on the kind of problem you've been solving for 10 years, then chances are AI won't help. But when I say I use it constantly, it's more so for learning than actually coding for me, if that makes sense.
Do you know the industry term for a prompt that is comprehensive and precise enough to generate a program?
Code. It's called code.
I figured out what LLMs are best at and I can never go back. More times than I can count, I have some data in some format and need a way to get it into another format. ChatGPT saves me so much time doing this. Everything else it does for me has been mostly frustrating.
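The kind of conversion being described is often just a few lines once you know the target shape (a trivial sketch; the file names and CSV layout are hypothetical):

```python
import csv
import json

# Hypothetical chore: turn a CSV of users into a JSON array.
with open("users.csv", newline="") as f:
    rows = list(csv.DictReader(f))

with open("users.json", "w") as f:
    json.dump(rows, f, indent=2)
```

The LLM's value is mostly in figuring out the target shape when the formats are messier than this.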
If you weren't that tall to begin with, how could this mushroom make you taller, Mario?