Can AI Write Shell Scripts?

  • Published: 28 Jun 2024
  • Sign up for the JetsonHacks Newsletter: newsletter.jetsonhacks.com/
    Writing Linux shell scripts can be tedious busywork. Can AI really write code for the tasks we request?
    Website: jetsonhacks.com
    GitHub: github.com/jetsonhacksnano
    Twitter: twitter.com/jetsonhacks
    Some of these links here are affiliate links. As an Amazon Associate I earn from qualifying purchases at no extra cost to you.
  • Science

Comments • 13

  • @cullendolan5619
    @cullendolan5619 1 year ago +2

    Great video. I like it for very specific questions, like looking for a specific function in a library, and also for very broad questions, like comparing two different approaches (e.g., questions that get locked on Stack Overflow). The second isn't as reliable but usually points me in the right direction.

    • @JetsonHacks
      @JetsonHacks  1 year ago +2

      Thank you for the kind words. I've found it useful for getting pointed in the right direction too. However, the scary part is that "it kinda sorta works some of the time". If the person doesn't understand what's going on, all sorts of what we call "bad things" can happen. It's even scarier when dealing with the low-level system code a shell script can produce. Thanks for watching!

  • @lucerocj
    @lucerocj 11 months ago +1

    You should have just copy-pasted the unit test output to have it figure out the error by adding more statements. Honestly, this is exactly how I am using it, and it helps me not spend so much time "guessing" via Google searches. Thanks for the video! Following

    • @lucerocj
      @lucerocj 11 months ago

      Also, I've found ChatGPT-4 is better at responses.
      I am the debugger, but I wouldn't have been able to write the code in the first place, so... learn by failing.

    • @JetsonHacks
      @JetsonHacks  11 months ago

      I've found that using it in that manner can either help you come to an answer more quickly, or move you away from the answer even faster. When there's an error, you may get "Sorry for the confusion earlier", or some other mea culpa. It's using that time to try to work through a better response. I've found that if you can't rectify things in one or two questions, then it's unlikely things will get better. We'll have to see how ChatGPT Code Interpreter does. Thanks for watching!

    • @JetsonHacks
      @JetsonHacks  11 months ago +1

      That's an interesting take. It might very well be that these LLMs can help teach some level of programming.

    • @lucerocj
      @lucerocj 11 months ago

      @@JetsonHacks Obviously this means I don't know "exactly" why this works or, when it fails, "how" to debug it. With that said, though, I am not writing code for any production repo, only using it for my proof-of-concept work or when I need to turn manual steps into troubleshooting scripts (which decreases my documentation and training workload for others).

  • @BenjaminNitschke
    @BenjaminNitschke 1 year ago +2

    I've noticed that the more inexperienced someone is in a topic, the more useful ChatGPT seems to be to them. However, if you are an expert in any field, you just find everything about ChatGPT annoying: hallucinations are the worst, its choices are mostly bad, and its code is nicely formatted but nothing an expert would ever write, etc. If you rarely work in some technology (SQL, JavaScript, shell scripts, cooking, whatever), it is more useful as a hint-giver, providing you with a quick sample of how to get started so you can finish up the hard parts yourself (the 15-minute parts you talked about). IMO it doesn't make any sense to play "debugger" for ChatGPT.

    • @JetsonHacks
      @JetsonHacks  1 year ago +1

      That's a good insight. There seem to be a lot of people who claim ChatGPT can write entire front ends/back ends for web apps using various JavaScript frameworks. However, I don't think these people have ever had to do the "hard part", which is to debug, grow, and maintain the result in a production environment.
      I agree with your view on becoming part of the ChatGPT debugging army. The only satisfying part of writing code is, well, the writing-code part. That is why a lot of interns/fresh hires/junior developers get discouraged when they start software development careers. They start out by having to debug other people's code, or add small features here and there. It's what I call "unfun programming". That's why it's also called "work".
      There's been a dream since the early '80s that software could help a programmer write more and better software. A programmer's workbench, if you will. A person would write a software specification (a very simple example is a command-line parser like the one in the video; see the sketch below), and the machine would return a template or actual code. It should not write code that does not work; if it can't write code that works, then an outline suffices. It would ask for help on implementation details if needed, and ask what type of tradeoffs the program should make (space vs. time complexity, and so on).
      ChatGPT does a surprisingly good job of explaining code snippets. It seems hit-and-miss when trying to help debug interesting code. The current context limits mean that most LLMs can't grok an entire program of any meaningful size. It will be interesting to see how this changes over the next few years as people pour in the next few billion dollars.
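
      For a concrete picture of that command-line parser example, here is a minimal hand-written sketch of the kind of script such a specification might produce. The -v/-o/-n options and their defaults are hypothetical, chosen only to show the shape of the task:

      ```bash
      #!/usr/bin/env bash
      # Sketch of a small command-line parser using bash getopts.
      # The options and defaults are made up for this example.

      set -euo pipefail

      usage() {
          echo "Usage: $0 [-v] [-o output_file] [-n count] input_file" >&2
          exit 1
      }

      verbose=0
      output="out.txt"
      count=1

      while getopts ":vo:n:" opt; do
          case "$opt" in
              v) verbose=1 ;;
              o) output="$OPTARG" ;;
              n) count="$OPTARG" ;;
              :)  echo "Option -$OPTARG needs an argument" >&2; usage ;;
              \?) echo "Unknown option: -$OPTARG" >&2; usage ;;
          esac
      done
      shift $((OPTIND - 1))

      [ $# -eq 1 ] || usage   # exactly one positional argument: the input file
      input="$1"

      if [ "$verbose" -eq 1 ]; then
          echo "Processing $input -> $output ($count passes)"
      fi
      ```

      Even something this small has edge cases (missing option arguments, unknown flags, positional-argument counts) where code that "kinda sorta works" fails quietly.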

  • @bbamboo3
    @bbamboo3 9 months ago

    Since ChatGPT doesn't know what it doesn't know, when you stumble upon a word that is ambiguous in context, the human must figure it out. Example: the word "map" has several meanings in context but was a source of misdirection in some versions of the generated Python script. I repeated the experiment 10 times and found that twice it ran as generated; 8 out of 10 runs had various bugs that produced buggy/worthless results. Some of the generated code was easy to fix; the rest was just junk. My lack of "prompt engineering" skill likely plays a role in this poor score, and I'm running experiments to improve my skill with this tool.

    • @JetsonHacks
      @JetsonHacks  9 months ago

      That's certainly interesting! People get this wrong too, of course. So far it feels to me like it either gets pretty close in the first couple of tries, or things go off the rails; I haven't found much of a middle ground. Asking it to refine its answer can either be immediately rewarding, or the rabbit hole starts to open up.
      On the other hand, Copilot on GitHub feels better and seems to get more answers correct. It's not clear what the differences are between ChatGPT's data analysis mode and Copilot.
      It's certainly interesting to test with it and get a better understanding of what it can do and what it can't do.
      Like most programmers, they have to be watched carefully. Sometimes they write good code; other times they just don't. Thanks for watching!

    • @bbamboo3
      @bbamboo3 9 months ago +1

      @@JetsonHacks The ChatGPT beta code version shows the code for what it does, and that is a huge advantage when I want to see the difference between what it is doing and what I want it to do. Sometimes this allows me to suggest a different package or approach. It was fun to have it write shell scripts like the ones I used on Unix in the 1980s, using AWK for example. 🙂
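
      For anyone who never used that style, here is a minimal sketch of classic shell-plus-AWK; the directory and field positions are assumptions for the example (summing file sizes per owner from standard 'ls -l' output):

      ```bash
      #!/bin/sh
      # Classic shell + AWK: total bytes per file owner under /tmp.
      # Assumes standard 'ls -l' fields: $3 = owner, $5 = size in bytes.

      ls -l /tmp | awk '
          NR > 1 {                # skip the leading "total" summary line
              bytes[$3] += $5     # accumulate sizes per owner
          }
          END {
              for (user in bytes)
                  printf "%-12s %10d bytes\n", user, bytes[user]
          }'
      ```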

    • @JetsonHacks
      @JetsonHacks  9 months ago

      @@bbamboo3 It is certainly fun! I think we're a bit away from being able to treat it like a junior programmer, but it's still a great proof of concept.