This is absolutely awesome. I have been using this to do one-shots and piping them to piper for TTS. Mark, I may not always understand everything, but by far you have educated me the most out of everyone I follow. It's good when I don't understand because it encourages me to push further, so I end up learning more. Thank you so much for putting all this stuff together!
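For anyone curious, the whole pipeline is roughly this (just a sketch; the voice model name is an example, check piper --help on your install for the exact flags):

```bash
# Rough sketch: send a one-shot prompt to llm and speak the answer with piper.
# The .onnx voice file is an example -- use whichever voice you've downloaded.
llm "Explain what a generator is in Python, in two sentences" \
  | piper --model en_US-lessac-medium.onnx --output_file answer.wav
```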
Having my companion directly on the command line like grep is awesome. It seems quite straightforward and very easy to use, which is a great plus.
Informative, thanks
Hi Mark, this very much reminds me of fabric, but easier to use. As a proposal: create a script that writes pytest unit test files for a set of files (or a directory), executes them, and debugs them until they pass, then shows the result :)
Basically, feed the results of the execution back into the LLM (I think there is even an option for that, -c).
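Something roughly like this is what I mean (just a sketch: it assumes the llm CLI is configured, that -c continues the previous conversation, and that the model replies with plain code; the prompts, paths and retry count are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of the idea: generate pytest tests with llm, run them, and feed
# failures back into the same conversation (via -c) until they pass.

src_dir=${1:-.}
mkdir -p tests

for f in "$src_dir"/*.py; do
    test_file="tests/test_$(basename "$f")"

    # Ask the model for tests; depending on the model you may need to
    # strip markdown fences from the reply.
    cat "$f" | llm "Write pytest unit tests for this Python module. Reply with code only." > "$test_file"

    for attempt in 1 2 3; do
        if output=$(pytest "$test_file" 2>&1); then
            echo "Tests for $f pass"
            break
        fi
        # Send the failure output back into the same conversation so the
        # model can fix its own tests.
        echo "$output" | llm -c "These tests failed with the output above. Fix the test file and reply with code only." > "$test_file"
    done
done
```

The feedback loop is the important bit: each failing pytest run goes back into the same conversation, so the model sees its earlier attempt alongside the error output.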
pretty nifty
It is very much like Aider. Does it support OpenRouter API keys? Can it create or update files in the folder where you run it? Does it work fast with Ollama local AIs? I have tried Ollama local AIs and they are very slow. Do you know how to speed them up?
I've not used OpenRouter, but from a quick glance no, I don't think so. With this you are choosing the LLM rather than having it pick which one is appropriate for a given task.
I find Ollama models run pretty fast, but only if they're under 10bn parameters. It's slow above that size.
I'm on an M1 Max with 64 GB RAM but that's split between the GPU and everything else.
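For example, a sub-10B model via the llm-ollama plugin (the model tag here is just an example, use whatever small model you've pulled):

```bash
# Pull a small model and point llm at it -- assumes the llm-ollama plugin is installed.
ollama pull llama3.2:3b
cat notes.md | llm -m llama3.2:3b "Summarise this file in three bullet points"
```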
@learndatawithmark Thank you for your answers. I have an "old" laptop that has only a CPU and no GPU or NPU. Could that be the reason the Ollama local AIs run so slowly? Do you need a GPU at least for Ollama to respond in a reasonable time? Is there something else I can do to speed this up?
Very cool, I have plenty of directories that need checking.