This is exactly how tutorials should be! I’ve wasted so much valuable time on other YouTube channels where you have to suffer through 20 minutes of mindless rambling just to get 2 minutes of actual information!
Andy: Always enjoy your videos. Thanks!
Thanks, you really covered things nobody else did, like tokens etc. I'm gonna give this thing a whirl. Thanks again!
Great tutorial, thanks! Great non-command-line, open-source option for a local LLM. My next step after this tutorial was loading something large like Mixtral 8x7B, which feels like it should almost work with one's PC specs. I suggest finding a slightly smaller quantized version of the model on Hugging Face (as a .GGUF), manually importing it into Jan, and giving it a go. It was very easy as well 👍👍
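For anyone who wants to script that download step, here is a minimal Python sketch using the huggingface_hub client. The repo ID and filename below are assumptions based on a typical GGUF upload, so check the model card on Hugging Face for the exact quantized file name before running it.

```python
# Minimal sketch: download a quantized Mixtral GGUF, then import the file into Jan.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# NOTE: repo_id and filename are example assumptions -- verify them on the
# model card, since quantization names (Q4_K_M, Q5_K_M, ...) vary per upload.
repo_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF"
filename = "mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf"

local_path = hf_hub_download(repo_id=repo_id, filename=filename)
print(f"Downloaded to: {local_path}")

# From here, use Jan's model import option (or its models folder) to load the
# .gguf file -- the exact import flow depends on your Jan version.
```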
I wish you would give us much more information about each model.
Is downloading this to my work laptop an issue? I find it very useful when I need to create a recap of our team meetings or any short meeting. Appreciate your advice, Andy.
I downloaded Jan AI and installed a few models, then tried to have it write using ChatGPT, and it says I need to pay?
Can I upload a PDF as a reference for the AI's response? If not, then Jan is basically useless.
Yeah, you can put a PDF on there, but you'll need a fast computer for that. :)
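For what it's worth, you can also feed a PDF in yourself through Jan's local API server. Here is a rough Python sketch assuming the default local server address (http://localhost:1337/v1) and a model ID like "mistral-ins-7b-q4"; both of those are assumptions, so check Jan's settings for the actual values on your install.

```python
# Rough sketch: extract text from a PDF and ask a local Jan model about it.
# pip install pypdf openai
from openai import OpenAI
from pypdf import PdfReader

# Pull the raw text out of the PDF (works best on text-based, not scanned, PDFs).
reader = PdfReader("meeting_notes.pdf")
pdf_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# ASSUMPTION: Jan's local OpenAI-compatible server is running at this address
# and the model ID matches one you have downloaded -- check Jan's settings.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-ins-7b-q4",
    messages=[
        {"role": "system", "content": "Answer using only the provided document."},
        {"role": "user", "content": f"Document:\n{pdf_text}\n\nSummarize the key points."},
    ],
)
print(response.choices[0].message.content)
```

Keep in mind a long PDF will overflow the context window of a small local model, so you may need to chunk the text first.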
How is it different from LM Studio?
I have a robust 3090 video card with 24GB of VRAM and over 128GB of regular RAM, and it was still chewing up CPU time. I downloaded the Nvidia SDK, and for whatever reason it now uses only the video card's RAM like it should. Mistral is the best, as you said. Can we prove Jan isn't sending any of our data to China or anywhere else? Who can verify this? On my machine, Mistral runs as fast as ChatGPT.
What about the API problem?