daaaang bro, content waterfall! i love your content !
Amazing video Mervin!
Is this Jan 1.0? Jan 6 "will be wild!" :)
How would you compare it with LM Studio and the oobabooga web UI, also considering the roadmap? They also support an OpenAI-compatible API server endpoint.
So what do you think?
I have set oobabooga aside for the moment, because I can't seem to get STT working. I am hoping that Ollama or Jan, via their APIs, will make STT & TTS easier. Any experience?
@@whitneydesignlabs8738 Not yet.
Jan is more in line with LM Studio. They are applications which you can download and install on your local machine.
Text Generation Web UI is web based.
Jan and LM Studio are easy to use, but offer less customisation and can't be extended further.
TextGen Web UI is customisable and can be extended further using plugins/extensions.
@@MervinPraison What about performance, have you found particular differences between the 3?
@@mayorc Haven't tested the performance between those three yet.
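For anyone curious about the OpenAI-compatible endpoint mentioned above, here is a minimal sketch of calling a local server from Python using only the standard library. The base URL is an assumption (Jan's server has defaulted to port 1337; LM Studio and TextGen Web UI listen on different ports), and the model name is whichever model you have downloaded, so check your app's server settings.

```python
import json
import urllib.request

# Assumption: Jan's default local server address. Adjust the port
# to match your app's API server settings.
BASE_URL = "http://localhost:1337/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    """POST to the local OpenAI-compatible server, return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running and a model loaded, something like `chat("mistral-ins-7b-q4", "Hello")` (the model name here is a placeholder) should return the model's reply as a string.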
I really liked the episode! Thanks a lot! Can I ask a question? I want to know how to interact with the chat locally in Excel, Anthropic-style. I hope the question is clear: I want to receive answers in Excel through queries to the local Jan AI, interacting via Anthropic-style formulas in Excel. Maybe there are free solutions for this kind of API integration with Excel. I would be very grateful for your answer!
More precisely, like Claude for Sheets.
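There doesn't appear to be a ready-made free Anthropic-formula add-in for local models, but one simple workaround is a script that queries the local API and writes the answers to a CSV file that Excel can open. A sketch, with `ask()` left as a placeholder for the actual HTTP call to Jan's OpenAI-compatible server:

```python
import csv

def ask(question: str) -> str:
    # Placeholder: replace with a real request to the local server,
    # e.g. POST http://localhost:1337/v1/chat/completions
    return f"(answer to: {question})"

def answers_to_csv(questions, path):
    """Write question/answer pairs to a CSV file Excel can open."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["Question", "Answer"])
        for q in questions:
            writer.writerow([q, ask(q)])

answers_to_csv(["What is Jan?"], "answers.csv")
```

From there, Excel's Data > From Text/CSV import (or simply opening the file) gets the answers into a sheet, without any paid add-in.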
JanAI sounds great. I have to check it out.
What about multilingual models? A model that can talk in English + Spanish + French + German + Italian + Arabic + Persian + Chinese + Japanese?
Most language-related (grammar) questions fail if you ask any open-source model.
However, I tried to combine a subset of multiple models through another model that can at least translate from n to m languages, but most of the time even the English expressions are too flat and not rich enough. At least that's how it seems, judging by the English inference that is afterwards translated to language X by model Y. (Y = a research-only model by Meta)
I agree with you. Open-source models are currently primitive. I haven't tried any multilingual model yet.
@@MervinPraison The current models are definitely a good start.
It's just so misleading to have tons of videos on YT claiming close to / on par with / beyond GPT-3/3.5/4 (even with the caveat "in some use cases").
I really tried hard to find ways to redirect different types of requests to a proper model (or multiple expert models), especially with the help of AI to make those decisions: on-the-fly content analysis, pre-prompted "experts", all ending up with JanAI-like strategies to determine a proper reply to a request, which takes more or less time to work out.
It's so frustrating to have tons of different "dialects" of how to prompt different models (and I don't mean the prompt template). Natural language processing? No. It is similar to "How to prompt Siri": you have to know exactly how to provoke the reply you want.
I tried to find simple system prompts (e.g. a Dream Interpreter that interprets a one-shot description of a dream a. based on ancient Egyptian beliefs, b. based on C. G. Jung) in English, compatible with GPT-3.5 and GPT-4 as well as certain Mistral, Mixtral, and Llama 2 models. It was just frustrating.
Currently, everything seems so theoretical, maybe even primitive. After a year of trial and error, watching everything I can get, it's time to take a step back from this.
Yes, we have a cool syntax with JanAI. But the results aren't amazing.
I am not sure if it is possible to upload one's own documents into Jan and have the chat answer from them. I could not find any interface for uploading documents into it.
I've been searching for it for days.
Please create a tutorial for Ollama (WSL) + how to make a Modelfile for running custom models. Thanks!
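For context while waiting on that tutorial: Ollama's custom-model flow centres on a Modelfile. Below is a minimal sketch that writes one out; the GGUF path and parameter values are made-up examples, and the `ollama create` / `ollama run` commands in the comments follow Ollama's documented CLI.

```python
# Sketch: a minimal Ollama Modelfile for a custom local GGUF model.
# The model path below is a made-up example; point FROM at your file.
modelfile = """\
FROM ./models/my-model.Q4_K_M.gguf
PARAMETER temperature 0.7
SYSTEM You are a concise assistant.
"""

with open("Modelfile", "w") as f:
    f.write(modelfile)

# Then, from the same directory (inside WSL if you're on Windows):
#   ollama create my-model -f Modelfile
#   ollama run my-model
```

You can of course write the Modelfile by hand in any editor; the point is just the three-line format: `FROM`, optional `PARAMETER` lines, and an optional `SYSTEM` prompt.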
Wanted to like it, but there's just too many issues: the GPU not working properly in the Docker version (the app worked fine), a blank page when trying to access the URL over the network from a different machine. Hope they get things ironed out. Looks promising though...
It needs to be able to use any model. But I like it.
Hi Mervin. Very cool. Just today I got the multimodal LLaVA running on a Raspberry Pi 5 using Ollama, inspired by your recent video. I have plans for API integration into my robot soon. How would you compare Jan to Ollama? My use case is CPU-only, 8 GB RAM, mobile embodied AI robot project(s).
Ollama is developer-friendly; Jan mainly focuses on non-developers.
Ah, I see. Thanks for the insight. I will stick to Ollama for now then, as my robot projects would probably be classified as "developer". Cheers! @@MervinPraison
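For a CPU-only robot project like the one above, talking to Ollama over its REST API is straightforward. A sketch using only the Python standard library; the default port 11434 and the `/api/generate` endpoint come from Ollama's API docs, while the model name you pass in is whatever you have pulled locally:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """Send a prompt to a local Ollama server and return its reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With the server running and the model pulled, something like `generate("llava", "Describe the scene")` returns the model's reply, which the robot's control loop can then act on.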
Thanks for the tutorial, just downloaded it today, Jan 5th, on my M2 MacBook. In Settings > Advanced > Experimental mode, I don't see Enable API Server. Am I missing something?
You need to download the latest Experimental (Nightly Build) version directly from github.com/janhq/jan
@@MervinPraison thank you
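If the toggle still doesn't appear, or you are unsure whether the server actually started, a quick way to check from Python is to probe the port. The port 1337 default below is an assumption; match it to whatever your Jan server settings show.

```python
import socket

def server_is_up(host: str = "127.0.0.1", port: int = 1337) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        # Connection refused or timed out: nothing is listening.
        return False
```

`server_is_up()` returning False after you enabled the API server usually means the server process didn't start, which is worth reporting as a bug.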
Can you make a video on errors in AutoGen and VS Code? I am not able to run any model yet :/
It doesn't work for me. It shows no interface even if I reinstall, re-download, etc. It doesn't launch. I tried opening it in administrator mode, but to no avail...
I tested it on a Mac and it's working for me.
Wow amazing
Can I run Jan AI without a GPU? I have only 4 GB RAM and an i5-3570S CPU.
Can you run it? Yes. Would you be better off using Google? Yes. Even 16 GB will run slowly.
No function calling or grammar.
On the roadmap, though.
Not worth a look until they add one or the other to get structured data out via API.
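Until function calling or grammar-constrained output lands, a common workaround is to prompt the model to answer in JSON and parse the reply defensively. A sketch (the sample reply below is fabricated for illustration):

```python
import json
import re

def extract_json(reply: str):
    """Pull the first {...} block out of a model reply, or None."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        # The model wrapped or mangled the JSON; caller can retry.
        return None

# Fabricated example of a chatty model reply containing JSON.
reply = 'Sure! Here you go: {"city": "Paris", "country": "France"}'
data = extract_json(reply)
```

It's fragile compared to real function calling (hence the roadmap item), but paired with a "respond only with JSON" system prompt and a retry on `None`, it gets structured data out of the API today.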
great