I've been using Mistral Large 2; in my experience it has a better sense of the environment and object permanence than Sonnet 3.5.
Do you use it via the MistralAI API? Don't you have repetition issues?
@@davidvincent380 Yes, I use the MistralAI API, and as for repetition, I'd say it's not as repetitive as 3.5.
Just as there is a Mini Magnum 12B, we need a mini version of Mistral Large 123B (Mistral Mini 12B?)
Mistral Nemo?
@@OneShot_cest_mieux Oh, is Mistral Nemo a smaller version of Mistral Large?
A shaggy motor scooter...
Sux
Yes, it's very repetitive, unfortunately.
It doesn't help that it's a barebones API (no min-P, smooth sampling, repetition penalty, etc.).
Finetunes would be very welcome, but the non-commercial licence, plus the model being too big for 99% of power users, is a problem.
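For comparison, here's a minimal sketch of what those missing samplers look like when the model runs through a local backend instead of the hosted API. It assumes a text-generation-webui-style OpenAI-compatible server on 127.0.0.1:5000 that accepts extra sampler fields such as min_p, repetition_penalty, and smoothing_factor; the port, endpoint, and field names are assumptions about that local setup, not features of the official MistralAI API.

```python
# Hypothetical sketch: passing sampler settings the hosted API lacks
# to a local OpenAI-compatible backend (e.g. text-generation-webui).
# Port, endpoint, model name, and extra field names are assumptions about the local server.
import requests

payload = {
    "model": "local-model",  # placeholder; many local backends ignore this field
    "messages": [{"role": "user", "content": "Continue the scene."}],
    "max_tokens": 300,
    "temperature": 0.9,
    # Extra sampler fields some local backends accept but the hosted API does not:
    "min_p": 0.05,
    "repetition_penalty": 1.1,
    "smoothing_factor": 0.2,
}

resp = requests.post("http://127.0.0.1:5000/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```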
I hope this model will appear on Together AI.
It seems the license for this model prohibits its use for commercial purposes.
I can't get this to connect to Ollama, and Oobabooga seems crash-prone these days.
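If it helps, a quick sanity check is to hit Ollama's local API directly before pointing a frontend at it. The sketch below assumes Ollama is running on its default port 11434; the model tag is an assumption, use whatever `ollama list` shows on your machine.

```python
# Sanity check for a local Ollama server (default port 11434).
# The model tag below is an assumption; substitute one that `ollama list` reports.
import requests

BASE = "http://localhost:11434"

# 1) Is the server up, and which models are pulled?
tags = requests.get(f"{BASE}/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])

# 2) Try a small non-streaming generation against one of those models.
resp = requests.post(
    f"{BASE}/api/generate",
    json={"model": "mistral-large", "prompt": "Say hello.", "stream": False},
    timeout=600,
)
print(resp.json().get("response"))
```

If step 1 fails, the frontend was never going to connect either; if step 2 fails, it's usually a wrong model tag or not enough memory.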
How did you manage the format at 0:55?
Does it know how many r's Strawberry has tho?
How do you use it? Can it be on AI Horde?
Which is the best API overall, and how is it paid for? From what I've seen they're subscriptions; are they monthly, or how do they work? I'm pretty new to this.
The subscriptions are for the general public and only give you access through the companies' proprietary websites and apps, like ChatGPT Plus. The APIs are a program-to-program link, and the remote ones are generally pay-per-token, aimed more at developers.
The general-public sites/apps are usually limited and heavily censored. So if you want more advanced use, APIs are the way to go, paired with an interface like SillyTavern, or even the little AI games that are coming out.
As for which is best, the API doesn't really matter; it's the AI model that counts. There's a model scoreboard at the end of the video that should give you an idea of the best models around right now.
You can also try running a model on your PC with a program like LM Studio and link it to ST through its API. The advantages: it's free (you just pay for the electricity), and there are many uncensored open-weight models to pick from. Mistral Large 2 123B is open-weight, meaning you can download it and run it yourself, if your PC is powerful enough.
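As a rough illustration of the "link it to ST through its API" part: LM Studio can expose a local OpenAI-compatible server (on port 1234 by default), and a frontend, or a few lines of Python, talks to it the same way it would talk to a paid remote endpoint. This is a minimal sketch assuming that default port and a model already loaded; the model identifier is an assumption, LM Studio shows the real one in its UI.

```python
# Minimal sketch: querying LM Studio's local OpenAI-compatible server.
# Assumes the local server is running on its default port (1234) with a model loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # local servers typically ignore the key, but the client requires one
)

reply = client.chat.completions.create(
    model="local-model",  # assumed placeholder; use the id LM Studio reports
    messages=[{"role": "user", "content": "Give me a one-line greeting in character."}],
    max_tokens=100,
)
print(reply.choices[0].message.content)
```

A remote pay-per-token API works the same way: swap base_url and api_key for the provider's values, and you're billed per input/output token instead of paying a monthly subscription.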
Just use Mistral on their site; I don't see any censorship there.
So the question is, what is the most powerful RP model right now?
Claude 3.5 Sonnet.
How about now? @@MustacheAI
If only we could run it locally...
You can....but money...ya know?
@@ElaraArale I would love to see you try running that kind of PC in an apartment.
@@ElaraArale It's a luxury that many cannot afford. It's also not worth buying a new graphics card just to roleplay with AI, whereas a good video game is more reasonable. When AI becomes more than just text that tends to go off the rails, then it'll truly be worth buying cards for.
Yes, it seems to me a ~55B option would be the sweet spot for memory; 70B already runs into limits, for example on the context window. We have 8B variants and then, abruptly, huge models that need server-class amounts of memory.
You can always run models from an SSD and wait (about a day for an answer), even 405B. Or buy a bunch of RAM if you have enough money, or even VRAM...
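To put rough numbers on that, a back-of-the-envelope estimate is parameter count times bytes per weight at the chosen quantization, plus some overhead for the context cache. The figures below are approximations under that assumption, not measured requirements.

```python
# Back-of-the-envelope memory estimate: params * bytes-per-weight + overhead.
# Rough approximations only; real usage depends on quantization format and context length.
def estimate_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 4.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # billions of params * bytes each = GB
    return weights_gb + overhead_gb                    # flat overhead assumed for KV cache/buffers

for name, params in [("8B", 8), ("70B", 70), ("123B (Mistral Large 2)", 123), ("405B", 405)]:
    print(f"{name:>22}: ~{estimate_gb(params, 16):4.0f} GB at fp16, ~{estimate_gb(params, 4):4.0f} GB at 4-bit")
```

So even at 4-bit, a 123B model wants on the order of 65 GB of fast memory, which is why spilling to RAM or an SSD trades money for the kind of wait described above.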
🏎 dzzzz
I guess this model can't chat in English very well.