Now You Can Easily Host Your Ollama on Salad Cloud for Just $0.30
- Published: 23 Sep 2024
- In this video, let's host your Ollama models in the cloud.
Create a chatbot for just $0.30 per hour using Ollama, Salad Cloud, and Open WebUI.
Let me take you step by step through how to do this.
Let's do this!
Join the AI Revolution!
#SALAD #SALADGPU #customollama #custommodels #noushermes #functioncalling #jsonstructuredoutput #AGI #openai #autogen #windows #ollama #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #webui #python #llm #largelanguagemodels
CHANNEL LINKS:
🕵️♀️ Join my Patreon: / promptengineer975
☕ Buy me a coffee: ko-fi.com/prom...
📞 Get on a Call with me at $125 Calendly: calendly.com/p...
❤️ Subscribe: / @promptengineer48
💀 GitHub Profile: github.com/Pro...
🔖 Twitter Profile: / prompt48
TIME STAMPS:
0:00 Intro
🎁Subscribe to my channel: / @promptengineer48
If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!
I can't find the Deployment URL as illustrated. Where do I check it?
Nice video. One question: do you know how I can set Ollama to allow multiple requests?
Yes, check out this video:
ruclips.net/video/8r_8CZqt5yk/видео.htmlsi=TDCcO0gksibb57P_
@@PromptEngineer48 Yes, that works. But I don't know how I can set this on Salad.
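For context on the thread above: Ollama's request parallelism is controlled by server environment variables. A minimal sketch (the variable names are real Ollama settings; the values are illustrative, and on Salad you would set them as container environment variables in the deployment configuration rather than in a shell):

```shell
# OLLAMA_NUM_PARALLEL: how many requests one loaded model serves concurrently.
# OLLAMA_MAX_LOADED_MODELS: how many models may stay loaded at once.
export OLLAMA_NUM_PARALLEL=4
export OLLAMA_MAX_LOADED_MODELS=2

# Start the server with those settings applied.
ollama serve
```

On a managed platform, the same key/value pairs go into the container's environment-variable section of the deployment form.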
Bro, follow-up question. Let's say it's deployed and running. If I don't use the app, will it still charge me the per-hour rate? And a second question: is the token response unlimited? No limit?
Hi, yes, this will keep charging you the per-hour rate. If you want to be charged only when you use it, you need to explore a serverless architecture; search for RunPod serverless.
The response tokens are limited by the LLM. Which LLM are you using? It is not unlimited.
Best video
Thanks !!
Just $0.394 ... per hour 😏= $283.68 a month?
vs NVIDIA A100 cost of $10000 😀