Definitely would love to see more videos on training models, fine-tuning, adding documents, etc.
Love this channel; your explanations are clear. Please do a video on fine-tuning an LLM for a specific task, with a real-world use case.
First, I would like to thank you: as a beginner, you've given me a solid platform to build on. I'm also eagerly waiting for your next video on how to train a model locally and fine-tune it. Thank you so much once again.
Thank you, glad you liked it!
Sound quality is much better in this new setup. I just saw your FastAPI video (in case you're wondering why I'm commenting on the sound 😂).
Haha I'm glad to hear that. I made some adjustments to my set-up, glad it's paying off :)
You have a great knack for keeping things simple and understandable.
Thank you so much for creating such amazing content! I’d love to see more videos on training and fine-tuning models.
Love this channel, very clear and factual explanation of the topic
Thank you, I really appreciate hearing that :)
Thanks, the simple codes you showed helped me a lot!
Glad to hear that!
It would be interesting to train a custom LLM instead of using RAG [2:45].
I haven't looked into training an LLM. It's a bit more challenging and expensive to do than just using an off-the-shelf model, but it's a great way to gain more control and quality from the LLM.
@@pixegami There's a limit on an LLM's context window, so it's hard to fit a whole knowledge base into a small amount of text. There aren't many examples of using the ldap3 library in Python, and even llama3 knows it. And it's even harder to get it to produce examples with my corporation's own messaging library.
Can you do a new episode combining "Ollama: Run LLMs Locally On Your Computer (Fast and Easy)" and "Langchain Python Project: Easy AI/Chat For Your Docs", where you use just a local LLM to process the docs?
Absolutely :) A lot of people have been asking for this, so that's going to be my next video (plus a couple of other top requested features).
Diving a bit deeper into embeddings would be nice. And vector databases. How do you know the quality of your embeddings? What made you go with Bedrock?
Ultimately, the only way to test your app effectively is to do end-to-end testing with a bunch of sample answers/questions. If you get good results, and the embeddings help you find the right items in the DB, your embeddings are probably good enough.
As for Bedrock: my decision wasn't exactly to use Bedrock; it was more to use a larger cloud-based model (OpenAI or Gemini would be fine too). I just used Bedrock because my developer stack is quite skewed towards AWS, so it was really just personal choice.
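To make that concrete, here's a minimal sketch of that kind of end-to-end retrieval check, assuming Chroma with its default embedding function; the document IDs and test questions are just illustrative:

```python
import chromadb

# Toy knowledge base: in practice these would be chunks of your own documents.
docs = {
    "doc-returns": "Items can be returned within 30 days for a full refund.",
    "doc-shipping": "Standard shipping takes 3 to 5 business days.",
}

# Hand-written test set: each question should retrieve a known document.
test_cases = [
    ("How long do I have to return an item?", "doc-returns"),
    ("When will my order arrive?", "doc-shipping"),
]

client = chromadb.Client()
collection = client.create_collection(name="eval-demo")
collection.add(ids=list(docs.keys()), documents=list(docs.values()))

# Hit rate: how often the expected document appears in the top-k results.
hits = 0
for question, expected_id in test_cases:
    results = collection.query(query_texts=[question], n_results=1)
    if expected_id in results["ids"][0]:
        hits += 1

print(f"Hit rate: {hits}/{len(test_cases)}")
```

If the hit rate drops when you swap in a different embedding function, that's a quick signal about embedding quality on your own data.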
@@pixegami Could I hire you to assist me on a project? It's very similar to your tutorial, and it'd save me time.
It's good, a very nice explanation 😀
Can I integrate Ollama with Java?
Doesn't seem like there's a first-party Java integration, but there are some third-party ones: github.com/amithkoujalgi/ollama4j
Or you can use Java to make a standard REST API call to the local server directly.
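For reference, here's the shape of that call, sketched in Python against Ollama's default local endpoint; the same JSON POST works from Java with any HTTP client (e.g. java.net.http.HttpClient):

```python
import requests

# Ollama's local server listens on port 11434 by default.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any model you've pulled with `ollama pull`
        "prompt": "Why is the sky blue?",
        "stream": False,    # return a single JSON object instead of a stream
    },
)
print(response.json()["response"])
```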
Hi,
I really like all your lectures and examples. I just wonder if you could show an example of how to use RAG within a Ruby-based REST API web architecture (using, for instance, Llama and Chroma semantic similarity search, with a PDF of a job posting as the input)?
Thanks a lot!
Can I install AI Town with this? The other method was too complex for me, as I'm new to a lot of this.
Sorry, I'm not familiar with "AI Town" - is it this? github.com/a16z-infra/ai-town
It looks like you can use Ollama as a backend: github.com/a16z-infra/ai-town?tab=readme-ov-file#3-to-run-a-local-llm-download-and-run-ollama
Super interesting! Do you know what the RAM requirements are to run this locally?
The question you should be asking is how the system requirements change from Llama 1 to Llama 3. Even if you invest heavily now, as future versions come out and LLMs keep growing in size, there's no point running locally unless you're prepared to upgrade your hardware every 4 years. Whatever you earn as a bonus each year, set it aside to invest in hardware, online tutorials, and books.
Finally, you could put your question to ChatGPT itself instead of seeking answers here.
@@manoharmeka999 You seem very strongly opinionated. But there are applications, such as dealing with sensitive documents, where you might not want to expose that info to OpenAI or anyone else via a query. Also, that money might be a lot for people in India, but it's just a business expense for others.
When he says locally, can anyone explain what that means? For example, say I want to use an LLM that I've downloaded onto my machine, privately. Is that what it means? Not connected to the internet?
Yes, you're right. It means that even if you lose your internet connection, you will still receive a response from the chat.
How about Meta’s LLM?
Absolutely. This is available on Ollama. You can use `llama2`, but now `llama3` is also available: ollama.com/blog/llama3
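As a quick sketch of using it from Python, assuming you've pulled the model with `ollama pull llama3` and installed the official client (`pip install ollama`):

```python
import ollama

# Chat with the locally running llama3 model via the Ollama Python client.
reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(reply["message"]["content"])
```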
thanks
You're welcome!