There were some concepts I didn't quite understand; for example, tunneling from the Windows PC to the Mac (if it's on your local network, why work with VPN protocols rather than client/server - due to needing a stateful connection vs. 200 response code or something?). But the interface itself is brilliant! And I think that when it becomes agent-swarm-capable it's going to be a much better option for me than Crew AI, as it feels more intuitive, I am just going to need multiple agents working together. I have never installed a local LLM, but you have inspired me to give it a try. Thanks!
I think it's a very good application, easy to use, and after testing it for a day or so, I have some wishes. 1. Direct commands that bypass the agent LLM in agent mode. It takes time for the agent to understand the sentence and convert it into an internal command, and URL parsing sometimes fails depending on the agent. For example, a command that scrapes a specified URL and shows the result, a command that lists the currently registered documents with numbering, and a command that summarizes a document by that number instead of its full name. 2. I wish there were a way to pre-test settings in the options window, such as the chosen LLM or search engine, to make sure they are correct. I hope this application becomes widely known and loved by many people.
Hi Tim, thank you very much for the great video showcasing open-source LLMs and tools like AnythingLLM to create agents. I followed your video and was able to do everything in it successfully. Have you made other agent videos for other use cases? I look forward to seeing them. Cheers
I love this tool; I have already made several workspaces, each with its own LLM and RAG. This video was a good how-to with clear explanations. I am a Python developer and I would like to create my own agents.
Hi Tim, I am absolutely impressed with the capabilities of AnythingLLM. Just a small query: how can I deploy it on a cloud machine and serve it as a chat agent on my website? I want to add a few learning resources as PDFs to the RAG documents so that my users can chat with the content of those PDFs on my website. I also want to understand how many parallel instances of a similar scenario, each with a different set of PDFs, are possible. For instance, if I am selling ebooks as digital products, can unique instances be auto-generated for each user based on their purchase?
We offer a standalone Docker image that is a multi-user version of the desktop app. It has a public chat embed that is basically a publicly accessible workspace chat window. You can deploy it in a lot of places depending on what you want to accomplish: github.com/Mintplex-Labs/anything-llm?tab=readme-ov-file#-self-hosting For this, you could do one AnythingLLM instance with multiple workspaces, where each has its own set of documents, and then a chat widget for each. This would give you the end result you are looking for.
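For anyone else reading along: the self-hosted Docker route mentioned above looks roughly like this. This is a sketch from memory; flags and paths may have changed, so treat the README linked in the reply as authoritative.

```shell
# Persist workspaces/documents outside the container (path is your choice)
export STORAGE_LOCATION=$HOME/anythingllm
mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

# Run the multi-user server image; UI and API are served on port 3001
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -v "$STORAGE_LOCATION/.env:/app/server/.env" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```

From there, each workspace holds its own documents, and the embeddable chat widget can point a given page's visitors at whichever workspace you choose.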
Great video! I also want to know how well the RAG function of AnythingLLM performs. It's important that text, images, and papers are handled properly and that meaningful chunking is achieved.
If you aren't already, will you please look into integrating knowledge graphs into the app, along the same lines as the graphRAG project? Thanks for everything you do!
Actually, I have a development branch from long ago with KGs in it. I didn't find the performance much more remarkable, and the reason it never made it to prod was that some OSS LLMs performed horribly when trying to create node/relationship entries in the graph DB I was using.
Super useful, thanks very much! I haven't been able to get the Google integration to work reliably yet; I don't trust what it pulls back when I ask for what I think are simple things, like the ingredients of a very specific product. Scraping the site and pulling in the page data fixes that. Again, very cool!
Great things shown. Thanks for all the work and commitment. 🎉 Here is a fairly specific use case I am interested in: I am a mind-mapping addict. I use MindManager, which stores maps in .mmap format. I would like to ask AnythingLLM to help me scan all my folders for mind maps on different subjects and RAG & summarize them, without having to export all the .mmap files to another format. Is this doable at this stage? If not, what would need to exist or be created?
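Until something like that is built in, a pre-processing script could do the scan-and-extract step and hand plain text to AnythingLLM's document uploader. A rough sketch follows; the key assumption (worth verifying against your own files) is that a .mmap file is a zip package whose main part is XML, with topic titles stored in element text or attributes.

```python
import os
import zipfile
import xml.etree.ElementTree as ET

def find_mmap_files(root):
    """Collect every MindManager map under `root`, recursively."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        hits += [os.path.join(dirpath, f)
                 for f in files if f.lower().endswith(".mmap")]
    return hits

def extract_text(mmap_path):
    """Pull plain text out of one map.

    Assumption: the .mmap is a zip containing XML parts (commonly
    Document.xml). Topic titles may live in text nodes or attributes,
    so both are collected here.
    """
    texts = []
    with zipfile.ZipFile(mmap_path) as z:
        for name in z.namelist():
            if name.lower().endswith(".xml"):
                tree = ET.fromstring(z.read(name))
                for node in tree.iter():
                    if node.text and node.text.strip():
                        texts.append(node.text.strip())
                    # some writers keep titles in attributes, not text
                    texts += [v.strip() for v in node.attrib.values()
                              if v.strip()]
    return " ".join(texts)
```

The output of `extract_text` could be written to a .txt file per map and dropped into a workspace, which sidesteps the manual-export step the comment asks about.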
@sergiofigueiredo1987, @TimCarambat, I agree with Sergio. Wow! I have Ollama installed locally on a Windows machine in WSL. (I was leery of the Windows preview, but I may switch because NATing the Docker container is a pain.) I also pondered how to build a vector DB on my machine and integrate agents. You guys have already done it!
@TimCarambat Hey Tim, it won't let me select anything under Workspace Agent LLM Provider even though everything is set up and working. Ollama is obviously running, and everything else in AnythingLLM is using Ollama fine, but this selection option doesn't show like yours does.
@Tim: I am a professional translator (English to French), and I've just discovered AnythingLLM. Sometimes I have to translate confidential documents that cannot be shared on the cloud; they need to remain locally on my own computer. Once the translation is done, they have to be encrypted before being sent to clients. Could I use AnythingLLM to help me with the translation process? Could I use it with my existing lexicons, glossaries and personal dictionaries? Most are PDF or DOCX files. How would I do that? What are the first steps? Many thanks if you can give me some hints on how to proceed. I'm now a new subscriber! 😊
Great demo, but if I'm reading the other comments correctly, you cannot put your own GUI on top of AnythingLLM to interact with it from your own internal website/app? If that's correct, I can only demo/tinker with the tool and not implement anything real at my company for internal use. Novice here, but I like the approach. I can't tell how much data you can train your own LLM on, but I will keep searching for info.
I know it's a huge ask, but it would be great if it could listen to inputs and active windows. It would be really cool if it could capture and describe my workflow; I could analyze what I am doing and then generate macros.
Will be coming soon! Just carving out how agents should work within the context of AnythingLLM and we should be good. Also, it would be nice to be able to just import your current CrewAI setup and use it in AnythingLLM, to save you the work you have done so far.
Looks really good and simple. I tried PrivateGPT using conda/Poetry and could never get it to work, so I jumped into WSL on Windows, connecting to Ubuntu running Ollama via a web UI. That works great, but this just looks so much easier; I will have to give it a try. What I do like with the web UI I have is that I can select different models, and even use multiple models at the same time.
Yeah, we didn't want to "rebuild" what is already built and amazing, like text-generation-webui. There's no reason why we can't wrap around your existing efforts on those tools and just elevate that experience with additional features like RAG, agents, etc.
Productive criticism: instead of saying "we do this... that...", say "typing / clicking ... will ...". This provides 1. clear instructions to follow and 2. the purpose of each action, all in one go, saving time and reducing or eliminating confusion.
Awesome tool! I installed it today and I'm super hyped to have such a powerful tool running on my PC! I was wondering if it is possible to access the agent functions through the API? I couldn't find anything about it in the documentation.
@TimCarambat I'm very impressed with AnythingLLM, particularly how you can easily incorporate additional capabilities with agents. I'm trying to create a GPT chatbot in Slack without using OpenAI, and at first, I was going to use LM Studio since it has a "server" component - where you can pass it API calls and reflect the answer in a Slack bot. I've looked around and I do not see this feature in AnythingLLM - is this coming or planned? I'd love to drop everything and just use your excellent tool.
Tim, you are the man. Great work from you and your team on all of this great software; you are making the complex simple. One request, or a pointer in the right direction: are there any CLI tools to execute the agents and output the results to any of the workspaces? The GUI is great, but at some point all of that needs some automation. Anyway, keep up the great work.
This is a great point. We do have an API that you can use, but it currently does not support agents :( Guess I know what needs to be fixed now; you are not the first to voice this!
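For anyone wanting to script against the existing (non-agent) API in the meantime, here is a sketch of assembling a workspace chat call. The base URL, endpoint path, and payload keys are assumptions on my part; check the Swagger/API docs your own instance serves before relying on them.

```python
import json

API_BASE = "http://localhost:3001/api/v1"  # assumed default local port

def build_chat_request(api_key: str, workspace_slug: str, message: str):
    """Assemble the URL, headers, and JSON body for a workspace chat call.

    The /workspace/<slug>/chat path and the {"message", "mode"} payload
    shape are assumptions; verify them against your instance's API docs.
    """
    url = f"{API_BASE}/workspace/{workspace_slug}/chat"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "mode": "chat"})
    return url, headers, body

# Example: inspect the pieces before wiring them into an HTTP client
url, headers, body = build_chat_request("MY-KEY", "my-docs",
                                        "Summarize the uploaded PDF")
```

Pairing this with any HTTP client (requests, curl, etc.) gives a crude CLI in the spirit of the request above, at least until agent calls land in the API.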
@TimCarambat I had to pause the video just to leave a comment! I'm deeply impressed by the excellence and simplicity of the content presented here. It's truly remarkable to have access to such tools, created by a team that clearly demonstrates passion and a keen ear for what we all think would be great to have, and that, at every update, distills all of these wishes into a few simple clicks within this amazing piece of technology! I'm immensely grateful for the opportunity to experience the brilliance of the software engineering and development of AnythingLLM, especially within the context of open-source communities. Participating in the advancement of genuine and incredible open tools is a privilege. Thank you Tim! I will be promoting this project to the moon and back, because it deserves to be known.
This is so incredibly kind. Sharing with team!
haha I was just about to leave a comment when I read yours. I feel the same. What a champion Tim is. I do not know if I will ever install AnythingLLM but I think I will donate to Tim regardless.
Aye, I was interested in anythingLLM a while back but chose another project for my inference server. I've found getting half decent agent capabilities to be a huge time sink for someone with my skill set (I'm a physical security guy, not a programmer) and the results just weren't worth the time invested.
Even basic agent capabilities with RAG, memory and so on in a package that I can just plug into ollama sounds awesome.
Prepping the server now. Here's hoping.
AnythingLLM NEEDS to get more attention, because it's simply great! I can't wait to see custom agents in AnythingLLM! Well done!
Bro, this has to be the most comprehensive, simple, engaging and all-around entertaining video on AI I've ever watched. Your presentation, explanations, and expert-level knowledge are all 'S' tier! Bra-freakin'-vo! Subscriber well earned and deserved! 🏆👏🏽👏🏽
Been using it for the past few months and it's my go-to app for local RAG. Adding agents is a huge plus. Looking forward to being able to add my own AutoGen agents to the list with their own special tools. Thanks for the great work, Tim.
You are a gift from god, Tim Carambat.
Thank you for your continuous efforts to make these technologies available to the rest of us.
If this were 10,000 years ago, YOU'd be the guy teaching the rest of us how to make fire with flint and tinder.
May you and your loved ones be blessed for eternity!
This is very high praise. I appreciate the kind comment!
Thank you very much. As soon as I saw that RAG was built in and it was simple to use, I immediately started finding readme pdfs on various topics to ensure I could use this tool as efficiently as possible. After my targeted pdfs are found, I plan on grabbing data from how-to and wiki.
Thank you for explaining quantisation in detail for newbies.
Screw giving a Github star (I did anyway). Tell me when you're going public so I can buy shares!! lol Seriously, you have a winner here.
@JRo250 I'll be sure to preallocate shares for star givers.
Note to SEC: this is a joke. Maybe
You built an amazing piece of software. Thank god that I stumbled across this video.
This is by far the easiest and most powerful way to use LLMs locally, full support, like and sub. And many thanks for the amazing work, especially being open source.
🫡
Funny ! I heard yesterday for the first time about AnythingLLM during an AI-info event.... and discarded the idea of giving it more attention because it was presented as "just another local RAG support". And now I stumble across this video by chance - and the additional agent functionality changes everything ! BTW, very well presented , this feature !
My immediate idea & feedback: if there was ANY chance to model custom agents in Flowise and re-import the JSON exports of this Flowise flow as input for an AnythingLLM custom agent, you'd save yourself the trouble of designing your own agent editor AND would start with a comparably large installed base. OK, maybe that's just wishful thinking..... but maybe I'm also not the only one with this wish to facilitate local agent building ;-)
@TimCarambat
I'm excited to see the features you talked about working with Ollama for the agent, like in the video. As of now it's the same as before I updated, but it's exciting to think of the future.
I had no idea you had a channel talking about your software. I'm a big fan of your work!
I'm a chronic video skipper, but I watched this back to back. Great explanations, and I can't wait to try this out! Would love to see more videos, tutorials or even lectures from you. You really have a knack for explaining things! 😊
PS I've starred on Github!
I really appreciate you saying this. I have gotten a comment or two before saying I'm the worst at it. Can't please everyone! Glad you found it useful.
Great, this will make LLMs more understandable for many people.
What I love about your tutorials is that you succinctly explain all the things that come across during the tutorial. Thanks!
Tim, thank you for making the world a better place with this awesome tool! :)
Great work , congrats . Im Very impressed with your gift of comunicate and the impressive amount of work that it took to develop this tool!!!
This is so dope. Great no-code solution and it's awesome that it's open source.
Excellent video! Thank you for explaining things plainly and quickly. Valuable.
If you type "ollama show <model name>" you can see the context window of the model, FYI.
Very cool to play with, look forward to seeing where the Agents go, nice work!
This is awesome!
I was looking for frameworks similar to this and now i see that this is way better than what we were looking for.
Great job on this one!
Fantastic, Tim! Mine doesn't have the agent config; I guess I need to delete and update, so I'll try that. Looks great! Keep up the good work. I love AnythingLLM, I really do!
Damn... finally found the tool I've been looking for. MAN, you saved my day. I was stuck trying to find a web UI for my remote Ollama server. You're a gift from heaven; keep it up, you're helping a lot of people like us. Thank you so much! ❤❤❤😂😅😊😊
Amazing program Tim. So easy to understand and use. You and the team have a done a stellar job. Cheers
This is absolutely absurd. Thank you so much, this is an incredible project and I hope it gets more attention. I'm sold!
AnythingLLM is awesome. Glad to hear custom agents are on the roadmap; it's the big hole in capability. Also need a config option to change the agent prompt trigger: I scan a lot of code, and @ is often used to define decorators.
Thanks for building AnythingLLM with these features. For many days I was searching for a better UI than a terminal, and for more features for Ollama models, and you have done it. Thank you very much.
Mind-blowingly simple, coherent, and full of information. I don't think I've come across anything comparable on YouTube in all these respects (I don't leave comments often, but this really is worth it). Good job, Tim Carambat!
Hey, nice to meet you. I've just come across this now and honestly, it couldn't come at a better time. I had already realised the potential of AnythingLLM and was looking at how to utilise it, as I'm building some text-to-action agents, and your video just accelerated my path to that objective. I have subscribed and look forward to more of your posts. Nice work 👍🙏👍
Would love to see this run stable diffusion and comfy ui workflows
I've been using this for months, and it's fantastic. Dude, thanks. Amazing work.
@@myronkoch this is such a nice thing to hear. Thank you for your support!
Big thanks, man. This video helps a lot for a beginner like me to understand how good a local LLM is and which use cases we have. Thumbs up for this great video.
Good stuff, will try it out. Subscribed. Looking forward to seeing how this develops.
Amazing Tim. Keep up the good work.
NNNooooo!!! Thank you!! Great tool! Have lots of process documents at work and because of compliance and privacy issues, we are not allowed to upload any documents onto the internet. This is a game changer!!!!
Yo, honestly it feels great when guys like you make your software completely free, and I also think you should keep an option for donations. After seeing guys like you, I will make something great too and make it completely free to use and open source. Again, thanks dude! ❤
This is a very nice tool! I appreciate you doing this intro video personally.
I have loved AnythingLLM since the beginning.
Amazing, the way you have explained a complex concept. Thank you
Very strong video Tim. I'm going to give this all a try right now.
Can't wait to use this. Thank you!
This is awesome work. I looked at the other simple to install Windows front ends and stumbled on this. Pretty cool stuff and I love how you can add documents and external websites to feed it information. An offline LLM is soooooo much more preferred. The only item I don't understand is why you could just ask a regular question once you provided the document, but used @agent when asking to summarize a document.
IMO, having a local LLM that is even **only** like 75% as good as an online alternative is just much more rewarding.
Like i can be on an airplane, open my laptop, and start brainstorming with an AI. Pretty neat.
The next evolution would be a local AI on your phone, but I don't think we have that tech _yet_.
Amazing tool, AnythingLLM!!! I love it so much! On my M1 Max MacBook Pro it runs very smoothly locally. Starred AnythingLLM for sure!!! Keep up your great work, and thanks a million for sharing.
love to hear this! Email us any feedback! team@mintplexlabs.com
Incredible.. gonna make building tools so much easier. Cant wait to see more agent abilities added!
Awesome vid! Really impressed with how you presented the information. 🙏 thank you
🎯 Key Takeaways for quick navigation:
00:00 *🤖 Introduction to Ollama & AnythingLLM*
- Introduction to Ollama and AnythingLLM
- Explanation of the Ollama application for running LLMs on local devices
- Overview of the quantization process and agent capabilities in LLMs
02:30 *🧠 Understanding Model Quantization and Selection*
- Importance of selecting the right quantization level for LLMs
- Differences between quantization levels like Q1 and Q8
- How quantization impacts model performance and reliability
06:07 *🛠 Setting up AnythingLLM with a Q8 Model and Ollama*
- Instructions for setting up Ollama with a Q8 Llama model
- Steps to download and run AnythingLLM on local devices
- Connecting to the Ollama server and configuring privacy settings
08:27 *💬 Enhancing Model Knowledge Using RAG and Workspaces*
- Uploading documents to the workspace for the model to reference
- Improving model responses by utilizing documents in the workspace
- Configuring workspace settings for better model performance
11:41 *🌐 Using Agents for Advanced Functionality in AnythingLLM*
- Utilizing agents to extend LLM capabilities beyond basic text responses
- Enabling web scraping, file generation, summarization, and memory functions
- Integrating external services like Google for web browsing functionality
Made with HARPA AI
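Since the takeaways above center on quantization levels like Q1 vs Q8, here is a toy sketch of the core idea. The real GGUF Q-formats use block-wise scales and more elaborate schemes; this only illustrates the intuition that fewer bits means coarser weights and larger round-trip error.

```python
# Toy illustration of quantization loss: map float weights to n-bit
# integers and back, then measure the worst-case reconstruction error.

def quantize_roundtrip(weights, bits):
    # Symmetric linear quantization: scale by the largest magnitude
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    ints = [round(w / scale) for w in weights]
    return [i * scale for i in ints]

weights = [0.62, -1.30, 0.05, 0.88, -0.41]
for bits in (8, 4, 2):
    restored = quantize_roundtrip(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, restored))
    print(bits, "bits -> max error", round(err, 4))
```

Running this shows the error growing sharply as the bit width drops, which is the trade-off behind choosing Q8 over more aggressive quantizations in the video.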
@Tim do you know anything about Home Assistant, the home-automation application? The reason I ask is that they already have some integration with LLMs, but not with agents, and nothing specialized for Home Assistant automations. When you have time, check it out and see if it's possible to integrate this with Home Assistant; that would be great. Great job with the video!
Debug mode would be ideal. An agent asked to scrape the web just exits without any error, even though I do have a search engine API defined.
Thanks for developing AnythingLLM and for the tutorial! I did not know I can create agents that can go online!
Great software, great video, and a lot to learn from it, so way to go, man! Thanks for such a brilliant piece of AI.
Been struggling to get custom agents to integrate reliably with external tooling, using frameworks like CrewAI with local LLMs. Would love a video guide explaining best practices for this.
This is a seriously neat tool 👏👏👏 Please add a feature to custom-develop agents with function calls. It would be helpful for our local automations.
It is shown in the UI that we will be supporting custom agents soon!
So training/finetuning is coming up as well? Loving the progress and process updates, keep up the great work Tim!
how'd you know!?
We will likely make some kind of external supplemental process for fine-tuning, but at least make the tuning process easy to integrate with AnythingLLM.
RAG + Fine-tune + agents = very powerful without question
@@TimCarambat That's awesome to hear!! I created an agent to get insider info, that's how I know of course!
@@FlynnTheRedhead !!!!! I thought i was hearing clicks during my phone calls!!!
@TimCarambat this really caught my attention. Not a technical guy (product designer) here, but it was easy to understand, and I'll definitely give it a try since I'm building some related products. Mini feedback: the UI could use some "love", as my boss says, but the overall experience feels natural... not too consumer-facing, but natural enough. Would love to hear more from you and your team if there's a chance.
Fabi, from Argentina 🇦🇷🤘
Feedback: I discovered this the same day as Fabric. If you have not looked at it, please do. If there were a way to include it inside AnythingLLM, it would be my dream tool. Well, that and a note capability for saving a quick note as text or audio that can be transcribed and turned into a note with a reliable, easy-to-read format - one button to record and stop/send, creating a note for later review right on the first page. I'm building something like that already as a standalone, but I'm a noob trying something simple to learn with. It would be great if someone who knows what they are doing could put it into a tool I already plan to use a lot in the future.
I love it so far. I have only been playing with it for a day.
This explained the RAG and agents parts I couldn't set up. Great educational content for those who are not programmers. I appreciate your explanations not presupposing the know-how that coders have - which most tutorials on YouTube do...
I still didn't get why there's a difference between @agent commands and just regular chat
In a perfect world, they are the same. AnythingLLM originally was RAG only. In the near future, @agent won't be needed and agent commands will work seamlessly in the chat.
So @agent is temporary for now, so you know for sure you want to possibly use some kind of tool for your prompt. Otherwise, it's just simple RAG.
There were some concepts I didn't quite understand; for example, tunneling from the Windows PC to the Mac (if it's on your local network, why work with VPN protocols rather than client/server - due to needing a stateful connection vs. 200 response code or something?). But the interface itself is brilliant! And I think that when it becomes agent-swarm-capable it's going to be a much better option for me than Crew AI, as it feels more intuitive, I am just going to need multiple agents working together. I have never installed a local LLM, but you have inspired me to give it a try. Thanks!
Thank you! Will test it for sure. I think you guys are on the exact right path 😎👍
Would it be possible to see a video of setting up your Ollama models in AnythingLLM? I followed these instructions, but my Ollama models never load.
I really appreciate your time to explain a lot of things to us. ❤🎉
I think it's a very good application, easy to use, and after testing it for a day or so, I have some wishes.
1. Direct commands that bypass the agent LLM in agent mode. It takes time for the agent to understand the sentence and convert it into an internal command, and URL parsing sometimes fails depending on the agent. For example: a command that scrapes a specified URL and shows the result, a command that lists the currently registered documents with numbering, and a command that summarizes a document by its number instead of its full name.
2. I wish there were a way to pre-test the settings in the options window to make sure they are correct, such as the specified LLM or search engine.
I hope this application is widely known and loved by many people.
Hi Tim, thank you very much for the great video showcasing open source LLMs and tools like AnythingLLM to create agents. I followed your video and was able to do everything in it successfully. Have you made other agent videos for other use cases? I look forward to seeing them. Cheers
Great job! This is wonderful! I will be responding after using to let you know my thoughts if you care to see them :)
I love this tool; I already made several workspaces, each with its own LLM and RAG. This video was a good how-to with clear explanations. I am a Python developer and I would like to create my own agents.
Absolutely everything I'm looking for. Thank you!
this demo is fire
Hi Tim.. I am absolutely impressed with the capabilities of AnythingLLM. Just a small query: how can I deploy it on a cloud machine and serve it as a chat agent on my website?
I actually want to add a few learning resources as PDFs for the RAG documents of this LLM, so that my users can chat with the content of those PDFs on my website.
I also want to understand how many parallel instances of a similar scenario, but with different sets of PDFs, are possible. For instance, if I am selling ebooks as digital products to my users, can I have unique instances autogenerated for each user based on their purchase?
We offer a standalone Docker image that is a multi-user version of the desktop app. It has a public chat embed that is basically a publicly accessible workspace chat window. You can deploy it in a lot of places depending on what you want to accomplish: github.com/Mintplex-Labs/anything-llm?tab=readme-ov-file#-self-hosting
For this, you could run one AnythingLLM instance with multiple workspaces, where each has its own set of documents, and then a chat widget for each. This would give you the end result you are looking for.
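The per-user setup described above could be scripted against the developer API. This is only a minimal sketch: the base URL, the `/workspace/new` endpoint, and the payload shape are assumptions here - verify them against the API reference shipped with your own instance before use.

```python
# Hypothetical sketch: provision one workspace per purchased ebook via the
# AnythingLLM developer API. Endpoint path and payload are assumptions --
# check your instance's API docs before relying on them.
import json

BASE_URL = "http://localhost:3001/api/v1"  # assumed default self-hosted port

def new_workspace_request(api_key: str, user_id: str, ebook: str) -> dict:
    """Build the HTTP request that would create a per-user workspace."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/workspace/new",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # One workspace per (user, ebook) pair keeps each buyer's documents
        # isolated, matching the multi-workspace approach described above.
        "body": json.dumps({"name": f"{user_id}-{ebook}"}),
    }
```

You would then send the built request with `urllib` or `requests`, upload the ebook's PDF to the returned workspace, and point that user's chat widget at it.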
Wow this is amazing... I'm gonna go star you!
Great video! I also want to know how well the RAG function of AnythingLLM performs. It's important that text, images, and papers are handled properly and that meaningful chunking is achieved.
I saved this months ago; I was just starting out on a new project and realized you're from the future.
Awesome! thank you for this. looking forward to more information/details/examples on using agents w/AnythingLLM!
If you aren't already, will you please look into integrating knowledge graphs into the app, along the same lines as the graphRAG project? Thanks for everything you do!
I actually have a development branch from long ago with KGs in it. I didn't find the performance much more remarkable, but the reason it never made it to prod was that some OSS LLMs were performing horribly when trying to create nodes/relationships with the graph DB I was using.
@TimCarambat - Legend! Can you show how AnythingLLM can interface with / coordinate / use your defined Ollama agents?
looks great - will take a look after I have played around with Llava and Ollama.
Super useful, thanks very much! I haven't been able to get the Google integration to work reliably yet; I'm not trusting what it pulls back when I ask for (what I think) are simple things, like the ingredients for a very specific product. Scraping that site and pulling in the page data fixes that. Again, very cool!
Great things shown.
Tx for all the work and commitment.
🎉 Here is a dedicated use case I am interested in:
I am a mind mapping addict. I use Mind Manager, which stores mind maps in .mmap format.
I would like to ask AnythingLLM to help me scan all folders for mind maps on different subjects and RAG & summarize them, without having to export all the .mmap files to another format. Is this doable at this stage? What else would I need to have, or to create?
This is really cool. The best I have used so far.
@sergiofigueiredo1987, @TimCarambat, I agree with Sergio. Wow! I have Ollama installed locally on a Windows machine in WSL. (I was leery of the Windows preview, but I may switch because NATing the Docker container is a pain.) I also pondered how to build a vector DB on my machine and integrate agents. You guys have already done it!
This is the easiest all-in-one platform. Thanks. More videos please ❤
@TimCarambat
Hey Tim, it won't let me select anything under Workspace Agent LLM Provider, even though everything is set up and working. Ollama is obviously running, and everything else in AnythingLLM is using Ollama fine in the app, but this selection option doesn't show like yours does.
@Tim: I am a professional translator (English to French), and I've just discovered AnythingLLM. Sometimes I have to translate confidential documents that cannot be shared on the cloud. They need to remain locally on my own computer. Once the translation is done, they have to be encrypted to be sent to clients.
Could I use AnythingLLM to help me with the translation process?
Could I use it with my actual lexicon, glossaries, and personal dictionaries? Most are PDF or DOCX files.
How would I do that? What are the first steps?
Many thanks if you can give me some hints on how to proceed.
I'm now a new subscriber! 😊
Great demo, but if I'm hearing this correctly from other comments - you cannot implement your own GUI on top of AnythingLLM to interact with it from your own internal website/app? If that's correct, I can only demo/tinker with the tool and not implement anything real in my company for internal use. Novice here, but I like the approach. I can't tell how much data you can train your own LLM on, but I will keep searching for info.
Awesome presentation.
I know it's a huge ask, but it would be great if it could listen to inputs and active windows. It would be really cool if it could capture and describe my workflow; I could analyze what I am doing, and then have it generate macros for me.
Man, I faded this project when it first came out; now I'm like... wow...
@@pr0d1gyvisions74 still free!
Billionaire alert 🎉 Seriously dope content, easy to understand, effective communication.
This is perfect; it just needs more tools and agent customization like CrewAI, and it is going to be an absolute killer for the AI industry.
Will be coming soon! Just carving out how agents should work within the context of AnythingLLM, and then we should be good.
Also, it would be nice to be able to just import your current CrewAI setup and use it in AnythingLLM - saving you the work you have done so far.
This is so cool! 🎉🎉🎉🎉❤
Looks really good and simple. I tried PrivateGPT using conda/Poetry and could never get it to work, so I jumped into WSL on Windows, connecting to Ubuntu running Ollama via a web UI. It works great, but this just looks so much easier. I will have to give it a try. What I do like with the web UI I have is that I can select different models, and even use multiple models at the same time.
Yeah, we didn't want to "rebuild" what is already built and amazing, like text-web-gen. There's no reason why we can't wrap around your existing efforts on those tools and just elevate that experience with additional features like RAG, agents, etc.
This is a gem
Productive criticism input: instead of saying "we do this... that...", say "typing / clicking / ... will ..." - this will provide 1. clear instruction(s) to follow and 2. an explanation of the purpose of that action, all in one go, saving time and reducing / eliminating confusion.
Thank you, you are correct - that instruction would be much more clear.
You should format your comment for better readability...
Thank you! This is so awesome!
Great work. Thanks a lot 🙏
Awesome tool! Installed it today and I'm super hyped to have such a powerful tool running on my PC! I was wondering if it's possible to access the agent functions through the API? Couldn't find anything about it in the documentation.
@TimCarambat I'm very impressed with AnythingLLM, particularly how you can easily incorporate additional capabilities with agents. I'm trying to create a GPT chatbot in Slack without using OpenAI, and at first, I was going to use LM Studio since it has a "server" component - where you can pass it API calls and reflect the answer in a Slack bot.
I've looked around and I do not see this feature in AnythingLLM - is this coming or planned? I'd love to drop everything and just use your excellent tool.
We have an API that runs in the background. You can make an API key and communicate with AnythingLLM's workspaces via a Slackbot to accomplish this.
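The Slackbot relay described above could look something like this. It's a hedged sketch only: the `/api/v1/workspace/{slug}/chat` endpoint, the `{"message", "mode"}` payload, and the `textResponse` field in the reply are assumptions - confirm them against the API reference on your own instance.

```python
# Hypothetical sketch: relay a Slack message into an AnythingLLM workspace
# chat and pull out the reply to post back to Slack. Endpoint path and
# response shape are assumptions -- verify against your instance's API docs.
import json

def build_chat_request(base_url: str, api_key: str, slug: str, text: str) -> dict:
    """Build the HTTP request a Slack event handler would send per message."""
    return {
        "method": "POST",
        "url": f"{base_url}/api/v1/workspace/{slug}/chat",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"message": text, "mode": "chat"}),
    }

def extract_reply(response_json: dict) -> str:
    """Pull the assistant's text out of the (assumed) response JSON."""
    return response_json.get("textResponse", "")
```

A Slack bot would call `build_chat_request` for each incoming message event, send it with any HTTP client, and post `extract_reply(...)` back to the channel.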
Tim, you are the man. Great work from you and your team on all of this great software. You are making the complex simple. One request, or a pointer in the right direction: are there any CLI tools to execute the agents and output the result to any of the workspaces? The GUI is great, but at some point all of that needs some automation. Anyway, keep up the great work.
This is a great point, so we do have an API that you can use but it currently does not support agents :(
Guess I know what needs to be fixed now - you are not the first to voice this!
super-killer-mega-feature!!!111
Really awesome demonstration. I am excited about agents. Would be nice to be able to build custom tools in python for agents to use.