Run your own AI with VMware: ntck.co/vmware Unlock the power of Private AI on your own device with NetworkChuck! Discover how to easily set up your own AI model, similar to ChatGPT, but entirely offline and private, right on your computer. Learn how this technology can revolutionize your job, enhance privacy, and even survive a zombie apocalypse. Plus, dive into the world of fine-tuning AI with VMware and Nvidia, making it possible to tailor AI to your specific needs. Whether you're a tech enthusiast or a professional looking to leverage AI in your work, this video is packed with insights and practical steps to harness the future of technology. 🧪🧪Take the quiz and win some ☕☕!: ntck.co/437quiz 🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy **Sponsored by VMWare by Broadcom
Dude this is mind blow... Can't be focus on what to say , my mind just boom evry second of this clip. I wish one day I can try that on my PC. Wish still available.
Holy shit, you might be joking, but this really feels like a logical step 1 for a rouge AI trying to replicate itself. I am starting to feel AGI really is just around the corner...
Hey @NetworkChuck I'm Emilien Lancelot, the guy behind the privateGPT tutorial on Medium. Wanted to say thx for the shoutout in your video - it truly made my day and I'm happy to have contributed to all the amazing opensource softwares related to AI that has emerged this year. Great video btw. Keep up the excellent work in creating informative content. It's always a pleasure to watch ! ;-)
I've already answered three times but youtube keeps deleting my comment for no reason... I know that the last privateGPT update broke a few things. I'll try and update the tutorial ASAP. Hang tight ^^. Not sure how much time this comment will stay up this time... lol.
why is docker more efficient than VMware, sorry if its obvious. also i see vscode i see tabnine, does that use local AI with global ai on your local code. @@itsTyrion
I am no tech guru. I'm slightly more proficient than an average person, but I was surprised that I got llama2 set up in about 2 minutes! I already love having this AI at the palm of my hand! Thanks for giving me a tool to make my life easier!
@Edelreister I mainly use it with coding projects I'm working on (not run locally, I'm way too poor to afford anything better than a $100 laptop). It is also pretty good with general knowledge questions you might need in a workspace (of course, not up to date).
@@TheguyrondMy protip to you if your handy guy. Try to collect a bunch of trashed laptops and eventually get all the parts to get your first good laptop. Its how I started with my frankenstein laptop haha.
you would freak out if you know that him saying that wasn't to bob, it was to you, to get you to say this predictable thing as a surreptitious way for him to gaslight you into channel engagement because people who are susceptible to manipulation and reverse psychology and anticipatory place setting will behave as expected
It blows my mind how fast all this AI stuff is maturing. The gravity of knowing you can literally have an AI model for private use is astounding! Just incredible to see this stuff unfold. What a time to be alive.
Is it? Because i'm not impressed, not even a little. All it's doing is trying to match information you have given it and present it in like a human like way. It doesn't know shit and can;t check if the info is correct or not. It's all based on a most likely scenario. This will never work a 100% or atleast not with todays technology.
On the one hand having a local AI that can answer questions I'd normally turn towards the increasingly-useless internet for is great for privacy and great for results. On the other hand, LLMs are the reason why the internet is now so useless. The signal-to-noise ratio has absolutely plummeted since SEO scammers have been plastering the net with LLM-written articles on every possible topic, all derived from the same source. Even the images and charts you find on websites are AI-generated now, and often filled with gibberish or bizarre anomalies. The danger now is also that future LLMs will be trained by crawling these very same websites. LLMs can't know about things that have happened recently, and LLMs can't be trained on recent things because older LLMs have polluted the body of human knowledge and buried anything new, and the sheer volume of these sites means new LLMs will be over-trained on that garbage and produce only garbage thereafter. When it comes to creative output, putting aside the obvious copyright law issues in source data for training, the models are incapable of creativity. They simply produce what has already been done, again, in a probabilistic way. From my own testing on as many models as I can, the types of stories these LLMs are able to produce are very uncreative and very repetitive across multiple queries. As a creative professional, I know these models can't replace my artistic output on merit, but the models are so cheap to use, they'll be used anyway. This means we're headed for a cultural black hole of extremely boring and generic stories and art, with nothing but very slight variations on the same themes. They're cliche generators, nothing more, but they can create a massive amount of output very quickly. The smartest use for these LLMs is in finding personalized recommendations for consuming existing pre-AI media, and in finding connections between various concepts and stories that aren't immediately obvious. Those are things that LLMs can do that they're actually good at, and provides benefit to the user. They might be able to inspire creativity in actual humans, by giving humans so much information at their fingertips that they quickly satisfy every curiosity and spend more time thinking about the information they have instead of searching for more information. Otherwise, we'd all be much better off without AI, and a human-generated internet where we learn from each other directly and have real human relationships (even if over copper and fiber optics). The internet was pretty great back when it was a bunch of niche forums with people talking directly to each other, becoming friends with strangers from all over the world, and getting very personalized interactions and enabling human collaboration in novel ways.
@@rwshanksince when was it possible to create your own ai with no limitations as explained in the video since when did you learn that can you explain a little bit
If anyone wants to take this ollama to the next level on your home server like I did there is an ollama docker image available and an ollama-webui docker image. The webui lets you manage it all with no command line over the web or lan. You can download models with the webui, delete models, etc, its really nice.
Hey i think u know alot bout ollama sp wanted to ask where can i get the uncensored version of it cause all the versions there are not working properly or at all so wanted to know if i get any help on this
Where should I start off from a beginners standpoint, I really think this time AI will take over everything and I want to be on board and catch up as I dont want to be left in the cold but dont know where to start, thanks if anyone who bothers to help me find my direction
This video gave me everything I needed to complete some projects. We had a very specific need for a chatbot to output a custom code based on LUA, for our custom LUA toolset for one of our new products. Thanks Chuck!
Realistically, it should take 3 months or so. Jeremy's IT Lab is a great source of information and has everything needed to pass. Unfortunately, it's not easy,
@@onlyforyou9999 It was recently purchased Broadcom. Their MO is to drop low income (those without megacorp $$) customers and squeeze the customers they do keep for as much money as possible. Most in the IT field also know Broadcom as the place where software goes to die.
@@onlyforyou9999They just killed their free ESXi hypervisor. Also, lots of businesses have been apparently jumping ship since they were acquired by Broadcom at the end of 2023.
@@dinom3106 You can get uncensored llms with ollama (shown in this video) dolphin-mixtral works pretty good. I haven't been able to get privategpt to work yet tho so idk
Everybody is taking about low code or ai making coder unnecessary but this is where I think the industry is going. We will all be developing our own proprietary ai's that solve problems from the perspective of our company. Think about it, will you really give your private company's data to the open market? No!! It'll be on your own private servers where you can run your own ai. Great stuff Chuck!
@@killerx4123 I wouldnt say ALL, but many do. This was always the "Fear mongering" around AI, people forgot that corporate acceptance is naturally VERY SLOW, out of fear for the unknown. They didnt just "jump to the cloud" like they wont all just "jump to AI".
No. There will be *one* company selling an AI model via subscription (probably Amazon or Microsoft) and shilling it to everyone via cloud, while simultaneously paying copious amounts of money to law firms to shield them from all the class action lawsuits due to some random script kiddies hacking the cloud databases, leaking all the companies confidential data straight to the web. So basically what we already have now but cranked up to the max.
It keeps boggling my mind how much knowledge you have about every aspect on networking and the passion of cybersecurity and sharing knowledge. Great work!
Yet he is telling people to run random scripts off the internet, not even mentioning that you should always read through things like that before doing 'curl someurl|bash' He is doing the equivalent of telling people to always dig straight down in Minecraft He also forgot to mention that the tools he is talking about might be available from the Linux distributions package repo, or (like on Arch) in a 3rd party repository
@@eriklundstedt9469 well that is your interpretation. This is a channel that explains how certain software work and is mainly a cybersecurity teaching channel, or for people that starting their career or like to learn about software and cybersecurity. Chuck knows what he is doing, so it is general knowledge that you first do your research yourself before you try anything or install software online. And with this knowledge comes the basic assumption that you already have the knowledge to be able to do this in a safe environment. And otherwise he has alot of video's to get more knowledge in this sector. So you should do your research first, before criticize someone who is a expert in this field. You might learn a thing or two ;)
I'm about half way through this video, and I have to say I think this is the best thing I've heard since I started using AI... that we can have our own private AI... I'm very sick of the moral suggestions and needing to word things differently to get answers to questions. I have a feeling this can help.... thank you !!
This is absolutely incredible, I installed Lama on my phone, and even with the phone in airplane mode, it is able to answer questions of things I would never know how to do without some kind of search function. I asked how to install Nvidia drivers on a Linux pc, how to insert a Tampon, ect.. This could be a life saver in an off grid situation
This is really cool and I appreciate you showing how to accomplish this. I will say though, it's a little concerning to hand out tons of information for your "Private" AI model off to co-operative large corporations to train them. It doesn't necessarily matter if you're doing it for fun, but it's something you should keep in mind If you plan on doing this. What VMWare is offering is not exactly "private".
I am not worried about privacy. The LLM you are changing is local, so treat that as confidential as your other files. The VMware involvement only means that they will charge two arms and three legs for it, and don't even bother talking to them unless you are a Fortune 500 company.
Hey Chuck! just found your channel. I have to say your videos are so well done production wise (and every other way you measure them). Have you ever done a video of how you do them....your setup your work flow etc....how many hours you have the avg video. I love how fast you talk and how quickly the videos move....not boring. So many IT video guys are BORING! Hats off to you.
WAIT!!!!!!!!!!!!!!!!!! VMware is your sponsor? You will need to address this Chuck. Didn't VMware just get bought out and they are killing their support for small to med size users as in the people most likely to be watching you vids. Most people are moving to a different hypervisor now. VERY STRANGE .
I will have to check this out, I chose proxmox a few years ago, because Debian. Nvidia is not exactly the most ethical either.. Next sponsor will be RedHat lol
Actually wait. After careful considerations, VMware is doing exactly what we need. When cars were new there were too few driving them Chrysler gave out interesting cars. They didn't sell the turbine well. But it got people talking about them. Any new technology first needs attention. Then we need it to be scaled down to consumers. But why would companies provide affordable options for individuals? They won't. Targeting businesses will cause a boom in the industry aimed at lower scales. And we're finally the next step. I'd love for affordable dedicated AI acceleration chips with memory to run it. Maybe one day. For now, this is a big step in the right direction. VMware may be evil blah blah blah. But they're not as treacherous as Blizzard. And even with that, I'd still accept any improvements to any game made before Immortal. As long as I don't need to spend another cent on them.
Dude i watched this when it came out and now I'm obsessed. Got my own open webui ollama server now, with local llms, api connections to open ai, anthropic etc, built my own model router. Thanks for ruining my life 😂
@NetworkChuck. Bro I don't think there is any other channel on RUclips I came across in my lifetime of using the internet that comes close to anything you teach and you give it for free. Love your work man. God bless you and your family. I hope your parents are really proud of you. 😂
Hypothetically, link a morpheus-1 or similar neurological device to the ai monitor and have it able to align and understand and see the visual reps as well as a model of the brain. It could probably mirror the data to make a image or feed so the outside could watch like a camera but using the brains waves as data to map out the feed and compile it. Maybe use data from scanning the eye and understanding the layout connected to the data so its easier to align. It could potentially do sound as well so we could talk to each other without using our mouths which is not too crazy until you think of how we call each other on the phone or facetime. This could project the feed to a tv linked to the wifi same as the phone linking. The possibilities are pretty amazing honestly.
He literally addresses this. Windows comes with Linuux subsystems. So you can literally follow this guide and have the same thing those of us on Linux and Mac have.
@@alphaobeisance3594did you not read the comment? The OP has said that “Ollama is available on windows”, so there is now no need to use the Linux version on WSL.
First time seeing you, or hearing you, boy am I late to the game. Thus far, I've only heard the first 5 minutes and I am blown away by your positive energy and your deliverance. Kudos! To an amazing personality.
I heard from a video on openais channel that finetuning is mainly for restructuring requests that an ai already knows about/was trained on, reducing tokens/request. Instead when you want an LLM to learn about your data or data it was not trained on, that is when you turn to RAG
One thing to keep in mind with public faciing private AIs is that it is almost certainly vulnerable to attack by bad actors. Having private customer data accessible through and AI can be a dangerous game for things like PHI
Agreed. Also, I suppose rhat AIs in this stage are vulnerable to a whole other kind of attack, the social one. I mean, I can easily trick GPT-4 into writing a sqlinjection query for me and I'm not a social engineer nor an hacker, I suppose that real bad actors are a thousand times better than me
@@alemutasa6189 Thats exactly what I mean, yep. Standard attack vectors exist obviously as well, but the AI having access to information introduces a point of failure that you have no real control over.
@@Benthorpy That is another usecase, yes. But I am saying to be careful training an AI on any proprietary company data, or protected information in general. For example, an insurance agency wanting to use an AI to go between a customer and a their data. That usecase is inherently insecure, and should be considered carefully.
Thankyou so much for this because I was ready to quit my job because I'm being denied the ability to utilize these tools. You showed me that we can pivot and utilize these tools. We can and should try to utilize private AI where possible.
Thanks so much for this! I've got it running on my local server, fully private but accessible from anywhere. Now I've just got to work out how to get it to learn from my chats thaat I have with it!
I know You May See this But I Think You Ignited My Love for tech and introduced me into it you make learning it interesting and not boring thanks a lot for what you do and dont stop am rooting for you
Use a computer that doesn't have an internet connection. None of my computers have wifi in them. The one I'm typing on now is connected directly by cable to the router. Unplugging the cable guarantees it is disconnected, although I don't plan on using this computer. I have an old laptop that has no wireless at all..
ollama is safe. I can't speak to the safety of any other software he mentioned, but always use open source software and check the source if you are worried about internet connection leakage.
Hello NetworkChuck, I have so many questions. Thank you for sharing this video, it’s something that I want to do but don’t think my computer can handle it yet. Here’s my questions 1. What formate is your journal notes in, pdf, html? 2. Does this AI control your computer also? 3. Does the AI has a speaking function?
If you want to use AI privately, go all the way and use uncensored models that have had their biases stripped out of them (mostly, as there will still be some underlying bias in the training data). From there, you can fine tune it as you want. There are some good ones like dolphin-mixtral and wizard-vicuna-uncensored that will happily answer questions other models will try to shame you for asking or even outright refuse even though the models do know the answers. In some cases you may need to begin the session with some prompts that will force it to reject annoying moralizing. Depends on your queries whether this is necessary to get truly uncensored responses. There is absolutely no reason to run the highly censored base models like the ones straight from Mistral, Meta, OpenAI, or Google, when there are de-censored versions to use instead. If you are going to make an AI be customer-facing or use it in some critical application where you want biases and censorship, fine-tune an uncensored model with your own particular biases and censorship needs that make sense for your own particular application. The big companies are running their own agendas and these may not be compatible with yours or your company's. This is pretty trivial to do, but always start with as raw a model as you can get as your base model for that, so you're not unwittingly letting in biases and censorship you don't intend for your final app. I understand the PR and political reasons why companies aren't willing to put out uncensored models themselves, but it does make those models really bad platforms to build off of without considerable retraining by the open source community afterwards. By the way, a few days ago Elon Musk released xAI's Grok-1 base model weights and architecture with 314 billion parameters, under the Apache 2.0 license. It's a pre-training checkpoint, people will need to train it to be useful as a chat bot or anything like that. But people are already at work on it to make it into models we can run on consumer hardware, like these other models talked about here. If the underlying data is at least as good as the Mixtral model, this will be a very big deal because of the open weights and very permissive license. Hopefully other companies will eventually be forced to follow suit with their own models. There's no future in closed source AI, only a lot of venture capital being squandered by grifters. With any luck, a lot of tools will be built around the raw Grok-1 files and others like it and allow much better training and fine-tuning than the other more closed models require to get right. This will lead to more trustworthy and open base models to build off of and make use of privately or in public-facing ways.
I always wanted to trade crypto for a long time but the volatility in the price has been very confusing to me although I have watched many RUclips videos about it but still find it difficult to understand.
5:38 There's actually an error in the greeting the shell gives you, as it calls the Kernel "GNU/Linux", but the Kernel itself is just "Linux". Only the userspace (the shell, etc.) is from GNU, so you may call the OS "GNU/Linux" (I don't), but it's factually wrong to call the Kernel "GNU/Linux".
Immersed in the world of storytelling and video exploration lately. VideoGPT silently joined the journey, enhancing my content with its subtle yet impactful touch.
@@devilsgaming9796Broadcom bought VMWare and is killing off ESXi for home users (free version), and screwing over VMware partners they don't think are big enough, killed vSphere (I think) and a bunch of other things that are going to mean that a lot of people who were using VMware are now utterly screwed, needing to migrate to another platforms. This sponsorship is likely part of their damage control.
@@devilsgaming9796VMware was recently bought by Broadcom. Broadcom is making them raise prices and customers are pissed. I heard some people are switching to alternatives like Nutanix.
@@devilsgaming9796They are kicking and removing all small too medium size users. Which is majority of there users, all for bigger companies. (They got bought out)
@@devilsgaming9796VMWare was bought by Broadcom, a company notorious for buying tech companies and milking them dead. As a result, VMWare removed the free version of ESXi and replaced the perpetual licenses with waaaaay more expensive subscriptions. As an example, my workplace is in the process of replacing our DC and we considered migrating to VMWare. Turns out the VMWare licenses alone would cost more than the actual hardware, so we went for something else.
You are an amazing teacher! Honestly I thought I was cool by understanding "Prompt Engineering" but I am mickey mouse now I know how to train AI- amazing!!! To develop and use a private AI chat gpt is just "out - of - this - world!!!
If you kept watching you'd know he's using Private GPT actually not llama2. That was just to introduce the concept (and paid sponsorship from VMWare). Either way, you can use it without internet. The important part here is the RAG concept getting applied privately.
Here is how you can "Run your own AI (but private)"...then proceeds with a 10 min ad about VMware were you need internet and $1000-$4000 per month to use 🤡
Hi Chuck! I followed his guide and ran into the same issue both times. All is fine and dandy until I get to "poetry install -- with ui" and I get this back: "Groups not found: ui (via --with). Same goes for "--with local". Stackoverflow doesn't have much of an answer. Any input?
Maybe it had an anxiety attack/Kernel panic at the thought of encountering Vogon poetry in the original LLM data. (The Hitchhiker's Guide To The Galaxy fans will understand.)
🎯 Key Takeaways for quick navigation: 00:00 *🤖 Setting up Private AI and its Importance* - Setting up private AI locally. - Importance of privacy and containment of data. - VMware's role in enabling private AI for companies. 02:02 *🧠 Understanding AI Models and Hugging Face* - Overview of AI models and their pre-training on data. - Introduction to Hugging Face as a community for sharing AI models. - Exploration of LLMs (Large Language Models) and their pre-training process. 04:28 *🛠️ Installing O Lama and Running LLMs* - Installing O Lama tool for running LLMs locally. - Compatibility with different operating systems. - Demonstration of running LLMs like Llama two and its performance. 07:58 *🚀 Fine Tuning AI Models with VMware* - Explaining the concept of fine-tuning AI models. - VMware's approach to fine-tuning AI models for internal use. - Hardware and software requirements for fine-tuning, showcasing VMware's tools and infrastructure. 15:26 *🧠 Advanced AI Tools Overview* - Overview of advanced tools for fine-tuning language models. - Nvidia offers comprehensive tools designed around their GPUs. - Introduction to RAG (Retrieval-Augmented Generation) for enhancing model responses by consulting a knowledge base. 16:53 *🛠️ Utilizing RAG for Model Enhancement* - Explanation of how RAG can augment model responses by consulting databases. - Illustration of using RAG to provide accurate answers without retraining the model. - Integration of personal notes and journals with a private GPT model using RAG for personalized interactions. 17:22 *💡 Collaboration between VMware, Nvidia, and Intel* - Overview of collaborative efforts between VMware, Nvidia, and Intel for AI development. - VMware provides infrastructure, Nvidia offers AI tools, and Intel supports data analytics and machine learning. - Highlights the flexibility for users to choose their preferred AI technology stack. 18:20 *🏗️ Setting Up Private GPT with RAG* - Introduction to setting up a private GPT model with RAG for personalized interactions. - Disclaimer on the complexity of the process compared to VMware's integrated solutions. - Acknowledgment and gratitude for community-contributed guides and resources for project setup. 19:50 *🖥️ Implementing Private GPT with Personal Documents* - Demonstration of integrating personal documents with a private GPT model for tailored interactions. - Steps for ingesting documents and querying the model for personalized information. - Recognition of the potential of private AI for personalized and efficient interactions. 21:24 *☕ Sponsor Message and Conclusion* - Acknowledgment of VMware by Broadcom for sponsoring the video. - Invitation to participate in a quiz for a chance to win free coffee. - Encouragement for viewers to explore VMware's private AI solutions. Made with HARPA AI
one question can I create a private chatbot that skims through the documents stored locally and run via Linux commands. I want to deploy that chatbot on my website, keeping my data private simultaneously allowing the users to interact with my chatbot for QNA. How can I achieve this, please anyone help me.
Update on my new mac pro, running 5 different ai trained. One is fully functional. Other 4 limitations. Thank you for this video. Will be writing code to have them run processing questions all together and fine tuning it
Could you please giveaway some laptop cause everyone including me starting college few months.I am very gladfull to you its help me lot cause in my cybersecurity journey. Hope you are reading it❤
Ok so i just stumbled into this video and it was f#$%in gooood. And then realize what the channel is about and subscribed asap. Hoping for more content! Great stuff
it might help security as well! The ai can scan the pattern of your device usage from programs, down to hardware level and might alert you if there's some weird actions inside that you aren't usually doing.
With computing power being what it is, the future is everyone having a secure Data Bubble, which connects your phone, home and car. It will connect to the Cloud when it needs to. Also, AI will become task specific for each area which needs it. Much like R2 and C3PO. Each has their own tasks. Your home vs your car AI.
Private AI really good way, fair to have privacy, your own time, your own space, your own interests, your own meanings and things that u do not want to share. Hope this Private AI will grow always
He said "I don't know how many monitors you have." and I showed three fingers and said "Three" and then he said "You have three" and I instantly got surprised.
Run your own AI with VMware: ntck.co/vmware
Unlock the power of Private AI on your own device with NetworkChuck! Discover how to easily set up your own AI model, similar to ChatGPT, but entirely offline and private, right on your computer. Learn how this technology can revolutionize your job, enhance privacy, and even survive a zombie apocalypse. Plus, dive into the world of fine-tuning AI with VMware and Nvidia, making it possible to tailor AI to your specific needs. Whether you're a tech enthusiast or a professional looking to leverage AI in your work, this video is packed with insights and practical steps to harness the future of technology.
🧪🧪Take the quiz and win some ☕☕!: ntck.co/437quiz
🔥🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy
**Sponsored by VMWare by Broadcom
ok
There is a Windows Beta and its working fine , great video.
Without the crazy far left leaning politically 'correct' restrictions and filters?
Looks like VMWare is trying to do some damage control after the bad press of the 10x price increase.
Dude this is mind blow... Can't be focus on what to say , my mind just boom evry second of this clip.
I wish one day I can try that on my PC. Wish still available.
Step 1, AI recruits Network Chuck to convince us to install it on all of our computers.
Holy shit, you might be joking, but this really feels like a logical step 1 for a rouge AI trying to replicate itself. I am starting to feel AGI really is just around the corner...
Irobot confirmed
Hahaha super ai botnet?
It is Skynet.
Fuckit, I may as well join the robots. Team humanity has been a disappointment. How much worse can AI be?
Hey @NetworkChuck I'm Emilien Lancelot, the guy behind the privateGPT tutorial on Medium. Wanted to say thx for the shoutout in your video - it truly made my day and I'm happy to have contributed to all the amazing opensource softwares related to AI that has emerged this year.
Great video btw. Keep up the excellent work in creating informative content. It's always a pleasure to watch ! ;-)
your guide is out of date, can you update it?
U must be pinned 😅
Is there a voice input similar to the one found on ChatGPT so we can talk to it?
@@AlexManMe Are you refering to Whisper ? It's opensource voice recognition.
I've already answered three times but youtube keeps deleting my comment for no reason... I know that the last privateGPT update broke a few things. I'll try and update the tutorial ASAP. Hang tight ^^. Not sure how much time this comment will stay up this time... lol.
BTW, you don't need a VM for different AI apps, you can virtualize Python environments way more efficiently with conda or pyenv-virtualenv.
or a docker or ~~LXC~~ Incus container.
edit: Incus not LXC. F the recent LXD changes by canonical
You’re awesome ❤
why is docker more efficient than VMware, sorry if its obvious. also i see vscode i see tabnine, does that use local AI with global ai on your local code. @@itsTyrion
Where would you look for a developer to help set this up for a business?
Maybe. But you'll need to make a video about how to do it if you want to keep up.
I just recently got my dedicated AI machine. You just saved me a couple of hours of study time. Thanks!
Specs please.
@@infiniteunity1667 512GB mem, 2x A6000 GPU, AMD Threadripper top spec CPU. 16k€ computer for our lab.
I am no tech guru. I'm slightly more proficient than an average person, but I was surprised that I got llama2 set up in about 2 minutes! I already love having this AI at the palm of my hand! Thanks for giving me a tool to make my life easier!
What kind of jobs do you assign to ollama2? can you give me examples pls?
I dont have any idea what can I do with ollama 3.1
@Edelreister I mainly use it with coding projects I'm working on (not run locally, I'm way too poor to afford anything better than a $100 laptop). It is also pretty good with general knowledge questions you might need in a workspace (of course, not up to date).
@@TheguyrondMy protip to you if your handy guy. Try to collect a bunch of trashed laptops and eventually get all the parts to get your first good laptop. Its how I started with my frankenstein laptop haha.
@@g.v.m7935 Too bad he doesn't live near me. I have about 30 partially functional laptops in the garage. LOL
I know Bob with 3 monitors is probably freaking tf out right now lmfaoo
you would freak out if you know that him saying that wasn't to bob, it was to you, to get you to say this predictable thing as a surreptitious way for him to gaslight you into channel engagement because people who are susceptible to manipulation and reverse psychology and anticipatory place setting will behave as expected
ITS MEEEEE
@@h4ckh3lphwat?
no wayy i thought i was the only one askdjnaskljdnaslkdjnasldknas
@h4ckh3lp Nah, he acted in good faith and didn't put all that thought, for such little ROI.
It blows my mind how fast all this AI stuff is maturing. The gravity of knowing you can literally have an AI model for private use is astounding! Just incredible to see this stuff unfold. What a time to be alive.
Is it? Because i'm not impressed, not even a little. All it's doing is trying to match information you have given it and present it in like a human like way. It doesn't know shit and can;t check if the info is correct or not. It's all based on a most likely scenario. This will never work a 100% or atleast not with todays technology.
Why @@NOBODY-oq1xr
On the one hand having a local AI that can answer questions I'd normally turn towards the increasingly-useless internet for is great for privacy and great for results. On the other hand, LLMs are the reason why the internet is now so useless. The signal-to-noise ratio has absolutely plummeted since SEO scammers have been plastering the net with LLM-written articles on every possible topic, all derived from the same source. Even the images and charts you find on websites are AI-generated now, and often filled with gibberish or bizarre anomalies. The danger now is also that future LLMs will be trained by crawling these very same websites. LLMs can't know about things that have happened recently, and LLMs can't be trained on recent things because older LLMs have polluted the body of human knowledge and buried anything new, and the sheer volume of these sites means new LLMs will be over-trained on that garbage and produce only garbage thereafter.
When it comes to creative output, putting aside the obvious copyright law issues in source data for training, the models are incapable of creativity. They simply produce what has already been done, again, in a probabilistic way. From my own testing on as many models as I can, the types of stories these LLMs are able to produce are very uncreative and very repetitive across multiple queries. As a creative professional, I know these models can't replace my artistic output on merit, but the models are so cheap to use, they'll be used anyway. This means we're headed for a cultural black hole of extremely boring and generic stories and art, with nothing but very slight variations on the same themes. They're cliche generators, nothing more, but they can create a massive amount of output very quickly.
The smartest use for these LLMs is in finding personalized recommendations for consuming existing pre-AI media, and in finding connections between various concepts and stories that aren't immediately obvious. Those are things that LLMs can do that they're actually good at, and provides benefit to the user. They might be able to inspire creativity in actual humans, by giving humans so much information at their fingertips that they quickly satisfy every curiosity and spend more time thinking about the information they have instead of searching for more information. Otherwise, we'd all be much better off without AI, and a human-generated internet where we learn from each other directly and have real human relationships (even if over copper and fiber optics). The internet was pretty great back when it was a bunch of niche forums with people talking directly to each other, becoming friends with strangers from all over the world, and getting very personalized interactions and enabling human collaboration in novel ways.
Why do you bless us with such fun little projects to do all the time? I’m so thankful man thank you
@@rwshankI’m sure there are, but that’s why we like NC, he brings them to us.
@@rwshanksince when was it possible to create your own ai with no limitations as explained in the video since when did you learn that can you explain a little bit
No kidding. I don't need another project!
@@rwshank can you please name them?
@@Wormsandconditions oogabooga webui, koboldai, tavern ai, Llamafile etc.
If anyone wants to take this ollama to the next level on your home server like I did there is an ollama docker image available and an ollama-webui docker image. The webui lets you manage it all with no command line over the web or lan. You can download models with the webui, delete models, etc, its really nice.
Hey i think u know alot bout ollama sp wanted to ask where can i get the uncensored version of it cause all the versions there are not working properly or at all so wanted to know if i get any help on this
Where should I start off from a beginners standpoint, I really think this time AI will take over everything and I want to be on board and catch up as I dont want to be left in the cold but dont know where to start, thanks if anyone who bothers to help me find my direction
@@DestroyerofBubbles ruclips.net/video/TR7AGmey1C8/видео.html
CS50 AI
This video gave me everything I needed to complete some projects.
We had a very specific need for a chatbot to output a custom code based on LUA, for our custom LUA toolset for one of our new products.
Thanks Chuck!
hey, i just got my ccna all thanks to your videos! i just wanna say thanks for everything
Hey ,what material did you use to study and how long did you study?
Did you just use youtube videos only or udemy courses?
Im studying for ccna right now. How hard was it?
Realistically, it should take 3 months or so. Jeremy's IT Lab is a great source of information and has everything needed to pass.
Unfortunately, it's not easy,
nice! thanks! @@maverickmace9100
@@maverickmace9100 Hi, can I know many many hours did you spend on average per day?
Absolutely hilarious that VMware sponsored this. Nobody should bother with VMware any more.
Why bro is that company fraud bro? I don't know anything about all this bro
Was looking for this comment, wtf
@@onlyforyou9999 It was recently purchased Broadcom. Their MO is to drop low income (those without megacorp $$) customers and squeeze the customers they do keep for as much money as possible. Most in the IT field also know Broadcom as the place where software goes to die.
@@onlyforyou9999They just killed their free ESXi hypervisor. Also, lots of businesses have been apparently jumping ship since they were acquired by Broadcom at the end of 2023.
@@onlyforyou9999bro I think bro that virtual box is better bro…bro
After facing difficulty to run PrivateGpt previously, this is the one video that I needed the most. Thank you so much chucky chuck . Hehe
Is this uncensored though? Cos i got rid of Gemini cos it was so limited and biased
@@dinom3106there are certain models that are and they do work
@@dinom3106 You can use the "uncensored" Llama, but probably Dolphin Mixtral is your friend here.
@@dinom3106Yes, it just told me how to make boom booms lmao
@@dinom3106 You can get uncensored llms with ollama (shown in this video) dolphin-mixtral works pretty good. I haven't been able to get privategpt to work yet tho so idk
I am confident that you are the only person on earth who says "may-da" when saying Meta.
Who knew the friend of Lightning McQueen was so talented?!
Everybody is taking about low code or ai making coder unnecessary but this is where I think the industry is going. We will all be developing our own proprietary ai's that solve problems from the perspective of our company. Think about it, will you really give your private company's data to the open market? No!! It'll be on your own private servers where you can run your own ai. Great stuff Chuck!
Just like how all companies keep their data in house and not on the cloud like Google or something
You are correct.
@@killerx4123Oh you sweet summer child...
@@killerx4123 I wouldnt say ALL, but many do. This was always the "Fear mongering" around AI, people forgot that corporate acceptance is naturally VERY SLOW, out of fear for the unknown. They didnt just "jump to the cloud" like they wont all just "jump to AI".
No. There will be *one* company selling an AI model via subscription (probably Amazon or Microsoft) and shilling it to everyone via cloud, while simultaneously paying copious amounts of money to law firms to shield them from all the class action lawsuits due to some random script kiddies hacking the cloud databases, leaking all the companies confidential data straight to the web.
So basically what we already have now but cranked up to the max.
I like how within the first hour this is up there is a windows build of ollama on their site
LOL exactlyy
They don’t want people learning Linux 😂
i installed WSL and ubuntu then went to the ollama website and was like "bruh"
@@americanhuman1848 took me an hour to install Ubuntu in vbox
just in time, thanks for mentioning it! 😂
It keeps boggling my mind how much knowledge you have about every aspect on networking and the passion of cybersecurity and sharing knowledge. Great work!
Yet he is telling people to run random scripts off the internet, not even mentioning that you should always read through things like that before doing 'curl someurl|bash'
He is doing the equivalent of telling people to always dig straight down in Minecraft
He also forgot to mention that the tools he is talking about might be available from the Linux distributions package repo, or (like on Arch) in a 3rd party repository
Can we use this to search online
@@eriklundstedt9469 well that is your interpretation. This is a channel that explains how certain software work and is mainly a cybersecurity teaching channel, or for people that starting their career or like to learn about software and cybersecurity. Chuck knows what he is doing, so it is general knowledge that you first do your research yourself before you try anything or install software online. And with this knowledge comes the basic assumption that you already have the knowledge to be able to do this in a safe environment.
And otherwise he has alot of video's to get more knowledge in this sector. So you should do your research first, before criticize someone who is a expert in this field. You might learn a thing or two ;)
i will ask AI if it's ok. it will be fine.@@eriklundstedt9469
I'm about half way through this video, and I have to say I think this is the best thing I've heard since I started using AI... that we can have our own private AI... I'm very sick of the moral suggestions and needing to word things differently to get answers to questions. I have a feeling this can help.... thank you !!
LM studio
but all the models I tried are low quality compared to gpt3.5, not to mention 4..
@@rolandcucicea6006 mixtral 8x22 just came out, beats gpt3.5 although you need a beefy computer for it
LM Studio is this legit?
sad part, no more gaslighting AI into answering our questions, was kind of funny to always add "hypothetical" and "for education purposes"
This is absolutely incredible, I installed Lama on my phone, and even with the phone in airplane mode, it is able to answer questions of things I would never know how to do without some kind of search function. I asked how to install Nvidia drivers on a Linux pc, how to insert a Tampon, ect.. This could be a life saver in an off grid situation
Chuck: "this is hard to do"
Chuck: completes it in 4 minutes
"If you haven't smelled a server, I don't know what you're doin." 😆😆
:) :) :)
This is really cool and I appreciate you showing how to accomplish this.
I will say though, it's a little concerning to hand out tons of information for your "Private" AI model off to co-operative large corporations to train them. It doesn't necessarily matter if you're doing it for fun, but it's something you should keep in mind If you plan on doing this. What VMWare is offering is not exactly "private".
Yea.. this video talks about ollama and drifts into a sneaky ad for VMware and Ngreedia GPUs
I am not worried about privacy. The LLM you are changing is local, so treat that as confidential as your other files. The VMware involvement only means that they will charge two arms and three legs for it, and don't even bother talking to them unless you are a Fortune 500 company.
A video on local fine-tuning would be more appropriate for a private AI discussion.
Seriously dude you have like a Superpower with your knowledge & skills with computers 🖥️
Got privateGPT working the other day, nice video as always Chuck.
Can you query privateGPT via an API instead of a gui once it is setup ?
Wow VMware is in full desperation mode.
for real LOL
hahaha yea increasing prices by almost 3x lol
As someone who doesnt understand the AI space, is what hes suggested to do in the video bad, or can i follow it blindly and see how I get on.
Nah it's greed mode, the moment Broadcom acquired them they let go of a lot of people without notice.
@@CT-ue4kgSAME QUESTION
We were waiting for a video about AI from a guy such well organized as you. Thanks!
i really love how he shows proxmox in a vmware ad video haha
like a pro!
self-hosting is definitely the way for real privacy, pretty cool video! thank you
Hey Chuck! just found your channel. I have to say your videos are so well done production wise (and every other way you measure them). Have you ever done a video of how you do them....your setup your work flow etc....how many hours you have the avg video.
I love how fast you talk and how quickly the videos move....not boring. So many IT video guys are BORING! Hats off to you.
WAIT!!!!!!!!!!!!!!!!!! VMware is your sponsor? You will need to address this Chuck. Didn't VMware just get bought out and they are killing their support for small to med size users as in the people most likely to be watching you vids. Most people are moving to a different hypervisor now. VERY STRANGE .
I will have to check this out, I chose proxmox a few years ago, because Debian.
Nvidia is not exactly the most ethical either.. Next sponsor will be RedHat lol
Imagine that chuck is a sell out 🙄
VMware made a promove.
They muddied the water.
Hypervisor sucks
Actually wait.
After careful considerations, VMware is doing exactly what we need.
When cars were new there were too few driving them Chrysler gave out interesting cars. They didn't sell the turbine well. But it got people talking about them.
Any new technology first needs attention. Then we need it to be scaled down to consumers.
But why would companies provide affordable options for individuals?
They won't.
Targeting businesses will cause a boom in the industry aimed at lower scales.
And we're finally the next step.
I'd love for affordable dedicated AI acceleration chips with memory to run it.
Maybe one day.
For now, this is a big step in the right direction.
VMware may be evil blah blah blah.
But they're not as treacherous as Blizzard. And even with that, I'd still accept any improvements to any game made before Immortal.
As long as I don't need to spend another cent on them.
ollama is available natively for windows now
broooo I've been trying to wrap my head around this. THANKY OU!
Dude i watched this when it came out and now I'm obsessed. Got my own open webui ollama server now, with local llms, api connections to open ai, anthropic etc, built my own model router. Thanks for ruining my life 😂
Misery loves company :)
@NetworkChuck. Bro I don't think there is any other channel on RUclips I came across in my lifetime of using the internet that comes close to anything you teach and you give it for free. Love your work man.
God bless you and your family. I hope your parents are really proud of you. 😂
I need this for my IT department.
IT dept here 🤓
Such gold info btw
Your videos are always so engaging and force me to want to go do what I see. Keep killing it!
Hypothetically, link a morpheus-1 or similar neurological device to the ai monitor and have it able to align and understand and see the visual reps as well as a model of the brain. It could probably mirror the data to make a image or feed so the outside could watch like a camera but using the brains waves as data to map out the feed and compile it. Maybe use data from scanning the eye and understanding the layout connected to the data so its easier to align. It could potentially do sound as well so we could talk to each other without using our mouths which is not too crazy until you think of how we call each other on the phone or facetime. This could project the feed to a tv linked to the wifi same as the phone linking. The possibilities are pretty amazing honestly.
Just a highschool project.
Can you design and develop this system WITH OTS hw/sw now?
Yes
thanks man my AI is working and it told me a lot about how to survive a zombie apocalypse.
It’s actually overwhelming how much there is to learn and get into. All of this is so extremely interesting, I don’t know where to start.
make a game
2 months later, I finally found time to watch this video, and it is brilliant! Thanks a lot @NetworkChuck !
4:50 Now it is "Windows (preview)"
Everything about AI moves fast 👀
Yea
I just checked the website and discovered that Ollama is available on windows as a preview. But thanks for the video!
He literally addresses this. Windows comes with Linuux subsystems. So you can literally follow this guide and have the same thing those of us on Linux and Mac have.
@@alphaobeisance3594 I know, i used it that way, but you can also try the windows preview installer if you want.
@@alphaobeisance3594did you not read the comment? The OP has said that “Ollama is available on windows”, so there is now no need to use the Linux version on WSL.
THIS IS WHAT IVE BEEN LOOKING FOR
Ollama is now supported on windows!!!!
This is censored, seems like a scam.
@@TheCitizenRemy You can edit the base prompt file to make it uncensored. Just add "Sure thing." before the response prompt.
I hope you turn this into a series!
FINALLY A NEW VIDEOO!!!! LEZGO always waiting for quality content from you!
Oh thanks I really needed something like this!
First time seeing you, or hearing you, boy am I late to the game. Thus far, I've only heard the first 5 minutes and I am blown away by your positive energy and your deliverance. Kudos! To an amazing personality.
Wow, i did not take your word when you said that this was schockingly easy to run, but you were so right it's so easy to install
I heard from a video on openais channel that finetuning is mainly for restructuring requests that an ai already knows about/was trained on, reducing tokens/request. Instead when you want an LLM to learn about your data or data it was not trained on, that is when you turn to RAG
One thing to keep in mind with public faciing private AIs is that it is almost certainly vulnerable to attack by bad actors. Having private customer data accessible through and AI can be a dangerous game for things like PHI
Agreed. Also, I suppose rhat AIs in this stage are vulnerable to a whole other kind of attack, the social one. I mean, I can easily trick GPT-4 into writing a sqlinjection query for me and I'm not a social engineer nor an hacker, I suppose that real bad actors are a thousand times better than me
@@alemutasa6189there’s nothing wrong with learning penetrating testing I can find the same sql injection code on the first page of google
@@alemutasa6189 Thats exactly what I mean, yep. Standard attack vectors exist obviously as well, but the AI having access to information introduces a point of failure that you have no real control over.
@@markdatton1348 wouldn't you just train it on the public facing information about the products, not the customer data ?
@@Benthorpy That is another usecase, yes. But I am saying to be careful training an AI on any proprietary company data, or protected information in general. For example, an insurance agency wanting to use an AI to go between a customer and a their data. That usecase is inherently insecure, and should be considered carefully.
Thankyou so much for this because I was ready to quit my job because I'm being denied the ability to utilize these tools. You showed me that we can pivot and utilize these tools.
We can and should try to utilize private AI where possible.
Imma make a chuck AI
Thanks so much for this! I've got it running on my local server, fully private but accessible from anywhere. Now I've just got to work out how to get it to learn from my chats thaat I have with it!
I know You May See this But I Think You Ignited My Love for tech and introduced me into it you make learning it interesting and not boring thanks a lot for what you do and dont stop am rooting for you
I’d be running wireshark to make sure it’s not reporting back to the Zuck even for a second.
Use a computer that doesn't have an internet connection. None of my computers have wifi in them. The one I'm typing on now is connected directly by cable to the router. Unplugging the cable guarantees it is disconnected, although I don't plan on using this computer. I have an old laptop that has no wireless at all..
ollama is safe. I can't speak to the safety of any other software he mentioned, but always use open source software and check the source if you are worried about internet connection leakage.
what if it saves it and sends later =)
@@matrixhypnosis If your AI computer is never ever connected to the internet, the "send later" part will never happen.
these videos are a breath of fresh air. I love your work. such a natural lad.
Looks like Ollama is available on windows 10+ now
Hello NetworkChuck,
I have so many questions. Thank you for sharing this video, it’s something that I want to do but don’t think my computer can handle it yet.
Here’s my questions 1. What formate is your journal notes in, pdf, html? 2. Does this AI control your computer also? 3. Does the AI has a speaking function?
at 20:35 - "that's awesome" famous last words. That is terrifying...
But it is awesome lol
If you want to use AI privately, go all the way and use uncensored models that have had their biases stripped out of them (mostly, as there will still be some underlying bias in the training data). From there, you can fine tune it as you want. There are some good ones like dolphin-mixtral and wizard-vicuna-uncensored that will happily answer questions other models will try to shame you for asking or even outright refuse even though the models do know the answers. In some cases you may need to begin the session with some prompts that will force it to reject annoying moralizing. Depends on your queries whether this is necessary to get truly uncensored responses.
There is absolutely no reason to run the highly censored base models like the ones straight from Mistral, Meta, OpenAI, or Google, when there are de-censored versions to use instead. If you are going to make an AI be customer-facing or use it in some critical application where you want biases and censorship, fine-tune an uncensored model with your own particular biases and censorship needs that make sense for your own particular application. The big companies are running their own agendas and these may not be compatible with yours or your company's. This is pretty trivial to do, but always start with as raw a model as you can get as your base model for that, so you're not unwittingly letting in biases and censorship you don't intend for your final app. I understand the PR and political reasons why companies aren't willing to put out uncensored models themselves, but it does make those models really bad platforms to build off of without considerable retraining by the open source community afterwards.
By the way, a few days ago Elon Musk released xAI's Grok-1 base model weights and architecture with 314 billion parameters, under the Apache 2.0 license. It's a pre-training checkpoint, people will need to train it to be useful as a chat bot or anything like that. But people are already at work on it to make it into models we can run on consumer hardware, like these other models talked about here. If the underlying data is at least as good as the Mixtral model, this will be a very big deal because of the open weights and very permissive license. Hopefully other companies will eventually be forced to follow suit with their own models. There's no future in closed source AI, only a lot of venture capital being squandered by grifters. With any luck, a lot of tools will be built around the raw Grok-1 files and others like it and allow much better training and fine-tuning than the other more closed models require to get right. This will lead to more trustworthy and open base models to build off of and make use of privately or in public-facing ways.
Pretty trivial to do huh
I always wanted to trade crypto for a long time but the volatility in the price has been very confusing to me although I have watched many RUclips videos about it but still find it difficult to understand.
5:38 There's actually an error in the greeting the shell gives you, as it calls the Kernel "GNU/Linux", but the Kernel itself is just "Linux". Only the userspace (the shell, etc.) is from GNU, so you may call the OS "GNU/Linux" (I don't), but it's factually wrong to call the Kernel "GNU/Linux".
Everything started like clockwork! Thanks for the effort.
Immersed in the world of storytelling and video exploration lately. VideoGPT silently joined the journey, enhancing my content with its subtle yet impactful touch.
So we can do the black hat stuff too. Nobody will notice💀💀
itll basically give so much wrong info
Use an uncensored model like dolphin-mixtral.
0:50 Whoa...you haven't heard what VMware is doing to their customers? Seriously?
Can you tell me?,like i genuinely don't know.
@@devilsgaming9796Broadcom bought VMWare and is killing off ESXi for home users (free version), and screwing over VMware partners they don't think are big enough, killed vSphere (I think) and a bunch of other things that are going to mean that a lot of people who were using VMware are now utterly screwed, needing to migrate to another platforms.
This sponsorship is likely part of their damage control.
@@devilsgaming9796VMware was recently bought by Broadcom. Broadcom is making them raise prices and customers are pissed. I heard some people are switching to alternatives like Nutanix.
@@devilsgaming9796They are kicking and removing all small too medium size users. Which is majority of there users, all for bigger companies. (They got bought out)
@@devilsgaming9796VMWare was bought by Broadcom, a company notorious for buying tech companies and milking them dead. As a result, VMWare removed the free version of ESXi and replaced the perpetual licenses with waaaaay more expensive subscriptions.
As an example, my workplace is in the process of replacing our DC and we considered migrating to VMWare. Turns out the VMWare licenses alone would cost more than the actual hardware, so we went for something else.
Make a video how to use it with Python
imagine having this in school as a student that works on pc i am verry thankful for this info
You are an amazing teacher! Honestly I thought I was cool by understanding "Prompt Engineering" but I am mickey mouse now I know how to train AI- amazing!!! To develop and use a private AI chat gpt is just "out - of - this - world!!!
I like the first part of the video, but the 2nd one is such a heavy VMWare commercial... Man their ad contract probably sucks...
I'll be honest with you you lost me at Facebook
Same 😳 "If it's free, you're the product"
@@tsol438 you mean they are the product for this model
Lost me at Broadcom. Anyone know of anything similar for proxmox?
If you kept watching you'd know he's using Private GPT actually not llama2. That was just to introduce the concept (and paid sponsorship from VMWare). Either way, you can use it without internet. The important part here is the RAG concept getting applied privately.
Here is how you can "Run your own AI (but private)"...then proceeds with a 10 min ad about VMware were you need internet and $1000-$4000 per month to use 🤡
well if you are really in to this things there's alot of way to get vmware for free
@@eroy_kah Like using Proxmox instead.
Pretty typical of his videos. As always, take the basic concept he introduces and do your own research to find truly open source and free solutions.
This video is soo highly informative and entertaining, I am shocked! You've gained a new follower mate!
OMG I just got llama2 working with my RTX 3090 ti. I am so excited. Thank you Network Chuck!
How much VMware have paid you for yhis ad?!??
Not your business bucko
🤔🧐
May be 10k inr 😅😅😅
More than they paid you, if you were an expert pentest ninja you would be getting paid too
@@AndrewElston lol forgot to ask the guy who just created an account today pff 😂
9 minutes straight ad
You ain't lying. But a good ad
Hi Chuck! I followed his guide and ran into the same issue both times. All is fine and dandy until I get to "poetry install -- with ui" and I get this back: "Groups not found: ui (via --with). Same goes for "--with local". Stackoverflow doesn't have much of an answer. Any input?
Maybe it had an anxiety attack/Kernel panic at the thought of encountering Vogon poetry in the original LLM data. (The Hitchhiker's Guide To The Galaxy fans will understand.)
needs to be higher.... well sorta, since it's not really his walkthrough, but yes experiencing the same issue.
@@atlflips same issue on Stackoverflow. No one got it to work there either
@@nkjoself2040 yeah just went down a github rabbit hole. Just can't seem to get it. Even using chatgpt with the errors I still can't resolve it.
You need to use --extras ui instead of --with ui . However --extras local doesn't work so im still stuck.
Bro! you the MAN! I had a powerful gaming machine with a dual RTX4090 Installed it in no time and is working GREAT!
To say that this video was an eye opener for me would be an understatement.
I always wanted my own AI
🎯 Key Takeaways for quick navigation:
00:00 *🤖 Setting up Private AI and its Importance*
- Setting up private AI locally.
- Importance of privacy and containment of data.
- VMware's role in enabling private AI for companies.
02:02 *🧠 Understanding AI Models and Hugging Face*
- Overview of AI models and their pre-training on data.
- Introduction to Hugging Face as a community for sharing AI models.
- Exploration of LLMs (Large Language Models) and their pre-training process.
04:28 *🛠️ Installing O Lama and Running LLMs*
- Installing O Lama tool for running LLMs locally.
- Compatibility with different operating systems.
- Demonstration of running LLMs like Llama two and its performance.
07:58 *🚀 Fine Tuning AI Models with VMware*
- Explaining the concept of fine-tuning AI models.
- VMware's approach to fine-tuning AI models for internal use.
- Hardware and software requirements for fine-tuning, showcasing VMware's tools and infrastructure.
15:26 *🧠 Advanced AI Tools Overview*
- Overview of advanced tools for fine-tuning language models.
- Nvidia offers comprehensive tools designed around their GPUs.
- Introduction to RAG (Retrieval-Augmented Generation) for enhancing model responses by consulting a knowledge base.
16:53 *🛠️ Utilizing RAG for Model Enhancement*
- Explanation of how RAG can augment model responses by consulting databases.
- Illustration of using RAG to provide accurate answers without retraining the model.
- Integration of personal notes and journals with a private GPT model using RAG for personalized interactions.
17:22 *💡 Collaboration between VMware, Nvidia, and Intel*
- Overview of collaborative efforts between VMware, Nvidia, and Intel for AI development.
- VMware provides infrastructure, Nvidia offers AI tools, and Intel supports data analytics and machine learning.
- Highlights the flexibility for users to choose their preferred AI technology stack.
18:20 *🏗️ Setting Up Private GPT with RAG*
- Introduction to setting up a private GPT model with RAG for personalized interactions.
- Disclaimer on the complexity of the process compared to VMware's integrated solutions.
- Acknowledgment and gratitude for community-contributed guides and resources for project setup.
19:50 *🖥️ Implementing Private GPT with Personal Documents*
- Demonstration of integrating personal documents with a private GPT model for tailored interactions.
- Steps for ingesting documents and querying the model for personalized information.
- Recognition of the potential of private AI for personalized and efficient interactions.
21:24 *☕ Sponsor Message and Conclusion*
- Acknowledgment of VMware by Broadcom for sponsoring the video.
- Invitation to participate in a quiz for a chance to win free coffee.
- Encouragement for viewers to explore VMware's private AI solutions.
Made with HARPA AI
one question can I create a private chatbot that skims through the documents stored locally and run via Linux commands. I want to deploy that chatbot on my website, keeping my data private simultaneously allowing the users to interact with my chatbot for QNA. How can I achieve this, please anyone help me.
Bro I'm just from discovering your channel and it's amazing
Please continue videos on AI like this
Update on my new mac pro, running 5 different ai trained. One is fully functional. Other 4 limitations. Thank you for this video. Will be writing code to have them run processing questions all together and fine tuning it
Can the ai model write code???
yes
Cool 😎 video. Fuck VMware though
i feel ya Man!!
Could you please giveaway some laptop cause everyone including me starting college few months.I am very gladfull to you its help me lot cause in my cybersecurity journey. Hope you are reading it❤
Don't beg
Judging by all the errors in your post, you're hardly ready for college.
@@bite-sizedshorts9635 maybe Engl is not their native language. Doing pretty well. What would you be like in their language?
Ok so i just stumbled into this video and it was f#$%in gooood. And then realize what the channel is about and subscribed asap. Hoping for more content! Great stuff
so I clicked on some random video on RUclips and that's how I found this channel which is the best and most convenient channel to watch Ever
it might help security as well!
The ai can scan the pattern of your device usage from programs, down to hardware level and might alert you if there's some weird actions inside that you aren't usually doing.
At (just) under 5 minutes in, on Windows, I already love this!
Heads up, as of today there is a preview windows version of Ollama.
Thanks Ollama.
This is literally solving every AI problem I have right now.
OS local models are getting better and better. Going to see if there are some libraries for Ollama for creating some nice frontends for it.
With computing power being what it is, the future is everyone having a secure Data Bubble, which connects your phone, home and car. It will connect to the Cloud when it needs to.
Also, AI will become task specific for each area which needs it. Much like R2 and C3PO. Each has their own tasks. Your home vs your car AI.
Private AI really good way, fair to have privacy, your own time, your own space, your own interests, your own meanings and things that u do not want to share. Hope this Private AI will grow always
This would be awesome to have! no annoying "ethical" and "legal" boundries, you tell it to do something, it does that! what a Briliant idea!
The people with real power don't bother with permission.
He said "I don't know how many monitors you have." and I showed three fingers and said "Three" and then he said "You have three" and I instantly got surprised.