AI DevBytes
  • Videos: 17
  • Views: 49,453
ACCESS Open WebUI & Llama 3 ANYWHERE on Your Local Network!
ACCESS Open WebUI & Llama3 ANYWHERE on Your Local Network! In this video, we'll walk you through accessing Open WebUI from any computer on your local network with a completely separate user account.
📺 Setting Up Open WebUI - ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama! ruclips.net/video/D4H5hMMoZ28/видео.html
🚀 What You'll Learn:
* How to get the IP address of your local Open WebUI instance
* How to navigate to Open WebUI on a different computer
* How to set up a new Open WebUI user account
* Approving the new Open WebUI user account
* Using Open WebUI from a different computer on your network
Chapters:
00:00:00 - Intro
00:01:13 - Finding IP address of Open WebUI
00:03:20 - Accessing Open W...
Views: 1,966

Videos

OpenUI & Llama 3: Effortless Text to Frontend UI Generation
4.3K views • 14 days ago
OpenUI & Llama 3: In this video, we will walk you through step by step how to set up OpenUI and generate user interface components with it. Chapters: 00:00:00 - Intro 00:00:35 - Setting up OpenUI 00:02:47 - Verifying Ollama is running on your computer 00:04:28 - Setting Llama 3 as your model in OpenUI 00:06:00 - Generating the first UI component 00:07:30 - Generating the second UI component Other Vide...
Open WebUI & OpenAI DALL-E 3: Effortless Text to Image Generation
1.7K views • 21 days ago
Open WebUI & OpenAI DALL-E 3: Setup Text to Image Generation: In this video, we will walk you through step by step how to set up image generation using Open WebUI's image generation functionality. Chapters: 00:00:00 - Intro 00:00:20 - Configuring Image Generation 00:03:15 - Generating Images 00:05:45 - DALLE-3 Pricing 🚀 What You'll Learn: * How to configure image generation in Open WebUI * Generatin...
ULTIMATE Llama 3 UI: Chat with Docs | Open WebUI & Ollama! (Part 2)
4.5K views • 28 days ago
Ollama Llama 3 Open WebUI: In this video, we will walk you through step by step how to set up document chat using Open WebUI's built-in RAG functionality using FREE Ollama models. 🚀 What You'll Learn: * Setup custom prompts * Uploading documents with Open WebUI * Using Open WebUI prompt shortcuts * Using uploaded documents in your prompt Chapters: 00:00:00 - Intro 00:00:43 - Setup Custom Prompt...
ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama!
9K views • 1 month ago
Ollama Llama 3 Open WebUI: In this video, we will walk you through step by step how to set up Open WebUI on your computer to host Ollama models. 🚀 What You'll Learn: * How to install Docker * Setup Open WebUI with Docker * Basics of using Open WebUI * Pull new Ollama models down using Open WebUI Chapters: 00:00:00 - Intro 00:00:30 - Installing Docker 00:02:13 - Installing Open WebUI 00:05:15 - ...
Customize Dolphin Llama 3 with Ollama!
2.3K views • 1 month ago
Personalized Dolphin Mixtral 8x7b & Dolphin Llama 3 | In this video we will walk through step by step how to create a personalized Dolphin Mixtral 8x7b & Dolphin Llama 3 model using Ollama. 🚀 What You'll Learn: * How to create an Ollama Model File * Adding a custom system prompt for your custom Dolphin Mixtral 8x7b & Dolphin Llama 3 models * Customizing Dolphin Mixtral 8x7b & Dolphin Llama 3 mo...
Create your own CUSTOM Mixtral model using Ollama
825 views • 1 month ago
Customized Mixtral 8x7b | In this video we will walk through step by step how to create a customized Mixtral 8x7b model using Ollama. 🚀 What You'll Learn: * How to create an Ollama Model File * Adding a custom system prompt for your custom Mixtral 8x7b * Customizing Mixtral 8x7b model parameters Chapters: 00:00:00 - Intro 00:01:35 - Getting Mixtral 8x7b 00:02:27 - Testing Mixtral 8x7b model 00:03:1...
Create your own CUSTOMIZED Llama 3 model using Ollama
13K views • 1 month ago
Llama 3 | In this video we will walk through step by step how to create a custom Llama 3 model using Ollama. 🚀 What You'll Learn: * How to create an Ollama Modelfile * Adding a custom system prompt for your custom Llama 3 * Customizing Llama 3 model parameters Chapters: 00:00:00 - Intro 00:01:03 - Getting Llama 3 00:01:48 - Testing Llama 3 model 00:02:38 - Creating Custom Model file 00:07:40 -...
OLLAMA | Want To Run UNCENSORED AI Models on Mac (M1/M2/M3)
2.7K views • 1 month ago
OLLAMA | How To Run UNCENSORED AI Models on Mac (M1/M2/M3) One sentence video overview: How to use ollama on a Mac running Apple Silicon. 🚀 What You'll Learn: * Installing Ollama on your Mac M1, M2, or M3 (Apple Silicon) - ollama.com * Downloading Ollama models directly to your computer for offline access * How to use ollama * How to harness the power of open-source models like llama2, llama2-u...
Mastering AI Vision Chatbot Development with Ollama & Streamlit
1.9K views • 1 month ago
Welcome to our comprehensive tutorial on the multimodal llava model where we'll guide you through the process of building an image analyzer chatbot using Ollama and LLava - two powerful open-source tools. Whether you're a seasoned developer or just starting out, this step-by-step walkthrough will help you grasp the concepts and implement them effectively. In this tutorial, we'll cover everythin...
OLLAMA | Want To Run UNCENSORED AI Models on Windows?
4.4K views • 1 month ago
Ollama & Windows | Run FREE Local UNCENSORED AI Models on Windows with Ollama One sentence video overview: How to run ollama on windows. 🚀 What You'll Learn: * Installing Ollama on your Windows PC - ollama.com * Downloading Ollama models directly to your computer for offline access * How to use ollama * How to harness the power of open-source models like llama2, llama2-uncensored, and codellama...
Build an UNCENSORED AI Chatbot in 1 Hour with Ollama! (Part 2)
1.1K views • 2 months ago
Streamlit & Ollama | How to Build a Local UNCENSORED AI Chatbot | Part 2 Link to Part 1 of this video series: Part 1 | Streamlit & Ollama | How to Build a Local UNCENSORED AI Chatbot: ruclips.net/video/BA0656SdODU/видео.html Playlist: ruclips.net/p/PL39czAYesA5ckKIohbmfL6Bq8X4JwV1Ge Are you tired of restrictions on AI chatbots? In this video we dive into the exciting world of building your own ...
Build an UNCENSORED AI Chatbot in 1 Hour with Ollama! (Part 1)
634 views • 2 months ago
Streamlit & Ollama | How to Build a Local UNCENSORED AI Chatbot | Part 1 Are you tired of restrictions on AI chatbots? In this video we dive into the exciting world of building your own local, uncensored AI chatbot using Streamlit! This tutorial is perfect for developers, tech enthusiasts, and anyone interested in creating their own AI chatbot without limitations. Windows Version of...
Build AI Chatbot with Streamlit & OpenAI!
619 views • 2 months ago
Build Your Own AI Chatbot with Streamlit and OpenAI: A Step-by-Step Tutorial 🚀 Join us on an exciting coding adventure as we dive into the world of GenerativeAI! In this comprehensive tutorial, we're going to show you how to build an AI chatbot from scratch using Streamlit and OpenAI. Whether you're a coding newbie or a seasoned developer looking to expand your skills, this video is for you! K...
How to Setup a Streamlit Environment in 5 Minutes
249 views • 3 months ago
🔔 ruclips.net/channel/UCuuySIxs4zmMhH7Q-obyk_Q Subscribe to our channel for more tutorials and coding tips. Windows Users: www.datacamp.com/tutorial/installing-anaconda-windows 🚀 Quick and Easy Streamlit Development Setup in Under 5 Minutes! 🚀 Are you ready to dive into the world of Stre...
How to Set up OpenAI in Anaconda (2024)
235 views • 3 months ago
How to Set up OpenAI in Anaconda (2024)
How to 🚀 Build a Custom 🤖 GPT with Actions In Less Than 2 Hours!
238 views • 3 months ago
How to 🚀 Build a Custom GPT with Actions In Less Than 2 Hours!

Comments

  • @pinkhilichurl7670
    @pinkhilichurl7670 17 hours ago

    transferring model data
    Error: unsupported content type: text/plain; charset=utf-8
    FROM llama3:8b
    PARAMETER temperature 1
    PARAMETER stop <|start_header_id|>
    PARAMETER stop <|end_header_id|>
    PARAMETER stop <|eot_id|>
    PARAMETER stop <|reserved_special_token
    TEMPLATE """ {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|> """
    SYSTEM You are a helpful AI assistant named LLama3. Please provide the aspects discussed in the following Bengali product review. Please only consider the aspects from the following list of aspects: product, service, delivery, packaging, seller, rider, price, shelf.
    Can you tell me why I am facing this error? I followed all the steps as you did.

    • @AIDevBytes
      @AIDevBytes 17 hours ago

      I haven't seen that error before. If I had to guess, it's because you have a multi-line SYSTEM message that isn't wrapped in triple quotes. Example: SYSTEM """Line one Line two Line three"""
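For reference, a corrected SYSTEM section along the lines the reply suggests might look like this (a sketch only; the prompt text is abbreviated from the comment above, and the exact wording is the commenter's to choose):

```
SYSTEM """You are a helpful AI assistant named Llama3.
Please provide the aspects discussed in the following Bengali product review.
Only consider aspects from this list: product, service, delivery,
packaging, seller, rider, price, shelf."""
```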

  • @ascaridesign
    @ascaridesign 2 days ago

    Hello, it works, but only if I use the terminal; with Open WebUI it doesn't. Do you know why? Thx

    • @AIDevBytes
      @AIDevBytes 1 day ago

      I have tried with Open WebUI and ran into the same issues. I'm assuming there is a bug in the Open WebUI app that is causing the errors.

  • @tonicop8729
    @tonicop8729 2 days ago

    Many errors on pip install .

  • @toddroloff93
    @toddroloff93 2 days ago

    Great Video. Thanks for doing this.

  • @taylorerwin807
    @taylorerwin807 3 days ago

    For those having issues at around the 35min mark, you need to ensure that your role and function can access the internet (i.e., VPC). Change those settings in the IAM menu for your user. Once that permission is added (AWSLambdaVPCAccessExecutionRole), go back to Configuration in the function and go down to VPC.

  • @aliceiqw
    @aliceiqw 4 days ago

    Please help with this: **(venv) (base) iamal@IAMALs-MBP suede % ollama create my-llama3-model -f /Users/iamal/Desktop/suede/custom-llama3.yaml** Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes
      @AIDevBytes 3 days ago

      What does your file look like? Also, the model files I create do not use an extension.
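The flow the replies describe might look like this (a sketch; the file and model names are illustrative — the key point is that the model file is plain text with no extension):

```
# Contents of a plaintext file named "custom-llama3" (no .yaml/.yml extension):
#   FROM llama3:latest

ollama create my-llama3-model -f ./custom-llama3
ollama run my-llama3-model
```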

  • @aliceiqw
    @aliceiqw 4 days ago

    when running this in the terminal: "ollama create my-llama3-model -f custom-llama3.yml" I get this error: Error: command must be one of "from", "license", "template", "system", "adapter", "parameter", or "message"

    • @AIDevBytes
      @AIDevBytes 2 days ago

      What does your model file content look like?

  • @kannansingaravelu
    @kannansingaravelu 4 days ago

    How do we fine tune the model? Do you have a video on it?

    • @AIDevBytes
      @AIDevBytes 17 hours ago

      I currently don't have a video covering that, but I'm looking to create one in the future.

  • @tnsgaming6571
    @tnsgaming6571 6 days ago

    Which file extension do you use to create the custom llama3 model file? Pls help

    • @AIDevBytes
      @AIDevBytes 3 days ago

      I don't add an extension to the model file when I create it.

  • @RNMSC
    @RNMSC 6 days ago

    It's not universal across Linux, but many distributions have deprecated the ifconfig command for getting the network IP address. One command you can use instead is 'ip address show | grep inet', which should give you a list similar to ifconfig on the Mac, and will include IPv6 addresses if you have IPv6 enabled. LAN-specific IPv6 addresses will begin with 'fe80:' and should be reachable by all hosts within the local network. If you have IPv6 addresses other than these and '::1', you may want to make sure that your network is blocking external traffic for any services you don't want to share outside your LAN. At the moment, my OpenUI service does not appear to be attaching to the IPv6 stack, although that is potentially subject to change in the future. You can always ask an LLM how to connect to a server's IPv6 address and you should get some possibly useful responses; however, if the service isn't attaching to IPv6, it's not going to work. Now if I could find a way to get OpenUI on a Docker server, at a locally significant hostname, to connect to a system running Ollama (because while the Docker server doesn't have a graphics card capable of giving happy response rates, the game system does, and can be used for that while I'm not gaming). However, it looks like Ollama isn't attaching to any of the hardware NIC IPv4 addresses so far. I'll get it worked out eventually. (Maybe?)

    • @RNMSC
      @RNMSC 6 days ago

      Solved that. There is an 'error message' in the output after editing the systemctl file for ollama.service that suggests it didn't take. I forced a copy of the temporary output file to the output file, then followed up with the systemctl daemon-reload and systemctl restart ollama.service commands (sudo'ed), and it appears to have taken. Now to get OpenUI to point at that. Adding the server to connections under setup seems to have no effect. Ah well, I may have to remove and re-install the Docker client on my localhost.
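If neither ifconfig nor ip is available, the machine's LAN address can also be found with a short Python sketch (an illustrative alternative, not from the video: connecting a UDP socket to a public address sends no packets, it only asks the kernel which local interface would be used):

```python
import socket

def local_ip() -> str:
    """Return the LAN IP address the OS would pick for outbound traffic."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket transmits nothing; it only selects a route,
        # so getsockname() then reports the local interface address.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        # No usable route (e.g. fully offline machine): fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(local_ip())
```

Run it on the machine hosting Open WebUI, then browse to that address (plus the mapped port) from another computer on the network.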

  • @davidtindell950
    @davidtindell950 8 days ago

    thank u. new subscriber !

  • @stanleykwong6438
    @stanleykwong6438 10 days ago

    Thanks for acting on my suggestion and making this video. As always, great content. Keep up the great work!

    • @AIDevBytes
      @AIDevBytes 10 days ago

      Thank you for the suggestion. I always read the comments to see what viewers such as yourself would like to see. Hope you found the video helpful.

    • @stanleykwong6438
      @stanleykwong6438 9 days ago

      @@AIDevBytes Very helpful, I was able to set up the hosting for my wife to use on her computer with this. Side note for future topics, when you have time and it aligns with your interests: 1. Step by step incorporating Stable Diffusion into OpenWebUI would be great. 2. How to do RAG properly (from the best system in your opinion, to best practices on how to structure your format for better ingestion). 3. How to allow external access securely? I've seen some videos where Ngrok is used, but their walkthrough was confusing. Thanks again.

    • @AIDevBytes
      @AIDevBytes 9 days ago

      Thanks for the suggestions

  • @AIDevBytes
    @AIDevBytes 10 days ago

    Thank you @stanleykwong6438 for the video idea. Here are other Open WebUI videos you may like: ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama! ruclips.net/video/D4H5hMMoZ28/видео.html ULTIMATE Llama 3 UI: Chat with Docs | Open WebUI & Ollama! (Part 2) ruclips.net/video/kDwEIgmqaEE/видео.html Open WebUI & OpenAI DALL-E 3: Effortless Text to Image Generation ruclips.net/video/DHVQ1UBaYMQ/видео.html

  • @engdoretto
    @engdoretto 10 days ago

    Thanks for the informative video. Would it be possible for me to chat with my own pdf documents?

    • @AIDevBytes
      @AIDevBytes 10 days ago

      Glad you found the video helpful. Yes, check out this video here ruclips.net/video/kDwEIgmqaEE/видео.html

  • @Panda-sv3oy
    @Panda-sv3oy 14 days ago

    My friend, I love your videos and I wanted you to explain to us how to use n8n .. I want to use it to automate everything possible in my daily use

  • @yurijmikhassiak7342
    @yurijmikhassiak7342 14 days ago

    Why are you showing us a UI with completely "hello world" code? Is it possible to do something harder? Can you change the design with feedback? Can you set up a style library to have consistent components, or generate a few options to select from?

    • @AIDevBytes
      @AIDevBytes 14 days ago

      This video is meant to just show how to get OpenUI up and running. I didn't want to start with anything too complex, since many people who watch the channel are new to many of these tools. I usually create more advanced follow-up videos later. Yes, you can change the style with feedback by continuing with a follow-on prompt, and the output will be altered. There is not presently a setting to keep the style consistent. I would personally only use this app for prototyping and then further refine the design in something like VSCode. Also, the model you use is going to dictate the quality of output. I find that using the OpenAI GPT API generates pretty good UI code, but I like to focus on free things on the channel. I may create a video later using Groq to generate the UI.

  • @lethalviper250
    @lethalviper250 14 days ago

    Does this work for HTML only, or can we use it to prepare Java- and Python-based UIs?

    • @AIDevBytes
      @AIDevBytes 14 days ago

      You could try to change the system prompt to see if it will create Python or Java code for you, but it will not render a preview, since Java and Python UIs aren't HTML-based natively. This tool is primarily meant to be used by developers who use web frameworks with HTML and JSX as their native UI output.

  • @tatkasmolko
    @tatkasmolko 15 days ago

    And does generating an HTML component from an image via Ollama work for you? I get an Error! 500 Internal Server Error: object of type bytes is not JSON serializable

    • @AIDevBytes
      @AIDevBytes 15 days ago

      You will need to download the Llava model into Ollama. Then set that as your model in OpenUI. I will say that using an image to generate the UI does not give great results based on my testing.

    • @tatkasmolko
      @tatkasmolko 15 days ago

      @@AIDevBytes Thank you for your help. I didn't read the documentation very carefully :D I was using the wrong version of the llava model. Everything works. Question: is there any open-source model that can render HTML from an image at GPT-3.5-Turbo quality? For example, some model via LM Studio?

    • @AIDevBytes
      @AIDevBytes 15 days ago

      Glad you got it working. I had sent another message but deleted it, since I realized that would not work because we need a better open-source multi-modal model first to generate better HTML.

    • @tatkasmolko
      @tatkasmolko 15 days ago

      @@AIDevBytes I skipped Docker and used the vitejs frontend application directly; with a little code modification I was able to connect it to LM Studio (Windows), where there is a greater choice of open-source models with a Vision Adapter, and I achieve much better results than through Docker and the Ollama llava model. The advantage is that with LM Studio I can use an API called Multi Model Session, which allows me to achieve better results when generating HTML code from images without the need to use the paid version of GPT-4o. But it needs high GPU VRAM.

    • @AIDevBytes
      @AIDevBytes 15 days ago

      Thanks for the info. I will have to try it out.

  • @stephenzzz
    @stephenzzz 16 days ago

    Thanks DevBytes

  • @AIDevBytes
    @AIDevBytes 16 days ago

    🧑‍💻 Commands for setting up OpenUI
    Be sure you have Python installed: www.python.org/downloads
    🧑‍💻 Ollama Mac Setup: ruclips.net/video/03J_Z6FZXT4/видео.html
    🪟 Ollama Windows Setup: ruclips.net/video/E5k_Ilnmehc/видео.html
    OpenUI Setup Commands 👇
    Command 1: git clone github.com/wandb/openui
    Command 2: cd openui/backend
    Command 3: pip install .
    Command 4: export OPENAI_API_KEY=xxx
    Command 5: python -m openui

  • @stanleykwong6438
    @stanleykwong6438 18 days ago

    I've been trying to get my own models set up and your series of videos has been exceptionally clear. Please post more videos for an in-depth use of Open WebUI, and perhaps for the following, so that non-technical people like me can implement it. 1. How to set up and allow other computers with WebUI installed to access the computer that has Ollama installed on the same network. 2. How to set up "My Modelfiles" within WebUI, specifically how the Content section of the Modelfile is compiled, and how to incorporate other parameters beyond temperature. Exceptional work! I will spread the word with my team. Keep it up.

    • @AIDevBytes
      @AIDevBytes 18 days ago

      I will look at creating separate videos on those two topics. Thanks for the feedback.

  • @primordialcreator848
    @primordialcreator848 19 days ago

    Any chance you can please make one using Automatic1111, connecting to a local instance of Stable Diffusion or any open-source image generator? I've been trying to get it working but it seems to get stuck on saving whenever I try to connect to the local instance with API mode active. Any info on this would be amazing! Thank you for another great video! <3 <3 <3

    • @AIDevBytes
      @AIDevBytes 19 days ago

      Glad you found the video helpful. I will have to look at getting stable diffusion working correctly on my M3 Mac. It was not the best experience the last time I ran it.

    • @primordialcreator848
      @primordialcreator848 19 days ago

      @@AIDevBytes thanks for the reply! - got Stable Diffusion 2.1 working on my M1 Max Mac; I'm able to generate images with it by itself but am stuck having it integrate with WebUI. I can use DALL-E 3 for now, np. Just hoping for something free to use lol

  • @ph3ll3r
    @ph3ll3r 20 days ago

    Where can I get the instruct model for ollama run? The closest that I found was ollama run meta-llama/Meta-Llama-3-8B-Instruct, which does not work from Hugging Face.

    • @AIDevBytes
      @AIDevBytes 20 days ago

      Check out these videos here: Windows Setup - OLLAMA | Want To Run UNCENSORED AI Models on Windows?: ruclips.net/video/E5k_Ilnmehc/видео.html Mac Setup: OLLAMA | Want To Run UNCENSORED AI Models on Mac (M1/M2/M3): ruclips.net/video/03J_Z6FZXT4/видео.html Models can be found here: ollama.com/library

    • @eviltoast3r
      @eviltoast3r 20 days ago

      Ollama run llama3:instruct will pull and run the latest instruct model

    • @AIDevBytes
      @AIDevBytes 20 days ago

      Correct, if you just want to pull the model run -> ollama pull llama3:instruct

  • @VjRocker007
    @VjRocker007 20 days ago

    I got it set up! Thank you for the quick guide!! Dev would you make a video on how to setup lobechat as well? It's a very interesting piece of front end. But hosting locally, I found it quite challenging since I'm not too technical in self hosting apps on the Mac.

    • @AIDevBytes
      @AIDevBytes 20 days ago

      I'll take a look at lobechat. I have not personally used it, but it sounds interesting.

  • @madhudson1
    @madhudson1 21 days ago

    How much on average does it cost to generate? I know it'll be based on tokens. Just after a ballpark.

    • @AIDevBytes
      @AIDevBytes 20 days ago

      I have updated the description with chapters. Jump to the chapter "DALLE-3 Pricing" in the video; I explain pricing there. In the end your cost is going to be determined by your usage of the DALLE-3 API: the more you use the API, the more it will cost. Also, note image generation is not tied to tokens like text generation. I am using free models for text generation in this video using Ollama and the Llama model. Videos for setting up Ollama can be found here: Windows Setup ruclips.net/video/E5k_Ilnmehc/видео.html Mac Setup: ruclips.net/video/03J_Z6FZXT4/видео.html

  • @tiffanyw3794
    @tiffanyw3794 21 days ago

    Can you show how to connect sdforge? It's for computers with lower-end graphics.

    • @AIDevBytes
      @AIDevBytes 21 days ago

      I'll have to take a look at it. I know people seem to use it with Stable Diffusion. I personally don't do a lot with Stable Diffusion because of the hardware requirements that most people need and the clunkiness of the UI.

    • @tiffanyw3794
      @tiffanyw3794 21 days ago

      @@AIDevBytes sdforge was created by the a1111 team; it works great for low-GPU users and would be awesome to use in this UI. I see it only has 3 options. Is there a way to add a different image model?

    • @AIDevBytes
      @AIDevBytes 21 days ago

      Since OpenWeb UI is an open-source project, anyone can edit the code, but there is currently no way to add a model to the image section. You can add new models for chat. If you check out video 1 in the series, I set up OpenWeb UI with Ollama: ruclips.net/video/D4H5hMMoZ28/видео.htmlsi=QFr1LzQhONF4mbgp

  • @AIDevBytes
    @AIDevBytes 21 days ago

    Thank you @vjrocker007 for the video idea. Part 1 & 2 of this series can be found below. 📺 PART 1 - ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama! ruclips.net/video/D4H5hMMoZ28/видео.html 📺 PART 2 - ULTIMATE Llama 3 UI: Chat with Docs | Open WebUI & Ollama! ruclips.net/video/kDwEIgmqaEE/видео.html

  • @stephenzzz
    @stephenzzz 21 days ago

    Thanks!

  • @LesCalvin3
    @LesCalvin3 23 days ago

    I like the clear presentation a lot. Thanks.

  • @MaliciousCode-gw5tq
    @MaliciousCode-gw5tq 23 days ago

    Hi, can you provide the specs of the GPU you used for this project, to get a better experience and smoother responses?

    • @AIDevBytes
      @AIDevBytes 23 days ago

      Check out the video description. I have my MacBook specs below.

    • @MaliciousCode-gw5tq
      @MaliciousCode-gw5tq 23 days ago

      @@AIDevBytes Awesome, thanks. It will help me get an idea for a rig. Thanks man, keep it up!...

  • @alphaobeisance3594
    @alphaobeisance3594 23 days ago

    I can't seem to comprehend how to provide access to my AI through OpenWebUI via my domain. I'd like to grant access for my family, but for the life of me I can't seem to get it set up for public access.

    • @AIDevBytes
      @AIDevBytes 23 days ago

      In this video, I demonstrate local access only. I will explore how to host this on a server and create a follow-up video on the process. Note that hosting on a server will incur costs due to GPU pricing.

    • @alphaobeisance3594
      @alphaobeisance3594 23 days ago

      @@AIDevBytes I've got a homelab and hardware. Just can't seem to figure out the networking side of things for some reason. Tried proxying through Apache, but it must be over my head as I can't get it to function correctly.

    • @AIDevBytes
      @AIDevBytes 23 days ago

      Gotcha, yeah unfortunately I wouldn't really be able to help there since I don't know your network configuration or topology. You should be able to go into your router settings and set up port forwarding to your homelab server.

  • @Thedemiul
    @Thedemiul 24 days ago

    Hi, I noticed your job post on Upwork about hiring an editor, and I'm interested in that position. Could we schedule a chat to discuss further?

    • @AIDevBytes
      @AIDevBytes 24 days ago

      Please send all proposals through Upwork and someone on the team will reply there. Thanks

    • @Thedemiul
      @Thedemiul 24 days ago

      @@AIDevBytes Boss, I've run out of connects there.

    • @AIDevBytes
      @AIDevBytes 24 days ago

      Sorry, the individual that handles openings only accepts proposals through that platform, unfortunately.

    • @AIDevBytes
      @AIDevBytes 24 days ago

      Feel free to send your Upwork username and we can send you a proposal request. You can email us here: inquiries@aidevbytes.com Thanks

  • @VjRocker007
    @VjRocker007 24 days ago

    Amazing video! Thank you for sharing! Bytes, could you do a video tutorial on setting up image generation from DALL-E through WebUI as well? That would be really helpful.

    • @AIDevBytes
      @AIDevBytes 24 days ago

      Glad you liked the video. I'll look into creating a video based on your suggestion.

  • @VjRocker007
    @VjRocker007 25 days ago

    You're a legend, thank you for posting these how to videos. Keep it up, for a person who doesn't have time to go through all of these docs these have been incredibly helpful. Appreciate all of your hard work!!

  • @hamzahassan6726
    @hamzahassan6726 25 days ago

    hi, I am trying to make a model file with these configurations:
    # Set the base model
    FROM llama3:latest
    # Set custom parameter values
    PARAMETER num_gpu 1
    PARAMETER num_thread 6
    PARAMETER num_keep 24
    PARAMETER stop <|start_header_id|>
    PARAMETER stop <|end_header_id|>
    PARAMETER stop <|eot_id|>
    # Set the model template
    TEMPLATE "{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
    getting Error: unexpected EOF
    Could you tell me what am I doing wrong?

    • @AIDevBytes
      @AIDevBytes 25 days ago

      Looks like you didn't close your double quotes at the end of your template. Simple mistake which can drive you crazy 😁 Let me know if that fixes your issue. EDIT: Also, use triple quotes like this when using multiple lines for your template: TEMPLATE """ Template values go here """

    • @hamzahassan6726
      @hamzahassan6726 25 days ago

      @@AIDevBytes getting same error with this:
      # Set the base model
      FROM llama3:latest
      # Set custom parameter values
      PARAMETER num_gpu 1
      PARAMETER num_thread 6
      PARAMETER num_keep 24
      PARAMETER stop <|start_header_id|>
      PARAMETER stop <|end_header_id|>
      PARAMETER stop <|eot_id|>
      # Set the model template
      TEMPLATE """ {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> """

    • @AIDevBytes
      @AIDevBytes 25 days ago

      When I get some free time and I'm at my computer again today, I will give it a try to see if I can isolate the problem and let you know.

    • @hamzahassan6726
      @hamzahassan6726 25 days ago

      @@AIDevBytes thanks mate. much appreciated

    • @AIDevBytes
      @AIDevBytes 25 days ago

      @@hamzahassan6726 I copied the model file content you had, pasted it into a new file, and was able to create a new model. I am not quite sure why you are getting the error "Error: unexpected EOF"; I have not been able to duplicate it. Also, one thing to call out: it looks like you are not using the llama3 template from Ollama, but that doesn't appear to be causing the issue. I would make sure you are not using rich text format in your model file and ensure that it is plaintext only. If you go to the llama3 model (ollama.com/library/llama3:latest/blobs/8ab4849b038c) the template looks like this: {{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>
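Putting that reply together, a complete plaintext model file (saved with no extension) based on the llama3 template above might look like this (a sketch; the parameter values are the commenter's own):

```
FROM llama3:latest

PARAMETER num_gpu 1
PARAMETER num_thread 6
PARAMETER num_keep 24
PARAMETER stop <|start_header_id|>
PARAMETER stop <|end_header_id|>
PARAMETER stop <|eot_id|>

TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
```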

  • @elrilesy
    @elrilesy 28 days ago

    Hey mate, this is absolutely awesome, thanks so much. I'm wondering how well this would work as a knowledge base? I.e., let's say I'm a decent-sized company that has a huge internal wiki covering all sorts of internal processes that guides our employees in how to operate the business. Do you think I could upload all our wiki articles to the documents section and then give this to our team, so they could talk to our own internal chatbot and ask questions about what they should do in certain situations? Or do you think uploading very large amounts of documents would sort of overwhelm it and not produce great results? I.e., do you think that to achieve this outcome you'd actually need to fully retrain the base llama3 model itself, rather than just adding documents on top of an existing stock llama3 model? Because as far as I can tell, in my layman's understanding of all this, when you upload documents, each prompt you enter has to kinda look through it all for an answer, whereas if you re-trained the base model it would know it more "naturally", haha. Sorry, not sure if I've explained that very well.

    • @AIDevBytes
      @AIDevBytes 27 days ago

      For this architecture you would not need to retrain the model, since it's using RAG (Retrieval-Augmented Generation) to find the answers in the documents. I will say this solution is really for users running it locally on their own computer, since the document store runs in a container on the user's computer. This solution would probably not scale well for multiple users across multiple computers; you would want a more enterprise or custom solution if you are going to run this for your business with multiple users on multiple computers.
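The retrieve-then-generate flow described in that reply can be illustrated with a toy sketch (illustrative only; Open WebUI's RAG uses embeddings and a vector store, not the keyword overlap used here, and the wiki snippets are made up):

```python
# Toy RAG: score document chunks by keyword overlap with the question,
# then stuff the best-matching chunks into the prompt sent to the model.
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

wiki = [
    "Refunds are processed by the finance team within 5 business days.",
    "New employees must complete security training in week one.",
    "The office is closed on public holidays.",
]
print(build_prompt("How long do refunds take to process?", wiki))
```

The model never gets retrained: each question is answered from whatever chunks the retriever surfaces at query time, which is why very large or noisy document sets can degrade answers.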

  • @FaeEvergreen
    @FaeEvergreen 28 days ago

    I tried this setup but the RAG in WebUI is absolutely atrocious. Maybe I'm just not having a great time because I'm on a Windoze machine, or maybe I messed up somewhere, but my conversations seem to have VERY hit-and-miss accuracy when working with docs. I have tried compiling all docs into a master doc, and I've tried using multiple small docs. I've tried .doc, .md, .txt, and .csv with varying levels of success. It just seems like it's great in theory and not so much in practice. Like, it's really cool that it's available and working, but as far as using it in my everyday life -- probably not.

  • @ssswayzzz
    @ssswayzzz 28 days ago

    Do you think you will need more than an M3 Max with 36 GB RAM and 14 CPU / 30 GPU cores for running all these models? I'm thinking of getting one myself to try them; all I know is that I need a capable device. Btw, thank you so much for your videos. Once I get this device I'm sure I will be around your channel a lot, benefiting from your experience. Thank you!

    • @AIDevBytes
      @AIDevBytes 28 days ago

      You don't have to have the M3 Max, but it definitely will help. I'd say the minimum RAM you will need on Apple Silicon is 16 GB to run the smaller models. With the M3 Max you will be able to run most of the open-source models up to about 40 billion parameters, just not the really large ones like the 70-billion-parameter models. Also, happy you are finding the content useful.

  • @art3mis635
    @art3mis635 28 days ago

    When running the docker command I get this error: %2Fdocker_engine/_ping": open //./pipe/docker_engine: The system cannot find the file specified. See 'docker run --help'. Any help?

    • @AIDevBytes
      @AIDevBytes 28 days ago

      Did you already have docker installed or did you install a new version of docker?

    • @art3mis635
      @art3mis635 27 days ago

      @@AIDevBytes Yes, I have Docker Desktop installed, but there is a newer version. Should I update? Current version: 4.29.0 (145265) New version: 4.30.0 (149282)

    • @AIDevBytes
      @AIDevBytes 27 days ago

      I recommend running the latest version to see if that fixes the issue. I am running the newest version of Docker.

    • @art3mis635
      @art3mis635 27 days ago

      @@AIDevBytes It worked, thank you!

    • @AIDevBytes
      @AIDevBytes 27 days ago

      Glad that fixed your issue.

  • @eevvxx80
    @eevvxx80 28 days ago

    Thanks mate, I have a question. Can I add my text to llama3?

    • @AIDevBytes
      @AIDevBytes 28 days ago

      Can you explain further? Do you mean add your own text to the SYSTEM parameter? Not sure I am following your question.

  • @stephenzzz
    @stephenzzz 28 days ago

    Brilliant thanks!

  • @HernanMartinez82
    @HernanMartinez82 28 days ago

    Which embedding model did you use for the document chat?

    • @AIDevBytes
      @AIDevBytes 28 days ago

      By default, the Hugging Face sentence-transformer model sentence-transformers/all-MiniLM-L6-v2 is used for embedding. You can change it to whatever model you would like by going to your "Document Settings".

  • @victor_ndambani
    @victor_ndambani 29 days ago

    Thank you man, you now have a new sub!

  • @AIDevBytes
    @AIDevBytes A month ago

    PART 1 can be found here -> ULTIMATE Llama 3 UI: Dive into Open WebUI & Ollama! ruclips.net/video/D4H5hMMoZ28/видео.html

  • @laalbujhakkar
    @laalbujhakkar A month ago

    So, what's the point of "customizing" when I can just change the system prompt? Isn't it like copying /bin/ls to /bin/myls and feeling like I accomplished something?

    • @AIDevBytes
      @AIDevBytes A month ago

      This is a very simple example, but the purpose would be if you wanted to change multiple parameters as part of the model and use it in another application. For example, you could use the model with something like Open WebUI and then lock users into only using the model you customized with your new parameters.
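      For instance, a Modelfile along these lines bundles several settings into one named model (a hypothetical sketch; the system prompt and parameter values here are made up for illustration):

```
# Hypothetical Modelfile: bundle a base model with custom parameters
FROM llama3:latest

# Lock in a persona via the system prompt
SYSTEM "You are a support assistant for Acme Corp. Answer concisely."

# Tune sampling behavior
PARAMETER temperature 0.3
PARAMETER num_ctx 4096
```

      You would build it with `ollama create my-custom-model -f Modelfile`, and any app that selects my-custom-model then picks up all of these settings together, rather than relying on each user to set the system prompt themselves.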

  • @conerwei6720
    @conerwei6720 A month ago

    My computer did now have a good GPU. Can you make a video about how to use cloud Ollama with a local WebUI?

    • @conerwei6720
      @conerwei6720 A month ago

      Did not have a good GPU, sorry

    • @AIDevBytes
      @AIDevBytes A month ago

      Yes, I'll create a video on how to run Ollama in a cloud environment. I'll also look at how to set up Open WebUI in a cloud environment. Probably two different videos at later dates.

  • @thomasdeshayes9292
    @thomasdeshayes9292 A month ago

    Thanks. Can we use JupyterLab instead?

    • @AIDevBytes
      @AIDevBytes A month ago

      Yes, as long as the notebook is running on a computer with a GPU.

  • @EduShark
    @EduShark A month ago

    Can I use this with llama3 locally?

    • @AIDevBytes
      @AIDevBytes A month ago

      Yes, check out this video ruclips.net/video/03J_Z6FZXT4/видео.htmlsi=rzzfODttC8Qr5m8h

  • @gregortidholm
    @gregortidholm A month ago

    Great explanation 🙏

  • @0x-003
    @0x-003 A month ago

    Another question: which model is best suited for programming? I see many different ones, but it's a bit hard to find the best of them. If I already have llama3 70B installed, is there any reason to install codellama 70B, or are they 2 completely different things?

    • @AIDevBytes
      @AIDevBytes A month ago

      For coding-specific models, yes, codellama. Which parameter count to use is going to depend on your hardware capabilities. If you have hardware that can run the 70B model, I would use that.

    • @0x-003
      @0x-003 A month ago

      @@AIDevBytes Which model is good for overall questions? Like everyday things, etc.

    • @AIDevBytes
      @AIDevBytes A month ago

      For open-source models I like llama3 for general-purpose use. I also like Mixtral:8x7b if you have the hardware to run it.