Flowise + n8n - The BEST No Code + Local AI Agent Combo

  • Published: 27 Jan 2025

Comments • 165

  • @ColeMedin
    @ColeMedin  1 month ago +8

    There was a glitch in my editing around the 2:40 mark so I had to take a tiny part out of the video, my bad everyone! That is why it is a bit choppy there. Thanks to those of you who pointed it out!
    The Local AI and n8n sections are LIVE in the oTTomator Think Tank (also home to the oTToDev community!) - head over and come be a part of this super fast growing community!
    thinktank.ottomator.ai
    Local AI Section:
    thinktank.ottomator.ai/c/loca...
    n8n Section:
    thinktank.ottomator.ai/c/n8n/

    • @cuvy500
      @cuvy500 1 month ago

      Reminder: The URL of "Local AI Section" is not fully pasted here.

    • @BirdManPhil
      @BirdManPhil 2 days ago

      @colemedin can you please add langfuse to this

  • @robfinito4671
    @robfinito4671 1 month ago +12

    I've been scouting the whole YouTube universe for this EXACT video, only to find it here... Thanks

  • @AI-GPTWorkshop
    @AI-GPTWorkshop 1 month ago +18

    Super excited for our future collaboration, Cole!

    • @Benj-n7t
      @Benj-n7t 1 month ago

      Super excited for you guys, @coleMedin

    • @loicbaconnier9150
      @loicbaconnier9150 1 month ago +4

      Funny, I just put a comment on your channel with Cole Medin's code link before seeing you will collaborate.
      Add Nate and it will be a monster collaboration

    • @entranodigital
      @entranodigital 1 month ago

      Totally agree with you on this. It's gonna be a legendary trio @@loicbaconnier9150

    • @weissahmad8675
      @weissahmad8675 1 month ago +3

      This is awesome. You and Zubair are the best there is right now in this space. Looking forward to this collaboration

    • @Benj-n7t
      @Benj-n7t 1 month ago +2

      Super cool. You guys are my two fav content creators in this space. Looking forward to it.

  • @ward_jl
    @ward_jl 1 month ago +2

    Super valuable video. Flowise and n8n are my favorite tools to build AI applications with!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thanks man! Yeah they're great!

  • @DigitalAlchemyst
    @DigitalAlchemyst 1 month ago +3

    YOOOO, I can't wait to see the collab with Zubair. You two are top-tier creators in this space; stoked for this one

    • @Benj-n7t
      @Benj-n7t 1 month ago +2

      Agreed.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you man! I can't wait either!

  • @johngoad
    @johngoad 1 month ago +1

    Nice, I have been using Flowise since the beginning. It really blew up for me when I started calling it via API from other applications

  • @digital.artistic.solutions
    @digital.artistic.solutions 1 month ago +1

    Great work! Your workflows are a real game-changer. I’ve expanded my local AI stack with Supabase and LocalAI instead of Postgres, Qdrant, and Ollama. LocalAI is OpenAI API compatible and works seamlessly with the OpenAI nodes in n8n. It’s such a huge improvement. Definitely a valuable addition! #keepAIlocal

    • @ColeMedin
      @ColeMedin  1 month ago

      Sounds awesome! Nice!!

    • @digital.artistic.solutions
      @digital.artistic.solutions 1 month ago

      @@ColeMedin Absolutely! Building a self-hosted AI stack with Supabase, n8n, Flowise, LocalAI, Open WebUI, AllTalk TTS, ComfyUI, and SearXNG is definitely a solid foundation for a local AI setup. ;) This idea would be sick content for more videos! I'm totally stoked to see all the cool projects and explanations you'll share. Let's level up our knowledge of local AI solutions together! 🚀💪

  • @alx8439
    @alx8439 1 month ago +1

    Thanks for taking the time to put together this tutorial

    • @ColeMedin
      @ColeMedin  1 month ago

      You bet! Glad you like it!

  • @eyewitness4560
    @eyewitness4560 1 month ago +1

    Quick note: if you are one of the 3 people like me who use Windows, have an AMD card, and some technical know-how, you can get all this running on your GPU using WSL and Ubuntu 22.04, as that has ROCm support. I run Ollama and OWUI in the WSL instance and run the rest in Docker Desktop on Windows. It takes a bit of faffing about to get the ports and integration up, but your previous two vids and an AI assistant absolutely enable anyone who can reasonably prompt or follow orders.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      That's amazing - thank you for the guidance on this! I don't have an AMD card myself, but I know a lot of people who got one for gaming at one point and now want to do LLM inference with it, so that's super helpful.

  • @xaviruiz8345
    @xaviruiz8345 1 month ago +1

    Great content Cole!! Thanks so much for your videos, I'm learning a lot 😊 Idea for your next videos: n8n, Ollama, and the Llama 3.2 vision model. I didn't find any content about it!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you so much! Great suggestion :D

  • @shyamvai
    @shyamvai 22 days ago

    How do you deploy this Flowise flow as an actual locally executable application on the local machine, so I can have something like a genie always on standby on my desktop?

    • @ColeMedin
      @ColeMedin  20 days ago

      Good question! The flow will have to stay within Flowise, but you can turn it into an API endpoint (by defining a webhook for the flow trigger) and have that always running on your machine.
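For anyone wanting to script against that endpoint: a minimal sketch (not from the video) of calling a Flowise chatflow's prediction API from Python. The host, port, and chatflow ID below are placeholder assumptions - copy the real values from the chatflow's API view in your Flowise instance.

```python
# Hedged sketch: POST a question to a Flowise chatflow's prediction endpoint.
# Host, port, and chatflow ID are placeholders, not values from the video.
import json
import urllib.request

def build_prediction_request(host: str, chatflow_id: str, question: str):
    """Return the URL and JSON body for Flowise's prediction REST endpoint."""
    url = f"{host}/api/v1/prediction/{chatflow_id}"
    body = json.dumps({"question": question}).encode("utf-8")
    return url, body

def ask_flow(host: str, chatflow_id: str, question: str) -> dict:
    """Send the question to the chatflow and return the parsed JSON reply."""
    url, body = build_prediction_request(host, chatflow_id, question)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires Flowise running locally):
# ask_flow("http://localhost:3000", "your-chatflow-id", "Summarize my docs")
```

Keeping that script (or a cron job around it) running is effectively the "always on standby" setup described above.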

  • @T33KS
    @T33KS 1 month ago +6

    Great hands-on content as always! I have asked many creators the following question, but haven't really gotten a convincing answer:
    Why use Flowise with n8n when you can replicate the same RAG flow in n8n? In other words, n8n has all the tools needed to build complex RAG agents. (I love Flowise btw, just trying to find a reason not to ditch it completely for an n8n-only flow)

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! And this is a great question.
      So even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from Brave web API searching (just as an example) and other sources that you could handle in n8n, but only through a very custom setup.
      I hope that makes sense!

    • @T33KS
      @T33KS 1 month ago

      @ColeMedin thanks for your reply. I see your point. For prototyping it makes absolute sense. Not for production though, since I am getting API response latency and lag from Flowise. So I'm not sure what the best approach is for production and deployment (Bedrock maybe?)

  • @christopheboucher127
    @christopheboucher127 1 month ago +1

    It would be great to add long-term memory like Zep for agents, and the graph memory vector database from Zep too. Great job, thanks for all your precious content!

  • @johanw2267
    @johanw2267 1 month ago +2

    Keep pumping out dem videos.

  • @jeremiealcaraz
    @jeremiealcaraz 1 month ago +2

    OMG! You're the best!!!

  • @natureleadership101
    @natureleadership101 1 month ago +1

    Help needed. I want to add a button called 'Improve text'. When clicked, the user's input in a long text field should be improved by an AI model in a popup, which then asks the user whether to use the AI-generated text or the existing text. How do I do this on my web app too?

  • @alx8439
    @alx8439 1 month ago +2

    On Linux with a recent Docker installation you should also run "docker compose" instead of "docker-compose". Compose was previously a separate binary, but now it is a Docker plugin

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Got it, thanks for calling that out!

  • @kenneththompson4450
    @kenneththompson4450 1 month ago +1

    Thank you for the content, I’m using your bundles along with trying to integrate Fabric (an open-source framework for augmenting humans using AI). I’d love to see you add this to the bundle, or if I ever get time maybe I will lol. Ty

    • @ColeMedin
      @ColeMedin  1 month ago

      That's a great idea!

  • @andianders3565
    @andianders3565 1 month ago +2

    I really like your content. Could you sometime cover how to update your starter kit whenever you've added features to it? Anyway, your content allows me as a noob to get my hands on local AI. Cheers!

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you very much!! To update the starter kit you basically just have to run the same commands you used to install it - but it'll go MUCH quicker since you already have everything installed.

    • @andianders3565
      @andianders3565 1 month ago

      @@ColeMedin Thank you so much! I'm really impressed by the opportunities n8n offers me for my daily tasks.

    • @andianders3565
      @andianders3565 1 month ago

      Hi @@ColeMedin, I found an issue with n8n's next-form nodes in active workflows. Next forms only showed up after I added - WEBHOOK_URL=127.0.0.1:5678 to the environment settings in the docker-compose file. After adding this, next-form elements showed up as expected even in active workflows.
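For reference, the fix described above maps onto the compose file roughly like this - a hedged sketch where the service name and image are illustrative, and the WEBHOOK_URL value is copied exactly as given in the comment:

```yaml
services:
  n8n:
    image: n8nio/n8n            # illustrative; use the image from your stack
    environment:
      - WEBHOOK_URL=127.0.0.1:5678   # value as given in the comment above
    ports:
      - "5678:5678"
```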

  • @marcshojaei9420
    @marcshojaei9420 1 month ago +1

    Hi, thanks for your effort. I have a suggestion: could you please add NocoDB to this installation?

  • @lucaseatp
    @lucaseatp 1 month ago +5

    What do you think about Dify? Great video

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! I haven't tried Dify yet but it is on my list of the dozens of platforms to try out! haha
      I have heard great things about it

  • @mikehenderson-b5h
    @mikehenderson-b5h 14 days ago +1

    Local everything and open source everything is THE FUTURE. Protect it at all costs or else see the end of days

  • @josejaner
    @josejaner 1 month ago +1

    Will it be possible with Swarm to have a hybrid of models? For example: the triage or first-level agent using 'gpt-4o' or 'mini', and the agents that manage tools (last level) using Ollama with models like 'qwen2.5-coder', which works very well and fast? 🤔 I don't know if this is possible; my intention is to reduce costs and increase perimeter data security.

    • @ColeMedin
      @ColeMedin  1 month ago

      Yes this is definitely possible! I would look into the Agent Flows in Flowise if you're looking to do that there!

  • @mtofani91
    @mtofani91 1 month ago

    Hey Cole! It's always great to see a new video here - thanks for giving us an overview of Flowise. I’m curious: what are your hardware specs for this test? Also, how do you see the competition among these low-code tools? There are tons of options out there, but n8n still seems like one of the most robust and solid choices when it comes to nodes/widgets for our AI agents. Saludos!

  • @johnclarkon369
    @johnclarkon369 1 month ago +1

    🎉 you know what, you're a genius

    • @ColeMedin
      @ColeMedin  1 month ago

      Haha thank you very much!

  • @phatboi9835
    @phatboi9835 1 month ago +1

    It's great that you included all these great tools. But what should we do if we already have Ollama and Open WebUI? Is there a way to take them out of the build so it won't download them again?

    • @ColeMedin
      @ColeMedin  1 month ago

      Great question - yes you certainly can! You just have to take those services out of the docker-compose.yml file!

  • @prabhic
    @prabhic 1 month ago +1

    Thank you very much. Very helpful, just what is needed, at the right time

    • @ColeMedin
      @ColeMedin  1 month ago

      You bet! Glad it is helpful!

  • @fusilad
    @fusilad 1 month ago +1

    Thanks for the heads up about the default Ollama context size

  • @tyronemiles4345
    @tyronemiles4345 1 month ago +1

    Cole, would it be possible to add Nextcloud to the compose file for local file sharing? If so, what will I need to do?

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah looking here:
      github.com/nextcloud/docker
      Looks like there is a Docker image you can put into the Docker compose file to add it to the stack! You would just have to add it as another service in the docker-compose.yml file. Which Windsurf/Cursor would definitely be able to help you with!

  • @TheAleksander22
    @TheAleksander22 1 month ago +4

    You've got some overlapping audio starting at 2:45 (Flowise Overview section)

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you for the heads up! I am editing this part out now!

    • @TheWiiZZLE
      @TheWiiZZLE 1 month ago +1

      @@ColeMedin Thanks for the quick response, I was about to DM you about it

    • @AriellePhoenix
      @AriellePhoenix 1 month ago +2

      Thought I had another tab playing lol

  • @unknownhacker9416
    @unknownhacker9416 1 month ago +4

    Hi Cole... May I know your PC specifications...?

    • @ColeMedin
      @ColeMedin  1 month ago

      I share it here!
      thinktank.ottomator.ai/t/my-2x-3090-gpu-llm-build/1714

  • @ChrisMartinPrivateAI
    @ChrisMartinPrivateAI 1 month ago +1

    Perfect sequencing. Been holding off doing a deep dive into custom LangGraph coding, and NOW I know why. Prototype (maybe more) in Flowise. VectorShift is very easy to learn and gets agent-type flows working, but it is on a managed tier. Is Flowise the bolt.new of LangGraph? That is the direction I am heading.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! Yeah VectorShift is great but unfortunately it isn't open source.
      If Flowise had an AI integrated to build workflows like Zapier does, it certainly would be the Bolt.new of LangGraph! That would be incredible...

  • @AnimikhAich
    @AnimikhAich 1 month ago +1

    Question: once I've built a local Flowise workflow, what is the best way to export that and deploy it? I'd like to deploy it on a server. How do I do that? Any suggestions are welcome.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Great question! You can click on the code icon in the top right of the builder to export the agent. You can turn it into an API endpoint to use with any frontend you want to build!

  • @mike-s7c
    @mike-s7c 1 month ago +1

    Great channel Cole! Can you do a session on how to replicate this on a VPS?

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you Mike! Yeah I am planning on doing that in the future!

  • @roccobooysen3611
    @roccobooysen3611 1 month ago +1

    Could you perhaps make a video about the tech architecture setup for these platforms? As an example: I understand that you can just install Flowise and n8n, but there is security that should be implemented on the network so that it doesn't expose your entire network.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Yeah this is a great suggestion for a video, definitely agree it's needed. Thank you!

  • @Tennysonism
    @Tennysonism 1 month ago

    Cole, I’m enjoying this project! I set up the last published environment with n8n and everything is working great. How do I pull the updates from git that include Flowise? Can I update my current build or do I need to start from the beginning?

  • @The-ism-of-isms
    @The-ism-of-isms 1 month ago +2

    If you make a playlist series on what we can achieve with the Flowise + n8n combination, that would be a game changer for the community 🙌😍. NO ONE IS DOING THAT, I HOPE YOU DO IT. Thank you boss, teach us more Flowise and n8n combinations

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah that sounds great, thank you for the suggestion and the kind words!

  • @AE-yp4kc
    @AE-yp4kc 1 month ago +3

    If you are planning on using n8n for commercial purposes, be prepared to open your wallet to pay a license fee. From their "Sustainable Use License: You may use or modify the software only for your own internal business purposes or for non-commercial or personal use. You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes."

    • @MarkLewis00
      @MarkLewis00 1 month ago

      Composio has a free forever plan

  • @meateaw
    @meateaw 1 month ago +1

    That local AI video was fantastic; I just wish there was a better set of tools in n8n for obtaining and converting documents into a common format.
    I fought SO LONG trying to get OneDrive and Google Drive to convert the files for me. I'm just going to have to implement my own web service that can convert files into whatever format I want. It kind of defeats the point of a no-code system if I have to go and build my own tools ;)

    • @ColeMedin
      @ColeMedin  1 month ago

      I totally get the frustration - hopefully we'll see those types of tools built out soon for the platform! It's one of the reasons for RAG heavy agents I use other platforms like Voiceflow or Flowise.

  • @NoCodeKing69
    @NoCodeKing69 1 month ago +1

    Can't you do all of this purely in n8n? What is the reason to add another tool like flowise to the stack?

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! And this is a great question.
      So even though you can create anything in n8n that you could with Flowise, it still makes sense to build a lot of things with Flowise (especially to prototype your agents quickly) because Flowise has so many more integrations built on top of LangChain than n8n. As an example, all the document loaders in LangChain are available in Flowise, so you can quickly ingest documents into a vector DB from Brave web API searching (just as an example) and other sources that you could handle in n8n, but only through a very custom setup.
      I hope that makes sense!

  • @ahmedalikhan6154
    @ahmedalikhan6154 1 month ago +1

    Is there any way to create agents like that through coding in Python? Or can we convert these automations into Python?

    • @ColeMedin
      @ColeMedin  1 month ago

      Great question! Flowise is built on top of LangChain which is a Python framework, so anything you build with this can be converted to Python code using LangChain!

    • @alx8439
      @alx8439 1 month ago

      Apart from Langflow, there's already a plethora of agentic libraries for Python.

  • @TheDJHollywood
    @TheDJHollywood 1 month ago +2

    At the 2:45 mark you have a duplicate track. I spent a couple of minutes looking for another open browser. Then I paused this one and both voices were gone, not just one.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Yeah found it, thanks! I'm editing it out now.

    • @TheDJHollywood
      @TheDJHollywood 1 month ago +1

      @@ColeMedin I found it funny that I was looking for a second browser even AFTER the duplicate track was done, but I thought it was just behind and you would talk again.

  • @farbensan6120
    @farbensan6120 1 month ago +1

    Hi Cole! What's the minimum required for your hardware to run AI workflows/agents locally? Salamat!

    • @ColeMedin
      @ColeMedin  1 month ago

      Great question! It depends a lot on the specific local LLM you want to run. For smaller LLMs (

  • @sebastianpodesta
    @sebastianpodesta 1 month ago +1

    Hi Cole, Can I update my old Ai Kit installation to this?

    • @ColeMedin
      @ColeMedin  1 month ago +3

      You sure can Sebastian! You can run the same Docker compose command you used to start it for the first time and it'll update everything and you won't lose any workflows or anything else you've built with the kit!

    • @sebastianpodesta
      @sebastianpodesta 1 month ago +1

      @ thanks a lot for all your work!!

    • @ColeMedin
      @ColeMedin  1 month ago

      You bet!

  • @loicbaconnier9150
    @loicbaconnier9150 1 month ago +1

    Would the killer docker compose integrate Flowise and:
    KEY COMPONENTS
    Ollama: Run state-of-the-art language models locally
    n8n: Create automated AI workflows
    Qdrant: Vector database for semantic search
    Unstructured: Advanced document processing
    Argilla: Data labeling and validation
    Opik: Model evaluation and monitoring
    JupyterLab: Interactive development environment
    LiteLLM as LLM router
    What do you think ?

    • @ColeMedin
      @ColeMedin  1 month ago

      I love it! Some of these tools I am definitely thinking about adding into the local AI starter kit as I continue to expand it!

  • @BirdManPhil
    @BirdManPhil 1 month ago +1

    my current k8s:
    argocd - GitOps
    traefik - front-end reverse proxy
    prometheus - stack stats
    grafana - stats dashboard
    redis - cache and queue management for n8n workers
    pgadmin4 - UI for PostgreSQL
    postgresql w/pgvector - Postgres with vector ability for hybrid semantic RAG
    qdrant - optional vector database for pure vector RAG
    langfuse - monitor LLM usage stats
    n8n - tools for Flowise
    flowise - main agentic flow builder
    i want to add langflow at some point

  • @kalilbarry3773
    @kalilbarry3773 1 month ago

    Hey Cole, what do you think about Langflow? Is there a reason you chose Flowise instead?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      They are super similar tools! Honestly there wasn't a huge reason I picked Flowise over Langflow - I would have to dive into using Langflow more before I could really do an in-depth comparison.

  • @frosti7
    @frosti7 1 month ago

    I'm actually looking for a cloud solution that's suitable for a team; is there a recommendation?

    • @ColeMedin
      @ColeMedin  1 month ago

      Cloud solution for what exactly? For running local AI?

  • @jaggyjut
    @jaggyjut 1 month ago

    With Cursor AI and Windsurf, which can code, is there still going to be demand for no-code platforms?

    • @ColeMedin
      @ColeMedin  1 month ago

      Super fair question! Definitely until LLMs can consistently write error-free code, no-code platforms like Flowise and n8n are still going to be by far the fastest way to create your agents. But even once we get to that point, it can still take a while to set up your environment, get the LLM to truly understand what you are trying to build, and execute on it.

  • @GustavoRabelo93
    @GustavoRabelo93 1 month ago +1

    What are the recommended specs for the GPU? I have a 3080ti, would that be enough?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Depends on the local LLM you want to run! A 3080ti will definitely be able to run ~20b parameter models and smaller pretty fast! You could try 32b parameter models as well but that is where it would start to be pushing it.

    • @seniorcaution
      @seniorcaution 1 month ago +1

      Should be able to run 20b without quantization. I've been able to run 32b and even some 70b models with quantization on my 12GB card

    • @alx8439
      @alx8439 1 month ago

      @@seniorcaution Nah, the 3080ti has 12GB of VRAM. Models with original weights (non-quantized) are 16-bit = 2 bytes per parameter. There's no way you can fit a 20B model (40GB of weights) into 12GB of VRAM. But if you take a 4-5 bit quant, then yes
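The arithmetic in this thread is easy to sanity-check with a quick helper (weights only - KV cache, context, and runtime overhead come on top, so treat these as lower bounds):

```python
# Rough VRAM needed just for the weights:
# GB ≈ parameters (in billions) × bits per weight / 8.
def weights_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

print(weights_vram_gb(20, 16))  # fp16 20B -> 40.0 GB, far over a 12 GB card
print(weights_vram_gb(20, 4))   # 4-bit quant -> 10.0 GB, fits a 3080 Ti
```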

  • @CraigIkerd
    @CraigIkerd 1 month ago +1

    Hi,
    I am right at the very beginning of trying to work out the most effective, cost-effective, and easy-to-learn way of creating, chaining, and deploying teams of no-code AI agents... Hope that makes sense; I'd really value any advice/pointers... This is the first video I am watching and I have subscribed... Hope you can help - thanks in advance :o)

    • @ColeMedin
      @ColeMedin  1 month ago

      Welcome to the world of building AI agents! This is definitely a good starting point. I would recommend starting with tools like n8n and Flowise just to get an understanding of what goes into building agents quickly!

  • @wulfrum5567
    @wulfrum5567 1 month ago

    How do I update my n8n from docker?

    • @ColeMedin
      @ColeMedin  1 month ago

      If you run the same command you did to initially set up the stack it will automatically update n8n!

    • @wulfrum5567
      @wulfrum5567 1 month ago

      @ColeMedin you mean docker compose? What would happen to my old data?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Yes that's right! And all your old data remains since you are just rebuilding to update, not starting from scratch!

  • @mitchellrcohen
    @mitchellrcohen 1 month ago

    Thanks!

  • @go5495
    @go5495 1 month ago +1

    Of course you are an AI agent - being so unearthly helpful is not a quality all humans have

  • @Luke-Barrett
    @Luke-Barrett 1 month ago

    Can't this all be done in n8n ?

    • @ColeMedin
      @ColeMedin  1 month ago

      Fair question, Luke! In the end everything in Flowise can be done in n8n, but certainly not as fast. Loading documents for RAG for example is one thing you can do MUCH faster with Flowise, just because of all the integrations and prebuilt tools we have there since it's built on top of LangChain. So what I like doing a lot of times is prototyping with Flowise and then eventually moving things into n8n - it's an easy transition too.

  • @Kaya-n1m
    @Kaya-n1m 1 month ago +1

    Dope video. If u upgrade ur mic, ur views will blow up.

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you, I appreciate the feedback!

  • @sneakymove
    @sneakymove 1 month ago

    The problem with Flowise is you could always be on a waiting list. I have not been able to download or start anything for three weeks now

    • @ColeMedin
      @ColeMedin  1 month ago

      Hmmm... Flowise is open source so you can just install it and run it now like I showed in the video! Are you maybe referring to a cloud offering they have?

  • @AriellePhoenix
    @AriellePhoenix 1 month ago +1

    Flowise aren't accepting new sign ups, they have a waitlist 😭 looks cool, though!

    • @ColeMedin
      @ColeMedin  1 month ago +1

      You can download and run it yourself locally like I did in this video! I believe that is just for their cloud offering!

  • @technicalcontrol6026
    @technicalcontrol6026 1 month ago +1

    I'm no coder, just learning - I am technical though, but it would be super helpful if steps were simplified. So many assumptions isolate users of your site to a small demographic of knowledgeable coders. The gap between coding knowledge and AI users will inevitably close fast, and creative influences who think only about possibilities outside of applications will emerge fast once more localised agents and functions become specific to our needs. We don't seem far away from being able to express in language what we want and have it done through many agents, like a Cursor world order on steroids - not only writing code but checking its application and UI with other agents. Maybe this is happening already? But I just want to come out of the rabbit hole of new learning sometimes and say, 'create a local AI window with multiple agents I can select from (images, not code) like superheroes with specific tasks: upload, scrape and search, uncensored and self-learning from my data references...' Now I know that can all be done, but the simplest way of doing this is going to cross the finish line. It's always about simplicity. 'Let there be light' and there was light...

  • @augmentos
    @augmentos 1 month ago

    Yasss

  • @LA_318
    @LA_318 1 month ago +2

    @colemedin I've really been enjoying your videos and trying to replicate what you've been doing. I think it's very valuable. One thing I'm specifically trying to do myself is create a workflow that takes screenshots of what I'm doing and then uses the latest Llama Vision LLM to convert that to a document that can be saved for later, so I can ask my local AI about projects I've worked on, and details that I may have missed when taking notes for my builds for other projects that don't necessarily have anything to do with AI. I've been having trouble setting this up, but if this interests you I'd love to see you make a video about how to make something like that work. I have other ideas on how to integrate that to make other things, but if I can't figure this out then I certainly can't move on to more complicated workflows.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you! Love your use case too. I certainly haven't created enough videos on vision based generative AI so this would be something awesome to cover!

    • @alx8439
      @alx8439 1 month ago

      I'm pretty sure there are already some open source projects doing this:
      1) take a screenshotting tool that can be executed from the CLI on your operating system. There are plenty of them + some are even included with your OS
      2) write a simple script (bash / cmd) to take a screenshot and save it to a file. Then, as the next step in that script, call Ollama with the latest Llama vision model to process that screenshot and extract whatever information you want. Save the Ollama output as a text file. The script itself is just 3 lines of code.
      3) now you have a pair: a screenshot file and a text file describing what is on the screenshot. You can save them anywhere - local folder, Google Drive, database
      4) put that script to be executed on a schedule, like once a minute
      5) reuse any RAG ingestion pipeline to make embedding vectors out of the screenshot description and put it + the original screenshot + datetime into any vector database of your choice
      6) use that vector database and embedding model as a RAG tool with any model
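Step 2 of the outline above could be sketched in Python against Ollama's REST API. The model name, host, and file path here are assumptions; Ollama's /api/generate endpoint accepts base64-encoded images for multimodal models:

```python
# Hedged sketch: send a saved screenshot to a local Ollama vision model and
# get back a text description. Model name and paths are placeholders.
import base64
import json
import urllib.request

def build_describe_payload(model: str, prompt: str, image_bytes: bytes) -> bytes:
    """JSON body for Ollama's /api/generate with one attached image."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }).encode("utf-8")

def describe_screenshot(path: str, model: str = "llama3.2-vision") -> str:
    """POST the screenshot to a local Ollama instance, return the description."""
    with open(path, "rb") as f:
        body = build_describe_payload(model, "Describe this screenshot.", f.read())
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body, headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running with a vision model pulled):
# print(describe_screenshot("/tmp/screenshot.png"))
```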

  • @АлександрУшанов-щ3е

    No subtitles?

    • @ColeMedin
      @ColeMedin  1 month ago

      They should be generated by default! Are you not seeing them when you usually do?

    • @АлександрУшанов-щ3е
      @АлександрУшанов-щ3е 1 month ago +1

      @@ColeMedin They appeared, but not right away - somewhere around six hours later, although before they appeared almost immediately. And thanks for the videos!

    • @ColeMedin
      @ColeMedin  1 month ago

      Great! You bet! Not sure why it took extra time this time around.

  • @marcosbeliera1
    @marcosbeliera1 1 month ago

    Nice! But it looks very similar to n8n Agents.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thanks! And yeah it is similar, part of why they integrate so well together. Flowise makes it easier to do some things like load documents into RAG because of all the different document loaders there are that aren't available in n8n. Just one example!

  • @andrew-i9e1b
    @andrew-i9e1b 20 days ago

    Can't get this to run. I followed the instructions, but it is not clear where the database details come from - do I need to sign up somewhere, or do I just make them up like the API key? Not very clear to someone new at this at all!!!

  • @Maxim_Kulakov
    @Maxim_Kulakov 1 month ago +1

    You need to think about people with studio headphones watching your video, because the audio effects you have throughout the whole video ARE REALLY LOUD!

    • @Kaya-n1m
      @Kaya-n1m 1 month ago

      His mic is low, so the audio isn’t balanced. If he upgrades his mic to be closer to his mouth, his views will increase a ton. If you need help, hit me up.

  • @ShubzGhuman
    @ShubzGhuman 1 month ago

    well im awake

  • @TechieOsvio
    @TechieOsvio 1 month ago

    Bro, how do I move my chatbot to a non-public domain, for my personal use?

  • @alx8439
    @alx8439 1 month ago

    Also, all these so-called "agent flows" I'm seeing are kind of flawed. They don't have loopbacks and self-reflection. Imagine that internet search results didn't return anything meaningful. In an ideal situation the agent should detect that and try another round of search using different wordings. But instead, whatever garbage was returned is used

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah that is fair! Though those kinds of things can be built into these workflows! The way I see it, it's up to the user to make that happen.

  • @alx8439
    @alx8439 1 month ago

    The logging / transparency of what is happening in flowise is shit.

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah I agree, that's why I tend to use it for just prototyping

  • @marciocunha-g6w
    @marciocunha-g6w 17 days ago +1

    bro go touch grass

  • @terrorkiller645
    @terrorkiller645 1 month ago +1

    Can you make like a series playlist where you give a full in-depth explanation of how to download these things, as well as make a space where people can ask questions if they get stuck on something?

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah I am working on this! And over at thinktank.ottomator.ai I have a community for people to post these kinds of questions!

  • @mahdihussein2826
    @mahdihussein2826 1 month ago

    Can you share how to deploy a RAG agent on Instagram using n8n, please?

  • @PyJu80
    @PyJu80 1 month ago +3

    Don't know if you believe in the universe. But I come to YouTube when I hit a roadblock. Then who pops up? My main man Cole. The F**king G.O.A.T. With the exact solution to my problem. EVERY TIME. Hats off. I don't subscribe to many things, but I get excited for your next project. NGL, I can see bolt.new or someone coming to you with a lot of money for this. You're all over YouTube; even Claude 3.5 is impressed with you and your news 😁.
    Man I love your work. One day, one day, you will integrate VSC and allow for live file changes, and mate, I wouldn't blame you for taking the $2b and running 🤑🤑🤐🤐. 👊👊

    • @PyJu80
      @PyJu80 1 month ago +1

      Ohhhhh, and if you can convert the above into an OpenAI-like base URL to use as a model in your OttoDev, then I'm flying to the USA to bow to the G.O.A.T.

    • @ColeMedin
      @ColeMedin  1 month ago

      Haha thank you so much - that means a lot to me! $2b would be a fever dream... I would take it and keep teaching on YouTube :)

    • @PyJu80
      @PyJu80 1 month ago +1

      @@ColeMedin don't forget me on the lads' holiday. All-expenses spring break on you. 😅😅😂😂❤🎉