10 n8n Tips in 10 Minutes to 10x Your AI Automations

  • Published: Dec 25, 2024

Comments • 98

  • @SoloJetMan
    @SoloJetMan 1 month ago +10

    This dude has an impeccable knack for coming through with the most relevant and well-timed content!

  • @ifnotnowwhen2811
    @ifnotnowwhen2811 1 month ago +11

    This channel is incredibly informative and efficient: packed with valuable content, no fluff, just pure knowledge. Great job, Cole, on another outstanding knowledge share!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you so much - that means a lot! :D

  • @MohammadTalli1764
    @MohammadTalli1764 5 days ago +1

    00:00:00 Introduction
    00:00:20 Tip 1: Use Supabase for Scalable AI Agents
    00:01:15 Tip 2: Choosing the Right Large Language Models
    00:02:15 Tip 3: Extracting Text from Different File Types
    00:02:45 Tip 4: Referencing Previous Node Outputs
    00:03:30 Tip 5: Building AI Agents as API Endpoints
    00:04:30 Tip 6: Handling Multiple Items in a Single Node Output
    00:05:30 Tip 7: Using Data Pinning to Save Test Event Outputs
    00:06:00 Tip 8: Creating Error Workflows
    00:06:45 Tip 9: Using the Schedule Trigger to Automate Workflows
    00:07:30 Tip 10: Exploring the n8n Workflow Library
    00:08:30 Conclusion

  • @mikemansour4634
    @mikemansour4634 22 days ago +1

    Thank you! This is one of the most helpful videos for n8n out there. Your explanation is clear, and the examples make it so easy to understand. It's amazing how you simplify complex topics. Keep up the great work; your content is invaluable for the community!

    • @ColeMedin
      @ColeMedin  19 days ago

      Thank you so much Mike! That means a lot to me!

  • @johnsaxxon
    @johnsaxxon 28 days ago +1

    Woot! Happy to say I'm using 9 of these tips already. Gotta get those error workflows going. As always, thank you for your videos. Many of my skills started with, were rounded out by, or came entirely from you!

    • @ColeMedin
      @ColeMedin  26 days ago

      I love it - you are so welcome!

  • @josemadarieta3
    @josemadarieta3 1 month ago +1

    I'm seeing all kinds of videos popping up rehashing your work. The Supabase-based RAG is a great example. Just saw two today... what? 2 months after yours dropped? Keep leading the way my man.

    • @ColeMedin
      @ColeMedin  1 month ago

      Haha that's awesome, thanks Jose!

  • @moefayed
    @moefayed 1 month ago +2

    This was extremely helpful, thank you!

  • @IsaTimur
    @IsaTimur 1 month ago +1

    Awesome! Thx a lot! I've just set up pgvector instead of Supabase, which is working well and fully locally. Thx for all your tutorials, it's all handy and simple!

    • @ColeMedin
      @ColeMedin  1 month ago

      Glad it helped! Thank you for the kind words!

  • @lionhearto6238
    @lionhearto6238 1 month ago +1

    This is what I needed, thank you!
    More n8n content if possible!

    • @ColeMedin
      @ColeMedin  1 month ago

      Of course! Yes - more n8n content coming very soon!

  • @mksmurff
    @mksmurff 1 month ago +1

    Thanks!

    • @ColeMedin
      @ColeMedin  1 month ago +1

      You are so welcome - thank you so much for your support!

  • @seanolivas9148
    @seanolivas9148 1 month ago +6

    When you say you will have an AI that will have the knowledge of all these workflows, is that like a fork of Bolt where you ask it to build a particular workflow for you? That would be awesome

    • @ColeMedin
      @ColeMedin  1 month ago +2

      That is one of the end goals! To start, though, it will just be able to reference an existing knowledge base of workflows to give you ones similar to a workflow you describe wanting to build, and it will have extended knowledge of n8n compared to a base LLM. But then yes, the primary goal eventually is to have it build out full workflows for you.

  • @user-nbfkxngjmyb
    @user-nbfkxngjmyb 1 month ago +1

    Wonderful content! Please keep up the good work!

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you! I sure will! :D

  • @PrashadDey
    @PrashadDey 1 month ago +1

    You're truly awesome. Thanks for the tips.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      You are so welcome!

  • @hernanlazaro2766
    @hernanlazaro2766 1 month ago +1

    Best content about n8n

    • @ColeMedin
      @ColeMedin  1 month ago

      Thank you very much :D

  • @Saint_Iori
    @Saint_Iori 1 month ago +1

    You're so cool. Thanks, Cole!

    • @ColeMedin
      @ColeMedin  1 month ago

      You're welcome! Thank you!

  • @mauricemendy
    @mauricemendy 1 month ago +1

    Lots of good tips, thank you for that!

    • @ColeMedin
      @ColeMedin  1 month ago

      My pleasure! Glad it was helpful!

  • @sebastianpodesta
    @sebastianpodesta 1 month ago +1

    Keep 'em coming!! Are there any special considerations for updating n8n if it was installed with your AI kit?

    • @ColeMedin
      @ColeMedin  1 month ago +2

      Thanks Sebastian - will do!
      To update the n8n container in the local AI starter kit, you can run the command:
      docker compose pull
      This will update all the images to the latest versions, including n8n, if there is an update!
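      (As a minimal sketch, assuming the starter kit's standard docker-compose.yml, the full update sequence would look roughly like this; pulled images only take effect once the containers are recreated:)
      # pull the latest images referenced in docker-compose.yml
      docker compose pull
      # recreate any containers whose image changed, keeping the rest running as-is
      docker compose up -d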

  • @jeffreymoore1431
    @jeffreymoore1431 28 days ago

    Hi Cole. I was working in n8n cloud today and found they have a Grafana connector. I am dying to try it. Can't find any videos on it. I want to test it with Gemini. I used Gemini on your VectorShift project. My prompt was, "you are a data analyst", etc. So Gemini tells me about all the data analysis it can do, including regression. Sounds pretty advanced. Learned about Google BigQuery, which is one way to connect data to Gemini, but I think we can do the same with n8n or VectorShift. Super cool. Thanks for all the videos!

    • @ColeMedin
      @ColeMedin  26 days ago

      That is super cool! You are so welcome!

  • @zensajnani
    @zensajnani 1 month ago +1

    Amazing video, thank you so much!

    • @ColeMedin
      @ColeMedin  29 days ago

      Thank you - you bet!

  • @ammadkhan4687
    @ammadkhan4687 11 days ago +1

    Hi Cole, is it possible to fine-tune an Ollama model with your own dataset using n8n?

    • @ColeMedin
      @ColeMedin  9 days ago

      Great question! I haven't tried fine-tuning specifically within n8n, but you can call any fine-tuning API from n8n, so you certainly could! It would honestly probably be easier to follow a Python tutorial for this, though.

  • @nateherk
    @nateherk 1 month ago +1

    Another banger🔥

  • @KrisFromFuture
    @KrisFromFuture 1 month ago +1

    More N8N content please ❤

  • @christiansaur53
    @christiansaur53 1 month ago +1

    You have a great style of teaching. Are you launching a Skool or other coaching-based community?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Thank you very much! I have a Discourse community that I just started which I will be building up instead of a Skool group!
      thinktank.ottomator.ai

  • @lucastrallo7881
    @lucastrallo7881 1 month ago +1

    Good job, very useful tips...

    • @ColeMedin
      @ColeMedin  1 month ago

      Glad it was helpful! Thanks!

  • @loryo80
    @loryo80 1 month ago +1

    You forgot the Gemini models; Gemini Flash is truly surprising me, and I've started replacing 4o-mini with it in all my testing for almost free.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Wow if Gemini Flash is actually performing as well as GPT-4o-mini for you that's incredible! Thanks for sharing!

  • @alessiolanzillotta7929
    @alessiolanzillotta7929 9 days ago +1

    Do you think it is possible to run a llamafile LLM with n8n? If yes... do a video please!

  • @JeremieAITech-ob7th
    @JeremieAITech-ob7th 1 month ago +1

    I LOVE YOUR VIDEO ! Thank you so muuuuuch to exit !

  • @wassimdesign-dv7hp
    @wassimdesign-dv7hp 1 month ago +1

    Thank you Cole for this valuable advice. I'm just starting out in the AI agents world and I'm hesitating between two solutions: which is better and more cost-effective, n8n or Flowise locally?

    • @ColeMedin
      @ColeMedin  1 month ago +1

      You are so welcome! There is actually a really good opportunity to use them together. Flowise has better integrations with LLMs, and then n8n has better integrations with other services (Slack, Google Drive, Asana, etc.). So you can build your agents with Flowise and have the tool workflows in n8n! Both are very cost effective since you can self-host both.

  • @wpheelp
    @wpheelp 1 month ago

    Forgive me for asking off-topic, but I'm very curious: in what program do you create these fancy thumbnails for YouTube?

    • @ColeMedin
      @ColeMedin  1 month ago

      Good question! I use Photoshop!

  • @AndrewsPanda
    @AndrewsPanda 1 month ago +1

    Great video. I self-host n8n. Do you have these workflows available?

    • @ColeMedin
      @ColeMedin  1 month ago

      Thanks man! Self-hosting is definitely the way to go!
      These workflows were all created as little examples, except for the two larger ones that I have videos on, and I have those workflows downloadable as JSON files there!

    • @AndrewsPanda
      @AndrewsPanda 1 month ago +1

      @ColeMedin n8n saves so much time compared to coding from the ground up. Awesome content. Thanks mate, I'll check them out!

    • @ColeMedin
      @ColeMedin  1 month ago

      It sure does! Thank you!

  • @reklistube
    @reklistube 1 month ago +1

    Can you do a comparison video of n8n, Flowise AI, and Langflow? There are so many of these low-code style tools now, it's hard to know which one to use.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Yes I am actually planning on this for next month!

  • @jribesc
    @jribesc 1 month ago +1

    Thanks !!!!!

  • @soulacrity7498
    @soulacrity7498 1 month ago +1

    For the vector store, almost all the videos I have seen use the Pinecone vector store. How does Pinecone compare to Supabase? Is Supabase better than Pinecone? Is Pinecone not production-ready? Thx!

    • @ColeMedin
      @ColeMedin  1 month ago

      Pinecone is definitely production ready as long as you aren't on their free tier! I only prefer Supabase because it's nice to have the SQL database (for agent chat memory) and embeddings for RAG be in the same platform, but Pinecone actually starts to outperform Supabase once you get to a large number (hundreds of thousands) of vectors.

  • @gergobonda5130
    @gergobonda5130 1 month ago

    Hey,
    I'm having trouble creating an AI assistant with a Google Calendar tool and local Ollama as the AI agent. I tried to set up the description of the AI agent so it would handle my calendar correctly, but it keeps messing up the event title and the start and end times. I would really appreciate a video about it. How would you implement it?
    Thanks!
    Keep up the good work!

    • @ColeMedin
      @ColeMedin  1 month ago

      Sounds like an interesting use case! How exactly does it help you manage your calendar? I'd be curious to know more before I speak to what content I would create soon that could relate! Thanks!

    • @gergobonda5130
      @gergobonda5130 1 month ago +1

      @ColeMedin Since then, I managed to make it work. It can create events for me, check what's on my calendar, and so on...
      I have different flows for the calendar-handling agents, because the system prompts are specific to each one, and they respond with a fixed JSON which is the input to the calendar tool.
      My plan is to create a router agent in the main flow which decides which agent should handle the task. If I talk about creating an event, it passes the info to the respective agent, or if I'm just asking something that's in my vector database, it goes down a different path.
      But to be honest, it's a bit complicated with Ollama. If I use the new GPT it understands everything easily, but Ollama sometimes struggles to handle everything.

    • @ColeMedin
      @ColeMedin  29 days ago

      Glad you figured it out - sounds awesome! Yeah local LLMs are often not going to do as well as GPT-4o or other larger models like Claude 3.5 Sonnet. At least that's the case for now...

  • @MeTuMaTHiCa
    @MeTuMaTHiCa 1 month ago +1

    Thx, the ship towards Flowise. I'm waiting to dock in port

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Yes I am actually putting out Flowise content soon!

  • @ahmedmsalah1409
    @ahmedmsalah1409 1 month ago

    I couldn't use Gemini with the AI Agent and Supabase, any help please?

    • @ColeMedin
      @ColeMedin  1 month ago

      Is there a specific error you are getting?

  • @PIOT23
    @PIOT23 1 month ago +2

    Is n8n better than Flowise? I've been a Flowise user but am getting intrigued by the latest n8n content 🤔

    • @ColeMedin
      @ColeMedin  1 month ago +3

      Great question! There is actually a really good opportunity to use them together. Flowise has better integrations with LLMs, and then n8n has better integrations with other services (Slack, Google Drive, Asana, etc.). So you can build your agents with Flowise and have the tool workflows in n8n!

    • @PIOT23
      @PIOT23 1 month ago +1

      @ColeMedin That makes a lot of sense, thank you for explaining! Thanks for the great helpful content as well!

    • @ColeMedin
      @ColeMedin  1 month ago

      Awesome, you bet!

  • @stevensexton5801
    @stevensexton5801 1 month ago

    Cole, can your 'local-swarm-agent' be used as an n8n workflow? If so, can you show how?

    • @ColeMedin
      @ColeMedin  1 month ago

      You certainly could! I'm going to be making agent type workflows for n8n for some content in the future!

  • @josemadarieta3
    @josemadarieta3 1 month ago +1

    Also, I'm flabbergasted that Make doesn't also have a pin feature... or maybe I'm not. That's a deal-breaker for me.

  • @exileofthemainstream8787
    @exileofthemainstream8787 1 month ago +1

    Cole, why use n8n at all? Why not just create agents directly using the relevant APIs in VS Code with Python? Same goes for LangChain. Why even use any of these when an LLM can be called with simple Python code?

    • @ColeMedin
      @ColeMedin  1 month ago

      Super fair question - thank you! The biggest reason to use n8n is it's still the fastest way to build workflows that integrate with a bunch of different services and AI agents, even with the latest and greatest AI coding assistants out there. It's super easy for non-technical people to do amazing things with it and for more technical people like me (and you too I'm assuming), it still saves a lot of time!

    • @exileofthemainstream8787
      @exileofthemainstream8787 1 month ago

      @ColeMedin I'm non-technical actually. But when I look at n8n I don't see how it's more beneficial than using code. To me, every LLM gives direct access to their API, right? Why would the more efficient solution be to use third parties? I guess that's what I'm trying to understand. I have this same issue trying to understand LangChain.

    • @ColeMedin
      @ColeMedin  1 month ago

      Yeah I get it! Overall these abstractions are meant to save time because they handle a lot for you. But of course you have less room for customizing with these abstractions, so it's pros and cons between convenience/speed and customizability/transparency.

  • @edoardododoguzzi
    @edoardododoguzzi 1 month ago

    On my end "$json?" does not work with multiple triggers :S

    • @ColeMedin
      @ColeMedin  1 month ago

      The format is {{ $json.data }}
      replace data with the attribute you are trying to access
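      (As a rough illustration of the expression syntax, where "Webhook" is a hypothetical node name:)
      {{ $json.data }}                     reads "data" from the current node's input item
      {{ $('Webhook').item.json.data }}    reads "data" explicitly from the output of the node named Webhook, which helps when a workflow has multiple triggers or branches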

  • @kuyajon
    @kuyajon 1 month ago

    Solid content, I just don't need it right now.

    • @ColeMedin
      @ColeMedin  1 month ago

      That's totally fair! What are you looking for specifically?

    • @kuyajon
      @kuyajon 1 month ago

      @ColeMedin It's just too advanced for me :) I'm still very much a beginner, but I enjoy watching.

    • @ColeMedin
      @ColeMedin  1 month ago +1

      Fair enough, I appreciate the honesty!

  • @annasc8280
    @annasc8280 28 days ago

    Top!!!!

  • @TeedCapwel-z7t
    @TeedCapwel-z7t 1 month ago

    I sent you a purchase request email but no response, please reply and thank you.

    • @ColeMedin
      @ColeMedin  1 month ago

      Sorry which email was that?