LlamaIndex Webinar: Build No-Code RAG with Flowise

  • Published: 20 Sep 2024

Comments • 21

  • @anilshinde8025 · 7 months ago · +8

    A great step for Flowise, integrating with LlamaIndex. One small addition needed: support for a local model or Ollama.

  • @EasyAINow · 6 months ago

    Great video showcasing how LlamaIndex and Flowise can work together. Thanks for doing this.

  • @jagusiff · 7 months ago

    A fantastic demo! A great tool for non-developers who see the promise of LLM apps and want a playground to conceptualize their ideas!

  • @mkarhade · 7 months ago · +1

    Absolutely brilliant, I think you just solved one of my biggest problems! Can't wait to get my hands dirty with it.

  • @carstenli · 7 months ago · +3

    Thanks for this. How does the integration with Ollama work?

  • @xrayhuang · 5 months ago

    Great video. Could you explain how to deploy this locally after prototyping in Flowise? It would be nice to be able to export Python code for production-level development.
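
    I can't confirm that Flowise exports LlamaIndex code, so this is only a rough sketch of what the equivalent hand-written pipeline might look like in LlamaIndex Python; the `./data` folder and the query string are placeholders, and the default setup assumes an OpenAI API key is available.

```python
# pip install llama-index
# Minimal LlamaIndex pipeline, roughly what a simple Flowise
# document-QA flow maps to when rewritten by hand in Python.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder (placeholder path).
documents = SimpleDirectoryReader("./data").load_data()

# Build an in-memory vector index and expose it as a query engine
# (uses OpenAI models by default, so OPENAI_API_KEY must be set).
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

print(query_engine.query("What is this document about?"))
```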

  • @MaliRasko · 7 months ago

    Man, I love this. I wish there were more ways to interact with LlamaIndex through a GUI, low-code style.

  • @cheahanqi964 · 3 months ago

    Awesome!

  • @sitedev · 7 months ago

    GOLD!

  • @nickjunes · 3 months ago · +1

    Can you use this tool with a local model? I love LM Studio because it can download a model and run it locally. It seems like none of these LLM flow tools can use local models, which is really limiting. What if we don't want to send our private information to ChatGPT or Claude? It would be great if this could work with a local model through LM Studio.

    • @richarddk · 3 months ago

      You can use Ollama with Flowise.
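
      For the LlamaIndex library itself (as opposed to Flowise's LlamaIndex nodes, whose options I can't confirm), a fully local setup via Ollama might look roughly like this; the model names are example assumptions and must already be pulled into a locally running Ollama server.

```python
# pip install llama-index llama-index-llms-ollama llama-index-embeddings-ollama
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Point LlamaIndex at a local Ollama server instead of a hosted API.
# "llama3" and "nomic-embed-text" are example models; pull them first
# with `ollama pull <model>`.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Everything below runs against the local models configured above.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("Summarize the documents."))
```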

  • @bambanx · 3 months ago

    Can Flowise read and write local files? Thank you.

  • @bertobertoberto3 · 7 months ago · +1

    🔥

  • @NewUser12345-i · 7 months ago

    Can you retrieve the number of tokens used from the API response?
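
    On the LlamaIndex (Python) side, token usage can be tracked with a callback handler; whether Flowise surfaces the same counts in its prediction API response I can't say. A minimal sketch, assuming an OpenAI-style tokenizer from tiktoken:

```python
# pip install llama-index tiktoken
import tiktoken
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager, TokenCountingHandler

# Count tokens for every LLM and embedding call made by the pipeline.
token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode
)
Settings.callback_manager = CallbackManager([token_counter])

# ... build an index and run a query as usual, then inspect the counters:
print(token_counter.prompt_llm_token_count)
print(token_counter.completion_llm_token_count)
print(token_counter.total_embedding_token_count)
```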

  • @awakenwithoutcoffee · 4 months ago

    Are we able to chunk PDFs/text per page?

    • @drmartinbartos · 4 months ago · +1

      @awakenwithoutcoffee I was just wondering if you found a solution for per-page chunking? Getting a citation page (or ideally a subsection of a page, say the top, middle, or bottom third, if a whole page is too large to fit in context after retrieval) would humanize the reference results.
      Altogether I'm a bit concerned about the extent to which chunking in RAG systems is only crudely optimised, if at all. For example, why a 20-character overlap? Some single molecule names are longer than that, meaning both adjacent chunks may contain incomplete names. Surely it's more semantically sensible to split at the nearest sentence end, or paragraph, or section. Yet some of that make-or-break semantic indexing and retrieval happens with crude, arbitrary default figures someone plucked from the air, hidden fairly deep in the library stack.

    • @awakenwithoutcoffee · 4 months ago

      @@drmartinbartos Hi there Dr. Martin,
      You are entirely right, sir, and that's one of the reasons I am studying hard to find a solution for this. The reason it isn't done straight off the bat is that it is simply much easier and more predictable to chunk by character size than to split on semantic meaning, the thinking being that the actual meaning of the content will be retrieved through vector search regardless of where it sits in the text. However, as you say, this makes accurate text retrieval almost impossible and is only useful in specific use cases. Implementing dynamic, semantically split chunking requires more sophisticated NLP techniques and can be more complex to manage. If you have a LinkedIn, feel free to post it so I can keep you updated once we find a solution.
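
      For reference, LlamaIndex already offers a step beyond fixed character windows: its default PDF reader returns one Document per page (with a page_label in the metadata), and SentenceSplitter breaks at sentence boundaries rather than mid-word. A minimal sketch; the chunk size and overlap are arbitrary example values, measured in tokens.

```python
# pip install llama-index
# Page-aware loading plus sentence-boundary chunking.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# The default PDF reader yields one Document per page, so each chunk
# inherits a page_label it can be cited by.
documents = SimpleDirectoryReader("./data").load_data()

# Split on sentence boundaries instead of raw character counts; the
# overlap here is measured in tokens, not characters.
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)

index = VectorStoreIndex.from_documents(documents, transformations=[splitter])
print(index.as_query_engine().query("Which page discusses the methodology?"))
```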

  • @tom2think · 7 months ago · +2

    Thanks, Jerry, for the content. IMO, as a dev with 20+ years under my belt, this tool, aimed at simplifying AI pipeline setups, doesn't quite hit the mark. It seems great for non-coders wanting to build simple stuff, but the moment you scale or add complexity, it turns into a tangled mess of links and boxes. I've seen plenty of no-code tools promising easy UI/app creation, but they often fall short, especially for sophisticated features. A neat addition would be making these tools easy to integrate with websites for basic chatbots, but for real-deal production? Doubtful. It feels like you can't build serious apps with this, much as Wix or Webflow are great for websites but not for complex applications. It shouldn't be seen as a full replacement for coding; it just ends up being a web of confusion. Just my 2 cents.

    • @dylantkl · 3 months ago

      You might be right, but I think this tool has its place. I won't discount it too soon as not everyone needs or wants to develop everything from scratch.

  • @mostafamosa3495 · 7 months ago

    n8n has a lot of nodes and gives you more tools.

  • @florentflote · 6 months ago