LLM Structured Output with Local Haystack RAG and Ollama

  • Published: 20 Oct 2024

Comments • 6

  • @ThomasWawra-f8e · 7 months ago

    Awesome, thanks man! Works like a charm. I learned two things:
    1. Haystack -> never came across that before -> NICE!!!
    2. Using open source LLMs to extract structured data the Pydantic way (sketched below). Again: NICE!!!
    Thanks!
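
For readers curious about point 2, here is a minimal sketch of Pydantic-style structured extraction against a local Ollama model. The model name ("llama3"), the Invoice fields, and the sample text are illustrative assumptions, not taken from the video; localhost:11434 is Ollama's default API endpoint.

```python
# Minimal sketch: extract structured data from free text with a local
# Ollama model, validated through a Pydantic schema. Assumes Ollama is
# running locally and a model such as "llama3" has been pulled; the
# Invoice fields and sample text below are illustrative only.
import json
import requests
from pydantic import BaseModel

class Invoice(BaseModel):
    invoice_number: str
    total: float

prompt = (
    "Extract the invoice number and total from the text below. "
    "Respond only with JSON matching this schema: "
    f"{json.dumps(Invoice.model_json_schema())}\n\n"
    "Text: Invoice INV-042, amount due 199.99 USD."
)

# Ollama's /api/generate endpoint; format="json" constrains the model
# to emit valid JSON, which Pydantic then validates.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "format": "json", "stream": False},
    timeout=120,
)
response.raise_for_status()

invoice = Invoice.model_validate_json(response.json()["response"])
print(invoice)  # e.g. invoice_number='INV-042' total=199.99
```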

  • @mtflygel · 8 months ago

    Awesome stuff man! Your videos have really helped me build my own local RAG applications! :)
    Have you also explored deploying a RAG implementation in a hosted environment? Perhaps also with Haystack or Graphlit or something along those lines?
    Anyway, thanks for the great content!

    • @AndrejBaranovskij · 8 months ago +1

      Hey, thanks a lot. Sparrow API runs on top of FastAPI, so technically, it can be deployed anywhere :)
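
Since Sparrow's API surface isn't shown here, below is a minimal sketch of the FastAPI pattern the reply describes; the /extract route and payload are hypothetical, not Sparrow's actual endpoints. Any app built this way can be served by uvicorn locally, in a container, or on a cloud host.

```python
# Minimal sketch of a FastAPI wrapper around a local RAG pipeline.
# The endpoint name and request shape are hypothetical stand-ins,
# not Sparrow's real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ExtractRequest(BaseModel):
    text: str

@app.post("/extract")
def extract(req: ExtractRequest) -> dict:
    # A real app would invoke its RAG/extraction pipeline here;
    # the input is echoed back purely for illustration.
    return {"received": req.text}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```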

  • @JitendraKumar-uo4tg · 5 months ago

    Haystack vs LangChain - which one is better? What are the pros and cons?

    • @AndrejBaranovskij · 5 months ago

      I would say both are good; it depends. You always need to test against your own use case.