Get up and running with local ChatGPT/gLLMs with Ollama in R

  • Published: 27 Oct 2024

Comments • 5

  • @danielsaldivia2570 · 6 months ago · +1

    Many thanks for this tutorial, Johannes!

  • @CanDoSo_org · 7 months ago · +1

    Hi, Johannes. Can we deploy it locally without an Nvidia GPU, say on a MacBook Pro?

    • @JBGruber · 7 months ago

      Yes! At 14:50 I show how you can disable the GPU dependency. It will be much slower though, as I also explain.
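
For readers who want to try this from R: below is a minimal sketch using the rollama package (written by the video's author). Whether Ollama uses a GPU is decided when the Ollama server itself is started (for example in the Docker setup shown in the video), not by the R code; on CPU the same calls simply run more slowly. The model name llama3.2 is only an example.

```r
# Minimal sketch: query a locally running Ollama server from R.
# Assumes the 'rollama' package and an Ollama server on localhost:11434;
# whether it runs on GPU or CPU is determined by how the server was started.
library(rollama)

ping_ollama()                           # check that the server is reachable
pull_model("llama3.2")                  # download a small model (example name)
query("Say hello in one sentence.", model = "llama3.2")
```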

  • @wesleylohoi · 4 months ago

    Hi Johannes,
    What a fantastic video! I am also an R lover, and I have a question after watching your video.
    Is it possible to tune the LLM from R as well? For example, could we load extra information into it, such as academic papers, reports, etc.?
    Cheers,
    Wesley

    • @JBGruber · 4 months ago

      I started working on this. What you are talking about is called RAG (retrieval-augmented generation), and it's definitely possible in R! I will publish a tutorial as soon as it's ready.
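
For the curious, here is one shape such a RAG workflow could take in R, sketched with httr2 against Ollama's HTTP API. The endpoint payloads and the nomic-embed-text / llama3.2 model names are assumptions for illustration; Johannes's upcoming tutorial may do this differently. The idea: embed the documents and the question, retrieve the most similar document, and paste it into the prompt as context.

```r
# Minimal RAG sketch against a local Ollama server (assumed on localhost:11434).
# Embeds a toy document store, retrieves the document closest to a question,
# and passes it to the chat model as context. Model names are examples.
library(httr2)

ollama <- "http://localhost:11434"

embed <- function(text, model = "nomic-embed-text") {
  resp <- request(paste0(ollama, "/api/embeddings")) |>
    req_body_json(list(model = model, prompt = text)) |>
    req_perform() |>
    resp_body_json()
  unlist(resp$embedding)
}

# toy "document store": in practice these would be chunks of papers, reports, ...
docs <- c(
  "Paper A: river fish in this region migrate upstream in early spring.",
  "Report B: solar panel output at the test site peaks around noon."
)
doc_emb <- lapply(docs, embed)

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

ask <- function(question, model = "llama3.2") {
  q_emb <- embed(question)
  best  <- docs[which.max(vapply(doc_emb, cosine, numeric(1), b = q_emb))]
  prompt <- paste0(
    "Answer the question using only this context:\n", best,
    "\n\nQuestion: ", question
  )
  resp <- request(paste0(ollama, "/api/generate")) |>
    req_body_json(list(model = model, prompt = prompt, stream = FALSE)) |>
    req_perform() |>
    resp_body_json()
  resp$response
}

ask("When do the fish migrate?")
```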