Ollama Structured Output: REVOLUTIONISING AI API Backends!

  • Published: 26 Jan 2025

Comments • 38

  • @nexuslux • 1 month ago +1

    The best tutorial you have done. And this is an important milestone for ollama.

  • @HyperUpscale • 1 month ago +3

    Ollama just changed the game.

  • @tubaguy0 • 1 month ago +3

How did the LLM decide what age to assign to the dogs in the image? I'm still not sure precise prompting can overcome hallucinations reliably enough to trust the outputs.

  • @dareljohnson5770 • 1 month ago

    You are the man!

  • @pedroandresgonzales402 • 1 month ago

Very good that you added several languages ❤

  • @luisfable • 1 month ago

    Thanks! Amazing information

  • @RaviPrakash-dz9fm • 1 month ago +4

    Hey, how is it different from this:
    from langchain_ollama import ChatOllama
    llm = ChatOllama(model=model_name, temperature=0)
    llm_json_mode = ChatOllama(model=model_name, temperature=0, format="json")
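
    For contrast, a minimal sketch of the difference being asked about, assuming Ollama 0.5+ with the ollama and pydantic Python packages; the model name and the Pet fields are placeholders, not taken from the video. format="json" only guarantees syntactically valid JSON, while passing a JSON schema to format constrains the output to specific fields:

    # Minimal sketch, assuming Ollama 0.5+; "llama3.2" is a placeholder model.
    from ollama import chat
    from pydantic import BaseModel

    class Pet(BaseModel):
        name: str
        animal: str
        age: int

    # JSON mode: only guarantees valid JSON; the shape still depends on the prompt.
    json_mode = chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Describe one pet as JSON."}],
        format="json",
    )
    print(json_mode.message.content)

    # Structured output: the schema itself constrains generation, so the reply
    # can be validated straight into the Pydantic model.
    structured = chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Describe one pet."}],
        format=Pet.model_json_schema(),
    )
    print(Pet.model_validate_json(structured.message.content))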

  • @tomasbusse2410 • 1 month ago

    This is really great. Still thinking about how to best use this.

  • @60pluscrazy • 1 month ago

    WOW!!! 🎉

  • @rubenrodenascebrian3855 • 1 month ago

Great video! Could you explain how you generate the audio tracks in different languages? Thank you very much!

  • @togai-dev • 1 month ago +4

Thank you for this video, Mervin. Big fan here. Quick question: which library do you use to print the structured output the way you do?

    • @TheSalto66 • 1 month ago +1

      You must use the latest release of Ollama (0.5.1) and upgrade the libraries: 'pip install ollama --upgrade', 'pip install pydantic --upgrade'

    • @togai-dev • 1 month ago

      @TheSalto66 Thanks. I actually meant the terminal, the standard output.
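
    Which library the video actually uses for the coloured terminal output is not confirmed in this thread; one common option, sketched here under that assumption, is rich (pip install rich) together with a pydantic v2 model from the upgraded packages mentioned above:

    # Sketch only: pretty-printing a validated Pydantic object on stdout with rich.
    from pydantic import BaseModel
    from rich import print_json

    class Pet(BaseModel):
        name: str
        animal: str
        age: int

    pet = Pet(name="Rex", animal="dog", age=3)

    # print_json renders syntax-highlighted, indented JSON in the terminal.
    print_json(pet.model_dump_json())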

  • @techchef08 • 1 month ago +2

It still does not output consistently in JSON. I tried refactoring the code, added more error handling, and extracted the JSON via a regex, and still got inconsistent results. Someday it will get there, I'm sure; it's not quite there yet, however.

    • @ishwaragoudapatil9654 • 18 days ago

      At the end of whatever prompt you have used, put this: Begin your answer with "{"
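
    A sketch of the workaround described in this thread: append the 'Begin your answer with "{"' nudge to the prompt and keep a regex fallback for pulling the JSON object out of any extra prose. The prompt and model name are placeholders, not from the video:

    # Sketch of the prompt-nudge plus regex-extraction fallback described above.
    import json
    import re

    from ollama import generate

    prompt = (
        "List one dog with fields name, breed and age as JSON. "
        'Begin your answer with "{"'
    )
    response = generate(model="llama3.2", prompt=prompt)  # placeholder model
    text = response["response"]

    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        # Grab the span from the first "{" to the last "}" if the model added prose.
        match = re.search(r"\{.*\}", text, re.DOTALL)
        data = json.loads(match.group(0)) if match else None

    print(data)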

  • @nabeelsalahuddin3117 • 1 month ago

    Great vid

  • @iltodes7319 • 1 month ago

    good job

  • @user-wr4yl7tx3w • 1 month ago +1

But given that OpenAI and Claude can do this already, why do we need ollama for it?

    • @forzaspeedster • 1 month ago +3

      Idk, because you can use it for free without relying on an external API.

    • @username4294967296 • 1 month ago +1

      It's private, completely under your control, offline, free forever, and deterministic.

  • @chiusto • 1 month ago +1

How do I run this on a Windows system? Should I install a Linux system? 🤔🤔

    • @RocketLR • 1 month ago +1

      bruh it’s python.. it doesn’t matter 💀

  • @emimix • 1 month ago +1

    Good video. You forgot to include the link for the code.

    • @MervinPraison • 1 month ago +2

      Thanks for letting me know. Now added

  • @alx8439 • 1 month ago +4

Funny that this whole "structuredness" is achieved by just giving the LLM an example in the system or user prompt of what is expected. This is what is called "multi-shot". And it is by no means an achievement of ollama, as the title suggests. It is something every LLM can do, using any inference backend (see the sketch after this thread).

    • @RaviPrakash-dz9fm • 1 month ago

      There's a lot of cleaning that ollama is handling

    • @alx8439 • 1 month ago

      @RaviPrakash-dz9fm I'm not so sure. Even if they did, by doing that they are taking LLM controls away from you and hiding some prompt manipulation, which makes it harder to debug.

    • @hand-eye4517 • 1 month ago

      @alx8439 lmao the ignorant speaks without knowing

    • @anubisai • 1 month ago +1

      You're wrong about your assertions here, bud....

    • @RaviPrakash-dz9fm • 1 month ago

      @anubisai From what I have seen in some other libraries that give structured output, it's just minor prompting and a lot of regex. Not sure what is done here.
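
    For the thread above, a sketch of what the raw request looks like, assuming a local Ollama 0.5+ server on its default port 11434 and a placeholder model name. The JSON schema travels in a dedicated "format" field of the request body rather than as example text in the prompt; how the server enforces it internally is not shown here:

    # Sketch: calling Ollama's /api/chat endpoint directly with a JSON schema.
    import requests

    schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    }

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.2",  # placeholder model name
            "messages": [{"role": "user", "content": "Describe one dog."}],
            "format": schema,
            "stream": False,
        },
        timeout=120,
    )

    # Non-streaming responses return a single JSON object with the message.
    print(resp.json()["message"]["content"])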

  • @SonGoku-pc7jl • 1 month ago

Wow, please teacher, do the same with JS if it is possible ;) And can you add the Spanish audio track like in other videos?? ;) Thanks for everything!

  • @annonymbruger • 1 month ago

Just another hello-world example. This is not revolutionary; it's just plain simple. Level up and try to focus on some hard edge cases. As developers we still spend most of our time on repetitive tasks AI can't yet solve for us.

    • @shiweijie9903 • 1 month ago

      Would like to know which examples AI still can't solve for developers.