The best tutorial you have done. And this is an important milestone for ollama.
Ollama just changed the game.
How did the LLM decide what age to assign to the dogs in the image? I'm not sure precision prompting can overcome hallucinations reliably enough to trust the outputs yet.
You are the man!
Really nice that you included several languages ❤
Thanks! Amazing information
Hey, how is it different from
from langchain_ollama import ChatOllama
llm = ChatOllama(model=model_name, temperature=0)
llm_json_mode = ChatOllama(model=model_name, temperature=0, format="json")
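Not the OP, but here's a rough sketch of the difference as I understand it (assuming ollama-python >= 0.4 and Ollama 0.5+; the model name is just a placeholder): format="json" only nudges the model to emit valid JSON, while the new structured outputs accept an actual JSON schema, so the reply is constrained to match your Pydantic model.

from ollama import chat
from pydantic import BaseModel

class Dog(BaseModel):
    name: str
    age: int

response = chat(
    model="llama3.2",  # placeholder model
    messages=[{"role": "user", "content": "Tell me about a dog named Rex."}],
    format=Dog.model_json_schema(),  # pass the schema itself, not just "json"
)
dog = Dog.model_validate_json(response.message.content)
print(dog)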
This is really great. Still thinking about how to best use this.
WOW!!! 🎉
Great video! Could you help me and explain how you generate audio tracks in different languages? Thank you very much!
Thank you for this video, Mervin. Big fan here. Quick question: can you tell me which library you use to print structured output like that?
You must use the latest release of Ollama (0.5.1) and upgrade the libraries: 'pip install ollama --upgrade' and 'pip install pydantic --upgrade'.
@@TheSalto66 Thanks. I actually meant the terminal, i.e. the standard output.
It still doesn't output JSON consistently. I tried refactoring the code, adding more error handling, and extracting the JSON via regex, and the results are still inconsistent. I'm sure it will get there someday, but it's not quite there yet.
At the end of whatever prompt you're using, add this: Begin your answer with "{"
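For anyone trying this, here's roughly what that workaround looks like in practice (just a sketch; the prompt and model name are made up), combined with the regex extraction mentioned above as a fallback:

import json
import re
from ollama import chat

prompt = 'List two dog breeds with their typical lifespans as JSON. Begin your answer with "{"'
reply = chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])  # placeholder model
text = reply.message.content

match = re.search(r"\{.*\}", text, re.DOTALL)  # grab the outermost JSON-looking span
data = json.loads(match.group(0)) if match else None
print(data)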
Great vid
good job
But given that OpenAI and Claude can already do this, why do we need Ollama for it?
idk, because you can use it for free without relying on an external API.
It's private, completely under your control, offline, free forever, and deterministic.
How do I run this on Windows? Should I install a Linux system? 🤔🤔
bruh it’s python.. it doesn’t matter 💀
Good video. You forgot to include the link for the code.
Thanks for letting me know. Now added
Funny that this whole "structuredness" is achieved just by giving the LLM an example of what's expected in the system or user prompt. This is what's called "multishot". It's by no means an achievement of Ollama, as the title suggests; it's something every LLM can do, on any inference backend.
There's a lot of cleaning that Ollama is handling.
@@RaviPrakash-dz9fm I'm not so sure. Even if they do, they're taking LLM controls away from you and hiding some prompt manipulation, which makes it harder to debug.
@@alx8439 lmao, the ignorant speaks without knowing
You're wrong about your assertions here, bud....
@@anubisai From what I've seen in other libs that provide structured output, it's just some light prompting and a lot of regex. Not sure what's happening here.
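For context, the prompt-only ("multishot") approach being debated here looks roughly like this (a sketch, not a claim about Ollama's internals; the model name is a placeholder). Nothing enforces the schema, which is exactly the difference from passing a JSON schema via the format parameter:

from ollama import chat

system = 'Answer only with JSON shaped like this example: {"name": "Rex", "age": 4}'
reply = chat(
    model="llama3.2",  # placeholder model
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Describe the dog in one JSON object."},
    ],
)
print(reply.message.content)  # may or may not be valid JSON; nothing guarantees it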
Wow, please teacher, do the same with JS if possible ;) And could you add the great Spanish audio track like in your other videos?? ;) Thanks for everything!
Just another hello-world example. This isn't revolutionary; it's just plain simple. Level up and try to focus on some hard edge cases. As developers, we still spend most of our time on repetitive tasks AI can't yet solve for us.
I'd like to know which examples AI still can't solve for developers.