Very nice, informative video, enjoyed it a lot. Would have liked to see a bit more on how to calculate and implement some of these metrics, though. For example, how hallucinations are quantified, since it seems to me that's a very difficult thing to measure.
Good video - I like IBM’s approach to day-2 model operations. Their automated monitoring around LLMs builds on their leading approach to monitoring/versioning of traditional ML models. Great stuff, Briana!
Hi, fellow IBMer here. Congratulations on all the achievements in the Generative AI space. I've got a question: how do you calculate ROUGE, BLEU and other reference-dependent metrics when in production, where you don't have an expected example to draw from?
I haven't played around with LLM + RAG, but when I think about it, it sounds like I can just use an LLM and pair it with my office's wiki, then chat to get my information!! Purrrffecto!
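That wiki-plus-LLM idea is essentially a minimal RAG pipeline: embed or index the wiki pages, retrieve the ones most similar to the question, and stuff them into the prompt. A toy sketch of the retrieval step (hypothetical wiki snippets, simple bag-of-words cosine similarity instead of a real embedding model, and no actual LLM call):

```python
import re
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words term counts; a real system would use embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(n * b[tok] for tok, n in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, docs, k=1):
    """Return the k wiki passages most similar to the question."""
    q = vectorize(question)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(question, docs):
    """Stuff the retrieved passages into the prompt as grounding context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical office-wiki snippets, purely for illustration.
wiki = [
    "The VPN client is configured through the IT self-service portal.",
    "Expense reports are due by the fifth business day of each month.",
]
print(build_prompt("How do I set up the VPN?", wiki))
```

The resulting prompt would then be sent to whatever LLM you use; the retrieved context is what keeps the answers grounded in your wiki rather than the model's training data.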
RAG is just so poorly done most places. Azure is OK. I know Microsoft's ex-CTO Sirosh, who built their cognitive search. They're the only ones I've found who don't suck horribly. And don't even talk to me about OpenAI's Knowledge Bases, or things will get vitriolic and scatological very quickly.
THANK YOU! I find your tutorials helpful and informative and … not full of fluff! xx ❤️❤️
Such an important concept: keeping any LLM up to date with information from the internet.
Proper training for both humans and machines somehow seems out of control, don't you agree? How do you manage that?
Hey, then what is RAGA about?
BLEU = bilingual evaluation understudy
en.wikipedia.org/wiki/BLEU
ROUGE = Recall-Oriented Understudy for Gisting Evaluation
en.wikipedia.org/wiki/ROUGE_(metric)
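At their core, both metrics measure n-gram overlap between a candidate text and a reference: BLEU is precision-oriented (overlap divided by candidate length) while ROUGE is recall-oriented (overlap divided by reference length). A bare-bones unigram sketch, leaving out what real implementations add (higher-order n-grams, brevity penalty, multiple references, stemming):

```python
from collections import Counter

def unigram_overlap(candidate, reference):
    """Clipped count of candidate tokens that also appear in the reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    return sum(min(n, ref[tok]) for tok, n in cand.items())

def bleu1_precision(candidate, reference):
    """BLEU-style unigram precision: overlap / candidate length."""
    return unigram_overlap(candidate, reference) / len(candidate.split())

def rouge1_recall(candidate, reference):
    """ROUGE-style unigram recall: overlap / reference length."""
    return unigram_overlap(candidate, reference) / len(reference.split())

ref = "the cat sat on the mat"
cand = "the cat sat on a mat"
print(bleu1_precision(cand, ref))  # 5 of 6 candidate tokens match -> 5/6
print(rouge1_recall(cand, ref))    # 5 of 6 reference tokens covered -> 5/6
```

The "clipping" in the overlap count is what stops a candidate from gaming precision by repeating a matching word; both denominators make clear why these metrics need a reference text to compare against.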
Many thanks for your wonderful video! 🙏🙏🙏