Building a research assistant with o3-mini

  • Published: Feb 1, 2025

Comments • 10

  • @MarianoJPonce • 9 minutes ago

    Thank you legend!

  • @JackFelke-kz6cc • 1 hour ago

    thanks lance!

  • @61Marsh • 6 hours ago

    nice summary, thanks for sharing

  • @biochemcompsci • 7 hours ago

    Well Done

  • @WinonaNagy • 8 hours ago

    Great summary of o3-mini! The cost-efficiency and speed are impressive. How does it handle large context management and integration with other models? Keen to see some benchmarks!

  • @RohanKumar-vx5sb • 7 hours ago

    amazing

  • @RollingcoleW • 3 hours ago

    @langchain I would like to see how we can add a coding sandbox to this tool. Let's say I ask the research assistant to look up new documentation from LangGraph. I would like to see the AI model also be able to code and execute a few examples and save the logs. Maybe this runs in a Docker container sandbox for safety. I would find this simple benchmarking feature soooo useful.
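The execute-and-log loop described above can be sketched in a few lines. This is a minimal illustration, not part of the tool: a plain subprocess stands in for the Docker container the commenter suggests, and the helper name `run_in_sandbox` is hypothetical.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(code: str, log_dir: Path, timeout: int = 10) -> str:
    """Execute a generated code snippet and save its output as a log.

    A real deployment would run this inside a Docker container for
    isolation; a subprocess is used here so the flow is easy to follow.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # kill runaway examples
    )
    log = (
        f"exit={result.returncode}\n"
        f"stdout:\n{result.stdout}"
        f"stderr:\n{result.stderr}"
    )
    (log_dir / "run.log").write_text(log)  # persist for later benchmarking
    return log

# Example: run a snippet the assistant might have generated.
with tempfile.TemporaryDirectory() as d:
    log = run_in_sandbox("print(sum(range(10)))", Path(d))
    print(log)
```

Swapping the `subprocess.run` call for a `docker run --rm` invocation (or the Docker SDK) would give the safety boundary the comment asks for without changing the surrounding logic.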

  • @rishavranaut7651 • 5 hours ago

    For building applications with AI agents, token cost is really high when the LLM does tool calling and back-and-forth communication just to get an initial answer, which may not even be correct. Reducing token prices is the need of the hour, rather than introducing a new model every other day.
    Current models are already sufficient to integrate into these workflows.

  • @DaveVT5 • 7 hours ago • +1

    What’s the GUI tool? LangGraph?