OTM-AICAMP_2024-09-26: Dean, Corey, Logan

  • Published: Oct 4, 2024
  • ------------------------------------------------------------
    Host: Arunava (Ron) Majumdar, IBM
    Mary Grygleski, Callibrity
    Coordinators:
    Nitin Murali, Illinois Tech
    Nick Saldana, University of Wisconsin Green Bay
    Facility:
    David Giard, Microsoft
    Collaboration:
    Bill Liu, AI Camp
    Dave Neilson, IBM AI Alliance
    ------------------------------------------------------------
    AI Camp:
    www.aicamp.ai/...
    ------------------------------------------------------------
    Tech Talk: Make Model Alignment a Software Engineering Process
    Speaker: Dean Wampler (The AI Alliance)
    Abstract: Software developers are accustomed to using iterative, incremental, and repeatable processes. Adding Generative LLMs brings several new challenges:
    1. How can I tune a base model (or otherwise align it) in an iterative and incremental way, keeping all the benefits of working that way?
    2. How do I know the new model is actually better? How do I know there are no regressions in pre-existing behavior?
    This talk explores a new toolkit, InstructLab, for addressing #1 and some possible approaches to address #2.
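    One common approach to question #2 is a regression-style eval suite: run a fixed set of prompts against both the base model and the tuned candidate, and flag prompts the base model answered correctly but the candidate no longer does. The sketch below is illustrative only (not InstructLab's API); `call_model` is a hypothetical stub standing in for a real inference call.

    ```python
    def call_model(model: str, prompt: str) -> str:
        # Stub: a real implementation would call an inference endpoint.
        # The canned answers below include one injected regression for the demo.
        canned = {
            ("base", "capital of France?"): "Paris",
            ("candidate", "capital of France?"): "Paris",
            ("base", "2 + 2?"): "4",
            ("candidate", "2 + 2?"): "5",  # regression
        }
        return canned[(model, prompt)]

    # Golden set: prompts with known-good answers.
    GOLDEN = {"capital of France?": "Paris", "2 + 2?": "4"}

    def find_regressions(golden: dict) -> list:
        """Return prompts the base model got right but the candidate got wrong."""
        regressions = []
        for prompt, expected in golden.items():
            base_ok = call_model("base", prompt) == expected
            cand_ok = call_model("candidate", prompt) == expected
            if base_ok and not cand_ok:
                regressions.append(prompt)
        return regressions

    print(find_regressions(GOLDEN))  # → ['2 + 2?']
    ```

    Gating a tuned model's release on an empty regression list is what makes the tuning loop repeatable in the software-engineering sense.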
    / deanwampler
    ------------------------------------------------------------
    Tech Talk: Leveraging AI in KNIME with LLMs for Video Content Creation
    Speaker: Corey Weisinger (KNIME)
    Abstract: This talk will explore how KNIME's AI extension integrates with LLMs like ChatGPT, enabling customization through document stores and Retrieval-Augmented Generation (RAG). We'll demonstrate its practical use in generating first drafts of video scripts, highlighting how KNIME can streamline AI-driven content creation.
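    The RAG pattern the abstract mentions boils down to two steps: retrieve the most relevant documents from a store, then splice them into the LLM prompt as context. A minimal sketch, assuming a tiny in-memory store and naive keyword-overlap scoring (KNIME wires the same steps together as workflow nodes; nothing here is the KNIME API):

    ```python
    # Illustrative document store; a real setup would use a vector database.
    DOC_STORE = [
        "KNIME workflows are built by connecting nodes on a canvas.",
        "Retrieval-Augmented Generation grounds LLM answers in your documents.",
        "Video scripts usually open with a hook and end with a call to action.",
    ]

    def retrieve(query: str, store: list, k: int = 2) -> list:
        """Rank documents by how many lowercase words they share with the query."""
        q = set(query.lower().split())
        scored = sorted(store,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(query: str) -> str:
        """Assemble the retrieved context and the question into one LLM prompt."""
        context = "\n".join(retrieve(query, DOC_STORE))
        return f"Use only this context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("How do I draft a video script with an LLM?"))
    ```

    In practice the keyword scorer is replaced by embedding similarity, but the retrieve-then-prompt shape is the same.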
    / corey-weisinger
    ------------------------------------------------------------
    Tech Talk: Increase inferential AI model effectiveness with latency and data protection
    Speaker: Logan Chung (Equinix)
    Abstract: In this talk, I will discuss: 1) current uses of inferential AI in Equinix datacenters; 2) order-of-magnitude timings for inferential AI executions; and 3) the need for base infrastructure that supports cost efficiency and new structures, such as Apache glacier.
    / logan-chung-64190ba0
    ------------------------------------------------------------
