Llama2.java: LLM Integration with a 100% Pure Java File

  • Published: 11 May 2024
  • An airhacks.fm (airhacks.fm) conversation with Alfonso Peterssen (@TheMukel) about:
    Alfonso's early programming experience and his participation in the IOI (stats.ioinformatics.org) competition, studying computer science and functional programming under Martin Odersky, internships at Google and Oracle Labs working on compilers and on the Espresso (www.graalvm.org/latest/refere...) project, a JVM implemented in Java,
    Espresso was mentioned in "#208 GraalVM: Meta Circularity on Different Levels" (airhacks.fm/#episode_208), "#194 GraalVM, Apple Silicon (M1) and Clouds" (airhacks.fm/#episode_194), "#167 GraalVM and Java 17, Truffle, Espresso and Native Image" (airhacks.fm/#episode_167) and "#157 The Ingredients of GraalVM" (airhacks.fm/#episode_157),
    porting LLVM (llvm.org) to pure Java in one class, integrating large language models (LLMs) in Java by porting the Llama model from C to Java,
    GPU acceleration with TornadoVM (www.tornadovm.org),
    TornadoVM appeared in "#282 TornadoVM, Paravox.ai: Java, AI, LLMs and Hardware Acceleration" (airhacks.fm/#episode_282),
    the performance of the Java port being within 10% of the C version, the huge potential of integrating AI and LLMs with enterprise Java systems for use cases such as fraud detection, the Java port being a roughly 1,000-line self-contained implementation with no external dependencies, and the need for more resources and support to further develop Java LLM integration,
    the llama2.java (github.com/mukel/llama2.java) project
    Alfonso Peterssen on Twitter: @TheMukel
  • Science
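The episode emphasizes that the whole Java port is a self-contained, roughly 1,000-line implementation with no external dependencies. As a flavor of what such a single-file inference engine involves (an illustrative sketch only, not code from llama2.java), here is the token-selection step that ends each decoding iteration: a temperature-scaled softmax over the model's output logits, plus the greedy argmax used when sampling deterministically.

```java
// Illustrative sketch of the token-selection step in a dependency-free
// LLM inference loop. Names and structure are hypothetical, not taken
// from the llama2.java source.
public class GreedySampler {

    // Greedy (argmax) decoding: pick the index of the largest logit.
    static int sampleArgmax(float[] logits) {
        int best = 0;
        for (int i = 1; i < logits.length; i++) {
            if (logits[i] > logits[best]) best = i;
        }
        return best;
    }

    // Temperature-scaled softmax: turns raw logits into probabilities.
    // Subtracting the max logit first keeps Math.exp numerically stable.
    static float[] softmax(float[] logits, float temperature) {
        float max = Float.NEGATIVE_INFINITY;
        for (float l : logits) max = Math.max(max, l);
        float[] probs = new float[logits.length];
        float sum = 0f;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = (float) Math.exp((logits[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    public static void main(String[] args) {
        float[] logits = {1.0f, 3.5f, 0.2f};
        System.out.println(sampleArgmax(logits)); // prints 1
    }
}
```

In a real loop these few lines sit after a forward pass through the transformer; keeping such steps in plain arrays and loops is what lets the entire port stay in one file with no third-party libraries.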
