Nvidia Inference Microservices - AI Workbench NIM-Anywhere Project Components
- Published: Feb 7, 2025
- The video walks you through the containers and networking that make up the NIM-Anywhere project. The project is evolving, so the topology and components will change.
NVIDIA Inference Microservices (NIM) is a model for deploying inference engines as microservice endpoints. The video demonstrates this with the AI Workbench NIM-Anywhere project, available on GitHub. The project contains a set of applications that consume the NIM inference engine, rerankers, and embedding models as API endpoints running in the NVIDIA cloud, your cloud, a data center, or on local machines.
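Because NIM endpoints expose an OpenAI-compatible HTTP API, applications can consume them regardless of where the microservice runs. The sketch below shows what such a client call might look like; the base URL and model name are placeholder assumptions, not values taken from the NIM-Anywhere project.

```python
# Minimal sketch of calling a NIM endpoint's OpenAI-compatible
# chat-completions API. NIM_BASE_URL and MODEL are hypothetical
# placeholders; substitute your own endpoint and deployed model.
import json
import urllib.request

NIM_BASE_URL = "http://localhost:8000"   # assumed local NIM endpoint
MODEL = "meta/llama3-8b-instruct"        # example model name (assumption)


def build_chat_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }


def ask_nim(prompt: str) -> str:
    """POST the payload to the endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{NIM_BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same client code works whether the endpoint is hosted in the NVIDIA cloud, your own cloud, or a local container; only the base URL changes.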
github.com/NVI...