AWS re:Invent 2020: Deploying PyTorch models for inference using TorchServe

  • Published: Jan 7, 2025

Comments

  • @mehdia5176
    @mehdia5176 3 years ago +7

    Thank you for this instructive presentation. Could you please provide a link to the GitHub repository?

  • @MuhammadAli-mi5gg
    @MuhammadAli-mi5gg 3 years ago +1

    Thanks for the well-instructed demo.
    Can I deploy my SuperResolution model in SageMaker with TorchServe?

  • @karanshirke6538
    @karanshirke6538 2 years ago

    Awesome video, thanks man.

  • @soroushaalibagi8853
    @soroushaalibagi8853 a year ago +1

    Where are the files, like the model.py file?

    • @awssupport
      @awssupport a year ago

      Hey there! I'd encourage looking through the PyTorch GitHub repo for the most updated info on this service: go.aws/3QyW7m3. There, you can also open an issue for any questions or help. 📄 ^RM

    • @soroushaalibagi8853
      @soroushaalibagi8853 a year ago

      Hi there! Thanks for the reply. I'm looking for an example that uses the SageMaker SDK to deploy a pretrained PyTorch model, very similar to what this video shows (tar the model and upload it to S3). However, I was not able to find a working example that contains all the files. I'm especially interested in the functions in the model.py script (see the sketch after this thread). I would be grateful if you could share the files from the example in this video.

    • @awssupport
      @awssupport a year ago

      Hi there! If you'd like, feel free to ask our community of experts on re:Post: go.aws/aws-repost. 👥 For additional routes for help, you may also review these options: go.aws/get-help. 🛠️ ^LD
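
For anyone looking for the example files asked about above: the deployment shown in the video follows the standard SageMaker PyTorch serving conventions, where a handler script (model.py) defines model_fn, input_fn, predict_fn, and output_fn, and the model weights are tarred and uploaded to S3. The sketch below is a reconstruction under those assumptions, not the actual files from the video; the SuperResolutionNet architecture, file names, bucket name, role ARN, and framework versions are placeholders.

    # model.py -- minimal sketch of a SageMaker PyTorch inference handler
    # (TorchServe runs behind the scenes). SuperResolutionNet, tensor shapes,
    # and file names are hypothetical placeholders, not files from the video.
    import json
    import os

    import torch
    import torch.nn as nn


    class SuperResolutionNet(nn.Module):
        """Placeholder upscaling network; substitute your own architecture."""

        def __init__(self, upscale_factor=3):
            super().__init__()
            self.conv = nn.Conv2d(1, upscale_factor ** 2, kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(upscale_factor)

        def forward(self, x):
            return self.shuffle(self.conv(x))


    def model_fn(model_dir):
        """Load the weights that were packed into model.tar.gz."""
        model = SuperResolutionNet()
        state = torch.load(os.path.join(model_dir, "model.pth"), map_location="cpu")
        model.load_state_dict(state)
        model.eval()
        return model


    def input_fn(request_body, content_type="application/json"):
        """Deserialize the request payload into a tensor."""
        data = json.loads(request_body)
        return torch.tensor(data["inputs"], dtype=torch.float32)


    def predict_fn(input_tensor, model):
        """Run inference without tracking gradients."""
        with torch.no_grad():
            return model(input_tensor)


    def output_fn(prediction, accept="application/json"):
        """Serialize the prediction back to JSON."""
        return json.dumps({"outputs": prediction.tolist()})

Packaging and deploying then follows the "tar the model and upload it to S3" flow mentioned in the thread; a rough sketch with placeholder bucket and role values:

    # Package the weights first, e.g.:  tar -czf model.tar.gz model.pth
    import sagemaker
    from sagemaker.pytorch import PyTorchModel

    session = sagemaker.Session()
    model_data = session.upload_data(
        "model.tar.gz", bucket="my-bucket", key_prefix="superres"  # placeholder bucket
    )

    pytorch_model = PyTorchModel(
        model_data=model_data,
        role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role
        entry_point="model.py",
        framework_version="1.6.0",
        py_version="py3",
    )

    # Creates a real-time HTTPS endpoint backed by the handler functions above.
    predictor = pytorch_model.deploy(initial_instance_count=1, instance_type="ml.m5.large")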