Quad NVIDIA Tesla P40 - Stable Diffusion Benchmark - Budget 24GB GPU for AI

  • Published: 14 Nov 2024

Comments • 3

  • @repixelatedmc • 6 days ago

    Quick question: why even get a P40 if it has to run at FP32 and has performance equivalent to a 3060, which can run at FP16 (half the VRAM)?

    • @blackm3285 • 6 days ago

      @repixelatedmc Because it's cheaper. I bought these cards at 820 CNY (115 USD) per card, but the price has since doubled. Also, 24GB of physical VRAM makes it more future-proof.

    • @blackm3285 • 6 days ago

      By the way, using FP16 on GPU can't use less VRAM, only become faster, because the VRAM requirements is depending on models you using, and the SD models you can find now usually are "purned FP16", if your GPU don't support FP16, it can be load on GPU and run it as FP32 by stuffing zero on missing precision. I tested this when community still provide FP32 and FP16 models, and using FP32 model will become a little bit faster.