🐼 [AI E-commerce in Practice] How to Use Stable Diffusion to Generate the Same Character in the Same Street Scene from Different Angles | Exclusive Approach Revealed | Advanced SD E-commerce Applications | Stable Diffusion Character Design

  • Published: 24 Dec 2024

Comments •

  • @handsomewong6936
    @handsomewong6936 1 year ago +2

    Watched all your video tutorials in one sitting and learned a lot. Thank you for all your hard work!

    • @kexue
      @kexue  1 year ago

      Haha, thanks for the support!

  • @oscarhu2010
    @oscarhu2010 6 months ago

    Thanks for your videos. I've watched many of your works; they're made with great care and I've benefited a lot. Detailed and efficient, with no filler. A rare RUclipsr who is both professional and full of ideas! 👍👍👍

    • @kexue
      @kexue  6 months ago

      Thanks for the support, haha. Keep it up!

    • @wkl-w3w
      @wkl-w3w 5 months ago

      @@kexue 😢 I can't join the Discord. Could you give me a new link?

    • @kexue
      @kexue  5 months ago

      @@wkl-w3w Uh, I deleted the group. I had no time to moderate it and it was all ads.

    • @wkl-w3w
      @wkl-w3w 5 months ago

      @@kexue OK

  • @naonaoss
    @naonaoss 1 year ago +1

    Liked before watching ❤

    • @kexue
      @kexue  1 year ago

      Thanks for the support!

  • @makisekurisu_jp
    @makisekurisu_jp 1 year ago +3

    1. Generate the character with SD, remove the background, and build a character mask.
    2. Add the mask onto a Google Street View image and use SD to inpaint the masked region.

    • @kexue
      @kexue  1 year ago

      Yeah, that works too. Good idea.
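The two-step workflow in the comment above can be sketched with Pillow, assuming a background-removal tool (e.g. rembg) has already produced an RGBA cutout of the character; the function name and threshold here are illustrative, not from the video:

```python
from PIL import Image

def composite_with_mask(cutout_rgba, background, position=(0, 0)):
    """Paste a background-removed character (RGBA cutout) onto a
    street-view image, and also return the full-size binary mask you
    could feed to SD's inpainting to blend the character in."""
    # Binarize the alpha channel: any non-transparent pixel -> 255.
    mask = cutout_rgba.getchannel("A").point(lambda a: 255 if a > 0 else 0)
    # Composite the cutout onto a copy of the background.
    composite = background.copy()
    composite.paste(cutout_rgba, position, mask=mask)
    # Build the mask at the background's full size for inpainting.
    full_mask = Image.new("L", background.size, 0)
    full_mask.paste(mask, position)
    return composite, full_mask
```

The returned mask marks the character region; inverting it instead lets SD redraw only the surroundings while keeping the character fixed.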

  • @mvandomheweVandomhewe
    @mvandomheweVandomhewe 7 months ago

    This approach is brilliant!

  • @schneizellamperouge997
    @schneizellamperouge997 11 months ago

    At 5:20 in the video you mention the original image is 1328*800. Why change the preprocessing resolution from 512 to 800 rather than 1328? Beginner here, please explain 🙏🙏

    • @kexue
      @kexue  11 months ago

      In my experience, go with the smaller value; with the larger one the extracted line art can sometimes come out incomplete. I have an episode dedicated to how Canny works; you can search for it on my channel page.

  • @jinjin5940
    @jinjin5940 11 months ago

    Can Stable Diffusion create a fixed character and then generate images of it in different scenes?

    • @kexue
      @kexue  11 months ago

      To some extent, yes, but it's fairly hard. The more elements the character has, the harder it is to keep them consistent after switching scenes and repainting.

  • @oyundalaiborjgon8438
    @oyundalaiborjgon8438 1 year ago

    Quick question: how do you face-swap an image of a tilted side profile (where the far eye is only slightly visible)? I've tried several approaches and the eye always comes out malformed. What software or plugin works well? Could you make a tutorial, master?

    • @kexue
      @kexue  1 year ago

      The main reason side profiles come out badly is that the model was trained on relatively little side-profile data. The quickest effective fix is to try a different model; SDXL-class models may do better.

  • @lilillllii246
    @lilillllii246 10 months ago

    Thanks, do you have a link to the comfyui workflow related to this video?

    • @kexue
      @kexue  10 months ago

      drive.google.com/file/d/1d812QDwR_GTlxusBC1nODzYFfM9whVgs/view?usp=drive_link

  • @shanpoyang
    @shanpoyang 11 months ago

    Hi, I have another SD question. I've recently been using an XL checkpoint (the DreamShaper one), but ControlNet pose detection has no effect on the output, and image quality even gets worse (I downloaded several different XL OpenPose models). Does skeleton-based pose control not work well with SDXL right now, or did I miss a step? I never had this problem with 1.5.

    • @kexue
      @kexue  11 months ago +1

      SDXL checkpoints need SDXL-compatible ControlNets; 1.5 ControlNets won't work. An SDXL OpenPose ControlNet can be downloaded here: huggingface.co/thibaud/controlnet-openpose-sdxl-1.0 though I haven't used it myself. The author's demo is based on ComfyUI, and in an August reply he said this OpenPose doesn't support webUI: huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/discussions/4 I'm not sure whether it's supported now; give it a try if you need it.

    • @shanpoyang
      @shanpoyang 11 months ago

      @@kexue Thanks for the reply. I've downloaded several SDXL versions of ControlNet, so it's probably not a model issue. The interface others show is the same one you mentioned, so webUI may just not be supported. I don't know how to use the other tool yet; I'll have to learn it before I can try.

    • @kexue
      @kexue  11 months ago

      Yep, I also have ComfyUI tutorials you can refer to @@shanpoyang

  • @hygge-lagom-chill
    @hygge-lagom-chill 1 year ago

    You're amazing!

    • @kexue
      @kexue  1 year ago

      Haha, I'm just so-so~

  • @雷彩霞-y2f
    @雷彩霞-y2f 9 months ago

    Can ComfyUI use img2img to generate multi-angle images?

    • @kexue
      @kexue  9 months ago

      Hard to pull off; consistency would be mediocre.

  • @PSAndPowerBI
    @PSAndPowerBI 8 months ago

    Really great!

    • @kexue
      @kexue  8 months ago

      Thanks for the support!

  • @wen-ry6nk
    @wen-ry6nk 11 months ago

    Thanks so much for sharing all this great material. How do I sign up for your e-commerce course?

    • @kexue
      @kexue  11 months ago

      Please send your WeChat ID to kexuejia.studio@gmail.com, thanks.

  • @lilillllii246
    @lilillllii246 1 year ago

    Thank you as always. I'm trying to generate images of clothes without people, mannequins, backgrounds, or shadows, but it's challenging. Images of mannequins or people keep appearing. Is there a solution to this problem? Also, another question: if I have an existing clothing image and an image of a different model, is there a way to apply the existing clothing onto that model?

    • @kexue
      @kexue  1 year ago +1

      Thank you for your support. (My answer was translated by Google Bard.)
      First, the first question. The root cause is that most current large models (checkpoints) were trained on a significant proportion of human data, so in the AI's view clothing and people are strongly correlated: clothing has a high probability of being drawn with a person, and conversely limbs tend to be completed with clothing. A relatively workable solution I know of is to use ControlNet's Canny or Lineart model for control, then in img2img's inpainting mode use a mask to limit the redrawn area and modify the clothing's pattern. Of course, this requires an existing clothing image (SD-generated or a real photo).
      As for the second question: if conditions permit, photograph the existing clothing on a mannequin, then replace the mannequin with a real person (ruclips.net/p/PL4L5yXcAegdxYufOlaqVoSgzY8Wuauqih). If you only have a single flat clothing image and a person, you can try style transfer with the IP-Adapter model, but the results won't be great.

    • @kexue
      @kexue  1 year ago

      I just asked a friend who makes related products. He said that if the base model is the native SD 1.5 model and you add negative prompt terms to exclude people, there's a high probability of generating an image with only clothes and no people (though a person still occasionally appears).
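The tip above boils down to a prompt recipe. A hedged sketch of what such a setup might look like (every term below is illustrative; none come from the video or the friend's advice):

```
base model:      SD 1.5 native checkpoint (e.g. v1-5-pruned-emaonly)
prompt:          white t-shirt, product photo, flat lay, plain background, studio lighting
negative prompt: person, human, man, woman, girl, model, mannequin,
                 face, hands, arms, legs, body, shadow
```

The negative prompt carries the weight here: each person-related term lowers the probability that the sampler completes the clothing with a body, which matches the "small chance a person still appears" caveat.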

    • @lilillllii246
      @lilillllii246 1 year ago

      Thank you. Do you have any videos on creating that first clothing image? @@kexue

    • @kexue
      @kexue  1 year ago +1

      Sorry, not at the moment. You can try the ideas I just replied with. My video series all assume the mannequin is already wearing the clothes, then use Stable Diffusion to replace the mannequin with a real person. ruclips.net/p/PL4L5yXcAegdxYufOlaqVoSgzY8Wuauqih @@lilillllii246

    • @kexue
      @kexue  1 year ago

      And you can try this: huggingface.co/spaces/HumanAIGC/OutfitAnyone @@lilillllii246

  • @w02190219
    @w02190219 1 year ago

    Thanks for sharing!

    • @kexue
      @kexue  1 year ago

      Thanks for the support!

  • @anthonilin8852
    @anthonilin8852 1 year ago

    Here to learn!

    • @kexue
      @kexue  1 year ago

      Haha, thanks for the support!

  • @th3pawn
    @th3pawn 1 year ago

    Teacher, how did you make the effect in the video where the panda's mouth moves?

    • @kexue
      @kexue  1 year ago

      For generating the image, see this video: twitter.com/YTkexue/status/1704810036034421183 The mouth is just a few frames of half-circles at different sizes, played in a loop.

  • @legg936
    @legg936 1 year ago

    Learning!

    • @kexue
      @kexue  1 year ago

      Keep it up!

  • @zoearthmoon
    @zoearthmoon 1 year ago

    I noticed the new SD no longer has the face restoration feature; I don't know why.

    • @kexue
      @kexue  1 year ago +1

      Yeah, the SD webUI changed its interface after version 1.6. I rarely use face restoration, so I honestly haven't noticed where that feature went after the change. I usually fix faces by just throwing them into inpainting; I covered it in one of my e-commerce videos.

  • @zoearthmoon
    @zoearthmoon 1 year ago

    ❤❤❤

  • @chanmz-st9gt
    @chanmz-st9gt 1 year ago

    Since you already have the street-view image, why not add the skeleton image directly in Photoshop and then inpaint with a mask? That way the background doesn't need to be redrawn.

    • @kexue
      @kexue  1 year ago +2

      You can, but you'd have to solve the blending problem, and a direct street-view screenshot will capture some of Google's labels and annotations.

  • @zoearthmoon
    @zoearthmoon 1 year ago

    I realize I've now watched all of your SD videos 😂

    • @kexue
      @kexue  1 year ago +1

      Haha, you've graduated!

    • @zoearthmoon
      @zoearthmoon 1 year ago

      @@kexue Not yet, not yet. There's still a lot to dig into with SD. Thanks for your hard work!

  • @yuco2000
    @yuco2000 1 year ago

    First!

    • @kexue
      @kexue  1 year ago +1

      Selling melon seeds and peanuts in the front row!