I binged your whole video tutorial series in one sitting and got a lot out of it. Thank you for all your hard work!
Haha, thanks for the support!
Thank you for your videos. I've watched many of your works; they're made with real care and I've benefited a lot. Detailed and efficient, no filler. A rare YouTuber who is both professional and full of ideas! 👍👍👍
Thanks for the support, haha. Keep it up!
@@kexue 😢 I can't join the Discord. Could you give me a new link?
@@wkl-w3w Uh, I deleted the group. I didn't have time to keep an eye on it, and it was all ads.
@@kexue Got it.
Liked before watching ❤
Thanks for the support
1. Use SD to generate a person, remove the background, and build a mask of the person.
2. Paste that mask onto a Google Street View image and use SD's local inpainting to redraw it (see the sketch below).
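A minimal sketch of one reading of those two steps, assuming rembg for the background removal and diffusers for the inpainting; all file names and the prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image
from rembg import remove

# Step 1: take the SD-generated person, cut them out, keep the alpha as a mask.
person = Image.open("sd_person.png").convert("RGB")
cutout = remove(person)        # RGBA image with the background removed
mask = cutout.split()[-1]      # alpha channel: white where the person is

# Step 2: paste the cutout onto the street view, then locally redraw the
# person region so lighting and edges blend into the background.
street = Image.open("street_view.png").convert("RGB").resize(person.size)
street.paste(cutout, (0, 0), cutout)

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
out = pipe(
    "a person standing on a city street, photorealistic",
    image=street, mask_image=mask,
    strength=0.5,  # low strength: blend the pasted person rather than replace them
    num_inference_steps=30,
).images[0]
out.save("composited.png")
```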
Hmm, that works too. Good approach.
This approach is brilliant!
At 5:20 in the video you mention the original image is 1328*800. Why change the preprocessor resolution from 512 to 800 rather than 1328? Newbie asking for help 🙏🙏
In my experience, go with the smaller value; with a larger one the extracted line art can sometimes come out incomplete. I have an episode specifically on how Canny works; you can search for it on my channel page.
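For what it's worth, a minimal sketch of that resolution choice, assuming OpenCV and assuming (my reading of webUI's behavior, not verified) that the preprocessor scales the image's shorter side to the chosen value before extracting edges:

```python
import cv2

img = cv2.imread("source_1328x800.png")
h, w = img.shape[:2]

res = 800  # preprocessor resolution, pinned to the shorter side (not 1328)
scale = res / min(h, w)
resized = cv2.resize(img, (round(w * scale), round(h * scale)))

# Canny runs at this working resolution; oversizing it can leave gaps in the lines.
edges = cv2.Canny(cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY), 100, 200)
cv2.imwrite("canny_800.png", edges)
```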
Can Stable Diffusion create a fixed character and then put it in images with different scenes?
To an extent, yes, but it's difficult. The more elements a character has, the harder it is to keep consistency after redrawing them into a new scene.
A question: for a tilted side-profile photo (where the far eye is only slightly visible), what's a good way to swap the face? I've tried several methods and they all produce an abnormal eye. Which software or extension works well? Could you make a tutorial, master?
Side profiles come out badly mainly because models are trained on relatively little side-profile data. The quickest and most effective fix is to try a different model; SDXL-family models may do better.
Thanks, do you have a link to the ComfyUI workflow related to this video?
drive.google.com/file/d/1d812QDwR_GTlxusBC1nODzYFfM9whVgs/view?usp=drive_link
Boss, I have another SD question I'd like to ask. Recently I've been using an XL checkpoint (the DreamShaper one), but when I use ControlNet's pose detection it has no effect on the output at all, and the image quality even gets worse (I downloaded several different XL OpenPose models). Is using a skeleton to define the pose not working well with SDXL right now, or did I get a step wrong? I never had this problem with 1.5.
SDXL checkpoints need ControlNets adapted to SDXL; the 1.5 ControlNets won't work. An OpenPose SDXL ControlNet can be downloaded here: huggingface.co/thibaud/controlnet-openpose-sdxl-1.0 though I haven't used it myself. The author's demo is based on ComfyUI, and in an August reply he said this OpenPose doesn't support webUI: huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/discussions/4 I'm not sure whether it's supported now; give it a try if you need it.
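If it does turn out to work in your setup, a minimal diffusers sketch of wiring that checkpoint up might look like this (I haven't run this model myself; the prompt and pose.png are illustrative):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

pose = load_image("pose.png")  # skeleton image from an OpenPose preprocessor
image = pipe(
    "a person standing on a city street",
    image=pose,
    controlnet_conditioning_scale=0.8,
    num_inference_steps=30,
).images[0]
image.save("out.png")
```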
@@kexue Thanks for the reply. I've downloaded several SDXL versions of ControlNet, so it probably isn't a model problem. The interface others show in their tutorials is the same one you mentioned; maybe webUI really isn't supported. I don't know how to use the other tool yet, though. I'll have to learn it before I can try.
Yeah, I also have ComfyUI tutorials you can refer to. @@shanpoyang
You're amazing!
Haha, I'm just so-so~
Can ComfyUI use img2img to generate images from multiple angles?
Hard to pull off; the consistency will be pretty mediocre.
Really awesome
Thanks for the support
Boss, thank you so much for sharing all this great material. I'd like to ask, how do I sign up for your e-commerce course?
Please send your WeChat ID to kexuejia.studio@gmail.com. Thanks!
Thank you as always. I'm trying to generate images of clothes without people, mannequins, backgrounds, or shadows, but it's challenging: images of mannequins or people keep appearing. Is there a solution to this problem? Also, I have another question: if I have an existing clothing image and an image of a different model, is there a way to apply the existing clothing onto that model?
Thank you for your support. (My answer was translated by Google Bard.)
First, for the first question. The fundamental reason is that most current checkpoints are trained largely on images of people, so in the AI's view clothing and people are strongly correlated: clothing is very likely to be drawn on a person, and conversely, limbs are very likely to be completed with clothing. A relatively feasible solution I know of is to use ControlNet's canny or lineart model for control, and then use a mask in image-to-image inpainting mode to limit the area being redrawn, in order to modify the clothing's pattern. Of course, this requires a ready-made clothing image to start from (either SD-generated or a real photo).
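A minimal sketch of that ControlNet-canny plus masked-inpaint idea, assuming diffusers; the checkpoints, file names, and prompt are illustrative:

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image
from PIL import Image

init = load_image("cloth.png")   # the ready-made clothing image
mask = load_image("mask.png")    # white marks the region to redraw

# Canny edges pin down the garment's outline while the masked area is repainted.
edges = cv2.Canny(cv2.cvtColor(np.array(init), cv2.COLOR_RGB2GRAY), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    "a red floral dress, product photo, plain background",
    image=init, mask_image=mask, control_image=canny,
    num_inference_steps=30,
).images[0]
out.save("edited_cloth.png")
```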
About the second question: if conditions permit, you can photograph the existing clothing on a mannequin and then replace the mannequin with a real person (ruclips.net/p/PL4L5yXcAegdxYufOlaqVoSgzY8Wuauqih). If you only have a single flat clothing image and a person, you can try using the IP-Adapter model for style transfer, but the result won't be very good.
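A minimal IP-Adapter sketch, assuming a recent diffusers with built-in IP-Adapter support; file names and the prompt are illustrative:

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)  # how strongly the reference image steers the output

ref = load_image("flat_cloth.png")  # the single flat clothing photo
img = pipe(
    "a woman wearing the dress, full body, studio photo",
    ip_adapter_image=ref,
    num_inference_steps=30,
).images[0]
img.save("person_in_cloth.png")
```

As noted above, expect a loose resemblance rather than a faithful garment transfer.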
I just asked a friend of mine who makes related products. He said that if you use the native SD1.5 base model and add negative prompt words to exclude anything related to people, there is a high probability the image will contain only clothes and no person. (There is still a small chance a person will appear.)
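A minimal sketch of that tip, assuming diffusers and the native SD1.5 checkpoint; the exact negative word list is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

img = pipe(
    "a white t-shirt, flat lay, product photo, plain background",
    # The negative prompt strips person-related concepts out of the generation.
    negative_prompt="person, woman, man, model, mannequin, face, hands, "
                    "limbs, body, shadow",
    num_inference_steps=30,
).images[0]
img.save("clothes_only.png")
```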
Thank you. Do you have any videos on making the initial clothing image? @@kexue
Sorry, I don't have one for now. You can try the ideas from my reply just now. My video series all assume the mannequin is already wearing the clothes, and then use Stable Diffusion to replace the mannequin with a real person. ruclips.net/p/PL4L5yXcAegdxYufOlaqVoSgzY8Wuauqih @@lilillllii246
And you can try this: huggingface.co/spaces/HumanAIGC/OutfitAnyone @@lilillllii246
Thanks for sharing
Thanks for the support
Here to learn
Haha, thanks for the support
Teacher, how did you make the panda's moving-mouth effect in the video?
For generating the image, see this video: twitter.com/YTkexue/status/1704810036034421183. The mouth is really just a few frames of half circles in different sizes, played on a loop.
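A minimal sketch of that frame loop, assuming Pillow; sizes, colors, and timing are illustrative:

```python
from PIL import Image, ImageDraw

frames = []
for h in (10, 25, 40, 25):  # half-circle height per frame = mouth opening
    im = Image.new("RGB", (200, 200), "white")
    d = ImageDraw.Draw(im)
    # Bottom half of an ellipse: a half circle whose height varies per frame.
    d.pieslice([60, 100 - h, 140, 100 + h], start=0, end=180, fill="black")
    frames.append(im)

# Play the frames on a loop (~10 fps, looping forever).
frames[0].save("mouth.gif", save_all=True, append_images=frames[1:],
               duration=100, loop=0)
```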
Learning!
Keep it up!
I've noticed the new SD no longer has the face restoration feature; I don't know why.
Yeah, the SD webUI changed its UI after version 1.6. I use face restoration very little, so I honestly haven't noticed where that feature went after the change. When I fix faces I usually just use inpainting directly; I covered it in one of my e-commerce videos.
❤❤❤
Since you already have the street view image, why not add the skeleton image directly in PS and then inpaint with a mask? That way the background wouldn't need to be redrawn.
You can, but then you have to solve the blending, and a direct Street View screenshot will pick up some of Google's labels and overlays.
I just realized I've watched all your SD videos 😂
Haha, you've graduated!
@@kexue Not yet, not yet! There's still a lot to dig into with SD. Thanks for all the work, boss.
First!
Selling sunflower seeds and peanuts in the front row!