I tried it with sketches, with photos, and even with The Sims 4 house images, interior and exterior, and I can't make SD give me a rendering of these files. I also tried Scribble and all of that, and I don't understand: I follow it step by step but it doesn't work for me, SD gives me renders that are very different from the original picture. I don't know what's wrong 😢 please help
Are you using the ControlNet extension? If you're not, maybe that's the problem
I was unable to reproduce the building in the neural network. I took a screenshot of your gray render, copied the prompt from the video exactly (positive and negative), connected ControlNet, downloaded exactly the same cyberrealistic_v33.safetensors model, and turned on exactly the same settings as in your video. But I couldn't repeat the result! What's the catch? )))
Install ControlNet and also make sure you download the specific ControlNet model and put it in the ControlNet model folder. Only then will ControlNet work.
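For reference, a minimal sketch of the folder layout the reply above describes, assuming a standard A1111 WebUI install with the Mikubill sd-webui-controlnet extension; the depth-model filename is one example from the ControlNet 1.1 release on Hugging Face, not necessarily the exact model used in the video:

```python
from pathlib import Path

# Expected layout inside an A1111 install: ControlNet models live in the
# extension's own "models" folder, not in the main checkpoint folder.
webui = Path("stable-diffusion-webui")
ext_models = webui / "extensions" / "sd-webui-controlnet" / "models"
ext_models.mkdir(parents=True, exist_ok=True)

# After downloading (e.g. from the lllyasviel/ControlNet-v1-1 repo on
# Hugging Face), the depth model should end up here:
depth_model = ext_models / "control_v11f1p_sd15_depth.pth"
print(depth_model)
```

Once the file is in that folder, restart the WebUI and the model should appear in ControlNet's model dropdown.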
This is very good, but the rendering output it gives us is not the model we feed to the artificial intelligence, and that's bad, because all residential and commercial apartment projects have land restrictions, structural restrictions, and national building codes. We want to use artificial intelligence for rendering once we have completed all the stages of studies, concepts, and approvals from the client and the construction authorities of that area; the changes it makes to the building produce just a beautiful image with no practical use.
This problem is not only in Stable Diffusion but also in Midjourney, and I hope it will become more reliable in the future.
Yes, we need to wait for some updates so SD can use your building perfectly for creating images, but at the sketch stage we can use what we have, or give it a better understanding of the building through materials, and it will work. Another great feature is that you no longer need to spend time on some details: you can give Stable Diffusion the basics of your image and it will enrich your work with photographic details, so the result can be faster and better in the end, with higher quality.
The general idea is not to use the AI for the final render, just for sketches, references, and improving the base 3D model
great job! that was really cool!
where can I download your stable diffusion checkpoint?
What model do you use as the checkpoint? Could you share it or tell us where to find something similar? Thank you ) I would like you to elaborate more on this, especially interiors. Looking forward to your new interesting videos
cyberrealistic_v33, it seems, from 3:33
Such a useful video, thank you very much!
useful, 10x for sharing
wow can you do a restaurant interior design? I haven't seen videos on that yet.
Why didn't you use AI to generate an English voice?
can it be live?
What is the future of corona and other software?
It would be cool to be able to choose the video's voiceover language. That would be much more comfortable.
What is the name of the photoshop plugin you use?
same question here
🤔🤔🤔 Why did you use ControlNet and the Depth model? What part did they play in the generation? I really don't understand ⁉️
That was a really interesting question! Why have you been silent? Hey
ControlNet enhances consistency and control over the generated images.
A depth map in Stable Diffusion provides information about the distance of objects in a scene from the camera. It helps the model understand the spatial arrangement of elements, allowing for more realistic image generation.
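To make the depth-map idea concrete, here is a tiny illustrative sketch (not from the video) of the kind of image ControlNet's depth preprocessor produces: a grayscale map where, by the usual convention, brighter pixels are closer to the camera and darker pixels are farther away:

```python
import numpy as np

# A depth map is just a grayscale image encoding distance from the camera.
# This toy 4x4 "scene" fades from a near foreground (top row, white)
# to a far background (bottom row, black).
H, W = 4, 4
rows = np.linspace(1.0, 0.0, H)          # 1.0 = near, 0.0 = far
depth = np.repeat(rows[:, None], W, 1)   # same depth across each row
depth_u8 = (depth * 255).astype(np.uint8)
print(depth_u8)
```

ControlNet conditions the diffusion process on a map like this, so generated objects keep the same spatial arrangement as the input render or sketch.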
Useful. Thanks
Thank you
what version of stable diffusion?
It's A1111 with a model based on SD 1.5, because the models I use are trained on 1.5
Thank you! Is it possible to find a tutorial for installing and linking it? @artemlt
How do I download the cyberrealistic file?
I tried playing with it six months ago. For sketch exploration it has its place, but it's not yet suitable for anything more serious. You can't generate the same design from two different angles. There's no precision. There's no point even talking about rendering for real projects yet.
Can you share a link to Stable Diffusion?
Prompt: A photography of hospital, glass facade, cars, evening, shot by Canon EOS 5D Mark IV + Canon EF 16-35mm f/2.8L II USM
Negative: Cartoon, painting, illustration, (worst quality, normal quality:2), (Watermark), immature, child, semi-realistic, CGI, sketch, cartoon, drawing, anime, close up, text, cropped, out of frame, worst quality, low quality, JPEG artifacts, ugly
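As an illustration only, a prompt like the one above could also be sent programmatically through the A1111 WebUI's built-in API (`/sdapi/v1/txt2img`, enabled with the `--api` flag); the steps, CFG scale, and resolution below are assumptions for the sketch, not the video's actual settings:

```python
import json

# Minimal txt2img payload for the A1111 WebUI API, reusing the shared
# prompt/negative prompt. Sampler settings here are illustrative guesses.
payload = {
    "prompt": ("A photography of hospital, glass facade, cars, evening, "
               "shot by Canon EOS 5D Mark IV + Canon EF 16-35mm f/2.8L II USM"),
    "negative_prompt": ("Cartoon, painting, illustration, "
                        "(worst quality, normal quality:2), (Watermark), "
                        "immature, child, semi-realistic, CGI, sketch, cartoon, "
                        "drawing, anime, close up, text, cropped, out of frame, "
                        "worst quality, low quality, JPEG artifacts, ugly"),
    "steps": 25,
    "cfg_scale": 7,
    "width": 768,
    "height": 512,
}
print(json.dumps(payload, indent=2))
# send with: requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

ControlNet parameters would go in the extension's `alwayson_scripts` section of the same payload; see the sd-webui-controlnet docs for the exact argument names.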
Bye bye archviz artists, it's just a matter of time until you won't have a job anymore
Lol