Better Than Midjourney: Openjourney Stable Diffusion
- Published: 23 Sep 2024
- ✨ Support my work on Patreon: / allyourtech
⚔️ Join the Discord server: / discord
🧠 AllYourTech 3D Printing: / @allyourtech3dp
👾 Follow Me on X: / blovereviews
💻My Stable Diffusion PC: kit.co/AllYour...
How to install stable diffusion models: • How To Install Stable ...
How to install Automatic1111: • Unlock Limitless Ai Ar...
Today we're taking a look at Openjourney. Openjourney is a Stable Diffusion model trained on images created by Midjourney. Shout out to PromptHero for training the model and uploading it for our enjoyment!
Openjourney model: huggingface.co...
Creativeindie prompts: www.creativind...
prompthero: prompthero.com/
A little tip: you don't need 150 steps; 20 steps is usually enough. It depends on the image, but you don't need to go above 50 steps. Hope this helps. You just need to play around with the steps and CFG scale, as they complement each other.
Great tip!!! Thank you
@@allyourtechai If anything, setting it to 150 steps makes your images highly chaotic, because the AI will try to add more detail with every step.
Heck, sometimes only 10 is enough to make decent images.
@@jamiewongttv Doesn't quite work like that. All the samplers do is solve ordinary differential equations through iterations. Any non-ancestral sampler (the ones without 'a' in the name) will converge on a final image. Normally this occurs in around 20-30 steps. The last 100 or so images using that sampler would likely be the same.
and if you're wondering why ancestral samplers don't converge it's because they add noise back into the image after every step.
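The convergence behavior described above can be illustrated with a toy numerical analogy. This is not a real diffusion sampler — just a one-dimensional stand-in where a deterministic update settles on a fixed value, while an "ancestral" variant that re-injects noise after every step keeps jittering forever:

```python
# Toy analogy (assumption: not a real ODE sampler, just illustrative):
# a deterministic solver converges to a fixed result, while re-adding
# noise each step -- as ancestral samplers do -- prevents convergence.
import random

def deterministic_step(x, target, rate=0.3):
    # Move a fraction of the way toward the final "image" each step.
    return x + rate * (target - x)

def ancestral_step(x, target, rate=0.3, noise=0.05):
    # Same update, but with fresh noise injected back in afterwards.
    return x + rate * (target - x) + random.uniform(-noise, noise)

target = 1.0
x_det = 0.0
x_anc = 0.0
random.seed(0)
for step in range(150):
    x_det = deterministic_step(x_det, target)
    x_anc = ancestral_step(x_anc, target)

print(round(x_det, 6))  # 1.0 -- effectively converged long before step 150
print(round(x_anc, 3))  # still jittering somewhere near the target
```

The deterministic run is indistinguishable from the target after a few dozen steps, which mirrors why extra steps past ~20-30 with a non-ancestral sampler mostly just burn GPU time.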
After trying out OpenJourney on my machine, I have to say that I still prefer Dreamlike Diffusion 1.0. It's better than BlueWillow or MidJourney 3, but not as good as MidJourney 4 or 5. In version 4, MidJourney added some kind of miracle sauce to make things turn out stunning.
Maybe they have hidden prompts, explaining why it works so well without any negative prompt from the user
It's the colors. I can't tell exactly how, but Midjourney's colors are very pleasant. They kind of have a wow effect.
@@TheRafark Stable Diffusion always has a more flat 2-dimensional look like a computer game engine. Midjourney looks more like an artist who understands 3-dimensional form and lighting.
*Currently I've used Deep AI, NightCafe AI, and DALL·E 2 the most, and from those I've generated literally 10,000 images (36 GB worth) over the past few months. I've just started experimenting with one called Leonardo AI, which is proving to be most impressive. I find a prompt that works and will generate over one hundred images using it. My prompts are more akin to word salad than most that might be used, but the results are still very interesting.*
*And personally, since most of what I do would be considered world-building imagery, I actually prefer any generated text to be some form of gibberish, or the fonts to look unrecognizable. It gives the image one more layer of complexity, in my opinion, where you have to determine the context based on the overall image and setting.*
I've tried a lot of different AI art tools, but Midjourney is still king for me
Midjourney is fantastic, I agree
I use a bunch of different ones. Like my Snoop Dogg t-shirt: I made it in Midjourney, played with it in Photoshop, uploaded it again in Leonardo, didn't like the effects there, so I went to Playground AI and played in there. It's just ever-expanding, more and more.
Midjourney might still be the best for many, but hopefully SD can be just as good one day. Remember folks, support free and open software; it will likely be what saves us one day.
@@allyourtechai But brother, we have to pay; we can only use it for 2 days.
@@Thedeepseanomad You're right, but nobody will listen.. humans are like moths to a flame.
Awesome video! Going to try this out. You may be getting diminishing returns with 150 sampling steps. I don't think you're getting a lot of change over 50; it's just adding GPU render time.
Definitely the case! Great call out
Sometimes, just changing the image size fixes those multiple faces. Try 512x512 in this case. At least, this helps me when using Leonardo AI.
Yes, that's a great tip. Similarly, a wide image often produces multiple characters.
@@allyourtechai For portrait photography, try 600×512 for two portraits and 512×768 for a single portrait, and add an upscaler set to 1.55. This is the best so far.
@@DoneerGaming awesome! I’ll give that a shot
I am very glad to have found your channel. I learn a lot from here. Many thanks.
Glad to hear that!
They just disabled free accounts on MidJourney today. Hoping to find something comparable with SD.
Check out my latest video. I just built a free version of Midjourney!
I think it's hilarious that Midjourney pretty much stole millions of artworks to create their AI and resell them, and someone else just said, YOINK, and made a model off of them.
@@AlFirous what an eloquent way to describe the internet
They didn't steal it.
@@Satsuma-tm8ep I feel like if MJ "stole" the art, then so has every artist who ever visually observed a piece of art and replicated any techniques, styles, or otherwise, and/or used those elements for inspiration. People don't actually understand how it works and think it's chopping images up and pasting them together, which it is not.
Stole? Do your eyes steal everything they see?
They're a bunch of freaks.
Using a model that was trained on some MJ images is not like having MJ locally
I still see significant differences when comparing
It's actually much better than using MidJourney. You can use specific models for specific uses, as most models are biased toward certain styles and outputs.
High-res fix should sort those weird double faces, or you can even add "2 faces" as a negative prompt. And you could dial back the steps: 20 will give you something, 50 will give you something better, but 150 might actually be pushing it too far and cause it to over-iterate the image. Imagine you are drawing something and keep adding and adding and adding... you end up with something you should have stopped 50 versions ago.
Great tip! Thank you
Just pulled the trigger on a 12GB gfx card so I can use this. My intention is to create art of my own for my rather bare walls.
That's an awesome idea! Please share some of your favorites once you get it all going. Let me know if I can help.
Could you explain a little more how to install Openjourney?
Great video here... but you should include 'mdjrny-v4 style' in the prompt when using this model. :)
Thanks for the tip! I wasn't aware of that actually.
Wow exactly what I was looking for! Thx!! :)
0:24 That's just not true. Starting from the Pro plan you can create an unlimited number of images in relax, which I'm mostly using.
For the past week I've been doing 3D images, but what you should do is take a bunch of them into PS and put something together. I've got one image up at DeviantArt. The most important part of the prompt is "red black and blue design" at the end of the prompt.
I would love to see some of the art you are creating.
I was just on your Discord. All of the bots appear to be down. Also, you don't have any "prompt instructions" anywhere, so your "general" chat has some confused people.
It's fascinating that even with the whole process of generating an image being so much easier than being an actual artist, people still cannot come up with their own prompts and ideas :) I guess that's a good thing.
Using a known working prompt as a baseline starting point, and working to get the result you want is a great way for people to learn how stable diffusion works.
@All Your Tech AI I think of it like a how to draw book. Example: someone else has figured out the ratios of the human body and face. Me using those same ratios is not copying, but using it as one of many building blocks.
@@Guy-Am-I How are people supposed to be judgemental when you guys keep providing rational explanations for things? smh...
@olutoyinonayemi1350 This made my day, thanks. 😊
so cool can not wait to make something with this
Thank dear friend for this!
The end was the best lol
You're Godsent, bro.
Couldn't be simpler... heh, yeah, I'm probably really stupid. I can't find any checkpoint file, and when I simply download it and later copy and paste it into the models folder, nothing happens.
You definitely aren't stupid. This stuff can be complex, but I'll help you get there. Are you using automatic1111 or InvokeAI to generate images?
Like a knife, a weapon available to everyone is not a weapon anymore it's just a mere tool... #hail_to_the_open_source_and_public_research
Thanks for the info!
This is amazing. Will tryout soon.
Let me know how it goes and how I can help!
spent 30 hrs in a single day...that's dedication.
Probably closer to 50 ;)
This has been pretty helpful. Thanks a lot!
Anytime!
Really good video. I appreciate the resources for Midjourney.
Thank you!
@@allyourtechai Hi. What GPU was used in the making of the images? Thank you.
@@mikekiske I'm running an Nvidia RTX 3090
@@allyourtechai Sweet. Thanks for the video.
Midjourney seems to produce much better colors. Stable diffusion looks great, but Midjourney's color mixes look more pleasant.
A link in the description to your tutorial on setting up automatic 1111 would have been nice.
Description updated with links.
How to stop a previous prompt from appearing at the end of a new prompt? Help, please!
Bro, please, what kind of microphone are you using?
Blue Yeti: amzn.to/44imBgO
I was really impressed by the capabilities of the Stable Diffusion img2img model in the video. I'm curious to know, do you think this AI technology could be used to create virtual showrooms for furniture and jewelry? It would be interesting to see if the photorealistic images generated by the model could be used to showcase physical products in a virtual environment. What are your thoughts on this? Can you give the best model reference for it?
Bruh, wtf are you talking about? That has nothing to do with this lol. Also, totally shit idea; no one wants to enter some virtual world to shop for real-life goods. That's like using a chainsaw to turn logs into tinder.
I definitely think you could easily create a virtual showroom. From there you would probably use Photoshop or another editing program to place the products in the virtual showroom.
Hi, I'm a beginner. Are there any step-by-step tutorials on how to get Openjourney or Midjourney on my own computer?
Unfortunately you can’t run midjourney on your own machine, but I have tutorials to install InvokeAI or automatic1111 on your pc. From there you can use openjourney or any other downloadable model you would like
Hi @allyourtechreviews, first, thank you very much for your video. But I'm new to this, so I need help, and I would like to ask if you can help me. I need a step-by-step to work with Openjourney. I will write here what I think, and you (if you can, of course) can enlighten me: 1. I install Stable Diffusion on my computer, after that install automatic1111, and later I install Openjourney. Is this it, or did I miss something? TIA
Is there an AI that is only trained on images that are public domain only? Or can I host a 'blank' AI that I train on my own art to speed up making my art? I do not want to use other's art without their consent.
Definitely! I have a tutorial that shows you how to train a custom model on any images you want Create AI Images Of Yourself for Free: DreamBooth & Stable Diffusion
ruclips.net/video/q7wfFMIcvsU/видео.html
How do I find the page with the Stable diffusion checkpoint box at 03:34? I have been looking for it to no avail, is it part of Hugging face? Thanks for the very nice video
wow, thanks but I missed how you reach this screen of generating the image in 03:43
Really tired of everybody saying "better than MJ". Is every new AI imager better than MJ? Playground, Leonardo, etc. all say it, and it's nowhere close.
Doesn't work on my Asus ROG X13 (GTX 1060, AMD processor). Any suggested alternatives?
The 10-series cards unfortunately lack proper half-precision floating point support, so stable diffusion struggles on them. You need at least an RTX 20-series card for a good experience.
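For what it's worth, automatic1111 does expose launch flags that disable half precision and reduce VRAM pressure, which is worth trying before giving up on older hardware. A sketch of how they are typically set, assuming the standard webui layout (the exact flags your card needs may differ):

```shell
# Set in webui-user.sh (Linux/macOS) or webui-user.bat (Windows).
# --precision full and --no-half avoid FP16 math, which pre-RTX cards
# handle poorly; --medvram trades speed for lower VRAM usage.
export COMMANDLINE_ARGS="--precision full --no-half --medvram"
```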
@@allyourtechai How about the 3080 XG Mobile, does that work?
What kind of video card do I need to run this, something super strong? Also I have mostly macs and a steam deck. Any hope for these guys? Thank you and great video
You can use an M1 or M2 Mac, but image generation is pretty slow. Your best bet is something with a recent Nvidia video card (RTX line). I use an RTX 3090 GPU for example.
Just curious, is there also a Niji Journey model you can download for Stable Diffusion, like this one?
How do I get the upscaler like you have in your 1111
Can I use this Midjourney-style model with InvokeAI?
Yes! You may need to fine tune it slightly, but I find it works great for the most part
My Midjourney has expired; my trial version is finished... is there a way to use it without paying?
Sir, how do I download the Midjourney v4.1 model for stable diffusion? The download option doesn't exist on Hugging Face.
Can you make the Pope like midjourney v5?
Nice work. Can you outline your VRAM hardware?
I do have a very powerful PC running an RTX 3090 which has 24GB of VRAM. There are some settings you can tweak for cards with low VRAM. I'll put together a tuning guide :)
@@allyourtechai I was guessing it must be fast with all those iterations and steps. With all the Google Colab notebooks out there, what is your opinion on just buying credits/tokens instead of dedicated hardware / a local install?
Was looking for how to download the .ckpt file from hugging face
Here is a how to on the subject: ruclips.net/video/8NhSXMCaPBg/видео.html
Another question: I tried different models online with Openjourney now, like stable diffusion and "midjourney" etc., but when I'm creating people, the faces all end up distorted and don't fit the photo.
Is this because of my prompting skills?
Prompts are part of it. You can also use face restoration settings which might help. For some good starting prompts, you can check out what I have shared on my website here: allyourtech.ai/directory-ai_art_promp/
Thanks for the tutorials! Is it possible to script Automatic1111 and feed the last frame back into the system in order to make stable diffusion videos?
Great question, let me try
@@allyourtechai I went down a bit of a rabbit hole after seeing your videos. I added deforum to my Automatic1111 install. In that plugin, you can set a strength variable that does just this. It is a percentage of how much of the last frame should be altered. That said, I am sure you will dig up some interesting alternatives.
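The frame-feedback idea from this exchange can be sketched as a simple loop. This is a hedged illustration: `generate` below is a hypothetical stand-in for a real img2img call (the real thing would go through the automatic1111 API or the deforum extension), reduced to arithmetic so the loop structure itself is clear:

```python
# Sketch of an img2img feedback loop for pseudo-video generation.
# `generate` is a stand-in (assumption, not a real API): it blends the
# previous frame toward new content, playing the role of img2img with
# a given denoising strength.

def generate(frame, strength=0.4, target=1.0):
    """Stand-in for img2img: `strength` controls how much of the
    last frame is allowed to change on each iteration."""
    return (1 - strength) * frame + strength * target

frames = [0.0]  # seed "frame"
for _ in range(10):
    # Feed the last frame back in as the input for the next one.
    frames.append(generate(frames[-1]))

print(round(frames[-1], 3))  # drifts smoothly toward the target
```

This is exactly the role deforum's strength variable plays: at strength 0 every frame repeats the last one, and at strength 1 every frame is generated from scratch with no continuity.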
Hi, what do you mean by "Go to your models directory inside your stable diffusion install and drop it"? I'm new. Do I need something before I drop this wherever I have to drop it? I thought this was the A1111 download, and that A1111 was a stable diffusion program 😬. I'm confused.
I only know how to use Midjourney . Beginner
If you have automatic1111 installed, you should be able to go to the directory where you installed it, and there should be a /models/Stable-diffusion directory. Download or copy the checkpoint file there, and the next time you restart automatic1111 it will show up in your dropdown menu. I'll do a video on how to do it.
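The file placement described above can be sketched as a couple of shell commands. The install path and the checkpoint filename here are assumptions — adjust `SD_DIR` to wherever you cloned automatic1111 and `CKPT` to whatever you downloaded from Hugging Face:

```shell
# Assumed locations -- change these to match your machine.
SD_DIR="$HOME/stable-diffusion-webui"
CKPT="$HOME/Downloads/mdjrny-v4.safetensors"

# Make sure the models directory exists, then move the checkpoint in.
mkdir -p "$SD_DIR/models/Stable-diffusion"
if [ -f "$CKPT" ]; then
  mv "$CKPT" "$SD_DIR/models/Stable-diffusion/"
else
  echo "checkpoint not found at $CKPT -- download it first"
fi
```

After a restart, the model should appear in the checkpoint dropdown at the top of the webui.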
Very helpful! Subscribed!
Awesome, thank you!
It would help to know what system requirements are needed
I’ll do a more detailed video on the subject and cover Mac and pc requirements as well as specs for a good system.
@@allyourtechai 🙌🏼🙏🏻 subbed
A graphics card with 6GB minimum (preferably 12GB to run a few of the extensions without worry). The only answer you need.
@@abdullahsulaymaan9085 ty! 🙌🏼
Thanks man!
What is the minimum spec requirement? It would be great if you mentioned this on the video
That’s worthy of an entire video!
What is it about this that it has to be through Discord?
What kind of hardware are you working with and how long did each prompt take to render?
Each prompt only takes a few seconds to render, but I am running a powerful GPU (RTX 3090).
Is it possible to give a link to an existing picture to build a new one based on it? Midjourney has this great function, I don't know if it has anything else, but I'd love to find out. Let me know if you know. Thanks!
Image 2 image can be used for that on your local software
Awesome dude! Subbed..😁
Thanks for the sub!
what if I have a MacBook?
If you have an M1 MacBook you should be able to install InvokeAI. I'm going to do a tutorial on making that work.
how does it work for mac?
Macs tend not to have enough graphics performance to handle things like stable diffusion. An M1 can run InvokeAI, although quite slowly.
@@allyourtechai I’ll take Mac os over the insanity of windows any day.
3:40 images from hell
I can't find the download button.
You might need to sign up for a free account before the download button shows
Why didn't you go through installing it so we know what to do??
I have two videos covering how to install stable diffusion locally. Setting up a bot is quite a bit more involved though. I would start with the local setup first.
@@allyourtechai It's ok I found it. Many thanks.
It's quite difficult to install stable diffusion, and the requirements are high.
Yes, although I have some install guides if your computer meets the requirements
Do you need a GPU for automatic1111?
Only reason I'm not installing SD.
Edit: I guess it's needed.
Yes, definitely!
Can you use specific midjourney prompts like --ar 4:3 etc?
Also, is there a way to keep training the model using images from midjourney?
I'm not sure but I don't think that aspect ratio prompt would work. It's also not needed as you set the ratio when you set the resolution.
@@Elwaves2925 Yes you can... I do it all the time with aspect ratios... there are many...
@@jrl9319 Ah okay, good to know.
@@Elwaves2925 There are many different aspect ratios, and if you don't set the aspect ratio, what you're going to get is just a square. I make a lot of wall art, so what I use is basically --ar 3:2 or 2:3, depending on what I want, but I've also used 16:9 and many other aspect ratios; you just have to figure out what you want your final image to be.
As I get closer to the final image that I want, I change my aspect ratio to what I'm going to use it for, whether it's a portrait, a poster, or a landscape.
@@jrl9319 Yeah, I know about aspect ratios. The point was that you don't need to set it in the prompt for A1111; you do it with the resolution sliders. Which is why I thought the MJ prompts wouldn't work, especially as A1111's prompts use different formatting to MJ in some cases. If they work then great. 🙂
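Since A1111 takes an explicit width and height rather than an `--ar` flag, a small helper can translate a Midjourney-style ratio into a usable resolution. This is a hedged sketch: the 512px base and multiple-of-64 rounding reflect SD 1.x conventions, and the function name is mine, not anything from the webui:

```python
# Convert a Midjourney-style aspect ratio (e.g. --ar 3:2) into an
# explicit (width, height) for A1111's resolution sliders.
# Assumptions: SD 1.x models are trained around 512px, and dimensions
# work best as multiples of 64.

def resolution_for(ratio_w, ratio_h, base=512, multiple=64):
    """Return (width, height) with `base` on the short side,
    rounded to the nearest valid multiple."""
    if ratio_w >= ratio_h:  # landscape or square: fix the height
        height = base
        width = round(base * ratio_w / ratio_h / multiple) * multiple
    else:                   # portrait: fix the width
        width = base
        height = round(base * ratio_h / ratio_w / multiple) * multiple
    return width, height

print(resolution_for(3, 2))   # (768, 512)
print(resolution_for(2, 3))   # (512, 768)
print(resolution_for(16, 9))  # (896, 512) -- 910 rounded down to a multiple of 64
```

Note that 512×768 matches the portrait resolution recommended earlier in this thread.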
"cooler than Midjourney..."
Yeah I dunno man. Like SD *is* better. I have it on my beast of a computer and I love it. But guess what? More than 75% of my life is spent away from my computer.
What keeps bringing me back to MJ .. is that I can use it on my phone. Wherever I am. Whenever I want.
I still save my favorite images to tweak on the computer... But having the ability to create them literally anywhere is just unbeatable.
Yes, I can get a more precise image with SD... But because of the amount of time I am able to spend on MJ... I think I can compensate for its inadequacies now.
You should check out InvokeAI (I did a video on how to install it on your pc). It allows you to open up the port in your router so you can use it on your local network or on the web (phone, tablet, Mac, pc). Solves the problem you just described
@@allyourtechai would I have to have my PC on when I'm not home to do so? I thought about using Remote Desktop and playing with SD that way... But I don't know if I want to be pushing my computer when I'm not home to make sure it doesn't light on fire 😂
I'll check out your other video though, thanks! Been wanting to check out other programs. My favorite AI creators seem to run their images through multiple different AI programs to get stunning results, so I've been wanting to try so as well.
So you can make your own Midjourney. 😁
How can I download this stuff (I don't see a link, sorry), and can I use it completely offline?
Do you already have automatic1111 or InvokeAI installed on your computer? If not, I have a tutorial for installing each of them. That's the stable diffusion software you can use to generate images.
@@allyourtechai no I haven't. I will check that out. Thanks
@@allyourtechai Why did you even bother making the video if you didn't clarify this point, you sly fox?
Can you help me with this error in automatic1111? When I run the cmd user thing, it first downloads a 2.4 GB file, and then after 1 to 2 minutes it gives me a lot of red text. I have 8 GB of RAM and an AMD Radeon GPU. And even when I open the stable diffusion website, it doesn't open, even when I refresh.
What is the error it shows?
A lot of red text and codes.
Awesome video
Don't use .ckpt files. You should always use .safetensors!
Yep, great tip. Even a few weeks ago it was rarer to find a safetensors file. I don't even think I knew what they were when I recorded this lol. Agree it's the way to go.
But is it also for Mac????
It can run on an M1 or M2 Mac, although very slowly compared to a PC.
This is a bad thumbnail, the paid Midjourney picture looks inarguably better than the free version.
Can it be used commercially?? 😮
I think lawyers are still trying to figure that out
great!!!
Yes yes, at first glance everything looks "wooow!" BUT when you analyze the details, you discover a mix of shapes with no sense. AI tries to emulate human skills, but when you ask for an image full of details, it creates a mix of shapes that intersect with each other. So it can work on social networks, but what if you need to print them for an exhibition, or something high quality, with all these imperfections? I'm not talking about pixel quality; I'm talking about details that make no sense.
Yes, you are right.
Good luck sir
Does it work on mac?
You can run stable diffusion on an M1 mac currently.
I saw this video the day it came out. Put my email in and never received anything. I put my email in twice.
But we are still paying for the power bill by running this locally. Nice video though.
I have solar panels! Generally speaking though, yes, you are right
Solar panels to the rescue XD. Basically free electricity during daytime and I use all these high end gpu and rendering applications during daytime
If you want the same consistent face in multiple different images, is there a way to go back and forth to a 3D model?
I'm assuming it has been done before but I just found a bunch of really advanced complicated processes. This involved blending/merging different models, some good in hands or faces, and blending again and tweaking weights until you have insanely good results. Lora was mentioned here too.
I'd just like a set of different faces, hairs and outfits and combine them with a fully rigged base human model. Maybe a similar thing could be reached by having sets of pictures from fixed angles. Maybe just a full frontal and 45 degrees is enough (and assuming symmetry).
I think once you have one set of images and a 3D intermediate step, you could generate any angle.
Once you have a new image you should be able to create a new set of images. AI should be able to do some guesswork to change the similar face into a similar 3D pointcloud. With this you could generate several high res 3D models and choose the best, kinda retraining the model.
Ideally you'd use different sets as a face mod. You could have a bland average face and create a parameter to scale towards one of the set or mixing several. Perhaps even combining with several styles, like anime, game character and hyper realistic.
I think such a creation process could extract nice features from new AI art. A cool jacket or pose and add it to an ever growing library of tools, without having to rely too much on prompts and fixing mutations and uncanny aspects.
Even as an amateur in AI, I would have to say you might be approaching this from the wrong angle. For one, you might be able to get by without the rigged model if you just use something like OpenPose or a few other posing algorithms that can communicate to the AI what pose you want your model in. You could use the 3D-modelled approach if you need extremely repeatable results though, and there are many advantages to working that way as well; it all depends on your priorities. Anyway, you use the pose algorithm in ControlNet to create a suitable base image that has the correct pose, either by feeding it a photo you've taken or an image you've found with the right pose, possibly alongside a LoRA network trained to get your base model roughed in. Then, you use masking/inpainting to regenerate any parts of the image that aren't what you want, fiddling with the settings to get the right result. If you have very particular results in mind, you just train small LoRA models on the results you need, like one particular face, weapon, outfit, etc. That way you aren't constantly regenerating the whole image, causing it to drift away from the desirable aspects of your results so far. You can get all these results inside the Automatic1111 webui; I managed to learn all this stuff just in the past week or two.
Also, as one other method addressing your initial question about going back and forth between a 3D model: if you went with that approach, you could likely save time by simply using ControlNet to generate and apply a depth map of your 3D-modelled input. That way you can just keep the ControlNet in place the whole time you are inpainting to guide your results. I think one of the pitfalls in working with these AI systems right now is getting too wrapped up in long workflows to try to get very specific results. This is desirable in a few cases, but I think overwhelmingly the strength of AI art is that if you "throw paint at the wall," so to speak, your paint just magically turns into beautiful outputs you wouldn't have thought of on your own. I think your idea of continuously referencing back to a 3D model directly is more desirable if you need repeatable results, but if you are creating single-use works, I think relying on lightweight, speedy workflows to rapidly iterate will pay dividends.
Thanks, you're right, it might not be needed and is very complicated.
I just watched a nice tutorial on getting images (photos mixed with an AI prompt) into Blender using a cool but expensive plugin (Face Builder) and then get it into unreal engine:
ruclips.net/video/FKoy7bncHLs/видео.html
Cool but a bit overkill and nowhere near perfect. DAZ3D has two alternatives to Face Builder, FaceGen and Face Transfer, that aren't free but at least no subscription. These have some other issues you'd constantly need to fix.
Masking and inpainting with the Canny and pose checkpoints in ControlNet seem to offer loads of options already.
@@teambellavsteamalice I got a bit wordy, thanks for your reply haha. I do still think the face rigging will be a useful, affordable option in the future, just not for every use case, mainly for rigidly repeatable outputs for animation, etc.
@@teambellavsteamalice You can use controlnet's openpose model in conjunction with their reference preprocessor to achieve that.
PC? I only have a Mac Pro running macOS Monterey. 😢
You can run InvokeAI on an m1 Mac, but it is very slow
What is Eleven 11??
Automatic1111 is free stable diffusion software you can use.
thanks got it
How do you get that GUI? Mine is very different; it just says Easy Diffusion, and I don't see any place to put the checkpoint file.
Which software are you using?
Dude... why didn't you just show us how to download the model? Hugging Face isn't that intuitive for new users...
Yep, I dropped the ball there. Doing a step by step how to later today.
Midjourney V5 destroys this
MJ V5 is very good
FYI. The vignette filter you're using in this video is distracting and annoying because when you move, it moves... and you move a lot.
I appreciate the feedback. I’ll try to be better
I agree with the vignette movement. It does take away from your messaging.
A side-by-side comparison would have been effective to convey your argument. I don't see any service beating Midjourney yet. It is better than the rest by a long shot. Prove me wrong. I will wait.
Midjourney is great! Can’t disagree with you there. I think we’re at a point where you can get stellar results on a system that runs locally.
@@allyourtechai you did disagree with your title choice, which clickbaits hard
What about AMD GPUs?
Stable diffusion and most forms of ai run better on nvidia hardware in general. You can run InvokeAI on an AMD GPU. I have a tutorial that shows you a one click setup for that in my videos.
CAN ANYONE RECOMMEND A GOOD BUDGET PC LIKE 100 BUCKS OR LESS FOR *STABLE DIFFUSION?*
You are going to need at least a 20-series Nvidia GPU with 8GB of VRAM. Did you mean to say a 1,000-dollar budget?
Right off the bat, Midjourney produces much, much better results with the same prompt. I'm really sorry to say, mind you.
Now it does, yes. Remember though that this video was created before Midjourney 5 was released. At that time stable diffusion was easily on par
@@allyourtechai yep, thanks for pointing that out, hopefully open journey catches up again
As long as you've got a super-duper GPU PC/graphics card.
Or run it in Google Colab. Maybe a guide for running in Colab would be nice?
As a presentation, a side-by-side comparison with MidJourney would be much better, with a range of different styles.
But yeah, it’s free for a reason - results are mostly unusable and just a mess.
I have a video uploading that shows prompts to get hyper realistic images using these models.
No match for Midjourney, and with V5 it's killing it.
V5 wasn’t out at the time, but is definitely leading. Firefly and SDXL both look promising as well
Yeah, don't use paid sites when, with the right models, stable diffusion is perfect.
How many models will you end up needing for different styles, then?!
@@bernadettblummer108 As many as you want or need to cover the styles you want. Without knowing how many styles you want, there's no way to say. A fair guess would be a minimum of one model for each style you like.