You use --ar at the end of your prompt. So, for example, if you wanted a 16:9 aspect ratio for an image about a coffee cup, you'd write: coffee cup, warm tones, woman smiling --ar 16:9
I think you just pick the image with the least amount of brightness/awesomeness, and that would be Stable Diffusion XL. It sucks at making images that POP. Also, DALL-E 3 tends to make isometric images for some reason.
For DALL-E 3, I just tried your gourmet coffee logo and it worked perfectly. The prompt was: Create a logo featuring a cup of coffee in a circle with the text ‘gourmet coffee’ around the inside perimeter of the circle
Midjourney gives realistic images and better colors, while DALL-E gives more cartoonish images, like something from Pixar Studios, and Stable Diffusion is somewhere in between.
The seamless pattern “test” would be better if you actually placed at least two copies of the image next to each other, instead of guessing whether it’s actually seamless! Seamlessness is the most important part of that test. Using text with quotes gives a better result.
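For anyone who wants to actually check seamlessness rather than eyeball a single tile: repeating the image 2x2 makes any seam jump out along the interior midlines. A minimal sketch using Pillow; the filenames in the usage comment are placeholders.

```python
from PIL import Image

def tile_2x2(path_in, path_out):
    """Paste an image into a 2x2 grid so any interior seams become visible."""
    img = Image.open(path_in)
    w, h = img.size
    canvas = Image.new(img.mode, (w * 2, h * 2))
    for dx in (0, w):
        for dy in (0, h):
            canvas.paste(img, (dx, dy))
    canvas.save(path_out)
    return canvas

# e.g. tile_2x2("pattern.png", "pattern_tiled.png"), then zoom in on the
# horizontal and vertical midlines of the output to look for visible seams.
```

A truly seamless texture shows no discontinuity where the four copies meet.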
I guessed all the Midjourney V6 renditions, so I think I will stay with that. I just need to get the face swapping fixed, as it works well with MJ 5.2, but 6 just came out badly.
Yeah, I'm pretty good at guessing MJ too. As for the FaceSwap tool - the newest video (tonight) will create some personalized AI Canvas art and use the FaceSwap tool. So I'll experiment to see if V6 is causing issues for me as well.
@@WesGPT I think I got it figured out. My Instagram now has 43 posts & 147 followers. I also got an offer to model sunglasses (I think they missed the AI Model tag in my bio) and another group wanting me to pay them to increase my follower list. I think I'm going to go based on my work and not pay for likes. At least for a month or so, then I might try them out.
Only if you use nothing but a simple prompt. XL has about 300 improved specialized models you can select from if you use the front-ends Automatic1111, InvokeAI, or ComfyUI. You can throw in a sketch of an idea and tell the AI to take that pose/composition/room and replicate it, for example using the light, materials, and feeling of another image you selected, via the IP Adapter. For professional work, XL is, in my opinion, light-years ahead, even if a simple text-to-image prompt in Midjourney creates nearly magically beautiful images. XL lives from the community delivering additional tools.
Midjourney, not being accessible for me, is out of the question, but between the other two it's really hard to pick a favorite. DALL-E is pretty cool because its interpretation of the prompt tends to be extremely literal. I would say Stable Diffusion interprets prompts a little more loosely. And DALL-E, at least for me, always has some extra something, like a finished shine, that I don't see on SD as much. Cool video. Thanks. Just one thing, too: I hear this all the time and got it wrong myself in the past, but apparently photorealism is just an art style that attempts to get close to looking like an actual photo. But if you're looking for a realistic photo, or realistic-looking art that appears to be a photo, you would just prompt "photo".
@@WesGPT Hey Wes, it's nothing too bad, just the recurring cost. Even at only 10 a month, I try to limit my AI spending, as I already pay for DALL-E 3 (600 images a month for 14 bucks). That's cool - I just learned about photorealism recently too.
IMHO, it all depends on the requirements, not only the aesthetics. For example, DALL-E 3 is irreplaceable for storytelling (such as my weird tutorials project at: ruclips.net/video/sXgZDn-5QGE/видео.html - 100% made by AI) 😄
👋 Dall-E 3 is the clear winner for most images in regards to prompt coherence and image fidelity. Plus......... *it's FREE to use* For certain types of architecture, SDXL does exceptionally well. 👍
Thanks for voting! I thought DALL-E 3 won the seamless textures, the architecture, and the cartoon. But I'm a real sucker for photorealistic images and I loved what Midjourney did with the saxophone player 💯
@@WesGPT As concerns the sax player, would SDXL perform better with LoRAs and special checkpoints? As far as I know, Midjourney and DALL-E cannot be tweaked and improved, unlike Stable Diffusion.
@@obscuremusictabs5927 The next plan up ($30) gives you unlimited relaxed generations and 15 hours of fast generations. I only do maybe 4-5 generations per day, so the basic plan is plenty!
Underwater cartoon: 1 MJ, 2 DALL-E, 3 SD. Realistic man: 1 SD, 2 DALL-E, 3 MJ. DALL-E 2 😂😂😂😂 Architecture: 1 DALL-E, 2 MJ, 3 SD. Pattern: 1 DALL-E, 2 MJ, 3 SD; got tricked as well. Logo: 1 SD, 2 DALL-E, 3 MJ. Thanks for the game 😂 Fun.
Stable Diffusion XL is an open-source base model on which you build other models focused on more specific areas; you also have LoRAs, which influence the attention layers. So you are not testing the whole package. Also, the prompt is a little too wordy and won't break into tokens very well. "(cartoon:1.4) octopus, cheerful, pirate hat, shimmering, coral reef, fish, underwater, magical, colorful" is probably all you need. As I understand it, the current generation don't read ordinary text, they only break it into tokens, although with Stable Diffusion you can now hook GPT-4 up to it, which will write a prompt that tokenises well. They are all pretty good though, and getting better at a dizzying rate.
This is good to know! Thanks for sharing. Definitely need to research more about Stable Diffusion XL and its models/layers.
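For the curious, the "(cartoon:1.4)" notation mentioned above is the attention-weighting shorthand used by SD front-ends such as Automatic1111. A toy illustration of how a prompt in that basic form could be split into (term, weight) pairs; this is a hypothetical parser for the simple "(term:weight)" case only, not any front-end's actual implementation (which also handles nesting and bare parentheses).

```python
import re

# Matches the simple "(term:1.4)" weighted form; unweighted terms default to 1.0.
WEIGHTED = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_prompt(prompt):
    """Split a comma-separated prompt into (term, weight) pairs."""
    pairs = [(term.strip(), float(weight)) for term, weight in WEIGHTED.findall(prompt)]
    # Whatever is left after removing weighted groups is a plain tag list.
    remainder = WEIGHTED.sub("", prompt)
    for chunk in remainder.split(","):
        chunk = chunk.strip()
        if chunk:
            pairs.append((chunk, 1.0))
    return pairs

print(parse_prompt("(cartoon:1.4) octopus, cheerful, pirate hat"))
# -> [('cartoon', 1.4), ('octopus', 1.0), ('cheerful', 1.0), ('pirate hat', 1.0)]
```

Weights above 1.0 upweight a term's influence in the attention layers; around 0.7-0.9 downweights it.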
@@WesGPT ComfyUI is the best way. It's hard to get your head around at first, but it's a great way to learn how image generators work. I love Midjourney etc., but they sacrifice a lot of potential for ease of use.
@@WesGPT Indeed! That’s just the tip of the iceberg as well: a locally installed Stable Diffusion lets you add extensions too. A great one is “ControlNet”, which gives you tons of creative control, such as: the ability to extract poses from images, which your generated image will then copy (you can edit the pose as well); depth maps, so you can control the negative space in a composition and the distance at which things are placed in the render; transferring a face from an image to your generated image using IPAdapter; and even importing images and converting them to line art, whose structure Stable Diffusion will keep and not stray from.
And so many more. With all these add-ins, you can control the weight and strength, and also control when the process starts and ends during image generation.
Another great extension is “Photopea”, which lets you send your generated image into a Photoshop-like clone, make changes, and send it back to Stable Diffusion. With all these things combined, and more, it definitely gives you the most creative control.
So for someone like myself, who actually does illustration and photo editing, it lets me be a lot more involved and precise with what I’m trying to generate.
Have you guys used tensorart? Is it a good web Stable Diffusion? Or can you recommend another? Midjourney is too expensive…
@@rafaelborgocruz I haven't used tensorart but use Stable Diffusion XL quite a bit. Midjourney is still my favorite, but if you can get access to any free AI-image generator, and the results look good, you should use it 😊
🎯 Key Takeaways for quick navigation:
00:00 🎨 *Image Generator Overview*
- Introduction to the comparison of image generation results.
- Overview of the three contenders: DALL-E 3, Stable Diffusion XL, and Midjourney v6.
- Explanation of how to access and use each image generator.
01:19 🌈 *Categories for Comparison*
- Explanation of the five categories for image generation comparison: cartoon images, photorealistic humans, architecture, seamless patterns, and logos.
- Mention of the selected prompts for each category.
- Emphasis on the variety of outputs to capture a wide range of styles.
02:27 🤔 *Cartoon Image Generation*
- Presentation of the first category: cartoon images.
- Display of three generated images without revealing their sources.
- Inviting viewers to guess which image corresponds to DALL-E 3, Midjourney v6, and Stable Diffusion XL.
03:59 🎭 *Cartoon Image Reveal*
- Reveal of the sources for the three cartoon images.
- Comparison and analysis of each image's adherence to the prompt.
- Encouragement for viewers to share their preferences among the three images.
04:46 📸 *Photorealistic Human Image Generation*
- Introduction to the second category: photorealistic human images.
- Presentation of three generated images of a street performer playing a saxophone.
- Observation of details and expressions in each image.
06:33 🌟 *Photorealistic Human Image Evaluation*
- Reveal of the sources for the three photorealistic human images.
- Comparison and personal preference assessment of the images.
- Highlighting Midjourney v6 as a standout in terms of photorealism.
07:16 🏰 *Architectural Image Generation*
- Introduction to the third category: architecture.
- Presentation of three generated images of an elaborate Gothic Cathedral complex.
- Focus on architectural details and surroundings.
08:25 🖼️ *Architectural Image Sources Revealed*
- Reveal of the sources for the three architectural images.
- Analysis of each model's approach to the Gothic Cathedral prompt.
- Recognition of common traits in the output of each image generator.
09:06 🌸 *Seamless Texture Generation*
- Introduction to the fourth category: seamless textures.
- Presentation of three generated images of vintage floral wallpaper.
- Evaluation of hand-drawn details and pastel color choices.
10:44 🔄 *Seamless Texture Assessment*
- Reveal of the sources for the three seamless texture images.
- Assessment of the seamlessness of each texture.
- Acknowledgment of preference and recognition of Midjourney v6 for its performance.
11:11 ☕ *AI Business Logo Generation*
- Introduction to the fifth category: AI business logos for a gourmet coffee shop.
- Presentation of three generated logo designs.
- Evaluation of details, color schemes, and overall aesthetic.
12:49 🎓 *AI Business Logo Sources Unveiled*
- Reveal of the sources for the three AI business logos.
- Critique of each model's performance in logo design.
- Reflection on the progress from earlier AI models like DALL-E 2.
13:45 🤖 *Closing Remarks and Future Plans*
- Brief explanation of accessing Midjourney version 6 in alpha stage.
- Request for viewer feedback on continuing the series with more models and diverse prompts.
- Mention of previous videos and potential additions to future comparisons.
Made with HARPA AI
Very cute little ending with Dall-E 2. I remember when it blew my mind. Took me back!
I know, same here! Crazy how far we've come.
After I guessed the first 3 rounds 100% accurately I patted myself on the back, been working with Mid and Stable for over a year and can easily spot its images from a block away. 🚀🚀
Same. You have a great AI eye 👀
This is great! I was looking for a comparison just today. Also, DALL-E 3 has an API I can use! Perfect.
Midjourney doesn't have an API, if anyone is wondering.
Yeah, it doesn't. But some 3rd party companies may be offering access to Midjourney through an API. Haven't explored this yet though 😊
@@WesGPT That is true! But using it comes with the risk of getting banned under Discord's ToS.
@@ButcherTTV But wouldn't you be using it outside of Discord because it's an API? So the only person on the hook for the ToS would be the third-party business.
@@WesGPT Probably true! The one I was looking at required you to use your own Discord login info. I haven't looked in depth though. You would hope that paid services would handle the Discord-account side of it for you.
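On the DALL-E 3 API mentioned earlier in this thread: with the official openai Python library, the call is roughly client.images.generate(model="dall-e-3", prompt=..., size=..., quality=...). Here is a sketch that only assembles the request parameters, so nothing needs an API key; the helper name and the validation are my own additions, while the model name, supported sizes, and one-image-per-request limit match the published API.

```python
def build_dalle3_request(prompt, size="1024x1024", quality="standard"):
    """Assemble the kwargs for openai.OpenAI().images.generate()."""
    # DALL-E 3 supports square plus two widescreen sizes, and n=1 only.
    allowed_sizes = {"1024x1024", "1792x1024", "1024x1792"}
    if size not in allowed_sizes:
        raise ValueError(f"DALL-E 3 only supports sizes {sorted(allowed_sizes)}")
    if quality not in {"standard", "hd"}:
        raise ValueError("quality must be 'standard' or 'hd'")
    return {"model": "dall-e-3", "prompt": prompt, "size": size,
            "quality": quality, "n": 1}

# Usage (requires the openai package and OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   result = OpenAI().images.generate(**build_dalle3_request("a cartoon octopus"))
```

Each generation is billed per image, so validating parameters before the network call saves failed (but latency-costing) requests.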
Leonardo is my all time fav. Especially “live canvas”. And I like ideogram.
There’s a ton of AI in Canva now, and I would love some deep dives into workflows using the image generator and exploring the possibilities with it.
We should unite this with Neuralink and animate people's memories.
@@HKallioGoblin Or should we? 😂 That’s an ethical can o’ worms. Holy moly.
Oh, let me look into the Canva AI stuff - great idea!
@@CM-zl2jw At least we should have lots of conversations about what people want, and possibly legislation too. Otherwise someone reads a neighbour's secret memories and publishes them to the web.
@@HKallioGoblin Have you watched the first season of “Upload”? It’s interesting and hilarious. I didn’t watch the second season because they went woke. Go woke, go broke.
Anyway. It’s an interesting concept and the series might be predictive programming? Who knows. We have to be willing to think step by step, take a breath and think out loud 😁
Nice vid, couple of things though:
First, "photorealism"/"photorealistic" is an art style that involves painting, digital illustration, or drawing in a manner that attempts to depict its subject in the most realistic fashion possible. So actually DALL-E got the art style closest in your second round. I only say this because the models often understand the definitions of art styles as an artist would. While the term is commonly used by non-artists to describe photography, it's not what you want to use in a prompt if you want a photograph.
Second, using the same prompt between these AI systems, while it might seem to be a way to control variables, is going to create problems. Prompts need to be tailored to the way the models were labeled. Midjourney and Stable Diffusion tend to work best with "tag-like" prompts: a list of visual attributes that doesn't include a narrative and isn't constructed as a sentence, e.g. "Photograph, black man, blue shirt, saxophone, busy downtown street". It's also best to keep your prompt under the token limit for the first tensor (about 100 words). DALL-E likes longer prompts with narrative and descriptions of the subject's emotions, etc. It's just a difference in the way the models were built and the images labeled.
1. Thank you for this! It's good to know exactly what artists mean by "photorealism" (though I did have a hunch I was using the phrase wrong).
2. I agree with this too. This was not a perfect test by any means, but I wanted to compare in the simplest way possible. If I were to redo the experiment, I'd definitely try to brainstorm a way to cater specific prompts to specific image generators.
@@WesGPT Thanks for the reply. Also, with DALL-E 3, it looks like OpenAI has decided it's going to use ChatGPT to revise all image prompts. I guess this is their way of preventing misuse. It makes it a lot harder to figure out how to present your prompt. Just FYI.
Also I thought your use of "disposable camera" in your primer on selling club photos was a smart way to get the look you want.
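On the token-limit point above: CLIP-based models like SD/SDXL encode a fixed window of 77 tokens (roughly 75 usable), which is why trimming tag lists matters. A rough sketch of keeping a comma-separated tag list under budget; the one-token-per-word estimate is a crude stand-in for the real BPE tokenizer, and the function name and logic are purely illustrative.

```python
def trim_tags(tags, budget=75):
    """Keep whole tags (comma-separated phrases) under a rough token budget.

    Approximates tokens as words plus one per separating comma; CLIP's real
    tokenizer uses BPE, so treat this as a conservative sketch, not an exact
    count.
    """
    kept, used = [], 0
    for tag in tags:
        cost = len(tag.split()) + 1  # words in the tag, +1 for the comma
        if used + cost > budget:
            break  # drop this tag and everything after it
        kept.append(tag)
        used += cost
    return ", ".join(kept)

print(trim_tags(["photograph", "black man", "blue shirt", "saxophone",
                 "busy downtown street"], budget=8))
# -> photograph, black man, blue shirt
```

Keeping whole tags (rather than cutting mid-phrase) matters because a truncated tag like "busy downtown" steers the image differently than the full phrase.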
So you've used the BASE Stable Diffusion XL.
The base is like a rough diamond: it does not look that great, but you can shape it into many beautiful forms.
For someone who just wants to get some images, Midjourney is the best looking "right out of the box"...
Most people who just use the Stable Diffusion BASE "as is" never get the quality images that you see the pros achieve.
But if you are going for a specific look, use and mix models and LoRAs, and train your own; you can get superior results.
I wonder if you'd consider comparing these again, giving Stable Diffusion the time and effort it needs to be a real gem.
Do you find that the best trained SD models perform better than Midjourney v6 for most types/styles of images?
@@WesGPT It depends; you can choose a specific model to generate a certain kind of image, and the result is almost perfect.
"Most people who use SDXL out of the box never get good results." Damn right. I couldn't tolerate it. I deleted a bunch of training data and was like, screw this, there are tuned free ones online anyway. Then I got Fooocus and couldn't believe it. Tuning that would have taken I don't even know how long... all done, and all in a simple-to-use UI. It was like upgrading from a 1970s computer to a modern one.
This was awesome Wes! You'll need to do this video again in a year. Things are about to get silly.
Oh, for sure. I can't wait to see what the next version has in store.
When you say "silly" I fully expect that to mean "blitheringly insane" at this point (^ᴗ^'')
@@amondhawes-khalifa1949 I'm not sure how much image generation even has left to improve. Once it can do text, that's about it. The next level is video.
Prompt understanding - Dall-e 3.
I think it won that too!
You've got to mention which model you used for Stable Diffusion. It's not a fair comparison, because some models do better than others. I think Midjourney is capable of automatically switching to the best model for you; that's why we normally get good results.
I used all the default settings for Stable Diffusion XL in DreamStudio
Hello Wes, I want to ask: how can you use the OpenAI free trial?
As far as I'm aware, OpenAI does not offer a free trial.
However, they MAY give you free credits (up to $10) when you sign up for a developers account. Also, you can use Microsoft Copilot for free (which is pretty much ChatGPT Plus)
@@WesGPT Because I want to try your auto blogger. I am a student and I don't have a credit card 😭😭 (sorry, my English is not very good)
Yeah, see if they can give you some free trial credits! Try contacting support too 😊
@@WesGPT Hello Wes! I got a $5 free trial. Can I use it in the auto blogger!?
Yes!
This is a cool idea and a cool test... but it's not really fair, because to get the best results out of Midjourney (even v6), and certainly SDXL, you have to prompt them differently than DALL-E. So using the same prompt for all 3 won't give fair results, unfortunately 💪😎
So do we give even MORE points to models like Midjourney v6 and SD XL because they performed as well, if not better, with the DALL-E prompts?
My impression is that DALL-E has the best language understanding, but the resulting images often have a "corporate art style" and care more about accuracy than coherence. SDXL seems to have relatively weak language understanding, but it is also the easiest to manipulate with certain keywords; SDXL prompts feel more like instructions than descriptions. Midjourney has really good scene and style consistency, but it can be hard to precisely control what you want. Ultimately I still prefer SDXL, because you have more exposed knobs to tune when running locally, and when you really want to create something precisely, being able to directly feed control instructions to the model is much easier than trying and failing with different prompts.
Thanks for sharing your opinion! Really refreshing because most of the answers praised Midjourney over SDXL.
Octopus - Very predictable. Midjourney doesn't do anthropomorphic very well (from what I've seen), so the left image was obviously Midjourney. The bubble-duplication of the treasure chests on the right said Stable Diffusion. And the very cartoony center image was very Dall-E. (All correct)
Sax player - Midjourney likes its dark compositions, so that was obviously the third image. The broken sax was an easy Stable Diffusion tell. And the slightly plastic-y person on the left was clearly Dall-E. (All correct)
Cathedral - Composition and realistic plants in the middle image says Midjourney. The toy model on the left is Dall-E (it looks very similar to architecture prompts I've tried on Dall-E before). So the messy cathedral on the right is Stable Diffusion. (All correct)
Wallpaper - Midjourney does not like color, so the left image. There's a subtle roundedness in the right image which to me suggests Dall-E. That leaves Stable Diffusion for the brightly colored middle image. (Wrong: Mixed up SD and MJ. Apparently MJ doesn't hate bright colors as much as I've imagined.)
Logo - Dall-E does logos well, so that should be the middle image. Midjourney tends towards extra noise in the image details, so the right image. So Stable Diffusion on the left, which really didn't grasp "logo" all that well. (All correct)
Given how Stable Diffusion did with the logo, if it had come first, I would have more easily guessed that SD did the left wallpaper image, because of similar design problems.
Overall, Dall-E does bright and round and cartoony, and is very good at adhering to prompt detail; Midjourney does dark and realistic, with good composition and image balance, but has real issues following prompts with detail; and Stable Diffusion is most likely to be a bit broken.
Wow, you did so well! You have a GREAT eye to the different generations of these models. Very impressed 💯
If you want prompt-only generation with SDXL, use Fooocus, as it imitates Midjourney and creates great results. Otherwise, SDXL without any customised controls or community-trained checkpoints is going to be lackluster.
This is a good tip. But did you misspell something? Do you mean "Focus" instead of "Fooocus" ?
How to change aspect ratio of the pictures in Dall-e 3?
You use --ar at the end of your prompt.
So, for example, if you wanted a 16:9 aspect ratio for an image about a coffee cup, you'd write:
coffee cup, warm tones, woman smiling --ar 16:9
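One caveat: `--ar` is Midjourney's parameter syntax; if you're calling DALL-E 3 through the OpenAI API, aspect ratio is controlled via the `size` parameter instead. Either way, if you're building prompts in code, a tiny hypothetical helper (the function name and default are mine, not from any official API) keeps the flag consistent:

```python
def with_aspect_ratio(prompt: str, ratio: str = "16:9") -> str:
    """Append a Midjourney-style --ar flag to the end of a prompt."""
    return f"{prompt.rstrip()} --ar {ratio}"

print(with_aspect_ratio("coffee cup, warm tones, woman smiling"))
# coffee cup, warm tones, woman smiling --ar 16:9
```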
DALL-E 3 would be great if I weren't paying to get throttled after making a few images
Yep, I hear you haha
I've been working with AI image generators for a long time. Even so, I was surprised that I got all of them correct somehow. Yay
Great job! I was hoping a few would trick you guys (like the pattern one)
I think you just pick the image with the least amount of brightness/awesomeness, and that would be Stable Diffusion XL. It sucks at making images that POP. Also, DALL-E 3 tends to make isometric images for some reason.
Did you prefer Midjourney's generations over the others for most of the tests?
Before the cartoon prompt reveal, I paused to guess:
1. MJ
2. D3
3. SD
Wohhoo! I guessed right.
Great job! You have a good eye for the different models 💯
I think you're supposed to put the text in "quotes" to generate properly
For which prompt/model?
For DALL-E 3, I just tried your gourmet coffee logo and it worked perfectly. The prompt was: Create a logo featuring a cup of coffee in a circle with the text ‘gourmet coffee’ around the inside perimeter of the circle
@@ThePennyPincher-td5oj Thanks for trying that out! For the prompt in the experiment though, there isn't supposed to be any text.
3:50
I guessed the MJ one... but I failed on the DALL-E and SDXL ones. I'm actually a bit surprised SDXL did that well with that prompt.
Yeah, same here! The octopus definitely looks like a DALL-E image. Surprised at how well it can generate cartoon-like images.
Love this stuff! Leonardo AI maybe?
Great suggestion! I'll add it to the list 😊
really good video, would love to see more!
Thank you so much, there's more to come!
Very cool, thank you. Can you compare or recommend which AI does the best sketch-to-photo rendering?
Really good video suggestion! I'll add it to the pipeline 😊
Could you do another comparison to see which one is able to compose a group of in-focus subjects most coherently?
Yes, definitely!
Midjourney gives realistic images and better colors, while DALL-E gives more cartoonish images, like something from Pixar Studios, and Stable Diffusion is somewhere in between.
Agreed 100%. Different use-cases for the different models
The seamless pattern “test” would be better if you actually placed at least two images next to each other, instead of guessing whether it’s actually seamless! Seamlessness is the most important part of that test.
Using text with quotes gives a better result.
Yeah, I should have done this. My bad. Will do it in the next test!
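The side-by-side check suggested above is easy to automate. A rough sketch (NumPy only; both function names are mine) that tiles an image into a 2×2 grid so seams become visible, and scores how badly opposite edges disagree, where 0 means the borders wrap cleanly:

```python
import numpy as np

def tile_preview(img: np.ndarray, reps: int = 2) -> np.ndarray:
    """Tile an H x W x C image into a reps x reps grid so seams
    between copies become visible at the tile boundaries."""
    return np.tile(img, (reps, reps, 1))

def edge_mismatch(img: np.ndarray) -> float:
    """Mean absolute difference between opposite edges; 0 means the
    left/right and top/bottom borders match exactly when tiled."""
    lr = np.abs(img[:, 0].astype(float) - img[:, -1].astype(float)).mean()
    tb = np.abs(img[0].astype(float) - img[-1].astype(float)).mean()
    return (lr + tb) / 2

# A flat-color image tiles perfectly, so its mismatch score is zero:
flat = np.full((8, 8, 3), 128, dtype=np.uint8)
assert edge_mismatch(flat) == 0.0
```

For a real check you'd load the generated pattern with Pillow (`np.asarray(Image.open(...))`) and eyeball the tiled preview for visible seams, using the mismatch score as a quick first filter.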
Yeah, Midjourney and SD must be prompted with a description, not a command. Just saying. :)
Should definitely remove words like "generate" "make me" and "draw an image" for the next experiment.
I guessed all the Midjourney v6 renditions, so I think I will stay with that. I just need to get the face swapping fixed, as it works well with MJ 5.2, but v6 just came out badly.
Yeah, I'm pretty good at guessing MJ too.
As for the FaceSwap tool - the newest video (tonight) will create some personalized AI Canvas art and use the FaceSwap tool. So will experiment to see if V6 is causing issues for me as well.
@@WesGPT Awesome! Another question: any way to add makeup after the face swap? My GIMP skills are quite rusty.
@@Wolfgang1224 Oh that's tough. Can't think of a way to do this with AI (yet)
@@WesGPT I think I got it figured out. My Instagram now has 43 posts & 147 followers. I also got an offer to model sunglasses (I think they missed the AI Model tag in my bio) and another group wanting me to pay them to increase my follower list. I think I'm going to go based on my work & not pay for likes. At least for a month or so, then I might try them out.
Did you put the words for the coffee one in quotation marks? MJ version 6 does its best to copy text, but only if it's specified within quotes.
I used the prompt exactly as shown in the video! But I'll definitely use quotation marks for the next test.
Great video, thank you for the comparison!
Thanks for watching! I'm glad you enjoyed 😊
I'm running Stable Diffusion locally. You just need an RTX or GTX video card.
Zero censorship!
Amazing! I definitely need to upgrade my computer haha
Nobody really uses the base model of SDXL, though.
Octopus = stable diffusion / dall-e / midjourney
Saxophone = dall-e / stable diffusion / midjourney
Building = dall-e / stable diffusion / midjourney
Good guesses! How did you do?
3:44 "Definitely got the chest right"
*Shows a blob of wood and locks mangled together in the rough shape of a pyramid.*
Haha!
Cascade can do text, I hear. Haven't gotten it to work yet. Fooocus is pretty damn good, but not with text.
I'll check them out 😊
Awesome Video! Thank you!
Glad you liked it!
Midjourney loves muted backgrounds 🤭
Oh does it ever 😂
@@WesGPT yeah always 😂
Midjourney v6 is lacking in prompt understanding compared to v5.2
I found it to be quite the opposite. In my next video, I use a 65 word prompt and it gets all of the details near the end of it.
Are you using community trained checkpoints on SDXL?
No, this was the default model from DreamStudio.
Midjourney is the best in my opinion
Agreed 100%
Only if you use nothing but a simple prompt. SDXL has about 300 improved, specialized models you can select from if you use the frontends Automatic1111, InvokeAI, or ComfyUI. You can throw in a sketch of an idea and tell the AI to take that pose/composition/room and replicate it, for example using the light, materials, and feeling of another image you selected via the IP Adapter. For professional work, SDXL is, in my opinion, light-years ahead... even if the simple text-to-image prompting of Midjourney creates almost magically beautiful images. SDXL thrives on the community delivering additional tools.
@@tomschuelke7955 I need to look into SD XL more! I don't know much about all of these community models.
Cool pictures
Love the photorealistic ones!
Midjourney isn't accessible for me, so it's out of the question, but between the other two it's really hard to pick a favorite. DALL-E is pretty cool because its interpretation of the prompt tends to be extremely literal. I would say Stable Diffusion interprets prompts a little more loosely. And DALL-E, at least for me, always has some extra something, like a finished shine, that I don't see on SD as much. Cool video. Thanks.
Just one more thing. I hear this all the time and got it wrong myself in the past, but apparently photorealism is just an art style that attempts to get close to looking like an actual photo. If you're looking for a realistic photo, or realistic-looking art that appears to be a photo, you would just prompt "photo".
Oh no! Why is Midjourney not accessible for you?
As for the "photo-realistic" comment - you're right, and I learned this after the video was created 😊
@@WesGPT Hey Wes, it's nothing too bad, just the recurring cost, even if it's only $10 a month. I try to limit my AI spending, as I already pay for DALL-E 3 (600 images a month for 14 bucks). That's cool - I just learned about photorealism recently too.
❤Thank you sir 🎉😮😊
You're the best 😊
There is something wrong with that Sax for sure...
I ain't no sax player but it doesn't look right 😂
Stable Diffusion can generate NSFW, and that alone kills the other 2 😂
Oh 100% 😂
Midjourney > DALL-E > SDXL
more, this is interesting
Glad to hear you thought this was interesting 😊
I kinda like DALL-E 2 haha, it's more artsy
Haha! I guess it does have it quirks...
Midjourney FTW
I agree 100%
IMHO, it all depends on the requirements, not only the aesthetics. For example, DALL-E 3 is irreplaceable for storytelling (such as my weird tutorials project at: ruclips.net/video/sXgZDn-5QGE/видео.html (100% made by AI)) 😄
Haha I LOVE this video! Thanks for sharing 😊
❤
❤️
👋
DALL-E 3 is the clear winner for most images with regard to prompt coherence and image fidelity.
Plus......... *it's FREE to use*
For certain types of architecture, SDXL does exceptionally well.
👍
Thanks for voting!
I thought DALL-E 3 won the seamless textures, the architecture, and the cartoon. But I'm a real sucker for photorealistic images and I loved what Midjourney did with the saxophone player 💯
@@WesGPT Regarding the sax player, would SDXL perform better with LoRAs and special checkpoints? As far as I know, Midjourney and DALL-E cannot be tweaked and improved, unlike Stable Diffusion.
@@eugenmalatov5470 I've been hearing a lot about checkpoints and LoRA models but have not had a chance to try them out yet. I'll get back to you 😊
It is better
MUCH better.
@@WesGPT I mean, which of these two applications is better?
@@M53_53 It depends on what type of image you're generating but I usually default to Midjourney v6
It’s called bokeh. 😂
I keep seeing this word everywhere now haha. Never knew it meant "blur produced in out-of-focus parts of an image"
200 images? lol
200 generations of 4 images each, so 800 images.
@@WesGPT Even so, that comes to roughly 7 prompts a day for a whole month.
@@obscuremusictabs5927 I'm unable to understand what you're implying. Is that too much or too little?
@@WesGPT Too little by a thousand miles. How many generations do you do in a day?
@@obscuremusictabs5927 The next plan up ($30) gives you unlimited relaxed generations and 15 hours of fast generations.
I only do maybe 4-5 generations per day so the basic plan is plenty!
Under water cartoon = 1 mj
2 Dalle
3 sd
Realistic man
1 sd
2 dalle
3 mj
Dalle 2. 😂😂😂😂
Architecture
1 dalle
2 mj
3 sd
Pattern
1 dalle
2 mj
3 sd
Got tricked as well.
Logo
1 sd
2 dalle
3 mj
Thanks for the game. 😂. Fun.
Thanks so much for voting! Loved that you participated and wrote all of it down 🙏
A1111 plus CivitAI for the $0 win or ComfyUI + CivitAI if you're a masochist.
Looking into A1111 and ComfyUI soon. Haven't heard of CivitAI yet though!
@@WesGPT CivitAI is basically a public site where the community shares their resources, with checkpoints being the indispensable one.
🔥
❤️