Nice video, thank you! MJ 6 is wild. I've come to rely on zoom out, and I've come to rely on VARY REGION. V6 pretty much needs to nail it right away, or I go back to 5.2, where I have so much more control. And you can only upscale 2x, which is lame. Also, I would NEVER want a "creative" upscale. I want VARY REGION and I want 4x upscale.
I think we need to accept that it's still an early version of v6. I'm sure it will improve further.
@@TokenizedAI soon bro
YES -- around 13 minutes in, you run into a common problem of mine, which is censorship of fully-clothed women! I can't do face swaps on them, and it takes up a lot of time. They've been better since I complained, and also since I joined an NSFW site to get the images Midjourney hadn't been giving me. IRONICALLY, sometimes it gives me topless women I didn't ask for, and those are NOT marked NSFW. It seems obsessed with marking clothing "dirty" for whatever reason. It might have been your mention of socks that set it off.
I actually think this may become less of an issue if MJ ever fully moves off Discord. It's Discord's terms of service that are the primary culprit.
@@TokenizedAI OH! That explains why I find images on the MidJourney website that I never saw on Discord LOL!! The censored ones show up there.
@@ScaryStoriesNYC Man, I wish there was a way to hack Midjourney. It won't even let you do anything like lingerie or a nightgown. I wish there was a way you could do NSFW or even NSFW-ish images on Midjourney, especially with version 6. That would be crazy.
Have you tried uploading the reference picture to ChatGPT and asking it to create a Midjourney prompt that would produce the image? Might provide some useful hints.
I just gave it a go. Worked very well
Yes, that can help, but overly verbose prompts from ChatGPT can actually result in far worse images, especially if you just paste the entire text in straight away without any selection. Incremental prompting is far more effective.
PS: You also learn more this way.
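To illustrate what I mean by incremental (a made-up example, not the one from the video): start with just "man relaxing on a beanbag chair --style raw", check the grid, then extend it to "man relaxing on a beanbag chair, scrolling on his smartphone --style raw", and only then layer in lighting and setting. One change per render tells you exactly which phrase did what.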
Did you give it a go with your reference images? Obviously, I don't have their URLs, but I tried creating one with "woman on leafy balcony relaxing in the sun", uploaded the result to ChatGPT, asked for a detailed Midjourney prompt, and then used the verbose output verbatim to create a very similar image. Of course, I appreciate this is a special case where you are looking to replicate something exactly. Normally you would want to start from scratch with your own ideas.
Given that you reinforced a specific look with an image prompt, I'm not surprised at all. But as you say, although the video obviously demonstrates how to replicate an image, my primary objective is to show how one needs to prompt. If you're creating a new idea from scratch and you don't have a reference, then this is a far more effective way.
I did not use the image as part of the prompt - text only. I found a similar man-on-a-beanbag stock image and repeated the process I described. IMHO, it got a lot closer to the original image on the first try than yours did after several prompts. It may pick up on useful phrases, e.g. from mine: "The man has curly hair and is engaged with his smartphone, displaying a joyful expression. He is positioned in a relaxed, slouched posture that suggests comfort and ease". I agree, in general, with your last sentence, but to me it suggests that starting from a related image can sometimes be useful and will avoid things like the phone facing the wrong way.
Is it better to ask ChatGPT to first describe the image and then proceed from there? The issues with hands and feet might be resolved with Vary Region once it is released for version 6. Fingers crossed.
Best investment I made last year was to get your course. I do look forward to each new video. V6 is not going to be the last new version of Midjourney, but everything in your course has given me a great foundation that can indeed be a great starting point from which to grow with each new version. The fact that you take time to add new material that incorporates the changes in each new version means the investment continues to pay great dividends as time goes on. While there are many very good teachers out there, you are, as far as I am concerned, one of the very best. Very helpful video! Thanks. P.S. I appreciate that you don't rush out videos simply to rush out a video; you take the time to understand the changes yourself, making sure you are giving something of lasting value.
Thank you so much. It's nice to see that the work is being appreciated and not taken for granted 🤗
Would an RGB value work for the background? ❤ Thank you for another great video.
Not sure. I've tried hex codes in the past but those didn't work.
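For context, an attempt would look something like "portrait of a woman, solid #E63946 background --v 6" (the hex value here is just an example); the code itself never seemed to register as a color.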
Really love these new sped-up videos. Great quality improvement.
I'm essentially speeding myself up by 12% now by default during the cutting process. That seems to be a sweet spot for me.
I also have to generate a lot of monsters for my channel. 5.2 will eventually produce very coherent, realistic werewolves and monsters, but v6 so far does this WEIRD THING: it makes the monsters look like they're really there in the scene, but they don't look alive; they look more like statues built out of mud, standing in the middle of the woods at night. If you tell either version that the image is from a specific year, it makes the monster look like how monsters looked in horror movies of that year. So it will look like a guy in a silly suit if you tell it the 50s, but like a Rob Bottin radio-controlled animatronic movie monster if you tell it the 80s.
I think this might be a strong bias that's rooted in the training data.
I do a lot of zombies.
I'd go on Google and search makeup artists/VFX etc. Tom Savini, Ray Harryhausen. Maybe the dates are referencing cultural changes etc.
I think censorship is killing it. Complete mess now. I have collected 600 prompts from gallery searches. So many don't work now because the AI doesn't like them.
I find your deep dive into recreating an existing image, as you've done here, very helpful. Your comment noting the importance of a command of the language in creating good prompts is right on. AI image creation is, indeed, an art, though words are our brushes. I hope you'll be able to do more like this. Keep up the good work.
Thank you for showing the step-by-step process needed to produce a very specific image. What you teach (which I appreciate) is that it is unnecessary to create a mini-novel prompt, in one shot, to achieve all you want. Thanks!
Good job matching the control image with the prompt only. Personally, if I were to attempt this, I would start by using the original image as an image reference and try different weights on the image, as well as any aspects of the prompt that weren't coming through strongly enough. As the image got closer to target, I'd possibly add those as image references, which I've found helps to build coherence. My feeling is that by using an image reference, you are saving tokens that would otherwise be taken up with words... which a picture paints a thousand of. 📸
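For anyone unfamiliar with image weights, the rough shape of what I mean (URL and weight are placeholders): "https://your-uploaded-reference.jpg man lounging on a beanbag, phone in hand --iw 1.5 --v 6", then nudge --iw up or down depending on how strongly the reference should pull (that's how it behaves in 5.2, at least; weight support may vary by version).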
Yes, you can do that. My objective was to address the concern that the image reference is clearly owned by someone else. So it's debatable whether I'd be allowed to use it in this way.
@@TokenizedAI I don't see an issue if you're using it as a reference/starting point, and the final result doesn't end up looking like a photocopy. It's the equivalent of an artist referencing a photograph to paint a portrait.
I guess it doesn't really matter what we think but what the rights holder may think.
@@TokenizedAI There are copyright laws that set out how different an image has to be from another image before it can be classed as an original work in its own right. This is judged by the result, not the process employed to achieve it.
That's not my point.
The mere fact that I'm demonstrating this live on video means that anyone who sees this can use it to justify legal action against me, whether it has merit or not. I'm not saying they'd win, and you're absolutely right. But why expose myself unnecessarily when I don't have to?
Remember, this is Adobe Stock.
Would it work if you specified the color using a hex code or RGB value?
I dunno. Give it a try.
I think the hand and feet issues come from the fact that nearly every new major version of Midjourney is trained separately and "from scratch". V6 was in training for nearly 9 months. Maybe it is harder than we think to merge already-trained versions rather than train them from scratch.
Can you get ChatGPT to describe the image and copy and paste that into Midjourney?
You can try that, but I can assure you that it's not that simple. The ChatGPT prompt will add tons of bloat, and some words will have double meanings. You'll also have to turn all stylization off, like I did.
A longer prompt also doesn't automatically make the image better.
Yeah, but with ChatGPT you can always modify the prompt by telling it what it's doing wrong.
That doesn't help at all because it's going to change far more than you want it to change.
I really don't recommend this approach if you want to actually learn and understand how to do this. Using ChatGPT for this is not going to give you more control. Instead, you're outsourcing random luck disguised as "control".
Is v6.0 only good at photorealism? I have been trying to use it for art (not photos), but it is as if v6 doesn't understand. Your thoughts?
If you're working with the default stylization then coherence will be low. I recommend learning to prompt with --style raw and maybe even --stylize 0.
The high base stylization is always the problem. It makes prompting very beginner friendly but it makes it nearly impossible to prompt coherently if you need something very specific.
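As a generic sketch (an illustrative prompt, not one from the video): compare "flat vector illustration of a fox in a misty forest --v 6" against the same prompt with "--style raw --stylize 0" appended. With the flags, the result tends to stick much closer to the literal wording instead of Midjourney's house aesthetic.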
@@TokenizedAI Thanks for the advice, I will give this a try. 👍
The prompt master! And this from the creator of the Master of MJ course.
Man, I wish Describe worked WAY better. I really needed to duplicate some reference images for a client and just wasn't even getting close that way, so I'll try these suggestions, bro.
Great walkthrough!
Why are you using --s 0 with raw??
Do the comparison yourself. Use a complex prompt like I did and use the same seed. I get different images when using --s 0 and --style raw separately or in combination.
My guess is that not stipulating --style raw somehow changes other variables beyond just --stylize. 🤷🏻♂️
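If you want to reproduce the test yourself, it's roughly this (the prompt and seed are placeholders): run "your complex prompt --seed 1234 --s 0", then "your complex prompt --seed 1234 --style raw", then "your complex prompt --seed 1234 --s 0 --style raw", and compare the three grids side by side.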
Wow. At this rate I'll have the literary skills to write a novel by v8.
For specificity, why not just put the image reference into the prompt and/or use /describe?
Imagine a combo of both the understanding of DALL-E and the artistry of MJ.
Life is so unfair.
One day... 😁
I'm typing this as the vid starts. Some interesting results.
Hope u can do a fantasy one.
I'm wondering: if everything is created from a database, do u think MJ is just drawing the results from those stock images?
If you were to do a random one-off, non-existing visual, could you get the same results?
I remember in MJ4 trying to get yellow vines climbing up skyscraper buildings. Was getting horrific results. Might have to go back to that one. Check it out.
Thanks for confirming this approach. I have been trying it for a few weeks, and it seems to be the best way to get a specific look or scene out of MJ v6. I have also found that sometimes it is easier to separate the subject and background into different prompts and merge them in something like Photoshop, especially if you plan to animate parts of the images. Since v6 does not have Vary Region, I have been using Photoshop's AI generation to fix things. It seems to work pretty well.
Thanks again, Christian, for your videos. And BTW, I use Promptalot - A LOT! 😃😃
Don't you think that with every detail added the output gets worse and worse? Pretty much the way it's always been with MJ
Define "worse"? If you mean the artefacts, then yes. But generally speaking, the image gets more and more accurate.
Bear in mind, this is an alpha model of v6
@@TokenizedAI Yeah, I mean unnatural limbs, weird poses; the image becomes less of a great 'photo' or artwork and more of a... I don't know how to phrase it well, but it reminds me of any complicated website or web app backend -- the more features the client wants us to put in there, the more it looks like a maze or a tower that's going to come crashing down the minute you touch any insignificant part of it :)
@@TokenizedAI With previous models I always had the same feeling: first picture -- mostly great. The more I want it to be coherent with my prompt, the more unusable it becomes.
Somehow MJ failed to follow the part of the prompt where you said that the man is smiling with his right hand.
Yes, that sentence was ambiguous at best
Yeah, prompts can always be improved.
Because the steps are done so quickly, I can't tell whether you are remixing the original prompt or just making the additions in a brand-new prompt.
No remixing. Just straight prompts. If I were remixing, I would say so.
Wow that was quick. Thanks 😊❤️
You are awesome! Thank You
My pleasure 😊
I was beginning to have Christian withdrawals!
Great video. I'm not surprised the moderation got in the way; I brought it up during the last Office Hours. Based on my experience, you're absolutely correct about mentioning the vibe and atmosphere. They're subtle differences, but they _are_ there, which is impressive. I don't know who fell asleep on the hands, but it's ridiculous. The one thing I do feel bad about is that people need to be so aware of diction; that's fine for those whose proficiency is at a certain level, but not for others - a double-edged sword. Also, there's a lot of focus on photorealism (yes, I know, the market), but one thing worth applauding them for is how fantastic MJ has gotten with textures, lighting and actually achieving other styles. I cannot _wait_ for the style tuner. Thanks for the video. 👍
I've been working with Midjourney for a year now, and trying to use DALL-E sucks... First of all, I work with artistic styles, and DALL-E won't work with artists' names, so that's a wrap. You could say, "Oh well, instead of writing Dali, use 'surreal'", but that's for beginners. How would you describe fashion by Oscar de la Renta in the style of Olly Moss? All these comparisons are made with simple prompts like "a school bus flying through space". DALL-E is not for work. Anyone saying it is, is clearly just doing generic AI stuff.
I think it's pretty clear that DALL-E is perfect for generic imagery and not usable for truly aesthetic artwork.
I also wouldn't say that it only works with simple prompts. My previous videos demonstrated the exact opposite; those prompts are considerably longer than the one-liners you mention.
The point is that DALL-E is still far ahead of MJ in terms of truly understanding what you'd like to see in the image (subjects and details) and actually making sure they show up.
I work with MJ on A LOT of serious projects, and I can tell you that while MJ is amazing and coherence is much better now, it still infuriates me in many situations. The MJ team is taking a very opinionated approach to all of this, and while they've started to listen to their users more, I sometimes feel they are focusing on things that will use more GPU time rather than on what actually helps professional creatives achieve results quickly.
@@TokenizedAI Oh, I didn't mean to say it only does one-liners. I said almost all the comparisons on YouTube are done with one-liners, meaning exactly that the examples are not made with a professional purpose. So most of the examples are generic prompts that show what you are saying. Of course DALL-E handles language better than anything; it has years of ChatGPT backing it up.
Why did you never mention camera position, angle, or "side view"? MJ was all over the place on those parameters the whole time.
No particular reason. My objective wasn't to "copy" the image. I wanted its essence. Why reduce myself to a single possible angle?
I still haven't found anything better with v6.
Then you probably just don't need better prompt coherence for what you typically create. But for many of us, it's crucial.
I love how photorealistic people look in version 6. It's insane how they look. If you look closely enough, you can see little details, but an untrained eye unfamiliar with AI couldn't even tell. I wish there was a way you could hack it to make NSFW or NSFW-ish pics. It won't generate lingerie or even something like a nightgown for me. It's like it deems almost anything sexual. If anybody has any tips regarding that, please let me know.
❤❤was really waiting for this🎉🎉. Coherence 😮
thank you!
I really wish the hand problem in all AI would get fixed. There could be some sort of specialized hand-inpainting tool by a third party, or by Midjourney themselves. I don't understand why it doesn't exist; we're all generating pics of humans, and humans have hands. Please, for the love of god, people 🙏
I think you are making your life hard here. Why don't you just use the "describe" feature of Midjourney? Or upload your image into Bing Chat (for me it's better than ChatGPT, and free), ask it to describe your image as a prompt, and use that in Midjourney, along with the image as a reference. Or, if you have to, use the "style reference" command. For those of us who have English as a second language, it is hard to write such advanced prompts with difficult vocabulary.
I'm not making my life hard. I teach people how to prompt properly. The only reason I'm replicating an image in this example is because most people have no imagination and struggle even with the simplest idea. By showing a reference image, it's easier to illustrate how prompts should be written.
Using the describe command makes sense if you really just want to replicate an image but that's not the end goal here and for anything else, Describe won't really help people.
@@TokenizedAI True!
Wow, it sucks. A testament to how bad MJ is at replicating anything even approximately: yeah, you can get the lighting, clothing, architecture, etc. close, but the pose is so far off it's a joke, and even though you specify which hand holds the phone and which the football, it's totally incapable of doing it correctly.
Heads up. MJ doesn't really care how you feel about it.
@@TokenizedAI NS
And neither does anyone else.
👋