Thanks Nolan! I can see this being very useful when putting together a brand identity for a company. Being able to keep colors, etc. consistent with the brand.
Yah that's a really good point!
I’ve watched a hundred videos trying to figure out how to do this. This was the only one that worked and made it simple and straightforward. Thank you!!
😂 well thanks for the feedback, I'm glad to know I was able to help!
the cooking analogy is genius 🙌
Clear, concise, easy to understand information on image prompting vs style reference. Thank you, Nolan.
And thanks for the feedback! I'm really glad my way of teaching works well for you
What a clear and straightforward explanation. Thanks Nolan... spot on as usual!
Cheers!
great explanation. the best video tutorial on the subject. gratz!
Well thanks a lot for the kind words!
Thank you so much. This is exactly the type of blending I've been looking for.
Cheers!
Thanks - nicely explained. Offers some clarity to what I've been doing, and what to watch for.
Thanks a lot for the feedback!
great analogies... i like the math one
That's awesome to hear!
great analogy, I understand Style refs better now, thanks
hey I'm glad to hear that!
Thank you. Was planning on researching this myself. But here you go, once again coming to the rescue 👍
😂 happy I could help
Great video! Very helpful and explained so well! Thank you so much!
I'm happy to hear that!
Thanks Nolan, as a professional photographer thinking of commercial ideas, your help is of great value! Greetings from London.
Cheers!
you explain it so well
I'm happy to hear that!
that last example really made style referencing make more sense to me. thank you, Nolan! the possibilities now!!! 🤗
Hey I'm glad I found one that worked!
Another great video. Like your positive energy, Nolan. One thing I discovered when trying out --sref is that when you prompt you can't put a comma after the subject, and before sref--or you get an error message. So portrait --sref, not portrait, --sref
Thanks for the feedback! And that's good to know, I'm sure a lot of people will run into that problem
The PB&J explanation is an award-winning one, thanks!
haha well I appreciate the feedback! I'm glad that one made sense
awesome stuff, great info as usual, thanks bud!
Cheers!
Great explanation! All 3 make sense and have different uses. I'll definitely switch to v6 for my future work!
Hey I'm glad they made sense!
Thank you for this video. I didn't know that I needed to understand this, but now I have so much more control over my art! Seriously, thank you.
And thanks a lot for taking the time to leave a comment. Cheers!
Great video. Thanks, Nolan. Made the difference super clear.
I'm really glad to hear that, thanks for the feedback!
Great video. Can see how much you're improving in how you approach making these videos
I'm happy to hear that and I appreciate the feedback! Cheers pal
One carries the referenced subject more. The other carries the referenced style more.
Excellent video, thank you. 💯
Cheers!
You explain and exemplify so clearly, it's immensely helpful! 🥰
I'm really glad to hear that, thanks for the feedback!
We love your channel. Great explanation! Thanks Nolan! 🤟
Cheers!
Thanks!
And thank you very much!
FAN-FLIPPIN-TASTIC!!!!!! Really helpful tutorial. Many thanks 🙂
Cheers!
My guy you a real one, thanks for the explanation.
Cheers pal!
SREF is probably the thing I asked MJ to implement the most, and it's freaking amazing! I've been inspired like it was the first time using Midjourney!
I can't wait for CREF / character reference, that should make MJ a lot better! Would love to be able to use SREF and CREF in the same prompt!
Yah I can't wait for that. Let's hope it's as good as we're imagining 😂
true that, hopefully it's good enough to be able to recognize the same character across all generations! @@FutureTechPilot
Thank you so much for these comparisons! Seriously, the differences make way more sense now.
Hey I'm happy to hear that!
Your videos are great. Thank you for sharing your knowledge with us ❤
And thank you for taking the time to comment!
Thanks, Nolan! Valuable insight. :)
Cheers pal!
Thank you, great explanation !
Cheers!
I just used these tools today to regenerate a portrait.
I noticed that you can also control the weight of image prompting by typing --iw and a value between 0.1 and 3 👍🏼
Yah! Super helpful for recreating a specific picture
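For anyone trying this, a hypothetical example of how that looks in a prompt (the image link below is only a placeholder):
/imagine prompt: https://example.com/reference.png portrait of a woman in a red coat --iw 2 --v 6
The image URL at the start acts as the image prompt, and --iw controls how strongly it influences the result.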
SREF is the way!!! i love how it transfers the exact look
Isn't it so much fun?
I tried using the same image for both in the same prompt and I was very happy with the results. Bringing in random styles from the users feed is very funny anyway 😊
Yah that sounds fun!
Thank you, I was wondering about the difference also.
Hope I helped a little!
Thank you for the analogies and explanation
I'm happy I could help!
dude i literally had this question, like what is the point of prompting anymore with --sref? this is the answer. love the 'sandwich vs milkshake' analogy. thanks man
Hey I'm glad to hear that haha cheers pal!
@@FutureTechPilot so this is something I'm confused about - can you use sref and an image prompt at the same time? the announcement made it seem like no, but in my testing I think it makes a difference. would love to see a deep dive on using them both together, if indeed it makes a difference
Very good information! Simple and straightforward. What I'm trying to figure out, though, is how this style reference is different from creating a prefer option set, which can sort of do the same thing. Might be a rookie question, but...?
I'm not entirely sure I understand the question, so let me know if my answer isn't what you were thinking of -
but I would say that it's just another branch of consistency. You could use words to keep a style similar, you could remix the image to keep the style similar, you could image prompt for consistency, or you could include a style reference. They all result in different-looking generations, but they all accomplish the same idea.
Hope that makes sense
Excellent
Cheers!
nice:) I have to play more with the new features.
You'll never run out of things to do!
Super well explained and clear, THANK YOU SO MUCH!
Cheers!
love your videos, they help me a lot. Can someone help me understand why --sref does not work? I try everything and can't figure out what's wrong???
You should ask around in the prompt chat channel on Discord! Lots of helpful people there
Wow, thank you. Good examples :) :).
Cheers!
Hello, what is the absolute best image you've generated with MJ so far? Would be cool if you could show your top 3 all-time favorites (should be your original prompts/generations) in the next video, maybe at the end as an extra.
Cool idea! I feel like it's an impossible question to answer haha but I'll keep it in mind!
@@FutureTechPilot cool, thank you :) looking forward to it.
PBNJ FTW! Great way to describe it.
Cheers!
I can imagine how the --cref for character consistency would play out later in relation to the --sref workflow.
I'm hoping it works smoothly!
yes me too. can't wait for it @@FutureTechPilot
I tried this. It's very powerful
💯
Great video as always Nolan, thanks! Is the default style reference --sw 500?
The default --sw is 100!
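As a rough illustration (the link is a placeholder, not from the video), spelling the default out explicitly would look like:
/imagine prompt: a dragon perched on a cliff --sref https://example.com/style.png --sw 100 --v 6
Raising --sw pushes the result closer to the reference style; lowering it lets the text prompt dominate.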
you are the best
Thanks for the support!
Great. And what about using both sref and image prompt in the same generation? Is it even feasible?
😂 I didn't want to confuse people with that combination - because yes it's possible, and it's a little overwhelming to monitor all those possibilities. Let me know if you find any cool tricks!
Yes, we need this video as well!
Hi sir, and thank you for the wonderful information you always share.
I have a question:
how can we get a fully painted scene from our own rough sketch in Midjourney v6?
Would you please help me with that?
And thank you.
Unfortunately there is no real way to accomplish that right now, but we could definitely see that as a feature in the future
thank you sir @@FutureTechPilot
Please tell me how to use only the style of an image, or the colours of an image, but get exactly the prompt I put in. I used the sref but it gave me a result too close to the reference image without taking my prompt into account. Also, for example, I want to make wall art and I want to keep a specific painting style. But when I add the sref it doesn't give me new art, just the reference image's art with some additions. For instance, I want to create a collection of art that looks like it was painted by the same person with the same colour scheme 😊
I appreciate your channel.
And I appreciate you!
Can --sref be used with a specified picture or a portrait of people?
I'm not sure I understand your question
hi, I find your videos world-class, but now I have a problem with sref: I always, or very often, get this: "Could not fetch image. Received status code 403". What can I do?
Sounds frustrating. I'm really not sure of the solution. You should definitely ask around on one of the support channels on Discord! Sorry I'm not more help
we're so close! now we just need a --dref (depth ref) feature and we're golden for composition. well.... for at least the next 9 months anyway until I get greedy again. ;)
hahah yeah I wonder how long it will take for people to get bored of this
is sref in v6 the same as the style tuner in v5.2?
I'm not sure I understand the question, but I'd say it's similar, not the same - and you can't use --sref and style tuning together right now (style tuning is only available in 5.2 and sref is only in v 6)
I used an image with Asian buildings as my style, but it didn't add Asian buildings to the target prompt, only the style of the image. An image prompt, however, adds the Asian buildings.
That's a really really good example!
Do you offer paid support? I'm looking for help with a specific midjourney project, which is to produce images in a given style using a reference image.
I have a couple of hours each week dedicated to consulting - check out my calendar and see if my schedule works for you!
calendly.com/futuretechpilot/1-on-1-session - use that link if you'd like to pay with PayPal
calendly.com/futuretechpilot/1-on-1-stripe - use that link if you'd like to pay with Stripe
1:25
PRO TIP:
The reason is that you chose the wrong aspect ratio. You'd very likely have gotten the full red jacket and shoes in all images with a portrait-orientation image 👍
Yah very good point! That's probably the first thing to suggest when Midjourney isn't making what you want - change up the orientation of the generation!
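To illustrate the tip, a hypothetical version of that prompt with a portrait aspect ratio added:
/imagine prompt: man wearing a red jacket and gold shoes --ar 2:3 --v 6
A taller frame leaves room for the full figure, so the jacket and the shoes are more likely to both end up in view.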
You explained this well, but I'm surprised you didn't see what would happen if you used both a style reference AND an image prompt. I haven't tried it, maybe it won't let you
👀oh you can. But I'm saving that for another video
I will add to what you say: "with prompting the focus is on the subject, and with sref the subject becomes a victim of change"
Yah something like that!
Did you know you can do both? An image prompt and a style reference?
Yah haha so many rabbit holes to explore
@@FutureTechPilot it is when you start chucking --iw and --sw into the mix!
Could you vary your call to action a bit so it doesn’t sound so scripted? Thanks, I need your help with that. 😉
I don't use a script in my videos but sure I'll change things up ;)
👍
🤙
I still don't know exactly how to type out the prompt using --sref
this video might help you out - ruclips.net/video/u0hAkiUeohc/видео.html
I watched it now, but still… how do I write a style reference?
You can check out this video here - ruclips.net/video/u0hAkiUeohc/видео.html
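In case it helps anyone else stuck on the same thing, the general shape is: text prompt first, then the parameter with an image link after it (the URL below is only a placeholder):
/imagine prompt: portrait of an astronaut --sref https://example.com/style.png --v 6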
Now, how about consistent characters?
they're working on it!
Very good demonstration of these two features. But did you know that for Style Reference there are 4 versions you can use together as a parameter? This video explains the versions if you'd like to watch it => ruclips.net/video/stFGM7XdHqo/видео.htmlsi=dg9TU8HArv9a8atu
Yah that's a cool update!
👋
🤝
Style reference seems to just be IPAdapter
yah you recognize that ability?
@@FutureTechPilot yeah. It's a thing in Stable Diffusion where the AI tries to copy the "style" of the input image. It has different types of focus. Some IPAdapters focus on faces. Some on colors. Some on lines. Some on general image patterns (this is the most common one).
One thing to note: ultimately it works on a square 1:1 image. Whatever you give it, it's gonna crop it to 1:1, usually in the center. Midjourney MAY be doing some complex logic to find where the image focus is and then crop there. (In ComfyUI and Automatic1111 we do this manually)
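For anyone curious what that manual square crop looks like in practice, here is a minimal Python/Pillow sketch of a 1:1 center crop (file names are placeholders):

from PIL import Image

# open the reference image and keep the largest centered square from it
img = Image.open("reference.png")
w, h = img.size
side = min(w, h)
left = (w - side) // 2
top = (h - side) // 2
square = img.crop((left, top, left + side, top + side))
square.save("reference_square.png")

This is the kind of crop the comment above describes doing manually in ComfyUI or Automatic1111.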
lol, the guy doesn't understand it himself, so he's probably subconsciously trying to understand it by giving us many "explanations", hoping, as he said, that "one of them clicks", assuming we don't understand it either. The answer is in the name of both features and doesn't even require an explanation, at least not a bigger one than this: an image prompt recreates the image with additional content added by the prompt. A style reference only grabs the style of the image and doesn't care about the content, otherwise there would be a girl on the couch, but no, only the style of the image, the colors and the lights were taken. So sref is surely not a blend nor a smoothie (unless you believe the girl's body parts are hidden in the couch); it is rather taking a dish, throwing away its main element (in our image that's the content), and only taking the seasoning, topping, fresh herbs to put on top, any other decorations, etc.
Besides that, you can use both at the same time, even without text: if there is a photo of a girl as the prompt image and a dog in a watercolor style as the sref, you will get a girl in watercolor style, no dog. If you use the watercolor dog as the image prompt and add the photo of a girl as the sref, you'll get a photo of a dog, no girl, no watercolor anymore. So it's surely not a blend. A blend would be a humanoid female dog.
So you start with a mean-girl takedown, then body-slam yourself into a bowl of word salad. I think most people would prefer Nolan's well-thought-out approach and appreciate his generosity over your verbosity.
I had multiple people ask me about the difference. So why would I assume one explanation would make sense to all of them?
@@FutureTechPilot Kids these days. 😉
@@morpheus2573 I didn't create the video, so I can only use words. I know it's too much for you to follow and a visual representation seems easier to understand. The problem is, his explanation is wrong. I only used 2 sentences to explain it all; the rest was relating to what he said in the video. So my "verbosity" is only referring to the examples your "generous" author of the video gave, to point out the faultiness of his "well thought out" understanding of the sref image.
The simplest explanation is: the sref only takes the style from the image. Since you haven't understood what the style of an image is, I'll describe it to you: it grabs the overall composition, reads whether it's a photo, anime, animation, 3d render, oil paint, watercolor, etc., or a combination of these styles. It also reads the colors, the lights, occasionally some decorative elements. Most importantly, it skips the content of the image. It doesn't care if there is a woman or a dog in it, it completely skips it. The explanation this guy gave us in the video, that it is a blend of the sref image with a text prompt, is totally wrong. A blend wouldn't skip the content of the sref image. Read my response to the video author for more details.
@@FutureTechPilot The problem is not in the "many explanations" but that each of your explanations of sref is wrong. In the first example you show an image of a pattern. To explain how a STYLE reference image works using a pattern image, which is literally made of nothing but style and decorative elements (considered style by the AI), is completely illogical and simply an invalid example. You say "style is transferred through and the subject gets blended together" - this is the closest you got to understanding it, and only because the sref image is a pattern, but still not close enough. Because they are not even blended together: in the image prompt example of the man wearing a red jacket and gold shoes, the image and text prompt are blended together more, since the pink parts of the image become red, matching the jacket or trousers, and the yellow parts become gold, matching the shoes. The sref image + 'man wearing red jacket' only transfers the style to it, not blending them together, not taking anything else from it.
Second example: the dragon + sref of a lego image grabs only the colors, the overall composition and the size of the dragon (or rather the size of the scene). The size of the scene makes it a figurine; it doesn't make it a toy, and neither result is a toy figurine, just a figurine. "The blend comes together a little more" - yeah, when you had to prompt 'dragon made of lego' xD, lol. The blend comes only after putting it in the prompt; there is not the slightest piece of lego in it until you spell it out in the prompt. Only the style is transferred; there is zero blend before that.
3rd example: the couch is absolutely not a blend. There is no girl at all after using the sref image (unless you believe she's blended into the couch, which we can't see). The couch prompt only grabs the style, so there is no girl; there is strong sunlight grabbed, a strong contrast level grabbed, the shadows, product photography, unsplash - everything related to style besides the actual objects in the picture. That is, no girl, no coat, only the lubricity of the coat.
The last example clearly shows that an image prompt with a text prompt creates much more of a blend than sref with a text prompt. The ninja turtle and batman are literally blended together; the character consists of both - either it looks like a turtle but has a batman cape, or it looks more like batman with the mask, but you can clearly see the turtle's face behind it and a greenish body. It couldn't be more obvious. Image prompt + text prompt is literally like the text prompt that created that image + the text you added, which is way more of a blend between the two, just as normal text prompting is usually a blend of all the things you put in there. Sref, on the other hand, only grabs the style of the image, so there is absolutely zero batman in those results - only the style of the image, and the turtle literally takes batman's place, keeping only the style. Now that I've clearly described what sref is - a style reference that skips all the content - you'd better re-record the video with the proper explanation instead of misleading your viewers. "Sref doesn't care who was in the original image", "only black and white manga inspired visuals" - finally at the end you said something that makes sense. Unfortunately, all of the multiplication, blending and smoothie nonsense throughout the entire video is wrong.
Not a fan of SREF
You can't please everyone!
@@FutureTechPilot true 👍
Great explanation! Thanks!
Cheers!