This is awesome! Midjourney just keeps getting better and better. Thanks for taking the time to put this video together.
And thank you for taking the time to comment! Cheers :)
I want to say thanks for the info. Usually, 95% of the time when I see you have something new to show, I go straight in to see and experiment, and boy oh boy was I surprised at what /describe can do. Being a traditional artist (pencils, oils, acrylics - remember them? lol), I took some of my pictures and ran them through just to see the results... WOW! Totally amazing. Truly a game changer for Midjourney. Many thanks and keep up the good work!!!!
Yah?? that's a really cool experiment. Thanks for sharing it with me!
Thanks for sharing this! Love how you are "on top" of the latest features! This really helps!
Trying my best! Glad I could help
OMG I was unaware of this, thanks so much
Enjoy the endless rabbit holes haha
@@FutureTechPilot BRO IT IS SOOOO AWESOME
Very mixed feelings after using this for a while.
PROS:
- will improve diction (as you mention)
- will provide inspiration (as you mention)
CONS:
- 9 out of 10 times the regeneration falls pretty far from the tree. Hopefully, it'll get better if that is something one is looking for.
Thanks for the quick upload. 👍
I saw someone suggest using the original picture as an image prompt in front of the newly written prompts. I think it could help for sure ... but you're right, it's definitely not an exact science yet!
Where did you get that Gundam-looking photo from? That thing looks cool.
haha I'm so sorry, I never made a follow-up video. I used --niji 5 for that! Niji Journey is amazing
One way to use /describe to get results more similar to the input is to use the /describe'd image as an image prompt with the text prompt, with higher or lower --iw to taste.
My personal favorite use case is to find a few of the best or most promising sounding portions from each prompt in a /describe result and test with those. It's great for building a style/prompt library through finding things that produce results you like, and then refining combinations of those. It's also a neat way to take an image that has good style and try to reproduce it with a different style.
Notably, if the artist names are linked, they will take you to a Google search to discover more about the artist. If the text is not a link, it's a made up name that manipulates the system in similar ways. Midjourney has had a very good capacity to interpret names that aren't "real" for a long time, so it might be a byproduct of that, but I'm not entirely sure how or why it works.
Oh, and about the --ar; Midjourney outputs round to the nearest 32 pixel value, so 16:9 and 4:5 don't necessarily wind up with exactly that aspect ratio, which is why they parse back out slightly differently when processed through /describe.
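If it helps, here's a rough sketch of the kind of rounding I mean - the ~1 megapixel target and the snap-each-side-to-32 rule below are my own assumptions about how it might work, not Midjourney's actual internals:

# Hypothetical illustration of snapping an aspect ratio to 32-pixel sides.
# The 1024x1024 pixel budget is an assumption made for the sake of the example.
def snap_to_32(ar_w, ar_h, target_pixels=1024 * 1024):
    scale = (target_pixels / (ar_w * ar_h)) ** 0.5  # uniform scale to hit the pixel budget
    w = round(ar_w * scale / 32) * 32               # snap width to a multiple of 32
    h = round(ar_h * scale / 32) * 32               # snap height to a multiple of 32
    return w, h, round(w / h, 3)

print(snap_to_32(16, 9))  # (1376, 768, 1.792) - not quite 16:9, which is ~1.778
print(snap_to_32(4, 5))   # (928, 1152, 0.806) - not quite 4:5, which is 0.8

So when /describe reads the finished image back, it reports the snapped ratio rather than the exact one you asked for.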
Woah ... thanks for your insight!
Midjourney understanding 'fake' names makes me laugh lol
And thanks for pointing out how you can take the best parts of each prompt! I'll have to make a follow up video on some of these tricks
haha the more you know! Thanks :)
THIS IS MASSIVE!! And just in time here, brother. I haven't been able to create the ideal image of a view from under the ocean, looking directly up at the surface with a woman swimming, etc. I need something similar to a homepage header image a client wants, like one on their competitor's site. I'll jump on this new technique using that reference image!
Ahhhhgh man, it didn't come anywhere near what my reference images were. Maybe my settings are different than yours, bro.
haha prompting is hard, eh ... I've heard some suggestions though:
- try image prompting your reference picture together with the prompts from /describe (example at the end of this comment)
- try combining the best words from each of the 4 prompts (the words that sound helpful to you)
- try the prompts in version 4
- try adding your own words to the prompts from /describe
Hope that gets you a little closer!
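For that first suggestion, the prompt would look something like this (a made-up example - swap in a link to your own reference image and one of your /describe prompts, and nudge --iw up or down depending on how closely you want to follow the reference):

/imagine prompt: https://example.com/your-reference.jpg a view from under the ocean looking up at the surface, a woman swimming above, light rays breaking through the water --iw 1.5 --ar 16:9 --v 5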
Awesome man! I love your enthusiasm
And I appreciate your comment buddy!
Amazing feature - I used some of my previously generated favourites and came up with some amazing stuff!
Siiiiick, so many more possibilities!
This new feature is awesome!! Thanks for sharing!
And thanks for the comment!
This is huge. I'm curious whether artists will want to use this to be discovered and get known, or whether they'll want their art style pulled out to preserve their originality.
That's a good question! What would you do?
"no.2 looks like voldemort...I hope I'm not going down that path" 😂
haha keep your fingers crossed for me
Holy cow! what a great surprise!
Yah updates are the best days haha
I'm trying to figure out why they are suggesting some really crazy aspect ratios
I'll copy a comment I got from someone "Midjourney outputs round to the nearest 32 pixel value, so 16:9 and 4:5 don't necessarily wind up with exactly that aspect ratio, which is why they parse back out slightly differently when processed through /describe."
These MJ prompts show us how to talk to the machine - it's writing prompts in its own language. That's a powerful feature for us.
That's exactly what has interested me the most! Finally being able to see what Midjourney is 'thinking' is fascinating
There is a different free service that has been doing image descriptions for a while now. Can't remember the name off the top of my head - I just use it from a link - but I use it frequently.
Yeah for sure! I think it's great that Midjourney added it to their toolset
Amazing! This changes everything.
It'll be super helpful for lots of people!
I wonder if it would be possible to assign a name to the original image to keep the girl's facial features consistent.
Buddy, you're preaching to the choir haha. That's a really cool idea, and I'm pretty sure MJ is working on something like that. I mean, I know they're exploring consistent characters, so maybe allowing us to identify characters we want more of will be part of their plans.
This is what I've been waiting for
Right? It's such a nice feature to have
So one of the 4 answers it gave for an image I had already made on Midjourney included 'body-painting', although I had never used that term.
I decided to copy it and run it, to see if what it made resembled mine.
Nope - I got a warning that my account could be banned because it used the above term? What the hell........
haha I heard about that happening and yeah, the banned word list is a funny one, I hope they come out with their solution to it soon.
I think it's like this: Midjourney is obviously capable of understanding and creating all kinds of crazy words, but the results are less than ideal from a community-building perspective. So the dev team needs to limit what people have access to, because Midjourney can create some disturbing images if left unsupervised.
^ that's not an opinion, just trying to explain what I understand to be the situation.
I'm seeing more Midjourney images being rendered in my suggested '5' denomination, such as in your video here. I wonder if it's sheer coincidence, or if some people have seen my suggestion for it in the MJ server or in your video from a few weeks back. 🤔
What do you mean? Aspect ratio?
@Future Tech Pilot
Sorry for not being more clear.
A '5' denomination in aspect ratio means any aspect ratio with a 5 in it, such as 9:5 (typically for landscapes), 5:5 (a square image, obviously), or 5:7 (good for portraits)...
...or, now in v5, any number within reason coupled with a 5 on the other side of the ":".
@@LouisGedo haha I never knew '5' was popular! I got into 4:5 a couple of months ago because it's the default in Niji-Journey and I like the look a lot!
@@FutureTechPilot
Oh? I didn't know that, because I've not used Niji - that's a style I have no interest in. But neat to know anyway!
Hey, at the 8:10 mark of your video - can I get the prompt for the upper-left image? I would love to see what I make from it.
This one? ---- 'a ninja character with a jacket, in the style of dark yellow and crimson, urban dreamscapes, all-over composition, vivid contrast, light yellow and light silver, punk-inspired art, surrealist manga, bold color contrast --ar 91:51 --v 5'
@@FutureTechPilot yes thank you
This is definitely going to be helpful in understanding the language of AI and prompts
Super super helpful!
now I can copy PromptBase images without buying their prompts! :DDDDD
😂 I don't recommend stealing but I guess you can do that now
Midjourney describe is hilarious! 😂 Nonetheless, this is a nice new feature. Looking forward to the future improved version of it!
Yeah it's a little funny haha great tool to have though
This is basically just MJ's own version of a CLIP interrogator, which has been around for a while. What is especially interesting here is that both systems are managed by the same company and (hopefully) will be tweaked for the same model, so they won't have the issue other open-source CLIP interrogators run into.
There have been some pretty funny descriptions in the #describe-show channel; the interrogator apparently likes describing things as [random word]core and -punk.
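For anyone curious what the open-source equivalent looks like, here's a minimal sketch using the pharmapsychotic clip-interrogator package (the model name and exact API are from memory, so double-check the project's docs):

# Minimal CLIP interrogator sketch - produces a /describe-style text prompt from an image.
# Assumes `pip install clip-interrogator` and a local file called my_image.png.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("my_image.png").convert("RGB")
print(ci.interrogate(image))  # prints the generated prompt text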
/describe is really interesting for reverse engineering a great picture and learning to create our own great pics
@@BenPanna Yeah, that's what I hope comes out of this. The open-source CLIP interrogators don't really allow for this, but I'd love to see MJ tackle this issue. Right now it only really works well for simple images and struggles once things become more abstract or complex, but if they manage to nail it, this could succeed where other CLIP interrogators fail.
@@gkjzhgffjh keep up the good work 🤩😀😀😀
Yeah haha it's a solid feature to have built into Midjourney. A new user would surely find it useful
Can you use the seed prompt to create new images with an image you have already produced? Or only if you use "seed" to create the original image?
I don't think I understand your question. Can you explain it a little more please.
Thanks for your reply, @@FutureTechPilot. I mean, say I already generated an image on MJ without using a "seed" in the prompt. If I want an image that is very similar, can I use the original image and the "seed" prompt in a new /imagine? Or does the "seed" have to be included in the original image to use it for another image? Does that make sense? lol
@@lapostajakarta No, it doesn't need to be included in the original image, but you need to make sure you're using the right seed from the first image ... and if you do that without changing the prompt, you'll get identical images. However, if you change the prompt even a little, you might get different pictures!
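For example (a made-up prompt and seed number, just to show the shape of it - I believe you can grab the real seed of your first image by reacting to it with the ✉️ envelope emoji in Discord):

/imagine prompt: a lighthouse on a cliff at sunset, oil painting --seed 1234 --v 5
/imagine prompt: a lighthouse on a cliff at sunrise, oil painting --seed 1234 --v 5

Same seed + same prompt reproduces the image; same seed + a tweaked prompt might give you something related, or something noticeably different.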
I can't wait for the AI to be able to see what it's doing - then it will be able to make a consistent character, a world, you name it. Once GPT-4's image-seeing update arrives, it's over.
I'm patiently waiting for that day to come
Now Midjourney has basically unlocked two-way communication with us - it can describe pictures, which means reverse engineering an amazing picture is possible. It helps me a lot 🥰🥰
Yeah it makes it feel like a collaborative process!
I am learning SO MANY new words.
GORPCORE?!?!
Desertpunk Unicorncore painter with cowboy themes
Genderless Heistcore Clownpunk?!?!!?!?!?
These are amazing.
hahaha and the best part is, we now know those words actually mean something to Midjourney
Worked well for me early this morning...
Nice! New things tend to break at first haha but they'll figure it out
That's awesome, but now we have even more pictures :P
LOL don't remind me
Coool. It'll def help with my prompts.
Yeah at the very least!
WOWOWOW!!! TYTYTY!!!!
Cheers!
How long until Midjourney doesn't need any human input at all LOL
LOL I thought about that too ... people will just let the a.i 'think' for them
I don't think expecting a prompt that will reproduce the image you fed it is realistic...and unnecessary. If you want the same image just copy and paste ;) Or do an image prompt. What I've found /describe very good at is giving me an image in a similar style, and, yes, give me words and phrases to use that I wouldn't have ever thought of myself. Really enjoying it so far.
Yeah it's a great new feature!
combine the prompt with the original image
That's a really good idea and I should have thought of that yesterday haha I'll write that down now
You look so much more friendly, kind and happy than the generated guys.
hahah a.i hasn't quite cracked that code yet
Hey, try rendering these /describe prompts in V4, man.
dang, that's a good idea!! haha I wish I thought of that yesterday. Thanks for the tip
I like it. But I can see a use for taking MJ's prompt and running it through ChatGPT to maybe clean up the prompt... fine-tune it. You do realize none of this existed for us a year ago. ChatGPT, Stable Diffusion, DALL-E and MJ would all have been considered magic back then.
Your second statement here is incorrect - while not as popular, and still in their infancy, AI image generation and language models have been around for a good few years by this point. Sure, nobody is going to point at a DeepDream image generator and say it's anywhere close to what we have now, but it was a step towards it. DALL-E was released in 2021, etc.
It's been a few years since we had a Dota-playing AI as well, if I remember correctly. It's just gained a lot of notoriety now that it's moving into the consumer market.
@@gkjzhgffjh 99.9% of us would have looked at MJ or ChatGPT as magic or fake a year ago. True, the models have been in the works for years. A year ago, Google could look at a photo of a dog and recognize a dog, or maybe the breed. My point is that the tools we have right now are as magical as if you had shown an iPhone to someone in the 1970s.
From this year on, our world of work will change forever
I think about that all the time! haha less than 12 months ago, my life was extremely different. The future is wild
Did you catch it? Some of those artists are made up
Yeah haha Midjourney can be kinda hilarious
👋
🤝
This could have been a 1 min video
LOL thanks for the comment. Hope you have a good day
That's super... but it can't make a Twi'lek! Goddammit :D
haha it can't ... yet
I don't need /describe in MJ because I know how to make very good prompts and styles.
Please share some tips or tricks if you have any!
If the "/imagine" command is fine, the "/describe" command is poor, random and wrong.
With the Standard plan for having just tested a dozen images, it's 30 dollars excluding tax, without refund (American business). Total disappointment.
Using the image as an image prompt along with the new prompts helps a lot
@@FutureTechPilot Oh I see, I will test it! Thanx
@@citoyendumonde9083 ruclips.net/video/PAK873909S8/видео.html I show off image prompting in that video
@@FutureTechPilot Tested, but when your goal is just to get an image description, the image prompt is useless if you can't combine it with other prompts you don't have.
@@citoyendumonde9083 lol but it's not useless if you include the reference image in the prompt
Bruh... I am a pro
💯 the pro-est
@@FutureTechPilot it started to work as soon as you responded 🤣🙏🏿
@@kongsied4279 haha sick
I would honestly say it's the most boring feature release so far 😂 surprised you're calling it "insane" that an AI that turns text to pictures can turn pictures to text.
hahah my enthusiasm gets the better of me sometimes ... I find all of this fascinating but I can see how it may have been a lackluster announcement to some. Thanks for the comment though!