OMG! The work and time you put into these is phenomenal! Toward the end, I had never even known the proper names for some of those symbols - and I'm 64 🙂 I'll have to look up how to make some of them. Man, you just blow me away with these!
I had a hunch, so I Googled "celestial jungle" and discovered an explanation for the type of art it gave you. The "celestial jungle" search comes back as a song title with art attached to it, it refers to a set of horoscope cards (thus the continuous addition of text), and it's the name of a 1000-piece puzzle, which explains the card/game-box-style borders. I am convinced this is what gave MJ the idea that these may have been what you were looking for in that particular grouping of words. Thanks for your thorough and excellent video. I certainly gained a lot of insight from it.
Nice research
I value your time and am grateful for your videos, thank you so much. They have helped me so much with Midjourney, and I do my own experiments as well. So what you do HELPS us tremendously; I can say it 100% for myself.
Happy to help!
The 7:33 monochromatic, that top right one is identical to the very famous National Geographic photo
i noticed this as well
Loved seeing all of these, thanks for sharing!
You have the mind of an engineer! This was a great breakdown
You seriously took one for the team here, Thaeyne. Thank you.
I wish you would have done Greater than and less than, those seem like symbols that should matter
This was a great video, and what a lot of work you put into it! I started mixing more than one punctuation mark -- they were interesting too, and I was able to get better images.
I’ve only used stable diffusion and DALL-E, and from this Midjourney looks like so much work.
3rd video and I'm hooked! Love the format of your videos. Thank you
It's definitely the case (from experience) that words earlier in the prompt have higher weightings.
I know this video is about Midjourney, thank you for that. I would also like to share something about Genmo, a text-to-picture and text-to-video tool.
Genmo: Sure! Here's what each symbol does:
- !: Adds a high contrast effect to the image
- #: Adds a sepia tone to the image
- $: Adds a grayscale effect to the image
- %: Adds a pixelated effect to the image
- ?: Adds a blur effect to the image
- (: Adds a fisheye effect to the image
- ): Adds a bulge effect to the image
- ,: Adds a vignette effect to the image
These symbols can be added to the end of a caption to apply the effect to the generated image. For example, "a beautiful sunset over the ocean!" would generate an image with a high contrast effect.
Thank you for these
Very interesting. Thank you
At 7:18 the results from "monochromatic::portrait" include an image that looks strikingly similar to a very famous photo from National Geographic called Afghan Girl by Steve McCurry, but in black and white. Yikes!
I guess that is one downside of improving the photographic style of Midjourney's algorithms. Since Midjourney is able to create the likenesses of famous people in v5, the Afghan Girl must be considered yet another famous person. And because there's basically just one photo of that person that is so famous (and the photo is probably synonymous with a very good portrait), Midjourney is simply able to re-generate a likeness that basically looks like the photo. It's unfortunate.
Spotted that as well.
Yes. It raises tons of questions about the validity of images created by AI. There will be a lot of legal issues in the time to come. The next Napster issue?
Maybe that happens more often than we will ever know, because few photos are as recognizable as that one 😮
@@anhaidelirio Yeah, it could be. It does use known pictures for reference, and as far as 'Arab'-type women's pictures go, it is one of the most reproduced.
I have seen one generated picture that looked 90% like someone I know, which was odd :)
6:59 The hooded girl is the famous green-eyed girl from the National Geographic magazine cover, June 1985.
Very interesting 😍, thanks for sharing
Thanks for doing this, even your AI narrator sounded weary when you mentioned not wanting to do it again :) I'm fairly new to MJ and this has helped understand which, if any, separators make a difference. I think most of them just add a bit of randomness into the melting pot, but as you found a few do seem to make a distinct difference. Will have some fun experimenting :)
Thank you for such a wonderful study!
@Thaeyne - --ar 16:9 is not a parameter we "prefer", but rather designates an industry standard for the [A]spect [R]atio of the image.
Width / height of our monitors, photographs, and projections are all created with this ratio. --ar 9:9 would generate a square image.
Vertical images get the smaller number first to denote the width: --ar 9:16.
Whereas for your videos on the internet, --ar 16:9 fits most displays, i.e. 1920 x 1080, or any image size where width divided by height equals roughly 1.7778,
which is the result of 1920 / 1080, the ratio of width to height.
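The aspect-ratio arithmetic can be sanity-checked with a quick Python sketch (the `aspect` helper is just for illustration, not anything Midjourney exposes):

```python
# Quick sanity check of the aspect-ratio arithmetic.
# The aspect() helper is purely illustrative.
def aspect(width: int, height: int) -> float:
    """Return the decimal aspect ratio (width divided by height)."""
    return width / height

print(round(aspect(16, 9), 4))       # 1.7778 -- same ratio as 1920 x 1080
print(round(aspect(1920, 1080), 4))  # 1.7778
print(round(aspect(9, 16), 4))       # 0.5625 -- a vertical/portrait frame
print(aspect(9, 9))                  # 1.0 -- a square image
```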
The "prefer" part in this case is me personally preferring one ratio to another out of the unlimited possibilities available. And the context in this case was actually me using the --ar 9:16 aspect ratio for the side-by-side comparisons.
This is a great video…. The @ symbol looks like it helps with location…. in celestial@jungle there is an image of what looks like a Jungle Cafe. I'm going to have to test something like Girl@Beach and see what comes up.
Recently I removed an extra comma from among the handful of negative prompts I had entered. This drastically altered the output. Perhaps it was pushing an embedding outside the 75-token range, and just that one erased comma raised its weight somewhat. The infinite infinities we have but a space away; this is why I spend 96% of my ML time 'testing' outputs.
7:20 The monochrome portrait here looks like Steve McCurry's portrait of a Pashtun child, Sharbat Gula.
I get similar portraits randomly even with male subjects. It could be because it is the single most recognized and maybe most used image or portrait (other than logos etc.), so their catalogs must have tons of references to this photo, I assume.
On a second note, would you mind sharing your speech AI tool? It's pretty good.
It's Murf, there should be a link in the description too.
Thanks!
Very interesting work.
I will say adding words like 'mockup' is better for UI design or device designs, not necessarily drawings. Likewise, saying 'what' to refer to an object doesn't quite work for a computer to understand. It doesn't know what a 'what' is, since it can be anything, and these algorithms study millions of images of different things to reference when they recreate one for a prompt, so using 'what' doesn't quite work for that reason. Similar for the word 'look': unless you add a descriptor before it, e.g. 'Wes Anderson look', 'cinematic look', or 'watercolor look', it won't give you the results you desire. Other than those things, these are really great tips! Appreciate your own findings and experience.
I've generated over 20k images at this point since being approved to use Midjourney when it was still an invite-only beta, and this is just what I've learned over time. If you have any background in coding or basic HTML/CSS, AI like this is a little easier to use, because you understand that a computer doesn't know things the way we do, or conceptualize. Being able to describe what you want to a machine takes a bit of practice.
It's cool you do experiments like this, and thanks for sharing! A lot of what you are seeing here is what Midjourney calls "noise", combined with the always-variable nature of the output. Even if you set a seed, you won't get exactly the same result every time you run it, which negates any ability to actually see whether tests like these produce meaningful results. The AI only recognizes spaces, ::, and -- for parameters. The rest does have some small effect, but not a specific one. It's "noise", which means you shouldn't try to use it for specific results. Commas, brackets, etc. don't actually matter other than to help you group your own thoughts and therefore probably write better prompts. Otherwise they don't make much of a difference. You get far more noticeable results from rearranging the words, since Midjourney weights words at the front more heavily than those near the back of the prompt. Also, some words carry such a heavy weight that they overpower other words no matter where you put them. Add something like "art nouveau" to almost anything and it dominates most of the output. Whatever the algorithm was trained on is what you get, including some weights and bias, and that's why you can't generate things it has never experienced before. Think of AI/ML like a DJ for music... they create some amazing new songs, but never create the raw building blocks they use, which all come from musicians and other technology.
awesome work🤩
I have a feeling those punctuation affect how light appears in the picture, like shape of light, direction of the light
Like the comparisons. You had some errors in the prompt text: Psytrance was showing for monochromatic at 11:53, and you didn't have the word "no" before mockup. If you did that in Midjourney, it's probably the reason you got a board game mockup.
Thanks for noticing. I guess sometimes I get so tired that some things start to slip while editing; I didn't notice that myself.
There are infinite possibilities for how to prompt and also how to negatively prompt. I've had the best results with --no text, mockup. Everything after the --no is considered one part of the prompt, so as a positive prompt it would be /imagine prompt: text, mockup. The double --no goes more into multiprompt territory. I think there is also some kind of theory going around about the style of multiple negative prompts; it tends to give really weird stuff sometimes.
So to let you know, I tried these:
1. /imagine prompt: Celestial:Jungle --no text, mockup --seed 777 --ar 9:16 --v 5
2. /imagine prompt: Celestial:Jungle --no text --no mockup --seed 777 --ar 9:16 --v 5
3. /imagine prompt: Celestial:Jungle --no mockup --seed 777 --ar 9:16 --v 5
4. /imagine prompt: Celestial:Jungle --no text --seed 777 --ar 9:16 --v 5
1. Was basically similar to what I prompted for the video. I think the model has changed a bit, because instead of 3 board games I now get only 1 board game, and I did check what I prompted.
2. This gives a really strange look for the images, and it does have text on 3 of the images and some borders on the one that does not, but no board games.
3. Seems to do some kind of card layouts and posters.
4. All 4 images are board games.
9:40 the face in the lower right hand of the center section looks to be unusually large in proportion to the rest of the body... at least in my opinion
You may want to double-check your text labels before posting in the future; some of the photos had errors / weren't changed.
I thought I did, but I guess my concentration was not totally there, because I was a bit sick when I posted that video. I will triple check in the future.
+ seems to make my portraits a bit blockier in colour. Almost 'painted'
The girl with the colon separator in monochromatic is like the girl on the National Geographic iconic cover
You used the wrong label at 3:32
Why are you using the "mockup" prompt? What effect does it have? 😊
It is to make images a little bit nicer. It should reduce occurrences of people holding things or pictures in picture frames and stuff like that.
@@thaeyne That was a prompt to get rid of the props people are holding? 😃 Thanks for the good knowledge.
--no title, text, word, letter, symbol, watermark, autograph, signature, mockup
This helps keep words off of the images, as well attempts to prevent it from turning it into a package or product.
How does it know which words after the --no to use or not use if commas make no difference?
@@EricFranz I just use them as separators for my own grouping. It treats everything after --no as negative prompts.
bright and colorful magical scene --ar 16:9 --no blue, red, orange
This will produce an image without any blue, red, or orange in it.
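To illustrate the comma-as-separator idea, here is a tiny Python sketch that splits a --no list into individual negative tokens. This is only a guess at the behavior, not Midjourney's actual parser:

```python
# Illustrative sketch: split everything after --no into separate
# negative-prompt tokens, treating commas purely as separators.
# This is a guess at the behavior, not Midjourney's real parser.
def negative_tokens(prompt: str) -> list[str]:
    _, _, tail = prompt.partition("--no ")
    # Stop at the next parameter flag, if any (e.g. --ar, --seed).
    tail = tail.split("--")[0]
    return [t.strip() for t in tail.replace(",", " ").split() if t]

print(negative_tokens(
    "bright and colorful magical scene --ar 16:9 --no blue, red, orange"
))
# ['blue', 'red', 'orange']
```

Under this reading, "blue, red, orange" and "blue red orange" come out identical, which matches the observation that the commas are only there for human grouping.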
Love your vids ❤ what do you use for voice?
Thank you :) I use Murf for voiceover.
Have you looked at or considered exploring other AI image generators that work in basically the same way as Midjourney, such as BlueWillow or Leonardo AI? I personally use DeepAI, DALL-E 2, and NightCafe AI the most, or rather have for the past few months... I do like the results for the most part. My experimentation tends to rely on a few keyword prompts at the beginning of each image, and the random seed generation produces some interesting results, especially since I typically do a marathon run of one hundred or more images per session...
There is so much to discover, test, and experiment with in just Midjourney. I have consciously dedicated myself to Midjourney for now. I'm not saying it will never happen, but it's unlikely to happen in the near future.
@@thaeyne No worries... since Midjourney is constantly being upgraded and improved, I can understand the logic in that course of action. I do like the AI engines I mess about with and enjoy seeing the results of the same prompts being used on each one.
For me it is rather addictive to find one prompt that works really well, generate over one hundred images based on it, and compare them to each other, or see how they change from the first to the last.
Truly appreciate the content of your channel and the detail and research you put into each video.
I always get --seed: invalid int value: '###,' no matter what number I use. Do you have it set to V4 in /settings? I don't think the seed is working.
It sounds like you may have some extra characters there that Midjourney doesn't recognize as numbers. I got the same just now when I tried: "/imagine prompt: something --seed r34". So you should look at what's in your seed number. If you have any other characters besides numbers, you will get the error for it not passing the "check if this integer is actually an integer" check.
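For illustration, here is roughly what that kind of integer check looks like in Python. The `parse_seed` function and the exact error text are my own sketch, not Midjourney's actual code:

```python
# Mimic the kind of "is this really an integer?" check that produces
# a "--seed: invalid int value" style error. Illustrative sketch only,
# not Midjourney's real implementation.
def parse_seed(raw: str) -> int:
    try:
        return int(raw)  # "777" parses fine; "r34" or "###," does not
    except ValueError:
        raise ValueError(f"--seed: invalid int value: '{raw}'")

print(parse_seed("777"))  # 777
# parse_seed("r34") or parse_seed("###,") would raise the error above
```

So a stray letter, comma, or symbol anywhere in the seed value is enough to trip the check, even if the digits themselves are valid.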
@@thaeyne It was --seed 566, then 245, even 777 gave me the error. It's working OK today, so it must have been an MJ glitch maybe.
I'm starting to question my sanity too.. 🤣
Cool
Am I the only one who didn't see any difference with all the symbols, or do I need glasses? 😮
The images do look quite similar, but there really are differences with each symbol. They're small, sometimes not so small, but they are all different.
That's fantastic! That's awesome! But what does --seed 777 mean? Is the number 777 necessary?
It's just a random number. Midjourney usually assigns a random number to each image generation. If I set it to a fixed number myself, then I can better compare which changes are actually caused by the prompt text. The 777 is not necessary; only if you want to compare the differences between images.
Thanks a lot for the science...
I'd love to watch this but the AI voice is just annoying and painful to listen to. I can't do it for 26 minutes. Please, be a human.
I am sorry that you feel this way. I have a video on my channel where I used my own voice, but I got overwhelming feedback about it to go back to the AI voice. Not all of us are blessed with a natural voice that is better than the AI voice.
Just useless information...
I'm sorry you feel that way.