Midjourney v5 - Style Prompt Tips and Reference Tricks

  • Published: 20 Mar 2023
  • Today we're taking a look at some Midjourney Style Prompt tricks and tips, namely prompts like "Style By" and how to utilize them with Image References.
    We're going to take a look at Midjourney v5's new prompting style, and why I don't think it's quite up to speed yet. But I'll provide you with an alternate prompt formula that I think you'll find quite useful.
    Additionally, we're going to take a look at how you can use Midjourney and Leonardo.AI together to maximize control over your images in Midjourney!
    Lastly, as this video shows, please don't hesitate to leave a comment! It might just turn into a whole video!!
    Follow Me on Twitter: / theomediaai
    AFFILIATE LINKS:
    Camera: amzn.to/3yXMDY2
    Microphone: amzn.to/3K1jIZm
    Audio Interface: amzn.to/3lDX9kf
    Coffee: amzn.to/3JZuBeq
    Cinematic Prompting in Midjourney: • How to Create Cinemati...
    Leonardo Magic Prompt: • Leonardo.AI - Prompts ...
    Rounding things out, I did use some background music in this video. It's actually music I composed, and if you'd like to hear more, you can check it out at the links below. If you ever want to use a track for your own stuff, just shoot me a message and you can have it for free.
    SPOTIFY:
    open.spotify.com/artist/5Q9MN...
    APPLE MUSIC:
    / citizentim
    -------------------------------------------------
    Thanks for watching Theoretically Media! I cover a wide range of topics here in the Creative AI space: technology, tutorials, and reviews! Please enjoy your time here, and subscribe!
    Your comments mean a LOT to me, and I read and try to respond to every one of them, so please do drop any thoughts, suggestions, questions, or topic requests!
    -------------------------------------------------
    ► Business Contact: theoreticallymedia@gmail.com
  • Science

Comments • 116

  • @chrisanderson7820
    @chrisanderson7820 1 year ago +14

    I noticed a HUGE difference when I put the scene/set/action at the beginning, right after the style. A lot of my pictures were just characters standing in various poses, but when you put "sword fight" or "climbing mountain" in front of the rest of the description (i.e., the character, etc.), you get much more control.
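    A hedged illustration of that ordering (this prompt is invented for the example, not taken from the video):
    /imagine prompt: sword fight, two armored knights clashing in a rainy courtyard, dramatic lighting, comic book style --ar 16:9
    versus burying the action at the end:
    /imagine prompt: comic book style, two armored knights in a rainy courtyard, dramatic lighting, sword fight --ar 16:9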

  • @cyaNshibuya3112
    @cyaNshibuya3112 1 year ago +1

    Thank you so much! All your streams have really helped me with my cinematic prompts for Midjourney, the AI art, and my love of films. Thank you!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      That's awesome to hear! And between AI Art and films, I think we're best friends!

  • @Justin_Hikes
    @Justin_Hikes 1 year ago +6

    I really enjoyed your working through this example and telling us your thought process throughout. I learned a few things and you earned a new subscriber!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Thanks so much! Glad to have you here! I’ve got some interesting stuff coming up next week!

  • @BitBoy_Gaming
    @BitBoy_Gaming 1 year ago +1

    Thanks! Clarified some questions I had. Look forward to more videos!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Thanks so much for watching! And super happy it was helpful!

  • @howevisual7099
    @howevisual7099 1 year ago +4

    Great video! I'm just getting started, so this is super helpful. As a video pro myself, I'd suggest having a graphic on-screen of the prompt you used when you show off each result, since the goal is nailing down the right syntax for prompting, and what you're actually typing in is key for those of us trying to learn. I can see a bit of it for some of your results, but not the whole thing, and YouTube's UI gets in the way when I hit pause, so don't put it too low on screen if you do go this route in the future. It'll definitely help us see the differences between iterations of images and what's changed. Looking forward to more!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      excellent to hear! I'll say, that was an older video. A number of other people suggested that, and I've started doing it on newer videos. I even started putting out PDF handouts with the prompts in them! Please browse around!! As a fellow video guy, you might dig this one: ruclips.net/video/PlZkWcnnRt4/видео.html on Gen-2! It's SUPER crazy!

  • @ATRcams
    @ATRcams 1 year ago +2

    Lots of inspiration, one more sub! Thanks for sharing the knowledge!

  • @dougjano
    @dougjano 1 year ago +1

    Amazing! I'm at this exact moment trying some prompts, and this video just came along to help me out :)

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      So happy to hear! If you have any questions in the future, please don’t hesitate to reach out! Really happy to hear this met you at just the right moment!

  • @nahiddotai
    @nahiddotai 1 year ago +7

    It's true Midjourney doesn't always provide the best results when it hasn't been trained on that, as you correctly pointed out. Sometimes though, it is a game of trial and error and reorganising the prompt to be more specific in the beginning. I really enjoyed your combining Leonardo and Midjourney - I'm a big fan of both, and they both have their strengths! Would love to see how they both evolve. Thanks for your work!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +2

      Thank you for the comment! And yup, I'm a really big fan of using any/every tool we have access to! Bouncing back and forth between Leo and Midjourney can be really, really powerful. And although I was fairly critical of Leo early on (I have a Leo vs Midjourney video on the channel), it has come a LONG way since...
      ...I should probably do a round 2 at some point!

    • @Utoko
      @Utoko 1 year ago +1

      Reorganizing the prompt is a good point. I am not using Midjourney much, but for Leonardo I do it a lot. Especially in long prompts, it seems to give the first few elements a higher weight, so just switching something simple makes a huge difference. For example, with "full body" at the back of the prompt it created 2/8 pictures with a full-body view; I moved it to the front and got 8/8 full-body views with no other changes.
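      To sketch what that reordering looks like (an illustrative prompt, not taken from the comment):
      full body, a young woman in a red coat walking through a snowy forest, soft morning light --ar 2:3
      rather than:
      a young woman in a red coat walking through a snowy forest, soft morning light, full body --ar 2:3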

  • @smaller_cathedrals
    @smaller_cathedrals 1 year ago +1

    I've followed the whole AI art scene from a distance for quite some time and finally decided to become active myself.
    That meant watching a shit ton of videos and guides to get an initial bearing.
    Some of them were good, some bad, some downright terrible.
    And then there are a few select channels that I'm constantly coming back to. I hope you see where I'm going with this.
    Your videos always help me immensely, give me new ideas and overall are a great source for inspiration.
    Add to that your style of presentation and you're easily one of the best content creators on the subject out there.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      ahhhh, you're making me blush!
      I think your comment might have finally kicked me over to doing a "here's a starter pack" overview video. I've seen videos with beginner tutorials on various platforms, but I haven't seen one that gives an overview of the landscape. Like, if you were just starting, you wouldn't know the difference between Stable Diffusion and Midjourney.
      ....well, and I think a lot of those "beginner" videos are made by people that quickly blow past the "beginner" part...haha

    • @smaller_cathedrals
      @smaller_cathedrals 1 year ago +1

      @@TheoreticallyMedia You're very welcome. I cover this technique in my Patreon-exclusive video "5 YouTube prompts that help you make content creators blush."

  • @john2510
    @john2510 2 months ago

    A quick hack for turning a chest up shot into full body: After you upres the image, use the zoom out tool to get the full subject. You can then crop/zoom to get the composition you want. You may lose some resolution, but you get the shot.

  • @doublestarships646
    @doublestarships646 1 year ago +1

    Can't wait to see how this will all turn out. This is all so fascinating, especially if you need reference art or even covers for books.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      It’s going to be a wild next few years for imagery for sure. I’m a little behind, but I still need to post some stuff on the whole text to video side. It’s pretty early in, but it won’t be long before we’re looking at moving images as good as Midjourney.

    • @doublestarships646
      @doublestarships646 1 year ago +1

      @@TheoreticallyMedia That's incredible. It cuts out so much BS for anyone that truly wants to make an indie project. I hope people become more accepting of it and that we still learn to give actual artists jobs.

  • @umr460
    @umr460 1 year ago +1

    Thanks, your guide to making a prompt helped. Changing the artist did make a big difference.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Fantastic! Glad to hear! I actually really enjoy trying out new artists on the same prompt. The results can vary quite a bit and are often quite inspiring!

  • @nrgao
    @nrgao 1 year ago +2

    BTW, this and the cinematic video were both awesome!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Thank you!! I've got two pretty cool MJ videos this week, let me know your thoughts when they drop!

    • @nrgao
      @nrgao 1 year ago

      @@TheoreticallyMedia I'll be waiting with notifications on lol

  • @J4ME5_
    @J4ME5_ 1 year ago +1

    I only use Midjourney for painterly styles, especially for video game content. I need as many videos as possible on this subject. Thank you so much for this!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Glad to hear! I've got "Video Game" style on my future video list as well. I played around with it a lot for a project that never got off the ground. It was in the style of that Cyberpunk video I have on the channel, where I wanted to re-tell the story of Skyrim, but change the setting to Cyberpunk 2077.
      I picked up a few good tips to getting that FPS look. Might be more "Unreal Engine" than what you're looking for, but maybe it'll spark some ideas?

    • @J4ME5_
      @J4ME5_ 1 year ago

      @@TheoreticallyMedia Again, Unreal is super photorealistic; I am all about that painterly look, in the style of Cozy Grove, Don't Starve Alone, and other drawn/painted styles. Low-poly video game style does sound good, and whatever you release, I will eat it up! Loving your content. Thanks for the hard work!

  • @TheGeneticHouse
    @TheGeneticHouse 1 year ago +15

    Thanks for the video, bro. This is a good example of how difficult it is, and how much time and work actually goes into creating what you want from an AI image. Anybody can type something in and get an image, but when we're trying to get exactly what we want, it takes a lot of work.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +5

      Hey man! Yup, A LOT of work. It's one of the reasons I still say there's plenty of room for artists as AI imagery improves. It's just a new tool, or medium, to work with. I used a lot of comic book examples in this video, so I'll continue down that path: making a comic in Midjourney or Stable Diffusion doesn't really save a ton of time. One guy I spoke with who does really nice MJ comics estimated that it took him about 5 to 8 hours per page.
      There is this assumption with AI art that it's an "Easy" button, and don't get me wrong, it can be-- but if you're trying to put something specific together? You'll need to put in some hours. And in the comic book example, there's still the writing, panel layout, and lettering.

    • @khalilnascimento
      @khalilnascimento 1 year ago

      Very well said, I pretty much endorse it.

    • @cafemochavibe
      @cafemochavibe 1 year ago

      Yes! Agree.

  • @bySterling
    @bySterling 1 year ago +1

    Awesome guidance and tips, Tim! Curious if I could ask your advice: I'm trying to copy a few stock images a client gave us as preferred references, but I just can't create anything usable from Midjourney v5. Literally creating images of similar quality to great stock photos for our client site designs would be a game changer! What settings would you recommend to get the closest in situations like this? And of course it would be using an uploaded reference image link. You rock, bro!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Got a video coming today that I think will get you RIGHT there! Keep an eye out!

  • @harrygoldhagen2732
    @harrygoldhagen2732 1 year ago +2

    Thanks so much, Tim, this video was quite helpful. I especially appreciated seeing MJ going "wrong" so many times, which is my usual experience. It's nice to know I'm not alone! I'm mostly interested in cartoon styles, especially Archie Comics style, although with less stylized faces. Perhaps mixing in the fantastic artistry of Jaime Hernandez in his Love & Rockets indie comics. But I haven't found a way to style MJ v5 that way. I've tried various Disney styles, which are nice but obviously Disney. I've even just typed "cartoon style" and gotten some occasionally good results, though not repeatedly. Any ideas you might have? Thanks again!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Hey! Yeah-- I think it's important to show MJ's flubs. There are a ton of YouTube channels that gloss over (or edit out) the less-than-desirable outputs. I think that creates an unrealistic narrative for those who don't use Midjourney, or discourages beginners.
      And oh man, Love and Rockets! That's been an age! It was really my introduction to indie comics! I'm playing around right now for you, but it is a bit of a tough nut to crack. MJ does OK at blending styles together when the subject is broad (i.e., a medieval castle in the style of Cyberpunk), but not so much when you start to get more specific, like "this body with this head"--
      I'm still playing around for you, but trying out things like calling out Carl Barks, Mike Allred, or Dan Clowes (who isn't quite what you're looking for) is starting to get somewhere. I'll keep experimenting for you!

  • @bramvanworkum
    @bramvanworkum 1 year ago

    And it seemingly doesn't know what a sword is. Or any edged weapon, for that matter. Anyway, I just started out and want to use MJ for storyboarding. That would need consistent looks in all the characters. I get the need for photobashing and image prompts, but what's the stuff I need to know to successfully repeat a face in many storyboard plates?

  • @SpiritVector
    @SpiritVector 11 months ago +2

    One really cool command is /blend, which sometimes gives a quick shortcut.

    • @TheoreticallyMedia
      @TheoreticallyMedia  11 months ago

      I gotta spend more time with blend. I’ve seen cool results from it, but for the most part I usually end up with some bizarre results. I’ll dig into it this week, though! I know you can get stellar results, I just need to play with it more!

    • @SpiritVector
      @SpiritVector 11 months ago +1

      @@TheoreticallyMedia Yeah, it can be unpredictable, but I'll tell ya, I have formed a method of using /blend by taking only realistic-looking images and combining them with more abstract or animated-looking images; I get some cool, marketable results. I find if I go the other way around it gets really weird.

    • @TheoreticallyMedia
      @TheoreticallyMedia  11 months ago

      @@SpiritVector I totally think that's the trick. I think I've mostly tried to blend realistic to realistic and ended up with terrible results. The idea of Realistic to Surreal is probably where you get the best results. I'll dive into it...this very much seems like an ideal video topic: "Midjourney's most misunderstood command"

  • @birdybutch
    @birdybutch 1 year ago +1

    Hope it helps!)

  • @christopherray6388
    @christopherray6388 1 year ago +2

    I've been using Midjourney just for website design. I'm going to try more things like this!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Oh, cool! I’ve seen some people doing that, but haven’t really dug in to using MJ for websites: Do you use it for full design, or building individual elements?
      I can only presume someone will figure out how to connect ChatGPT4 to Midjourney and start cranking out fully functional websites at some point! Haha, everything is so crazy!

    • @christopherray6388
      @christopherray6388 1 year ago

      @@TheoreticallyMedia Well, it started with building the whole landing page design, but I have been using it now to get images like illustrations. Also passing elements into Midjourney and having it redesign them.

  • @CreativePunk5555
    @CreativePunk5555 1 year ago +1

    I've had to change up my workflow quite a bit on some prompts. I find myself generating something close and then throwing it into Photoshop to tweak it and size it appropriately. From there I will do some color grading toward what I want it to represent, and then throw it back into MJ and use the same prompt but focus more on the seed. The reason I like this workflow is that I get to build my own library with seeds. But your video on cinematic prompts works great - and ends up being much easier to edit in PS. Nothing will ever be perfect; in fact, I've never used an image 100% out of the box. I've also been successful not using an artist but just breaking down the actual style. It also returns some interesting things I didn't think of. I use MJ mostly for moodboards, screencaps, briefs, or storyboards.
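    For anyone unfamiliar with the seed part of that workflow, a rough sketch (the prompt and seed value here are invented): react to a finished Midjourney job with the envelope (✉️) emoji to have the bot DM you its seed, then reuse it with the --seed parameter, e.g.
    /imagine prompt: cyberpunk alleyway at night, neon signage, rain-slick pavement --seed 1234 --ar 16:9
    Keeping the prompt and seed fixed while changing one detail at a time is what makes a "library of seeds" repeatable.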

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Ah, brilliant! You're a shining example of how to use Midjourney as a collaboration tool! Personally, I've been popping back and forth between MJ and Leonardo. Leonardo has a lot of the Stable Diffusion toys that I wish MJ would incorporate, like Pose to Image-- which does give you a ton of control over your "actors"-- but, Leo still lacks that extra Midjourney spice that really kicks up the overall aesthetic.
      That's a brilliant idea on the library of seeds! I'm going to nick that from you, or...I will, as soon as v5 gets seeds! (Hopefully soon?)

    • @CreativePunk5555
      @CreativePunk5555 1 year ago +1

      @@TheoreticallyMedia Yeah, I've found myself using Leonardo more often as well. I also spend time in SD - there are so many new extensions and possibilities happening with SD. I pop back and forth and try to use them all for one goal. The pose features SD has work amazingly: I'll use MJ for characters, then pop over to SD for some posing, and then back to MJ with the seed I got from MJ. The inpainting tool works great too - which Leonardo has as well. In the end, if you can think outside the box and find the strengths and weaknesses of each tool, you can get some amazing results by having them all work as one unit. And then there's ChatGPT. haha. The possibilities are endless these days.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      @@CreativePunk5555 I'm literally a kid hopped up on Pixie Sticks with a box of toys right now. It's INSANE. And with this week's explosion of Text to Video, the next few months are going to get really wild.
      I'd love to do more work in SD, but as a Mac guy, it's kind of a pain. I might invest in a decent PC workstation down the road, so I can experiment with local SD generations, but I also feel like by the time I get around to it, Leo (or another service) will have most of that sussed out, and they'll be implementing all the latest extensions as they come out. Leo, in particular, had Pose to Img up and running stupid fast!

    • @CreativePunk5555
      @CreativePunk5555 1 year ago

      @@TheoreticallyMedia I'm a Mac guy as well - I'm lost on a PC. But I use a service called Runpod for SD. It's a decent service; there might be better cloud-based services, but it's working for me for running SD. And yes, the recent announcements have been insane. I'm def a kid in a candy store as well. haha. I'm starting to mess around with some music too. I've created some music stems and am then using AI to compose new music. With ChatGPT, anything is possible. If you can prime and train it on the documentation, it will literally do anything. Especially the new ChatGPT 4. I'm trying different workflows to find ways to bridge them all together. There's a new script that works with SD that is putting out some insane video. It's not EBSynth, but a beta script that has crazy results. I can share that with you if you want to mess with SD video.

  • @BUY_YOUTUBE_VIEWS_674
    @BUY_YOUTUBE_VIEWS_674 1 year ago

    Fixed my mood

  • @karmaindustrie
    @karmaindustrie 1 year ago +1

    Awesome music!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Thank you so much!! I know it’s such a small thing, but I write all the background music- and I do have to keep it low, since most people don’t want to focus on it, but I always wonder if people even notice it! Seriously, this comment made my day!

  • @clemsmith2253
    @clemsmith2253 1 year ago +1

    Any recommendation for the best AI to edit existing images? I am an architect and want to take images of interiors and have AI essentially render the textures better than the base program, as a workaround to doing actual rendering.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Can you elaborate a bit? The way I'm taking it is that you would like to use preexisting interior images and then re-render them with different interiors? Or are you saying you would like to use flat CAD-type images and have MJ render the textures?
      If it is the former, you might want to look into "Describe" - that might get you on a track: Midjourney's Amazing New Command - Diving into Describe / Prompts / Tokens!
      ruclips.net/video/cH8UdeaYQls/видео.html
      If the latter, I can dig around a bit, although to be fair, it is not something that MJ is great at. Although Leo has been talking about it.

    • @clemsmith2253
      @clemsmith2253 1 year ago

      @@TheoreticallyMedia So ideally I would be able to say 'take this image and use teak wood for the cabinets, espresso-colored stain on the beams, and add people using the kitchen wearing neutral tones'. That would be an example. I work in SketchUp a lot, and there are about 20 man-hours between the flat 3D views in SKP and getting a nice rendering from a secondary program.

  • @nrgao
    @nrgao 1 year ago +1

    Also, a tip on the canvas: if your image is having trouble outpainting something natural-looking, crop your image to a different position. For instance, if the edge of your photo has a hard line of a building or shadow, crop it to where there is empty space, like sky or half of a recognizable object, and it does way better generations.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      That's a really great tip! I've found that, if you're willing to do some manual labor, you can take "so-so" outpainting and really make it shine in Photoshop. I always maintain that AI gets you about 75 to 80% there, but with just a little elbow grease, you've got something really stellar!

    • @nrgao
      @nrgao 1 year ago

      @@TheoreticallyMedia Agree 100%. I was a graphics manager and then a digital design teacher until I started homeschooling my special needs son. I took a step back, but it feels good just to be creating again, and AI was definitely the spark. Great videos.

  • @Steger13
    @Steger13 11 months ago +1

    How do you keep the final character and make new action poses with that same character, to end up making a comic book? Thanks

    • @TheoreticallyMedia
      @TheoreticallyMedia  11 months ago

      So, to be honest, I don't think Midjourney is great at this. It's something they've been talking about doing in a future update, but consistent characters are not a thing it does well. In comic book terms, I think of MJ as a "Cover Artist"-- it does very amazing and flashy "poster"-type art-- but the day-to-day sequential storytelling is not a strong point.
      For comics, you might want to take a look at this: ruclips.net/video/5Y_7DuZaPqU/видео.html

  • @zero2007us
    @zero2007us 1 year ago +2

    Another good tip is to photo-bash elements of different outputs into the result you are looking for.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      I should probably do a full tutorial on photo bashing. I don't see a lot of that out there, and I feel like A) it would illustrate how much control you can have with Midjourney, and B) it would dispel some of the myth that AI art is "just the machine"-- yes, it CAN be just the AI, but with some elbow grease, you take the driver's seat and act more as a director.

  • @twitterglobalarmy
    @twitterglobalarmy 1 year ago +1

    Midjourney is fascinating, but after a brief period of free use, customers must pay to access the service. I've just started experimenting with Blue Willow, and I'm amazed. What are your thoughts on BW?

  • @nrgao
    @nrgao 1 year ago +1

    I found that to get screams or laughs from Midjourney, instead of saying it outright, you have to use adjectives that evoke screaming or laughing and then add "mouth open, mouth half open, etc." I found a video of 25 great descriptive words to get emotions in Midjourney but can't remember the channel.
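    A hedged example of what that looks like in practice (this prompt is invented for illustration, not from the video):
    /imagine prompt: candid portrait of a man laughing hysterically, joyful expression, head thrown back, mouth open, eyes crinkled, 35mm photo --ar 3:2
    The emotion is carried by the adjectives and the physical description rather than a bare "laughing".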

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Ha, I mentioned it somewhat in another comment to you: But wait until you see tomorrow's video!

  • @paullenoue8173
    @paullenoue8173 1 year ago +1

    I've found using a name or instruction twice sometimes gets the result you're looking for. For example, Frank Miller style, viking girl in forest shouting, frank miller style, --ar 16:9 got better results than just using his name once.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      That's interesting. I wonder if weighting has that same effect? Like, does the repetition of the name work better than Frank Miller ::4?
      That might be something worth diving into for a video this week!
      Thanks so much for the comment!!
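      For reference, the two variants being compared would look roughly like this (illustrative prompt text; ::4 is just an example weight):
      Frank Miller style, viking girl in forest shouting, frank miller style --ar 16:9
      Frank Miller style::4 viking girl in forest shouting --ar 16:9
      In the second form, the double colon splits the prompt into parts and the number weights the part before it relative to the rest.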

  • @magicallymalicious01
    @magicallymalicious01 11 months ago

    I have been trying for days to get a turtle without a shell standing upright on two feet. Even using negative prompts and uploading an exact example, I am getting the craziest-looking results, resembling monkey-type aliens.

  • @kyleduske4830
    @kyleduske4830 1 year ago +1

    It would be great if you put a few examples of your full prompts in the description for the video.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Yeah, I got some comments on that, I’ve started putting them in for newer videos!

  • @darrellcrammusic1617
    @darrellcrammusic1617 5 months ago +1

    Has anyone had success coming up with prompts that depict duels/battles with multiple characters? If I try to include any more than two entities, then it really struggles to deliver.

    • @TheoreticallyMedia
      @TheoreticallyMedia  5 months ago

      So, funny enough. I’ve been trying to do something similar lately. From a straight text prompt, you could try adding a :: between, to split the prompt. Something like “Two samurai swordfighting:: evil orcs smashing each other with clubs”
      That might work- but to be honest, you’ll probably have an easier time by providing reference images. Perhaps photobash something together and then run a /describe on it?

  • @TheAbulletAway
    @TheAbulletAway 1 year ago +1

    I am having the same issue as the person in your video. V5 has been uninspiring for me. I taught GPT to learn Midjourney, and it now creates some very detailed prompts following the v5 instructions. Over the past week, MJ has failed to create, or even come close to, anything that it's been prompted to. It ignores most of what is put into the prompt. I plan to try some of your suggestions and hope that works. But so far, Midjourney v5 has done nothing but waste hours upon hours of my time. Very frustrating and disappointing. I mean, it's cool that it does hands and teeth better, but that means nothing if it can't create what's actually in the prompt. I'm paying for it, and it hasn't produced a single usable image for me yet. Thanks for the video; it's given me some things to try and I truly appreciate that.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      I do think they've been playing around with the language in Midjourney-- and I'll admit, I've found myself popping back and forth between v4 and v5 a lot more than I ever did with v4 to v3. I don't think they've quite got the imagination/creativity fully geared up in v5-- and that's something we were told about from launch: that it would stylize more as we voted on images.
      On the prompting side, please let me know if the prompt tips here (and if you haven't, check out the Cinematic Tutorial-- that's more of a deep dive) get you some better results!

    • @v1nigra3
      @v1nigra3 1 year ago +2

      From what I’ve seen, v4 is significantly more artistic and creative while v5 is more grounded and gives more defined and clean results but without the artistic influence

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      @@v1nigra3 totally. And from what it looks like, the MJ team have slowly been developing a v5 “look” based on user ratings. So I’m sure the creativity will get better over time. Haha, they’ll likely nail it just as v6 arrives and we’ll be back to the start!

    • @TheAbulletAway
      @TheAbulletAway 1 year ago

      @@TheoreticallyMedia I meant to reply sooner to this, but I am just so overwhelmed with AI and trying to learn and experiment. The tips helped. What I've found is that MJ, like you said, isn't there yet when it comes to just talking to it. I get cleaner, more accurate results when I keep the prompts simpler and more in the style of MJ v4, while using v5. The biggest issue I still have is that in v5 I'm still finding the occasional logo and text on things that you can tell came from stuff it found on search engines. It fails almost every time to create more than one focus in the image. For example, I wanted an image of a cartoon cat, a zombie, and a monster sitting at a patio table. I generated dozens and dozens of prompts and it just could never do it. The results were downright laughable. I've found it can do some amazing imagery when it's a single subject with a simple background, but when you go for two or more characters, it's just not happening. My solution was to have it generate each character on a white background, then have it create the setting, and then photoshop them into the background. I have no doubt it will get there, but as of right now, it's just really good at creating stunning visuals and not so good at the specifics. Once it can learn to consistently create the same characters and actually create what is in the full prompt, it will be truly mind-blowing. Also, thanks for taking the time to respond. I have literally said to myself every day, I need to go respond, and then I get sidetracked in rabbit hole after rabbit hole.

  • @patrickr.8529
    @patrickr.8529 1 year ago

    It would be very helpful to see the actual prompts on the screen as a reference

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Good feedback. I’ll try to work that in for future videos!

  • @xitcix8360
    @xitcix8360 1 year ago +1

    Idk what's cooler, the AI or that website

  • @desmckenzie526
    @desmckenzie526 1 year ago +1

    I've come to believe that a huge issue, currently (bring on v6), is MidJourney trying so hard to treat a single composition in its totality and tie all composition aspects back to the biases it has "learned". Sure, you can dive into weighting... but the next step for MidJourney's evolution could be introducing prompts like --face, --left foot, --right foot, --torso, --horns, --background, --foreground... so that users can give MidJourney a little more help deciding what to do and a little more power to treat different parts of the composition independently rather than as a whole... and THEN try to rationalise it all into one composition (or not... if the user really really really WANTS a very disparate composition, they should be able to indicate that with a --frankenstein value, for want of a better term). Obviously this applies mainly to compositions featuring humanoids. Those using MDJ to make websites would want a similar ability to compartmentalise... but with terms like --header, --navbar, --menu, --slideshow, --footer, --sidebar. It is possibly time MDJ learned how to separate so that, for example, 1 - the horns I requested on my Tiefling don't just look like horn-shaped hair (because it was too close to the hair), 2 - the black leather magic tome I requested in his LEFT hand isn't set on fire simply because I also requested a Fireball in the RIGHT hand... and 3 - the little girl can go barefoot. Heck, then maybe even take it further and allow --cellref C5, where A to K is the distance along the horizontal and 0 to 10 is along the vertical... so I can request --cellref C2 barefoot --cellref C8 barefoot.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      It's funny, I think that while there is a subset of users that are looking for that level of granular control (like you and I, clearly!) I'm not sure that's where MJ's aims are. Personally, I think we're going to see a fork in the road with two flavors of Image Generation. There will be the "easy prompt" version that Midjourney does so well-- and that will be used by the population that just wants to play around, or quickly generate something like a puppy in a field of flowers.
      But, we'll also see another system that incorporates more Stable Diffusion type widgets (like Pose to Image) for those who are looking for a finer level of control.
      And man, while I love the text based idea you described, and would have a blast with it, I think it would drive most people INSANE. Personally, I've always been a fan of the text based (almost programmatic) style of prompting, but I also recognize that the vast majority of users would prefer a slider based UI.

  • @fatjay9402
    @fatjay9402 1 year ago +1

    Can you make a video about prompts for matte paintings?

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      I can try to get to that, but in the meantime, here's a super simple prompt I just tried out: Matte Painting, The dock of a large Spaceship, 1970s Sci Fi Design, --ar 16:9 // It looked pretty good, but the downside is that MJ kept putting small figures in the background, which won't work if you're using it as a true Matte. Easiest thing there would be to just zap them out in Photoshop.
      Is that what you were looking for?

  • @Varo_4
    @Varo_4 1 year ago +1

    I'ma try out that idea of pasting an expression, that was a cool idea🤣🤣🤣😂

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      excellent! I actually have a more in-depth look at that technique here: ruclips.net/video/qeRVBKi1QMQ/видео.html -- check it out if you get the chance!

  • @bluetablepainting
    @bluetablepainting 1 year ago +1

    Try "... standing" or "... running" or mention something to do with the feet and MJ will show more or all of the figure. "a humanoid robot running" will almost always produce a whole figure.

  • @waqmanime
    @waqmanime 7 months ago +2

    My problem with Leonardo AI is that it doesn't have unlimited generations like Midjourney, otherwise I would have gone for them.

    • @TheoreticallyMedia
      @TheoreticallyMedia  7 months ago

      I mean, you can always play around with Leo's free tier. They actually offer a LOT there, you just have to be kinda smart about your project to credits ratio. Might take a few days to get what you need, but it can be done.
      I personally love using MJ and Leo as a hybrid workflow.

  • @jRoy7
    @jRoy7 1 year ago +1

    I think something like "barefoot" would work better than "without shoes", since "without" probably needs to be swapped to a multi-prompt with a negative weight for MJ to understand what we're asking for?
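    To make the two approaches concrete (illustrative prompts, not from the video):
    a woman walking along the beach, barefoot, golden hour --ar 16:9
    a woman walking along the beach, golden hour --ar 16:9 --no shoes
    The --no parameter acts like a negatively weighted multi-prompt term, whereas "she wears no shoes" still puts the word "shoes" in front of the model.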

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Good tip!! I think this week I'll be doing a video on Weighting (positive and negative), I've been playing around a lot more with :: over the weekend, and I think it's a pretty powerful method that most folks aren't utilizing.

  • @user-zo8tq4nx6m
    @user-zo8tq4nx6m 1 year ago +1

    Do you know how to keep a designated part of the picture original, with no variation in that part, such as a dress on a girl? It does not work just using the parameter --no blah blah.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      So, I'll say MJ is not really good at that. It does randomize clothing quite a bit. Generally, that's why I stick to calling out simple outfits like "a leather jacket".
      You might get some variation between the images, but it is usually less noticeable than with something like "a patterned dress".

  • @blakeschwier
    @blakeschwier 1 year ago +1

    MJ doesn't like "no ____"; you have to use negative prompting: "...--no shoes"

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      True. But admittedly, even with negatively weighted prompts, Midge sometimes ignores you! On the plus side, I haven't typed in --no extra limbs in ages!

  • @danjones7561
    @danjones7561 1 year ago +1

    I'm trying to generate color photos of James Dean, but MJ mostly keeps generating black and white shots. The words "contemporary" and "modern" are a fail; "coloured" looks artificially colorized.
    MJ is stuck in the 50s.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago

      Interesting. What’s your base prompt? I’ll take a stab when I get back to my computer. Sounds like a fun challenge!

    • @danjones7561
      @danjones7561 1 year ago

      @@TheoreticallyMedia The best one is "a color photo shoot of James Dean", but it looks artificially colorized.
      "a coloured street photo of James Dean" is second.
      The rest are b/w :/
      I would love to see Jimmy in our contemporary world.
      Thanks a lot for giving it a try.

    • @danjones7561
      @danjones7561 1 year ago

      I had some cool photos with "street photo of Marilyn Monroe laughing with James Dean". But always black and white.

  • @shockadelic
    @shockadelic 9 months ago +1

    "she wears no shoes"
    ARRGH! You mentioned SHOES, so it reads that instruction as SHOES.
    You should say "barefoot".

    • @TheoreticallyMedia
      @TheoreticallyMedia  8 months ago +1

      Yeah, this was an older video-- lots of tricks picked up since-- and I think the MJ language engine has gotten a LOT better!

  • @john2510
    @john2510 2 months ago

    I think MJ is getting more "left brain" in its analysis of prompts. It's becoming much more literal in trying to create the subject and less interested in capturing the style and artistry of the reference.
    Two years ago, I got some amazing results from using the "in the style of... " prompt. The images had distorted hands and faces, but they captured the style of the artist completely.
    I asked for a pen & ink drawing in the style of horror illustrator Bernie Wrightson, without mentioning a subject matter. There was nothing identifiable in the resulting picture, but I could have told you it was a Bernie Wrightson from across the room.
    I just recently got back into MJ and am disappointed in this aspect of its development. Those same prompts from two years ago result in very clear representations of the subject matter, but they've lost the style almost completely. I tried changing the order and weight of the style references in the prompt, but that made only a slight difference.
    I suspect there are ways to get to the right place with an extremely detailed narrative, but using style references is less effective. I'm hoping they bring it back more in line with where it was.

  • @cameleonxxl8456
    @cameleonxxl8456 1 year ago

    Thx for not showing a single prompt you were using for the images. NOT!

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      Ha! If you look at future videos, I started doing that. I even started putting together little PDF worksheets for everyone with the prompts. You were heard!

    • @cameleonxxl8456
      @cameleonxxl8456 1 year ago

      @@TheoreticallyMedia thx! 🤗❤️

  • @paulwal222
    @paulwal222 1 year ago +1

    It would help if you showed the actual exact prompts you use for each iteration, word for word. It's mildly infuriating that you don't.

    • @TheoreticallyMedia
      @TheoreticallyMedia  1 year ago +1

      But then who would pay $300 for my prompting course? I'm kidding, I'm kidding!
      Noted, you aren't the first to mention it- new video coming out tomorrow, and I'll be sure to put the prompts into the examples! Thanks for the feedback!