Create CONSISTENT Characters - Midjourney Character Design

  • Published: May 31, 2024
  • ⚠️⚠️ WARNING - PLEASE READ ⚠️⚠️
    In this video, I say that you can "train" Midjourney using the Emoji-Buttons. Well, it turns out you can't, no matter how much we imagine that it does. I was wrong. It happens. Life goes on. However, giving your character a name in the prompt DOES help keep your character more consistent.
    Furthermore, I've released Part 7 of this series, which illustrates a much better way to achieve consistent characters and it works in both v4 and v5.
    I hope this will "appease" some of the FB/Reddit folks out there who are quick to point out errors in everyone else's content but apparently aren't willing to put themselves out there and produce their own content that teaches people stuff. 🤷‍♂️
    PS: You wouldn't believe how many people have written me, telling me that this video helped them progress with their projects. Despite its flaws.
    ⚠️⚠️ WARNING - PLEASE READ ⚠️⚠️
    -
    📙 Midjourney COURSE mastersofmidjourney.com Beginners to Advanced
    🚀 FREE Midjourney Cheat Sheet tokenizedhq.com/freebies/mj-c...
    🔗 FREE Promptalot Extension promptalot.com/extension
    🔗 FREE Supporting Material tokenizedhq.com/freebies/vide...
    Folder: 2023-01-23 - Consistent Characters
    🌐 Check out the full blog post:
    tokenizedhq.com/midjourney-co...
    🤝 Credits 🏆
    Shoutout to Kris from @AllAboutAI for inspiring the initial idea on how to do this.
    -
    📰 AI Newsletter 👉 Your Inbox tokenizedhq.com/newsletters/ai
    🐤 Follow me on Twitter / chrisheidorn
    💼 Follow me on LinkedIn / christianheidorn
    📸 Follow me on Instagram / christianheidorn
    💬 Join the Tokenized AI Discord tokenizedhq.com/invite/discord/
    🎬 Character Design Series 🎬
    Playlist: • Create CONSISTENT Char...
    Part 1: Create Consistent Characters • Create CONSISTENT Char...
    Part 2: Place Characters in Action Scenes • PLACE a Character in A...
    Part 3: Apply a Consistent Character Style • Apply a CONSISTENT Cha...
    Part 4: Create Multi-Character Scenes • Create MULTIPLE Charac...
    Part 5: Create Facial Expression for Characters • Create FACIAL EXPRESSI...
    Part 6: Infuse Your Characters with Themes • INFUSE Themes into Cha...
    📺 Recommended Related Videos & Playlists 📺
    Monetizing AI: • Make Money with Midjou...
    Activate GOD MODE in MJ: • Become a GOD in Midjou...
    The SECRET Language of MJ: • The Surprising TRUTH a...
    How to Use --SEED in MJ: • How to Use the --SEED ...
    How to Use NEGATIVE Prompts in MJ: • How to Use Negative Pr...
    -
    So you want to create a consistent character in Midjourney?
    Do you feel like you've tried almost everything but for some reason, you keep getting characters that look very different?
    Creating a consistent character in Midjourney isn't easy and it requires a bit of an unconventional approach to prompting.
    In this tutorial, I'll show you how to "train" Midjourney toward the exact look that you want.
    CAUTION: Bear in mind, that you can't actually "train" the model. All we're doing is going through variations until we find a look we like and then we use the seed from that particular image.
    This video is Part 1 of a longer series about Midjourney character design.
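    The core of the approach described above is simply to reuse the same character name, feature description, and seed value in every prompt. A minimal sketch in Python of how such prompts could be assembled (the character name, features, and seed value here are illustrative, not taken from the video):

```python
def character_prompt(name: str, features: str, scene: str, seed: int) -> str:
    """Build a Midjourney-style prompt that reuses the same character
    name, feature description, and --seed value for consistency."""
    return f"{name}, {features}, {scene} --seed {seed}"

# Hypothetical character and seed, for illustration only.
prompt = character_prompt(
    "Carla Caruso",
    "young woman, wavy dark hair, green eyes",
    "portrait photo, studio lighting",
    seed=1234,
)
print(prompt)
# Carla Caruso, young woman, wavy dark hair, green eyes, portrait photo, studio lighting --seed 1234
```

    In Discord the resulting string would follow `/imagine prompt:`; the point is only that the name and seed stay identical across generations.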
    -
    ⏰ Timestamps ⏰
    00:00 Meet Peggy Palermo!
    00:44 The Challenges of Creating Consistent Characters
    01:30 Naming the Character
    03:03 Defining the Character's Features
    03:50 Picking an Initial Look
    05:12 "Training" the AI Model
    07:57 Giving the Character a Role
    08:51 "The Matrix" starring Carla Caruso
    11:53 "Tomb Raider" starring Carla Caruso
    12:26 Carla Caruso: The Marvel Comic Hero
    14:28 Placing the Character in Action Scenes
    -
    #midjourney #midjourneyai #midjourneyart #aiart
  • Science

Comments • 586

  • @TokenizedAI · a year ago · +21

    ⚠⚠PLEASE READ - PUBLIC SERVICE ANNOUNCEMENT ⚠⚠
    It has come to my attention that quite a few people in various FB groups and Subreddits have got their panties in a twist because this video of mine contains some incorrect assumptions. So they blame me for spreading what they call "misinformation".
    To be clear, when I recorded the video, I was honestly under the impression that this process works, as were dozens of other content creators who have done their own videos about the exact same process. Well, turns out I was wrong. Sh*t happens. We all make mistakes.
    THAT BEING SAID, while the "training" aspect of this video is clearly WRONG, it has been confirmed by the MJ team that giving the character a name within your prompt DOES help.
    Oh and by the way: Part 5 of this series also shows another way to create consistent characters and THAT one definitely works.

    • @TheFuss85 · a year ago · +1

      trying to use the email reaction but the midjourney bot doesn't DM me the results.

    • @TokenizedAI · a year ago · +1

      @@TheFuss85 Wrong email icon? You also need to make sure that your settings even allow DMs to be sent to you from that server

    • @TheFuss85 · a year ago

      @@TokenizedAI that was the issue! Thank you so much Christian 👊

    • @Albopepper · a year ago · +2

      You can use the YouTube editing tool to snip that part out of your video.

    • @TokenizedAI · a year ago · +4

      @@Albopepper Yeah, the problem is that it's not just "one small part". Cutting an entire section that also contains correct and relevant information would do more harm than good. Especially since the fact that you can't really train MJ doesn't do any real damage, to be honest. It just annoys the keyboard heroes in the subreddits who spend their entire day pointing out other people's honest mistakes.

  • @arnoldtrinh · a year ago · +28

    A correction to the “training the AI” section: I got confirmation from the team that the training data is all from 2019, so unless you've got a time machine, that's not possible 😉
    Using an image reference and --seed does give it something to go off of, and that will be your best bet for consistent characters.
    Hope that helps clarify things for those who are creating characters over and over.

    • @TokenizedAI · a year ago · +5

      Thanks for sharing this. Sounds reasonable. That's why I've been putting the "training" in quotations.

    • @DiegoSilvaInstrutor · a year ago

      Thanks ^^

    • @markrichards5630 · a year ago

      for V4 that's not quite true - new training data has been added.

    • @michaelsbeverly · a year ago · +1

      @@markrichards5630 Yeah, I seem to see what he describes in this video actually happening. I don't know, but it seems to work. I got a super consistent character and then changed him slowly into a vampire-like monster, and the character stayed recognizable while turning into a monster. It's a spooky effect.

    • @evelynannrose · a year ago · +1

      ​@Paleoism I tried this too, and it does seem to "train" MJ in a way: it starts to get familiar and creates consistent facial features, hair style, and clothes.

  • @karyonite · a year ago · +4

    Thanks Christian, that was so insightful! Thanks for explaining more of what's happening, how, and why. Part 2 is gonna be very useful as well, so thank you so much and I can't wait :)

    • @TokenizedAI · a year ago · +2

      Thanks for the kind feedback! :)

  • @Dan-mm6bu · a year ago · +6

    You, sir, are a star YT teacher! Thanks for all the solid material you've created and shared with us all. Cheers!

  • @Tomduckworthable · a year ago · +1

    Thanks for your videos and comments. Really helpful to see what you've been trying, and learn about things to try myself. I have found it quite difficult to find the other parts in this series - I normally see people include the part number in the video title, so it's really simple. Or sometimes they are in a playlist

    • @TokenizedAI · a year ago

      I literally link to the entire playlist and also list all parts in the description 😉 It's not linked at the end of that video because I didn't originally plan for a full series. But Part 2 is linked at the end.

  • @hpongpong · a year ago · +4

    The duplicated prompt you showed near the end of the video is very interesting. I never thought of using near-identical prompts and assigning weights to fine-tune your image. Fantastic tip!

    • @TokenizedAI · a year ago · +2

      Yeah, I'll be covering that bit about multiprompts in Part 2. The example I used isn't even the usual way I do it. In this case I was a bit lazy. Normally I change the sentence a lot more and I also usually do it with 3-4 segments rather than just 2.
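      The multiprompt idea discussed above relies on Midjourney's `::` separator, where each text segment carries its own weight. A small sketch of how such a prompt could be assembled (the segment texts and weights below are made up for illustration):

```python
def multiprompt(segments):
    """Join (text, weight) pairs using Midjourney's '::' multiprompt
    syntax, where each segment is weighted independently."""
    return " ".join(f"{text}::{weight}" for text, weight in segments)

# Two near-identical segments with different weights, as discussed above.
prompt = multiprompt([
    ("Carla Caruso, wavy dark hair, green eyes", 2),
    ("Carla Caruso wearing a leather jacket, city street", 1),
])
print(prompt)
# Carla Caruso, wavy dark hair, green eyes::2 Carla Caruso wearing a leather jacket, city street::1
```

      The same helper extends naturally to the 3-4 segments mentioned in the reply.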

  • @czar1703 · a year ago · +2

    Don't mind the haters. Very useful content packaged into a very solid delivery style. Thanks for the value, keep pushing :))

    • @TokenizedAI · a year ago · +1

      Appreciate it! I don't mind the haters. I actually just troll them back 🤣

  • @TheAgavi · a year ago · +2

    I was trying to do this legwork myself but I'm glad you already did. Subbed. Legend

  • @Miguel.Santos · a year ago · +5

    Great quality . Well explained. Keep going with the awesome work!!!

  • @digidope · a year ago · +7

    Midjourney v4 runs on Stable Diffusion, so to understand how to get a consistent look, think about how it's done in Stable Diffusion via custom embeddings. Your prompt is converted to tokens, and certain tokens trigger certain things. But if you really want it to stay 100% consistent all the time, then you need to train your own embedding using your own image set. MJ can't (at least not yet) be trained by any means. Training a character takes about 10 minutes, so it is a "bit faster" than trying to force MJ to do it.
    But if you want to stay in MJ, then spend time figuring out which word triggers which token. They might even trigger a sampler from the prompt behind the scenes, and figuring that out makes life a bit more complicated :) Different samplers tend to read the prompt differently. It's not always easy even in SD, where you can see all the parameters, but MJ hides 95% of the parameters, making it a lot harder sometimes :D But in exchange you get the nice default MJ look, where you can simply type "shsjysitug" and get a nice-looking image. (That can be replicated in SD quite "easily", btw.)

    • @TokenizedAI · a year ago · +2

      Stable Diffusion was released after Midjourney, so that's not entirely true. What IS true is that Midjourney has been experimenting with Stable Diffusion since its release, because the CreativeML OpenRAIL license makes this possible. Hence why Midjourney also made adjustments to their Terms of Service shortly after SD's release.
      Midjourney uses natural language processing, while Stable Diffusion does not. This is one of the reasons why relatively "natural" prompts that work very well in Midjourney do not yield similar results in Stable Diffusion. In SD you need to use far more explicit (sometimes weird) keywords to get what you want.
      PS: I'm curious whether you have an explicit source that confirms that MJ runs entirely on SD? That seems somewhat far-fetched in my opinion, but I'll happily have myself proven wrong.

    • @digidope · a year ago · +5

      ​@@TokenizedAI Emad, the CEO of Stability AI, posted on Twitter that the MJ v4 beta is using SD. Also, in the new lawsuit against MJ, the plaintiffs claim that MJ uses the same dataset as SD.
      NLP is a layer between MJ and SD that converts the prompt into a format that the AI image generator understands better. SD is just a base technology on top of which anyone can build anything, so by default it does not do much. BlueWillow seems to be using some sort of NLP layer, as they trigger a model depending on what is written in the prompt.
      Replicating the MJ look in SD is fairly simple when using the right model with the right embeddings. The power of MJ comes from the embeddings they have created in house.
      For those who don't know what embeddings are: if I write the word SNOWBOARD in SD, I get quite a crappy image. MJ will produce a very nice image with the same word. I generated four snowboard images in MJ and trained a new embedding from those images, naming my embedding SNOWBOARD. Now every time my prompt contains the word SNOWBOARD, it will generate an image similar to MJ's.
      Today it makes no difference what tech is behind which AI generator, as new models and embeddings can be created from AI-generated images.
      As transformers and models are available to anyone, it's not hard to write an NLP layer for SD where you can enter an MJ-style prompt and it will convert it to the format that is best for the AI image generator. One step further is to include GPT-3, so one can just write: "Give me ten images on the topic of sci-fi gardening." GPT-3 then generates ten ideas, converts those to a format for SD, picks the model, embeddings, and negative keywords best suited for each prompt, and then SD outputs ten images. Kinda next-level MJ.

    • @digidope · a year ago · +3

      Also, just noticed this: it was possible to "break" the MJ look so it looks like the default SD model. This prompt worked in late November, but today you will get an error. It means they have added more "training wheels" to prevent "breaking" the system: Hierarchy of power by Robert MacBryde, pixabay contest winner --no text --no infographics --no poster

    • @TokenizedAI · a year ago · +1

      @@digidope Yeah, I assumed that they were using SD for their MJ4 beta. But are you sure that they're using SD exclusively?

    • @digidope · a year ago · +2

      @@TokenizedAI Not sure. Maybe they used SD to create their models and embeddings.

  • @kaizen_5091 · a year ago · +7

    Love this technique for consistent character creation and this is by far one of the best guides I have seen on it and included points that I haven't seen mentioned before. I am especially excited to see Christian's solution for the background in part 2 and it will hopefully answer any lingering questions I have from Part 1.

    • @TokenizedAI · a year ago · +2

      Yep, I'm pretty sure Part 2 is going to be VERY enlightening for many people. I actually need to do an entirely separate video on multiprompting after that because it's not exclusively applicable to character design.

    • @kaizen_5091 · a year ago · +2

      @@TokenizedAI Yes please. I can only imagine how difficult it is to cram what you need into your content without it being ridiculously long, so it makes sense to approach it like a series.

    • @ecommasters3847 · a year ago · +2

      This is incredibly well explained and demonstrated.. thankyou for your videos

  • @BTMOM1933 · a year ago · +1

    This is seriously awesome. I've followed your instructions patiently and I'm getting great results. Thanks. Can't wait for part 2.

    • @TokenizedAI · a year ago · +1

      Awesome!

    • @BTMOM1933 · a year ago · +1

      @@TokenizedAI Christian, I guess you may have already answered this question, but could you point me towards an answer? Sometimes, out of the blue, blemishes appear on a character's skin (the cheeks, but sometimes the forehead too) and it's difficult to get rid of them (in fact, they keep getting uglier in upscales, etc.). Do you have a way to cure that? I've tried --no blemishes and "perfect skin", but I'm not sure it works well. Thanks

    • @TokenizedAI · a year ago · +1

      @@BTMOM1933 I know exactly what you mean. I honestly haven't tried to get rid of it yet, so I don't have a quick fix for you I'm afraid.

    • @BTMOM1933 · a year ago · +1

      @@TokenizedAI Apart from my interest in the technique you've described here and in creating consistent characters, I'm intrigued by what this technique seems to tell us about the way Midjourney's AI creates female characters. As they evolve, they seem to become younger. So a "beautiful woman" will in a few iterations become an adolescent. It is as if the collective male gaze had decided that 16 to 23 is the only viable option. Giving a specific age to a character helps up to a point: "a 30 year old woman" or "a 40 year old woman" helps, but in a few iterations the 30-year-old will slowly creep back down to 25 ;-), while the 40-year-old will sprout older versions of herself. I'm not sure I've seen the same with male characters at all. They seem to "stay" the same age.

    • @TokenizedAI · a year ago

      Interesting observation.
      If it's of any particular interest, 96% of the viewers of this channel are male and the biggest age group is 35-45 years old 😉
      That should give you a good idea of what your typical MJ user looks like.

  • @bitsandbobs5858 · a year ago

    I agree with you, there are many who are quick to criticise supportive tutorials from others without offering insight of their own.

    • @TokenizedAI · a year ago

      A day in the life of a YouTuber 😆

  • @StyleViewStudio · a year ago · +1

    Brilliant! You've made quite a few videos for Midjourney, and this one is one of the best. Thank you.

    • @TokenizedAI · a year ago

      Just make sure you read the description too. It's important because some of the info in this one is outdated/incorrect.

  • @cybercrafted · a year ago · +2

    Well, this one is a game changer. I already knew a lot of what you are talking about, but the Easter egg for me was to add a name and to realize that it associates the name with a look. Good job

  • @jamestaylor7021 · a year ago · +3

    Very informative and well structured tutorial. Thank you for taking the time to share your knowledge with the rest of us plebs 😃

    • @TokenizedAI · a year ago · +1

      You're very welcome! Always happy to hear that people find this useful :)

  • @rod4s311 · a year ago · +2

    Great! That’s what I was waiting. Keep going 👍

  • @CorporalDavis · a year ago · +1

    Want to say thanks for the tidbit of info.. .started doing while watching your vids -Thanks once again

  • @manuch3384 · a year ago · +2

    really interesting video. Really like the process. It contributes to my better knowledge of how MJ works. I will impatiently wait for the next episode.

  • @StyleViewStudio · a year ago · +1

    Thank you Christian! You inspired me to get serious - and get a (Stealth) subscription to Mid Journey. I like everything about your approach - it is a very creative approach - just as an Artist or a writer would do....Thank you!

    • @TokenizedAI · a year ago

      Pssst.....don't tell the mob that you think my approach is creative. They said I'm clearly not a creative 🤣

  • @SacredGeometryWeb · a year ago · +3

    Thanks. Great tips. Ive watched many other tuts, and created 10k plus images, and this was new to me! :-)

  • @maxwellcoleshow · a year ago · +1

    Wow! This is super Duper Duper Duper Duper Duper Duper Duper helpful. Thank you for this, and for the part two. Love your channel.

  • @kazulilie · a year ago · +3

    Thank you very much Christian, your videos are a very great source of knowledge about MJ :) I've learned the majors tips, logic and language with this AI thanks to your work sharing :) Waiting for the second part of this video. :)

    • @TokenizedAI · a year ago · +1

      Thank you so much for the kind feedback! 😊

    • @kazulilie · a year ago

      @@TokenizedAI You're very welcomed :) Thank to you for your amazing work :)

  • @TheFedericogalli · a year ago · +1

    Crazy! I searched "how create consistent character in midjorney" and found your video, which was uploaded only 10 hours ago!
    Great content, subscribed 👍👍

  • @GabrielVenzi · a year ago · +4

    Part 2 please!!! Great explanation. Thank you!

    • @TokenizedAI · a year ago · +2

      Coming soon! I'm working on it.

  • @davidmichaelcomfort · a year ago · +24

    Very interesting. It is a challenge to create consistent characters. From my experience, one must generate different types of portraits of a character (headshot, waist-up, full-length) to use them in different settings and taking different actions. And when you place a character taking an action or in a scene, there is "style transfer" or "style creep", so you might need to use prompt weights for different parts of your prompt. I've written a Medium post on "Creating Facial Expressions on a Consistent Character in Midjourney"; YouTube doesn't allow URLs in comments, so you have to Google it. Also, in some cases, you are not going to be able to use seeds, since they really constrain what the images will look like. Creating consistent characters taking lots of different actions and in different portraits is really tough. It can be done to a certain extent, but it is definitely one of the limitations of Midjourney, IMHO.

    • @TokenizedAI · a year ago · +6

      I think it kind of depends on how much of the character you actually want to control in minute detail. For wider shots and certain styles I'd argue that some details aren't that important.
      Placing the character into action scenes (which I'll cover in Part 2) requires extensive use of multiprompts, to the point where it's going to be really difficult for me to display it on the screen.

    • @davidmichaelcomfort · a year ago · +15

      You can actually re-create the exact look of the character by taking the seed from the original set of 4 images and the exact same prompt that you used. Then, by using this prompt and the seed ID, you can reliably get the exact character. You can then append or prepend additional prompts to this base prompt/seed combination. I just experimented with "in the style of The Matrix" using a weighting of 2 and it gave me good results. You can tune the images by changing the weighting and stylize values. I just added the results to the end of my Medium post.
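      The seed-plus-append workflow described in this comment can be sketched as follows (the base prompt, seed value, and style weight are hypothetical examples, not the commenter's actual values):

```python
def restyle(base_prompt: str, seed: int, style: str, weight: int) -> str:
    """Re-create a character from its original prompt and seed, then
    append a weighted style segment using multiprompt syntax."""
    return f"{base_prompt}::1 {style}::{weight} --seed {seed}"

# Hypothetical base prompt and seed, for illustration only.
base = "Carla Caruso, young woman, wavy dark hair, green eyes"
print(restyle(base, seed=1234, style="in the style of The Matrix", weight=2))
# Carla Caruso, young woman, wavy dark hair, green eyes::1 in the style of The Matrix::2 --seed 1234
```

      Raising or lowering `weight` shifts how strongly the appended style competes with the base character description.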

    • @TokenizedAI · a year ago · +4

      Well, that's what the seed is for, after all. Though this behavior is exclusive to v4.

    • @davidmichaelcomfort · a year ago · +11

      @@TokenizedAI I am working on Medium posts on creating different types of portraits of characters, having characters interact with each other, and another post on lighting.
      I've written posts on "A Guide to Using Different Shot Types in Midjourney, including Close-up, Medium-shot, Long-Shots" and "Using Color and Color Theory in Midjourney".
      It is really a "blue sky" time for AI art and Midjourney. Everyone is learning and struggling with how to do things. Sometimes things work great, but most of the time things don't really work out the way you want them to. So persistence and experimentation are key. Thanks again for your videos.

    • @TokenizedAI · a year ago · +5

      Indeed, despite what most opponents of AI art say, it's far from "easy" if you want to do anything remotely meaningful.

  • @bigheadzhang · a year ago · +1

    Great information. It introduced me to the power of seeds. Also, you seem to be very knowledgeable about hair.

    • @TokenizedAI · a year ago

      Hahaha...why do you say that? Because I know what the different hairstyles are called? 😂

    • @bigheadzhang · a year ago

      ​@@TokenizedAI 😝Keep up the good work!

  • @CastPartyDandD · a year ago · +2

    Your videos are great! Very clear! Thank you.

  • @whiplashtv · a year ago · +2

    Just what I needed. It has been a month-long struggle for the project I am working on.

    • @TokenizedAI · a year ago · +1

      Well, I hope this helps you. It might not solve all problems though. Part 2 is going to be much more of a game changer for people.
      That I'm certain of 😁

  • @TNTGroup_LTD · a year ago · +2

    That was an excellent lesson, thank you so much for sharing!

  • @med-3000 · a year ago · +2

    Thank you, really well explained tutorial! Great channel!

    • @TokenizedAI · a year ago · +1

      Thanks for the really nice feedback :)

  • @JASHIKO_ · a year ago · +1

    This was a really good guide! Thank you!

  • @santarigreen6076 · a year ago · +1

    Thank you for your thoroughness. 😀

  • @JuanjoGonzalez · a year ago · +1

    Just awesome! Thanks for these tips.

  • @VahnAeris · a year ago · +3

    Very big thanks! Love this, again top-quality content 🙏
    Why did people keep telling me that rating jobs was useless?
    Damn, thanks for all those new tips!

    • @TokenizedAI · a year ago · +1

      Well, maybe it is useless? I honestly don't know. I tried out this method after hearing about it and found it to work surprisingly well. Might also depend a lot on how people are prompting.
      Not everyone who says that something is or is not useless necessarily knows what they're talking about 😅

    • @VahnAeris · a year ago · +2

      @@TokenizedAI I agree, this is why the best knowledge is what we experiment with ourselves anyway.
      I'll try a bit on my side and give you feedback if I find more certainty.

    • @TokenizedAI · a year ago · +1

      Yes, please do share your findings!

    • @VahnAeris · a year ago

      @@TokenizedAI So far you seem a bit ahead of my curve, but I'm working hard on learning more! Thanks for the good share, will do.

  • @villagranvicent · a year ago · +9

    Amazing!! This is one of the most hidden secrets no AI creator wants to share 👏🏼👏🏼 Thanks man 👍🏼

    • @TokenizedAI · a year ago · +4

      I'm not really sure they don't want to share it. Most top-notch creators just don't run a YouTube channel.

    • @villagranvicent · a year ago · +3

      @@TokenizedAI I know, but I have seen many unanswered questions about exactly that on their Instagram accounts.

    • @PaladinCiel · a year ago

      I suspect it's more so an issue of people not wanting MJ to ban them or patch out techniques they're using to get certain results considered NSFW. The censorship is royally killing me. I'm not even trying to create prawn. My works are risqué, not prawnographic. Which MJ and their devs equate to prawn.
      I sometimes wonder if they have some secret code they use to bypass the censors for themselves.

    • @TokenizedAI · a year ago · +2

      This is typical behavior in many other areas that people think are competitive. Point is, someone is going to share those insights at some point, so it's pretty useless to keep them a secret.

  • @AlexC-xj6ys · a year ago · +1

    Thank you so much! It’s so helpful!

  • @vitorscienza7200 · a year ago · +2

    That's great content and information, as always, Christian!
    I have two questions here, if you don't mind:
    - Does doing this in a separate Discord channel have any influence on the results, or is it just to keep everything tidy and more organized?
    - Is the name given to the character really that important? I have created a character through various blending attempts, without textual prompts, and I have a seed for it, but since it was made that way, I didn't give it a name. Is there a way to continue doing this with that particular character? (I mainly make creatures instead of regular human characters, by the way 🤣)

    • @TokenizedAI · a year ago · +1

      1. Doing this in any particular location in Discord really shouldn't make any difference.
      2. You don't necessarily have to give it a name, especially if you're not doing close-up portraits. The main reason I'm using a name is that I want to avoid my prompt getting "contaminated" by people who might be using a similarly descriptive prompt. I have no idea whether it matters, though. You could easily also call it "Wiley the Creature" or "Beast of the West". The point is to introduce something unique (but recurring) into the prompt without it being a word that might influence the image output.
      For example, "Windy Wiley" might risk adding wind to the image. But maybe I'm also overcomplicating things. Who knows 🤷🏻‍♂️

  • @kkwingtmusicwkkwing2010 · a year ago

    Thanks for sharing 😊

  • @charmainegrayphotography1119 · a year ago · +1

    Dude! This is fantastic. Thank you so much. I would LOVE it and appreciate it so very much if you could explain or do a tutorial about starting with a vector-style character or mascot that you already have looking as you desire, and want midjourney to retain as much as possible from your original upload as the source image. As a next step, run a series of poses and facial expressions but keeping all else the same. You touched on some of this, but how can I force midjourney to keep the identical image except maybe change sunglasses from dark to light. Something very small. Is this possible? Thank you again.

    • @TokenizedAI · a year ago · +2

      What you are trying to do is not within the scope of what the technology is currently capable of. At least not with Midjourney.

  • @MarkAhlquist · a year ago · +1

    So great! Thank you

  • @VaibhavShewale · a year ago · +2

    cool, that is amazing!

  • @RealElites · a year ago · +1

    Awesome! Thank you!

  • @joshuagrant3821 · a year ago · +2

    Wow. A realistic person on YouTube who works with AI... How refreshing. You're fantastic.

    • @TokenizedAI · a year ago

      What's unrealistic about the others? 🙂

    • @flickwtchr · a year ago

      And how refreshing that he talks at a normal pace, unlike the ubiquitous fast-talking, rapid-cut youtubpreneur. I no longer have patience for that style and just click away immediately.

  • @carlovitulo · a year ago · +1

    Awesome. It’s exactly what I need

  • @thecatslastword7782 · a year ago · +2

    Regardless of what anyone says, this tutorial was VERY helpful. Definitely one of the better tutorials out there on creating a consistent character. I will be binging the whole YT series and looking up any courses you have! Thanks for doing what you do.

  • @carlmartin8723 · a year ago · +1

    very good information, well presented, thank you

  • @iCosmictube · a year ago · +1

    Wow!!! Thank you 🙏🏻

  • @shrvn110 · a year ago · +1

    Have I mentioned how much I love your work?

  • @ryanhowell4492 · a year ago · +1

    Beautiful

  • @MAXTHALOS · a year ago · +1

    Very good tutorial !

  • @think.feel.travel · a year ago · +2

    Great tutorial as usual!
    A question: do you think it can be useful to give a name (as you did with Carla Caruso) also when training other things such as graphic styles, objects, and so on?

    • @TokenizedAI · a year ago · +6

      I've never really thought of it, to be honest. Maybe you should try that and share your findings with us :)
      That makes me wonder. I think I really should start a community Discord for everyone watching. There have been some really great discussions and suggestions in the comments.

  • @danwpc
    @danwpc a year ago +1

    Hey, Christian, thanks for your fantastic work. I walked through each video in this series. It sounds like focusing on Part 5 is the most reliable way to go. How would you recommend learning how to train Midjourney with an image of myself or someone I photograph? Would the principles used with Carla apply similarly, or are there some critical in-between steps?

    • @TokenizedAI
      @TokenizedAI a year ago

      Actually, I'd recommend Part 7 since it's the latest one and shows a reliable way of maintaining some consistency. In the end though, all parts provide insight into what can be done to tackle the problem.

  • @veilofreality
    @veilofreality a year ago +1

    Thanks for all the info you're putting out. I just subscribed. I have a question: is it the same for environments? Does the seed concept also work for environments?

    • @TokenizedAI
      @TokenizedAI a year ago

      I honestly don't know. I haven't experimented with that yet.

    • @veilofreality
      @veilofreality a year ago

      @@TokenizedAI So, if I may ask, how do you deal with the problem of having a character act inside a constant environment, like a room where, for example, you want the door, windows and furniture to be consistent? Would that represent an insurmountable problem?

  • @atomicdesignshop
    @atomicdesignshop a year ago +1

    Awesome, I gotta jump into Midjourney

  • @caseykc5540
    @caseykc5540 a year ago +1

    Excellent tutorial, it helps me a lot, thanks

  • @dan0_0nad76
    @dan0_0nad76 a year ago +2

    Hey, thank you very much for this series, it is immensely useful. Is it necessary to create a separate server for the creation of each character, or is it possible to just use the Midjourney bot directly?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      No, you can do this anywhere where you have access to the Midjourney Bot.

  • @sootheyoursoul4979
    @sootheyoursoul4979 a year ago +1

    Very interesting, and I love it. Can you do a series for kids, please?

    • @TokenizedAI
      @TokenizedAI a year ago

      Can you define "for kids"? Cause most people who watch the channel are aged 30-50.

  • @Kali_Ahkil
    @Kali_Ahkil a year ago +1

    Hey there, I'm still very much a newbie, so I really do appreciate your content and your transparency

    • @TokenizedAI
      @TokenizedAI a year ago

      Check out my dedicated video on that here on the channel. It explains everything.

  • @martinzokov
    @martinzokov a year ago +2

    Awesome video! I was wondering if Midjourney would be able to keep that character and style across multiple prompts though. So let's say you've got your Carla in Marvel comic style, would it be able to generate multiple comic book panels with the same character but in different poses? And do you think it'll reproduce the character if you supply it a link to an image URL if you've found the Carla you want to keep consistently?

    • @TokenizedAI
      @TokenizedAI a year ago +2

      That's something I'll cover in Part 2 when we get to prompt "action scenes". Part 1 was just about getting the basics right for a portrait.
      I wouldn't recommend using an image prompt simply because it introduces an "uncontrollable" element that will be blended with whatever you put in the prompt. As long as v4 doesn't support image weight, I'd avoid it.
      Especially since I found that it's not really necessary anyway. Knowing how to craft multiprompts for bigger scenes is far more important.

  • @johnmc9073
    @johnmc9073 a year ago +3

    Hey! Will you be making a video or videos in relation to how Quality and Style work? That would be awesome if you touch on that information, I may be able to learn something new from it. Either through /settings options or manual options inside the prompt :D

    • @TokenizedAI
      @TokenizedAI a year ago +2

      Possibly, but I have a very long list of other topics to cover first.

    • @johnmc9073
      @johnmc9073 a year ago

      @@TokenizedAI Good to know :D

  • @barps
    @barps a year ago

    Hey mate, nice video as always! Was wondering if you know a way to have the generated character in an image positioned either left or right, and maybe zoomed out so it doesn't take the full space... Thanks mate and keep these videos coming!

    • @TokenizedAI
      @TokenizedAI a year ago

      You can control that a bit by describing it in the prompt. Alternatively, be more specific about what's on the other side of the image.

  • @SecretsofancientEgypt
    @SecretsofancientEgypt 5 months ago +1

    Amazing free course :) A note: using the new Midjourney 5.2, it doesn't give you sets of new images if I use the seed number for 1 picture. It keeps giving me the same exact images from the 4 images that included this seed image. So there are now no variations when using the 5.2 version. Which brings up a new issue: if I am still not happy with that seed number and want to change it a bit, then what to do? I think we now have to use the subtle variation (I think, not really sure). It is fun to play with, but a bit time consuming when we have to keep changing with each new version released. What version did you use here? Was it Midjourney 5? But no matter what, I love your explanations and the depth you go into. Amazing, and thank you

    • @TokenizedAI
      @TokenizedAI 5 months ago

      Yes, this behaviour has been around for about 10 months. This video is old (probably v4). Please check the description for a disclaimer.

  • @michaelsbeverly
    @michaelsbeverly a year ago +1

    OKaY....I've watched this video maybe 4 times (parts of it 6 or 7 or 8 times) and FINALLY I get the Back to the Future reference, because you're pausing and going forward in time...hahaha......

    • @TokenizedAI
      @TokenizedAI a year ago

      LOL 😅 Better late than never.

  • @yanaheinstein
    @yanaheinstein a year ago +1

    Seems to be my first thing to do on Monday. ☀️

  • @ghleader2179
    @ghleader2179 a year ago +2

    I loved watching your videos. You explain very well, so I subscribed. Do you have any idea how to generate a face from a photo that would look a lot alike? When I send a picture of me to Midjourney it has a lot of trouble creating a true facial likeness. Do you have a tip?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      You need to use at least 3-4 images with different angles and add them all as image prompts. A single image prompt usually isn't enough.

  • @peps402
    @peps402 a year ago +1

    Amazing! Thank you, mister :)

  • @Juniorgamers99
    @Juniorgamers99 a year ago +1

    Great man

  • @cashflowdriven7036
    @cashflowdriven7036 a year ago +1

    Hey, awesome job. I'd like to know: in the end, what would MJ send you if you just put "imagine Carla Caruso"? Would it deliver her like that? Or would it start from scratch?

    • @TokenizedAI
      @TokenizedAI a year ago

      I actually show 3 sets of images in the video. If you just enter the name, you'll get images that look nothing like her. You need to use it in combination with the description.

  • @user-pc7ef5sb6x
    @user-pc7ef5sb6x a year ago +3

    This technique becomes even more effective if you give her an actor's name when you want action scenes

    • @TokenizedAI
      @TokenizedAI a year ago +2

      Yeah, using real actor names is very effective. But I often worry about the face of the real actor bleeding into the image.

  • @kosmar
    @kosmar a year ago +1

    great insights

  • @maximilianrepstat3716
    @maximilianrepstat3716 a year ago +2

    Thanks for the video!! Do you think it's good to add an art style to the prompt, or is it better to leave art style information out of the prompt, regenerate till you find the right one, and then follow your method?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      You can always change the art style later, at any time. So I don't really see the value in doing that. I think it doesn't really matter. Do whatever works best for you.

  • @Henry_Drae
    @Henry_Drae a year ago +1

    I loved this, your tips are worth gold! I have two questions: is the Midjourney trial enough to train it for a consistent character? And secondly, can this work on other systems like Stable Diffusion or other apps that try to emulate Midjourney but don't reach its quality?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      1. Technically yes, but then you wouldn't really have much Fast GPU time left. What's keeping you from just getting the Basic subscription? There's little point in trying to do this with a trial account.
      2. Stable Diffusion works very differently from MJ, so I don't really know how one would replicate this process, to be honest.

    • @Henry_Drae
      @Henry_Drae a year ago +1

      @@TokenizedAI Thank you for your response!
      It's not that I don't want to subscribe because I'm stingy; it's just that I live in Argentina and here the dollar is really very expensive, almost like in Venezuela, so it ends up being prohibitive. However, I pay for some subscriptions to other services I work with because I can eventually recover the investment.

    • @TokenizedAI
      @TokenizedAI a year ago +1

      Well, you can always create more than one trial account. Though I don't want to promote that strategy. As long as you have access to the seed from the first account you can continue with the second.

    • @Henry_Drae
      @Henry_Drae a year ago +1

      @@TokenizedAI I understand you perfectly; it is not ideal to promote these actions, but as a resource it is valid. Thank you very much for your help and understanding!

  • @elex05
    @elex05 a year ago +1

    this is a gold mine 👀

  • @hardtruth7555
    @hardtruth7555 a year ago +1

    Does comma placement act as a stopper for which parts of the prompt are affected by weight values? Example: in "tall guy in trench coat, looks like batman::1", does only the "looks like batman" segment get weighted, or is everything before the weight modifier given the weight value regardless of comma placement?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      No. The comma has some influence, very much like it would in regular written language. But it doesn't delimit the segment. So in the case of "tall guy in trench coat, looks like batman::1", the comma influences how the 2 partial phrases may be interpreted, but the weight applies to the entire segment as a whole.
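      The splitting behavior described in this reply can be sketched in a few lines. This is a hypothetical toy parser, not Midjourney's actual implementation: it only illustrates that segments break at "::" (with an optional numeric weight), while commas inside a segment are ordinary phrasing.

      ```python
      import re

      # Toy illustration: Midjourney-style multiprompts split into weighted
      # segments at "::", never at commas. The weight after "::" applies to
      # the entire preceding segment as a whole.
      def parse_multiprompt(prompt, default_weight=1.0):
          segments = []
          pos = 0
          # Each segment runs up to "::", optionally followed by a numeric weight.
          for m in re.finditer(r"(.+?)::\s*(-?\d+(?:\.\d+)?)?", prompt):
              text = m.group(1).strip()
              weight = float(m.group(2)) if m.group(2) else default_weight
              segments.append((text, weight))
              pos = m.end()
          rest = prompt[pos:].strip()
          if rest:  # trailing text with no "::" gets the default weight
              segments.append((rest, default_weight))
          return segments

      print(parse_multiprompt("tall guy in trench coat, looks like batman::1"))
      # One segment: the comma does not split it, the whole phrase carries weight 1.
      ```

      Running it on the example from the question yields a single segment, matching the reply: the comma never creates a second weighted segment.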

  • @free_music_sound
    @free_music_sound a year ago +2

    GREAT!!!!

  • @manuelprsnl
    @manuelprsnl a year ago +1

    Love your videos. I've been playing around with this and the name doesn't seem to be doing anything. I copied your prompt and tried different names, and Carla Caruso doesn't seem to be making more consistent characters than any other name or no name at all. I think the seed and "beautiful woman" are doing most of the heavy lifting here.
    Also, I suspect that making what Midjourney understands as a beautiful woman (Caucasian woman in her 20s with flowy hair, big eyes, thick lips, prominent chin, correct makeup, big breasts, thin waist, etc.) is probably way easier to make look consistent, and the more we deviate from there the harder it's going to be, but we'll keep learning :)

    • @TokenizedAI
      @TokenizedAI a year ago

      Ironically, the MJ team has confirmed that names actually DO make a difference, while the "training" part in this video technically does not. So, using a specific seed + giving it a name should usually work.

    • @manuelprsnl
      @manuelprsnl a year ago

      @@TokenizedAI wow that's super interesting, thanks!

    • @manuelprsnl
      @manuelprsnl a year ago +1

      @@TokenizedAI so with this knowledge, do you recommend that for action scenes we use the name or nah?

    • @TokenizedAI
      @TokenizedAI a year ago

      @@manuelprsnl There's no harm in using it. But it becomes less important if your characters are no longer the dominant subject of a scene (i.e. they are only a small part of the image).
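      The "seed + name" advice in this thread amounts to reusing the same fixed pieces in every prompt. A minimal sketch, assuming a hypothetical helper (the function, its parameters, and the sample text are illustrative, not a Midjourney API):

      ```python
      # Sketch of the thread's advice: keep consistency across scenes by
      # reusing the same invented name, base description, and --seed value.
      # build_prompt is a hypothetical helper, not part of Midjourney.
      def build_prompt(name, description, scene, seed):
          # Everything before the --seed flag is the ordinary text prompt.
          return f"{name}, {description}, {scene} --seed {seed}"

      NAME = "Carla Caruso"
      DESC = "beautiful woman, long dark hair, portrait photography"

      for scene in ["walking through a busy market", "reading in a cafe"]:
          # Only the scene changes; name, description, and seed stay fixed.
          print(build_prompt(NAME, DESC, scene, 1234))
      ```

      Each generated prompt differs only in the scene description, which is the controllable part; the name, base description, and seed stay constant from prompt to prompt.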

  • @carlvanderpal5321
    @carlvanderpal5321 a year ago +2

    I was looking for this exact same thing the other day, as I ran into the same issues; once I got it good, it would not get back to the original one. So I will try this method now. But since my characters are more cartoon, I will see how this goes

    • @TokenizedAI
      @TokenizedAI a year ago +1

      Let me know how it works out for your use case!

  • @8ball758
    @8ball758 a year ago +1

    Here it is! Man, you're the best!

    • @TokenizedAI
      @TokenizedAI a year ago

      Well....Part 1 at least. Part 2 should be even better 😁

  • @seradamframpton-oyamot4828
    @seradamframpton-oyamot4828 10 months ago

    Chris, I have full confidence in your words, but I'm curious if there's a video documenting Midjourney that shows the effects of leaving the 😍. Knowing this would be a game changer for what I'm working on. Please share the video if available.

    • @TokenizedAI
      @TokenizedAI 10 months ago +1

      Read the video description

    • @seradamframpton-oyamot4828
      @seradamframpton-oyamot4828 10 months ago

      @@TokenizedAI I read the description, but it didn't clarify which information was misinformation 😅 If that makes sense! Either way, the process worked tremendously for me 🙌 or maybe I'm just one of the lucky ones 😉 Anyways, thanks, Chris, for the useful misinformation 😄 Looking forward to the next new video 🎉

    • @TokenizedAI
      @TokenizedAI 10 months ago +1

      So basically, you can't "train" Midjourney, so the rating/feedback process I show here doesn't do anything. If it feels like it works, it's just your imagination 😉
      However, using a specific name within the prompt can help with consistency. Although it's not absolutely necessary.

    • @seradamframpton-oyamot4828
      @seradamframpton-oyamot4828 10 months ago

      @@TokenizedAI I appreciate the feedback

  • @NeonXXP
    @NeonXXP a year ago +1

    Training an embed on local Stable Diffusion is also very good for creating a unique consistent character.

    • @TokenizedAI
      @TokenizedAI a year ago +2

      Yep, but this was supposed to give MJ users a potential solution. Most MJ users don't use Stable Diffusion.

  • @DiegoSilvaInstrutor
    @DiegoSilvaInstrutor a year ago +2

    I'm from Brazil, I follow tutorials created by colleagues from here, and this content of yours, from someone far away, enriched me with your wisdom. I have a doubt: will this process also work when I put my photo at the beginning of the prompt and add characteristics until I find the ideal photo, and then do this process you taught? Does it work?
    I am grateful and I follow you. Hugs

    • @TokenizedAI
      @TokenizedAI a year ago +2

      Thanks for the feedback. This process will not work with your own photo. It only works with characters created within Midjourney.

  • @haroldmitts
    @haroldmitts a year ago +2

    Great explanations and content. I wonder if you have tried generating with multiple characters. It seems like this is quite difficult and any tips you have would be greatly appreciated.

    • @TokenizedAI
      @TokenizedAI a year ago +1

      I have generated scenes with multiple characters but not to the point of explicitly describing details for more than just 1 of them.

  • @baiamankurmanbaev8631
    @baiamankurmanbaev8631 a year ago +1

    Hi Christian, thank you for your work! I just wanted to know: can I create two characters, take their seeds, and combine them in one image as two chosen characters speaking or acting somehow? I'm trying to make manga; if it works, it would be nice. The question again is: can I use two seeds in one image? Will it preserve the characters' styles?

    • @TokenizedAI
      @TokenizedAI a year ago

      Afraid not, because that's not really how seeds work.

  • @Zomfoo
    @Zomfoo a year ago +2

    Naw. I thought those were photographs of a real actress. I’m shocked! 😱😏

  • @shakaama
    @shakaama a year ago +1

    MY GOD.
    This is exactly what I wanted to do.

  • @skdskd4822
    @skdskd4822 a year ago +1

    Great video tutorial. I'm sorry for my newbie question, but how did you create a channel specifically for this character, with the Midjourney bot in it?

    • @TokenizedAI
      @TokenizedAI a year ago

      Watch this: ruclips.net/video/sihp8OSOH6k/видео.html

    • @skdskd4822
      @skdskd4822 a year ago

      Thanks

  • @jgeise
    @jgeise a year ago +1

    If, after upscaling, you train the model for positive features from the prompt; would it also help to upscale one that you didn't like and rate it with the sad face from the far left (kind of like a negative prompt)?

    • @TokenizedAI
      @TokenizedAI a year ago

      Good question! I haven't tried that. I don't know how relevant that would be since we keep switching the seed during the "training" process.

  • @nanaverse2787
    @nanaverse2787 a year ago +6

    Thank you. I have been trying to make consistent characters for a long time. Much appreciated.
    In part 2 can you please mention how we can make several consistent characters together in varied scenes.
    I am trying to write a picture story book of 4 children.

    • @TokenizedAI
      @TokenizedAI a year ago +9

      I think "several consistent characters" is probably something I'd have to save to Part 3. I haven't really tried to do that yet, to be honest. Sounds very challenging too. But thanks for the idea! 🙂

    • @bryanorosco9616
      @bryanorosco9616 a year ago +1

      Hey @@TokenizedAI, don't forget part 3 with this idea
      ;) great explanation

    • @TokenizedAI
      @TokenizedAI a year ago

      This won't be in part 3 because I need to figure it out first 😂

  • @mehdidarvish5628
    @mehdidarvish5628 a year ago +1

    Thanks a lot, very helpful

  • @basiccomponents
    @basiccomponents a year ago +1

    Do you think it's possible to use a process like this to target a style rather than the facial features?
    I'm trying to use this with --niji but I keep getting inconsistent results, probably because I don't really know how to put into words the style I'm looking for.
    This video was amazing, thank you for sharing such clear instructions!

    • @TokenizedAI
      @TokenizedAI a year ago +1

      I actually hate Niji mode. I can create much better anime with the regular v4. If you have the style of a particular show in mind (Death Note, Naruto, or DBZ), it's pretty good at understanding those. Alternatively, use an image prompt with the style. But you'll still need to write a good text prompt to support it.

    • @basiccomponents
      @basiccomponents a year ago

      ​@@TokenizedAI I gave -v4 a serious try and I have to agree with you, the results are more consistent, thanks for the suggestion!
      Also, to target the style I want, I'm feeding it a lot of images in the prompt, along with some text, but after 8 images it starts to give me the "Invalid link!" error on images' links that it accepted in the prompt just before that, do you have an idea on how to avoid this?

  • @thebluriam
    @thebluriam a year ago +1

    Question(s) the Second: How should we think about the differences between seed values and reference images in prompts? Is it known what the effects are, positive or negative, if we use the same image as a reference that we gain a seed from, changing the prompt slightly; do we get a double reference effect?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      I think that depends. I would discourage using an image prompt for this because you have no way of controlling the weight of the image versus the text prompt in v4 (at least for now). Whatever the image reference contains will likely mess around with your text prompt, because it tries to blend them together (though you can't really control how).
      It might work if the reference and the text prompt are effectively portraits. But if either one contains a bigger scene, while the other doesn't, you're going to have a tough time I think.

    • @thebluriam
      @thebluriam a year ago +1

      @@TokenizedAI Are reference images the strongest weight within a prompt then? I really should be playing with these things systematically like you, making videos to document all this too lol!
      I'm trying to suss out exactly what the seed value does and what information it encapsulates. Any ideas?
      Side question: Do we know what the seed token actually is? I imagine it's a serialized value from something like the prompt plus other data. I know we can set alias seed values manually, but I believe those are account-level pointers to the real seed values, correct?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      I'm not sure, but I've found that image prompts usually have too much weight in v4 to reasonably use them in combination with text prompts, unless you use a lot of segments in a multiprompt. Maybe it just has a weight of 1? Who knows.
      As for the seed, my understanding of the documentation is that it's literally just a numerical value that adds varying levels of noise. I don't think it actually represents anything meaningful and is just a randomizer that is meant to create more varied and interesting results.

    • @thebluriam
      @thebluriam a year ago +2

      @@TokenizedAI I don't know, I would be willing to bet that the base seed value is actually a serialized value of some sort, sort of like how JWS tokens work, but perhaps on a lower payload scale. I should probably do some reading before I talk too far out of my ass. But either way, as a software engineer, the seed value looks suspiciously like a serialized object.
      I'll look it up and get back to you. If I'm right about the serialization, it probably means the value can be decoded to get the metadata being used to generate the details of the image. It'll make for some damn interesting content for you lol!
      Also, it'd be interesting to test how much weight an image URL has vs a single word in a prompt: just keep adding weight to the single descriptive word until the word takes prominence over the reference image context.
      Should I be creating videos about this too?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      @@thebluriam I can code but I'm not a pro developer so I think that ball is in your court 😂
      I totally support doing more of these experiments because there are simply too many variables for me to cover all of them 😁
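      The "seed is just a randomizer" idea from this thread can be illustrated with a toy generator. This models the general diffusion-seed concept only (a seed initializes a pseudo-random noise source, so the same seed reproduces the same noise), not Midjourney's internals; the function name and sizes are made up for illustration.

      ```python
      import random

      # Toy illustration of the seed discussion above: a seed is a plain
      # number that initializes a pseudo-random noise source. Reusing the
      # seed reproduces the identical starting noise, which is why fixed
      # seeds keep results similar across runs.
      def noise_field(seed, size=4):
          rng = random.Random(seed)  # independent generator with a fixed start state
          return [round(rng.random(), 3) for _ in range(size)]

      same_a = noise_field(1234)
      same_b = noise_field(1234)
      other = noise_field(9999)

      print(same_a == same_b)  # same seed: identical noise
      print(same_a == other)   # different seed: different noise
      ```

      Nothing in the seed itself encodes the prompt or image metadata here; it is only the starting point of the random sequence, consistent with the "adds varying levels of noise" reading of the documentation.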

  • @Morigirth
    @Morigirth a year ago +1

    Awesome explanation! How long does Midjourney actually 'remember' this sort of training? Could you still conjure up more consistent images of this particular character after a while of not using Midjourney? And what happens if you start creating a new character or just totally different images after Carla's session? Will it 'forget' the character, or will using the name bring her back?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      Some have pointed out that MJ isn't actually able to "remember" and it's all just perception. So I honestly don't know. That being said, creating a new character isn't a problem. In the end, your description of the character is what primarily determines the look.

    • @benp7328
      @benp7328 a year ago +1

      @@TokenizedAI Potentially what's going on is that you are settling into a local minimum for the seed number, i.e. a number that produces stable images and doesn't flip to other details easily. So the seed number essentially IS the memory. The process you are going through is just exploring a random set of seeds and stopping when you find some that don't vary much. It feels like this process could be automated.

    • @achliscantplay4202
      @achliscantplay4202 a year ago +2

      MJ developers confirmed that users' reactions do not influence or train anything; this is for future research and improvements on the developers' end. RUclips is full of tutorials on Midjourney where creators mistake confirmation bias for results. This channel is fantastic, and I am surprised that the author has added to misinformation 😭 If you want to train actual facial models with consistent features, look into 1111. Midjourney does not do it. There is no real-time "training" or "memory" of user actions.

    • @TokenizedAI
      @TokenizedAI a year ago +2

      @@achliscantplay4202 I find the term "misinformation" a bit harsh. It implies malicious intent.
      I explicitly explain at the beginning of the video where I observed someone else presenting this approach. I then tested it myself and felt compelled to share it. Was my own bias misleading me? Yes, perhaps. That's absolutely possible!
      Does that mean that I am intentionally spreading misinformation? No. What some of you seem to forget is that we're all on a learning curve here and we don't have all the answers right here and right now. Our own level of knowledge evolves over time, and sometimes it's just a matter of days or maybe a week or two. So some videos may end up being partially incorrect in hindsight.
      But does that invalidate the entire video? No, of course not.

    • @achliscantplay4202
      @achliscantplay4202 a year ago +1

      @Tokenized AI by Christian Heidorn Sorry if I came across harsh, I absolutely did not mean it, mate! English is my second language too, so we might be dealing with an added barrier to expression here 🙇‍♀️ I love MJ, and there is a wonderful thing they are doing - OFFICE HOURS, where developers answer questions in real time and just converse with the community in the voice channel. It is an absolute treasure trove of first-hand information, and I find it extremely helpful :)
      Also, in one of the Image Jams someone was trying to prove that they were able to train MJ to do better with hands, and kept arguing with the actual developers that "this is what they see", etc. The problem is this: yes, there is no direct harm, apart from people spending actual time, which is the most precious resource in our life, on something that does not work. I was very upset when I tried a tutorial someone did, spending hours on a project, and still was unable to get the results they described. This is where the harm is. I understand that your intentions are great, you make wonderfully creative and easy-to-understand content, I love your channel :)

  • @moonlightandfuchsias
    @moonlightandfuchsias a year ago +2

    Thank you! I'm trying it right now

  • @chainshot
    @chainshot a year ago +2

    Is it possible to train your face for similar scenarios with Midjourney, like with Stable Diffusion Automatic 1111, based on like 20-30 images?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      I honestly don't know. Thing is, this isn't the same thing as training your own model. And it's working with existing imagery created entirely in MJ. Doing this with your own face is considerably more difficult I would assume.

  • @alexg5576
    @alexg5576 a year ago +1

    Thank you for a very informative video series. However, for some reason I find I can't get the seed from the single upscaled pic, only from the 4-pic grid. So I can't really follow along. Could you offer any help in this regard?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      v5 doesn't even have an upscaler yet. That's why the single image doesn't have its own seed. It's just the individual image from the grid. The grid has its own seed.

    • @alexg5576
      @alexg5576 a year ago

      @@TokenizedAI Thanks very much. So should I use version 4 for this particular exercise?

  • @MrDomitros
    @MrDomitros a year ago +1

    Great tutorial, greatly appreciated! I decided to try it on a dragon, but it did not seem very responsive. Have you used this technique to make non-human characters?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      From what I hear, it's not very effective with non-human characters. 😔

  • @ZeeshanAliLeo
    @ZeeshanAliLeo a year ago +1

    Hi, I have been following your videos and was trying to read your blog website. I couldn't land on a page for this video. Do you have any geographic restrictions on accessibility?

    • @TokenizedAI
      @TokenizedAI a year ago

      My site gets a lot of DDoS attacks from certain regions which is why I have restricted some. You can get around it with a VPN though.

  • @laceyphillips5733
    @laceyphillips5733 a year ago +2

    How do you get MJ to message on your server and channel? Is this important for training it on a keyword, or can I DM MJ and just use my keyword in the prompt? My MJ DMs have several stories commingled.

    • @TokenizedAI
      @TokenizedAI a year ago

      If I understand your question properly, you'll want to watch this video: ruclips.net/video/sihp8OSOH6k/видео.html

    • @laceyphillips5733
      @laceyphillips5733 a year ago

      Beautiful, watching now

    • @laceyphillips5733
      @laceyphillips5733 a year ago +1

      @@TokenizedAI So easy - it worked! Thanks!

    • @TokenizedAI
      @TokenizedAI a year ago

      @@laceyphillips5733 Awesome :)

  • @AlkanthorTheDragon
    @AlkanthorTheDragon a year ago +1

    Can this be applied in Stable Diffusion (Automatic 1111)? As far as I am aware, you can use only 1 reference image. Is there an extension to change that?

    • @TokenizedAI
      @TokenizedAI a year ago +1

      No, I don't think so. You're much better off training your own model if you're using Automatic 1111. I haven't done it myself yet though, so I can't really explain how it's done.