Understand PROMPT Formats: (IN 3 Minutes!!)

  • Published: 23 Sep 2024
  • The most important skill an AI artist needs is being able to engineer prompts that produce the results you want. Here's the best format I've found for getting consistent results when creating prompts!
    Stable Diffusion Online Website:
    beta.dreamstud...
    Stable Diffusion Tutorial Playlist:
    • Using Stable Diffusion...
    AI Generated Art - (Commissions OPEN) Contact:
    www.fiverr.com...
    My Art-Station Store Link:
    www.artstation...
    If you enjoyed this video, please consider joining the Support Squad by clicking "Join" next to the like button, or help protect the channel on Patreon, it really helps out and I truly appreciate the support -
    / royalskies
    Twitter at: / theroyalskies
    Instagram: / theroyalskies
    TikTok: / theroyalskies
    Blender RIGGING & ANIMATION SERIES:
    • Blender 2.82 : Charact...
    Anime Shader Tutorial Series: • Blender Anime Shading ...
    Blender MODELING SERIES: • ENTIRE Low Poly Blende...
    Intro To Unity Programming FULL Series:
    • Introduction To Game P...
    If you're a gamer, check out my latest game on itch.io!
    royalskies.itc...
    I also have a steam game that combines Starfox with Ikaruga gameplay!
    It took over 3 years to create :)
    store.steampow...
    As always, thank you so much for watching, please have a fantastic day, and see you around!
    - Royal Skies -
    #stablediffusion #aiart #art
    ---

Comments • 179

  • @TheRoyalSkies
    @TheRoyalSkies  a year ago +32

    Who are some of your other favorite artists you like to draw styles from??

    • @mariocano7263
      @mariocano7263 a year ago +2

      Craig Mullins!

    • @wedgeewoo
      @wedgeewoo a year ago +5

      Francisco Goya and Francis Bacon. Using them with the Composable Diffusion weights makes for some great Halloween pictures!

    • @baraneo8195
      @baraneo8195 a year ago +1

      Yusuke Murata

    • @LtDan2K
      @LtDan2K a year ago +1

      Casey Baugh and (Michael Garmash) and Donato Giancola and Greg Rutkowski and (Alphonse Mucha)

    • @CYkillone
      @CYkillone a year ago +1

      Makoto Shinkai (he's an anime movie director but the art style in his movies is awesome.)

  • @quantumangel
    @quantumangel 5 months ago +11

    So, basically: "don't do hard stuff"
    Solid prompting advice!

  • @thefablestudio
    @thefablestudio a year ago +8

    Thanks for breaking down prompt engineering in just 3 minutes! This clears up a lot of confusion I had.

  • @EE9EER
    @EE9EER 6 months ago +3

    INSANELY USEFUL. Combined with a good negative-prompt embedding, you're flying through colors; amazing results. A must-have.

  • @wendten2
    @wendten2 a year ago +106

    The new version of the AUTOMATIC1111 webui for Stable Diffusion includes "Composable Diffusion". Basically, you can run two or more prompts at once by using the "AND" keyword in the prompt; it will generate the two different styles and merge them together. This makes your results way more precise.

    • @uthergoodman401
      @uthergoodman401 a year ago

      Do you know of one that can combine two images together? I know Midjourney can do it now, but I'd rather have one for Stable Diffusion with more precise options.

    • @clarksellary9002
      @clarksellary9002 a year ago

      @@uthergoodman401 you can do it with (object1|object2)

    • @Maple38
      @Maple38 a year ago

      @@uthergoodman401 Elaborate what you mean

    • @uthergoodman401
      @uthergoodman401 a year ago

      @@Maple38 like using img2img but feeding multiple images to get a single combined result

  • @adorephoto
    @adorephoto 5 months ago +4

    OK, how do you prevent it from always cutting off images? I want the entire creation contained within the borders of the frame. Also, is there a way to correct little mistakes in an otherwise perfect image? Photoshop isn't an option, since it censors things like a human foot, etc.

  • @vi6ddarkking
    @vi6ddarkking a year ago +12

    The good thing about these tools is that time is on our side; they will get better with each update.

  • @baraneo8195
    @baraneo8195 a year ago +6

    oooh, saw a new video from royalskies, and stopped what I was doing. I've been waiting for this. Keep up the great content man. God loves us always! Amen!

  • @facetheslayer_8996
    @facetheslayer_8996 11 months ago +2

    I came across your video because I was tired of getting such chaotic images lol. A tutorial here and there is always helpful. I will be trying this soon, once I get on the PC.

  • @da_roachdogjr
    @da_roachdogjr a year ago +8

    RS is quickly becoming an AI guru.
    Thanks for this. I had kinda figured this out by trial and error but this goes way more in depth as to understanding the why.

  • @juanjosepenabecerra7746
    @juanjosepenabecerra7746 a year ago +9

    Lately I've seen that the AI gets better results when you add a few specific negative prompts.

    • @LtDan2K
      @LtDan2K a year ago +8

      Negatives are actually MORE powerful than your positive prompt. The cap on negatives is much higher as well: SD can only handle 75 positive tokens (words/concepts), but many more in the negative.
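
For readers driving Stable Diffusion from Python instead of a webui, here is a minimal sketch of the negative-prompt idea from this thread, using the Hugging Face diffusers library (the model name and prompt strings are illustrative, not from the video):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.5 (illustrative checkpoint) on the GPU in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers generation *away* from the listed concepts.
image = pipe(
    prompt="digital painting of a knight, highly detailed, sharp focus",
    negative_prompt="blurry, bad anatomy, extra limbs, poorly drawn hands",
    num_inference_steps=30,
).images[0]
image.save("knight.png")
```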

  • @gdizzzl
    @gdizzzl a year ago +5

    You can use "AND" to merge styles (see the sketch below).
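
The "AND" syntax here and the "(object1|object2)" form mentioned in the Composable Diffusion thread above are webui prompt-text features, not Python APIs. A sketch of what such prompt strings look like; the subject matter is illustrative, and note that the AUTOMATIC1111 wiki documents alternation with square brackets:

```python
# Composable Diffusion in the AUTOMATIC1111 webui: "AND" denoises each
# sub-prompt separately and merges the results into one image.
merged_styles = (
    "portrait of a knight, by Greg Rutkowski"
    " AND portrait of a knight, by Alphonse Mucha"
)

# Alternating words: the sampler switches between the terms every step.
# The webui wiki documents this with square brackets.
alternating = "portrait of a [knight|samurai]"
```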

  • @agsystems8220
    @agsystems8220 a year ago +19

    Stable Diffusion uses an outsourced prompt-parsing network, so it is not really up to them. It is possible a new tokenizer could be created, but that would be a major undertaking requiring a slightly different skill set than the Stable Diffusion team has. Frankly, the current tokenizer is a bit of a hack; you just have to look at how often things like "8k" and "octane render" appear to see that it is not ideal.

  • @JumperAce
    @JumperAce a year ago +3

    Prose writing works very well too; the robot is smarter than people give it credit for.

    • @LtDan2K
      @LtDan2K a year ago +2

      as does nonsense lol

  • @Doc_Animator_N
    @Doc_Animator_N 5 months ago +1

    2:36 Media of Subject, description, artist
    3:04 Increase number of steps for best quality.

  • @bill5304
    @bill5304 7 months ago +1

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 *Understanding machine prompt limitations is crucial for effective prompt engineering.*
    00:56 💡 *When crafting prompts, expect to reliably get only two out of the three specified elements (subject, object, descriptor).*
    01:38 📝 *Best prompt format: Start with media type, followed by subject, limit objects to two, then use descriptors, and finally specify the desired artist or style.*
    02:34 🎨 *Common artist combinations that work well: Artgerm, Greg Rutkowski, and Alphonse Mucha. Stick to the format: media, subject, object, descriptors, artist/style.*
    03:03 🌟 *Crafting a comprehensive prompt example: "Beautiful cottagecore fantasy young blue Victorian princess holding a flower, full body shot, intricate, elegant, highly detailed, digital painting, trending on ArtStation, concept art, smooth, sharp focus, illustration, by Artgerm and Greg Rutkowski and Alphonse Mucha."*
    Made with HARPA AI
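
The format in these takeaways (media, subject, at most two objects, descriptors, then artist/style) is easy to mechanize. A small sketch that assembles a prompt in that order; the part strings are illustrative:

```python
# Order matters: media type first, artists last, per the video's format.
parts = [
    "digital painting",                                  # media
    "young Victorian princess",                          # subject
    "holding a flower, full body shot",                  # objects (keep to two)
    "intricate, elegant, highly detailed, sharp focus",  # descriptors
    "by Artgerm and Greg Rutkowski and Alphonse Mucha",  # artists / style
]
prompt = ", ".join(parts)
print(prompt)
```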

  • @kuromiLayfe
    @kuromiLayfe a year ago +3

    Going more descriptive (making it a sentence instead of chopped-up parts) also helps guide the web version toward what you are looking for.

  • @pn4960
    @pn4960 a year ago

    Your channel is a gold mine!

  • @liquidphilosopher1816
    @liquidphilosopher1816 a year ago +2

    What gpu are you using?

  • @nspc69
    @nspc69 a year ago +1

    For color, we can use img2img.

  • @sayuriartsy5108
    @sayuriartsy5108 a year ago +2

    Now I can wear a blue dress in front of a pink castle while holding a yellow umbrella!

  • @DrFeho
    @DrFeho a year ago +7

    Finally, some great tips and tricks. Thanks for everything! I can't wait for more tips and tutorials.

  • @Cybored.
    @Cybored. a year ago +11

    If you want it to follow your prompts, try increasing your CFG scale, or add the part you want it to focus on between ( ), or give it less focus with [ ].

    • @atom6_
      @atom6_ a year ago +1

      Exactly, the description is right there on the screen.
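
A sketch of both knobs this comment mentions. In diffusers, the webui's CFG scale is the `guidance_scale` argument; the ( ) / [ ] attention syntax is specific to webui prompt text. All prompt strings are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Higher guidance_scale = stricter adherence to the prompt (default ~7.5).
image = pipe(
    prompt="a fox wearing a red scarf, snowy forest, digital painting",
    guidance_scale=12.0,
).images[0]

# In the AUTOMATIC1111 webui the same nudging is done inside the prompt:
# (red scarf) upweights a phrase, [snowy forest] downweights it.
a1111_prompt = "a fox wearing a (red scarf), [snowy forest], digital painting"
```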

  • @kenhiguchi2144
    @kenhiguchi2144 a year ago

    Wow that prompt stuff is crazy

  • @thatsalot3577
    @thatsalot3577 a year ago

    I wanted to know this for such a long time, thanks dude

  • @jppj3278
    @jppj3278 5 months ago

    What if you want to take an existing RPG portrait and make it HD? The prompts always seem to change what the image is instead of making it HD.

  • @yohannesJas
    @yohannesJas 11 months ago

    As always, a fantastic video 👏🤝

  • @arekkubiak5957
    @arekkubiak5957 a year ago +8

    Just FYI, Rutkowski is pronounced 'root kovski'

    • @LuaanTi
      @LuaanTi a year ago +1

      I'd love to explain to him how Mucha is pronounced, but English doesn't have anything similar :D

  • @MorganRG-ej8dj
    @MorganRG-ej8dj 7 months ago

    Is this applicable to automatic1111 too?

  • @Enite
    @Enite a year ago +1

    I keep getting multiple people in my generations. Been using a negative value of "crowd" and it has helped (some). Noticing you're getting mainly solo subjects. Curious what I'm doing wrong!

    • @joanalbertmayolcolom
      @joanalbertmayolcolom a year ago +3

      It is mainly related to resolution. With resolutions higher than 512*512, the probability of getting multiple people increases. In Automatic1111 there is an option called "high res fix" that helps avoid that, but it is a work in progress. Hope it helps!
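
The webui's "high res fix" generates at the native 512 resolution first, then upscales and refines, which is what avoids the duplicated-person artifact. A rough diffusers sketch of the same two-stage idea (model name and prompts illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

txt2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stage 1: generate at the model's native 512x512 to keep one subject.
base = txt2img("portrait of a woman, solo", width=512, height=512).images[0]

# Stage 2: upscale, then let img2img re-add detail at the larger size.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)
final = img2img(
    prompt="portrait of a woman, solo",
    image=base.resize((768, 768)),
    strength=0.5,  # low enough to keep the original composition
).images[0]
```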

  • @Gadogadochanel463
    @Gadogadochanel463 8 months ago +1

    Can you explain negative prompts? I'm just learning and need a lot of guidance. Thank you, I hope you read my comment 😊

  • @murgatroydfungus4352
    @murgatroydfungus4352 a year ago +2

    Sometimes when I try to get a full-body shot, the AI produces a horrible creature roughly describable as a human centipede. It shows up in about one of every 12 images I generate.

  • @sribalramdas6158
    @sribalramdas6158 a year ago +1

    Can you tell me how I even find artists? The three names you mentioned are the only ones I know, because I found them watching another video.

  • @luciafantin
    @luciafantin a year ago +1

    I have one very specific issue: I cannot get any AI to generate an image of a person with blue skin! Even if I remove everything else and write just "woman with blue skin", it always makes the hair blue or adds blue eye makeup, but never ever makes the skin itself blue! I don't get it lmao

  • @20xd6
    @20xd6 a year ago +1

    Can you do like a 10-minute tutorial on prompts for automatic1111 SD 1.5? Thanks!

  • @Mastersoraka
    @Mastersoraka a year ago

    Hi, Royal Skies! Thanks for the tutorial! Just asking: any advice to increase the chance of showing the full head from the prompt, or is it random chance?

  • @psylentsage
    @psylentsage a year ago

    So helpful so fast

  • @shakaama
    @shakaama a year ago

    Wait, so I already figured this out? And this is why my prompts were so long while other people's were like 5 words?

  • @Небокекисвета
    @Небокекисвета a year ago +2

    Hi! Is there a way I can add MY artworks as an artist style locally? I am an artist and I have a bunch of my drawings in my own style that I want to keep. I don't want to steal drawing styles from others.

    • @FlameForgedSoul
      @FlameForgedSoul a year ago +2

      Try the image2image option

    • @OttoMaticInc
      @OttoMaticInc a year ago +1

      You can use the automatic1111 interface to train a model on your images. I bet there are videos on YT explaining the process in detail. Just keep in mind that sample sizes below 100 pieces are probably not enough for consistent outputs.

  • @noobicorn_gamer
    @noobicorn_gamer a year ago

    0:00 Goddamnit I know it's AI image but man I'm floored on imagining how awesome she would be if she were real... dream waifu :3

  • @PinkHairedCoder
    @PinkHairedCoder a year ago +1

    Can you do a video on how to feed it art to learn from, or is there no way to make a custom database?

    • @LtDan2K
      @LtDan2K a year ago +2

      Look up embeddings/Textual Inversion. It takes at least 4 images and takes a while, but you can put literally anything into the model's database.
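
Training the embedding happens in the webui or a separate script; once you have the file, loading it from diffusers is one call. A sketch, where the file path and the `<my-style>` token are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a trained textual-inversion embedding and bind it to a new token.
pipe.load_textual_inversion("./embeddings/my-style.pt", token="<my-style>")

image = pipe("a castle on a hill, in the style of <my-style>").images[0]
```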

  • @artisunknown
    @artisunknown a year ago +1

    Thanks a lot

  • @giotto_4503
    @giotto_4503 a year ago +1

    2:04 - 2:05 HOL UP!

  • @caillousdad5786
    @caillousdad5786 8 months ago

    Got Xion from Kingdom Hearts in Monsieur Muk's style

  • @MarkWilder68
    @MarkWilder68 a year ago +5

    A lot of the time when I put "full body" it only gets from the neck down. Is there any way to make sure the image is always centered and you can see it from head to toe? Also, is there a way for the pictures to not be cut off on the sides? Sometimes the subject is halfway in the frame and half out. How do I stop things like that? Are there keywords? What can I use? Thank you.

    • @xn4pl
      @xn4pl a year ago +1

      Make a bunch of them, choose the one with the most body coverage, then outpaint to get a full-body picture.

  • @badoww4921
    @badoww4921 9 months ago

    I use Unstable Diffusion. I type a descriptive paragraph, and it works better the more I describe.

  • @cakeller98
    @cakeller98 a year ago +4

    Have you tried separating with | or {} or ( ) etc.? And THANK you for this series! It's massively helpful.

    • @SymbolCymbals2356
      @SymbolCymbals2356 a year ago +2

      using () will make the keywords inside have more influence

    • @cakeller98
      @cakeller98 a year ago +1

      @@SymbolCymbals2356 Yup, I went to the wiki and saw that. And | used with the "matrix" script will take the first part and create all combinations with each subsequent part: A|B|C|D will produce A, AB, AC, AD, ABC, ABD, ACD, and ABCD (see the sketch below).
      The wiki is worth a read for sure.

    • @AvHasselaar
      @AvHasselaar a year ago

      @cakeller98 Mind pointing us in the right direction for the wiki? 😉
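
To make the expansion above concrete, a small sketch of how a prompt-matrix style "A|B|C|D" string fans out: the first segment is fixed and every combination of the remaining segments is appended. The segment text is illustrative, and the webui's own script may order the results differently:

```python
from itertools import combinations

def prompt_matrix(prompt: str) -> list[str]:
    """Expand 'fixed | opt1 | opt2 | ...' into all option combinations."""
    fixed, *options = [part.strip() for part in prompt.split("|")]
    variants = []
    for n in range(len(options) + 1):
        for combo in combinations(options, n):
            variants.append(", ".join((fixed, *combo)))
    return variants

# 2**3 = 8 variants, from the fixed part alone up to all three options.
for v in prompt_matrix("a castle | at night | snowing | oil painting"):
    print(v)
```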

  • @vantyto
    @vantyto a year ago

    Very informative video.
    According to some guides, you can use (brackets), but I didn't try it yet.
    P.S.: that's how "Alphonse Mucha" is pronounced in English? Oh...

  • @albiceleste101
    @albiceleste101 a year ago

    I wonder if you can do painted by "rossdraws" or "kittew"

  • @mohamadsukrinurtyanto345
    @mohamadsukrinurtyanto345 a year ago

    Can I know what AI is used? Like, the application name?

  • @lionkingmerlin
    @lionkingmerlin a year ago +1

    thx

  • @rahul-qm9fi
    @rahul-qm9fi a year ago

    Hi bro, how can I use Stable Diffusion to make pictures based on a picture that I already have?

    • @willmfrank
      @willmfrank 6 months ago

      Use "img2img" instead of "txt2img" and drag and drop a picture into the image box.

  • @NeilOttoTep
    @NeilOttoTep a year ago +6

    Thanks for doing some SD videos, really helpful ;) You can also try "digital painting by Wojtek Fus, Maciej Kuciara"; I've gotten pretty good results out of that. Also, Alfons Mucha is pronounced All-fons Moo-ha, and Greg Rutkowski is Root-ko-vski.

  • @Saint_Eight
    @Saint_Eight a year ago +1

    Damn I was just wondering that

  • @thecbc7658
    @thecbc7658 a year ago

    2:35 is pretty useful

  • @Alarone007
    @Alarone007 7 months ago

    Huh? First you recommend putting the medium at the beginning and the subject second, then 30 seconds later you give an example with the subject first and the medium second?

  • @kensaiix
    @kensaiix a year ago +1

    why is "trending on artstation" a valid node? 🤔

  • @JarppaGuru
    @JarppaGuru 6 months ago

    0:42 Yes, it has those images trained in, so you can't get a blue dress when the trained image has yellow. Now take that image and tell it to change. Can it do it? At least not when you try the same seed: the whole picture changes because you changed the prompt. Most images are already whatever the model was trained on.
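
The seed behaviour described here is easy to demonstrate: fixing the generator seed makes a run reproducible, but changing even one word of the prompt re-routes the whole denoising path, so the composition shifts. A sketch with diffusers (prompts and seed illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Same seed + same prompt -> the exact same image, every run.
gen = torch.Generator("cuda").manual_seed(1234)
a = pipe("princess in a yellow dress", generator=gen).images[0]

# Same seed, one word changed -> a related but globally different image.
gen = torch.Generator("cuda").manual_seed(1234)
b = pipe("princess in a blue dress", generator=gen).images[0]
```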

  • @ruudygh
    @ruudygh a year ago

    Does this thing require payment to use?

  • @VermillionStallion
    @VermillionStallion a year ago

    Why are my images all blurred?

  • @coreyrose5875
    @coreyrose5875 a year ago

    I have yet to hear why some words or phrases are in parentheses.

  • @BobFudgee
    @BobFudgee a year ago

    Straight nightmare fuel, these AI generators, lmao. Why can I not do this correctly lmao

  • @InfiniteComboReviews
    @InfiniteComboReviews a year ago

    What does "trending on artstation" do for it?

    • @Veylon
      @Veylon a year ago

      The model was trained on a whole bunch of art from the ArtStation website. The stuff that was popular at that time was flagged "trending on artstation". Putting that phrase in the prompt tells the neural network to make the output more like that stuff.
      I don't know that you get any real benefit out of it, but that's the idea.

    • @InfiniteComboReviews
      @InfiniteComboReviews a year ago

      @@Veylon Seems vague, but I get the idea. Thanks.

  • @williambarrett1234
    @williambarrett1234 a year ago

    artist

  • @davizack645
    @davizack645 a year ago

    Ye~ thanks a lot~

  • @goldendragonbringer
    @goldendragonbringer a year ago +1

    I wonder if these AI-Artists can create pixel art characters.

    • @oswynfaux
      @oswynfaux a year ago +1

      Try 8-bit as an art style

    • @nocturne6320
      @nocturne6320 a year ago

      From my experience, DALL-E is really good with pixel art; SD tries, but keeps trying to add details to the pixels, making the final image noisy.

  • @jameshopkins3541
    @jameshopkins3541 a year ago

    what do you mean?

  • @unknownuser3000
    @unknownuser3000 a year ago

    Now tell us how to spread legs without inviting Cthulhu into my home 9 times out of 1000! For instance, a woman sitting in a chair with legs that actually go the correct way, instead of 3 legs, or she's upside down, or it's just legs crossing into more legs. Trying "sitting" is just insanely inconsistent: sometimes you get a zoom of the face and she isn't even sitting, and when the view is on the subject the legs are rarely the right way. I know I'm probably running into limitations, but to get focus on her sitting and not get a zoomed-in face, is there a trick to that? I even resorted to "riding an invisible horse", which was hilarious and rarely gave me the pose I wanted. Thanks for the awesome videos; Stable Diffusion has taken over my life.

    • @unknownuser3000
      @unknownuser3000 a year ago

      An example is
      "dark elven woman, wow bikini armor, sitting on rock, legs spread, forest at night"
      or ANY combination of trying to get legs to cooperate, ends in a mess.

    • @LtDan2K
      @LtDan2K a year ago +1

      @@unknownuser3000 Use in the negative prompt: multiple limbs, bad anatomy, poorly drawn body, ugly legs, closed legs, standing up, laying down, portrait, close up. This should help with some of your problems.

    • @unknownuser3000
      @unknownuser3000 a year ago

      @@LtDan2K Thanks very much. Unfortunately the GUI I'm using doesn't support negative prompts, so I guess until they include them, or I can install a better version, I'm out of luck. This is kind of what I figured was my main problem, so thanks for clarifying. I can't install anything to the C drive currently, so I'm out of luck installing the automatic1111 GUI; from what I understand it all needs to be on the C drive (according to the GitHub readme).

    • @PuppetDev
      @PuppetDev a year ago

      @@unknownuser3000 Do you mean the Gradio-based interface? I put that on a separate drive without any issues. Only Python was on the C drive at the beginning, but I'm pretty sure you could install that somewhere else too (you just need to set the path variables).

    • @devnull_
      @devnull_ a year ago

      @@unknownuser3000 It definitely doesn't have to be on the C drive; you can "install" it on any drive that has enough free space. You simply create an empty directory, git clone the repository, boot the web UI for the first time with the .bat file in the folder, and then wait for it to download everything.

  • @ningen9129
    @ningen9129 a year ago

    Is that AI paid?

  • @JarppaGuru
    @JarppaGuru 6 months ago

    1:28 Depends on what model you use. If that image is in there, then you see that image LOL. If it actually changes images, why can't you say "move the castle 10 pixels"? You can't! You change things without knowing what will come out. If the prompt changes, then the same seed no longer gives the same image.

  • @Yamagatabr
    @Yamagatabr a year ago

    SURE AS HECK that one ain't "trending on Artstation" 😂

  • @juraganposter
    @juraganposter a year ago +1

    DALL-E could be better at understanding prompts, but it produces lower-quality images and is way too overpriced.

  • @lapidations
    @lapidations a year ago

    3:00 You said you should start with the medium, but you didn't do it.

  • @jawadoumar
    @jawadoumar a year ago

    Ocha mucha cocha pucha that's all I can remember

  • @XellosShinomeiYT
    @XellosShinomeiYT a year ago

    is this site free?

  • @usedtobeagrape
    @usedtobeagrape a year ago

    What a helpful yet low-key dodgy-AF video.

  • @TroopurHQ
    @TroopurHQ a month ago

    "Stable diffusion chokes when using 3 colors in the prompt. Now let me show you how Dall-E is superior while only prompting for 1 color. Then I'm going to prompt for 2 colors but fade the page out immediately so you don't have time to notice that it made the flower the wrong color every single time."
    Do/did YOU... understand prompt formats?

  • @Maple38
    @Maple38 a year ago

    Sorry, but this is probably just because you're using DreamStudio. It can't do such a complicated prompt with such a low sample count. To increase it, you should host the AI yourself, which is actually surprisingly easy to do. And you don't need a supercomputer either: 4 GB of VRAM is plenty to run it, and it's even possible on a CPU with integrated graphics. Personally I have a 3080 10 GB and it takes under 2 seconds per image.
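
For anyone tempted by the self-hosting suggestion, diffusers ships memory-saving switches that make small GPUs workable. A sketch; the model name is illustrative, and `enable_model_cpu_offload` needs the accelerate package installed:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()    # compute attention in slices: less VRAM
pipe.enable_model_cpu_offload()    # park idle submodules in system RAM

image = pipe("a lighthouse at dawn, digital painting").images[0]
image.save("lighthouse.png")
```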

  • @Jennifahh
    @Jennifahh a year ago

    You really like Christina Hendricks, right? 😗

  • @jinxxpwnage
    @jinxxpwnage a year ago +2

    Good for organic subjects, but a nightmare to work with for complex designed backgrounds; hard surface is the bane of this tech for the time being, unless you literally describe the environment as a basic prompt. The moment you combine things, or start even remotely getting picky and modifying or adding areas considerably, it falls apart at the seams. Not to mention the awful perspective. You're only getting organic hard surfaces and natural environments for now, as well as big-boobed women. Don't try to replace your archviz workflows yet; particle systems and modelling are still the only way to get good professional results, not to mention the whole VFX side of things, getting specific with drastic camera changes.

    • @DefinitelyNotAMachineCultist
      @DefinitelyNotAMachineCultist a year ago +1

      BB women are like 90% of the use case for this, so it seems legit, no complaints here.
      To be honest, speaking as a newb, hard surface poly modelling (or drawing them with perspective using something like the pen tool) always felt easy compared to sculpting or drawing a character traditionally.
      I can easily imagine cutting out the SD character and layering them over the background.
      The real question on my mind though is what the state of the art is for inbetweening.
      If we want 2D animation where we have full easy control over color and shading compared to 3D (where you have to mess a _LOT_ with normal maps to get something like Guilty Gear), is there anything FOSS that isn't something simple like SVG morphing?
      I feel like the path of least resistance for cheap toon-like animation is just really nailing the maps in 3D.
      2D inbetweening is cancerously tedious (especially for something like 24 fps), unless I'm missing something.
      The artist tags are interesting, but you'd probably need something much more sophisticated in order to get a cohesive enough set of frames for something like animation.

    • @jinxxpwnage
      @jinxxpwnage a year ago +1

      @@DefinitelyNotAMachineCultist Animation currently is extremely difficult for stable diffusion , I'm actually not that well versed in 2D approaches I graduated from uni in 3D production in 2013 and have worked only in 3D fields now focused only on houdini and python workflows as well as HDA creation in houdini. The SVG morphing seems to me like a vector based deformation correct? I'm sure you're aware animations change drastically from 1 frame to the next at the moment so who knows if it'll be stable eventually.
      On the topic of Guilty Gear I'm sure you've seen the binormal approaches as well as the necessary modifications for the topology to appear anime like.
      If you truly are a beginner I would actually recommend you to learn to modify an anime base mesh to have the correct topology and fix the maps as well. Don't get too caught up in working from scratch your time is better spent learning to rig and modelling the rest of the character since even if you were to be eventually hired in studio you'd still be given a base mesh to work off of.
      If you have nightmares using normal maps let me tell you blender currently has some excellent texturing you can layer up easily through the shader editor using something like blender kit's material library. you can then basically bake out all necessary maps almost as if it was a substance painter at times. only thing is it takes a bit longer to bake and make sure you bake diffuse maps with a full blown global illumination so you don't accidentally bake the apparent color or the shadows.
      as for the long time rendering a 24fps animation let me tell you. Even with almost real time render engines like eevee or houdini's faster engines like karma or solaris you'll never truly skip the 2 hour wait for around 5 mins of animation , if you go the cycles route or mantra , redshift full ray tracing then that's 1 hour per frame sometimes! rendering will never be fast.

    • @platonicgamer
      @platonicgamer a year ago +1

      @@jinxxpwnage I'm curious, what kind of complex backgrounds are you trying to generate? I'm asking because I'm wondering what Stable Diffusion's main strengths/weaknesses are against the current crop of AIs like DALL-E. I've been generating some cyberpunk night-city backgrounds and had some good turnouts, though I had to play around with the prompts a lot.

    • @jinxxpwnage
      @jinxxpwnage a year ago

      @@platonicgamer Getting the first initial images is fairly simple. But for example if you start modifying the given design it starts failing. Say you create a mayan city with hindu temple influences seen from above. You'll be given often buildings which first of all don't quite combine these 2 cultures together but also generates an unnecessary amount of floors per buildings/structures. If you try to take away some floors or create new ones, maybe make entrances or canal systems or even change a constant design language to something specific it won't behave correctly even with negative prompts. The perspective also suffers a lot i mean look. Take any design it spits at you and actually get picky with it like a director would. Design must be iterative with fluid changes not just a single image that can only be modified in a couple ways. AI is cool but in an actual design environment most people would revert back to painting after 30 mins of fighting the AI not to mention the credits are limited.
      If i can add my 2 cents here i don't think this tech is actually that great. I think for most people who are untrained would think they're getting pro results but in reality it's actually quite pointless for a few reasons:
      1. I'm going to say something a bit controversial that maybe i shouldn't disclose but oh well. Your chances of even getting hired to paint or draw are about 1 in 10,000 in a studio setting. Unless you're well connected through some 150k degree. I know someone that has around $250k worth of education but STILL hasn't gotten a job as a designer.
      2. The chances of someone landing a job through AI are slim, since your designs will be tested to see if you can rework them if necessary, not to mention specific ortho views and close-up texture designing as well as interiors. You'll be asked to make an architecture generator outside of Blender and to show your process and UV maps, polycount, to assure you're not just modelling mindlessly.
      3. Painting actually doesn't pay at all anymore in a studio. I knew a guy who committed suicide over losing a couple grand in crypto. I'd estimate this poor guy must have had around $90k in the bank for him to be actually shaken enough to kill himself over $40k. And he'd been working for almost a decade; most of these studios are in very expensive cities, so after tax the salary of a concept artist ends up being the equivalent of maybe $1000 in a cheap state.
      4. If you're more of an independent patreon artist you're probably fine since your audience can still appreciate the effort and process.

    • @John-lj9ng
      @John-lj9ng a year ago

      @@jinxxpwnage Have you tried to manually change specific parts by using the inpaint function? Not sure if this would work. IMO the worst part of the AI is that you can't keep the consistency of what you want between different scenarios, so I really don't see SD being used for animation for a while. I think it's really good for getting a concept, or for filling in details of an existing drawing you did, but yeah, it's limited as of now. Still, technology evolves fast; who knows how good the AI will be in just a few years. This is just the start, and it looks promising aside from all the technical problems.

  • @themusicplayerrjs4055
    @themusicplayerrjs4055 a year ago

    Hi

  • @deepblue153
    @deepblue153 a month ago

    I’ve been doing this all wrong

  • @germanwach
    @germanwach a year ago

    DALL-E can't count to 11; try to make a beast with eleven horns and you will see.

  • @zes7215
    @zes7215 a year ago

    wrg

  • @eshankgupta6763
    @eshankgupta6763 a year ago +1

    Algorithm

  • @block414
    @block414 a year ago

    Pretty sus prompt (this guy's clearing his hard drives on the regular), good tutorial though.

  • @phillymusclelover
    @phillymusclelover a year ago

    Great video

  • @f4ust85
    @f4ust85 a year ago

    It's "Mukha", not "Mucha". When killing art as we know it, at least learn a bit about it, Jesus...

  • @woootwoot8533
    @woootwoot8533 a year ago

    This is literally a "how to steal" video, bro. I adore your Blender vids, but this is... kinda below the ground.

  • @velnussablongarment
    @velnussablongarment a year ago

    A modern-day thief lol

  • @FirriApril
    @FirriApril a year ago +7

    Do you guys EVER stop and think about the ethical issues of emulating, aka stealing, from real artists, many of whom, like Rutkowski, have vocally expressed they do not like their art to be used to teach AI. Do you not see the ethical AND legal problems literally every established real artist is seeing and trying to call you out on when it comes to using AI like this?

    • @alecmackintosh2734
      @alecmackintosh2734 a year ago +13

      Do you know how AI learning works? It takes the inputs and uses them as reference for what is "good" or intended, and makes its own images based on the prompts; if enough originality is used in the intent, it can even come up with something completely unique. This is how artists make their own work. Nothing is truly original: you are either compositing work you have seen before and adding your own "twist", or doing the same with active stimuli. So no, the AI isn't stealing anything, and it is comparable to parody and/or inspiration, which is legally permitted. Ethically speaking, I don't care about an already famous artist, only artists that are still growing who will be hit hard by AI. If you seriously think that using someone's art style should be legally protected, then you can go back to DeviantArt.

    • @LtDan2K
      @LtDan2K a year ago +6

      Pandora's box is open. There is no shutting it.

    • @FirriApril
      @FirriApril a year ago +3

      @@alecmackintosh2734 Do you know how creating art works? Artists use existing media as sources of inspiration, but that is only a fraction of the process, and the sources extend far beyond images. Creating art involves choices at every step of the way: choosing subject matter, creating emotion, choosing a composition to strengthen that emotion and highlight certain aspects of the subject, using colour theory to create a balance that supports the feeling you want to convey to the viewer, choosing where to use contrasts and on which details to use highlights to draw the viewer's focus, creating commentary or a story within the piece with subtle interactions and composition tricks, and so much more. The only choices you can make when telling an AI to generate an image are the subject, and the style of an artist whose workflow and decisions you will never understand enough to "emulate". Calling that art is an insult to real art.

    • @alecmackintosh2734
      @alecmackintosh2734 a year ago +2

      @@FirriApril specific pieces do convey emotion and other aspects, that's what the "twist" is, because of conscious and unconscious bias, there is always going to be some form of influence that the artist has on the piece. The decisions on what good composition, proportions and colours are based on previous work to serve as the fundamentals, human psychology and stimuli. The intent or "feeling" is absent from the AI, which is why you use prompts, the more specific you are, the better your intent is conveyed in the image. The AI art workflow essentially takes the skill out of understanding line work, perspective and proportions of creating works. However to make good art using AI you still need to understand those fundamentals and colour theory, contrast and composition to make the AI make satisfactory images. Yes the workflow does come off as mechanical and assembly line like, but that doesn't mean it isn't art just like 3D modellers who work with a character sheet are still artists. The main benefit I foresee AI being useful for is concept art for smaller studios, wherein it would be super expensive and time consuming to hire artists to design characters and environment concepts, not literally replacing artists entirely.

    • @PuppetDev
      @PuppetDev a year ago +4

      @@FirriApril Yes, I think plenty of people stopped and thought about it at least for a bit. One shouldn't assume that people are completely oblivious to the morality of whatever they are doing. There are many different cultures and thoughts in the world. Not everyone values art, or at least the process of creating art, as something sacred or to be protected. So it's not far-fetched to assume that there are plenty of "practically minded" people who simply see the use in a tool. I'm not disagreeing with you btw, just pointing out how some people think.
      Seriously, people will abuse anything to get ahead if they can. - Plenty of artists did that without AI btw
      Seriously, seriously though, there are also some who can't afford to think about your feelings; not everyone is privileged enough for that, unfortunately. Again, not disagreeing with you, but sometimes you have to look reality in the eye and realize that some people don't care about "your" ethics.
      Another big issue is that this kind of thing is difficult to stop. The way an AI learns is the same as a human artist would, by finding references and trying to extract whatever they need from them. A lot of the time, this process is also very passive, so even if an artist claims that they don't do that, they still get influenced by the things they interact with. The parallels to human and AI learning are there. The problem is that the AI does that at such an increased speed that it seems unfair, not necessarily illegal. But that's a real issue we all will need to face eventually, not just artists. Yes, even the concept of "choice" that you were talking about. AI most definitely will learn to do that too. The limiting factor is the interface, the words we use to describe concepts. There is only so much we can do currently to translate our idea to an image. But people are already working on that too.

  • @yoteslaya7296
    @yoteslaya7296 a year ago

    I copied the prompts right off pics on Lexica and the results looked nothing like them.

  • @windigo000
    @windigo000 a year ago

    [ˈalfons ˈmuxa]

  • @TheKnightDark
    @TheKnightDark a year ago +1