AI Made Easier
  • Videos: 22
  • Views: 35,995
Want to make a holiday gift using Stable Diffusion?
In this video I show you how to create a great holiday gift using Stable Diffusion! We bring AI art into the real world.
In this video:
0:00 - Intro
0:32 - Workflow
1:00 - Generating Images
2:30 - Prepping files in Photoshop
3:36 - Making Printable File
4:55 - Outro
Links:
github.com/AIMadeEasier/workflows
Views: 380

Videos

Stable Diffusion Just Got Better With This User Interface!
6K views · 11 months ago
Stable Diffusion Just Got Better With This User Interface! Want the power of ComfyUI but an easier-to-use interface than Automatic1111? It's here with Swarm UI. In this video: 0:00 - Intro 0:42 - Download and Installation of Swarm UI 05:29 - Swarm UI Walkthrough 08:33 - How to install upscale model and custom nodes 10:48 - Creating a custom workflow 12:28 - Generating the first image 13:34 -...
Installing Custom Nodes In Comfy UI Is Easy!
5K views · 11 months ago
Installing Custom Nodes In Comfy UI Is Easy! Take your Stable Diffusion game to the next level with custom nodes. We will show you how to install them. In this video: - How to install ComfyUI Manager - Using Git to download and install ComfyUI Manager - Adding a custom node using ComfyUI Manager Links: ComfyUI Manager (GitHub) github.com/ltdrdata/ComfyUI-Manager
Upscaling In Comfy UI Done Easy!
17K views · 11 months ago
Upscaling In Comfy UI Done Easy!
Running SDXL V1.0 Is So Easy!
634 views · 11 months ago
Running SDXL V1.0 Is So Easy!
Installing Comfy UI Is Easy!
3.6K views · 11 months ago
Installing Comfy UI Is Easy!

Comments

  • @DimiBer
    @DimiBer 10 days ago

    Super helpful! Thanks Bro!

  • @ToddDouglas1
    @ToddDouglas1 15 days ago

    Thanks! You might mention this in a later vid but how do you save your workflow so it keeps your groups and colors the same?

  • @sidharthmahajan6047
    @sidharthmahajan6047 1 month ago

    Can we download it on an Android tab?

  • @RachitVeeturi
    @RachitVeeturi 1 month ago

    LIFE SAVIOR!!!

  • @WanderlustWithT
    @WanderlustWithT 1 month ago

    Thank you bro!

    • @AIMadeEasier
      @AIMadeEasier 1 month ago

      @@WanderlustWithT no problem :)

  • @59Marcel
    @59Marcel 2 months ago

    Thank you so much for your video, I really appreciated your clear and easy-to-follow instructions. I was wondering, is there a 2x-UltraSharp upscaler?

  • @JakeDownsWuzHere
    @JakeDownsWuzHere 2 months ago

    this is exactly what i needed, thank you!

  • @tomrey5
    @tomrey5 3 months ago

    For anyone struggling to download NewDawnXL off of Tensor: I had to let the web page load for about 20 minutes until it let me download it.

  • @Architectureg
    @Architectureg 3 months ago

    When installing models, if you get this message in the terminal log, do you have to add it to PATH or can you ignore it? "'D:\COMFYUI\ComfyUI_windows_portable\python_embeded\Scripts"

  • @sargee369
    @sargee369 4 months ago

    ..... and.. for mac?

  • @phepheboi
    @phepheboi 4 months ago

    Weird tutorial. Stacking a 4x image upscale on another 4x upscale doesn't really add any detail; that's why the 16k image looks really bad. 16k isn't super rare or big. 15 years ago I did an apprenticeship at an agency and already used a Hasselblad with around 7k x 9k resolution, and at that resolution you could clearly see things in the reflections on the eyes in a portrait. And 3 years ago I was already using Cupscale to upscale images from 4k to 16k. I had higher expectations when I started this video, because I also don't get the tile-scaling part. Shouldn't there be any info or option about that? Back in the day I would split an image into 4 tiles in Photoshop to speed up the upscaling process.

    • @AIMadeEasier
      @AIMadeEasier 4 months ago

      Of course this tutorial is not for everyone. Yes, using proper techniques you can do a lot with Photoshop and photography tools. I used to be a commercial photographer, and some of my images out of camera and on film were scanned in at 100 MP or higher, then upscaled for specific jobs. For the average person, though, that is unnecessary. The point of the stack was to show that upscalers can be stacked, so if you are just looking for an all-in-one solution, this was one of the best methods at the time of recording. That said, there are new upscaling techniques that have since provided better results with less compute. But trying to compare this to large-format photographs is not the point of this video or in the scope I was going for. I appreciate your input though.

  • @scottmahony4742
    @scottmahony4742 4 months ago

    How do you know the image size that the model was trained on to set the latent image size? To your question: I like following along and building the workflow; it helps me better understand what we are doing and how I can do it on my own. Additionally, I have an ultrawide monitor (I think the resolution is 5120×2160) and I would like to create backgrounds for it. Would I do it the same way and just change the size in the empty latent image and then upscale?

    • @AIMadeEasier
      @AIMadeEasier 4 months ago

      So the way I do this is by setting the latent image width in the range of the trained image, in the same aspect ratio, so I use a ratio plugin. Then from there I will upscale to the native resolution of the monitor, or whatever use I may need. For ultrawide, what I would do is go slightly above the ratio and crop down, or smaller and crop in. Because the trained image is 1024x1024, you need to be careful going outside of that, as it can cause tiling and rendering issues. Using known good ratios will solve this.

  • @scottmahony4742
    @scottmahony4742 4 months ago

    Thanks for doing these videos, this is what I've been looking for; I am pretty excited. I like how you break down what each thing does.

    • @AIMadeEasier
      @AIMadeEasier 4 months ago

      Thank you so much. I've got more videos coming; I have just been away from home, so I haven't fully had a chance to do anything with the channel for a bit.

  • @Vashthareaper
    @Vashthareaper 4 months ago

    Is video diffusion possible with StableSwarm? I've not seen anyone doing that yet. If it's just for images, wouldn't it be easier to use Automatic1111, since it shares resources with ComfyUI anyway?

    • @AIMadeEasier
      @AIMadeEasier 4 months ago

      Yes, absolutely, I have done a bit of video in Comfy and Swarm. In fact, in the new Swarm UI there is a video tab. That said, I'll create a workflow that makes it a little better.

    • @Vashthareaper
      @Vashthareaper 4 months ago

      I have not tried Swarm yet, so please could you let me know if that feature is pending or live? Thanks @@AIMadeEasier

  • @mrunal_sen
    @mrunal_sen 5 months ago

    I saw several videos on YouTube, but they were not what I wanted. This is exactly what I wanted. Thank you so much.

    • @AIMadeEasier
      @AIMadeEasier 5 months ago

      That's awesome thank you so much for the comment!

  • @georgiosburnham
    @georgiosburnham 5 months ago

    Thank you very much, but the Civit AI New Dawn XL model can't be downloaded. Could you upload it, please?

    • @AIMadeEasier
      @AIMadeEasier 5 months ago

      He has moved it to here tensor.art/models/642019060365894429

  • @ggdevelopment7403
    @ggdevelopment7403 6 months ago

    Yo, thanks for the tut. I have a few questions: 1. Why is it specifically the Python 3.10.6 version and not the latest, or at least 3.11? 2. I'm not sure if I'm just stupid, but I never saw you download Git; I just clicked next and kept all the default settings during the setup. 3. What's the difference between DreamShaper and DreamShaper XL?

    • @AIMadeEasier
      @AIMadeEasier 6 months ago

      In programming, when you write and compile for a certain version number, sometimes things in later versions would need to be programmed differently and may be unstable for certain tasks; hence why Python is version-specific. It could likely be reprogrammed for a newer version of Python, but why change what works? Git is always installed on my computer just because I work a lot with GitHub and my own Git repositories. It's not always needed for the install but is used more for adding plugins; some installers, I believe, now include Git as well. DreamShaper vs DreamShaper XL: DreamShaper is based on Stable Diffusion 1.5 (faster but not always the best results); DreamShaper XL is based on SDXL, which was the major upgrade last year that improved results hugely. It does also require a higher starting image size, so it's a little slower, but the results are way better.

  • @LordObst
    @LordObst 6 months ago

    Thank you so much, this was the best tutorial related to upscaling I've seen so far. You saved my day :)

  • @TheJPinder
    @TheJPinder 7 months ago

    does this work with video?

    • @AIMadeEasier
      @AIMadeEasier 7 months ago

      Not really; you would need to export all the frames of the video to images, upscale each image, and then recompile them back into a video file. There are tools out there for video upscaling, though, which would be much easier.

  • @ckhmod
    @ckhmod 7 months ago

    Installed ComfyUI (currently updated) and installed Manager successfully thanks to this video, but I can't install any custom nodes, and I think the error is coming via torch?? That is definitely out of my element. Any recommended solves, or just a fresh install?

    • @AIMadeEasier
      @AIMadeEasier 7 months ago

      Doubt it but what error are you getting?

  • @gingercholo
    @gingercholo 7 months ago

    “Didn’t lose any detail” bro turned into a real boy

  • @godofdream9112
    @godofdream9112 8 months ago

    Outpaint workflow in ComfyUI...?

  • @vivekvp
    @vivekvp 8 months ago

    Hello, and thanks for making vids like this! I installed the ComfyUI Manager, but when I try to install a module I get: "install failed: ComfyUI Frame Interpolation". I am using ComfyUI-Windows-Portable. Do I have to install Git manually? Or the modules manually? If so, where do I put them? Thank you!

    • @AIMadeEasier
      @AIMadeEasier 8 months ago

      That's odd; have you updated the ComfyUI Manager? Also, if you installed a while ago, there is a new version that could be causing the problem.

  • @RobertWildling
    @RobertWildling 8 months ago

    Why are they called "checkpoint loaders"? Is the term "checkpoint" somehow specific to StableDiffusion? (I assume it means something like "location of the model", but I wonder if there is more to that terminology...) And how is it possible that the setup is saved in an image?

    • @AIMadeEasier
      @AIMadeEasier 8 months ago

      Checkpoint models are the files that provide the weights to generate the image you request, so it's looking through the trained model and choosing the weighting based on your prompt. As for the settings in a PNG, it comes down to metadata: metadata can hold a bunch of text or code that can be decoded by the ComfyUI interface and others. In this case it's mostly plain text. I can show an example some time; I'm thinking of doing a questions-and-answers video soon. Just finished off my new setup for recording, new microphone and all. Looking forward to it.

    • @RobertWildling
      @RobertWildling 8 months ago

      @@AIMadeEasier Wow! Thank you for your quick response! Much appreciated!! 🙂
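    The metadata mechanism discussed in this thread can be sketched in a few lines of Python using Pillow. The "workflow" key and the dummy JSON payload below are illustrative assumptions, not ComfyUI's exact format; the point is simply that a PNG text chunk can carry arbitrary plain text alongside the pixels:

    ```python
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Write a tiny PNG with a "workflow" text chunk, mimicking how a
    # node graph can be embedded as plain text in PNG metadata.
    meta = PngInfo()
    meta.add_text("workflow", '{"nodes": []}')  # hypothetical payload
    Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)

    # Reading it back: Pillow exposes text chunks via Image.info
    restored = Image.open("demo.png").info["workflow"]
    print(restored)
    ```

    Dropping such a PNG onto a UI that understands the chunk is what lets an image "contain" its own setup.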

  • @RobertWildling
    @RobertWildling 8 months ago

    Thank you for that great introduction! Exactly what I was looking for! Subscribed! A question: the standard prompt contains 2 commas. Is there a reason for that? (I tried several image generations with both commas, then with only 1, then 3, but it seems there is no difference. Maybe I am missing something, though...)

    • @AIMadeEasier
      @AIMadeEasier 5 months ago

      Commas are ignored; they're just there to be visually pleasing for us.

  • @StargateMax
    @StargateMax 9 months ago

    I get awful results with those upscaler models. I don't generate robots or anime, only photorealistic people and scenes. The upscaler makes the fine details and sharp edges look like plastic and very harsh. A much better upscaler is a separate software Topaz Gigapixel AI. It does cost money, but it has amazing and clean results.

  • @thegankmanifesto2040
    @thegankmanifesto2040 9 months ago

    Can't wait to learn more from you, sensei

  • @RokasRadža
    @RokasRadža 9 months ago

    amazing videos, I am very grateful, hidden gems

  • @avi3dfx1210
    @avi3dfx1210 9 months ago

    Thanks, great video, easy and to the point. Can you please explain how to batch upscale/process more than one image at a time?

    • @AIMadeEasier
      @AIMadeEasier 9 months ago

      Yep I almost have my new setup completed

  • @othoapproto9603
    @othoapproto9603 9 months ago

    Thanks for the video; I know how much work it is to make. May I suggest no background music or avatar? They're very distracting from the lesson.

  • @FreyaLovesGaming
    @FreyaLovesGaming 10 months ago

    Brilliant, THANK YOU 😀😀 Been trying to install ComfyUI for ages and couldn't do it with other tutorials - awesome tutorial :D

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      Love that you got it working! It’s so much fun.

  • @Mr_Purr
    @Mr_Purr 10 months ago

    Thanks for sharing this with us

  • @DarioToledo
    @DarioToledo 10 months ago

    The results of the upscale show some artifacts; is it better to use an after-detailer before or after the upscale? Also, there are many models for upscaling. Which are more fitting for photographic images, and which for illustration?

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      I would try both, but likely after the upscale will yield better results. As for which are more friendly towards photos and illustration: so far I find the 4x Ultra Sharp is great for illustrations. It can do well with images that are already a decent size, but we are starting to see more and more AI-based upscaling, which uses more of an image-generation mindset and can reduce the amount of noise. That said, any fine details like text, or objects the AI cannot detect, turn out looking a little weird. Text is a great example; it turns out looking like an alien language if it's too small. This is because AI upscalers, just like image generation, have not been trained on text and have no clue what it is, so in some cases traditional upscaling methods will still be required in photographic upscaling. I was a professional photographer for years, and I remember always trying to push my raw images out as large as I could to see what would happen, trying to find new techniques to get the scale I wanted. In most cases for printing it was unnecessary: if you're printing off a really large poster or doing a billboard, for example, the viewing distance is further than for something like a 4x6 printed image in your hands. Now, that said, if you need a larger upscale than something like 4x, it may be worth the time to generate at a larger size first and then upscale only 4x. Another amazing trick: if it's an illustration like a vector image, once created, bring it into Adobe Illustrator and use the Trace function. Work with it to get the vector to what you want; then you have a vector image you can scale to any size without worry.

  • @WalidDingsdale
    @WalidDingsdale 10 months ago

    An amazing upscaling method, thank you very much.

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      Thanks for your feedback glad it helped :)

  • @christian_life
    @christian_life 10 months ago

    Thanks dude!

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      No Problem! Hope it helped

  • @aerofrost1
    @aerofrost1 10 months ago

    Is there a fast way to upscale and maintain/increase quality on 4GB GPU? I can't upgrade to a better graphics card so my only option right now is 4GB.

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      It depends on your end goal. If you need a really large image, you may need to use something a little more powerful, or the cloud. I am working on a video about running Stable Diffusion in the cloud and how to do it on a budget.

    • @aerofrost1
      @aerofrost1 10 months ago

      @@AIMadeEasier Not really large. Just doubling a 512 x 768 image. It takes me around 10 minutes per upscale, which is a bit longer than what I'd like. I don't know if there's a faster option.

  • @ShamanicArts
    @ShamanicArts 10 months ago

    Amazing tutorial, thank you. It would be easier to focus if the music was either not there or at a much lower volume.

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      Yeah, sorry, my recording space is not always quiet, so I use the music to cover up the TV on in the background and the kids playing lol

  • @thevoid6756
    @thevoid6756 10 months ago

    If you don't want to immediately upscale your image but stay within the same workflow to upscale, you can right-click on Save Image and select Copy Clipspace, then right-click on the Load Image node in the upscale workflow and select Paste Clipspace. This way you can first search for a good seed, then render the small image with more steps, and finally upscale, all in one workflow.

    • @AIMadeEasier
      @AIMadeEasier 10 months ago

      That's an awesome tip thank you! I love using drag and drop and clipboards for being more efficient

    • @Vallrain
      @Vallrain 10 months ago

      you're a fkn legend sir

    • @merlinnelson9368
      @merlinnelson9368 10 months ago

      Thank you

  • @xevenau
    @xevenau 10 months ago

    I'm getting a backend error (Invalid operation: No backends match the settings of the request given!). Any idea?

  • @uk3dcom
    @uk3dcom 11 months ago

    Straight upscaling is okay, but I'm wondering if we can add in detail with each iteration; that way the very large images are also packed with detail. I've seen demo workflows doing this in Auto1111, and in fact in ComfyUI from a latent. I wonder if it's possible from an image, if we extract the latent information?

    • @jurgenwulf7425
      @jurgenwulf7425 10 months ago

      Can you please point me to a video for upscaling with adding details in comfy? THX!

    • @TheDocPixel
      @TheDocPixel 10 months ago

      If you only want to do straight upscaling, then ComfyUI is a slow and "expensive" (i.e., GPU power and time) way to do it. Use an online or dedicated upscaler like Topaz or Upscayl and get faster and better results. The proper way to use upscaling in Comfy or A1111 is to do latent upscaling, thereby adding more detail by utilizing latent space, similar to HiRes Fix.

    • @uk3dcom
      @uk3dcom 10 months ago

      @@TheDocPixel Interesting you say that. I found and adapted a workflow that iterates, adding detail as it scales; yes, quite slow, but what a great job it does. The larger I go, the more detail it adds. Straight upscalers just don't work for me; they get bigger, sure, but something is always sacrificed.

    • @TheDocPixel
      @TheDocPixel 10 months ago

      @@uk3dcom - I agree... and also have adapted Detweiler's workflow. The results are stunning when also adding an upscale model.

    • @Gardener7
      @Gardener7 8 months ago

      @@uk3dcom I am looking for a workflow that adds detail and doesn't just upscale. What would you recommend?

  • @sc0peAI
    @sc0peAI 11 months ago

    Hi, I don't know why, but when I use this upscaler it turns my upscale node red. In short, it does not work. I have 4GB of VRAM; is that the problem?

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      This is extremely likely; upscaling requires a bit of horsepower. You could try a lighter upscale model from that list and run it multiple times, although that's not ideal. There are two reasons for a box to go red: #1, the node is not installed (the upscaler is pre-installed, so we know it's not that); #2, the node had an error running (lack of VRAM, incorrect configuration, etc.). Most things these days for Stable Diffusion recommend 6GB of VRAM or higher; I would not go below 8 personally. Depending on your budget, it might be worth looking at a new card; great budget examples are a used 3070 or even a 4060, as these cards are fairly inexpensive and work well with Stable Diffusion. If you're on a laptop, then, again depending on your budget, you might want to run in Google Colab or a cloud-based solution. However, with cloud-based solutions, if you forget to shut one down it can cost you the price of a used 3070 in a single month, so just be aware that they can be extremely expensive. Example: Paperspace's basic tier is $0.46 per hour, so if you forget to shut down the server when not in use, your monthly cost could be as high as $355. That said, if you are using it lightly and are diligent about shutting down the server, it may be more affordable. Just with the risk price of $355 and a used 3070 going for around $300 these days... you get where I am going.

    • @sc0peAI
      @sc0peAI 11 months ago

      Thank you so much, I tried another upscaler and it worked @@AIMadeEasier

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      @@sc0peAI Yeah, depending on the upscaler it will use more or less VRAM. If you can find a low-VRAM upscaler that is almost lossless, you can stack them, so 2x-2x until you hit your VRAM limit.
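      As a quick sanity check on the cloud-cost figure quoted earlier in this thread: at the stated $0.46/hour, a server left running for a full 31-day month lands in the same ballpark as the ~$355 mentioned (the exact figure presumably includes extras such as storage or taxes):

      ```python
      rate_per_hour = 0.46      # quoted Paperspace basic-tier rate, USD
      hours_in_month = 24 * 31  # worst case: never shut down
      monthly = rate_per_hour * hours_in_month
      print(round(monthly, 2))  # 342.24
      ```

      Either way, a few forgotten months of runtime quickly exceeds the price of a used mid-range GPU, which is the point being made.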

  • @freekhitman9916
    @freekhitman9916 11 months ago

    Good video, nice that you presented my model 👍

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      You know I love that model; I also just got the other model downloaded and am testing it. In initial tests I love it too.

    • @freekhitman9916
      @freekhitman9916 11 months ago

      @@AIMadeEasier I know, thanks for that 😉👍

  • @nn-db4fw
    @nn-db4fw 11 months ago

    AI art has already revolutionized art.

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      I agree, and I think it will open up more ideas for traditional artists who embrace the technology

    • @nn-db4fw
      @nn-db4fw 11 months ago

      @@AIMadeEasier Yes. Considering the AIs have been trained on billions of pictures, we can merge arts like never before, which in turn creates a new art form; this has never been possible before. I have created some really spectacular art that I have never seen before. I am being really protective of my art until the day we can copyright it, and be sure that day will come. The thing is that the filters, seeds, etc. are always changing, so the art I am able to create today might not be possible in the future, which makes it really sensible to protect my art. The art I post on Facebook, for example, is like 10% of my "best" pictures. But I am not doing it for the money; that is not the reason I do this. I do it because I love creating art with AI. Hopefully people will one day appreciate what I have created, after we can copyright it.

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      @@nn-db4fw Thank you so much for your input; I love to hear it from both perspectives. For me, the ruling in the copyright case that happened really bothered me. Not the ruling itself, because it's a complex question, but the statement made by the judge that AI can't create emotionally impactful images. With good prompting you certainly can get extremely impactful images, something I have been playing with for a bit. I studied the history of photography for about 20 years, and the thing I loved was how, when the camera was accepted by artists, framing in traditional art changed to match the new medium. I am excited to see where this takes us in the future, as the tools in this early stage are really incredible.

    • @nn-db4fw
      @nn-db4fw 11 months ago

      @@AIMadeEasier Yes, we are only on "generation 1" AI art for now, and we are all amazed by what it can do. In a few years we will be on "gen 2" or maybe even "gen 3" AI art, and we can only imagine what AI art will produce when it is able to be creative on its own, which will happen. There was a huge debate when cameras came out over whether that was really a new form of art, as it was "only clicking one button". Before that, people had to stand beside, say, a river and paint the scenery, maybe for days, weeks or even months; then cameras came, and with one click the scenery was captured. I fully understand that cameras were controversial back then, but today they are fully accepted. AI art is pretty new, and soon it will be accepted as an art form as well. It takes a LOT more work to create, say, a good portrait with AI art than with a camera: all the things one needs to know, from angles, mood, lighting, and distance to whatnot. Then we have all the things we can add into the portrait that most who take pictures won't ever be able to do, like taking a portrait of someone on the moon, or on the bottom of the sea, inside a volcano, balancing on top of a flagpole, etc. The AI understands these things, and that is fantastic if you ask me. As I see it, AI art blows photography out of the water in most areas. Hopefully we will have a new debate and newly revised laws regarding AI art no later than gen 2 or gen 3.

  • @letrerote
    @letrerote 11 months ago

    Fantastic, thanks! Where can I get the workflows from your videos?

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      I can start saving them for you guys and upload them to a GitHub account. I'll set it up in the morning with my workflows :)

    • @letrerote
      @letrerote 11 months ago

      @@AIMadeEasier that's great looking forward to it

  • @letrerote
    @letrerote 11 months ago

    Cool, thanks! Is this integrated with my previous comfyUI install? or a separate install for Swarm+comfyUI?

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      You can technically do either, but I prefer to do it as a new install, just so there is no chance of messing up the full integration of the new interface. Now, you could reroute your models folder using extra_model_paths.yaml; however, one install, all integrated, was all I needed, so I just moved everything over to this and don't use my original install anymore.
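      For reference, rerouting the models folder with extra_model_paths.yaml looks roughly like the sketch below. The paths and subfolder keys here are placeholders; check the extra_model_paths.yaml.example file that ships with ComfyUI for the exact section names and keys your install supports:

      ```yaml
      # extra_model_paths.yaml (paths below are placeholders)
      comfyui:
        base_path: /path/to/your/old/ComfyUI/
        checkpoints: models/checkpoints/
        loras: models/loras/
        upscale_models: models/upscale_models/
      ```

      Each entry maps a model category to a folder relative to base_path, so two installs can share one set of downloaded models.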

  • @teccc42
    @teccc42 11 months ago

    Great tutorial. Thank you!

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      Hope it helped! I really love this UI and can't wait to see it mature. It does not feel like an alpha level of production, to be honest; it feels like a finished V1, so it will be insane to see where they take it from here.

  • @AIMadeEasier
    @AIMadeEasier 11 months ago

    Have you tried SWARM UI? Let us know what you think.

  • @slcraw4ord
    @slcraw4ord 11 months ago

    Used this easy process and I'm already generating images

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      That’s what I love to hear!

  • @SimpleHedging
    @SimpleHedging 11 months ago

    How do you randomize the seed?

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      In the KSampler, set control_after_generate to randomize.

  • @SimpleHedging
    @SimpleHedging 11 months ago

    Really helpful tutorial❤️

    • @AIMadeEasier
      @AIMadeEasier 11 months ago

      Thank you, hope it helped.