- Videos: 42
- Views: 207,263
Neo Professor
United Kingdom
Joined 18 Mar 2023
AI-related videos
Email: contact@neo-professor.com
AI replicates CSGO, try playing it yourself!!!
GitHub Install:
github.com/eloialonso/diamond/tree/csgo
If you get stuck at the conda create step, you need to install Anaconda:
docs.anaconda.com/anaconda/install/
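If you want to sanity-check the Anaconda install before running the repo's conda create command, here is a minimal Python sketch; the environment name and Python version below are placeholders, so use whatever the diamond README actually specifies:

```python
import shutil
import subprocess

# Confirm conda is on PATH; if this prints None, install Anaconda/Miniconda first
# (see the docs.anaconda.com link above).
conda = shutil.which("conda")
print("conda found at:", conda)

if conda:
    # Placeholder env name and Python version -- substitute the exact
    # "conda create" arguments from the diamond repo's README.
    subprocess.run([conda, "create", "-n", "diamond", "python=3.10", "-y"], check=True)
```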
Views: 1,183
Videos
How to Run Flux with 6GB/8GB VRAM
Views 746 · 28 days ago
NF4 install: github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981 GGUF install: civitai.com/articles/6846/running-flux-on-68-gb-vram-using-comfyui Workflows (no sign up needed): www.patreon.com/posts/113933424 Chapters: 0:00:00 Intro 0:00:28 NF4 0:01:19 GGUF
Making Persona Style Portraits in ComfyUI + Unity
Views 379 · a month ago
Persona Lora file: civitai.com/models/5658/persona-5-portrait-lora-locon Counterfeit Model file: civitai.com/models/4468?modelVersionId=57618 IP-Adapter install video: ruclips.net/video/c8HWDQ67Dvg/видео.html Remove Background online (if you have problems with the ComfyUI version): www.remove.bg Link to workflows and code (no sign-up needed): www.patreon.com/posts/persona-style-113348031 0:00:00 Intro ...
One Picture, Multiple Emotions - Advanced Live Portrait
Views 254 · a month ago
Advanced Live Portrait github: github.com/PowerHouseMan/ComfyUI-AdvancedLivePortrait Videos used: ruclips.net/video/1l6TbsGLnco/видео.html (SvC Chaos) ruclips.net/video/RtXdqlOIxPo/видео.html (Ace Attorney) ruclips.net/video/4Ef4kyF8j1c/видео.html ( Shining Force) Support the channel on Patreon: www.patreon.com/NeoProfessor 0:00:00 Intro 0:00:29 Install 0:00:42 Basic workflow 0:01:35 Advanced w...
How To Install AI Image Generation Add On For Krita (Photoshop Generative Fill Free Alternative)
Views 14K · 5 months ago
Github: github.com/Acly/krita-ai-diffusion
Want To Change The Lighting Of Your AI Images? (IC-light ComfyUI)
Views 1K · 5 months ago
IC light github: github.com/kijai/ComfyUI-IC-Light IC light models: huggingface.co/lllyasviel/ic-light/tree/main Support the channel on Patreon: www.patreon.com/NeoProfessor 0:00:00 Intro and Setup 0:01:31 1st workflow 0:05:52 2nd workflow
Turbo, Lightning, LCM, Hyper SD - An Introduction To Speeding Up Your Stable Diffusion (ComfyUI)
Views 2.2K · 6 months ago
Turbo: huggingface.co/stabilityai/sdxl-turbo comfyanonymous.github.io/ComfyUI_examples/sdturbo/ Lightning: huggingface.co/ByteDance/SDXL-Lightning (scroll down for workflows) LCM: huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6 comfyanonymous.github.io/ComfyUI_examples/lcm/ Hyper SD: huggingface.co/ByteDance/Hyper-SD huggingface.co/ByteDanc...
A Quick Guide to installing Microsoft Phi-3 (using Ollama)
Views 451 · 6 months ago
Ollama: ollama.com/ More info about Phi-3: news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential/ Support the channel on Patreon: www.patreon.com/NeoProfessor
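Once Ollama is installed and running, here is a minimal sketch of querying a local Phi-3 model over Ollama's default REST endpoint. It assumes you have already pulled the model (e.g. with `ollama pull phi3`) and that the server is listening on its default port 11434:

```python
import json
import urllib.request

# Send a single prompt to the local Ollama server and print the reply.
payload = json.dumps({
    "model": "phi3",                 # model tag pulled via Ollama
    "prompt": "Summarise what Phi-3 is in one sentence.",
    "stream": False,                 # return one JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```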
Making Sprite Animations in ComfyUI using Street Fighter 2 (Challenge)
Views 2.8K · 6 months ago
Spriters-Resource: www.spriters-resource.com/arcade/streetfighter2/sheet/60224/ Face Detailer Github (ComfyUI Impact Pack): github.com/ltdrdata/ComfyUI-Impact-Pack Face Detailer Video: ruclips.net/video/_uaO7VOv3FA/видео.html Segment Anything Github: github.com/storyicon/comfyui_segment_anything Ezgif: ezgif.com/sprite-cutter Workflow: www.patreon.com/posts/making-sprite-in-102917833?Link& Supp...
Want to use a SD1.5 Lora with an SDXL model? Try this workaround (IP-Adapter Style Transfer / Comfy)
Views 2.6K · 6 months ago
Github: github.com/tencent-ailab/IP-Adapter.git Latent Vision: ruclips.net/channel/UCNOzlWHq4LgGcEHliHWm6HA Support the channel on Patreon: patreon.com/NeoProfessor
Automatic1111 vs ComfyUI (Which one should you use???)
Views 10K · 7 months ago
In this video I go over the difference between Automatic1111 and ComfyUI and which one is better to use for Stable Diffusion. Support the channel on Patreon: patreon.com/NeoProfessor 0:00:00 Intro 0:00:35 Img2Img Example 0:02:27 Copying Workflows 0:04:24 Custom Workflows Pros 0:06:03 Custom Workflows Cons 0:09:18 Performance
Stable Diffusion Basics: Civitai Lora and Embedding (Part 12)
Views 3K · 11 months ago
In this video I go over the basics of using Loras and embeddings
Stable Diffusion Basics: High Res Fix (Part 11)
Views 6K · 11 months ago
In this video I go over the basics of using High Res Fix
Stable Diffusion Basics: Upscalers (Part 10)
Views 906 · 11 months ago
In this video I go over the basics of using upscalers. Upscaler comparison link: www.reddit.com/r/StableDiffusion/comments/y2mrc2/the_definitive_comparison_to_upscalers/
Stable Diffusion Basics: Face Restoration (Part 9)
Views 2K · 11 months ago
In this video I go over the basics of Face Restoration
Stable Diffusion Basics: X/Y/Z Plot (Part 8)
Views 1K · 11 months ago
Stable Diffusion Basics: Prompt Matrix (Part 7)
Views 1.3K · 11 months ago
Stable Diffusion Basics: Civitai Intro (Part 6)
Views 1K · 11 months ago
Stable Diffusion Basics: Batching (Part 5)
Views 1K · 11 months ago
Stable Diffusion Basics: CFG scale (Part 4)
Views 1.6K · 11 months ago
Stable Diffusion Basics: Sampling Steps (Part 3)
Views 1.6K · a year ago
Stable Diffusion Basics: Seeds (Part 2)
Views 1.3K · a year ago
Stable Diffusion Basics: Introduction (Part 1)
Views 1.6K · a year ago
Quick and Easy Guide to Control Net (Beginner Friendly)
Views 1.7K · a year ago
A Basic Intro to Deforum (Stable Diffusion) - Transform Prompts and Images into Video!!!
Views 24K · a year ago
Remastering Video Game Cutscenes with Stable Diffusion Temporal Kit (and ControlNet and Ebsynth)
Views 9K · a year ago
Why everyone else's Stable Diffusion Art is better than yours (Checkpoint, LoRA and Civitai)
Views 69K · a year ago
Chat GPT has an open source rival (Open Assistant)
Views 534 · a year ago
I've got one question and maybe somebody can help me: I'm typing in my text prompt as I wish. To try different text prompts I have to type in the text, test it, and then type the next one. Can I use different text prompts on one picture, press a button, then wait for an hour and see the results? At the moment I type in one prompt, press refine, and see the result. Then I type in the next prompt, press refine, and so on. I want to type in all the prompts (always used on the same single picture), press one button, and then see the results later. Can anybody help me?
Thanks a lot!
When I use hires fix it stops the refiner from working. Why might this be?
Hi Res steps at 0 will use the same number of steps as your regular steps, so 20 in your case. 1 or above sets how many steps the Hi Res pass itself uses, so experiment, but I tend to go with about half my generation steps or lower. The more you have, the longer it will take, but you'll see diminishing returns eventually.
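For anyone still unsure, a tiny sketch of the rule as described in that comment (not A1111's actual code, just the behaviour):

```python
def hires_pass_steps(hires_steps: int, sampling_steps: int) -> int:
    # 0 means "reuse the main sampling step count"; any positive value
    # is used directly as the step count for the hi-res pass.
    return sampling_steps if hires_steps == 0 else hires_steps

# Example from the comment above: regular steps = 20, Hires steps = 0 -> 20.
print(hires_pass_steps(0, 20))   # 20
print(hires_pass_steps(10, 20))  # 10
```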
My "Hires.fix" stopped working :( It is showing the same resolution, no upscaler, "from 512*512 to 512*512". Does anyone know how to fix it? Thank you.
I know this is crazy, but I'm trying to understand the why and the actual application.
1. improving AI technology, and 2. one day, this technology could be used to make brand-new games from scratch
I was thinking about AI-generated maps, so you could play a completely new map each game. But this is next-level stuff right here.
Thanks for the video....helped a lot
Mind blown 🤯
Yes, when I saw it I was quite impressed. Imagine the future!
Please use WebUI in dark mode so the settings are more visible in your video. Thanks. I did not know that I needed trigger words for some models.
stable-diffusion-webui\embeddings\ stable-diffusion-online\models\Lora\
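As a quick way to confirm files landed in the right place, a minimal sketch that lists the contents of those folders; the base paths are the ones quoted above, so adjust them to your own install:

```python
from pathlib import Path

# Folders quoted in the comment above; change these to match your install.
folders = [
    Path(r"stable-diffusion-webui\embeddings"),
    Path(r"stable-diffusion-online\models\Lora"),
]

for folder in folders:
    print(folder)
    if folder.is_dir():
        for f in sorted(folder.iterdir()):
            print("  ", f.name)
    else:
        print("   (folder not found)")
```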
Do you know if it's possible to train a LoRA with an 8 GB VRAM card? (I have a 2070)
Not entirely sure, I think the minimum is 12 GB.
Solid. Is Flux image-to-image possible too?
Yes, it should be.
Thank you! I've been holding off on Flux on account of low VRAM - looks like I've no excuse now! BTW, should we have the latest ComfyUI installed?
Yes, you should update; it may still work with older versions, however.
As a user of both, I'd say pick what you want; you're still going to have to put your brain in gear when either of them breaks. That's when you really learn! But Auto then ComfyUI was, I'd say, a good way for me. At the moment I'm going back to Auto to see what I missed while running ComfyUI for so long.
To me it seems like one of those things where, if you need it, you need it. Otherwise just stick to ComfyUI. Most people only scratch the surface of A1111 as it is.
Amazing work!! Did you continue your experiment?
Brilliant series, I'm quite versed but even this helped me pick up little things. Much appreciated!
How did you get that generation history panel at the bottom? I've been trying to get it to show since I started using Comfy a few weeks ago, but haven't been able to. I've seen people saying there should be an x at the bottom right that toggles it on or off, but I don't see one, no matter which browser I use, including Brave. Some also said to set the page zoom to 80% as it's probably hidden, but still nothing. Is it a custom plugin?
Found it. I hadn't noticed that resize feed label in the bottom left of your workspace at first, so after a quick search, I found it in the pythongosssss/ComfyUI-Custom-Scripts node.
Finally 🤤
I was wondering why. Thank you
Its nothing like generative fill
Shit I still don't get it...
Thanks for the video. I'm wondering, is it possible to "bulk" highres fix? Like if I have 10 pictures I want to highres fix with the same settings? Right now I'm manually reloading them via PNG Info -> txt2img and highres fix.
Is it better than Photoshop?
Yup
So Automatic1111. Got it.
What do I do if there are multiple different faces? Should I just send it to inpainting and do one face at a time?
Hm, this looks more like a simple color overlay, not a real change of lighting conditions 🤔
Fantastic video. Having a summary of pros and cons at the end for each model would be wonderful though. Anyway, outstanding work.
Thank you
Thank you for the tutorials 👍
Thank you sir ❤
Very good!
Very helpful!
Then how do I download SD 1.5?
Somehow my results are unsatisfying with Stable Diffusion Automatic1111 when trying to generate locally on my PC with my old GPU, a GTX 1080 Ti 11 GB. I can't even get anything close to looking good or anything comparable. If I have to be honest, Automatic1111 local generation can't even compare to the simple and basic generation of Leonardo AI, so with that said, am I missing anything? People claim they were able to generate stunning art with Automatic1111, but when I tried it, it was so childishly unsatisfying - the results are worse than a 5-year-old's drawing. I've tried about 300 generated images now and I can only conclude that it won't take me anywhere without a better guide to a more pleasing result.
I'm trying SD 1.5 cyberealisticv50 and the pictures are always goofy, needing editing at best. I think others don't have this problem. SDXL seems so much better - maybe it's because I don't use embeddings?
Clear explanation. Next, textual inversion?
You will be the VaatiVidya for SD....
Thanks, clear explanation.
Thank you so much.
But what I don't understand is that most video tutorials said ComfyUI is easier... what the hell.
I would say it is easier because you don't need to touch any settings files. It was so much hassle trying various arguments in A1111 for low VRAM, having to restart the program every time. Not to mention that when A1111 crashes during generation, you also need to restart the whole program.
Did not work.
Thank you very much for this video.
OMG, the Ctrl + arrow keys. Didn't know that. TYVM!
7:38 - not really. You could save and load different presets ;-)
Great tutorials man. Appreciate you 👏
I like the hi-res steps because you can turn on an extra prompt box that will put in those prompts during those steps; it is an easy way to change faces. 👍
Question: with Hires enabled and a batch of 4 selected for generation, how do I build on an image I like and let Hi Res do the rest? If I apply highres, each time I will get different results and miss a previously better-generated one.
I just wanted to say thank you for your simple explanation. This video is extremely helpful for someone who doesn't know anything about Stable Diffusion and is just getting into it. Thanks for making this video!