- Videos: 25
- Views: 10,569
Professor Lich
Australia
Joined 19 Oct 2019
ComfyUI Crash Course 2024 (Part 4 of 4)
💡 Video covers:
- Mask Construction:
- Solid Mask
- Mask Composite
- Input Types:
- Widget Inputs
- Connection Inputs
- Converting Inputs
- Primitive Nodes and Primitive Data Types
- Grid Alignment (Snap-To-Grid)
Links to resources:
- Workflow link: civitai.com/articles/9287
- CyberRealistic: civitai.com/models/15003
👍 If you found this helpful, please Like & Subscribe. Any donations are warmly accepted below:
- www.patreon.com/professorlich/
- ko-fi.com/professorlich
Views: 235
Videos
ComfyUI Crash Course 2024 (Part 3 of 4)
545 views · 21 hours ago
💡 Video covers: - Traditional Upscaling Algorithms - AI Upscaling - Latent Upscaling - Input Nodes Links to resources: - Civitai (workflow link): civitai.com/articles/9102 - CyberRealistic: civitai.com/models/15003 👍 If you found this helpful, please Like & Subscribe. Any donations are warmly accepted below: - www.patreon.com/professorlich/ - ko-fi.com/professorlich
ComfyUI Crash Course 2024 (Part 2 of 4)
1.1K views · 14 days ago
💡 Video covers: - Traffic Cones - SEGS Education - Workflow Execution - Noise Modes differences - Weight Normalization differences Links to resources: - Civitai Link: civitai.com/articles/8893 - CyberRealistic: civitai.com/models/15003 👍 If you found this helpful, please Like & Subscribe. Any donations are warmly accepted below: - www.patreon.com/professorlich/ - ko-fi.com/professorlich
ComfyUI Crash Course 2024 (Part 1 of 4)
1.9K views · 21 days ago
Welcome to ComfyUI Crash Course! You may be familiar with Automatic1111, but are you ready for a deep dive into ComfyUI? 💡 Covered in video: - text2image - outpainting - inpainting - detailing 🎬 If you wish to replicate my steps, be sure to download the resources below: - Masks: civitai.com/articles/8730/comfyui-crash-course-2024 - CyberRealistic v5: civitai.com/models/15003?modelVersionId=537505 - C...
Getting Started with IP Adapter (2024): A1111 and ComfyUI
825 views · 21 days ago
Welcome to the Computer Lab component of Topic 3, where we get started using IP Adapter with 'stable-diffusion-webui' (aka 'Automatic1111') and 'ComfyUI'. - Accessing IP Adapter via the ControlNet extension (Automatic1111) and IP Adapter Plus nodes (ComfyUI) - An easy way to get the necessary models, LoRAs and vision transformers using a downloadable bundle - Using IP Adapter in Automatic1111 - Comf...
What is IP Adapter? (Autumn, 2024)
1.2K views · 2 months ago
Welcome to Topic 3 lecture of our series on Stable Diffusion and Artificial Intelligence! In this video, we'll explore IP Adapter, an innovative technique for using image prompts to generate consistent and high-quality visuals in AI art. This short video covers: 🔹 What is IP Adapter 🔹 Decoupled Cross-Attention mechanism 🔹 Differences from classic 'image-to-image' 🔹 Matching IP Adapter with Visu...
Stable Diffusion 101 - Topic 2: Models and Platforms (Computer Lab)
245 views · 3 months ago
Website: 🔹lichacademy.org GitHub: 🔹github.com/LichAcademy/Lich-Courses Support me: 🔹www.patreon.com/professorlich/ 🔹ko-fi.com/professorlich Key Highlights: 🔹AI tips 🔹Essential WebUI settings 🔹Browser Differences 🔹Resolution Numbers for Convenience 🔹Folder Structures If you find this video helpful, don't forget to like, comment, and subscribe. Share your suggestions for future topics in the comm...
Stable Diffusion 101 - Topic 2: Models and Platforms (Lecture)
126 views · 3 months ago
Alternative title: "Fantastic Models and Where to Find Them" - tiny glimpse into inner workings of Stable Diffusion: not too much to scare anyone away, but enough to introduce Variational Autoencoders (VAE) and CLIP. - a quick recap of foundation models from 2022 until today github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab)
188 views · 4 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab) [in Japanese]
29 views · 4 months ago
Introduction to the course, and an overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Computer Lab) [in Chinese]
24 views · 4 months ago
Course introduction and an overview of some basic principles. github.com/LichAcademy/ComfyUI-Lich-Pack www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Lecture)
329 views · 5 months ago
Introduction to the course, and overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
Stable Diffusion 101 - Topic 1: Fundamentals (Lecture) [in Japanese]
71 views · 5 months ago
Introduction to the course, and an overview of some basic principles. github.com/LichAcademy/Lich-Courses www.patreon.com/professorlich/ ko-fi.com/professorlich
AI Summer School Lecture 3 (Lecture 3 in Chinese Mandarin)
89 views · 5 months ago
A summer school course on artificial intelligence, programming and machine learning. Like, subscribe and comment. :) github.com/LichAcademy/ComfyUI-Lich-Pack www.patreon.com/professorlich/ ko-fi.com/professorlich
ComfyUI Lecture 3 - Custom Nodes Part III
327 views · 5 months ago
ComfyUI Lecture 3 - Custom Nodes Part III
ComfyUI: Making Your Own Custom Nodes
1.6K views · 6 months ago
ComfyUI: Making Your Own Custom Nodes
How to install Kohya GUI on Windows
1.1K views · 7 months ago
How to install Kohya GUI on Windows
PF2e Remastered Lecture 05: Immunity, Weakness and Resistance
70 views · 10 months ago
PF2e Remastered Lecture 05: Immunity, Weakness and Resistance
PF2e Remastered Lecture 01: Basic Mechanics
113 views · 11 months ago
PF2e Remastered Lecture 01: Basic Mechanics
PF2e Remastered Lecture 03: Defending
23 views · 11 months ago
PF2e Remastered Lecture 03: Defending
PF2e Remastered Lecture 02: Attacking
53 views · 11 months ago
PF2e Remastered Lecture 02: Attacking
Awesome, I learned a lot. Would you please do a ComfyUI ADetailer video: face, hands, feet and body, then refine the image (refine meaning sharpen the details), enhance it and upscale it?
Thanks for the lecture! All this time I've been using "Int" and "Float" custom nodes, not knowing that there's this handy "Primitive" node native to ComfyUI that does everything these custom ones do and more. 🤦♂
Really amazing video! Maybe one thing for future videos would be adding SetNode and GetNode; they really do wonders for modularity in ComfyUI.
Thank you for this excellent video! I truly enjoy content like this. You break down how to work with ComfyUI in a way that not only helps me learn but also ensures I understand what I’m learning. Too often, people spend an enormous amount of time cleaning up the workflow, only to make it harder to follow. I appreciate how you focus on clarity and understanding. Thanks for helping us better grasp ComfyUI!
I had trouble making your method work, but after two days I finally got it. THANK YOU for this essential video!
Looking forward to diving into this series.
The video is great
You’re a phenomenal teacher thank you so much. I would really like to see content that covers video and animation.
Thank you for bringing the theory. I've been dying for content of this nature. Amazing work.
For users of ComfyUI: your bundle of models and LoRAs strikes me as the perfect complement to YanWenKun's version of ComfyUI Portable with preinstalled nodes. The 3-part zip file on YanWenKun's GitHub gives me portable Comfy with insightface etc. installed and dependencies resolved, without the models. Your IP Adapter bundle supplies all the relevant and up-to-date IP Adapter models, which are a pain to sort out.
"(Part 3 of 3)" ---> Sniff. I have really enjoyed your videos so far - are you sure there is no way we can bribe you to keep going? Though I'm sure you have your reasons. Making these videos is very time-consuming, and there has to be some pay-off on the horizon. Then again - if ComfyUI should pull off becoming the Blender of AI image and video generation, investing in the tedious and painfully SLOW process of growing follower numbers now might pay off in the future: "the first ones become the big ones". Easy for me to say, since I don't have to put in the work :)
Oh, I'm not done making videos yet. 😅 I'm just contemplating if I should continue 'ComfyUI Crash Course' series, or start a new one. Like 'AnimateDiff 101', or perhaps something on LyCoRIS / LoRA training. More videos will be coming. 👍 For now. 😅
@@professorlich very happy to hear :)
"I have deliberately left out putting output nodes into the workflow, because you can use them to control how far the workflow gets executed." (paraphrasing) --> So obvious now that I've heard you spell it out - and yet, I hadn't thought about that before.
What is the minimum GPU VRAM required to run ComfyUI, bro?
A 4-6 GB Nvidia card, though it depends on what you're actually going to do.
Since there is so much to learn, I feel like I am kind of swimming, in so many respects. I kind of improvise my way through. I had noticed that Comfy and Forge produce different images, but I had no idea why. I have no idea where Comfy actually gets its images from when using the standard "Load Image" node. (With "Load image from path", it is clear.) For example, when I use images as reference, like when using ControlNet - ideally, I would want to be able to reproduce the exact same output I got, just by throwing in the generated image. Somehow, at the moment, it seems to have no trouble finding those images. If you asked me now how it does this, and what I would need to do to make sure it will still be able to do this a year from now, I would have to say "I have no idea". The list goes on and on. I can use Comfy, and I absolutely love using it, but there are so many fundamental questions I don't know the answer to.
Hmm... I suppose it depends on how far 'down the rabbit hole' you are prepared to go. 🤔 For example, regarding the Load Image node... ComfyUI 'knows' exactly where the folder in your OS is, and puts together a list of the files inside it. Everything is open source, so nothing stops you from doing a deep dive into the ComfyUI codebase. And it is quite fascinating. I did put together a 'Learning Python with ComfyUI' course. Although those are some of my older videos, feel free to have a look if you are interested in programming, or in figuring out how it all works 'under the hood.' 👍
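For the curious, here is a minimal sketch of the kind of folder scan described above. This is illustrative Python, not ComfyUI's actual LoadImage code (the real node resolves its folder through ComfyUI's internal path helpers); the function name and extension list are my own:

```python
import os

def list_input_images(input_dir):
    """Return image filenames found in an input folder, sorted.

    Illustrative sketch of how a Load Image-style node could build
    its dropdown list: scan one directory, keep only image files.
    """
    exts = (".png", ".jpg", ".jpeg", ".webp")
    return sorted(
        f for f in os.listdir(input_dir)
        if f.lower().endswith(exts)
        and os.path.isfile(os.path.join(input_dir, f))
    )
```

Pointing this at a folder of mixed files returns just the image names, which is essentially what the node's file dropdown displays.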
@@professorlich I'm very interested, although I'm learning python as slow as a snail, looking forward to your new work!
Your videos are so refreshingly... different. Introductory AND deep, at the same time. Both videos that I've watched so far gave me answers to a number of questions that have been lingering in the back of my mind for quite some time. Great stuff, I'm hooked!
Thank you for the feedback. I try to include as many useful tips and tricks as I can, especially little 'gems' I myself had to learn the hard way. I am glad you found the video helpful.
SEGS education had me dying LOL.
Ha. I found it. In the sped-up tidying part, you connected the inpainting pipe to the KSampler for the inpainting; I had connected the first pipe further along. This is what I wrote at first: (It is funny how I get a different image than you after inpainting into the outpainted image. My new part of the kitchen looks very different from yours. Strange.) A very good tutorial.
I am glad it worked out. I realise now that I have sped up the video too fast at times. In fact, it was your comment that gave me the idea of inventing these 'Ikea-style workflows' (see Part 3). Hopefully, this will allow me to deliver content with speed, without people getting lost. Thank you for your kind words. Let me know how you find the new approach. 👍
I didn't expect that song at the end. Nice!
Your videos are amazing Prof. I just subscribed. Thanks for this.
This looks like a very down to earth educational channel with great approach to teaching. Subscribed! :)
I am sub #300! I learned a lot, and I've been trying to get good with ComfyUI for a few weeks. Thank you.
Out of over 40 (often very good) videos using ComfyUI I’ve watched this is by far the best introduction by combining an excellent overview with varied practical examples and useful tips and tricks. All in a brief and well structured format. Excellent work. Looking forward to more from you.
Completely agree. The amount of shitty videos made by AI talking about making AI images is too damn high. This is an excellent video.
Wow, thank you so much! Your words of encouragement are what keeps me going, as my video editing skills slowly develop. 😅 I will do my best to keep up with quality output. 👍
Hmm... I know what you mean. I can only speculate, but I suspect that it may be due to privacy concerns. 🤔 Or perhaps stigma around AI. Nonetheless, thank you for the kind words, Justin. 👍
I absolutely loved watching this, so much valuable information in one swift "painless" go :) Now I know how to do outpainting and inpainting! I know about the ToBasicPipe node, and I know how to increase the number of suggestions when dropping a cable into the empty canvas. No time wasted, super useful stuff, presented in a super clear, easy to understand way. Thanks!
Thank you for your kind words. ❤️ Leaving a comment boosts the visibility of the video, and my own morale to keep making more. Part II of the Crash Course is coming up. A 2-hour recording has been reduced to 15 minutes, and I am nearly done editing it. 😅 It will be up within the next 24 hours. 👍
fantastic thank you
One thing to keep in mind, even if you think someone else explains a topic better than you, just be aware, to the watching individual, you might make more sense. Happens for me all the time. Love your take on topics. Thanks for posting!
Nice. Thank you.
You flip the slides too fast.
Thx for the summary.
As a self-proclaimed pun aficionado: all good choices. Why choose, when you have the undead academic angle cornered?
THANK YOU - straightforward explanations, a lighthearted and informative style, a logical progression of concepts, and clear, no-nonsense examples that don't insult my intelligence or flash by in a cloud of jargon or tangents. It's unfortunate it has taken me this long to find anything remotely resembling a non-biased, non-clickbait, non-"hey look at me and the cool dumb thing I did" take on all this, because I think it is valuable information that artists and non-artists alike just want to have, to explore and learn like anything else. TYTYTYTYTY... Please keep posting content!
The hideous, obviously AI-generated thumbnail...bleh.
AI-generated? It's far worse than that, I'm afraid. It's Adobe Photoshop. ❤️
@@professorlich I have been using Photoshop for 32 years. I'm also a machine learning developer. You may have run the model output through Photoshop, but it didn't successfully cover up where you got it.
@@KAZVorpal Shame using Photoshop for 32 years didn't teach you any class... bleh
Indeed, I did exactly as you said. There is no 'cover up'. In my earlier message, I was just trying to be funny (and failed miserably). 😅 Thumbnail is new, so thank you for the feedback. I will try and do better next time. 👍
@Tyrell I only used Photoshop for about 1 year (or so). Regardless, you are probably right regarding class, I will try to do better next time, perhaps less swearing and more professional. 👍 (Edit. Sorry, Tyrell, just realised you were not addressing me.)
1:54 The problem is that human language implies humans' bodily experience. An LLM does not and cannot have such experience; all it has is a mishmash of a dictionary index on top of a picture library. Therefore, all efforts to explain to the machine the difference between these two pictures are futile.
I have followed everything and triple-checked, but when I try to open SD through the shortcut I get "No Python at '"C:\Users\...\AppData\Local\Programs\Python\Python310\python.exe'". I installed Anaconda and SD on different drives; could that be causing the issue?
Possibly. If your Anaconda is on a different drive, try this:
1. Click on the Start Menu (Windows Key) and start typing 'Anaconda Prompt'.
2. Instead of clicking on it, right-click on it and click 'Open File Location' (the option under 'Run as Administrator' on Windows 11).
3. You will see a folder with several shortcuts: Anaconda Prompt, Anaconda PowerShell, Anaconda Navigator, etc. Right-click on Anaconda Prompt and select Properties.
4. In 'Target', you should see something like this: %windir%\System32\cmd.exe "/K" C:\Users\professorlich\anaconda3\Scripts\activate.bat C:\Users\professorlich\anaconda3
My trick with shortcuts is essentially a modification of the above. %windir%\System32\cmd.exe is the path to the regular command prompt executable (%windir% is a shortcut for C:\Windows). Pay attention to the second path: C:\Users\professorlich\anaconda3\Scripts\activate.bat - if you have installed Anaconda on drive D or E, this will be different.
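To make that last step concrete: assuming Anaconda were installed to D:\anaconda3 (a hypothetical path; substitute your actual install location), the shortcut's Target field would look something like this:

```shell
:: Default Anaconda Prompt shortcut Target (install under the user profile):
::   %windir%\System32\cmd.exe "/K" C:\Users\<name>\anaconda3\Scripts\activate.bat C:\Users\<name>\anaconda3
:: Hypothetical Target for an install on D:\anaconda3 -- both paths change:
%windir%\System32\cmd.exe "/K" D:\anaconda3\Scripts\activate.bat D:\anaconda3
```

The "/K" flag keeps the command window open after activate.bat runs, which is what makes the prompt usable afterwards.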
By the way, this is one of my first YouTube guides. Shortly after making it, I realised I needed to explain better what is going on, rather than getting people to blindly follow the recipe. To address this, I have made an update to this guide. See below (skip ahead): ruclips.net/video/RDuIeuOIB7s/видео.html This updated video has a better explanation and visual aids, so that people can develop a sense of what happens when they execute these commands. 👍
@@professorlich Thanks for trying to help. I just ended up placing everything on the same drive and it was fine. I will be checking out the updated guide, though.
Ohh here it is, time to grab a tea
Still waiting for new videos, my friend; Flux is amazing. I would love to understand how it works and what you can do with it. ComfyUI is getting amazing updates too.
Thank you, Kobe. Next video will discuss architecture of SD1.5. In truth, pulling apart architecture and grappling with the basics is the reason it's taking so long. I am, after all, breaking into academic field that is entirely new to me. Still, you would be surprised how much stuff is in there that we don't talk about. Next video will lay important foundation for us to build further upon. In short: expect something by the end of this week, latest. 👍
@@professorlich Looking forward to it :)
Excellent! :) Being able to write even simple nodes for ComfyUI could be really useful!
Hello sir. Is it possible to create a UI based on a custom ComfyUI workflow? I mean, is it possible to do this:
1. Create and test ComfyUI workflow A (this workflow will run custom nodes to remove the background from an image).
2. Build an executable for Linux, Mac, or Windows (which would still work even with no ComfyUI installed). When run, it would have a UI like in this example: a button which, when pressed, removes the background from the images in a specified folder.
Or... rather than an individual executable, what about remotely triggering ComfyUI workflow A via CLI from a client computer to a ComfyUI server?
git clone remoteComfyUI
sh ./remoteComfyUI.sh -d "~/Pictures/removeBackroungs" -server 192.168.255.255 -verbose
192.168.0.88 ~/Pictures/removeBackroungs/testimg1.jpeg sent
192.168.0.88 ~/Pictures/removeBackroungs/testimg2.jpeg sent
192.168.0.88 ~/Pictures/removeBackroungs/testimg1.png sent
192.168.0.88 ~/Pictures/removeBackroungs/testvid.mp4 sent
192.168.255.255 processing ~/Pictures/removeBackroungs/testimg1.jpeg success
192.168.255.255 processing ~/Pictures/removeBackroungs/testimg2.jpeg success
192.168.255.255 processing ~/Pictures/removeBackroungs/testimg1.png success
192.168.255.255 processing ~/Pictures/removeBackroungs/testvid.mp4 failed (not an image)
saved into "~/Pictures/removeBackroungs/output"
The idea was to help folks at home, and maybe neighbours who just want to do 1 or 2 specific things with their pictures... or they can just keep using subscription-based Canva 🙃
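On the remote-trigger idea: a running ComfyUI server does expose an HTTP endpoint, POST /prompt, that accepts a workflow exported in 'API format' (via "Save (API Format)" in the ComfyUI menu). A minimal sketch in Python; the server address, workflow contents, and function names here are placeholders, not part of any official client:

```python
import json
import urllib.request

def build_prompt_payload(workflow):
    """Wrap an API-format workflow dict the way POST /prompt expects it."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_workflow(workflow, server="127.0.0.1:8188"):
    """Queue a workflow on a running ComfyUI server.

    `server` is a placeholder address; 8188 is ComfyUI's default port.
    The response includes the queued prompt_id.
    """
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_prompt_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A batch script could loop over a folder, swap the image filename into the workflow dict, and call `queue_workflow` once per file, which is roughly the CLI scenario sketched in the comment above.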
Great videos, great quality content! I was wondering: is it possible to create a custom node that uses code with a different Python version? I want to create a cool node and I already have working Python code, but it only works on Python 3.7 because it needs TensorFlow 1.5. Is this even possible to accomplish in ComfyUI, maybe by having the custom node create a virtual environment?
I won't say it's impossible, but... I personally wouldn't know how to do it. When it comes to TensorFlow (Google) vs PyTorch (Meta), my impression is that PyTorch is "winning", as of this writing (2024). Perhaps due to greater adoption by academics? I don't know. What made you use TensorFlow, if you don't mind me asking? If you are looking at building a front-end, have a look at this repo: github.com/jagenjo/litegraph.js/ This is the node-based library that powers ComfyUI. Also, ComfyUI now has a Discord server, you can approach developers directly: discord.gg/MrtNUNEx I hope this helps. Good luck. 👍
@@professorlich Thx for taking the time to respond and link some useful stuff. Why TensorFlow: because I am playing around with lucid-sonic-dreams, and it only works with an older TensorFlow. There are also working versions of lucid-sonic-dreams with PyTorch, but I had problems getting them to work. Thanks for your clear, understandable videos; they helped me create some custom nodes for this project. I first tried some different approaches that I don't think were possible; in the end, this is how I made it work: I created a simple node that just makes an API call to my custom Flask API, where I run the lucid-sonic-dreams stuff with all the right versions. So thx again, have a great day sir!
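For readers wanting to follow the same pattern, here is a hedged skeleton of a ComfyUI custom node that delegates its work to an external service. The class name, category, and stubbed body are hypothetical; only the INPUT_TYPES / RETURN_TYPES / FUNCTION / NODE_CLASS_MAPPINGS structure reflects how ComfyUI discovers and registers nodes:

```python
class RemoveBackgroundViaAPI:
    """Sketch of a node that would hand work off to an external
    HTTP service (e.g. a Flask app running an incompatible
    Python/TensorFlow stack). The endpoint call is stubbed out."""

    @classmethod
    def INPUT_TYPES(cls):
        # One required string input, shown as a text widget in the UI.
        return {"required": {"image_path": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"          # name of the method ComfyUI will call
    CATEGORY = "examples/api" # hypothetical menu category

    def run(self, image_path):
        # A real implementation would POST image_path (or image data)
        # to the external service and return its response; stubbed here.
        return (image_path,)

# ComfyUI reads this mapping from the custom-node package's __init__.
NODE_CLASS_MAPPINGS = {"RemoveBackgroundViaAPI": RemoveBackgroundViaAPI}
```

Keeping the heavy dependencies on the other side of an HTTP boundary is what sidesteps the Python-version conflict: the node itself only needs the standard library.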
Awesome work! It's the kind of content I hoped it would be; looking forward to the next video!
Thank you, more to come! :) Just need to tidy up my GitHub repo first. 👍 And those commands I promised.
Can someone confirm whether this is correct: Automatic1111: Stable Diffusion web UI; ComfyUI: kāng fēi yōu ài (康菲优爱)
If you have a deep understanding and can go beyond the surface, explaining how things work underneath to give people a deeper understanding, people will listen; there are too many "click this button" tutorials. Comfy desperately needs people who explain what's actually going on, not just "click this button", so that the average user can understand it, use it the way they want, and maybe get a grasp of its capabilities. The best channel I've seen so far that does this really well is LatentVision, but I wish he had the time to go even more in depth and make more videos. Thanks for what you're doing; I hope this will always stay free and open for everyone to see.
Kobe, regarding the last bit. If push comes to shove, I'd rather abandon this channel than abandon my values. GitHub has a short bio of who I am and what I stand for: github.com/LichAcademy/ComfyUI-Lich-Pack Let me put it this way: I would rather shut down this channel, than put any content behind paywall. And even then, before doing so, I'd share OneDrive folder with direct downloads to all my videos with everyone. I wrote on my Patreon to would-be subscribers: "Education should be free and accessible to everyone. I won't hide any educational resources behind a paywall. However, I do want to express my gratitude to those who support me by... (etc.)" Nonetheless, thank you for the kind words, Kobe. ❤️
I'm even conflicted about YouTube monetization. I mean, is the video truly free if you pay for it with your attention or privacy? I do wonder how Wikipedia does it, what their business model is, etc. But I digress...
looking forward to learning from you Mr. Lich!
Shoutout to the Lich family! Keep the lectures rolling man <3
10:39 I misspoke at this point: *Models (not molecules)
Excellent tutorials. I enjoyed them very much. But how do we get the skull (or any) icon in our category? Apologies if you explained it and I missed that :) Please keep up the good work, and I will keep watching even though YouTube wants to feed me Chinese content now, rofl.
Good question. If you are on Windows, hold down Windows Key + . (dot, punctuation mark). PS This is not a programming thing. It works on Discord, Notepad, Outlook, browsers... I am using it in this comment: 🥲▶️👍😅🙏 Edit. It may depend on the version of Windows you are using. Let us know if it works for you. 👍
@@professorlich Yes, that worked perfect on Windows 10. Thank you, Now I just need to figure out how to add custom icons to Windows. 🤣
Clearly structured and well done! Thank you for your excurse into history! 😊
Glad you enjoyed it! And yeah, I have a tendency to go off on tangents occasionally haha 😅
Thanks, I think I'll have to go back to the beginning.
OMG, thank you for the tuple information! I never understood what that meant!
Thank you so much! As a beginner it's hard to find good information, and you are so clear and precise! More, more, more please!
More to come! :) This weekend, fingers crossed. 👍