- 47 videos
- 22,057 views
A.eye_101
USA
Added Feb 3, 2021
Generative
SAM 2 (Segment Anything 2) from Meta AI for ComfyUI, by Kijai
A ComfyUI node that integrates SAM 2 by Meta. Kijai is a very talented community dev and has graciously blessed us with an early release. Hope everyone enjoys the video.
ComfyUI Node by Kijai
github.com/kijai/ComfyUI-segment-anything-2
Join us on discord
discord.gg/aC7ZXRc4c6
Workflow on
discord.com/channels/1076117621407223829/1268090166388719656/1268090166388719656
Views: 5,036
Videos
Stunning HD Visuals: Exploring Nature's Raw Beauty with Generative Animations
40 views · 5 months ago
Discover the mesmerizing world of nature's raw beauty through captivating generative animations and high-definition visuals. This video showcases intricate textures and vibrant colors, providing a serene and immersive experience. Ideal for visual art enthusiasts and lovers of natural aesthetics, our journey highlights the untouched splendor of natural elements. Watch now to explore the wonders ...
PATH: Predictive Algorithm for Temporal Hyperdimensionality
95 views · 5 months ago
'PATH: Predictive Algorithm for Temporal Hyperdimensionality,' a narrative that illustrates how AI can predict and shape our decisions for a better tomorrow. PATH uses real-time data to predict outcomes, offering insights that guide us through complex challenges with precision. This short film was created as an entry for the Project Odyssey A.I. Film Making Contest, using @elevenlabsio f...
TensorRT Real-time Generative VFX in ComfyUI
3.3K views · 6 months ago
TensorRT Real-time Generative VFX in ComfyUI quick tutorial by A.eye_101
Thanks ! 😀👍
@@MilesBellas totally 😇
I AM, I AM
You can indeed mask many regions; that's on florence2coordinate. It's number-coded: the first is 0, the second is 1, and so on. So if you want the bird and the claws, zoom in and check your preview images for the number codes. Also, in the newer update of sam2segmentation, set individual_objects to true and add bboxes from florence2coordinate as well, not just the mask. (See the sketch below.)
@marhensa yeah, I posted that in the discord for the tutorial. Thanks to an update. 😇
Also, Florence 2 doesn't lock onto that number for a given object; if it loses track and object 1 becomes 2, the mask will bounce around. This was made the same day SAM 2 was released.
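For anyone who wants to try the same trick outside ComfyUI, here is a minimal Python sketch of box-prompted SAM 2 segmentation, roughly what the florence2coordinate → sam2segmentation wiring does. The config/checkpoint names, file paths, and the two example boxes are assumptions for illustration, not taken from the video.

```python
# Minimal sketch: box-prompted SAM 2 on a single frame, keeping each box
# as its own object (like individual_objects=true in the ComfyUI node).
# Checkpoint/config names and file paths below are assumptions.
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

model = build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
predictor = SAM2ImagePredictor(model)

image = np.array(Image.open("frame_0001.png").convert("RGB"))
predictor.set_image(image)

# Index-coded boxes, e.g. from a Florence-2 detector: 0 = bird, 1 = claws.
boxes = np.array([[120, 40, 480, 360],    # object 0: bird
                  [300, 280, 420, 360]])  # object 1: claws
masks, scores, _ = predictor.predict(box=boxes, multimask_output=False)
print(masks.shape)  # one binary mask per box
```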
After creating the mask how would you go about replacing the bird without changing the background scene? lets say you wanted to replace the bird with a cyborg parrot?
useless "tutorial" IT'S RIDICULOUS HOW YOU ASSUME THAT WE ALL ALREADY HAVE THOSE NODES ALREADY INSTALLED OR THAT WE EVEN KNOW WHAT THEY ARE AND WHERE TO GET THEM, stop making "tutorials", you don't even know what you are doing, they are useless.
Great video! Thank you for the clear explanation of everything! A small question: I keep getting the rectangular frame from Florence together with my mask. How do I remove the frame and get only the mask?
Did you find a solution?
@@philippeheritier9364 Yes, I got another workflow from someone that just didn't produce this.
Love your username!
🎉😮❤ thanks 🙏🏽
hahaha I just realized after reading your comment
Great tutorial thanks! 🙌
@@josephine.miller glad you enjoyed it 😊
Great tutorial. When I output, it keeps the red on the subject. Is there a way to output the subject only, without the red?
@lionhearto6238 This is just to show an example; you'll have to change it for your needs.
@@A.eye_101 I actually came here to say the same thing. I can get SAM 2 to mask no problem, but I don't do enough ComfyUI AnimateDiff workflows to know what nodes I need to put in after I have masked the subject. The Video Combine node only has a "filenames" output on that end, the ImageCompositeMasked node has an "image" output, and ImageAndMaskPreview just has a "composite" output. Any chance you can share a workflow that has a simple conclusion for video-workflow noobs? :D Thank you for the great, simple video, btw.
Thank you very much for your tutorial. ComfyUI as a nodal image compositor has made a great step forward.
Glad you liked it
How do you do it so fast? This is amazing.
Have to ask Kijai, the nodes are his magic. I'm just telling people how to use them a certain way
It just released today; thanks for the speedy try. But what are the potential use cases?
If you're using segmentation nodes, you'll notice cleaner and better tracking with your mask.
GJ, you are fast! The new model is really accurate! Tons of possibilities!
Glad you think so!🎉
Noice, subscribed!
Average New Yorker
only average lol
That’s so cool! Thank you Mr tiger
@@noone545 glad you watched and enjoyed it 😇😅
I keep getting an error. I hate TensorRT; I wanna test SVD.
An error? More details?
@@A.eye_101 Can you try testing TensorRT on SVD?
@@nirdeshshrestha9056 what is your error?
lol Thanks Tony
That's Mr Tiger 🐅 🤣🤣
How can I achieve such high definition?
Magic 🎩
ai is scary
Scary? How so?
@@A.eye_101 man shut up
@@Yaseenoz2aawwwwe you butt hurt 😅
Shitter
Are you mad you can't program? This is from home-made code. 😅
It would be cool if you put any effort into it
Says this random that has zero videos 🤔😅😅😬
Transcendence of the cosmos 🎼Artificial Intelligence🎼 @patrykscelinamusic #alanwatts #transcendence #cosmos #lettinggo #aeye101
This is dope af!!!
Yooo, thank you 🙏🏽
Created with Stable Diffusion, ages 2 to 75. These are what-if imaginings of Dr. King's life, had it not been cut short.
What is textual inversion?
Textual Inversion is the process of teaching an image generator a specific visual concept by fine-tuning a new token embedding. In this case, furry lil creatures. (See the sketch after this thread.)
@@A.eye_101 I have another question brother!
@@A.eye_101 I have a dataset of images in multiple styles, like realistic, neon, cyberpunk, etc.; each image mixes several of these styles. I want to train a model from those images. So my question is: how should I arrange the dataset? Should I make separate folders for each style? Should I explicitly tell the model that an image is realistic, or will the model guess automatically from the prompt? The prompts also have different styles matching the images; should I arrange those in separate folders too? Should we train one style at a time if we make folders? And which Stable Diffusion model will be best for it? Sorry, that's a lot, but if you know this it would be a great help ❤️❤️
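To make the Textual Inversion definition above concrete, here is a minimal sketch of loading a learned concept with Hugging Face diffusers. The sd-concepts-library/cat-toy embedding and its <cat-toy> token are a public example standing in for the "furry lil creatures" concept; the base model choice is an assumption.

```python
# Minimal sketch: applying a Textual Inversion embedding with diffusers.
# The embedding adds one new pseudo-token to the text encoder; the rest
# of the model is untouched.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers the <cat-toy> token learned during Textual Inversion training.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

image = pipe("a photo of a <cat-toy> riding a bicycle").images[0]
image.save("concept.png")
```

Training works the same way in reverse: the textual_inversion example script in diffusers learns one new placeholder token per concept from a small folder of images, which is why multi-concept datasets are usually split into one folder per concept.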
Sweet.
😊 thanks
One of my favorite Daft Punk songs. This is a work of art.
This is really cool! Good job!
Music: Dia de los Muertos by Eric Miller
The AI created the animation by generating a slightly different image for each frame to create the illusion of movement. Each frame took 10 to 15 minutes on average, at 24 frames a second for a 10-second video, so it takes hours to make these on my PC. Alan Watts; music by Hippie Sabotage.
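A quick sanity check on the render-time math above, using only the numbers from the comment itself (10-15 minutes per frame, 24 fps, 10-second clip):

```python
# Back-of-the-envelope render time: 24 fps x 10 s at 10-15 min per frame.
fps, seconds = 24, 10
frames = fps * seconds       # 240 frames
low_h = frames * 10 / 60     # 40 hours at 10 min/frame
high_h = frames * 15 / 60    # 60 hours at 15 min/frame
print(f"{frames} frames -> {low_h:.0f} to {high_h:.0f} hours of rendering")
```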