My cmd window shows a "RuntimeError: OffloadedCache can only be used with a GPU". What does it mean, sir? I don't understand. The install works, but I still get this error when generating an image in the Gradio app from this OmniGen tutorial. Please help 🙏
you might need to reinstall torch in your conda env. check what cuda version you have and then install the appropriate ver of torch: pytorch.org/get-started/locally/ eg. if you're using cuda 11.8, you would enter this in your conda env: pip3 install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
You did not mention it, but when I tried it locally a few days ago (on a very powerful computer), generations sometimes took more than 30 minutes. Was it the same for you?
This is amazing. I only started playing with AI image tools recently, and this seems to solve several annoyances or problems. Unfortunately I don't have a high-end GeForce card to run this on, only an AMD Radeon APU, which even for Stable Diffusion needs an outdated port of the webui project. I haven't been able to get ControlNet working yet, as extensions seem disabled when I set it up to remote-connect across my network, and I'm not sure if they're compatible with the older codebase; the new codebase doesn't support APUs, which makes the generation time ridiculous. For something that gives such good results I might be willing to wait several minutes, but with an average of 40-50 iterations I have a feeling I'd have to go get a snack or something every time I hit generate. It's stuff like this that makes me want to go out and buy a new desktop/laptop with at least a 40-series GPU, though if I need a $1K+ GPU for this I might be out of luck; it still might run Stable Diffusion better, at least. If I could get something like this working it'd be a major thing. I have aphantasia, an almost complete inability to visualize images in my head (unless I'm dreaming, apparently, as dreams have been the most vivid images I've been able to imagine in my life, as rarely as I remember them), and perhaps as a consequence my artistic and creative abilities are nearly non-existent, so having some software that can fill in, even imperfectly, is kind of exciting, even if just for personal use.
@@theAIsearch All I have is a weak laptop, so I rely on Colab to run all these fancy AI models because I can't run them locally. If you find any Colab notebook to run this OmniGen, please share; I subscribed to you for this.
When I try to run the installation command it says "'pip' is not recognized as an internal or external command, operable program or batch file." Help pls
I do not know much about making AI images, but a lot about photo editing, so I got very interested! Did you say that this cannot be installed on a PC with a good AMD graphics card? Can anybody help me with this? I cannot find info about it with a search engine.
When I try to create an image I get an error, and the CMD window shows this on the last line. I do have an Nvidia RTX 3090, so why is it complaining about no GPU? "File "C:\Users\VR2\Desktop\Omnigen\OmniGen\OmniGen\scheduler.py", line 16, in __init__ raise RuntimeError("OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!") RuntimeError: OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!"
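A common cause of this error on machines that do have an Nvidia card is a CPU-only PyTorch wheel having been installed into the environment. A quick diagnostic sketch (guarded so it also runs when torch is absent; nothing here is specific to OmniGen):

```python
# Check whether the installed torch build can actually see a GPU.
# A version string ending in "+cpu", or has_cuda == False on a machine
# with an Nvidia card, means a CPU-only wheel was installed and torch
# should be reinstalled from a CUDA wheel index.
def cuda_status():
    try:
        import torch
    except ImportError:
        return None, False  # torch not installed at all
    return torch.__version__, torch.cuda.is_available()

version, has_cuda = cuda_status()
print("torch:", version, "| cuda available:", has_cuda)
```

If this prints False, reinstalling torch from the wheel index matching your CUDA version (as suggested elsewhere in this thread) is the usual fix.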
stuck here INFO: pip is looking at multiple versions of omnigen to determine which version is compatible with other requirements. This could take a while. ERROR: Could not find a version that satisfies the requirement torch
you might need to install torch in your conda env. check what cuda version you have and then install the appropriate ver of torch: pytorch.org/get-started/locally/ eg. if you're using cuda 11.8, you would enter this in your conda env: pip3 install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
When I try to run the command pip install -e . I get an error: 'pip' is not recognized as an internal or external command, operable program or batch file.
Showing an error while generating the image: "OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!" I don't have a desktop GPU; my setup is a laptop with a 6GB RTX 4060Ti, 16GB LPDDR5 RAM, and an i7 12700H processor. Won't I be able to run it?
Why, when we create a custom environment, does it still spit out dozens of errors from pip's dependency resolver about having the wrong versions of XYZ, instead of just downloading the actual ones I need? Python is such hot garbage. And yes, I am installing from the new (omnigen) environment. EDIT: I suggest people just install this using the Pinokio tool, as it handles the environment install and dependencies; this finally let me use the tool without errors. Note that for some reason I had to install it twice; their script may have been updated since my first attempt.
Why do I get error messages when doing the cmd thing? Cloning into 'OmniGen'... remote: Enumerating objects: 372, done. remote: Counting objects: 100% (115/115), done. remote: Compressing objects: 100% (62/62), done. error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8) error: 337 bytes of body are still expected fetch-pack: unexpected disconnect while reading sideband packet fatal: early EOF fatal: fetch-pack: invalid index-pack output
I have this issue, how can I fix it? error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8) error: 983 bytes of body are still expected fetch-pack: unexpected disconnect while reading sideband packet fatal: early EOF fatal: fetch-pack: invalid index-pack output
Thanks to our sponsor Abacus AI. Try their new ChatLLM platform here: chatllm.abacus.ai/?token=aisearch
So this will run on Colab... just tested it... Now in fairness, it might guzzle up compute, so free users won't have much joy... but for those that have an account or buy a few credits, it'll at least allow a rubric of testing before deciding if this model should be part of their flow...
Red bikini? Why don't you do naked instead? And then put it on the thumbnail. Guess it is all about clicks and AI is for guys
Usually I hate it when creators title a video "this changes everything", but in this video it's true for the first time.
😃
x2
@@theAIsearch pip install -e . not recognized...
Just add "actually" in the title
On this channel it's "this changes everything" all day, every day. The game changing gamechanger of gamechanging proportions is gamechanging the gaming change as we speak.
Bro, you're the best! You explain each step so well, ain't like the others. This is totally beginner-friendly. Keep it up!
Thanks!
Hey, I follow several AI RUclips channels but no one compares to your type of content, straight to the point and easy to follow. Keep it up. Cheers from México
Thanks!
Yes, but these days all he does is AI image generation content! I miss his general AI news. I find his general AI news content very in-depth with his style of narration.
Yeah bro thanks for the update
name other channels u know
Top notch info, and the installation walkthrough is super helpful; I'm not exactly new to this sorta stuff either.
The feature that allows Omnigen to "automatically identify the object in the photo and edit it" really stands out as a game-changer in image editing. I appreciate how this AI takes on the meticulous work traditionally required in Photoshop, making adjustments like de-blurring, color correction, and even complex edits like pose transfers seem effortless. It’s intriguing to see how accessible this could make professional-level image editing for people with no prior experience
Amazing! Thank you for not only showing the steps, but explaining why and what each step does. Thumbs up and subscribed!
Thanks for the sub!
It can be used to colorize black and white images : "colorize this image". Amazing!
God I love locally runnable models
me too!
For sure, but an Asus RTX 3070 TUF edition is not enough to have fun with; it only has 8GB of VRAM. I'm waiting for the RTX 5000 series; I hope they'll have more VRAM for me to play with.
That giant 15 gig file is a LoRA. After installing this, I copied the 15 gig LoRA file into the ComfyUI lora folder, then used the Flux Edit Image workflow, and it works.
What is ur specs? On laptop or PC?
@@leapsokha475 Self-built PC with 128GB DDR4 RAM, an i5 16-core processor, and a GeForce RTX 3060 (12GB VRAM) with overclocked CUDA cores. It runs heavy applications with ease.
What is the flux edit image workflow and where can I get it 😅
@@alwaysemployed656 Cool! Does it run most of the AI models like Flux and other smoothly? I am planning to get one budget PC for AI.
@@mcdives Stay tuned. I actually just recently started studying everything related to AI image tools. It worked in ComfyUI with no issues, but it's limited in terms of what other nodes it can work with. I changed something with either the CLIP or the sampler and still haven't figured out what it was that I changed. I'll update here once I get it back to the point where it worked the first time with no issues. It does work with ComfyUI; it's just a matter of finding the right nodes and settings for it.
goodbye my old friend photoshop
rip
Adobe won't be missed.
@@waryth4475 literally
...so from now on, no-one never ever has to die, there is world peace and we fight hunger and corruption. All because of this tool. AMAZING!
21:53
This is not an accurate depth map; it's more of a light map. Notice how the bright spots in the image are bright in the other image, and how the dark pillar is darker than the lights in the background. It probably understood that it should make an artistic representation of a depth map, while it's not an actual one.
Also, the woman is a ghost
You're 100% correct. This AI software tool literally does change everything!
Wow -- I've been waiting for an ai that can do this for a long time.. Nice job!
Thanks!
@@theAIsearch how about undress for NSFW?
@@theAIsearch wait... can it do a man of culture thing?
@@kerhabplays Yes it knows culture very well :)
@@DrHanes yoo have you tried it?
Installed via the instructions, but running python app.py failed with errors. When creating the environment I used: conda create -n omnigen python=3.10, and after this everything worked fine. This may help someone who had the same issue.
thankyouu!♥
You are the Beeeeest!!!!
You are an angel thank youuuuu!
Yes, this works :) thanks
Since I don't have enough space on my C drive, I want to install the program on D. My Omnigen folder and Miniconda are on D, but whenever I run "python app.py" it starts installing on C. I don't know much about code and programming, so I don't even know how to search for a solution. Sorry if my question is stupid, but I'd appreciate it if you could give me a fix.
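Most likely the app itself is on D but the model weights are being downloaded to Hugging Face's default cache under C:\Users\<you>\.cache\huggingface. One way to redirect it (a sketch; the D:\hf-cache path is just an example) is to set HF_HOME before anything from transformers/OmniGen is imported:

```python
import os

# Point Hugging Face's model cache at another drive BEFORE importing
# transformers / OmniGen (e.g. at the very top of app.py).
# The D:\hf-cache path is only an example; use any folder on D.
os.environ["HF_HOME"] = r"D:\hf-cache"

print(os.environ["HF_HOME"])
```

Setting the same variable in the shell (`set HF_HOME=D:\hf-cache` in cmd) before running `python app.py` achieves the same thing without editing the file.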
I really think you should've tried different things than what was already shown in the paper. You did all of the same types of prompts (brighten, highlight, change clothes, remove earrings).
I think it was smart to try to replace the sweater with a bikini, as it is very tricky and it does a good job showing what level of quality the model has in terms of the human body and plausible poses. I still would've liked to see you use other tricky prompts too.
why bikini and not just straight up nudes? 😂
I think the ones were his,A preview of what he does in the video
@youravghuman5231 welcome to youtube 😉
Exactly. I suspect that the model is not as versatile as the demo shows and will break down if the picture does not follow certain standards like where the head is in the image, side view, or stuff like that. We'll have to wait for another creator testing this or test it locally
@@annyeong5810 Yes, I'm aware the specific examples he did were his own, but I am saying that he did not deviate far from the difficulty or style of prompts as seen in the paper. The bikini prompt was the only one that showed strong limitations, which tells me that the prompts used were not difficult enough to "trick" the model.
The virtual environments are critical, especially if you use different software. It's good that you mention that. (Thumbs up)
I'm only a third of the way through your video and had to stop to congratulate you on the thoroughness of your instruction. It's something that's missing from most videos. Good job!
Thanks!
Pretty sure you can write a little script to start it each time instead of typing all that into cmd, but I'm not familiar with Windows cmd syntax.
One step missing in your video was the lack of installation for nvidia cuda toolkit required to run this...fyi. Great video BTW, thanks!
Is this why it gave up after 13 seconds?
@@mikebreeden6071 More than likely yes, if you got an error message in the Python window indicating GPU/CPU errors. You will have to install the Nvidia CUDA toolkit 12.4.1 (don't install 12.6, as PyTorch isn't compatible with it yet). Then you'll need to uninstall and reinstall PyTorch so it can install CUDA support.
dopepics AI fixes this. Free AI image editor tutorial.
this one here is free, yours is not?!
To the inventors of this - truly amazing and useful app!
ooh man while watching this video i already have like 4 different saas ideas for this it is just awesome!
🤑🤑🤑
"What? How? You can't just prompt what you want to do, it's against the rules!"
"You never saw this coming! That's what it do, Yugi!"
(Read this in the most offensive french accent you can imagine.) Le gasp! You didn't test if we could accurately correct the number of finger and or toes. Sacrebleu.
In all seriousness. This is not bad at all for a Version 1 of the tool.
And I do hope both Flux and Stable Diffusion's next major releases have this level of capabilities built in.
The French accent sounds offensively over acted in my head. Disliking comment for racism.
Where are the models saved, i want to change them because they suck at nudity.
Thank you so much! Miniconda is so cool, didn’t knew it so far
It is a step in the right direction, hopefully one that will be further pursued. Interacting with the model is the only right way. Unfortunately, the image quality still leaves something to be desired, so it is not really suitable for serious work yet; it's more of a fun experiment. Still, the model is 15 GB in size, while SDXL is only 6 GB yet produces much more realistic creations (at least the trained derivatives of SDXL). But perhaps Omnigen can also be trained…
Greetings! I've got a couple of questions here:
Can OmniGen colorize b&w photos?
I saw it can remove droplets from a photo; can it remove scratches and burns?
What would have happened if you used more steps to deblur the photos? Would it yield more quality?
How well does the model perform with more complex objects/subjects, rather than animals/people, for making the compositions?
Is it crippled with censorship? Human censorship, political, religious...
Compared to ComfyUI, how fast or slow is it when using the model's built-in prompts to make a composition with 2 or more images?
How many objects/people/animals can the model detect in one single photo in order to compose a new one? How many modifications can the model withstand when making operations on one single "selected" subject, on multiple subjects, on multiple subjects in different images...
Oops! A couple of questions, eh? 😅
This is definitely an excellent tool. It'll save lots of time and energy
I've been waiting for this!!!
😃
It's very good. I am amazed. But one thing it is missing is text editing capability. Sometimes we get good results from other models like Flux 1.1 Pro or SD 3.5, but somehow they mess up part of the text, and if I regenerate I may get a new image with the text fixed, but the image I liked gets changed. So if we could edit the text while keeping everything else untouched, it would be really good.
thanks for sharing. the base model of this is not great - i wish it could match the quality of flux
Crazy how powerful AI truly is.
Simply amazing.
Thanks for being one of the best RUclipsrs in the AI community. I feel that everything is just straight to the point for all the videos that you do💯
Cheers mate, keep going ahead🚀
Sky is always the limit 🔥
Thanks for the super! I appreciate your support
It's just sad that there are no multimodal models coming out for Blender operations. Maybe this is a window into the future of such tools. I just don't understand why Blender has been ignored for so long.
so in the end, was the thumbnail generated? or was it photoshopped (either from original or from a generated)?
Damn, pretty good but kinda large. Nice work again.
thanks
While this AI tool is impressive, the author missed specifying the necessary Python version and the benefits of using a dedicated Conda environment to keep dependencies organized. A Conda setup helps avoid conflicts with other Python and Conda/Miniconda packages, ensuring smooth integration with existing installations. There are some typos in the 'app' Python file that trigger errors and could be optimized to speed up the diffusion process. Improving these details would make a significant difference in functionality.
Python can handle virtual environments itself. Why installing Conda?
This is no longer a problem, as they have updated OmniGen github repo
btw, have you found out the correct python version? I am struggling to make it run correctly here...
@@ViniciusCorreia yep, same problem. Also I have trouble with torch version, it demands 2.4.0+cu118 version
@@Vernite_Norm_Nikneymi So in the end I got it working with Python 3.10, and indeed I had to install a CUDA-enabled torch for it to work seamlessly with my GPU; in my case 2.3.1+cu118 (cu118 as I already have the CUDA 11.8 SDK installed on my machine). Results apparently can vary, and I had to manually install some missing packages once it complained...
Thanks, very helpful content!
you're welcome
Good vid bro, especially the install info.
thanks
It can be useful in some situations, but not all. Just ask it to remove the yellow light next to her head, or the black bar of the window, or the white thing on the table in the background to the right; it's just going to mess things up or take too much time to understand what we really want. A quick selection or brush over the thing we want removed or replaced is just faster. And this is a simple image; give the AI an image of a busy street or some kind of landscape. When the complexity rises, it would be a nightmare for the AI to know what we want.
true, it's currently not great for harder tasks. hopefully this (or another competitor) will improve over time.
Finally, dream comes true 🎉
This is truly amazing! Does it only work with the images generated by the same system ?
no you can upload other images
RuntimeError: OffloadedCache can only be used with a GPU
How can I make it detect my GPU? I'm using a 4090, so I'm sure it's not a VRAM limitation.
I think my command is blocked by YT...
It looks like the default script didn't install torch with CUDA.
You have to use pip install
using torch==2.3.1+cu118
Try to Google the full command, because YT blocks me from pasting it here.
you might need to reinstall torch in your conda env. check what cuda version you have and then install the appropriate ver of torch: pytorch.org/get-started/locally/
eg. if you're using cuda 11.8, you would enter this in your conda env: pip3 install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
have the same prob
Same problem here!
I've got the same error! It seems that CUDA (Nvidia) is required for proper functionality, but it can be quite troublesome in a Conda environment, since it sometimes fails to detect it even after installation. Their GitHub page doesn't even mention CUDA, Nvidia, or anything related to it, not even in the README file. Also, even after installing the cu118 (CUDA) build, if you get errors like "ModuleNotFoundError: No module named 'peft'", for example, just install the module ("pip install peft") and so on. Good luck!
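To chase down those "ModuleNotFoundError" messages in one pass instead of one per launch, here's a small stdlib sketch; the module list is just an example, so substitute whatever names the tracebacks complain about:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of top-level module names that aren't importable."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example list; pip-install whatever this prints.
print(missing_modules(["peft", "torch", "transformers", "numpy"]))
```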
Great tutorial... From your tutorial, OmniGen doesn't really do a faceswap, it just stitches images together.
Installing can have lots of problems. I got it working, but closing it and opening it again is a nightmare. I hope a better installer becomes available.
What's 'funny' is that I used 4 tools (DeepAI, Adobe Express AI Image Generator, Microsoft Designer, GIMP) to create an album artwork recently.
"How many sponsors do you want for your video ?"
"Yes"
Sir. you are wrong. this is not software, but this is magic. Soooooo wonderful. My GODD. you have introduced one of the best AI software one can imagine. Thank you very much, Sir. My PC is old and now I will have to upgrade or buy a new one to learn and work on this excellent software. Once again Thank you SIR.
thanks for watching!
Have you found that it can remove glare or rain off glass in front of the subject? I haven't been able to get it to do that.
Enjoyed your video. One question: what is the total size in gigabytes this will take up on the hard drive? Thanks.
>conda activate omnigen gives an error message: CondaError: Run 'conda init' before 'conda activate'. What parameter(s) should I use with the conda init command? EDIT: Okay, never mind. After solving that issue, the next one came up ("missing numpy" or something), and after installing numpy the next error showed up... Then I found out there is already an OmniGen script available in PINOKIO - that one worked right from the start.
reboot your computer
and run again:
C:\Users\danca\Desktop\OmniGen>conda activate omnigen
glad you got it to work w pinokio!
Had the same issue! Thanks for the hint with Pinokio, didn't think of that!
If you keep feeding output images back in as inputs for further editing, have you noticed any image degradation or lossy outputs? Or does the model repair or reconstruct the image to compensate on each new generation?
possible to install on a Mac?
this enables sooo much
Thank you for this review!!
You are welcome!
Your thumbnails are legendary clickbait*
Bro, there is no other YouTuber whose videos I look forward to like yours
thanks!
Can the whole thing be installed on another disk partition, or is it better to keep it on the same one as Windows?
excellent video! I think I will be trying to see what I can cook up with this for my content
good luck!
The image quality does suffer, but it's a great editing tool for sure. Do you think I could theoretically upload the images to an upscaler like Upscayl and get better resolution?
yes, but it might not fix larger flaws
@theAIsearch I see thank you!
Your voice is starting to sound like an AI... suspicious
did you test Google NotebookLM? they generate a freakin' podcast from a document, and they sound so human!
😂😂😂😂😂😂😂
Cmd prompt keeps giving me an error that the OffloadedCache can only be used with a GPU. Any idea how to fix it, or do I just have to wait for a more compatible version? I have a laptop with an i7-10750H and an RTX 2060 Max-Q; I thought it would work regardless.
It’s a very good tutorial 👍👍👍! Thank you, and subscribed!
Thanks for the sub!
Message in my cmd indicates a "RuntimeError: OffloadedCache can only be used with a GPU"
What does it mean, sir? I don't understand. I can install it, but I still get an error when generating the image in the Gradio app from this OmniGen tutorial. Please help 🙏
you might need to reinstall torch in your conda env. check what cuda version you have and then install the appropriate ver of torch: pytorch.org/get-started/locally/
eg. if you're using cuda 11.8, you would enter this in your conda env: pip3 install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
THE FUTURE indeed! WOW
GREAT TUTORIAL!
thanks
You did not specify it, but when I tried it locally a few days ago (on a very powerful computer), generations sometimes took more than 30 minutes. Was it the same for you?
what's your hardware specs?
@@METALBROOO Just tell me, how long did it take for you?
I mean it literally does anything you want, and an amazing job. Holy shirt 🧠
Appreciated, thanks for the tutorial...
I am quite impressed, but I do not have any use cases for it. I am curious to know how many people do.
Deserve my sub
thanks!
How do you stop it from uploading your entire PC to Chinese servers?
Great video. Interesting what would happen if you added a prompt like "make hands realistic".
Thanks for your great work. When it gives the error message "'pip' is not recognized", what should I do next to correct it?
It's waking up!
How do I set or move the huggingface cache (~\.cache\huggingface\hub) to another directory?
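Not covered in the video, but Hugging Face reads a couple of documented environment variables for this; pointing HF_HOME at another drive relocates the whole cache (the D:\hf_cache path below is just an example):

```shell
rem Windows cmd: persist for future sessions (reopen the terminal afterwards)
setx HF_HOME "D:\hf_cache"
rem Or set it for the current session only:
set HF_HOME=D:\hf_cache
rem New downloads then land in D:\hf_cache\hub instead of
rem %USERPROFILE%\.cache\huggingface\hub; move the existing hub
rem folder there to avoid re-downloading the models.
```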
This is amazing. I only started playing with AI image tools recently, and this seems to solve several annoyances or problems. Unfortunately I don't have a high-end GeForce card to run this on, only an AMD Radeon APU, which even for Stable Diffusion needs an outdated port of the webui project. I haven't been able to get ControlNet working yet, as extensions seem disabled when I set it up for remote connection across my network, and I'm not sure if they're compatible with the older codebase; the new codebase doesn't support APUs, which makes the generation time ridiculous. For something that gives such good results I might be willing to wait several minutes, but with an average of 40-50 iterations per generation I have a feeling I'd have to go get a snack every time I hit generate. It's stuff like this that makes me want to buy a new desktop/laptop with at least a 40-series GPU, though if I need a $1K+ GPU for this I might be out of luck; still, it could at least run Stable Diffusion better.
If I could get something like this working, it'd be a major thing. I have aphantasia, an almost complete inability to visualize images in my head (except when dreaming, apparently; dreams have been the most vivid imagery I've ever experienced, as rarely as I remember them). Perhaps as a consequence, my artistic and creative abilities are nearly non-existent, so having software that can fill in, even imperfectly, is kind of exciting, even if just for personal use.
Going to try this right now with the Pinokio 1click installer
Rip my 4070
good luck. i heard 12GB is the minimum
@@theAIsearch Update, it worked great! Successfully passed each test in the video
@@theAIsearch my 1070 ain't doing shit :(
Is it possible to run this in a Colab notebook? Please make a video.
i haven't seen one yet
@@theAIsearch All I have is a weak laptop that relies on Colab to run all this fancy AI model stuff, because I can't run it locally. If you find any Colab notebook that runs OmniGen, please share; I subscribed to you for this.
@@theAIsearch Pls make it in a Colab notebook
@@NCLDMR Brother, me too
I have the error as follows:
C:\Users\danca\Desktop\OmniGen>conda activate omnigen
CondaError: Run 'conda init' before 'conda activate'
reboot your computer
and run again:
C:\Users\danca\Desktop\OmniGen>conda activate omnigen
@@METALBROOO thanks
enter conda init first
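Putting the replies in this thread together, the sequence that usually clears this CondaError looks like the following (cmd.exe assumed; PowerShell users would run `conda init powershell` instead):

```shell
rem One-time setup so `conda activate` works in this shell:
conda init cmd.exe
rem Close and reopen the terminal (or reboot), then:
conda activate omnigen
```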
When I try to run the installation command it says "'pip' is not recognized as an internal or external command, operable program or batch file."
Help pls
Impressive
This is cool
😃
That's so cool!
Two sponsors in one video!!?
I do not know much about making AI images, but a lot about photo editing, so I got very interested! Did you say that this cannot be installed on a PC with a good AMD graphics card? Can anybody help me with this? I cannot find info about it with a search engine.
He fooled us again; such accuracy doesn't work.
Does it work on a Mac M2?
Awesome!!
When I try to create an image I get an error, and the CMD window shows this at the end. I do have an Nvidia RTX 3090, but it's complaining about no GPU?
File "C:\Users\VR2\Desktop\Omnigen\OmniGen\OmniGen\scheduler.py", line 16, in __init__
    raise RuntimeError("OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!")
RuntimeError: OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!
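The error text itself spells out the fallback: pass use_kv_cache=False when no CUDA GPU is visible. A minimal sketch of that guard (the pipe call is illustrative pseudocode, not the exact OmniGen API):

```python
def generation_kwargs(gpu_available: bool) -> dict:
    """OffloadedCache needs a CUDA GPU; without one, disable the KV cache
    (slower inference, but no RuntimeError)."""
    return {"use_kv_cache": gpu_available}

# Typical use, assuming torch is installed:
#   import torch
#   kwargs = generation_kwargs(torch.cuda.is_available())
#   images = pipe(prompt="...", **kwargs)

print(generation_kwargs(False))  # {'use_kv_cache': False}
```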
stuck here
INFO: pip is looking at multiple versions of omnigen to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement torch
you might need to install torch in your conda env. check what cuda version you have and then install the appropriate ver of torch: pytorch.org/get-started/locally/
eg. if you're using cuda 11.8, you would enter this in your conda env: pip3 install torch torchvision torchaudio --index-url download.pytorch.org/whl/cu118
When I try to run the command pip install -e . I get an error: 'pip' is not recognized as an internal or external command, operable program or batch file.
I keep getting 'conda' is not recognized as an internal or external command, operable program or batch file.
it seems you haven't added it to PATH yet
Hi, how do you get your cmd to show the OmniGen HTTP address as a clickable link? The address is shown, but I cannot Ctrl+click it.
I love your content. If there are any sites that provide this tool, please do another video about them.
Showing an error while generating an image...
"OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!"
I don't have a dedicated desktop GPU; my setup is a laptop with a 6GB RTX 4060 Ti, 16GB LPDDR5 RAM, and an i7-12700H processor. Won't I be able to run it?
Why, when we create a custom environment, does it still spit out dozens of errors from pip's dependency resolver about having the wrong versions of XYZ, instead of just downloading the ones I actually need? Python is such hot garbage. And yes, I am installing from the new (omnigen) environment.
EDIT: I suggest people just install this using the Pinokio tool, as it handles the environment install and dependencies. This allowed me to finally use the tool without errors. Note that for some reason I had to install it twice; their script may have been updated between my attempts.
why do I get error messages when doing the cmd thing?
Cloning into 'OmniGen'...
remote: Enumerating objects: 372, done.
remote: Counting objects: 100% (115/115), done.
remote: Compressing objects: 100% (62/62), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
error: 337 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
yikes, already an error from cloning. unfortunately i haven't seen this error before
Same error.... Have you gotten the solution?
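For what it's worth, this "RPC failed; curl ... early EOF" pattern during git clone usually means a flaky connection or proxy cut the transfer short. Two commonly suggested git-side workarounds, not specific to OmniGen (replace <repo-url> with the repository you are cloning):

```shell
rem Shallow clone transfers far less data, which often avoids the disconnect:
git clone --depth 1 <repo-url>
rem Or raise git's HTTP post buffer and retry the full clone:
git config --global http.postBuffer 524288000
```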
We need a custom model training video for the generative side of this one. Unless you are training your own model, it is just playing around.
I have this issue, how can I fix it?
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
error: 983 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output