- Videos: 133
- Views: 643,795
Tech-Practice
Joined 4 Dec 2021
Tech tutorials and AI projects. Democratize AI!
Contact: ttio2tech at gmail
Accomplishments include:
1. Turned a $95 AMD APU into a 16 GB VRAM GPU. ruclips.net/video/HPO7fu7Vyw4/видео.html
Medium article with 20K views: medium.com/p/51a8636a4719
Featured on multiple tech news websites! www.techspot.com/news/99847-modder-converts-95-amd-apu-16gb-linux-ai.html
2. Contributed to multiple open-source projects such as Fooocus (so AMD GPU users can use it).
Get better prepared for the AI age! Empowering you with tech knowledge, one tutorial at a time! Master the art of technology, and become the architect of your own digital destiny.
My goal is to create a series of hands-on, end-to-end tutorials so everyone can experience state-of-the-art technologies. Please subscribe to my channel to stay tuned.
👉 Subscribe
If you would like to support me or buy me a cup of coffee, please use the following link. Donations are welcome! For crypto donations, use the Uniswap address: techpractice.uni.eth
Thanks!
Ollama runs the Phi-4 model on a MacBook - the small 14B model from Microsoft that's better than 70B models
A normal MacBook can run a state-of-the-art 14B model! This video shows a demo and the performance (on my M3 Pro 36GB RAM MacBook Pro). #applesilicon #apple #macmini #macbook #m4 #ollama #m3
👉 Subscribe
👉 !! Try HunyuanVideo free at agireact.com/t2v !!
Please join the Discord server at discord.gg/SgmBydQ2Mn, where there are free ChatGPT and Stable Diffusion bots!
If you would like to support me, here is my Ko-fi link: ko-fi.com/techpractice and Patreon page: www.patreon.com/user?u=89548519
Thank you for watching!
Tutorial links:
For the Python virtualenv install, see ruclips.net/video/uOCL6h9fuVc/видео.html
ComfyUI for more advanced workflows
ComfyUI on MacBook tutorial: ruclips.net/vi...
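The virtualenv step referenced in the tutorial links is a one-time setup. A minimal sketch of what it typically looks like on macOS/Linux (generic commands, not necessarily the exact ones from the linked video; the environment name comfy-env is made up for illustration):

```shell
# Create and activate an isolated Python environment (name is arbitrary)
python3 -m venv comfy-env
source comfy-env/bin/activate
python -m pip --version   # confirms pip runs from inside the environment
```

Everything installed afterwards with pip then stays inside comfy-env instead of touching the system Python.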
Views: 227
Videos
Ultimate guide on training and using LoRA for HunyuanVideo - the best open source local video AI
838 views · 21 hours ago
It's easy to use and finetune a LoRA for HunyuanVideo! This video will show you how! #aivideo #finetune 👉 Subscribe 👉 !! Try the HunyuanVideo, Flux, or SD3.5 LoRA finetune service at agireact.com/finetune !! The ComfyUI workflow can be downloaded from: github.com/ttio2tech/ComfyUI_workflows_collection The training code is at: github.com/tdrussell/diffusion-pipe Please join the discord ...
Run HunyuanVideo on Mac/Macbook ! (Workflow included)
1.2K views · 14 days ago
Great news! Apple Silicon can run HunyuanVideo! This video shows some tips and the performance (on my M3 Pro 36GB RAM MacBook Pro). #applesilicon #apple #macmini #macbook 👉 Subscribe 👉 !! Try HunyuanVideo free at agireact.com/t2v !! The ComfyUI workflow is at: github.com/ttio2tech/ComfyUI_workflows_collection Hunyuan_video_macbook.json Please join the discord server at discord.gg/SgmBydQ...
Introduction and demo of MimicPC: running AI applications couldn't be easier
255 views · 21 days ago
My review of MimicPC, an open-source AI platform that is customizable and affordable. It makes running AI applications a breeze. Sign up today for a free trial using my link: ai.mimicpc.com/TECHPRACTICE 👉 Subscribe 👉 !! Try HunyuanVideo at agireact.com/workflow/t2v !! Please join the discord server at discord.gg/SgmBydQ2Mn where there are free ChatGPT and Stable Diffusion bots! If you would li...
AMD GPU run Hunyuan Video - tutorial and demo
1.5K views · 28 days ago
Yes! An AMD GPU can run Hunyuan, the giant text-to-video model! The video contains a step-by-step guide and a demo run. GPU used in the video: 6700XT with 12GB VRAM #AMD #húnyuán 👉 Subscribe 👉 !! Try HunyuanVideo at agireact.com/workflow/t2v !! Medium post: medium.com/@ttio2tech_28094/run-hunyuanvideo-on-12gb-vram-or-10gb-vram-gpu-tested-on-my-machine-a0b1e21ebee4 The ComfyUI workflow is at: githu...
MacBook finetuning a LoRA for Stable Diffusion 3.5 Large - shouxin-style sketch
575 views · 1 month ago
Demo of a MacBook finetuning a LoRA for the Stable Diffusion 3.5 Large base model. A Mac can also teach an AI diffusion model a new concept! #applesilicon #apple #macmini #macbook 👉 Subscribe 👉 !! Try the LoRA free at agireact.com/workflow/SD35_lora_api_json !! The LoRA has been uploaded to Huggingface: huggingface.co/Ttio2/sketch_shouxin The ComfyUI workflow is at: github.com/ttio2tech/ComfyUI_workflows_coll...
Side by side Comparison - OpenAI SORA vs Hailuo (minimax) vs Hunyuan (Tencent)
1.4K views · 1 month ago
Try Hunyuan video at agireact.com/t2v. Head-to-head comparison (text to video). The results will surprise you. #aivideo #sora #openai 👉 Subscribe 👉 !! Try my online photo studio free at agireact.com/t2v to try the Hunyuan model and generate awesome videos !! Please join the discord server at discord.gg/SgmBydQ2Mn where there are free ChatGPT and Stable Diffusion bots! If you would like to suppo...
HunyuanVideo on ComfyUI - step by step ultimate tutorial
2.5K views · 1 month ago
Try it at agireact.com/t2v! MacBooks currently have issues running it. ComfyUI workflow tutorial to run HunyuanVideo locally with a 3090. It can run on a 16GB or 24GB VRAM GPU #aivideo #comfyui 👉 Subscribe 👉 !! Try my online photo studio free at agireact.com to generate awesome images !! The ComfyUI workflow can be downloaded from github.com/ttio2tech/ComfyUI_workflows_collection ComfyUI-Hunyua...
HunyuanVideo up and running on an AWS GPU - text to video
1.4K views · 1 month ago
I rented a cloud GPU with 48GB VRAM (an Nvidia L40S, about $2 per hour) to test-run it. See my test-drive experience and how it performs in the video. #aivideo #sora 👉 Subscribe 👉 !! Try my online photo studio free at agireact.com to generate awesome images !! Tutorial links: For the Python virtualenv install, see ruclips.net/video/uOCL6h9fuVc/видео.html ComfyUI for more advanced workflows ComfyUI on ...
LTX Video: Mac vs Nvidia AI video generation - local installation with ComfyUI
2.6K views · 1 month ago
Finally, macOS can run AI video generation at a reasonable speed. How fast is it? How does it compare with an Nvidia GPU? #applesilicon #apple #aivideo 👉 Subscribe 👉 !! Try my online photo studio free at agireact.com to generate any styled images !! 👉 Buy the Mac Mini M4 from Amazon for less using my link: amzn.to/4emPxrB Tutorial links: For the Python virtualenv install, see ruclips.net/video/uOCL6h9fuVc/видео.html Comf...
Intel Macbook vs Apple Silicon (M1, M3 Pro, M4) running LLM Ollama
2.9K views · 1 month ago
How good are Apple Silicon chips? I tested the token generation speed using Ollama with the Qwen-coder model. #applesilicon #apple #m4chip #macmini #ollama 👉 Subscribe 👉 !! Try my online photo studio free at agireact.com to generate any styled images !! 👉 Buy the Mac Mini M4 from Amazon for less using my link: amzn.to/4emPxrB For Mac Mini M4 vs AMD GPU vs Nvidia GPU, see ruclips.net/video/ayI5FVuEdu8/видео.html Tuto...
Mac Mini M4 takes on M3 Pro, AMD 6700XT, and 3080Ti! LLM Ollama generating side by side
22K views · 2 months ago
Which one wins? Testing token generation speed, cost, and power consumption. #applesilicon #apple #m4chip #macmini #amdgpu #nvidiagpu 👉 Subscribe 👉 Latest video: finetuning an AI model on Mac ruclips.net/video/OuTEUrf4vvo/видео.html 👉 !! Try my online photo studio free at agireact.com to generate any styled images !! 👉 Buy the Mac Mini M4 from Amazon for less using my link: amzn.to/4emPxrB Tutori...
New Mac Mini M4 running SD1.5, FLUX, and Ollama (Qwen-coder 2.5 14B model)
30K views · 2 months ago
How fast are they? Demo of the Mac Mini running some state-of-the-art AI models. #applesilicon #apple #m4chip #macmini 👉 Subscribe 👉 Latest video: finetuning an AI model on Mac ruclips.net/video/OuTEUrf4vvo/видео.html 👉 !! Try my online photo studio free at agireact.com to generate any styled images !! 👉 Buy the Mac Mini from Amazon for less using my link: amzn.to/4emPxrB Tutorial links: For python virtual...
Mac Mini M4 unboxing and review - what is the hype?
844 views · 2 months ago
Is the Mac Mini M4 worth it? I think so! The video includes unboxing and Geekbench test scores for the M4 CPU and GPU. #applesilicon #apple #m4chip #macmini 👉 Buy it from Amazon for less using my link: amzn.to/4emPxrB 👉 Subscribe 👉 !! Try the online photo studio free at agireact.com to generate any styled images !! Tutorial links: For the Python virtualenv install, see ruclips.net/video/uOCL6h9fuVc/видео.html...
Pulid Flux on ComfyUI for Mac users or PC users - step by step
3.6K views · 2 months ago
Face cloning and style transfer made easy by Pulid FLUX ComfyUI
377 views · 2 months ago
Stable Diffusion SD 3.5 on Macbook, Windows, or Linux with ComfyUI - Large and Large turbo!
2.6K views · 2 months ago
bolt.new from one prompt to deployed website
629 views · 3 months ago
One command to run - FLUX.1 on Macbook so EASY! Diffusionkit for Apple Silicon
3.2K views · 3 months ago
Face cloning made super easy - FLUX + PuLID
2.2K views · 4 months ago
Supercharge FLUX with loras - Step by step tutorial. Macbook or PC with ComfyUI
1.1K views · 4 months ago
FLUX + GGUF: MacBook runs FLUX locally, reducing the RAM requirement using GGUF - step-by-step guide
10K views · 4 months ago
MacBook runs FLUX locally - the free and open-source model that beats Midjourney
20K views · 5 months ago
LLM Benchmarking Ollama - 9 Intel/AMD/Nvidia CPU/GPUs and Macbook pro
5K views · 6 months ago
$300 RTX 3060 vs $2000 Macbook M3 Pro on Stable diffusion 3 (SD3)
4.2K views · 6 months ago
Mac users: Stable diffusion 3 on ComfyUI
3.9K views · 7 months ago
CPU vs GPU running local LLM - AMD GPU 6700XT vs Intel 10300 large language model
971 views · 7 months ago
We need to be able to run local AI. Can't trust big corps.
Hello, I was wondering... do you think you will ever make a video showing us how to install and train a LoRA for Hunyuan using kohya ss musubi for Windows? Or using the forked version of diffusion-pipe for Windows that just came out?
Awesome, I shall try to install this on my Mini M4 this week. Is there any chance for you to do a demo like this one: ruclips.net/video/_XxPYTx_mZg/видео.htmlsi=p_vGol6Mlibk3pMy with Phi-4?
Thank you! I am interested in that too
Is there any chance to try this with one of the open-source models for data analytics and chart production? Like a demo with the Phi-4 model!?
Thanks!
Useful
At 9:00 I have an issue: "RuntimeError: the GPU will not respond to more commands, most likely because of an invalid command passed by the calling application". My GPU is an AMD Radeon R9 M370X. I edited the run.bat file as explained on the GitHub page and pasted this: ".\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y .\python_embeded\python.exe -m pip install torch-directml .\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml pause". This was in Extreme Speed mode. I changed to Speed and now I have another error: "RuntimeError: The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action".
TL;DR: sudo ubuntu-drivers install && reboot is enough in most cases.
Does SageAttention work with ROCm?
It seems it can't.
Man, I have a 6600 XT - that won't work, right?
8GB? You can try decreasing those settings.
Is it possible that you install Ollama into a Docker container? As far as I understand, in Docker it's not possible to use the GPU on MacBooks?
I haven't tried Docker yet
Nice work man. Thank you for sharing this.
Thanks for the video. At one point, I asked myself whether I should get a Mac or not. Instead I got a $200 USD used PC and installed a new RTX 4070 Super. Same workflow: 4 steps = 10 seconds and 20 steps = 38 seconds.
Will Smith and snoop dog combined
Hope you are working on a WaveSpeed (wave-speed) tutorial for Flux and LTX for us Mac users who could really use the speed boost.
I will take a look
@ Actually there are 2 recent speed-boost releases in the last few days: Comfy-WaveSpeed and TeaCache. TeaCache looks a bit rough with its returned results.
Hope you can get a new Wave-speed tutorial out for us Mac users… we actually need it.
At no point did you show how to run OpenAI Whisper with the AMD GPU. By default, Whisper uses the CPU rather than the GPU.
If I have two 3090s, will the training time be reduced a lot? Hope you can share how much it is reduced with multiple GPUs, thanks!
I haven't tried it yet. I will report back once I have.
It started to load the image, then I got a red and black error. Why?
What was the error about?
Is this better to use for OnlyFans, rather than your video with Fooocus???
This is just a 'cloud PC', which can also run Fooocus. Do you want to generate images for OnlyFans?
My output image is noise. How can I fix this problem?
Sometimes the downloaded GGUF file may have issues. Try another download source.
When I paste the verify script, I get the error: "zsh: command not found: import / zsh: unknown file attribute: 5 / zsh: no matches found: print(x)". Does anyone know how to fix this?
Those commands need to run within Python. Try typing python and pressing enter.
@@tech-practice9805 I now get Torch is not defined
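For anyone hitting the same errors in this thread: the verify snippet is Python code, so it must run inside a Python interpreter (or a .py file), not be pasted into zsh directly. A minimal sketch of that check, assuming the standard PyTorch install-verification snippet and guarded so it degrades gracefully when torch is missing:

```python
# PyTorch install check - run inside `python3`, not directly in zsh.
# zsh treats `import` as a shell command, hence "command not found: import".
try:
    import torch
    x = torch.rand(5, 3)   # random 5x3 tensor, as in the usual verify snippet
    print(x)
    torch_ok = True
except ImportError:
    torch_ok = False
    print("torch is not installed; try: pip install torch")
```

The follow-up "Torch is not defined" error usually means the import torch line was never executed in the same interpreter session before calling torch functions.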
Thank you for this brother!! Can you show us how to install DIFFSENSEI on Mac?? It’s a cool manga maker
I heard about it previously. I will take a look
What a lousy M4 chip... Flux took 3-4 mins to generate?!? You can throw that into the garbage bin for any AI inference work.
They can run smaller models fast.
What setting do I change to make the video longer?
In the latent video node there is a parameter called 'length', which is the number of frames. You just need to increase it.
@@tech-practice9805 Thanks mate!
Good, and i7 12700kf?
I followed the instructions exactly as stated and get the following error: "Error(s) in loading state_dict for AutoencoderKLCausal3D: Missing key(s) in state_dict: "encoder.down_blocks.0.resnets.0.norm1.weight", "encoder.down_blocks.0.resnets.0.norm1.bias"," It goes on for quite a while.
So is Q2 faster than Q5 for example, or is it the opposite? I'm a bit confused on that point.
Q2 is usually slower than Q5, but saves more RAM/VRAM.
@ Wouldn't it be faster if it's smaller?
It usually requires extra calculations during inference if it's smaller in size.
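The trade-off in that exchange can be illustrated with a toy uniform quantizer - a deliberately simplified sketch, not llama.cpp's actual GGUF kernels: fewer bits means fewer levels, so weights take less memory but carry more rounding error, and inference still pays a per-weight dequantize cost:

```python
# Toy uniform quantization sketch (hypothetical scheme, for illustration only).
def quantize(weights, bits):
    levels = 2 ** bits - 1                 # e.g. 2-bit -> 4 levels, 5-bit -> 32 levels
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels or 1.0      # guard against all-equal weights
    q = [round((w - lo) / scale) for w in weights]  # small ints, cheap to store
    return q, scale, lo

def dequantize(q, scale, lo):
    # Extra per-weight work done at inference time for quantized models.
    return [v * scale + lo for v in q]

w = [0.1, -0.4, 0.7, 0.05]
q2, s2, z2 = quantize(w, 2)   # coarser: smaller in memory, larger rounding error
q5, s5, z5 = quantize(w, 5)   # finer: bigger in memory, smaller rounding error
```

In this sketch the 5-bit reconstruction stays much closer to the original weights than the 2-bit one, which matches the rule of thumb above: lower-bit quants mainly save memory, while speed depends on the kernel's dequantize overhead, not just file size.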
The Anaconda page has changed... what now?
The install should still be similar.
What is the name of the RAM monitor app?
it's called 'stats', see my previous uploaded video for details: ruclips.net/video/USpvp5Uk1e4/видео.html
Anyone still getting this issue: "Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that type." Yes, latest PyTorch.
Helpful for my study of the Mac Mini M4 Pro :) Thanks!!
Hello, thank you for this video. Is it possible to make short videos?
Sorry, I don't understand.
Heyy I am an ML Engineer planning to buy the Mac Mini M4. I usually rent GPUs for finetuning LLMs and my work on my local is usually building RAGs, maybe finetuning SLMs or BERT models, preparing datasets etc along with other casual usage. Should I get one and if yes is the base model enough or should I consider the 24 GB and the 32 GB variants? I could find little to no stuff online so link to resources or benchmarks would be highly appreciated.
I think bigger RAM is usually capable of finetuning bigger models, and is faster when finetuning.
70B model - please run it!
I wish the Mac had more RAM.
Thank you for that video. There are hardly any tutorials for ComfyUI and other AI tools (for example RVC or APPLIO), especially for Mac users. It's even still impossible (at least for me) to make RVC or APPLIO work on my machine (even with Pinokio, no success). For comparison purposes, here are my speed results for your workflow on my MacBook 16 Pro Max 16/40 with 64 GB of RAM: 6/6 [03:04<00:00, 30.67s/it], prompt executed in 337.82 seconds.
The Mac Mini M4 is plenty fast enough for llama, local AI, and Home Assistant IMO. Can anyone confirm?
For small/medium-sized models, it's fast enough.
Hey man. Is it worth still doing this in 2025?
It's still worth it if there's no discrete GPU.
Sorry man. Let me clarify. I'm going to build a compact machine with a thin itx profile and was going to put a 5600g in it and do some AI stuff on the side. Is it still worth experimenting with an APU?
Does anybody know how I can fix this problem: "Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device."
Which node has the issue? You can try another workflow, as shown in ruclips.net/video/I6jzCJIii_o/видео.html
@@tech-practice9805 Hi, unfortunately the link no longer works. The node concerned is the Text Encoder.
ModuleNotFoundError: No module named 'yaml'
Try installing it? Run: pip install PyYAML
Will 16GB just run slowly, or will it crash?
I didn't test it on 16GB, but it should be able to run, although much slower.
@@tech-practice9805 It doesn't work; it shows "reconnecting". It also says ffmpeg was not found - how do I install that?
"Clip missing text projection weight / Ash killed" - why is this?
Hello, do you have an account on Bilibili or other Chinese platforms? I'd like to ask some questions; I can't post images here.
Hi! You can reach me on Discord or x.com/TechPractice1
Can it run on a 16GB Mac?
What about a 16GB Mac??
I will upload a video about running Hunyuan on a Mac. But 16GB may be really slow.
Thank god someone used a 16GB Mac Mini to run video generation. I wonder whether Hunyuan video will work on a 16GB Mac?
See ruclips.net/video/W6g_mCARTfM/видео.html
So it worked on a 16GB Mac?
Great guide! Although I had to find the missing CLIP files for the workflow, it works in the end, thank you!
Great to hear!
Can't I install it without the Anaconda Python environment? I just have Python installed on my system.
Bro, now it works directly through DML - follow up with a new tutorial.
It's for Python virtual environment purposes.
@@Zeeshanahrar-kw2lq Where is the new tutorial? Please give a link.
@@tech-practice9805 If I don't use any virtual environment, will it work? I tried pip install torch-directml and got: "ERROR: Could not find a version that satisfies the requirement torch-directml (from versions: none) ERROR: No matching distribution found for torch-directml"
Normally with a comparison you compare. Saying Sora is good but not the best seems awkward; never explaining why something else is better or why Sora is bad seems extremely awkward and biased. You are being the car salesman: pointing out the worse car and then asking what we think. Do we need to comment on what we think is the worst or the best?
Does it work with an M1 16GB?
Yes, it should work. It even works on an 8GB M1.