- Videos: 155
- Views: 267,939
FiveBelowFiveUK
United Kingdom
Joined 1 Feb 2024
We demonstrate and guide you through new technologies available to artists and creatives today, demystifying transformers and artificial intelligence so that you can spend more time expressing your art and less time turning your ideas into reality!
Available for Collaborations - check out my Playlist of Recommended AI YouTubers too!
If I missed anyone, please tell me so I can add them to the list ;)
How to set up my Kling Video Prompt Agent with ACE-HOLOFS
This video explains how you can create a Claude Project and instantly start using my extendable Prompt Agent for Kling 1.5.
github.com/MushroomFleet/Kling-PromptAgent
Full docs are on the github project
Don't forget: if you add your best prompts to the correct examples list (.txt), this will instantly enhance and widen the scope of the Prompt Agent. To keep it clean, I released my version using only the official examples :)
Join this channel to get access to perks:
ruclips.net/channel/UCQ2548DcqVLjUlpO6RrVGsgjoin
discord: discord.gg/uubQXhwzkj
www.fivebelowfive.uk
00:00 Introduction
00:37 Setup
04:09 Abilities
11:41 Customisation
- Workflow Packs:
Foda FLUX pack civitai.com/models/620294
Koda KOLOR...
Views: 7
Videos
The past few days - Workflow Pack Updates #comfyui
Views: 431 · 2 hours ago
Due to the busyness of the past few days, I focused on testing and getting workflows up on GitHub and also CivitAI, so we have a lot of news on this front. Also, I'm announcing my upcoming Kling 1.5 PromptAgent, which will release soon, and demonstrating more ways to use the LTXV PromptGen that is already out. The new Roda Audio Reactive Pack is just getting started, so expect some new workflows t...
HUGE! The Fastest Local Video Generator EVER! XODA Video Pack with [Lightricks LTXV]
Views: 6K · 12 hours ago
This is without question the fastest video generation model for local GPU! Introducing Lightricks LTXV, a brand new high-speed, high-quality video model for ComfyUI. It enjoys native Day One support, so make sure to update your ComfyUI. Follow the instructions in my article linked below to get started, and grab my Workflow Pack to benefit from the latest creature comforts. That's not all! I a...
FLUX TOOLS - Day One! Already supported in ComfyUI - new ControlNet workflows are up!
Views: 2.6K · 14 hours ago
Here is the first batch of Flux workflows for the new Flux1 official ControlNets: the Canny LoRA, Depth LoRA, and Redux style models. Don't forget to update your ComfyUI :) Official BlackForestLabs announcement: blackforestlabs.ai/flux-1-tools/ How to get started with Flux Tools: civitai.com/articles/9063 My starter workflows: github.com/MushroomFleet/DJZ-Workflows/tree/main/Foda_Flux/FluxTools Jo...
Find the best Epochs after training your Flux Loras [Training Tools]
Views: 590 · 21 hours ago
Using this method you can find your best epochs by testing all your intermediate LoRA models. We use 10 epochs with 20 repeats; full details are in the article linked below. Using this workflow in my Foda Flux Pack: "Foda-Flux-Lora-COMPARE-lines" (take the latest v number) github.com/MushroomFleet/DJZ-Workflows/tree/main/Foda_Flux/Training Tools I wanted to give you some tips and tricks, so you ca...
How to Build a Gradio UI with Claude Projects [WH40K Army Manager]
Views: 208 · 1 day ago
Full walkthrough showing how the WH40K Army Manager was built - this will show you how to get started adding a webUI to your python projects by using Claude Projects. Join this channel to get access to perks: ruclips.net/channel/UCQ2548DcqVLjUlpO6RrVGsgjoin discord: discord.gg/uubQXhwzkj www.fivebelowfive.uk - Workflow Packs: Foda FLUX pack civitai.com/models/620294 Koda KOLORS pack civitai.com...
How to Use Warhammer 40k Army Manager (Gradio UI Demo)
Views: 135 · 1 day ago
How to use 40K Army Manager: (WH40K-Collection-Roster) A tool for managing Warhammer 40K army rosters using .cat files. Compatible with 10th Edition Battlescribe/Battleforge catalogue files. Now featuring a web-based Gradio UI interface which will run in a web browser on your PC. Beta Release: github.com/MushroomFleet/WH40K-Collection-Roster/releases/tag/v1.8.0-beta Github Source: github.com/Mu...
FLUX1 - Introducing my Passive Detailers [Part Two]
Views: 316 · 1 day ago
Passive Detailers are LoRAs that can be used to enhance generated images, steering the outputs along defined styles. In Passive mode, no prompt triggers are required. In this video I will explain how this works and show example results. Part One: ruclips.net/video/QuCbu9wh47M/видео.html I have released 4 Detailers so far: DJZ Not The True World: civitai.com/models/656031/djz-not-the-true-world-fl...
FLUX1 - Introducing my Passive Detailers [Part One]
Views: 407 · 1 day ago
Passive Detailers are LoRAs that can be used to enhance generated images, steering the outputs along defined styles. In Passive mode, no prompt triggers are required. In this video I will explain how this works and show example results. Part Two: ruclips.net/video/YpmHP9wce8U/видео.html I have released 4 Detailers so far: DJZ Not The True World: civitai.com/models/656031/djz-not-the-true-world-fl...
The Universe Engine, Full walkthrough, with ACE-HOLOFS-V4 & Claude Projects
Views: 603 · 14 days ago
This is the pinnacle of my LLM research - don't miss out! This is fine-tuning for LLMs without actually fine-tuning in the traditional way. I have done many videos on this specific system, which is 100% my own design. Adaptive Capacity Elicitation is my own cognitive enhancement methodology, which improves all good LLMs with simple agent instructions designed to make the AI think harder...
Use Claude to setup multi-site Hosting with SSL [LLM]
Views: 183 · 14 days ago
In this video we run through how I used Claude to create a new website and start selling products with Shopify. There is a part two that gives the rundown of how I set up multi-hosting with Apache2, solved SSL certificates - and fixed my Radio Station. Part 1: ruclips.net/video/q6Cs-7vZo9o/видео.html Part 2: (this video) Part 3: coming soon. All done in under 2 hours, from buying the dom...
Use Claude to Design Websites & Sell online with Shopify Integration [LLM]
Views: 443 · 14 days ago
Use Claude to Design Websites & Sell online with Shopify Integration [LLM]
Training New Flux LoRAs - Epoch Results - Full Spread
Views: 385 · 14 days ago
Training New Flux LoRAs - Epoch Results - Full Spread
Master Claude Projects feature, Full Development walkthrough - SVS Beat Sync Update [LLM]
Views: 460 · 21 days ago
Master Claude Projects feature, Full Development walkthrough - SVS Beat Sync Update [LLM]
Donut Mochi Workflow Pack V8 now with Batched Latent VAE Decoding
Views: 1.2K · 21 days ago
Donut Mochi Workflow Pack V8 now with Batched Latent VAE Decoding
Mochi1 - Pro Video Gen at home, huge improvements with Spatial Tiling VAE !
Views: 2.1K · 21 days ago
Mochi1 - Pro Video Gen at home, huge improvements with Spatial Tiling VAE !
I wrote Video Shuffle Studio in Python with LLM, you can too !
Views: 638 · 28 days ago
I wrote Video Shuffle Studio in Python with LLM, you can too !
Get Started with E2/F5 Text to Speech on Pinokio.computer (One Click install)
Views: 630 · 28 days ago
Get Started with E2/F5 Text to Speech on Pinokio.computer (One Click install)
Don't sleep on Mochi1 - This is Pro Video Gen with ComfyUI on Local & also Runpod !
Views: 8K · 28 days ago
Don't sleep on Mochi1 - This is Pro Video Gen with ComfyUI on Local & also Runpod !
SD3.5 is Here! Get started today with #comfyui
Views: 1.3K · 1 month ago
SD3.5 is Here! Get started today with #comfyui
create video prompts from your images with this time saving workflow
Views: 712 · 1 month ago
create video prompts from your images with this time saving workflow
Holographic Filesystem combined with my ACE Cognitive Enhancer [LLM]
Views: 455 · 1 month ago
Holographic Filesystem combined with my ACE Cognitive Enhancer [LLM]
AI Introduction Presentation @DeanCloseSchool on 14th October 2024
Views: 486 · 1 month ago
AI Introduction Presentation @DeanCloseSchool on 14th October 2024
Origin of Species (CyberSocietyV66) LORA Training
Views: 333 · 1 month ago
Origin of Species (CyberSocietyV66) LORA Training
How you can Make music with Udio AI & Master with Audacity/Audition
Views: 210 · 1 month ago
How you can Make music with Udio AI & Master with Audacity/Audition
Start using Runpod today with my Template for Flux with #comfyui
Views: 1.1K · 1 month ago
Start using Runpod today with my Template for Flux with #comfyui
Dataset Recaption & Mutation, plus 25 new Lora Releases #comfyui
Views: 632 · 2 months ago
Dataset Recaption & Mutation, plus 25 new Lora Releases #comfyui
I'd be more concerned with a HIMARS strike than a copyright one :D
Hi, God bless you with your amazing work. I love the open source community and frontrunner heroes like you who make normal people's lives much easier. I'm hoping your prompt agent could run locally, because I personally can't get any cloud subscriptions, and it's something I seriously need. Is it possible to run it locally? A video on it would be great. Love your channel, ❤
Audioreactive sounds mega!! Many thanks amigo! ❤️🇲🇽❤️
Great video.
Did some testing with this and noticed that a higher number of steps, like 50-70, helps fast-moving scenes.
For your holo environment, is that with an Anthropic Pro subscription?
Thanks, man! Can't wait to try your version of the audioreactive workflow!!
I think I'll wait for a simpler way to use Mochi :)
...and you can - this method is to help you fit it onto 8GB-12GB GPUs, and tbh batch encoding overnight allows hundreds of videos to be made automatically, while the batch decoding allows easy creation of those videos, all while you are not even there. "Full-Auto" is as easy as it gets. Even an online service will not allow this - you must sit there and click every time. It's your choice if you want to do more work ;)
you could try using "ComfyUI-GIMM-VFI" to interpolate between beginning and end frame.
YES - all of Kijai's stuff is automatically on my list :) I just have not gotten to it yet - but thanks for sharing, as others can benefit from that.
Nice beats in the background
Thanks it's all my latest AI tracks playing on my Radio station :)
Would a first frame selector node warrant existence?
Yes!! Please - I think it's a good idea, I spent ages scratching my head on that.
😂😂😂😂 2:58:00 i die 😂😂😂😂
i nearly fell off my chair ! XD
It would be cool if you opened a discord server where people could discuss their projects and problems
and i have one ! the join link is in the description of this video ;) Jump In !
A little explanation on base shift and max shift for anyone looking to play with that: base shift is a small, consistent adjustment that stabilizes the image generation process, while max shift is the maximum allowable change to the latent vectors, preventing extreme deviations in the output. Together, these parameters balance stability and flexibility in image generation. Using a bird as an example:
Increasing Base Shift: Raising the base shift results in a more consistent and stable depiction of the bird. For instance, the image might consistently show a bird with clear, well-defined features such as a distinct beak, feathers, and wings. However, this increased stability could lead to less variation, making the bird's appearance feel repetitive or overly uniform.
Decreasing Base Shift: Lowering the base shift allows for more subtle variations and intricate details, like nuanced patterns in the bird's feathers or unique postures. However, this added variability might make the bird's image less stable, with occasional irregularities or minor distortions.
Increasing Max Shift: A higher max shift enables the model to explore the latent space more freely, leading to creative or exaggerated interpretations of the bird. For example, the bird might develop surreal colors, elongated wings, or fantastical plumage, but it risks straying far from a realistic bird representation.
Decreasing Max Shift: Reducing the max shift tightly constrains the model, resulting in a more controlled and realistic depiction of the bird. The image is likely to stay close to a conventional bird appearance, but it might lack creative or distinctive elements that make the bird unique or captivating.
@@Andro-Meta good explanation!
Thanks for adding this detailed reply - Pinned!
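For anyone who wants the numbers behind base shift and max shift: in Flux-style samplers these two values are the endpoints of a resolution-dependent shift of the noise schedule. A minimal sketch of that scheduling as I understand the Flux reference code (the 256/4096 token endpoints and the 0.5/1.15 defaults follow that reference; treat the details as assumptions, not gospel):

    import math

    def flux_shift(width: int, height: int,
                   base_shift: float = 0.5, max_shift: float = 1.15) -> float:
        # One token per 16x16 pixel block (8x VAE downsample, then 2x2 patching).
        tokens = (width // 16) * (height // 16)
        # Linear ramp: base_shift at 256 tokens (~256x256px),
        # max_shift at 4096 tokens (~1024x1024px).
        slope = (max_shift - base_shift) / (4096 - 256)
        return base_shift + slope * (tokens - 256)

    def shift_sigma(t: float, mu: float) -> float:
        # Push a uniform timestep toward the high-noise region; a larger mu
        # spends more of the schedule on global structure (stability),
        # a smaller mu leaves more steps for fine detail (variation).
        return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

    mu = flux_shift(1024, 1024)  # = 1.15 (max_shift) at 1024x1024
    print([round(shift_sigma(t / 10, mu), 2) for t in range(1, 10)])

Raising base_shift lifts the whole ramp (more stability at every size), while raising max_shift mostly affects larger images - which lines up with the stability vs. creativity trade-off described in the pinned comment above.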
As I tried it today (updating ComfyUI and installing LTXV), I discovered that it downgrades the transformers module from 4.45 to 4.44, which broke my system.
How do I fix it? I see some transformers version that isn't right there...
This is unfortunate. I can only suggest updating ComfyUI with dependencies, as pip is responsible for maintaining your stack. You might also try editing the requirements.txt for the LTXV nodes to the version you want; that way you can override the pip resolver. This kind of thing is common when you have a large number of conflicting custom nodes, as they are all written by independent devs. Also, I provided a pip update tool in my custom node pack for those running ComfyUI portable, as this may just be out of date.
Check the offending requirements.txt and use the newest version; you can experiment by typing the number you need and then reinstalling. I provided a .bat example in my DJZ-Nodes which can install a requirements.txt easily for ComfyUI portable users. Always be careful though, as the pip resolver does a lot of work in the background.
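As a concrete (hypothetical) example of that edit, in the style of the .bat mentioned above - the node-pack path and version numbers are placeholders, so substitute whatever your stack actually needs:

    rem In custom_nodes\<ltxv-node-pack>\requirements.txt, relax the pin, e.g.:
    rem   transformers==4.44.0  ->  transformers>=4.45.0
    rem Then reinstall it against ComfyUI portable's embedded Python:
    python_embeded\python.exe -m pip install -r custom_nodes\<ltxv-node-pack>\requirements.txt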
Thank you for presenting this stunning LTX model. Trying it out right now!
What VRAM is required to run this? Or is it one of those that needs an RTX 4090 to run locally?
works fine on RTX 3060 12gb
I'm running this LTX fine on an RTX 4070, 8GB VRAM, 16GB RAM.
I've had people in my group using my workflows report 8GB to 12GB all working fine, if with a speed impact at the 8GB end of the scale.
Thanks for sharing - everyone needs this type of info.
Thanks for sharing your results and specs - this is needed by a lot of people!
What software do you use to create the white moving avatar, bro? Thanks.
I built it out with ComfyUI, of course :) I give a tip on how it's used, in plain sight, in my first channel video - you should look there.
so what is the longest you got it to do? can you do 30 seconds?
In the V2 pack release (GitHub has no versions, only Civitai) you will see "loopedmotion3" and "Extendo" - these show ways to make the video as long as you want by chaining the last frame as the first frame with img2video (see the sketch after this thread). It has limitations, sure, but with some skill & planning you can get unlimited length tbh.
@@FiveBelowFiveUK That's awesome man. Yeah, currently running a 4090 in my office, and the dev set up about 20 workflows. I haven't figured out the PULID yet, but I'd think there would be some path to creating a movie with the PULID connected to the movie gen, starting with some storyboard. I haven't tested that, but I assume that's it.
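For anyone wondering what that chaining looks like in code, here is a minimal sketch. generate_img2video() is a hypothetical stand-in for whatever img2video workflow you run (ComfyUI or otherwise); the 121-frame default assumes ~5-second segments at 24fps:

    def generate_img2video(seed_image, num_frames):
        # Hypothetical stand-in: run your img2video model here and return
        # a list of frames (e.g. H x W x 3 arrays).
        raise NotImplementedError

    def extend_video(first_frame, segments=4, frames_per_segment=121):
        frames_out = []
        seed = first_frame
        for i in range(segments):
            frames = generate_img2video(seed, num_frames=frames_per_segment)
            # Drop the first frame of later segments so the seam frame
            # isn't duplicated in the output.
            frames_out.extend(frames if i == 0 else frames[1:])
            seed = frames[-1]  # last frame seeds the next segment
        return frames_out  # write out at 24fps with your preferred video writer

Each hand-off carries only a single frame of context, so motion can drift at the seams - that is the limitation mentioned above, and where the skill & planning come in.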
Hi thanks for all your work. I will test it today. Will leave some review videos on civitai when I get it to work.
Always appreciate seeing what people make! thx
We need more people like you in the world.
big thanks !
This is fast, but we need to be able to control the strength of the latents and images.
Don't forget that this is the research/eval model, so basically a demo for the full model that is coming next - I think it did a great job setting the bar for speed, despite having some weaknesses :)
It's great and all, but is it 8GB VRAM crash-proof?
I have not tested this personally, but I have people in my groups who claimed to do video generations on 8GB VRAM systems, if a little slower than what I'm showing.
Another 8-12 months and these obscure interfaces will start to go away in favor of far more intuitive controls and production friendly ways to create video.
Actually, this is an extensible platform with API support. I do a lot of contract work, and what you see here is often the backend of many popular web-based paid services. In other words, if you have the right skillset, you can build web services on top of this. Additionally, it is possible to create standalone software with the same pipeline that ComfyUI runs on - so actually we are already there. Maybe you will build the next app with those intuitive controls you desire. All it takes is an idea; the code is already here.
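To make the "API support" point concrete, here is a minimal sketch of queueing a job on a local ComfyUI instance over HTTP (assuming the default port 8188 and a workflow exported with "Save (API Format)"):

    import json
    import urllib.request

    # Load a workflow exported from the ComfyUI editor in API format.
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # POST it to the local ComfyUI server's queue; the response contains a
    # prompt_id, which can then be polled via the /history endpoint.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))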
THIS IS NOT HUGE....I tested half the day and all night....and no way this even comes close to what commercial platforms are able to offer....the only thing they do better is faster renders but the quality and prompt adherence is pure shite! Not to mention it is far too early to be claiming something that has unfriendly UX is going to be huge or is the next best thing.... Disingenuous, at best; dishonest, at worst.
Yes all local video creation models are just for fun testing...
He made it pretty clear that for local video AI creation, this is huge. And it is.
Agree... it is fast as hell, as the OP mentioned, but quality isn't that good... I feel like CogVideoX is better, albeit slower.
The only huge thing is fast rendering time, the outcome is shit!
@@fotszyrzk79 I don't know what you're expecting out of a fast model that doesn't require a GPU that breaks bank that isn't fully trained yet and will be fully trained and released open source, but it really seems like you're missing the point.
This is HUGE! Thanks for being a hero in the community and showing us how powerful local video gen could be!
Thanks so much! It's a great landmark for local gen video and sets the bar on speed, even if the trade-off was model depth and image quality - I remember when SD1.4 was about the same in both regards! Here's to the next 12 months of video gen!!
Anyone got their vid2vid in comfy working?
These companies need to roll out this stuff more gradually - these constant dopamine spikes are wrecking my sleep! Oh, and don't think we forgot - you still owe us that deep dive into Flux Tools. 😉
Do Not Fear! The Flux deep dive is coming - we are adding the Detailer DAEMON to all the workflows, which is a considerable effort if you saw the size of my Flux pack. Don't miss it :) it's coming very soon!
❤thank You sir
I had to update from the batch file in the updates folder (ComfyUI), and then the custom nodes finally installed correctly. Simply using the built-in update-everything button in Manager did not work.
I had the same problem. I tried the same thing you did and it worked. Thanks.
I also included some .bat files for ComfyUI in my DJZ-Nodes - for example, to force the installation of a node using its requirements (you can copy it into any custom node that uses requirements and run it), and also the pip resolver update for ComfyUI portable.
I think the model is trained at 25. Also, I've been getting 5 seconds no problem; however, I have to do 3 seconds whenever I change the prompt.
Yes, I changed my workflows in V2 to reflect this - I think we have to do (24fps * seconds) + 1.
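A quick sanity check of that rule of thumb (the formula is from the reply above; the observation that these counts land on the 8n+1 frame pattern LTXV workflows typically use is my own assumption):

    def ltx_frame_count(seconds: float, fps: int = 24) -> int:
        # (fps * seconds) + 1, per the rule of thumb above
        return int(fps * seconds) + 1

    print(ltx_frame_count(3))  # 73  (= 8*9  + 1)
    print(ltx_frame_count(5))  # 121 (= 8*15 + 1)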
I was never interested in local models until this came out. I'm going to find the best settings and squeeze every last thing out of this goldmine.
yes if you have an idea this is an amazing leap forward !
Could you create a video that explains all the basic parameters and what they affect? Examples would be great, because written articles often don't have any, and trying them on a slow GPU is just a very slow way to learn. I have learned by trial and error what, for example, CFG does, but I still don't fully understand its interactions with other node parameters.
Hi, is this mocap that you're using realtime?
cadence of speech with overdubbing of a loop that is rejigged to scenes, mostly. But it's created in ComfyUI, so it will be live one day !
Your workflows don't load
You need to update your ComfyUI - I had the same problem with the example workflows, which are identical in terms of the code used.
They work for me
Excellent video. I'm completely new, but people like you make it trivial to get going. I just put both checkpoints in my stable diffusion folder, so Data/Models/StableDiffusion/<checkpoints_here>, and it worked fine. I'm running on a laptop with a 3070 GPU and it works pretty well. Anyway, thanks for the tutorial.
Glad to hear it's all working for you! Can you tell us what VRAM is in that laptop 3070? Others might like to know :) No worries in any case.
Unfortunately, they are also limited to non-commercial use, like the unofficial controlnets and loras. We need someone to port them to flux-schnell legally.
This is actually expected - often the demo models are for research; it's only really a problem if they do not release the commercial weights (which can happen). This fact also explains why it's quite limited in terms of model scope; the quality will improve with a larger-parameter model. LTXV is only 2 billion params, which is a baby by modern standards. I place my hopes on the future as an optimist.
Thanks for a well explained but short video with no fluff. Liked & subscribed. Plus thanks for sharing your workflows.
Thanks - and expect a lot more of them !
Thanks for your video and your tricks !
It's my pleasure, and credit where credit is due - we have a good community supporting my work, so everyone benefits from that. Without the many custom nodes I did not write, the hard work of a lot of volunteer devs, and even people giving me tips, I could not do it alone!!
Cool... thank you very much for your fast approach to the new Flux Tools! It's getting difficult to keep track of all these new developments. And hard disks are always (!) way too small!^^
100% agree - might go back to high-speed 3.5" HDDs, as my poor SSDs are fried after less than a year of this :)
Thanks for the video - helpful!
the pleasure is all mine
thank
thanks for watching !
sup Halis ! :)
Which of the 726 links in the video description actually lead to the workflow from the video?
It's true - I'll clean that up soon. The ones at the very top are always the ones we speak of in any video, but you are right, some are getting long in the tooth :)
*Exploring FLUX1 Passive Detailers for Enhanced AI Image Generation*
* 0:03 Introduction to Passive Detailers: Passive Detailers are Lora (latent text-to-image representations) models that refine generated images without requiring specific prompt triggers. They subtly influence the style and details of the output.
* 0:07 Thora Anime Model Example: The video demonstrates the impact of four Passive Detailers (Not The True World, Abstract Chaos, Electron Microscopy, Genesis Flux) on a custom Thora anime model.
* 0:43 Detailer Effects: Each detailer has a unique effect:
  * Not The True World enhances sharpness and detail.
  * Abstract Chaos creates a more natural, albeit sometimes overexposed, lighting effect.
  * Electron Microscopy aims for a soft-focus, macro photography style.
  * Genesis Flux introduces subtle fractalization, especially noticeable in neon lighting and glows.
* 2:56 Consistent Character with Variations: Detailers modify the image's style and details without significantly altering the core subject (e.g., the Thora character).
* 3:45 Passive vs. Active Detailers: Passive Detailers operate without prompt triggers, but they can be made "active" by including their trigger word in the prompt, resulting in a more pronounced effect.
* 14:50 Detailers Without Character Lora: Passive Detailers can be used even without a character Lora, directly influencing the base FLUX1 model's output.
* 15:50 Stronger Effects Without Character Lora: When used without a character Lora, the effects of the detailers are more pronounced as they are not balanced against another Lora.
* 18:10 Organic and Fractal Detailers: Electron Microscopy tends to introduce more organic or biological details, while Genesis Flux emphasizes fractal patterns.
* 22:02 Recommendation: The video encourages viewers to experiment with the four detailers to discover their unique effects and how they can be applied to enhance AI image generation.
* 22:18 Accessing Detailers: Links to the detailers are provided in the video description and in an article on the FiveBelowFiveUK CivitAI profile.
I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript. Cost (if I didn't use the free tier): $0.03. Input tokens: 21454. Output tokens: 465.
Thanks for adding this - pinned !
*Tips and Tricks for Streamlining Flux Lora Model Testing in ComfyUI*
* 0:03 Introduction: The video focuses on speeding up the process of testing LoRA models trained using various methods.
* 0:27 Prompt Automation: The creator recommends using a "caption collector" tool to create a prompt list from the training dataset. This allows for automated testing with prompts relevant to the training data using custom nodes like "zenai prompt V2" in ComfyUI.
* 2:09 Organizing Intermediates: Create a folder (starting with "A" for easy access) to store intermediate LoRA models (e.g., Epoch 3 through 10).
* 2:40 Workflow Overview: The demonstrated workflow uses the same seed, prompt, base model, and settings for each run, only changing the intermediate LoRA model (Epoch) to ensure a fair comparison.
* 3:32 Default Settings: The example uses Flux Schnell, T5 clip, 12 steps, Euler a sampler, and a CFG of 3.5.
* 4:04 Image Comparison: The workflow utilizes "CR image compare" nodes to create comparison charts of different Epoch outputs, visually highlighting the differences between them.
* 5:37 Accuracy vs. Aesthetics: The creator emphasizes the choice between prioritizing accuracy to the training dataset or the overall aesthetic appeal of the generated images, noting that the best-looking image may not always be the most accurate.
* 6:16 Epoch Selection: The creator suggests that Epochs 1 and 2 often have a strong bias towards the base model. They generally recommend Epochs 4-7 for good image quality and 8-9 for accuracy to the training data.
* 7:00 Utilizing Different Clip Encoders: The workflow can be modified to use the full T5 clip encoder for potentially better results, although it can increase processing time.
* 17:52 Verifying LoRA File Integrity: It's crucial to ensure that all downloaded intermediate LoRA files are the same size to avoid errors caused by incomplete downloads.
* 18:37 Sequential Queuing: To ensure a strictly sequential processing order, it's recommended to wait for the current queue to complete before adding more jobs from different tabs.
* 20:47 Early Epoch Considerations: Early Epochs might produce visually appealing results due to a stronger influence of the base model, but they might not accurately reflect the learned features of the LoRA.
* 24:33 Identifying Outliers: Be aware of outlier images that deviate significantly from the expected output. These can be caused by various factors and might not represent the overall performance of a particular Epoch.
* 34:50 Raw Shacks Clip Exploit: The creator briefly mentions a technique they call the "Raw Shacks Clip Exploit" or "Vision Game of Telephone", which involves using abstract images and intentionally misleading captions to create unique artistic styles.
* 40:54 Maintaining Consistent Comparisons: Avoid changing generation settings (CFG, steps, samplers, etc.) between Epoch comparisons to ensure a fair assessment of the LoRA's impact.
I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript. Cost (if I didn't use the free tier): $0.03. Input tokens: 24361. Output tokens: 670.
oooooooooooooooooooooooooo very nice
Great video with a ton of great information! Thank you for sharing! Jason
always !
I feel like I'm graduating from ChatGPT to Claude for code writing. I've been working on reverse-engineering Character AI-style websites. I've rented an A4000, running a Koboldcpp API and RVC-TTS-API. I've deployed a web server on a different host and routed it all through my domain. I've got the landing page with a grid of characters that I can click through into a chat. Voice chat and TTS are toggleable. Thanks Claude!
claude blows me away - it's just that good !
Do a video on 40K.groxy image generation 😗..
I am actually using my Blood Angels as the main focus for how to train good image models - so you can expect a lot more on this front very soon. The first model is already published, but the captions are an example of how people get it wrong. I'll be taking people on a journey to show how we can fix that using them!