ENDPLAN_JOY
  • Videos 26
  • Views 26,630
STABLE PROJECTORZ AI 3D Texturing Test
STABLE PROJECTORZ AI 3D Texturing Test
#StableDiffusion #StableProjectorz #3DProjection #AiTexturing
Views: 47

Videos

Mocopi Metahuman UE5 direct livelink testing without Rokoko Studio
55 views · 1 month ago
Mocopi Metahuman UE5 direct livelink testing without Rokoko Studio #mocopi #metahuman #ue5 #livelink #mocap
DiffusionLight & LucidDreamer Combination Test
61 views · 1 month ago
DiffusionLight & LucidDreamer Combination Test. First, an urban image was created with Stable Diffusion. Then an HDR panorama dataset was generated from that single image with DiffusionLight, and a 3D Gaussian Splatting scene was created with LucidDreamer. Finally, the two generative-AI results were imported into Unreal Engine and combined into one sequence.
Comfy Textures UE5 Projection Mapping Test
283 views · 1 month ago
Comfy Textures UE5 Projection Mapping Test #UnrealEngine #ComfyUI #SDXL #Automatictexturing
SwitchLight Studio First Testing (Re-Lighting)
53 views · 1 month ago
SwitchLight Studio First Testing (Re-Lighting) #switchlightstudio #beeble #relighter #relighting Test clip - TAMBURINS X JENNIE [The Day of Sorceress]
Metahuman Custom clothes Fast Workflow Test
137 views · 1 month ago
Metahuman Custom clothes Fast Workflow Test #UE5 #Metahuman #METATailor #PRETCOORD I tested how to apply custom clothes to a MetaHuman in a short time using various tools.
Optitrack To UE5 Retargeting
43 views · 1 month ago
Optitrack To UE5 Retargeting
Runway Gen-3 Alpha & Depth Anything v2 combine test
46 views · 1 month ago
Runway Gen-3 Alpha & Depth Anything v2 combine test #runway #gen3 #depthanything #genaivideo
Surreal Attraction
39 views · 1 month ago
Runway Gen-3
MimicMotion : MetaHuman Full Body Animation Transfer Test
115 views · 1 month ago
MimicMotion : MetaHuman Full Body Animation Transfer Test #MimicMotion #Metahuman #ComfyUI #SVD #DWPose
LivePortrait : Metahuman Facial Animation Transfer Test
174 views · 1 month ago
LivePortrait : Metahuman Facial Animation Transfer Test #LivePortrait #Metahuman #UE5 #ComfyUI #StableDiffusion #RVC
Runway Gen-3 First Testing 🐠
64 views · 1 month ago
Runway Gen-3 First Testing 🐠
CharacterGen : Testing 3D Character Generation & Auto-Rigging Process
140 views · 1 month ago
CharacterGen : Testing 3D Character Generation & Auto-Rigging Process #CharacterGen #StableDiffusion
Meta-Human Eating Chicken Sandwich
26 views · 1 month ago
Meta-Human Eating Chicken Sandwich 🍔 #kling #ai #ue5 #metahuman #ada #chickensandwich
TripoSR GenAI 3D Object Generation Testing
152 views · 6 months ago
TripoSR GenAI 3D Object Generation Testing
AI Self-Portrait with Voice Conversion
138 views · 11 months ago
AI Self-Portrait with Voice Conversion
Creating a Realistic Digital Character with AI and MetaHuman
2.1K views · 1 year ago
Creating a Realistic Digital Character with AI and MetaHuman
SDXL 1.0 AUTOMATIC1111 Refiner All-in-One Workflow using ComfyUI
136 views · 1 year ago
SDXL 1.0 AUTOMATIC1111 Refiner All-in-One Workflow using ComfyUI
NewJeans - Zero with Stable Diffusion (for research purposes only)
587 views · 1 year ago
NewJeans - Zero with Stable Diffusion (for research purposes only)
MetaHuman to Stable Diffusion i2i First Testing
2.5K views · 1 year ago
MetaHuman to Stable Diffusion i2i First Testing
Tech Demo ChatAvatar to MetaHuman (Unreal Engine 5)
1.2K views · 1 year ago
Tech Demo ChatAvatar to MetaHuman (Unreal Engine 5)
UE 5.1 Anamorphic Lens Effect testing
1.9K views · 2 years ago
UE 5.1 Anamorphic Lens Effect testing
Driving Hyundai IONIQ 5 in Unreal Engine 5
510 views · 2 years ago
Driving Hyundai IONIQ 5 in Unreal Engine 5
MetaHuman Self-Portrait MAKING FILM
16K views · 2 years ago
MetaHuman Self-Portrait MAKING FILM

Comments

  • @quicksmilenathan103
@quicksmilenathan103 23 days ago

Can you explain this workflow?

  • @wolfyrig1583
@wolfyrig1583 24 days ago

Quick question: did you import the OptiTrack skeleton and set up a retargeter? Or did you use another method? Calculation from points?

  • @trueLuminus
@trueLuminus 25 days ago

    How does it know what the other sides are supposed to look like?

  • @otsoya
@otsoya 6 months ago

What was the polycount?

    • @endplan4680
@endplan4680 6 months ago

It consists of 156,928 triangles and takes up about 8 MB.

  • @HuINg-mv7ti
@HuINg-mv7ti 7 months ago

Did you make a MetaHuman in MetaHuman Creator, bring it into Unreal Engine, record a talking video with Live Link, and then swap out the face in each frame with AI? I'm curious about the workflow!

    • @endplan4680
@endplan4680 6 months ago

Hello, sorry for the late reply. Yes, that is mostly correct. Here is a brief summary of the workflow: 1. Create the face object generated with ChatAvatar. 2. Send that object to MetaHuman Creator using the Unreal Engine 5 Mesh to MetaHuman plugin. 3. Import the resulting custom MetaHuman into the Unreal Engine MetaHuman sample project, apply the animation, and render an image sequence. 4. Run the rendered image sequence through Stable Diffusion web UI with Img2Img, generating frame by frame.
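      The last step of that workflow, frame-by-frame Img2Img over a rendered image sequence, can be scripted against the AUTOMATIC1111 web UI's HTTP API (available when the UI is launched with `--api`). The sketch below is only an illustration of that step, not the author's actual script; the prompt, folder names, and settings are placeholder assumptions.

      ```python
      # Sketch: push each rendered frame through the AUTOMATIC1111 web UI
      # img2img API. Assumes the UI runs locally with the --api flag; the
      # prompt, paths, and settings below are placeholders.
      import base64
      import json
      import pathlib
      import urllib.request

      API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # default web UI address

      def build_payload(frame_bytes: bytes, prompt: str, denoise: float = 0.4) -> dict:
          """Build the JSON request body for one rendered frame."""
          return {
              "init_images": [base64.b64encode(frame_bytes).decode("ascii")],
              "prompt": prompt,
              "denoising_strength": denoise,  # low values keep the MetaHuman likeness
              "steps": 20,
          }

      def stylize_sequence(in_dir: str, out_dir: str, prompt: str) -> None:
          """Run img2img on every PNG frame in in_dir, writing results to out_dir."""
          out = pathlib.Path(out_dir)
          out.mkdir(parents=True, exist_ok=True)
          for frame in sorted(pathlib.Path(in_dir).glob("*.png")):
              payload = build_payload(frame.read_bytes(), prompt)
              req = urllib.request.Request(
                  API_URL,
                  data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  result = json.load(resp)
              # The API returns generated images as base64-encoded strings.
              (out / frame.name).write_bytes(base64.b64decode(result["images"][0]))

      # Usage (requires a running web UI):
      # stylize_sequence("renders", "stylized", "photorealistic portrait")
      ```

      Keeping `denoising_strength` low is what preserves temporal consistency across frames, since each output stays close to its MetaHuman render.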

  • @경숙-t6h
@경숙-t6h 1 year ago

    What workflow system did you use?

    • @endplan4680
@endplan4680 1 year ago

      I connected the two systems using sd-webui-comfyui.

  • @Ray2kay6HG
@Ray2kay6HG 1 year ago

How did you attach ComfyUI to AUTOMATIC1111? Looking for links if you have them...

    • @endplan4680
@endplan4680 1 year ago

      I used this extension. github.com/ModelSurge/sd-webui-comfyui

  • @blacksage81
@blacksage81 1 year ago

    gold

  • @24vencedores11
@24vencedores11 1 year ago

    I love metahuman but it's too limited. No clothes and morphs.

  • @aleksandrrush4076
@aleksandrrush4076 1 year ago

    Cringe...

  • @iqbalyuharnanda
@iqbalyuharnanda 1 year ago

    How can u make glasses?

  • @DanikaOliver
@DanikaOliver 1 year ago

That's so cool. I'm surprised it's not viral, though.

  • @marcusmanningtv
@marcusmanningtv 1 year ago

    My mesh head always explodes... wtf

  • @Yabbo06
@Yabbo06 1 year ago

    that is impressive

  • @rami2965
@rami2965 1 year ago

Have you tried training the Stable Diffusion model on images of the MetaHuman?

    • @Nevetsieg
@Nevetsieg 1 year ago

Forgive me, but... what's the point? Why "simulate" something when I already have the original version that runs in realtime?

  • @Enzait
@Enzait 1 year ago

    Nice! How long does this process usually take?

    • @endplan4680
@endplan4680 1 year ago

      I think it usually took about an hour or two.

  • @Ahmed_Rtv
@Ahmed_Rtv 1 year ago

    How did u do that

  • @jvin9539
@jvin9539 1 year ago

    Nice

  • @JustSander
@JustSander 1 year ago

    This is awesome! I need to give this a try! 👍🏻

  • @ivonmorales2654
@ivonmorales2654 1 year ago

The essence of the anamorphic lens is not the bokeh, the blur, or the horizontal reflections; it is the IMMERSIVE SENSATION, like you're there, inside the movie. Do you think a couple of short videos shot with anamorphic lenses should be compared against a couple of shots in Unreal? Because I still don't feel that immersion with the plugin. A hug, friend.

    • @endplan4680
@endplan4680 1 year ago

      I fully agree with you. But Unreal is just the beginning. It's gonna be better.

    • @ivonmorales2654
@ivonmorales2654 1 year ago

      @@endplan4680 I have the same expectations as you, about Unreal.

  • @parkmujin1610
@parkmujin1610 2 years ago

    link to the 5.1 build?

    • @endplan4680
@endplan4680 2 years ago

      github.com/EpicGames/UnrealEngine

    • @endplan4680
@endplan4680 2 years ago

      Download it from here.

  • @IstyManame
@IstyManame 2 years ago

My gosh, it took seeing the untextured pizza for me to realize this isn't real footage lol, good job. Also, did you animate your camera manually, or is it tracked by something IRL?

    • @endplan4680
@endplan4680 2 years ago

I used a hand-held effect on the camera; it's camera shake as well.

  • @whitelotus47
@whitelotus47 2 years ago

    Great work

  • @tamtamX-cq8or
@tamtamX-cq8or 2 years ago

Face capture is really good, actually! Did you have to do any editing afterwards, or is this straight from the capture?

  • @lehaiduong1987
@lehaiduong1987 2 years ago

At this level, who knows when the youngest one will ever get it, haha

  • @sonydee33
@sonydee33 2 years ago

Is this included in the Coloso course?

    • @endplan4680
@endplan4680 2 years ago

This is not included, but if you take the course, you will be able to follow along. If you need additional information, please ask me. :)

    • @sonydee33
@sonydee33 2 years ago

      @@endplan4680 awesome!

  • @jorgeyepezlazo8773
@jorgeyepezlazo8773 2 years ago

That's great! How did you manage the voice? I mean, did you track the lip sync, or what is the process for that part?

    • @endplan4680
@endplan4680 2 years ago

Usually I use ARKit to solve it personally, and I also use text-driven tools such as Omniverse Audio2Face. Thanks :)

    • @jorgeyepezlazo8773
@jorgeyepezlazo8773 2 years ago

      @@endplan4680 thank you! I will try that