How to Use Your PNGTuber in VtubeStudio (FREE) | Live2D Tutorial

  • Published: Nov 20, 2024

Comments • 33

  • @phloxart
    @phloxart  8 months ago +8

    hi all, hope you find the video helpful!
    please note that at 5:39 I misspoke--default for voice volume should be 0.
    I also apologize for some of my text edits being delayed--I had adjusted something with the audio and forgotten to shift the text as well.
    Oh and, (thank you to brynleabuilds) I have been reminded that the free version of Live2D does allow saving/exporting, but has limited functions. You may be able to do this with just the free version, and potentially save your pro trial for a different project!
    happy pngtubing!
    follow me:
    twitter: twitter.com/flux_vector
    twitch: www.twitch.tv/fluxvector

    • @executrixofficial
      @executrixofficial 8 months ago

      I think I made a mistake, I put the model file into picrew data and nothing popped up in VtubeStudio, pls help :(

    • @phloxart
      @phloxart  8 months ago

      @@executrixofficial Hiya! You mean you moved your model to the folder and it won't show in your list of models? First open and close the menu a few times, or close and re-open VTS. VTS does not instantly update the model list when you add a new model in, so it may take a moment for it to show up.
      Did you make sure that your model data was stored in its own folder before adding it to the VTS Live2D Models folder? VtubeStudio does not read model files alone--the moc3 file, json files, and texture folder (with texture pngs) will need to all be together within their own folder.
      If that doesn't help, just explain in a bit more detail and I'll help as best I can!
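
      As a rough sketch of the folder layout that reply describes (folder and file names here are hypothetical examples, not from the video), a quick Python check for a model folder inside the VTS Live2D Models directory might look like this:

      from pathlib import Path

      def check_model_folder(folder: str) -> None:
          """Rough check that a Live2D model folder has what VtubeStudio needs.
          Folder and file names here are illustrative assumptions."""
          root = Path(folder)
          has_moc3 = any(root.glob("*.moc3"))          # the compiled model
          has_json = any(root.glob("*.model3.json"))   # the model's json settings
          has_pngs = any(root.rglob("*.png"))          # texture pngs (usually in their own subfolder)
          print("moc3 file:   ", "OK" if has_moc3 else "MISSING")
          print("model3.json: ", "OK" if has_json else "MISSING")
          print("texture pngs:", "OK" if has_pngs else "MISSING")

      # Example with a made-up path:
      # check_model_folder("VTube Studio/Live2D Models/MyPngtuber")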

    • @executrixofficial
      @executrixofficial 8 months ago

      @@phloxart I followed your advice and it worked this time, thank you for making this tutorial.
      I just have 2 questions: 1. How am I supposed to record/stream with VtubeStudio? Is it already built into the app or do I need a different program?
      2. Now that I've uploaded my model, will it automatically save in VtubeStudio?

    • @phloxart
      @phloxart  8 months ago

      @@executrixofficial 1-- To stream you will need to download & install streaming software like OBS and link your streaming platform to it. There are tons of tutorials on how to get it set up! After installing and getting used to the app, you will need to add VtubeStudio as a source in OBS. The current best way to do this is using a plugin called Spout2. I have a tutorial on my channel for that here: ruclips.net/video/zA7v_DNVxOI/видео.htmlsi=Nm7EPIYhRni4zOUE
      2-- Yes, your model will now always exist in VtubeStudio so long as you don't edit your Live2D Models folder. Any hotkeys or model adjustments made to your model will save to the model's JSON file. I do recommend backing it up from time to time (as with any files, it's good practice). But yes, it will stay there and be accessible every time you open the app now :)
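
      As a rough illustration of the backup habit mentioned above (all paths here are made-up examples), something like this copies the whole model folder, JSON and all, into a dated backup:

      import shutil
      from datetime import date
      from pathlib import Path

      def backup_model(model_dir: str, backup_root: str) -> Path:
          """Copy the full model folder (moc3, json, textures) to a dated backup folder.
          Both paths are hypothetical examples."""
          src = Path(model_dir)
          dest = Path(backup_root) / f"{src.name}_{date.today().isoformat()}"
          shutil.copytree(src, dest)
          return dest

      # Example with made-up paths:
      # backup_model("VTube Studio/Live2D Models/MyPngtuber", "D:/model_backups")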

  • @_Kisaragi
    @_Kisaragi 8 months ago +1

    oh I was literally just wondering if this was possible the other day!

  • @saintsnakech
    @saintsnakech 8 months ago +1

    Your tutorials are always so well explained, thorough, and even funny as hell, thank you for all your work always Flux!

    • @phloxart
      @phloxart  8 months ago +1

      thank you so much 🧡

  • @goshirah
    @goshirah 2 months ago

    I've used this twice since finding this tutorial and it's worth doing - whether you already have your own Picrew or fugi's or not, it's 100% worth setting up even though the programs can be intimidating!

    • @phloxart
      @phloxart  2 months ago +2

      thank you so much! so glad you're getting use out of it, that brings me so much joy 🥹

    • @goshirah
      @goshirah 2 months ago

      @@phloxart NO U

  • @notthetruedm
    @notthetruedm 6 months ago

    This was absolutely fantastic!!! I've been overwhelmed before trying to use this software and just stuck with a basic pngtuber setup, but I want to give it another go, especially with how simple you can make things.

  • @russmack11
    @russmack11 8 months ago +2

    Lol... the *bleep*ing kills me every time. Continue with the potty mouth please! Also thanks for this simple tutorial ❤

    • @phloxart
      @phloxart  8 months ago +2

      I always happen to drop an f bomb early so I try to take it out in case youtube nerfs me HAHA

  • @acribuss
    @acribuss 8 months ago +1

    Good shit, might give it a shot👍

  • @shadowscalecosplay6622
    @shadowscalecosplay6622 3 months ago

    This saved me a lot of time and money! thank you!

  • @brynleabuilds
    @brynleabuilds 8 months ago +1

    4:26 The free version allows saving project files and exporting a moc3 file, it's just limited in some tools and layers.

    • @phloxart
      @phloxart  8 months ago +2

      Aa okay, good to know, I remembered that wrong--thank you for the correction.
      That's actually great because people don't have to use their trial up for this quick rig!

  • @pancewarrior
    @pancewarrior 5 months ago

    Thank you so much, your tutorial has been of great help!

  • @primepikachu5
    @primepikachu5 5 months ago

    I have officially become an oyster. thank you.

  • @cmluna90
    @cmluna90 3 months ago

    Any chance you’d have the commands for Mac for what you did? I can’t figure out what you selected. Thank you for the video.

    • @phloxart
      @phloxart  3 months ago

      @@cmluna90 Should be the same with some key replacements. If I remember Mac correctly, you will be replacing “control” with your “command” key and “alt” with the “option” key (example: control shift c would be command shift c for you, or shift alt would be shift option).
      My “H” hotkey (to hide element bounding boxes) is not a standard hotkey and would need to be added in your Live2D settings if you want to use it.
      Let me know if those key replacements work, if not I will check them tomorrow. I have a Mac, it's just for work so I don't use it often for Live2D--that being said, I will check to make sure if needed 👍
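
      For reference, those key replacements boil down to a simple mapping; a small Python sketch (the shortcut strings are just the examples from above):

      # Windows -> Mac modifier mapping described in the reply above.
      KEY_MAP = {"control": "command", "ctrl": "command", "alt": "option"}

      def to_mac(shortcut: str) -> str:
          """Translate a 'control shift c'-style shortcut to its Mac equivalent."""
          return " ".join(KEY_MAP.get(key.lower(), key) for key in shortcut.split())

      print(to_mac("control shift c"))  # command shift c
      print(to_mac("shift alt"))        # shift option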

  • @Dtiaraj
    @Dtiaraj 2 months ago

    Hi, I'm looking to make my model talk like the model you're using in the video; do you have a video on how to do so? My avatar is more hyper-realistic, so the mouth just opening and closing looks weird.

    • @phloxart
      @phloxart  2 months ago

      Hiya! My model setup in this video is a Live2D model running on VtubeStudio with Vbridger.
      I do not have any in-depth tutorial on rigging in Live2D (if I have the time someday I'd love to), but model creation this way is quite an extensive process. I would recommend starting with some YouTube videos going over it; there are thankfully lots of resources that have become available in recent years.

  • @GrimRogueVT
    @GrimRogueVT 7 months ago

    Any chance you know anything about different expressions? I know some other software does that. I was using fugitech before, but now thanks to you I'm using VtubeStudio. Now I'm thinking about getting my png redone by someone and I wanted to add different expressions, so I'm wondering how that would be possible in VtubeStudio with a png.

    • @phloxart
      @phloxart  7 months ago +2

      Yes, expressions are totally possible to rig for pngtubers using the same method that standard .moc3 Live2D models use! Each expression will need to be tied to its own parameter in Live2D (example: an excited expression with star eyes as a parameter going from 0 to 1, with 1 being 'on/visible' and 0 being 'off/not visible'), and then you can toggle this using a hotkey in VtubeStudio.
      For pngtubers, rigging something like the example above using a simple opacity change would be very easy to implement. More complicated Live2D stuff would let you change the physics intensity or even toggle/add an animation. And if you separate your layers more (ex. each 'speaking' and 'not-speaking' png might consist of 3-4 pngs, one for hair, one for eyes, etc.), you can use VtubeStudio to change the colors and stuff like that. Lots of options for sure!
      I plan on doing an advanced version of this tutorial when I have some time, and it will cover animations/autoblinking, expressions, and a more advanced lipsync. No guarantee on when that'll be out though, so if you want to do some research atm, I'd recommend watching a video on 'rigging live2d expressions' and then 'setting up hotkeys in vtubestudio.'
      Hope that helps, happy rigging :)
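
      To make the 0-to-1 expression idea above concrete, here is a tiny sketch (the parameter and layer names are hypothetical, not from any actual model):

      def star_eyes_opacity(param_star_eyes: float) -> float:
          """Opacity of a hypothetical 'star eyes' art mesh, driven by its own parameter.
          0 = off/not visible, 1 = on/visible; a VtubeStudio hotkey just flips the value."""
          return max(0.0, min(1.0, param_star_eyes))

      print(star_eyes_opacity(0.0))  # expression hidden
      print(star_eyes_opacity(1.0))  # expression fully visible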

    • @GrimRogueVT
      @GrimRogueVT 7 months ago +1

      @@phloxart I appreciate you taking the time to respond! I'll look more into the expressions in other videos, and I look forward to your future videos! ♥

  • @cruentorex8334
    @cruentorex8334 6 months ago

    Hi! I have a model that is actually a .gif file, I was wondering if you know of any way to get that to work instead of the PNG? If it just comes down to modeling the GIF's movement that's fine, I was just wondering what your take was.
    If it helps give context, the only thing that moves in the gif is a floating ball and mouth movements when I talk; otherwise it's all static.

    • @phloxart
      @phloxart  6 months ago +1

      Totally doable with a gif! Most .gif files can be imported directly to a .psd and will import each frame as a layer.
      If I'm understanding correctly, then you probably have 2 sets of the same floating frames--one set with mouth open and another with mouth closed. You can use the same method in this video, but around 7:15, you'll have an extra step of grouping all floating mouth open frames in 1 deformer and all mouth closed frames in another deformer (hold shift or control to collect them and make a new deformer). Then continue on and rig those two deformers as if they were your mouth open & mouth closed. Deformers can use opacity and multiply, so it should still work the same; you'll just have one deformer holding multiple components for open/closed instead of one artmesh for each.
      As for the floating animation, there are two options for you, but it's a little more complicated because both will require you to use the animation menu. It boils down to 'rig the floating in live2d' as you suggested, OR 'use the frames in frame by frame animation.' If it's a unique animation I would do option 2 so you can keep it! If it's simple then you can probably rig it more easily in Live2D. I'll try my best to explain both, but maybe I can do a video on this at some point! You will have to make a new parameter for the floating, which you will rig depending on which route you go.
      L2D Deformer Method
      Hold ctrl and select your mouth open/mouth closed deformers and make a new deformer 'floating' for both of them. Then you can make a new parameter 'floating' or something that will go from 0 to 1. Make your 'floating' deformer active on this new parameter using the two-keyform button. At 0 it stays the same, and at 1 you can hold shift and drag the deformer up.
      Frame by Frame
      Import all your frames for the levitation, add a parameter for the levitation, and make the bounds equal to the number of frames (ex. if it's an 8 frame loop, you'd go from 0 to 7). Then rig visibility for each frame--ex. frame 1 would be visible on param=0 with all other frames invisible, frame 2 visible on param=1 with all other frames invisible, and so on.
      Once you have that rigged, you can make a new animation file (set to 1920x1080, and make sure your framerate matches what you use in VtubeStudio, ex. if you use 60fps then the animation needs to be 60fps). Drag your model into the scene and resize it so you can see it okay (the placement doesn't matter too much since we aren't exporting to video or anything). You'll want to open the dropdown 'live2d parameters' so you can view your parameters, and set keyframes for your floating parameter. If you do the deformer way, you really only need 3 keyframes (bottom keyframe, top keyframe, then back to bottom to loop). You can open the dopesheet and set the graph to ease in so it looks smooth. If you do it the frame by frame way, you will have 1 keyframe for each frame, and you'll have to set your dopesheet to a stepladder so it doesn't flash/blend your frames.
      You can export the animation file and add it to your model folder.
      Exporting the actual model is the same but you may need a larger texture atlas to fit all your frames.
      The last thing is that in VtubeStudio you'll have to set your idle animation to your floating animation!
      I know that's a lot of info and it's hard to translate text into working on something, but if anything I hope it can guide you a bit or at least help you know what to look up for assistance. If you have questions feel free to comment back or DM me on twitter or something!
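
      If it's easier to experiment with those frames outside of the .psd, here is a short sketch using the Pillow library (file names are made-up examples); it splits the gif into numbered pngs and illustrates the 'frame N is only visible when the parameter sits on N' rule from the frame-by-frame option:

      from pathlib import Path
      from PIL import Image, ImageSequence  # pip install Pillow

      def split_gif(gif_path: str, out_dir: str) -> int:
          """Save every gif frame as its own png (frame_00.png, frame_01.png, ...)."""
          out = Path(out_dir)
          out.mkdir(parents=True, exist_ok=True)
          count = 0
          with Image.open(gif_path) as gif:
              for i, frame in enumerate(ImageSequence.Iterator(gif)):
                  frame.convert("RGBA").save(out / f"frame_{i:02d}.png")
                  count = i + 1
          return count

      def frame_opacity(frame_index: int, float_param: float) -> float:
          """Frame-by-frame rule: an 8-frame loop uses parameter values 0..7, and
          frame N is fully visible only when the parameter sits on N."""
          return 1.0 if round(float_param) == frame_index else 0.0

      # Example with a made-up file: split_gif("my_model.gif", "gif_frames")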

    • @phloxart
      @phloxart  6 months ago +1

      Oh and sorry, forgot to say, if the ball is attached to your pngs, you can detach it in your psd file by selecting it and erasing or using ctrl+x and ctrl+v. Then when you import, it will be in its own layer/artmesh.

  • @Jebus905
    @Jebus905 5 months ago

    Wow this is really cool! Is there any way to include blinking as well or is that too advanced? Either way, I know what I'm doing this weekend. Thank you so much for this tutorial!

    • @phloxart
      @phloxart  5 months ago +1

      Thank you! Absolutely, you would just make another parameter going from 0 to 1 and rig the eyes with opacity similar to the mouth (and make sure to adjust the input settings in VTS). The eyes open/close mesh or related deformers should be inside of the squash/stretch deformer in the hierarchy so they still move with the rest of the model.
      Because you would then have 4+ pngs total, it will get a little more complicated with overlap. I would recommend separating the eyes on their own layer in the .psd so they do not disrupt your speaking parameter (you could separate the mouth too, this tutorial was just very barebones with separation). If you don't have any way to separate them in the .psd, you can separate them by using the manual mesh tool (button to the left of the one used at 6:19, looks like a triangle with a pen) and mesh *just* the eyes in Live2D by clicking a mesh boundary around them instead of automesh. Any area outside of that mesh boundary will not be visible, so you can still separate them in a pinch :)
      I will be doing an advanced version of this tutorial including a more in-depth lip sync, hotkeys/toggles/animations, and autoblink at some point though!
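
      To picture how the mouth and eye parameters overlap once you have four or more pngs, here is a little sketch (parameter and layer names are hypothetical):

      def layer_opacities(mouth_open: float, eyes_open: float) -> dict:
          """Opacity per layer so only one mouth state and one eye state show at a time.
          Both parameters run 0 to 1, like the mouth parameter rigged in the video."""
          return {
              "mouth_open":   mouth_open,
              "mouth_closed": 1.0 - mouth_open,
              "eyes_open":    eyes_open,
              "eyes_closed":  1.0 - eyes_open,  # a blink drops eyes_open to 0
          }

      print(layer_opacities(mouth_open=1.0, eyes_open=0.0))  # talking mid-blink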