Mixed Reality Utility Kit: A Powerful MR Utility for Scene API NOW Available!

  • Published: 5 Nov 2024

Comments • 21

  • @dilmerv
    @dilmerv  2 months ago

    📌 The demo Unity project featured today is available via Patreon: www.patreon.com/dilmerv
    👉 For additional documentation about MRUK take a look at: developer.oculus.com/documentation/unity/unity-mr-utility-kit-overview (this was a great resource to get an overall idea of what’s available)

  • @wfleming801
    @wfleming801 2 months ago

    Thank you for sharing/making this video!! I’ve been missing these features! So glad they are available now. Can’t wait to try it out!

    • @dilmerv
      @dilmerv  2 months ago

      Thanks for your feedback, and if you have any questions as you integrate it, let me know 😉 Have fun!

  • @thomptechdotnet
    @thomptechdotnet 2 months ago

    Awesome video Dilmer!

    • @dilmerv
      @dilmerv  2 months ago

      Thank you, Jeff - I appreciate your feedback, man!

  • @jeffg4686
    @jeffg4686 2 months ago

    @0:53 - Surface Projected Passthrough - NICE - allows them to be mostly immersed, but also see the real world when needed.
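
    (A minimal sketch of how surface-projected passthrough can be wired up with the Meta XR SDK’s OVRPassthroughLayer - the surfaceQuad field is a hypothetical placeholder, and the projection surface type is normally configured in the Inspector, so verify against your SDK version:)

    using UnityEngine;

    // Sketch: project passthrough onto user-defined geometry so the player
    // stays mostly immersed but sees the real world through chosen surfaces.
    public class SurfacePassthrough : MonoBehaviour
    {
        public OVRPassthroughLayer passthroughLayer; // assign in Inspector
        public GameObject surfaceQuad;               // hypothetical surface mesh

        void Start()
        {
            // UserDefined restricts passthrough to registered surface geometry
            // (normally set in the Inspector rather than at runtime).
            passthroughLayer.projectionSurfaceType =
                OVRPassthroughLayer.ProjectionSurfaceType.UserDefined;
            passthroughLayer.AddSurfaceGeometry(surfaceQuad, updateTransform: true);
        }
    }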

    • @dilmerv
      @dilmerv  2 months ago

      That’s a great feature I agree! Thanks for your feedback.

    • @jeffg4686
      @jeffg4686 2 months ago

      @@dilmerv - I was wondering about this just yesterday. If developing a VR app for a restaurant, for instance - everyone plays together in a VR room, but they still want to be able to see the world around them - I was thinking of a switch view (switch to passthrough to view the world, then back to the game). So this is much nicer - you still get to play the game and view the real world.
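
      (For the "switch view" idea, a toggle like this sketch would do - OVRPassthroughLayer exposes a hidden flag; the button binding is an arbitrary choice:)

      using UnityEngine;

      // Sketch: manually switch between passthrough and the VR scene.
      public class PassthroughToggle : MonoBehaviour
      {
          public OVRPassthroughLayer passthroughLayer; // assign in Inspector

          void Update()
          {
              // Toggle passthrough with the A button (arbitrary binding).
              if (OVRInput.GetDown(OVRInput.Button.One))
                  passthroughLayer.hidden = !passthroughLayer.hidden;
          }
      }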

  • @정주호-f5x
    @정주호-f5x 2 months ago +1

    Hello, Dilmer.
    I'm from Korea, and I really enjoy watching your videos.
    Thank you for sharing so much valuable information.
    I have a question for you.
    Is it possible to do image tracking with Meta Quest 3’s passthrough, like how you can do image tracking with AR Foundation?

    • @dilmerv
      @dilmerv  2 months ago +1

      That’s a great question and thank you for all your support!
      As for image tracking, Quest 3 currently doesn’t support it. You are correct that AR Foundation offers it, but the underlying Meta OS doesn’t expose it, at least not yet.
      Thank you and let me know if you have further questions.
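
      (One way to confirm this at runtime is to ask Unity for an image-tracking provider - a sketch using AR Foundation’s subsystem descriptors; on Quest the list currently comes back empty:)

      using System.Collections.Generic;
      using UnityEngine;
      using UnityEngine.XR.ARSubsystems;

      // Sketch: check whether this platform exposes an image-tracking subsystem.
      public class ImageTrackingCheck : MonoBehaviour
      {
          void Start()
          {
              var descriptors = new List<XRImageTrackingSubsystemDescriptor>();
              SubsystemManager.GetSubsystemDescriptors(descriptors);
              Debug.Log(descriptors.Count > 0
                  ? "Image tracking provider available"
                  : "No image tracking provider on this platform");
          }
      }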

  • @propipos1086
    @propipos1086 6 days ago +1

    Good day! I have a question: Is there a tool similar to Vuforia for creating image targets, so that depending on which image is detected, a 3D object appears?

    • @dilmerv
      @dilmerv  6 days ago

      Hey, thanks for your great question! Currently, Meta doesn’t provide image tracking support or access to the cameras; I believe it is coming next year based on what they announced during Meta Connect 2024.

  • @jamesprise4252
    @jamesprise4252 2 months ago +2

    Thanks, Dilmer! Anything for spatial mapping or image tracking?

    • @dilmerv
      @dilmerv  2 months ago +1

      Thank you for your feedback. Meta currently only supports spatial mapping (the room scan) prior to entering the app, but image tracking is definitely not supported yet. Are you also considering other XR devices?
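
      (As a sketch of what "prior to entering the app" looks like in code: MRUK loads the room scanned during Space Setup and fires a callback - method names are taken from the MRUK docs linked above, so verify them against your package version:)

      using UnityEngine;
      using Meta.XR.MRUtilityKit;

      // Sketch: react once MRUK has loaded the scene model from Space Setup.
      public class SceneModelLoader : MonoBehaviour
      {
          void Start()
          {
              MRUK.Instance.RegisterSceneLoadedCallback(OnSceneLoaded);
          }

          void OnSceneLoaded()
          {
              MRUKRoom room = MRUK.Instance.GetCurrentRoom();
              Debug.Log($"Scene model loaded with {room.Anchors.Count} anchors");
          }
      }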

    • @jamesprise4252
      @jamesprise4252 2 months ago +1

      Right now, I’m just trying to map an experience to a specific room. Maybe there’s a way to export the room scans without having to build a 3D model of the space and "fit" it to a prefab? Thanks, Dilmer!
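
      (A hedged sketch of that idea: rather than fitting a prefab to the room, iterate the scene anchors MRUK exposes and spawn content on matching labels - tablePropPrefab is illustrative, and anchor/label APIs differ between MRUK versions:)

      using UnityEngine;
      using Meta.XR.MRUtilityKit;

      // Sketch: spawn content on scanned room anchors instead of a hand-built model.
      public class RoomContentPlacer : MonoBehaviour
      {
          public GameObject tablePropPrefab; // hypothetical prefab

          public void PlaceContent()
          {
              MRUKRoom room = MRUK.Instance.GetCurrentRoom();
              foreach (MRUKAnchor anchor in room.Anchors)
              {
                  // Place a prop on every anchor labeled as a table.
                  if (anchor.Label.HasFlag(MRUKAnchor.SceneLabels.TABLE))
                      Instantiate(tablePropPrefab,
                                  anchor.transform.position,
                                  anchor.transform.rotation);
              }
          }
      }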

  • @cruzmoragael983
    @cruzmoragael983 1 month ago +1

    Hello, thanks for this video.
    I was wondering if I can save that scene model and develop interactions on it. I’m working on VR and XR training at my job: I scan a part of my workplace and save it, then in editor mode I add features and many other things, so that when I export my application to the Quest 3, the headset recognizes this environment and all the interactions are placed within it - so to speak, when I walk around, everything stays where I scanned it. But I cannot find a solution for this.

    • @dilmerv
      @dilmerv  17 days ago

      What type of interactions are you trying to develop?

    • @cruzmoragael983
      @cruzmoragael983 17 days ago

      The simple interactions of grabbing objects and snapping, and I want to make an interaction as if I were smearing something onto a surface. For now, as a solution, I use particles to simulate it, but it is not completely correct. Thank you very much :)

  • @ar.manimaran
    @ar.manimaran 11 days ago

    Can we implement Luma AI in Unity? I want to scan an object (cars) on Quest 3, convert it to a 3D object, and then control the 3D cars like remote-controlled cars. Is it possible? Everything should happen on Quest 3.

  • @erenboran3918
    @erenboran3918 2 months ago +1

    Thank you for the great videos, but I have a question.
    I’m trying to figure out how to do room scanning on Quest 3 (like in the demo app "First Encounters", not the basic plane detection that sets up walls from your room setup).
    How can I make a "Scan room" button? Do you know anything about this?

    • @dilmerv
      @dilmerv  2 months ago

      My understanding is that each app checks for a scene model at startup; you can’t launch the scan yourself (with a button), as this is all handled at the OS level. Also, if you require a mesh (the global mesh), there is a label you can turn on within the MRUK prefab that will enable the captured mesh; then you have more flexibility when it comes to raycasting.
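
      (A sketch of that raycasting flexibility, assuming the GLOBAL_MESH label has been enabled on the MRUK prefab - the Raycast signature here is based on the MRUK docs, so double-check it against your installed version:)

      using UnityEngine;
      using Meta.XR.MRUtilityKit;

      // Sketch: raycast against the room once the captured mesh is enabled.
      public class RoomRaycaster : MonoBehaviour
      {
          void Update()
          {
              var ray = new Ray(transform.position, transform.forward);
              MRUKRoom room = MRUK.Instance != null ? MRUK.Instance.GetCurrentRoom() : null;
              if (room != null &&
                  room.Raycast(ray, 10f, out RaycastHit hit, out MRUKAnchor anchor))
              {
                  Debug.Log($"Hit {anchor.name} at {hit.point}");
              }
          }
      }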