OMG, is this the updated guide? Lesgooo. I was still puzzled by your previous guide about where to copy the code.
Thx, and the GitHub links are in the description box :)
@@firedragongamestudio The previous guide, bro, I still can't find it on GitHub, plz guide me. Btw, can you also make a video on how to scale the map to Unity? It's not scaling perfectly.
@@firedragongamestudio Bro, I still can't find the code from your previous tutorial on GitHub.
And btw, how do I scale the size to match the map? I know the exact size of my site plan, but where do I change it in Unity?
@@floweyishere7122 It's under Assets/Scripts. Just use your apartment size in meters when using a cube, and a tenth of it when using a plane (a default Unity cube is 1x1x1 m, a default plane is 10x10 m at scale 1).
@@firedragongamestudio Ooo, so you scale the whole "environment" object to match the real size.
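To make the scale math concrete, here is a rough sketch, not from the tutorial repo; the component name and sizes are made up. It just applies the rule above: a cube takes the real size in meters directly, a plane takes a tenth of it.

```csharp
using UnityEngine;

// Minimal sketch (hypothetical component): size the floor object to real-world meters.
// A default Unity cube is 1x1x1 m, a default plane is 10x10 m at scale 1.
public class FloorPlanScaler : MonoBehaviour
{
    // Example values - replace with your measured site size in meters.
    public float widthInMeters = 12f;
    public float lengthInMeters = 8f;
    public bool isPlane = false; // true if the floor object is a Unity Plane primitive

    void Awake()
    {
        // Cube: scale equals the size in meters. Plane: a tenth of it (10 m per scale unit).
        float factor = isPlane ? 0.1f : 1f;
        transform.localScale = new Vector3(
            widthInMeters * factor,
            transform.localScale.y,
            lengthInMeters * factor);
    }
}
```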
Can you make another one applying spatial mapping instead of a marker/tracked image, e.g. spatial mapping using the Immersal SDK?
Hello there! Thanks for the indoor navigation tutorials. I have a question regarding the 3D model of the environment. I have an iPhone with a LiDAR sensor and I'm wondering if I can utilize the 3D scans of the rooms for more accurate navigation?
I am also wondering about that, have you implemented it?
you can totally do that!
Can you provide a tutorial for multi-floor navigation? 🙏
Yeah, I need that too.
Can you also show how to handle second floors, going up staircases?
On my list :)
Thanks! Can you make a tutorial about accurate plane tracking and changing the color of a selected wall? I can't seem to find an accurate one; all other plane tracking solutions mess it up by highlighting more objects than just walls.
Hi @firedragongamestudio, amazing tutorial on AR! What were the basic steps you considered while making this PoC, like which package to choose? Basically, what was the thought process for creating this project from a developer's point of view?
As I've done something like this many times already, it's mostly checking whether the selected framework (ARFoundation in this case) supports the required features, then using it accordingly. If I were starting from scratch, it would be more trial and error: start by placing a cube in a room and test how far I can move and whether the cube stays still. After that, adding features like occluding walls step by step usually leads to the desired outcome.
@@firedragongamestudio Thanks
OK, I tried this tutorial and it works perfectly. Can you make it like the previous tutorial, with the minimap?
How does it work, bro?
On my list :)
Hello, if I want to implement a real location through a map, what is the process for that? And if I want to add many targets, how do I do that? Plz guide me, I need your help very much for this 🙏
What database should I use for this?
Whatever you like.
What type of project did you use? AR Core or 3D Core? And if you don't mind, I need one of your social accounts.
It's just Universal 3D Core and sry no social accounts.
@@firedragongamestudio Thank you. Do you know how to fix the "Failed to update Android SDK package list, see the console" error?
@@areesbr6938 nope, you'll have to google that, sry
I tried to follow the tutorial but the line would not show
The line renderer moves around randomly and isn't stable. What could be the problem?
Hello, I'm working on my school project. Could you tell me how I can use ARFoundation (ARCore) + NavMesh in Unity for indoor navigation without using a QR code? I want to use feature matching/image matching.
This video contains exactly what you described!
@@firedragongamestudio Could I have your contacts? I need some help from you.
If I want to add an avatar for a gimmick, how should I do it?
Not quite sure what you mean by an avatar as a gimmick, but if you mean some kind of companion, you can always add a character which either follows the path until the next corner or stays near the player camera and points to the nearest corner.
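For anyone wanting to try that companion idea, here is a minimal sketch with assumed names, not part of the tutorial code: a character that floats in front of the AR camera and turns toward the next corner of the calculated NavMesh path.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Rough sketch (hypothetical component): a companion that hovers in front of the
// AR camera and rotates to point toward the next corner of the current path.
// "playerCamera" and the path source are assumptions - wire them to your own setup,
// e.g. the path you already calculate with NavMesh.CalculatePath every frame.
public class PathCompanion : MonoBehaviour
{
    public Transform playerCamera;          // usually the AR camera
    public float distanceFromCamera = 1.5f; // how far in front of the user it floats

    NavMeshPath currentPath;

    public void SetPath(NavMeshPath path) => currentPath = path;

    void Update()
    {
        if (playerCamera == null) return;

        // Keep the companion floating in front of the user.
        transform.position = playerCamera.position + playerCamera.forward * distanceFromCamera;

        if (currentPath == null || currentPath.corners.Length < 2) return;

        // corners[0] is roughly the user's position, corners[1] is the next corner.
        Vector3 direction = currentPath.corners[1] - transform.position;
        if (direction.sqrMagnitude > 0.001f)
            transform.rotation = Quaternion.Slerp(
                transform.rotation, Quaternion.LookRotation(direction), Time.deltaTime * 5f);
    }
}
```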
Only Unity 6? Can previous versions support it?
If you use AR Foundation 5.2, you can use the same technique starting with Unity 2021+
@@firedragongamestudio love it !!!!
Could I have your contact? I'm doing my final school project, so I need some help from you. I want to know how to match the scene from the user's Android camera with an available 3D map to get their location.
Hi, sry no direct contact in terms of technical support for things like this.
@@firedragongamestudio for AR indoor navigation only 😢
@@firedragongamestudio I mean using this approach to get the location for AR indoor navigation 🥲
I mean getting the location for AR indoor navigation from the scene itself, instead of a 2D image or QR code 🥲
@@_NguyenMinhTri-fq1bp You'll need shared spatial anchors for this.
Hello, it's me again 😅. I'm currently struggling to find a way to integrate my DL model (the output from my model is the coordinates of the user's camera in a 3D environment). Could you tell me how to use this output to
initialize the user's location (like the way we use the tracked image or QR code)? Please help me 😢
Hi :) It's mostly an origin/position problem. It depends which coordinates you get from your ML model. Currently the camera is at 0,0,0 when starting, and the scanned marker/QR code is at position x,y,z, depending on where you started the app. The geometry is a child of the marker, so the "real" positions don't really matter, as long as the marker is at its predefined position and the geometry is offset correctly.
For your solution it's crucial to get the correct offset between the origin and the ML model results. As an example: when you start the application, your "Unity origin" is 0,0,0. Your "ML model origin" returns 3,4,5 as the position. I guess you want to use only the data from your ML model. The minimap should use the ML model coordinates, as I assume the ML model is trained with appropriate data for a building/room/floor. For placing digital content (e.g. walls) you'll have to offset the "Unity origin" and the "ML model origin" by their difference.
Hope that makes sense 😅
@@firedragongamestudio Thank you very much. Sorry for asking too much, as I'm a newbie in Unity 😅😅. Could you tell me how I can use my model's output (x, y, z) to initialize the user's position? For example, should I use my model's API in the camera offset script?
@@_NguyenMinhTri-fq1bp No worries :) I think modifying the camera position itself is a bad idea, as it will mess with the current ARCore session. Applying the offset to the digital content makes more sense here. So instead of moving the user to your model's position, move the digital content (walls, etc.) to your ML model's position while considering the offset of your running ARCore session. This way you'll avoid problems when ARCore tries to set the user's position while matching up the tracking.
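A minimal sketch of that idea, with assumed names (environmentRoot, arCamera) and assuming the environment geometry is authored in the same frame as the model output: shift the digital content so the ARCore camera position and the position reported by the DL model line up, instead of touching the camera.

```csharp
using UnityEngine;

// Sketch only (hypothetical component): align the digital content (walls, nav geometry)
// using the camera position reported by the DL model, without moving the AR camera.
public class ModelOriginAligner : MonoBehaviour
{
    public Transform environmentRoot; // parent of all walls, floors, nav geometry
    public Transform arCamera;        // AR Foundation camera driven by the ARCore session

    // Call this once your DL model returns the camera position in "model space".
    public void ApplyModelPosition(Vector3 modelSpaceCameraPosition)
    {
        // The model says the camera is at modelSpaceCameraPosition inside the building.
        // The ARCore session says the camera is at arCamera.position in Unity space.
        // Shift the environment so both agree, leaving the camera untouched.
        // Assumes environmentRoot was authored at the model-space origin.
        Vector3 offset = arCamera.position - modelSpaceCameraPosition;
        environmentRoot.position = offset;
    }
}
```

This only handles translation; if the model's frame is rotated relative to the ARCore session, you'd also have to rotate environmentRoot accordingly.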
What if I want multiple targets?
There are multiple ways to do this. First you'll need to decide where to place the target and add it to the prefab. Next you'll need a way to select between the targets; this is usually done with some kind of dropdown UI. The selection index should tell which target to use for the path calculation. In the OnChanged event of the NewIndoorNav script, we're already selecting all available targets. In the Update method you'll have to change the navigationTargets[0] in the NavMesh.CalculatePath call to the navigation target you selected beforehand, e.g. via the dropdown index, name matching, etc. This changes the target of the path calculation and the line renderer, which enables the use of multiple targets. Hope that helps 🙂
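Here is a sketch of that change with assumed field names; it mirrors the idea, not the actual NewIndoorNav code: hook a dropdown's OnValueChanged to pick the index, then calculate the path to the selected target instead of navigationTargets[0].

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

// Sketch with assumed names - not a copy of the repo's NewIndoorNav script.
public class MultiTargetNavigation : MonoBehaviour
{
    public Transform player;   // e.g. the AR camera / user position
    public LineRenderer line;
    public List<Transform> navigationTargets = new List<Transform>();

    int selectedTargetIndex;
    NavMeshPath path;

    void Start() => path = new NavMeshPath();

    // Hook this up to the Dropdown's OnValueChanged event.
    public void OnTargetDropdownChanged(int index) =>
        selectedTargetIndex = Mathf.Clamp(index, 0, navigationTargets.Count - 1);

    void Update()
    {
        if (navigationTargets.Count == 0) return;

        // Use the selected target instead of always navigationTargets[0].
        Transform target = navigationTargets[selectedTargetIndex];
        NavMesh.CalculatePath(player.position, target.position, NavMesh.AllAreas, path);

        // Feed the corners to the line renderer so the drawn path follows the new target.
        line.positionCount = path.corners.Length;
        line.SetPositions(path.corners);
    }
}
```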
My line is not showing.
same for me
Check the line renderer and the generated path again.
Nice, but what if I run the app from somewhere else?
I didn't find that in this video or the previous one.
You'll usually use another image target for another place and design the environment in Unity for that.
"I want to make my own project. Should I change your code or not?"
@@kathirarul4112 Feel free to change it so it fits your needs. It's open source for that reason too :)
Where'd ya get the Settings folder inside the Assets folder?
...did I miss a step? (;´༎ຶД༎ຶ`)
It's created automatically by Unity when using URP. The render pipeline settings are in this folder.
I have followed each and every step, but in the end I am not able to see the navigation line 😢
Check the line renderer and the generated path again.
@firedragongamestudio I have checked, but nothing worked.
@firedragongamestudio Tomorrow is my presentation; it would be great if you could help me and my team.
@@highlighter7117 The only thing I can think of is that the NavMesh is not baked/properly configured, so the line renderer has nothing to show.
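If someone wants to narrow that down, here is a small diagnostic sketch with assumed names that logs whether the path calculation or the LineRenderer is the problem:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Debug-only sketch (hypothetical component): check why no navigation line shows up.
// Either the path calculation fails (NavMesh not baked / positions off the mesh)
// or the LineRenderer never receives any positions.
public class NavLineDiagnostics : MonoBehaviour
{
    public Transform target;
    public LineRenderer line;

    void Update()
    {
        var path = new NavMeshPath();
        bool found = NavMesh.CalculatePath(transform.position, target.position, NavMesh.AllAreas, path);

        if (!found || path.status != NavMeshPathStatus.PathComplete)
            Debug.LogWarning($"Path problem: found={found}, status={path.status}, corners={path.corners.Length}");
        else
            Debug.Log($"Path OK with {path.corners.Length} corners");

        if (line != null && line.positionCount == 0)
            Debug.LogWarning("LineRenderer has no positions - is it being fed the path corners?");
    }
}
```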
How does this map to the real place? How do you measure distances?
1 unit in Unity is 1m in real life
Do you have an email to contact you? I was wondering if you can help me to create a similar project with GPS location...
Meaning that every time I have to start the application from the same place so that it can track the target correctly? Is there a solution for this?
@@HaithamSmartUse You can just add another image target, delete the current prefab on scan, and spawn a different prefab, adjusted for this specific location. :)
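A rough sketch of that approach (AR Foundation 5.x API, made-up prefab/field names): listen for newly tracked images, look up the environment prefab that belongs to the scanned marker, and swap out whatever was spawned before.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch (hypothetical component): spawn a different environment prefab per image
// target and drop the previous one, so the app can be started from several locations.
public class MultiMarkerEnvironmentSpawner : MonoBehaviour
{
    [System.Serializable]
    public class MarkerEnvironment
    {
        public string imageName;       // must match the name in the reference image library
        public GameObject environment; // environment prefab adjusted for that location
    }

    public ARTrackedImageManager trackedImageManager;
    public List<MarkerEnvironment> environments = new List<MarkerEnvironment>();

    GameObject currentEnvironment;

    void OnEnable() => trackedImageManager.trackedImagesChanged += OnTrackedImagesChanged;
    void OnDisable() => trackedImageManager.trackedImagesChanged -= OnTrackedImagesChanged;

    void OnTrackedImagesChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var trackedImage in args.added)
        {
            var match = environments.Find(e => e.imageName == trackedImage.referenceImage.name);
            if (match == null) continue;

            // Replace whatever environment was spawned from a previously scanned marker.
            if (currentEnvironment != null) Destroy(currentEnvironment);
            currentEnvironment = Instantiate(match.environment, trackedImage.transform);
        }
    }
}
```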