Frankly speaking, this is the best video from you. Clear and informative.
Fantastic and very clear video! Can't wait to experiment with all this ;)
Hey, Phenomenal video from you guys! Love it !!!
Thanks for the thought-provoking introduction. 1:51 I just received a business inquiry asking whether we can handle transparent objects like this.
Cool! Yes, the Featureless Object Scan mode in KIRI Engine can handle transparent objects, but we are still working on improving the texture quality. Right now the texture isn't as great as traditional photogrammetry.
Can you do NeRF using a turntable instead of walking around?
I don't think that's a good idea, since the lighting on the object would change :/
You guys are going far, I can't wait for the next updates
Me too! Technology is advancing fast!
6:13 lol
Does Kiri work with just a corner of the object? Like a 45 or 180 degree scan instead of 360? I will be trying it with some vehicles in real life, but I don't need their full body, just the front and rear parts. It's easier to split it into steps for huge vehicles like trucks.
Thanks in advance! Your app looks awesome based on the videos I saw.
Hey Marco, thanks for the question :) It does (most of the time, lol), although we still recommend making a full 360 scan. I've seen one of our users in our Discord community scan a door panel for his customer's Porsche 911. Although we have Featureless Object Scan, it's still pretty tricky to scan shiny car bodies.
The trick to getting better results is to keep the entire car in frame while you slowly circle around it. Featureless Object Scan doesn't quite support close-up shots (because it uses AI to recognize the object; if only a part of it is visible, it's pretty hard for it to do the job).
And if you can't get a good result for some reason, I kindly ask you to stick with it a bit longer, because we have a huge update coming in a month with a new Featureless Object Scan 2.0 :)
@@KIRI_Engine_App oh nice, thanks! I got the Pro plan, now I just have to make some recordings in bus depots. Based on the early tests it works great for half of the model too!
Just a small suggestion… Sometimes 200 pictures or 2 minutes may not be enough, hehe
But it’s fine for most usage.
@@marcoseliasmep Haha thank you so much for the support :) We are increasing the photo limit to 300 and video to 3 min in the next update haha. Hope it helps a bit
For now I really love it, but I just wonder where the RoomScan feature is. Not in the app anymore?
Thank you so much LCS! RoomScan has been moved into the LiDAR Scan option :D
Does it support the old Samsung LiDAR phones they made too? Or is it only on iOS? @@KIRI_Engine_App
Apparently they never used LiDAR anyway, it was always a ToF sensor, and I already have one on my Huawei P30 Pro
Right, the LiDAR Scan option is currently only available on iPhone and iPad Pro models :( @@Lord_common_sense
I went a bit overboard and got the Pro version, since from what I understood from the app it was required to test the new "NeRF" feature? Hope it's going to work for capturing polished/shiny wood furniture. Congrats on building and integrating all those new features; we see a lot of university and research papers offering tech that is simply not accessible to "normal human beings" :P Or super complex to use.
I can't even get a good 3D scan of my model. I tried it yesterday against a greenscreen (free version, only 70 images) and today without a greenscreen (Pro version, 200 images), but it's even worse than yesterday's. The greenscreen version did NOT use AI masking… and I don't know whether I can remove the background artifacts after it's been processed? And today's version DID use AI masking, but it's such a bad image it looks like it's been chewed up and spit out. That is the same messed-up scan I got when trying Polycam last year.
Heyy! Thank you so much for letting me know about your experience :P Yes, photo scanning can take some practice, but once you get it you'll just get better and better results. Our friend Kevin actually did a super clear tutorial last year, let me know if you still can't get a good result after watching his video: ruclips.net/video/axdPIc6FqQU/видео.html
On my learning journey, I read not to use a turntable because the software needs to see the background changes. This sure looks like a case where a turntable would be perfect. Thoughts?
Hi Billieb! Believe it or not, you are the first user we've talked with who knows this. You are 100% correct: for our Featureless Object Scan, since the object itself won't have enough feature points, we have to use the feature points on the background to register the photos. That's why you can't use a turntable for Featureless Object Scan.
@@KIRI_Engine_App I wonder if you could put up a grid background for it to register with?
@@billieb You don't have to, nowadays the algorithm is good enough to find sufficient feature points from pretty much any background it sees
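For anyone curious what "feature points on the background" means in practice, here is a minimal, hypothetical sketch (not KIRI's actual pipeline) that uses OpenCV's ORB detector to count the keypoints a background photo yields; the file name and the 500-keypoint threshold are made-up illustrations.

```python
# Hypothetical sketch, not KIRI's pipeline: count feature points in a background
# photo with OpenCV's ORB detector. File name and threshold are assumptions.
import cv2

img = cv2.imread("background_frame.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)           # detect up to 2000 keypoints
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"Detected {len(keypoints)} feature points")
if len(keypoints) < 500:                       # arbitrary threshold for this sketch
    print("Background may be too plain; add posters, cloth, or clutter behind the object")
```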
Yeah man, your videos are always really well-thought-out and explained really well, cheers. Also KIRI is pretty boss, it's the best around
Thank you so much for the support :D Let's goooooo
Good video, thanks for the information.
Thanks for the video, it helped me clarify some concepts; I am starting to learn about this topic. I have a question... I would like to take drone shots of buildings and construction, and use platforms to do 3D modeling, to offer this as a service. What method would you recommend for that type of shot?
Awesome, but a shame you can't turn or flip the object, a very important feature to get the full 3D object imo
Thank you for bringing it up! The traditional photogrammetry mode in KIRI Engine does indeed support rotating and flipping the object with AI Object Masking turned on. The reason NeRF, or our NSR, doesn't is that the algorithm needs to calculate camera poses from the video; because the object itself doesn't have enough feature points for pose calculation, we look at the background of the video instead. So if the background of the video doesn't move along with the object, the calculation gets messed up. That's why right now no NeRF algorithm is able to support that :( But hey, technology is advancing so fast every day, I think before long we should be able to solve that!!!
@@KIRI_Engine_App I understand that this is a technical limit and not specific to your app, otherwise great product, love it :)
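To make the pose-calculation point above concrete, here is a hypothetical sketch (again, not KIRI's actual code) of how a relative camera pose can be recovered from feature matches between two consecutive frames with OpenCV; the frame file names and camera intrinsics are assumptions. If the object spins while the background stays put, the background matches tell the solver the camera never moved, which contradicts the object's apparent motion and corrupts the reconstruction.

```python
# Hypothetical sketch (not KIRI's pipeline): estimate the relative camera pose
# between two consecutive frames from matched feature points with OpenCV.
import cv2
import numpy as np

frame1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=3000)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed camera intrinsics for a 1920x1080 frame (illustrative values only)
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Estimated camera rotation between frames:\n", R)
```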
Very well done, Jack.
Great! At 5:17 to 5:20 it looks like a (Jason with machete) statue. I know it is the cameraman/buddy. So how are these scans for 3D printing? I sent a file to KIRI and it came out terrible. I really want to use this to scan heads/busts for 3D resin mini figures and DnD-style miniatures. I think you might be building the library for what is required for NeRF, which is great, but speed, personal data, and controlling things locally from a creative perspective may outweigh the web conversion model. Can I get an engine to convert to NeRF files? Can I be a beta tester? Like the Anycubics. Thx for the vid
I have a one-year subscription. I tried using the featureless option for a shiny car, but it's not working for me
Hi there! Thanks for commenting. When trying to scan a large object like a car with Featureless Scanning, you need to make sure the object itself is completely shown in the frame; you cannot take close-up videos that only capture a portion of the object in some of the frames.
But have you tried our 3DGS though? It's actually better when scanning a car
Man, I would really like to test the option of creating with photos from my phone at least one time. Any chance of that happening? If I don't see it for myself at least once, I'm not going to pay, that's the game.
Hi there! That's very fair. We are planning to host some free trials with a time limit. But hey, if you can email me at jack.w@kiri-innov.com, I'll see what I can do to help you give the Pro version a try
Great video Jack!
Thank you Moonkey!!!
Very cool video👍
Thank you so much! Let us know if you have any questions :D
Haha, thank you so much!!
The featureless option in the KIRI app is not free, you have to upgrade to Pro, so why do you say it is free?
Hello, I'm highly intrigued by your technology for 3D modeling of my clients' objects for advertising purposes. However, I'm wondering if there's a way to test the Featureless Object mode without a subscription plan. I only have photos of my client's object, which is a heat pump, and it predominantly consists of neutral surfaces. I'd like to conduct a preliminary test to assess its viability. The goal is to import the 3D object into Unreal Engine for a cinematic animation featuring the object. Thank you for your help.
Hey Damien! Thanks for being interested :D I'm glad to help you with that. Could you send me an email at jack.w@kiri-innov.com? I'll be able to help you via email
@@KIRI_Engine_App Hey Jack, thank you very much for taking the time to answer, I've just sent you an email with more details. Looking forward to reading your answer ;-) Best.
These videos are so helpful!
Glad you like them :) Feel free to let me know if you have any questions
What would be really cool is a feature to merge the photogrammetry and NeRF models together, so a user captures the information both ways, takes both 3D models, and then has some way to merge them into a better model based on both datasets.
Yes, you are absolutely right! Actually the problem we have at this moment is that the texture quality in our NeRF model isn't as great as the one in photogrammetry. So we are ACTUALLY working on merging the photogrammetry textures onto the NeRF model to make it better!
Very Cool! Good Job!
Thank you so much!!!
I really love the app, but I have been trying to get the bottom of the object I am scanning and it just doesn't seem to work. Please help, do I use photogrammetry or NeRF for it?
🎉🎉 Nice!!! What if there was a way to combine these two methods together?
What is the smallest size the KIRI Featureless Object Scan can handle? I'm trying to scan an art toy which is around 7 cm x 5 cm x 5 cm.
Hey there! We've seen people scanning Warhammer minis, which are about the same size. So it should be okay :)
Can I rotate the object instead of rotating around it myself?
Hey basquera! Thanks for the question :) For the Photo Scan mode in KIRI Engine, you can rotate the object instead of moving around it yourself. We even have a dedicated turntable mode in Photo Scan. But for the other 3D scanning methods in KIRI Engine, you can't use a turntable
Can you combine photogrammetry and NeRF?
This is a very impressive company and app
Awwww thank you so much!!!!!! I have no words. Please let us know if you run into any questions or suggestions at contact@kiri-innov.com, we'll always be there
@@KIRI_Engine_App Yes thank you, I have reached out with some questions.
Hi, does KIRI do stitching of 3D models? Best! PG
Hi PG! If you mean stitching two point clouds into one, unfortunately, we don't have that feature. But the only reason we don't have it is that we really don't need it. In most cases, you need the stitching feature to capture an entire 3D model, bottom included, right? But in the Photo Scan mode (photogrammetry mode) you can freely rotate and flip the object while taking photos, so you won't have any blind spots in the capture. And then, before uploading photos for processing, turn on the Auto-Object Masking toggle; it then uses AI to stitch all the photos from different angles into a bottom-included 3D model. Hope it helps :)
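As an illustration of the kind of masking step described above, here is a minimal sketch that removes the background from each capture photo using the open-source rembg library as a stand-in; KIRI's actual AI Object Masking model is not public, so the library choice and folder names are assumptions.

```python
# Hypothetical sketch: mask out the background of each capture photo before
# reconstruction, using rembg as a stand-in for the AI Object Masking above.
from pathlib import Path
from PIL import Image
from rembg import remove

in_dir, out_dir = Path("captures"), Path("masked")   # assumed folder names
out_dir.mkdir(exist_ok=True)

for photo in sorted(in_dir.glob("*.jpg")):
    img = Image.open(photo)
    cutout = remove(img)                        # background pixels become transparent
    cutout.save(out_dir / f"{photo.stem}.png")   # PNG keeps the alpha channel
```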
So, does Kiri Engine *only* accept video for NeRF processing?
You are right, NeRF only accepts video for now. The reason is that we train NeRF on sequential photo sets, and we found that using video as the input really helps us improve the photo set quality.
@@KIRI_Engine_App any plan on being able to feed it static images in future versions? I work with a lot of featureless objects, but doing video capture just doesn't work with our project and preservation workflow.
Yes, we do have plans to support photo input as well! But keep in mind that we do need you to take a lot of photos (ideally around 200) for NeRF with photos. Do you think your project is okay with that? @@ethanwatrall4797
@@KIRI_Engine_App yes, that's pretty standard for our workflow. Depending on the object (I'm an archaeologist), we're taking upwards of 250 or 300 images.
Perfect! The development is already in our pipeline. It might still take a while before it sees the light of day, but it's happening @@ethanwatrall4797
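On the video-input discussion above, here is a rough sketch of how a capture video could be turned into an evenly spaced, sequential photo set (around 200 frames, matching the count mentioned) with OpenCV; the file names and frame target are assumptions, not KIRI's actual preprocessing.

```python
# Hypothetical sketch: sample ~200 evenly spaced frames from a capture video.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("capture.mp4")           # assumed input file
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(total // 200, 1)                     # keep roughly every Nth frame

saved = 0
for i in range(total):
    ok, frame = cap.read()
    if not ok:
        break
    if i % step == 0:
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
cap.release()
print(f"Saved {saved} of {total} frames")
```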
I always thought NeRFs don't have a mesh structure?
You are right. Like I explained in the video, this tech isn't exactly NeRF but a NeRF variant called NeuS, or in our terms NSR; the cool thing about NSR is that it can generate a mesh :)
@@KIRI_Engine_App oh alright, I must have missed that part. Thanks!
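To illustrate why an SDF-based method like NeuS/NSR can hand back a mesh while a vanilla NeRF cannot directly, here is a toy sketch: sample a signed distance field on a grid (a plain sphere stands in for a trained network) and extract its zero level set with marching cubes from scikit-image. This is only a conceptual illustration, not KIRI's implementation.

```python
# Toy sketch: extract a mesh from a signed distance field with marching cubes.
import numpy as np
from skimage import measure

# Signed distance to a unit sphere, sampled on a 64^3 grid over [-1.5, 1.5]^3
xs = np.linspace(-1.5, 1.5, 64)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 1.0

# The zero level set of the SDF is the surface; marching cubes turns it into triangles
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(f"Mesh with {len(verts)} vertices and {len(faces)} faces")
```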
I downloaded the KIRI app, but it seems that you need to pay for the Pro version in order to use NeRF. It would be nice to have a limited free version. But your videos are informative and balanced.
Thank you so much for your understanding and your supportive comment dkt :) Yeah, we had to make NeRF a Pro feature because the server cost is substantially high compared to photogrammetry. But we are planning for a trial period some time this year so more users get to have a taste of our powerful algorithms
Just wondering how your visualization compares to Luma AI? Differences?
@@dkt4728 Good question! Actually, I did a video just to explain the differences between 3D scanner apps like Luma, check it out: ruclips.net/video/9dyAj9gXIms/видео.htmlsi=AYy1sqruMDoNKqxP
honest as can be
Yeah, I understand there's quite a lot of processing. I think a Pro feature could make a lot of sense, as long as you are supplying editing tools and ideally a way to embed and visualize the data (e.g. on a website). I think you should have a free trial period, and hopefully not one that requires someone to pre-pay; that, for me, is a turn-off. I would also be OK with a pay-per-use model.
Great feature! Ideally I would like to try it, maybe 3-5 scans. If they are somewhat OK I will happily pay 40-60 €/year, but it is a lot of money just to try out :(
I had to pay to use the featureless function. Not free
I'm IN !!!
Love ya, but calling something out here. When you say that NeRF scans take slightly longer to process, that's a bit disingenuous. Considerably longer. Three times longer. Some other wording would have been a little more honest. Keep up the innovations. I look forward to seeing what comes next.
Thank you so much for the feedback, and I completely agree. I'll be more honest with the wording in the future haha.