I really wish they'd focused more on the technical aspects of what went into the final product... We're looking, at the very least, at two projectors-- one for the panels, one for the floor. More likely, there are multiple projectors being blended for the moving panels... There are also lighting cues, and ways that the lighting in the room and on the subject mirrors what is happening in the projections-- what kind of lighting was used? Did they use standard DMX control in conjunction with their automated motor control software? How does that all work together? Who designed the lighting aspects of it? So many technical aspects that could've been explored, but they focused more on the artistic "feeling" of the piece...
I agree, I love all that technical stuff; but as a graphic designer and aspiring motion graphics artist, I know that without the art, this would not exist, nor be able to exist. The feeling is, in actuality, everything embodied into one, and therefore, in my opinion, the most important part. I still love technical stuff though, don't get me wrong. Before I got interested in graphic design and motion graphics, I wanted to be a mechanical engineer going into automotive automation :), which is probably why I love this project.
I totally agree with you. I clicked on this video hoping to find that technical rider. It would be great to find a sheet with that info that could help us learn what we'd need to do an installation like this. Great video though.
They only used real-life projection for the text and things like that, not the canvases themselves. Here's how they made the video: the robots are holding plain white canvases and moving them around in sync with the animation. As for rendering it out, they motion-tracked where the camera is. In post, they used the information about where the canvases and camera were to render a video where each canvas acted as a sort of window, with things moving relative to where the camera is. Everything is flawless, except at ruclips.net/video/lX6JcybgDFo/видео.html where the render failed to follow the canvas and is trying to catch up.
> Ideally, we would have everyone come and experience it in the room. [cut] The reason we decided to make a film was just to get it out there and that the widest audience see Box.

I assumed that the 3D images were only possible because the observer's point of view is aligned with the projection. It seems like the only possible way, since the canvases are flat surfaces. The direct quote above from the "Behind the Scenes" video, however, suggests that other POVs would have conveyed the same kind of experience. So, if what the quote suggests were correct (and I believe it isn't, making the statement in the video *very* questionable -- don't get me wrong, it's awesome what you guys did), I would have expected the "Behind the Scenes" video to deal with the question of 3D versus POV a little.
+God, creator of the universe Yeah, I was hoping they would shed some more light on that. I think there are a couple of ways it could be achieved, with certain sacrifices. The ideal way would be to show it to one person at a time, wearing a motion tracker. The animation software could then change the virtual camera angle in real time depending on the viewer's position. This would be way more impressive than the video, because the voluntary movements of your head would be reflected in the 3D shapes, enhancing the illusion. But I'm thinking that lag could be an issue, and it's only one person at a time. Maybe you could add a second person using 3D-style shuttered/polarised glasses, at a cost of processing power and resources. You could also animate it so it worked for a more static viewing angle, then have the audience in a thin line perpendicular to the stage, at the same angle. However, you would lose some of the illusion, and it wouldn't be as effective.
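The lag worry raised above can be put in rough numbers with one line of arithmetic (a back-of-envelope sketch; the 0.5 m/s head speed and 50 ms latency are illustrative assumptions, not measurements from this project):

```python
def lag_drift_mm(head_speed_m_s, latency_ms):
    """Distance (mm) the head travels during one tracker+render latency
    window. The image is drawn for a viewpoint this far behind the eye,
    which is what would degrade a head-tracked version of the illusion."""
    return head_speed_m_s * (latency_ms / 1000.0) * 1000.0

# A casual head movement of 0.5 m/s with 50 ms of end-to-end latency
# puts the rendered viewpoint 25 mm behind the real eye position.
```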
The perceived depth is created by mapping onto a moving projection surface; the 3D shape changes as the screen moves. It's not the viewer that changes the depth perception -- as a viewer, you are still looking at a 2D surface.
@Marino Canal: I'm not sure if you've understood what I mean, or if I understand what you mean. To clarify: if you and I were both standing in that room where the video's action takes place, except I stood a few meters to your left, would we both see correct 3D on the surface? If not, then please re-read my comment and/or clarify yours.
The effect is at its best when the spectator stands in the exact location for which the projection is designed. However, if you redesign the whole thing to be viewed from a larger distance, this would have two effects: 1/ the 3D effect would be less dramatic, because the further away you get, the closer it gets to being just a 3D animation projected on a 2D surface, which everyone has been familiar with since the start of cinema; 2/ if you increase the viewing distance, you can also allow several people to be close to the point of view. The person at the perfect spot will have the best experience, and the further you get from the point of view, the shittier it gets. Our brain is pretty good at correcting small perspective issues; for example, when you look at your PC monitor at an angle, you still understand without problem that it is a rectangle under perspective projection. So I think they could take this into account to create an animation that is viewable in a larger cone and still looks OK.
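The "larger viewing cone" argument above is easy to sanity-check with similar triangles (a toy flat-canvas model; all distances are made-up examples, not this installation's geometry):

```python
def canvas_error_m(viewer_offset_m, design_distance_m, virtual_depth_m):
    """On-canvas placement error, in metres, for a viewer standing
    `viewer_offset_m` to the side of the designed eye point.

    The image is painted for an eye `design_distance_m` straight in front
    of a flat canvas, faking a point `virtual_depth_m` behind the canvas.
    Error = offset * depth / (distance + depth): pushing the designed
    viewpoint further back shrinks the error for everyone off-axis.
    """
    s, D, d = viewer_offset_m, design_distance_m, virtual_depth_m
    return s * d / (D + d)

# 1 m off-axis, designed for 4 m away, fake depth of 1 m: 0.2 m of error.
# Redesign for 19 m away and the same offset gives only 0.05 m.
```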
4:50 you got your wish, dude. I love the process ILM has been using with LED volumes and Unreal Engine -- so, so cool. Amazing what they simply capture in camera, on set, now. It's like it's come full circle.
I honestly thought it was some high-res LED panels mounted to the robots instead of projection mapping. It's amazing how you managed to pull it off in perfect sync.
I think this is simultaneously less impressive and more impressive than they make it out to be. The wording of some answers suggests this could have been performed before a live audience, when that isn't the case. The projection has to be precisely calibrated to the position of the viewer/camera to maintain the integrity of the illusion. Taking the end result by itself, this exact product could have been achieved more simply "in post", since the live performance is simply a means to an end. However, taken as a demonstration of the technology, it's even more impressive than presented, since the projected "video" has to be (relatively) precisely calibrated to the position of the camera robot. So we've got at least three robots and the output of a (stationary?) projector all synchronized. And yes, with all that ranting out of the way, the technical aspect of projection onto moving surfaces is still the highlight.
The path of the camera was tracked AFTER the physical installation was in place, not as it was being constructed. Sure, if you were in the room and viewed it at an extreme angle, it would distort the panels enough to destroy the illusion, but you wouldn't need your eyes to lock into the exact position of the camera to have an experience comparable to what we see in the final video. This is indeed as impressive as it's made out to be ; )
@@SamGriffinGuitar That's what I thought when reading his comment...WTF is he talking about? You have a relatively wide (ish) area where a live audience could be. Why does that guy assume a projected image is only viewable from an extremely narrow angle directly in front? Maybe he's suggesting it couldn't be viewed by an audience of several thousand?
@@Bhatt_Hole What he is saying is that the perspectives of the objects (the rendering of cubes and spheres) don't hold up if you view from a slightly different angle. You immediately start to see that you are looking at a 2d canvas with a projection
@@rowinvanderschee8043 I'm pretty sure he's just trying to sound highbrow and sophisticated. If the creators really wanted to show it in front of a live audience, they could've just upscaled the whole thing and changed it so that the target POV wasn't moving (which would've cost quite a bit of money and is probably why they didn't). From what I'm reading, he's downplaying the presentation due to its budget limitations (it can't be shown to a live audience) and praising it for its demonstration of technological precision, when, personally, I think it should be the other way around.
@@rowinvanderschee8043 It doesn't even need to be a different viewing angle. If you were watching it in person, the simple fact that you have two functioning eyes (assuming you do) would tip you off to the fact that it's a flat surface. It looks 3D in video because it's exactly as 3D as everything else in video is--which is to say, all the signals we use to determine depth other than binocular disparity are telling us it's 3D, but the camera is a single lens so binocular disparity can't come into play.
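The binocular point above is quantifiable: the eyes' convergence angle on the real canvas differs measurably from what the fake depth would require (a rough geometric sketch; the 65 mm eye spacing and the example distances are generic assumptions):

```python
import math

def vergence_deg(distance_m, ipd_m=0.065):
    """Angle (degrees) between the two eyes' lines of sight when fixating
    a point `distance_m` away, given interpupillary distance `ipd_m`."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

# Canvas 4 m away vs. a virtual point painted as if 5 m away:
# ~0.93 deg vs ~0.74 deg of convergence. Two eyes report the flat
# canvas; a single camera lens never records this cue.
```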
So minimal, being shot in monochrome -- any colour would have made it less pure. Box is probably the coolest thing I have seen on YouTube. Excellent work, just excellent.
I was also really curious how the depth worked before this video (because I thought the camera was handheld). The short answer to your question is no, the illusion of depth works at only the precise location of the camera at any time. Because the camera was also tracked in 3D space for Maya, the projections were rendered based on that changing camera location.
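Per frame, that render step boils down to re-projecting every virtual point onto the canvas plane through the current camera position (a minimal sketch with the canvas idealized as the plane z = 0; the coordinates are illustrative, not the production pipeline):

```python
def project_to_canvas(point, eye, canvas_z=0.0):
    """Where a virtual 3D `point` must be drawn on the canvas plane
    (z = canvas_z) so that it lines up correctly when seen from `eye`.

    Intersects the eye->point ray with the plane. Re-running this for
    every frame's tracked camera position is what keeps the depth
    illusion locked to the moving camera -- and to nothing else.
    """
    px, py, pz = point
    ex, ey, ez = eye
    t = (canvas_z - ez) / (pz - ez)  # ray parameter at the canvas plane
    return (ex + t * (px - ex), ey + t * (py - ey))

# A point 1 m "behind" the canvas, camera 4 m in front and centered:
# drawn dead center. Slide the camera 1 m sideways and the drawn
# position must shift to follow it.
```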
Your assumption is exactly it. They had to map out the movement first, then made their graphics follow those two white canvases. Thus the movements had to be measured and duplicated exactly (by robots, no less) so the projection wouldn't miss the canvases. There was always ambient lighting, so you don't see the "black" light border, as it got washed out / diffused.
The illusion that they want for the film doesn't work live, but the actual projection mapping onto moving surfaces does work. And with projection mapping, you have one timeline with one screen. With screens, you are working with multiple timelines and multiple screens, and it becomes a lot harder and more time-consuming.
Can't believe I've just stumbled on this video 5 years later. But this is still, even today, amazing! I was so curious after watching the original video about how the depth was put into the projections. Well I found out! How impressive! I haven't seen anything like it!
Yes. Mapping is essentially that: masking out where you want the video shown. As far as the camera work goes, I gathered from this video that they had a guy make camera moves, then took that info and entered it into Maya. That info was passed on to the robot arm with the camera so it would make the same moves, and they animated the video accordingly for those perspectives. I'm glad someone else understands how impressive the technology is and isn't focusing on just the end product.
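The masking idea mentioned above can be shown with a toy grayscale frame (a pure-Python stand-in; real mapping software does this per frame from the robots' reported canvas pose):

```python
def mask_frame(frame, on_canvas):
    """Black out every pixel that would miss the canvas.

    `frame` is a list of rows of grayscale values; `on_canvas(x, y)`
    says whether pixel (x, y) lands on the canvas this frame.
    Projected black is effectively no light, so only the canvas glows.
    """
    return [
        [v if on_canvas(x, y) else 0 for x, v in enumerate(row)]
        for y, row in enumerate(frame)
    ]

# 4x4 all-white frame, canvas covering the left half this frame:
frame = [[255] * 4 for _ in range(4)]
masked = mask_frame(frame, lambda x, y: x < 2)
# every row is now [255, 255, 0, 0]
```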
I think it was just one fixed DLP projector. It's far enough from the screens that the small change in distance from the source doesn't blur the images. If you notice, the entire area in which the panels could move was much smaller than a theater screen. The actual rendering process for the CGI takes into account what kind of distortion needs to be applied based on where the projector is relative to the screen and the viewer.
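The distortion correction described above has two halves: the renderer decides where a pixel belongs ON the canvas (for the viewer), then an inverse mapping from the projector's own position decides where that pixel goes in the projector's frame. The second half, reduced to one dimension, looks like this (a simplified illustration; the positions are hypothetical):

```python
def projector_ray_slope(canvas_x, proj_x, proj_dist):
    """Slope (lateral offset per unit of throw) of the ray a projector
    must emit to light the canvas point `canvas_x`, given the projector
    sits at lateral position `proj_x`, `proj_dist` in front of the canvas.

    An off-axis projector needs a skewed (pre-distorted) frame so the
    image lands undistorted on the canvas; on-axis needs none.
    """
    return (canvas_x - proj_x) / proj_dist

# A projector 5 m back and 2 m off to the side must aim at slope -0.4
# to hit the canvas centre (x = 0); an on-axis projector aims straight.
```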
The projections are like those 3D sidewalk and street chalk drawings. There's one exact POV where it all looks correct. From anywhere else it's all distorted. At best for a live show this technique would work for a few people sitting real close together.
I think the depth effect is just there to prove the technology's precision. Being able to project onto moving surfaces alone, regardless of said effect, is still a massive innovation and will be valuable to theater production.
This is a real-life experience, idiot; it's not made for RUclips... As he said, the idea of making a film of it came later, just to reach the widest audience they could.
@@mihaly_creative Well, it was all made "live", but it would only "work" from the point of view of the camera, which is itself motion-controlled along with the canvas robots. So, in a way, it was ONLY made for YouTube!
I always feel bad for the people standing in the middle of it, looking at some broken perspective and pretending it looks amazing. Example at 2:04: he's looking at some deformed 3D extruded shapes that don't really make sense from his point of view, and he looks amazed for the camera only...
Projection mapping is used like crazy in a lot of live concerts. Just watch the AMAs and you'll see a lot of it. However, you don't see it on moving surfaces, especially with this much movement. I believe it's this technology they want to bring to a bigger audience, in real life. As cool as the video actually is, it's just what they came up with to test and show the technology.
Because he wanted to experiment with projection mapping, not fixed screens playing video. There is no need for any of it -- the robots, the projectors, etc. But it is an art project, and there is a concept and a will to experiment. That's what matters, and they were successful.
They're using Maxon Cinema 4D to create the actual animation, which is keyframed to move around the stage. Then they're sending the keyframe data to Autodesk Maya, and the robots are interpreting how to move the canvases to the exact point. From the looks of the lens on the projectors, they're projecting 4K covering the entire wall. They then play back the rendered animation as the robots move. Finally, as said in the video, they move the camera around with a third robot, and that's how it's done.
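The keyframe handoff described above can be sketched as plain interpolation: animation keys become a dense path a robot controller can sample (a single-channel toy; real exports carry full 6-DOF poses and richer curves than linear segments):

```python
def sample_keyframes(keys, t):
    """Linearly interpolate a keyframed channel at time `t`.

    `keys` is a list of (time, value) pairs, e.g. one axis of a canvas
    position exported from the animation package; the robot controller
    samples it densely to drive the arm in sync with the rendered video.
    """
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]           # hold the first key before the range
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    return keys[-1][1]              # hold the last key after the range

# Keys: at t=0 the canvas is at 0.0 m, at t=2 it is at 1.0 m.
# Sampling at t=1 gives 0.5 m, halfway along the move.
```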
It's called tracking projection. It projects only onto a specific canvas using infrared points and follows the moving object. This was done by Johnny Lee on YouTube with a Wiimote. Check out his page on "Foldable Displays".
Looks like front projection. Stunning video. The BTS lends a little more insight into how they did it, but I'm still fascinated by how the projectors and images moved. Were the projectors also on robots?
Should just be one projector, but the video it projects is mapped so there is only imagery in the places where the canvases are. A lot of live concerts use this to project onto multiple screens and let them interact with each other. But those screens don't move.
+Kead Davidson The soundtrack is by the synthband SOVIET and can be found here: sovietbot.bandcamp.com/album/monomyth. I have no idea why they aren't credited. Hope this helps!
I think he means that if you were actually to walk around the room and watch the video, the depth perception would not line up correctly. You would only see a flat image. The animation was rendered with the tracked camera, so it would look convincing.
Although the whole "no one has ever done this before" gets on my nerves a bit too... but maybe they are correct. This is intended to work for "real eyes". Of course there is CG (computer graphics), but not in the video we are watching here -- just in the video clips being projection-mapped onto the surfaces that are being moved by the robots.
Thanks a lot for clearing things up; at the least it's nice to hear your thoughts. One last thing: they said they wanted to translate the experience of seeing this project in real life to film, so they could reach a bigger audience. But how would you be able to do this standing in the room and walking around? The perspective wouldn't be correct.
Actually, it is groundbreaking. Don't think about it from the aspect of that video; think about it from the aspect of actually being able to do this live on moving canvases. You don't have to create something with perspective like they did -- they just chose that because they were creating a film. I guess most people don't know that projection mapping is used heavily in a lot of live concerts.
Great work, and the integration with graphics is slick. And it's nice to see we have been vindicated... we have been using conventional robotics for motion control for many years now.
Thanks for the reply. So the mapping is like a mask, right? A filter that partially shows the video. I still can't wrap my head around how they did the hollow cube thing while moving the camera. That means they must have rendered the video to be seen only from a specific moving point of view -- the camera's. And the robot arms and beamer mapping must be kept in super-precise sync, which they are. It's so impressive!
Kind of disappointed that the BTS didn't answer the big question I had: where were the projectors, and were they fixed, or did they motion-track as well?
I think they have two more robots on the "camera" side with two projectors. That way, someone can program them to follow the canvases with the projectors perfectly. That's my guess.
It has been done before! Just not at this level of awesome! coolux Pandoras Box Media Server and Widget Designer have the ability to work with the ABB robots with real-time projection mapping and real-time tracking -- not pre-rendered. The projection mapping and content here were also very good. The exact same effect could have been visually achieved with LCD or LED, which would no doubt have been less impressive but more cost-effective. As far as projectors go, it's: 2x blend on wall, 2x blend on floor.
I have to say, from watching the Box video, I was doubting the giant robots were real. Given that the main polygon objects were drawn on with CG, it would be entirely possible for the background objects to be fake, too -- "Who would have giant robots with mount arms available for an art project?"
Question - Is the position of the viewer important? I.e. will, for example, the 3D box only look like a 3D box if you are viewing it from a specific angle?
So there are, like, two beamers that project the 3D art onto the canvases? And those beamers should follow the point of view of the camera filming it, right, in order to get the perspective correct? Can anyone clarify?
So humble. (Starving artists try to figure out how to match this experience and somehow acquire two gigantic, high-tech, high-precision industrial factory robot arms -- plus a studio to, you know, install the frickin' arms that are dancing and moving down everyone's throats -- on a budget.)
How can the canvases appear blacker than the background at some points when they are white? Surely light from other parts of the room would reflect onto them?
This is so cool. So in the final product you use both CG-generated robots and the real ones? I might come across as really thick here, but if we don't ask questions, we never learn anything, right? I don't get why you needed CG ones at all. I get that the CG canvases and the final outline of the robots at the very end of the film had to be done with CG, but the other parts?
One obvious application for tech like this would be a ride at an amusement park. Stick a chair on the camera robot and strap people into it. I'd have a go for sure.
I work as a screenwriter for media, and I have always wanted to use your approach on a project I would be happy to tell you about. I am eager to introduce myself and my idea to you, as I think we would be a great fit. Best regards,
They said that they worked for months on this, so what was the budget, and who was paying for it? I guess there were at least five people working on this, and they probably don't work for pennies :)
I wondered the same thing. I am sure they went to RUclips to go viral and get some revenue to fund the artistic portion of this project. Robots doing things by day for corporations and working at night for artists. Most companies will fund art studies and experiments for exposure. The best advertising they can get is to have a couple of guys design this and go viral on RUclips -- the best advertising in the world, because the product speaks for itself.
That's exactly how it's done. Projection is just putting light on something, and if you project black, you kinda just... don't :D Of course you need good projectors too :)
IIRC, aren't they using this kind of projection for faces on animatronics in some of their more recent show rides? I seem to recall a mention of projection being used on the Frozen one.
This project has really caught my attention. I have a cycling client for whom I would like to create something similar, but I'm interested to know more about the programming time and costs. Is there a contact @Box I can talk to, please?
It's an annotation; you can easily turn that off. At least it isn't something that covers half the screen with someone telling you to "please subscribe" for 10 seconds.
BTSs are intended for a non-technical audience, but I agree, it missed too many crucial elements.
Only assuming, but perhaps a lot of those processes were proprietary.
why am I only seeing this now five years later?? This is incredible!
Superb. I now believe in magic.. the magic of film and design can be brought together. Simply wonderful.
agreed. such a creative 'groundbreaking' art form!
I still come back to this masterpiece
GMUNK you are no graphic designer...You are an Artist!
The projections are like those 3D sidewalk and street chalk drawings. There's one exact POV where it all looks correct. From anywhere else it's all distorted. At best for a live show this technique would work for a few people sitting real close together.
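The chalk-art analogy is exact: the content is anamorphic. A small sketch (hypothetical coordinates) of the underlying ray-plane intersection shows why there is only one correct viewpoint — the drawing position depends on the eye position, so every other viewer sees a skewed image:

```python
def anamorphic_point(virtual_pt, eye, canvas_z):
    """Where on the flat canvas (the plane z = canvas_z) must a
    virtual 3D point be drawn so that, seen from `eye`, it lines up
    with where the point would appear to float in space? Intersect
    the eye->point ray with the canvas plane."""
    ex, ey, ez = eye
    vx, vy, vz = virtual_pt
    t = (canvas_z - ez) / (vz - ez)   # ray parameter at the canvas
    return (ex + t * (vx - ex), ey + t * (vy - ey))
```

Evaluate it for two different eye positions and you get two different required drawing spots for the same virtual point — which is exactly why the live audience standing off-axis sees "deformed" shapes while the motion-controlled camera sees perfect depth.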
I think the depth effect is just to prove the technology's precision. Being able to project on moving surfaces only regardless of said effect is still a massive innovation and will be valuable to theater production.
Who's watching this in 2020? These guys are pioneers in Robotic art!
For those asking, the first song in this video is Fly Away by Mokhov. Not sure about the other music.
This should be in a Museum. Magic.
This could revolutionize filmmaking, and more importantly THEATER!
This would be a great experience if a person could sit in place of the camera, like a solo ride. Cool project, very inspiring
would have loved to see a more "nerdy," in-depth behind the scenes. But well done in any case! Even nearly 10 years later it's spectacular!
don't understand how people can dislike this amazing piece of work
that's a lot of work for something that looked like it was all CGI...
watch the BTS, they said "live-action" CGI not pure CGI
This is a real-life experience, idiot; it's not made for RUclips... As he said, the idea of making a film of it came later, just to reach the widest audience they could.
@@mihaly_creative Well, it was all made "live" but would only "work" from the point of view of the camera, which itself is motion-controlled along with the canvas robots. So, in a way it was ONLY made for youtube!
I always feel bad for the people standing in the middle of it, looking at some broken perspective, and pretending it looks amazing.
Example at: 2:04
He's looking at some deformed 3d extruded shapes that don't really make sense from his point of view, and he looks amazed for the camera only........
Happy we helped make this happen at PhaseSpace
Projection mapping is used like crazy in a lot of live concerts. Just watch the AMAs and you'll see a lot of it. However, you don't see it on moving surfaces, especially with this much movement. I believe it's the technology they want to bring to a bigger audience and into real life.
As cool as the video actually is, it's just what they came up with to test and show off this technology.
The X-ray view of the robots is amazing
This project totally blew my mind! Such insight! This, this is why art should always be supported!
i think most people at first glance have no idea what is going on here so thank you for this behind the scenes :)
Because he wanted to experiment with projection mapping, not fixed screens playing video. There is no need for any of it, robots, the projectors, etc. But it is an art project and there is a concept and a will to experiment. That's what matters and they were successful.
Everything is robotically controlled and pre-choreographed. Would be cool to walk around it and see how weird it looks. Great stuff!
They're using Maxon Cinema 4D to create the actual animation. This is keyframed to move around the stage. Then they're sending the keyframe data to Autodesk Maya, and the robots interpret how to move the canvases to the exact point. From the looks of the lenses on the projectors, they're projecting 4K covering the entire wall. They then play back the rendered animation as the robots move. Finally, as said in the video, they move the camera around with a third robot, and that's how it's done.
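One piece of that hand-off can be sketched: animation keyframes have to be resampled into the fixed-rate waypoint stream a motion-control system consumes, so the robots, camera, and rendered video all share one timeline. A toy version with linear interpolation (the real pipeline would preserve the animation curves' easing):

```python
def resample_keyframes(keys, rate_hz, duration):
    """keys: list of (time_sec, position) keyframes exported from the
    animation package. Returns evenly spaced (time, position)
    waypoints at rate_hz for the motion-control system to execute.
    Linear interpolation only, as an illustration."""
    keys = sorted(keys)
    waypoints = []
    for i in range(int(duration * rate_hz) + 1):
        t = i / rate_hz
        # find the pair of keyframes bracketing this sample time
        for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                waypoints.append((t, p0 + a * (p1 - p0)))
                break
    return waypoints
```

For instance, two keyframes at 0 s and 2 s resampled at 1 Hz produce three waypoints, with the robot halfway along at the 1 s mark.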
So this behind the scenes video was published Oct 1 2013, but at 6:05 the VICE copyright is 2015.
The work is absolutely stunning
I saw "Box" some years ago, I always thought it was large TV screens with movies playing on them.
when I saw Box it blew me away... great job guys!
It's called tracking projection. It projects only onto a specific canvas using infrared points and follows the moving object. This was done by Johnny Lee on YouTube with a Wiimote. Check out his page on "Foldable Displays".
No, the projections are specifically designed to give the illusion of depth at the predetermined camera angle.
Every time I see this video, a tear comes to my eye. For some reason I cried the first time I saw it.
Looks like a front projection. Stunning video. The BTS lends a little more to how they did it, but I'm still fascinated by how the projectors and images moved. Were the projectors also on robots?
Just as cool as the presentation! So, the projection is coming via rear projection through the robots holding the panels?
Should just be one projector, but the video it projects is mapped out to only have video in places where the canvases are. A lot of live concerts use this to project onto multiple screens and let them interact with each other. But they don't move.
I want the soundtrack PLEASE!!!
+Kead Davidson The soundtrack is by the synthband SOVIET and can be found here: sovietbot.bandcamp.com/album/monomyth. I have no idea why they aren't credited. Hope this helps!
Gigibyte Thank you for sharing this with me.
+Gigibyte Without looking I found the answer to my question about that song. Thank you so much!
I think he means that if you were actually to walk around the room and watch the video, the depth perception would not line up correctly. You would only see a flat image. The animation was rendered with the tracked camera, so it would look convincing.
although the whole "no one has ever done this before" gets on my nerves a bit too... but maybe they are correct. This is intended to work for "real eyes". Of course there is CG (computer graphics), but not in the video we are watching here, just in the video clips being projection-mapped onto the surfaces that are being moved by the robots.
We want to see how you guys projected the light with precision on the board
Thanks a lot for clearing things up, or at least nice to hear your thoughts.
One last thing. They said they wanted to translate the experience of seeing this project in real life to film, so they could reach a bigger audience. But how would that work if you were standing in the room and walking around? The perspective wouldn't be correct.
Actually it is groundbreaking. Don't think about it from the aspect of that video; think about it from the aspect of actually being able to project onto moving canvases live. You don't have to create something with perspective like they did. They just chose that because they were making a film.
I guess most people don't know that projection mapping is used heavily in a lot of live concerts.
Beautiful project but what was the final budget? I mean, one project like this is really very expensive especially the robotics.
that's one of the best film projects I've ever seen!
Great work, and the integration with graphics is slick. And it's nice to see we have been vindicated... we have been using conventional robotics for motion control for many years now.
How long did the project take?
Thanks for the reply. So the mapping is like a mask right? A filter that partially shows video.
And I can't wrap my head around how they did the hollow cube thing while moving the camera. That means they should have edited the video to only be seen from a specific moving point of view which is the camera.
And the robot arms and beamer mapping must be kept super precisely in sync, which they are. It's so impressive!
And from this project, they were signed on to be the major FX house for Cuaron's Gravity.
Great work guys. You are really pushing the tools.
Bravo!!! out of words. just perfect!
This masterpiece is asking for a sequel, obviously. Don't forget the other effects of magic. :D
Kind of disappointed that the BTS didn't answer the big question I had: where were the projectors, and were they fixed or did they motion-track as well?
Simply cool, unique, and awesome!
wow... amazing. It makes me think about action, space, performance, and visuals.
I want to try too, Inspired.
no, it is projection mapped.
I think they have two more robots from the "camera" side with two projectors. That way, someone can program it to follow the canvases with the projectors perfectly. That's my guess.
Bot&Dolly would. It's what they do full time.
So interesting... everyone needs friends this smart/fun/creative
The camera moves were not hand-programmed. They said they had someone walk the camera around, then fed that route into the computer and the machine which held the camera.
it has been done before! just not at this level of awesome! coolux Pandoras Box Media Server and Widget Designer have the ability to work with the ABB robots, with real-time projection mapping and real-time tracking. Not pre-rendered.
the projection mapping and content were also very good. The exact same effect could have been visually achieved with LCD or LED, but it would no doubt have been less impressive, though more cost-effective.
as far as projectors go, it's:
2x blend on wall
2x blend on floor
I have to say, from watching the Box video, I was doubting the giant robots were real. Given that the main polygon objects were drawn on with CG, it would be entirely possible for the background objects to be fake, too -- "Who would have giant robots with mount arms available for an art project?"
Not rear-projected, as the robotic arms would get in the way. It's projected from the front.
I'm guessing the projectors had to move to stay parallel to the canvasses, otherwise the projection would not be sharp.
Question - Is the position of the viewer important? ie will for example the 3d box only look like a 3d box if you are viewing it from a specific angle?
So there are like two beamers that project the 3D art onto the canvases? And those beamers should follow the camera's point of view, in order to get the perspective correct? Can anyone clarify?
great job! Amazing what technology can do nowadays. That is just amazing, love it. I'd love to see you guys do another one. How did you get your idea for this?
Did I get it right? You used Maya for the graphics and simulations, and also to control the robots?
What an amazing project. I've never seen anything as good. It's a credit to you and everybody involved. Keep up the amazing work!
I think the projectors are made to move the exact same way the arms that move the boxes move.
Where are the projectors? Are there two projectors? I want to do this at home.
So humble (starving artists try to figure out how to match this experience and somehow acquire 2 gigantic high-tech high precision industrial factory robotic arms and a studio to, you know, install the frickin arms that are dancing and moving down everyone's throat, on a budget.)
How can the canvases appear more black than the background at some points when they are white. Surely light from other parts of the room would reflect onto them??
This is so cool. So in the final product you use both CG generated robots and the real ones?
I might come across as really thick here, but if we don't ask questions, we never learn anything, right?
I don't get why you needed CG ones at all?
I get the CG canvases and the final outline of the robots at the very end of the film had to be done with CG, but the other parts?
What if BD Move can be used to animate 3d prints rigged with motors? Custom animatronics and stop-motion sets would be easier to animate, I'd assume.
One obvious application for tech like this would be a ride at an amusement park. Stick a chair on the camera robot and strap people into it. I'd have a go for sure.
no, it's only perfect from the view of the camera. It probably looks pretty cool live even so.
Incredible work. My congratulations and greetings from Chile.
Yes.
I work as a screen-writer for media and I have always wanted to work with your approach regarding a project I will be happy to let you know about. I am eager to introduce myself and my idea to you, as I think we would be a great fit.
Best regards,
They said that they worked for months on this, so what was the budget and who was paying this? I guess that there were at least 5 people working on this, and they probably don't work for pennies :)
+Valent Turkovic Um, actually Valent, you paid for it.
I wondered the same thing. I am sure they went to RUclips to go viral and get some revenue to fund the artistic portion of this project. Robots doing things by day for corporations and working at night for artists. Most companies will fund art studies and experiments for exposure. The best advertising they can get is to have a couple of guys design this and go viral on RUclips — the best advertising in the world, because the product speaks for itself.
that's exactly how it's done. Projection is just putting light on something, and if you project black, you kinda just... don't :D Of course you need good projectors too :)
WHO KNOWS THE NAME OF THE SONG?
Composer is Sounds Red
brilliant....thank you for sharing
AMAZING!!! What song? I really like the background music
Me too. If you find it, please remember to share.
so glad I subscribed to this channel.... So much amazing, inspiring stuff shown here…
I wonder if this is what Disney Imagineering has in mind for the upcoming Micky's Runaway Railway? -Frank
IIRC, aren't they using this kind of projection for faces on animatronics in some of their more recent show rides? I seem to recall a mention of projection being used on the Frozen one.
it's a huge project.. keep going.. wishing you a good luck!
It's almost as if this video always existed... it just needed to be made.
Projectors are stationary.
Wouldn't it make more sense for the canvases to be LCD screens?
This project has really caught my attention. I have a project for a cycling client that I would like to create something similar but am interested to know more about the programming time and costs. Is there a contact @Box I can talk to please?
It's an annotation, You can easily turn that off. at least it isn't something that covers half the screen and has someone telling you to "please subscribe" for 10 seconds.