I thought side-by-side was a 2:1 ratio, and the separation was 6.5 cm by default.
With all your head/camera movements I can see how and why people get sick to their stomachs with this VR stuff. Really liking the Redshift tutorial, amazing. I'm getting a system in a week with a GTX 1070 installed along with 32 CPU threads, so I should be able to do direct comparisons between Renderman and Redshift on a system suited for both. I think Redshift is going to be the winner.
It's actually not bad at all when you're in the headset, particularly with head tracking, but on YouTube it doesn't look the best! A 1070 is what I just got in my new rig and it's fantastic; Redshift burns through renders. I should have done some renders on my 980 to compare before upgrading. Even with a 6700K, Renderman is still way slower than Redshift, but it wins out in other areas like shader complexity and Maya integration (so far, though Redshift just added nParticle support, so yay).
I get the same comment in the Renderman forums too when I mention the Redshift comparison: that it's loads faster. Redshift does seem to be more nimble than most in developing requested new features. I'm just hoping it doesn't change its price point later on with maintenance licence fees; it's very attractive right now.
Hi Michael. Have you done more of these 360 stereo videos? In particular, on adjusting the separation value to match the distance between the eyes? My question is, wouldn't leaving it at 0 be the equivalent of closing one eye to view the world? With two eyes slightly separated you perceive depth, right?
The eye separation is important, otherwise you end up with two identical images. It represents the distance between the two eyes and is usually around 6.5 cm. The number in the Redshift options is in scene units, so don't set it to 6.5 km :)
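In case it helps anyone figure out what number to type, here's a minimal Maya Python sketch that converts a 6.5 cm eye separation into whatever linear unit your scene is using; the camera attribute in the commented-out line is a placeholder, not the real Redshift attribute name, so check your own camera shape for the exact one.

```python
# Sketch: convert a 6.5 cm inter-ocular distance into the scene's linear unit,
# since the Redshift separation field is expressed in scene units.
import maya.cmds as cmds

EYE_SEPARATION_CM = 6.5  # typical human inter-pupillary distance

# Maya's current working linear unit: "mm", "cm", "m", "in", "ft" or "yd"
scene_unit = cmds.currentUnit(query=True, linear=True)

# Centimetres per scene unit
CM_PER_UNIT = {"mm": 0.1, "cm": 1.0, "m": 100.0, "in": 2.54, "ft": 30.48, "yd": 91.44}

separation = EYE_SEPARATION_CM / CM_PER_UNIT[scene_unit]
print("Set the camera separation to roughly %.4f %s" % (separation, scene_unit))

# Hypothetical attribute name -- look up the real separation attribute
# on your Redshift stereo camera shape before uncommenting:
# cmds.setAttr("myRsCameraShape.eyeSeparation", separation)
```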
Redshift, yeah!
yay more redshift! thank you!
Hi Michael, I have a question: I am trying to mix the Redshift 360 stereo camera with the Renderman 360 stereo camera to save some rendering time for the particles! Is there any difference in the camera separation preset, or any points I should consider so I can composite both images later on in exactly the same position (apart from the 0.064 Renderman eye separation)?
Is it OK to give no value in the camera separation field? If so, how does it create a stereo image?
No camera separation is required; this is handled by whatever image viewer you're using with your HMD. The stereoscopic image is created by the viewer, which takes the left- and right-eye information from either a side-by-side 360-degree image or an over-under image.
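Just to illustrate the layouts being described, here's a small sketch (using Pillow, with a made-up filename) of how a viewer splits a stereo 360 frame into its two eye views:

```python
# Sketch: split a stereo 360 frame into left- and right-eye images.
# Side-by-side packs the eyes into the left/right halves of the frame,
# over-under packs them into the top/bottom halves.
from PIL import Image

def split_stereo(path, layout="side_by_side"):
    img = Image.open(path)
    w, h = img.size
    if layout == "side_by_side":
        left = img.crop((0, 0, w // 2, h))    # left half of the frame
        right = img.crop((w // 2, 0, w, h))   # right half of the frame
    else:  # "over_under"
        left = img.crop((0, 0, w, h // 2))    # top half of the frame
        right = img.crop((0, h // 2, w, h))   # bottom half of the frame
    return left, right

# Hypothetical usage:
# left_eye, right_eye = split_stereo("stereo360_frame.png", "over_under")
```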
Thank you so much for your quick reply.
You're welcome :)
I had to set the separation or else I had no 3D, only 2D 360. I set it to 2 and it worked great.
With it set to 0 the two images were the same.
Maya 2018 update 5
Hi, I was wondering whether having a GPU graphics card could make rendering in Renderman faster, or whether it just depends on your RAM. Thanks as always for your awesome tutorials.
Renderman is CPU-dependent at render time, so the only thing a GPU would help with is handling some of the load in Maya for displaying what you're seeing in your workspace. Renderman does take advantage of the GPU when 'denoise' is enabled; however, that function only works during a batch render. Hope that makes sense!
Yes, that was very helpful thank you :)
bro! can u make more tutorials about redshiiiifT!!!!!!
Bro! Some more coming soon.
I cannot find any way to do this in 3ds Max.
I don't use Max so I can't help you, sorry.
Coming from your render settings tutorial, does Renderman 21 have an adaptive mode?
Do you mean regarding the target sample rate? It's always in adaptive mode now; if you want to force it to render each pixel up to a specific sample count, set both the lowest and the highest sample to the same number. Feel free to comment on old videos as well, I'll still be notified and it'll help others see comments relevant to the tutorial :)
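If you'd rather script that min = max trick than dig through the UI, a rough Maya Python sketch is below; the globals node and attribute names are assumptions about how Renderman for Maya exposes its hider settings, so verify them against your own render globals before relying on this.

```python
# Sketch: force a fixed per-pixel sample count by setting the adaptive
# sampler's minimum and maximum samples to the same value.
import maya.cmds as cmds

FIXED_SAMPLES = 256  # every pixel gets exactly this many samples

# Assumed node/attribute names -- check your Renderman render globals,
# they can differ between versions.
cmds.setAttr("renderManRISGlobals.rman__riopt__Hider_minsamples", FIXED_SAMPLES)
cmds.setAttr("renderManRISGlobals.rman__riopt__Hider_maxsamples", FIXED_SAMPLES)
```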
Thanks! I'm not too savvy with render settings but I'll give it a try. It's the rendering that keeps going like a progressive renderer that drives me crazy. In V-Ray it would be setting the sampler mode to Adaptive DMC; I also read that Renderman Reyes had it under the Hider in the advanced settings.
Reyes has been discontinued in 21 so you won't find it!
What VR machine is that? Which version?
+조아성 Oculus CV1 with Touch controllers