What you are doing in Photoshop is averaging the frames. This is a great way to reduce noise, but pixel shift is an attempt to shift the sensor so each pixel gets data from an R, G and B photosite; 16x pixel shift also attempts to double the resolution by shifting by half pixels. This is what the Sony software is attempting to do (badly, it seems). I am not an expert in Photoshop, and it may be possible to do this there, I don't know, but there is another piece of software called RawTherapee that attempts proper pixel shift processing; you might try it to see if it works better than your frame-averaging method. One of the things pixel shift processing attempts is to detect motion in the frame (leaves blowing in the wind, etc.) and use only a single frame for the parts where motion is detected, to keep them sharp. With averaging, these areas will be blurred like a long exposure. It depends on your goals, though, as to which result you prefer. I have not yet attempted this myself, as it appears RawTherapee requires pixel shift pictures in the format output by Sony's software, and as a Linux user I can't run Sony's software. I'm currently trying to find a workaround for this problem.
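The four-shot variant of this idea can be shown with a toy simulation: shifting an RGGB mosaic by one photosite in each direction means every pixel is eventually measured by an R, a G and a B photosite, so no demosaicing interpolation is needed. This is only a sketch of the principle, not Sony's actual pipeline; the scene data and grid size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))  # ground-truth RGB scene

def bayer_sample(img, dy, dx):
    """Sample img through an RGGB mosaic shifted by (dy, dx) photosites."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w))
    chan = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            yy, xx = (y + dy) % 2, (x + dx) % 2
            # RGGB tile: (0,0)=R, (1,1)=B, the other two sites are G
            c = 0 if (yy, xx) == (0, 0) else 2 if (yy, xx) == (1, 1) else 1
            chan[y, x] = c
            mosaic[y, x] = img[y, x, c]
    return mosaic, chan

# Four shots shifted (0,0), (0,1), (1,0), (1,1): every pixel is now
# directly measured by all three channel types.
recon = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    mosaic, chan = bayer_sample(scene, dy, dx)
    for c in range(3):
        mask = chan == c
        recon[..., c][mask] = mosaic[mask]

assert np.allclose(recon, scene)  # full RGB recovered at every pixel
```

Averaging the four mosaics instead would reduce noise but would still leave each pixel with interpolated colour, which is the difference the comment above is describing.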
"Nearest Neighbor" is what you want to use. Then select all layers and create a Smart Object. In the Layer menu, choose "Median" as the stack mode. No opacity calculations needed. The sensor does half-pixel shifts.
Thanks for sharing!
Where am I selecting "Nearest Neighbor"?
@@tdawg719 See "Resample" and make sure it's checked. Then you see "Automatic". Click that to get the other options; one is "Nearest Neighbor (hard edges)".
Thank you. In Photoshop it may be best to create a smart object from those 16 images and choose Mean or Median, instead of manually editing the opacity and such.
Thanks. Great to see how to do it manually in Photoshop. It would be nice to see a side-by-side comparison of a 61 MP single shot versus the Photoshop manual stack.
Great explanation - thanks!
Great explanation and short cut. Thanks!
Hi! Do you happen to know if this PSMS mode works in APS-C mode? I know it seems stupid to combine them, but I may have a use case for this combination.
How did you save the selecting of each layer for setting opacity into an action? The action does not seem to work on other files since the action records the name of the layer that you selected when recording.
Great video, thanks for the photoshop alternative method to Image Edge
Can you put pixel shift images in a different folder in-camera, like with the bracketing on the a7R V?
Very engaging and informative, if a little dense at 24 minutes.
Thanks John, and indeed, I rushed a bit. In particular when comparing images side by side I should have slowed down a bit and allowed people more time to digest.
Great video, I love your clear step-by-step explanation, thank you. I have a question: is there a way to make double-exposure photos with the A7R V?
You said at 6:04, regarding the white balance that was set differently between shots, that you "tried to correct them as good as possible in post." Am I missing something about white balance? I thought once you set the temperature and tint on two RAW files to match each other, it's the same as if you had used the same manual setting on the two shots when taking them.
Really, thanks a lot indeed. I own a Sony A7R V and your video helps a lot to get the best out of this camera, which has so many features and settings.
Glad I could help, Paolo.
SUPER! I can not wait to try it!
Thank you for this great tutorial.
Nice video! I agree with some of the technical recommendations cited in the comments that should make Photoshop work better. I also wanted to ask for a wrist shot of your JLC! What model were you wearing?
Phenomenal video - nice work! Demonstrations like this confirm the view that, moving forward, nearly all "enlargement" can be done in software. People can forget about the megapixel count in the camera and focus on the "image" instead.
Many thanks, and great comment. Another piece of software that works with lower-megapixel cameras is GigaPixel AI from Topaz. I will post a video on that software in the next weeks; I tried it out with my Leica SL2-S in New York and it works like a charm - you can easily triple the resolution while keeping very good quality.
@@mathphotographer Looking forward to that one - the SL2-S is a wonderful camera with beautiful image quality right out of camera. The ability to reliably and easily take a single SL2-S image up to +/- 100 MP is great news! As well as GigaPixel AI, I've also found that excellent single-image results can be obtained in Photoshop/Camera Raw using "Enhance".
Hello dear MathPhotographer, what is the rig cage we see in this video, if you please? Excellent tutorial from a brilliant person! Thanks very much...
Thank you very much for your positive feedback. The rig I am using is from SmallRig, see here: www.smallrig.com/smallrig-full-camera-cage-for-sony-alpha-7-iv-alpha-7-s-iii-alpha-1-alpha-7r-iv-3667.html
How did you find the size that you shifted to in PS? I really didn't want to sift through to find that
Excellent video, thank you!! One question: I've bought an a7R V, but is it possible to set up the camera to save the pixel shift pictures in a subfolder within the main folder for stills? Sometimes it's annoying if I shoot a lot and then have to go back and carefully check them all! Thanks
This is a brilliant video, thank you. I photograph artwork with an a7R IV in 16-shot pixel shift mode for prints. Can I ask two questions:
1. Would it be worth upgrading from my a7R IV to the a7R V? Is pixel shift better?
2. When you are changing opacity, why not just set all layers to 6% so that combined they add up to a full image? I am wondering if the 15th layer at 94% and the preceding lower layers are not simply almost totally obliterating the lowest layers, therefore leaving them out of the final image. Whereas if they are all the same low opacity, they add up to 100%. Will try the method later today anyway and see.
Thanks so much for a great video.
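For what it's worth, the opacity series that averages a stack evenly under "normal" blending is 1/1, 1/2, 1/3, ..., 1/16 counted from the bottom layer up (100%, 50%, 33%, ..., 6.25%), not a flat 6% everywhere: each layer's contribution is its opacity times the transparency of everything above it. A quick check of that arithmetic (illustrative only; not necessarily the exact percentages used in the video):

```python
# Effective weight of layer k under normal blending:
# weight_k = opacity_k * product of (1 - opacity_j) for all layers j above k.
n = 16
opacities = [1.0 / k for k in range(1, n + 1)]  # bottom layer first

weights = []
for k, o in enumerate(opacities):
    w = o
    for a in opacities[k + 1:]:   # every layer stacked above this one
        w *= (1 - a)
    weights.append(w)

# Each layer contributes exactly 1/16 to the final composite.
assert all(abs(w - 1 / n) < 1e-12 for w in weights)
```

Setting every layer to a flat 6% would not produce an even average, because the layers near the top would still dominate whatever sits below them.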
This was a terrific video. I have had problems using the pixel shift software because it doesn't seem to be compatible with my travel iPad Pro nor my newest OS 14.5. I am wondering why the processing is done using JPG format instead of TIFF or PSD?
0:34
Cheers mate. Another really helpful video ...
Glad you enjoyed it
So I keep trying this with a series of photos I took of Fuji, but every time I do the auto-align it resizes all but the last frame and shrinks 15 of the layers. Not sure what's happening; do you have any ideas or tips? I'm wondering if it would make a difference if I auto-aligned first and then resized, or would that throw everything off?
To recover the highlights, could you not have edited each individual image before combining them? A lot of work, I know.
Not a lot of work; just use Lightroom.
Liked and subscribed! Could you share the Photoshop script, please?
Please can you do a video going through the A-Z of the menu and the best settings for taking photos?
Oooops, that would be another two-hour video ... will see what I can do. Such a video would have a lot of overlap with this one.
@@mathphotographer Thank you. Also, which card is best to use in the Sony A7R V? The Angelbird AV PRO SD Card MK2 V90 vs the SanDisk Extreme Pro SDXC Card 128GB 300MB/s? I see one is V90 and the other is not?
Great video. You made it look so easy although I am sure this is not the case. Early in the video you mentioned that using auto white balance was a mistake. I was wondering what setting you do recommend for white balance? Thank you!
That is truly an excellent video. Many thanks for sharing. Love all your low-light images of Zurich etc. Would be great if you did a video on this? Also, a video on your full workflow with the Sony A7R V, especially colour reproduction? A video on colour reproduction for landscape photography, Sony vs Hasselblad X2D, would be great as well. Many thanks.
Great suggestion! Many thanks for the positive feedback, Ash!
Can you please give us access to your action you made for photoshop and Lightroom? I’d love to use your action for my shots.
What frame do you have around your camera? Can we get a link, please?
www.smallrig.com/smallrig-full-camera-cage-for-sony-alpha-7-iv-alpha-7-s-iii-alpha-1-alpha-7r-iv-3667b.html
Sorry, how do you calculate the image size again? Based on what?
Very nice, instructive video. I have an A7R V and am looking for an L-bracket. What is the one you are using, and does it fit the A7R V?
Thanks Bob, I actually use the cage from SmallRigs, fits my A1, A7RV, etc., see here: www.smallrig.com/smallrig-full-camera-cage-for-sony-alpha-7-iv-alpha-7-s-iii-alpha-1-alpha-7r-iv-3667b.html
Interesting video - a couple of ideas:
- How about using AI upscaling (super resolution, Topaz Gigapixel) to upscale each individual frame by 200%, and then import them into a stack, align and merge?
- Try a different algorithm for upscaling, like Lanczos. RawTherapee offers Lanczos for upscaling RAW files. Supposedly it's one of the best upscaling algorithms out there.
- Instead of adjusting opacity for each individual layer, turn the stack into a smart object and use either Median or Mean as the stack mode.
A question: would you definitely say that the A7R V is noticeably improved over the A7R IV when it comes to pixel shift? If so, what would that be due to? Better image stabilization, maybe?
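On the median suggestion: a tiny numpy experiment shows why a median stack can keep a transient (a bird, a swaying leaf that appears in one frame) from smearing into the result, while a plain mean blends it in. The frame data and values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.5
# 16 noisy frames of a static 8x8 scene...
frames = true_value + 0.02 * rng.standard_normal((16, 8, 8))
# ...with a bright moving object crossing pixel (4,4) in one frame only.
frames[3, 4, 4] = 1.0

mean_img = frames.mean(axis=0)
median_img = np.median(frames, axis=0)

assert abs(median_img[4, 4] - true_value) < 0.05  # median rejects the outlier
assert mean_img[4, 4] - true_value > 0.02         # mean smears it in
```

This is also why averaging gives long-exposure-style blur in moving areas, as mentioned elsewhere in the thread, while a median (or a motion-masked merge) keeps them closer to a single frame.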
Fantastic video thank you
If I use the Photoshop method with GFX 100S files, will it be better too?
I assume yes.
Great video, thank you bro... but my English is not so good :) How do I know which size I must enter in PS? You had two different sizes in two pics.
I have been wondering, given this technique, do we really ever need a pixel-shift-enabled camera (be it Sony, Fuji, Lumix, etc.)? One could even recreate pseudo pixel-shift movement of the sensor with any camera on a tripod. There are always tremors in the ground everywhere. Even our feet moving near the camera and tripod should create tremors that move the sensor ever so slightly, such that no two consecutive frames on a tripod are exactly the same at the pixel level. If we want to create some more movement for the camera on the tripod, we could even attach a vibrating object to one of the tripod legs (like attaching a phone to the leg and playing a vibrating ringtone) while taking a bunch of photos consecutively with a remote. We could then post-process like the method in the video and "potentially" get a high-quality, high-resolution image with any camera. Have been thinking of implementing this. Any thoughts?
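Whether random tremor gives usable shifts is an open question (a real pipeline would have to estimate the unknown offsets), but the payoff of sub-pixel shifts is easy to show in 1-D: two point-sampled "shots" offset by half a coarse pixel, registered and interleaved, recover detail that upscaling a single shot cannot. A minimal sketch under idealized sampling assumptions:

```python
import numpy as np

# Ground-truth fine-grid signal with detail the coarse grid undersamples.
fine = np.sin(np.linspace(0, 6 * np.pi, 64))

shot_a = fine[0::2]   # coarse shot, offset 0
shot_b = fine[1::2]   # coarse shot, offset half a coarse pixel

# Register the two shots on the fine grid and merge them.
recon = np.empty_like(fine)
recon[0::2], recon[1::2] = shot_a, shot_b

# Nearest-neighbor upscale of a single shot, for comparison.
single = np.repeat(shot_a, 2)

assert np.allclose(recon, fine)          # shifts contribute real new detail
assert np.abs(single - fine).max() > 0.1  # upscaling alone does not
```

With tremor-driven shifts the offsets are random rather than a clean half pixel, so the alignment step (Photoshop's auto-align, or similar) has to do the registration, and the gain is less predictable than with a true sensor-shift mechanism.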
Spot-on comment, see my video on iPhone super resolution some time back where I did exactly what you say in your comment: ruclips.net/video/vr7JoBGqoCk/видео.html
How well does the manual method compare if you just take a single shot and enlarge it using Photoshop with Preserve Details?
I tried ... quality will be lower, since there is no additional information coming in when you just enlarge one frame. If you have just one frame, I would go for GigaPixel AI from Topaz, which uses computational methods to enlarge resolution. I will post a video on GigaPixel AI in the next weeks; I played with it in New York with my Leica SL2-S and the results are stunning.
Thanks for phenomenal videos! But I'm wondering why you are using JPEG instead of RAW files. Much of the detail is lost in JPEG.
Thanks Paul - I actually use only JPGs which I post-processed from RAW files beforehand. In other words, I shoot RAW images, post-process them to JPGs and then stack them into one composite image. So I fully use the potential of the RAW format. Maybe I should have formulated this better in the video; thanks for coming back on this.
@@mathphotographer In doing this I would think that you're combining images with much less information than would be present in processing RAW files. Having come from PhaseOne medium format images and making very large prints, the whole point of pixel shift is to obtain absolutely maximum image quality. Yes, I know that will take a lot longer to process, but the whole point is maximum detail.
Your video(s) are incredible ! Thanks for your help !
Question: my computer is a "good" one (AMD Ryzen 9 3900X, 12-core, 128 GB RAM, NVIDIA GeForce GTX 1660) ... but Photoshop is unable to complete the 3rd step!
"No answer from the program"
Any idea "why" ???
Other question:
I have the same camera as yours but not the same resolution!?!? Why?
Thanks and congratulations for your videos !
I get "The command SELECT is not currently available" unless I label all the files with the same names as the original ones.
Really like your video style and found this video informative. I'm motivated to take my a7R V and try your technique out for myself, as I was very impressed. One piece of constructive feedback: I felt I did not have enough time to really see some of the differences you were pointing out, owing to the speed at which you zoomed in and out. Maybe slow that down just a bit next time and mouse over to point things out, but really, I still think the video was excellent overall.
Thanks Andrew, and I appreciate your constructive feedback, will slow down next time when comparing images, very good input.
Very interesting.
Btw, I am a big fan, and am based in Singapore. Are you still here? Any chance for a meet-up?
Thanks! I was in Singapore in December but currently I commute between Zurich and New York ... unfortunately we missed each other in Singapore. It is a great city, I love the culture, the food, the Marina, the surroundings ... just a great place to live and work!
Couldn’t you have your camera in auto ISO to compensate for the change in light in the sixteen images?
Yes indeed, that would do the trick. But I wanted to shoot all frames at the base ISO, which is 50, and then had to correct exposure in post. But clearly your Auto ISO suggestion would fix the issue and take away the burden of correcting exposure later on.
Great video - thanks! A couple of points. For the Singapore image, it seemed to me that the Sony software did a slightly better job than Photoshop, especially on the trees, where small movements between frames blurred the Photoshop version a little. On the other hand, the Photoshop version blends the motion of the water into a pleasing smoothness, whereas the Sony software appears to have selected a single image, effectively making it no better than a single 60 MP image in those areas. Something I also wonder is the extent to which the PS method, with its auto-align feature, might (unlike the Sony software) even be capable of constructing a pixel-shift image from a relatively steady handheld series. I'm guessing it won't work, because there won't be a systematic overlay of R, G, and B pixels from the Bayer array, but I'll give it a try tomorrow.
Thanks for your kind feedback, Martin. Your observations are spot-on: in some aspects the Singapore image looks better with Imaging Edge, in others it looks better with Photoshop. Your last point is spot-on too; I used the same Photoshop method in a video on burst shooting handheld with the iPhone and then creating a super-resolution image; see here: iPhone 12 Pro Max Super Resolution ( ruclips.net/video/vr7JoBGqoCk/видео.html )
I have Capture One. How does it compare to the Photoshop result?
Thanks Donald, I haven't yet tried the same procedure (stacking) with Capture One 23; I need to look into it and how it works.
I keep getting "The command SELECT is not currently available."
Why not align the images before increasing the size?
This PS method does not make sense. I mean the method of manually stacking the layers; why choose 6%, 12%, 18% ...? It does not make sense. My test result shows worse image resolution from this method compared with that generated by Sony's Imaging Edge.
Is the Leica M11 better than the Sony a7R V?
These two cameras can hardly be compared. The Leica M11 is a fully manual camera which needs good photography skills and patience to learn how to shoot with. The Sony a7R V is a feature-packed system camera with fast autofocus at a very high technical level. Image quality is super high on both cameras, so you will be fine with either.
@@mathphotographer Thanks. If you could choose between these two, regardless of cost, which one would it be? 😜
@@gurugamer8632 It depends on what you want to do with your camera if you have to choose between these two. A landscape or reportage photographer will likely go for the M11: excellent image quality, lightweight lenses, lots of lenses available, non-intrusive camera body dimensions, etc. A fashion or sports-and-action photographer will likely go for the Sony, since in addition to excellent image quality you get a very fast, AI-trained autofocus system, plus a good lens portfolio.
@@mathphotographer I totally agree maybe I need both 😂
Are you 100% sure that the blurriness of the first processed image is not the result of unintentional movements of your camera during capture? 15 s is quite long, and there might be different factors impacting the shot (e.g. a tripod on a bridge with moving cars).
Also, noise cancelling: wasn't it impacting the final results?
I agree Photoshop did quite a good job blending those images and aligning them better, but I'm not convinced this is the best approach. During the blending process in Photoshop you are losing details. Look at the sky in the first picture: you can see clouds in the Sony-merged one, but it's much more blurred in the Photoshop one. Of course your photo was a long exposure, so maybe you prefer the more blurred sky from the Photoshop process.
I believe Sony uses a different algorithm, leveraging and calculating the single-pixel shift movements between shots. Your blurry outcome could be caused by source shots that were not sharp or not fully aligned.
One of the objectives of multi-shot is to capture most of the details with the highest colour/reproduction fidelity of the moment. I have the feeling that what you've achieved with multiple photos in Photoshop does not represent the captured moment well (an observation based on comparing the single-shot 60 MP photo with the 16 shots merged by Sony and by Photoshop).
This is not criticism, rather curiosity.
A final observation: your method gave you better results because the Sony process failed for whatever reason. Regardless of picture/moment fidelity, you have a useful high-res photo, which is a win.
Overall I really like your videos; your last A7R V two-hour tutorial was great, a lot of effort from your end but very useful for the audience.
Please continue doing your good work here. Regards
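The single-frame-fallback behaviour this comment speculates about can be sketched as a per-pixel motion mask: where the frames disagree strongly, take one reference frame; elsewhere, average. The threshold and data below are illustrative, not Sony's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
# 16 frames of a static 8x8 scene with small sensor noise...
frames = 0.5 + 0.01 * rng.standard_normal((16, 8, 8))
# ...and one pixel whose value changes across frames (moving water/leaves).
frames[:, 2, 2] = np.linspace(0.0, 1.0, 16)

spread = frames.std(axis=0)
motion = spread > 0.05                      # crude per-pixel motion mask
merged = np.where(motion, frames[0], frames.mean(axis=0))

assert motion[2, 2] and not motion[0, 0]
assert merged[2, 2] == frames[0, 2, 2]      # moving area: one sharp frame
```

Static areas then get the full noise-reduction benefit of the average, while moving areas stay as sharp (and as noisy) as a single exposure, which matches the trade-off described above.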
👍
Thanks!
What is the "mighty shot" option? Ohh, he means multi shot... it's pronounced "multi", man.
And still people insist on NOT shooting in manual mode. 🙄
:)
Panasonic just does this in-camera, handheld. That's why Sony is trash.
I'm doing the same but with the 16 original shots; it takes time on a MacBook Pro 16 Retina (I'm not sure anymore that the MacBook Pro is an excellent, or one of the best, laptops...). Don't you think JPG is not a good choice (OK for a fast tutorial) to get the real best result a client expects from 240 megapixels? And also, the resolution of your shots is 240 px/inch and not the real 300 px/inch, which is normal for photographers? After a long, long wait, the result using your process is awesome. Bravo!
Thanks for your comment and feedback, much appreciated. I always shoot in RAW, and the frames I used for the multishot were shot in RAW, then developed in Lightroom to my liking, and then processed through Photoshop and the Sony software for stacking, so I did not give up the full quality and information in the RAW files :) Having said that, if I stacked the RAWs and then processed the stacked image in post, the result could maybe be a tiny bit better. Still, I like to do it the other way round: post-process the RAWs to JPG and then stack into a multishot image.
They're going to steal from your card.