Very cool Nico, I just happened to be in the midst of the same project. I have 71x10min subs taken with the L-Ultimate/ASI533mc/8" Edge. I also grabbed 60x30" shots for the core. Surprisingly it came out pretty good.. but not as much detail as your core. I was planning to try lucky imaging.. just need a break in the weather for it. I'm debating if I should also try shorter subs with the L-Ultimate and compare it to the broadband data for the core. Then there is the question on how to blend them. I don't think the HDRComposition is needed since the core isn't blown out with the 10 mins NB subs. I was thinking maybe layers in PS or that new Image Blend script in PI. Looking forward to seeing how you finish yours. CS!
Very interesting that the 10 min subs don't have a blown-out core with the L-Ultimate. I was using a pretty wide narrowband filter (Svbony SV220), so it makes sense that mine did clip. Last night I continued with the narrowband 5 minute subs and also took about 80 10-second subs through it just to see what those look like.
@@NebulaPhotos Yea, I was honestly surprised about those 10 min subs. I expected the core to blow out and then to just use HDRComposition to put everything together. I am at F7.. not sure if you're running a reducer.. if so that could also be part of it.
Yeah, I did use the reducer (f/5.6) once I saw how bright it was at 1 sec. I wondered if I should have instead used the flattener. With my seeing not sure how much it would have mattered, but I hope to keep experimenting with this.
@@NebulaPhotos Looks like there's a lot of potential content to mine out of this target :) F/5.6 would certainly help with the outer bits. It's a very interesting and challenging target.
What's wrong with going as short a sub exposure as possible, when it's the total integration time that matters? For example, take 1200 images at 0.5s vs. 3 images at 200s vs. 2 images at 300s. Provided no bad images, I should end up with the same result if using the same equipment and all other variables are constant. In fact, there's a downside to going with longer subs: for bright objects, the pixels will saturate before all of the information is captured, and as a result the dynamic range per sub is compromised.
3 * 200s, 2 * 300s, and 1 * 600s will all look about the same if an identical stretch is applied. 1200 * 0.5s will not work the same/better unless the object is very bright, as was the core of the Cat's Eye Nebula in this video. 0.5s is typically not enough time with current cameras to sufficiently swamp the electronic noise of the sensor itself unless the signal is quite bright (daylit scene, nebula cores, star cores, etc.). In other words, you will be limited by the readout noise of the sensor. If we assume a perfect sensor, then yes you are correct, but in the real world, only million dollar scientific sensors approach that kind of perfection in counting photons accurately. I have real world examples of this. For example here: ruclips.net/video/mYucAuUrdTs/видео.html
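To see that read-noise argument in numbers, here is a minimal sketch using the standard CCD SNR equation. All the electron rates and the read-noise figure are made-up illustrative values, not measurements from any particular camera:

```python
import math

# Standard CCD SNR equation: signal / sqrt(shot + sky + N * read_noise^2).
# obj and sky are electron rates per pixel (e-/s), rn is read noise (e- RMS).
def stack_snr(n_subs, sub_exp, obj=0.5, sky=2.0, rn=1.5):
    t = n_subs * sub_exp  # total integration time (s)
    return obj * t / math.sqrt(obj * t + sky * t + n_subs * rn ** 2)

# Faint signal: many short subs pay a heavy read-noise penalty.
for n, exp in [(1200, 0.5), (3, 200), (2, 300)]:
    print(f"{n:5d} x {exp:6.1f}s -> SNR {stack_snr(n, exp):.2f}")

# Bright signal (e.g. a nebula core): shot noise dominates and the
# penalty nearly disappears, which is what makes lucky imaging viable.
for n, exp in [(1200, 0.5), (2, 300)]:
    print(f"bright: {n:5d} x {exp:6.1f}s -> SNR {stack_snr(n, exp, obj=50.0):.1f}")
```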
Maybe I'm missing something in this exercise. Why are you not using a longer focal length telescope with greater light capture? Increasing your frame rate works opposite to adding magnification. A longer focal length, increased light capture, and higher magnification at the same 1 sec frame rate would be a closer balance than doubling your frame rate on the same telescope, no? I must be missing something about using this method vs. greater light capture and magnification.
This is my longest focal length scope with the greatest light capture that I'm able to mount on a tracking mount. 7" f/5.6 isn't terrible. 10" f/4 would be better. I'm mostly a refractor guy for imaging. I'm not sure what you mean about frame rate vs. magnification. Changing frame rate doesn't change anything when it comes to pixel scale / resolution / magnification, whatever you want to call it. The reason for a faster frame rate is simply to give one more chances for 'luck' when the atmospheric conditions are favorable for a clearer view of the object, but if the exposure is too long it won't matter because the seeing blur will be baked in. 1 second may have been too long so I'll try 1/2 second next.
Bro, edit your image: stretch it a bit manually in Siril, then run StarNet to separate the target from the stars, and edit that without worrying about the core being blown out. Export it as a 16-bit TIFF into Photoshop and bring in the star mask to blend. Then back in Siril, reopen the stacked result file, stretch it a hair manually to bring in the core details, and export that as a 16-bit TIFF too. Now you have three layers in Photoshop; process and mask in the three layers of detail, with no blown-out core and nice stars.
Not something I’ve ever tried. The star tracker won’t interface with the computer, but if your DSLR does, it may be possible to use those programs for focus and platesolving.
Hmm, would like to see you cover shoots spanning multiple nights. Your biases and darks are temperature-dependent, and need to be shot the same night as the lights, and the dust on your lens will shift between sessions, so same thing there. Meaning you need to stack each night separately. But how do you combine them later? Just stack the individual results without calibration frames?
This would be my expectation too. However, I think you'd need to weight each stack by the integration time before doing the final stack, or perhaps even better, by SNR if the light pollution or moonlight etc. changes from night to night. The reason is: stacking is basically an average. So stacking 100 frames means multiplying each frame by 1/100 and summing. If you were to stack the first 80 and last 20 separately, you would multiply the first by 1/80 and the last by 1/20. When you stack them again, you need to skew the weights so that each original sub gets a weight of 1/100. So 8/10 times the first stack and 2/10 times the second stack. If there are different amounts of light pollution, you want to give more weight to the higher SNR ones. But probably not a big enough deal to worry about. I tinkered with different weights like this in Siril and haven't noticed much of a difference visually.
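A minimal numpy sketch of the re-weighting described above, assuming two calibrated nightly masters made from 80 and 20 subs (the arrays are synthetic stand-ins for real stacked images):

```python
import numpy as np

# 100 synthetic "subs"; any calibrated frames would do.
rng = np.random.default_rng(0)
subs = rng.normal(100.0, 10.0, size=(100, 64, 64))

master_a = subs[:80].mean(axis=0)   # night 1: 80 subs
master_b = subs[80:].mean(axis=0)   # night 2: 20 subs

# Weight each master by its sub count so every original sub ends
# up contributing 1/100, exactly as described in the comment above.
combined = (80 * master_a + 20 * master_b) / 100

# Matches stacking all 100 subs in one go (up to float rounding):
print(np.allclose(combined, subs.mean(axis=0)))  # True
```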
Great video. This whole astrophotography thing is new to me (I didn't know it could even be done 2 months ago). I've looked at SharpCap and I'm confused about exposure. You can set a frame rate of 30 FPS, but an exposure of 1 sec. How does that work? 30 FPS = 33 ms/frame. You were talking about doing 60 FPS, but 1 sec exposures. Are they not mutually exclusive? I can't even get a good shot of the moon, so..... :0) Can't wait to check out the rest of your videos. Cheers
Has anyone ever tried the opposite for this type of thing? I was thinking Milky Way core with a fast lens and a very wide field of view. Not sure if this is a dumb idea, but I thought it might be cool watching all the little nebulae pop out as the image was created.
Hey Nico! Very interesting experiments :) The effect is always going to be very subtle with this hardware, I think. The diffraction-limited resolution of a scope like the one you used lies in the order of half an arc second at best. If you factor in the optics and the fact you're shooting full spectrum OSC on a refractor, I reckon your practical best optical resolution is closer to 0.75". Now when we add the 585 on this scope in its ≈1000mm configuration, we end up with a resolution of ≈0.60" per pixel for the camera sensor. Which is a fine match. These are values that typically perform well under a normal 'good' seeing of around 1.5". So if you want to use lucky imaging to go beyond the level of seeing you can already expect for a decent night, you're going to have to drizzle integrate at ~1.5x or larger. Or else you're simply not generating and interpolating the pixels you need to retrieve the potential extra detail. And even then, there's only a little wiggle room between your scope's potential and typical, non-lucky but still happy, seeing conditions. A factor of 2 at best, but probably a little less in practice (I did notice some chromatic aberration on one of the star images while you were showing your Siril workflow, for instance). At least: as far as I understand what to expect from this. I'm not sure if dithering is even necessary; I never consciously dither when doing planetary lucky imaging and I've never seen any artefact from not doing it. Better still is more focal length. A C11 with a focal reducer would be an interesting OTA for this experiment.
Thanks for the detailed comment Jules! You are right that many planetary imagers seem to drizzle without dithering, so I probably should try it. For normal (long exposure) deep sky I have run the experiment and definitely see little benefit of drizzling without dithering (equivalent to just upsampling), but this is a whole different ballgame. For example, those are typically more sparse datasets (50-100 subexposures). As you say, when dealing with so few pixels it's hard to see any differences at all at the default pixel scale of this gear. Thanks again for the input - very helpful!
@@NebulaPhotos no to artefacts, but yes to benefits from the un-dithered planetary drizzle! On the imaging scale we're discussing (sub-arcsecond resolutions below the happy-not-lucky seeing reported by your local meteorologist), the necessary 'dithering' is caused by the atmosphere itself; it is the actual wiggle introduced by turbulent masses of air which move our relevant details around by a few tenths of an arc second every single subframe. That's also why successful 'lucky imaging' is not just dependent on picking the best frames; a large part of it is statistically retrieving the true location of contrast elements by means of deconvolution. Which also happens to explain why actively dithering the scope becomes a necessary condition for a successful drizzle result when we're sampling at a lower resolution of around or above 1-1.5". (I'm thinking aloud here, I hope you'll allow it.) At the typically much coarser resolutions of long-exposure-sub deep sky astro, the atmosphere itself usually isn't sufficiently wiggly and dithery any more. Besides: the typically much longer exposure times will remove all of the true 'lucky' aspect, which is why your suggestion of halving exposure time is very relevant. I would even consider going to 0.25 seconds and seeing if the camera can cope with the gain, if just for the very brightest of details.
@@JulesStoop Very helpful again Jules. Makes good sense. On Saturday night I did a run of 5000 frames at 0.25s and will try stacking it with drizzle! If the results are interesting, I'll make a follow-up video.
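For anyone who wants to reproduce the sampling numbers in this exchange, a quick sketch: the 2.9 µm pixel is the published IMX585 pitch, the ≈1000 mm focal length is the configuration Jules mentions, and the 185 mm aperture is an assumption about the scope used.

```python
import math

def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

scale = pixel_scale(2.9, 1000)
print(f'native sampling: {scale:.2f}"/px')       # ~0.60"/px
print(f'1.5x drizzle   : {scale / 1.5:.2f}"/px')
print(f'2x drizzle     : {scale / 2:.2f}"/px')

# Rayleigh diffraction limit for a 185 mm aperture at 550 nm:
theta_rad = 1.22 * 550e-9 / 0.185
print(f'Rayleigh limit : {math.degrees(theta_rad) * 3600:.2f} arcsec')  # ~0.75"
```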
Have you had any experience with Live stacking planetary or lunar/solar images? I saw Sharpcap implemented this, but I was curious if you have tried it.
I haven’t. I’m a total newbie when it comes to most of this stuff with Sharpcap. Learning a lot from the comments. Sounds like others have found the live stacking and FWHM filter work really well.
Makes sense! I've been doing PIPP, Autostakkert and Registax/AstroSurface for the longest time, but last night I started playing with live stacking, with mixed results. I got a better stack with the old workflow, although it's too early to call it. Definitely gonna keep playing with it
I do LI a bit differently.... SharpCap live stacking with guiding, dithering, and the FWHM filter on, only accepting the best frames... say at 0.5 secs I get circa 2.1 FWHM, I'll set it to say 2.3-2.5, and I stack say 5-15 minute results thereafter. That's what works for my frac 115/800 with the 585. On the C11... different story...
Hey Nico, I ran into a question that has me baffled. So I took out my Apertura 60mm F6 telescope on my SA2i the other night, mainly to test it out. I also have the field flattener and also another .8x reducer/flattener. First, it took forever to get focus. But the issue I had was that even with the flattener on there, the stars all the way around the edges looked WORSE than when I used my 75-300mm kit lens. I am connecting the T3i to the telescope via a 1.25" T-adapter... I thought that when I unscrewed the 1.25" tube I would be able to screw the camera on directly, but apparently I don't have the right size. Any thoughts on why the stars are so bad while using a flattener?
You shouldn't be using any 1.25" accessories in attaching your camera to the telescope. 1.25" is a smaller diameter than APS-C and isn't designed to give correct backspacing, so you have two issues there. You need a 48mm Canon T-adapter to attach the Canon T3i to the flattener or flattener/reducer. That will give you perfect backspacing and won't vignette.
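For reference, the arithmetic behind that backspacing point, as a tiny sketch (55 mm is the common flattener back-focus spec and 44 mm is the Canon EF flange distance; both are standard figures, though your particular flattener's spec may differ):

```python
# Typical backfocus budget for a Canon DSLR behind a flattener/reducer.
FLATTENER_BACKFOCUS_MM = 55.0   # common spec; check your model's documentation
CANON_EF_FLANGE_MM = 44.0       # EF/EF-S flange focal distance
t_adapter_mm = FLATTENER_BACKFOCUS_MM - CANON_EF_FLANGE_MM
print(f"T-adapter must add {t_adapter_mm:.0f} mm")  # the standard 11 mm T-ring
```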
@@NebulaPhotos that is what I thought the problem was. I already ordered the right T-mount from High Point Scientific the day after I went out. It was the only part that I could imagine being the issue, especially since it did the same thing with both flatteners. I will say, balancing the rig has been quite the challenge too. Hopefully I can test the new adapter this weekend. One last question: have you ever had an issue where your rig would only travel slightly more than halfway before the camera hits the tracker?
@@kevinashley478 No, I'm not sure what you mean. With the SWSA 2i you should be able to point anywhere in the sky without the camera hitting the tracker or tripod. Usually that is only a problem at zenith with very long scopes running into the tripod legs. With that tracker you don't have to worry about balance in both axes, only RA balancing the scope side with the counterweight side. With that scope, I imagine you will want the counterweight all the way out or very close to all the way out.
@@NebulaPhotos Right, so I have the green bar, the piece that the round metal rod screws into, all the way down as far as it can go. I have the weight all the way at the end of the metal rod. The declination part has an adapter on top to accept the Vixen dovetail. I am using about a 6-8 inch Vixen dovetail bar (I think it is a William Optics) on which the telescope is mounted. The bar is as far forward as it can be. The telescope has a field flattener and my T3i on the back of it. When I release the clutch, it will spin left and right and is balanced, but as I spin it further down, just a bit past halfway, the camera hits the top of the tripod where the star tracker is mounted. Is that reasonable, or should that never happen? Basically, when you turn the payload and counterweight to horizontal for balancing, the top half is free and clear, but I can't move the camera much further without hitting the top of the tripod. Would it be better to send you a photo of what I am talking about?
@@kevinashley478 Is there any spot in the sky you can't aim at considering you can move the camera to both sides of the meridian? You can send a photo to nicocarver at gmail
Been into mirrorless imaging a long while (underwater nature), and recently mirrorless astro, planetary, and especially events have become my main jam, though I probably started in a hard area. The things that don't seem covered a lot for a relative newbie: the many connections, mount weight limitations, and imaging-friendly eyepieces (the Baader Hyperion series should be congratulated for that). I have wasted so much money on not getting the right fit for the image train. The biggest difference is that focal length doesn't necessarily follow "wider equals wider view"; the FOV and focal reducer aspect is another galaxy from mirrorless photography. For imagers, I suggest at the very least dipping a toe into astro; it's so immersive. Love astro channels!
Hey Nico! Did you consider using Sharpcap’s FWHM filter during image acquisition? That way you would discard any images that don’t meet defined sharpness criteria already during acquisition and reduce the storage and processing requirements substantially. It is explained here starting at 31 minutes in: ruclips.net/video/khQOnZiz97s/видео.htmlfeature=shared
Nico, can you do a review of the Askar 71F? It's a quadruplet astrograph for only $600; this is unheard of, and I would love to hear your honest opinion 🙏 thanks
Aldrich Astronomical Society (www.aldrich.club/) - Great club out of Central Mass. with an active astro imaging group. They have an observing field and club observatories for members, and hold member star parties there in the summer. ATMOB (www.atmob.org/) - Very big club being the only one in the Boston area. They have a nice clubhouse and observing field in Westford, MA - not super dark there, but okay considering driving distance from Boston. Dues at either are just $35/year!
@@richaellr I'm mostly imaging from New Hampshire these days, but do want to make it down to an Aldrich Star Party this summer if the timing/weather works out, as it's only about an hour from where I live in Western NH.
@@richaellr email me for more recommendations of places to go in New England - I do still do some mobile imaging at dark sites as I don't have a good view of the Milky Way core from home - nicocarver at gmail dot com
Great video. I tried the Cat's Eye Nebula as well and found the core was pretty blown out with 20 sec exposures (alt-az C8) compared to the rest of the image. Hard to stretch them together. I ended up making a composite as well. But mine didn't have as much detail as yours, so this is a great idea. Definitely going to try and revisit it now. Thanks!
Oh and yes, you can use the ROI feature; the only downside is the target might drift out of the FOV, but you can adjust it in real time as well. Another idea would be to use a camera with smaller pixels. I have the QHY5III715C which has 1.45um pixels.
Oh, just realized another downside of using the ROI feature is fewer stars, so you would need to adjust how you align the images. Siril has 1-2-3 star alignment which sometimes works. Autostakkert can use different positions of the object itself for alignment.
No matter the size of the sensor or telescope, there is always a target that needs that particular combination, so it feels like I should have 20 combos for different targets. So why go with this tiny sensor and not something like a 4/3" or 1" sensor and crop it to death? Even with this small sensor, if someone can't afford a 6"-8" refractor or even a 10"-20" reflector, it won't help much; mostly those who buy this camera don't have the budget for big weapons anyway. As for me, if I buy it after watching so many RUclipsrs talking about it, which scope would I have to pair with it to be perfect?
Yes, I tried ROI, mainly for planetary or solar, and it's just great, so I will try it for DSO sometime and see how it goes. I have the 294M Pro camera, which can be binned or unlocked to 46mp with a 2.31µm pixel size; that is SO MANY pixels, so with ROI I can go really small and still have enough megapixels at that small pixel size. I know many people will say it has amp glow, but think about it: the amp glow doesn't show across the entire frame, mostly only by the edge of one side, so with ROI it's like you cut out the amp glow and even the vignetting. I haven't tried that yet, as I've spent the last 3-4 years only collecting gear instead of light, but sooner or later I will put it all to use and see.
Excellent info, and probably the first time I've seen this on YT. I would suggest testing the QHY5III715C for such lucky imaging. Yes, it is an uncooled planetary camera, but it has excellent low noise even after running for a long time. The key factor for the QHY5III715C is the very small 1.45um pixel size (about half that of the 585). I saw on C/N that a few people are using it as the main camera for a C8 + Hyperstar config and producing excellent results for small DSOs. I have the camera and am planning to test it on my C6 + Hyperstar for similar targets.
I noticed that a Sony camera became very much admired for its low-light capture abilities. You could capture the Milky Way in the night sky. The Sony camera had a "full frame" sensor, and all manner of "adapters" were made to gobble up all that imagery. The telescope sensors had fallen behind and they were too expensive. All that sensor gear for telescopes is now e-waste.
Astrobiscuit also did lucky imaging. He used a 10 inch Newt.
Lovely work Nico!! If you're looking for other ways to explore this then, as a few others have commented, a major labour saver could be SharpCap live stacking with FWHM filtering: you can stack on the fly with calibration and rejection, along with the ability to automatically save and reset at a chosen interval/stack size. That way you can make stacks of sub-stacks, and only have to finally combine a handful of masters which are actually comprised of thousands of frames! 👍
I've had great results with that in the past, highly recommend giving it a go :-D
Clear skies!
Oh and how could I forget, pre-congratulations on 200k!!! Amazing! ❤
Awesome, looking forward to trying out those features in SharpCap. I didn't seek out any instruction on this ahead of time to avoid subconsciously taking others' ideas, but now that the video is done, I'm looking forward to copying other people's approaches. 😀
Yep, I watched Luke doing the SharpCap thing some years ago and imitated him, although I also had the same idea, just around the same time he was doing it. :D SharpCap has such awesome features, I wish more people would look into it.
@@ferenc-x7p These techniques are well known among us solar Ha imagers.
I have to watch this again. I have wanted to do lucky image processing with Siril but there was nothing on how to do it till now.
The core is very bright, so it can be captured using planetary imaging techniques. I've captured it with a C9.25" and a 2.5X Barlow, using sub-0.1-second exposures. The result is amazing, and I highly recommend you try that.
Bright yes, but why did he use a reducer? I'm guessing he doesn't have the 1x flattener.
I don't think 1 second is short enough for true lucky imaging; your seeing would have to be really good for that. Go under 250-500ms, capture 20k-30k frames, stack 5% or less, and you should be good :)
I’ll try it!
I agree, 1 second is probably too long to avoid seeing. Although in my experience 1 is absolutely better than 5, for example. By 5 seconds seeing is mostly blurred together, and by 10 it is baked in completely. Beyond 10 seconds makes no difference for seeing.
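To put the exposure-length trade-off in numbers, here is a quick frame-budget sketch for a hypothetical one-hour capture; the session length and keep fractions are arbitrary examples, and this is pure arithmetic with no claims about seeing:

```python
# One hour of capture at each exposure length, and how much integration
# survives a given keep fraction.
session_s = 3600
for exp in [1.0, 0.5, 0.25, 0.1]:
    frames = int(session_s / exp)
    for keep in [0.05, 0.30]:
        kept = int(frames * keep)
        print(f"{exp:4.2f}s subs: {frames:6d} frames, keep {keep:3.0%}"
              f" -> {kept:5d} frames = {kept * exp:6.1f}s stacked")
```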
Ok, so I have a 17.5" Dob I call the " Godsonian"......
It's like a giant trash can for gathering light...
It's on a dobsonian mount for the time being but I am nearly done with a new "Equatorial fork mount" on a wedge. It's got NEMA 32 Harmonic drives.
I've also got an old Meade 10" Schmidt-Newtonian I mount on a Celestron CGX...
And an Explore Scientific 102mm refractor.
If I understand correctly, I need to put my smallest pixel camera on the big dob?
I'll have to look up the pixel sizes I've got:
Canon T3i
Canon art
ASI1600. Jim.
1 second images are more than enough for deepsky stacking if you have enough to stack.
With brighter objects or planets, faster video captures are desirable. I try to get the fastest capture time I can manage and use 1% sorting for the lucky image stack. I can perhaps raise the sorting fraction to 6% with really good seeing.
@@blobrana8515 If your seeing isn't great, which is really the whole reason lucky imaging was invented, you wouldn't see much improvement. Even going from 10ms to 100ms there's a massive decrease in quality
OOOOOHH boy, a long, technical video from Nico about a niche but wildly interesting Astro imaging technique? Best Friday ever.🎉🎉🎉
You should have used Autostakkert for the stacking, as it gives you a quality graph allowing you to stack only the best images. Once stacked, you can then use Siril for the actual processing.
I would love love love love love to see a video where you're combining the short and long exposures.
Brilliant talk, but how do you remove the clouds? I think we have had maybe 2 clear nights so far this year - welcome to the UK!
Another great presentation Nico, I really liked the opening section with the various explanations. I honestly didn't know what was meant by lucky imaging until now. It hints at why video capture has benefits for objects like the moon.
Hi Nico. I'm going down a similar rabbit hole by pairing a small sensor camera with my 71mm travel scope. The purpose is to make an airline-friendly setup which can be transported in my carry-on luggage and has an image scale that's usable for imaging galaxies and planetary nebulas. Your video was very informative and is helping me to make a decision on the kind of camera to use. Keep up the great work!
If it's a given that you're going to crop, shooting cropped makes great sense. My memory button on my 200-600mm bird lens toggles to APS-C so the autofocus doesn't bother with scenery I'm going to crop.
I could suggest using a low-power Barlow on a night of very steady seeing with longer exposures (1-2 secs) and comparing the results. Another thing to try is dithering. I feel the 185mm has more resolution to offer
I do a lot of Ha solar imaging. I use a variety of equipment, filter brands, etc. but find that mono is the number one choice. Having all pixels used gives more data with higher resolution and greater sensitivity than an OSC chip.
Now when imaging I can watch my monitor as it sees the image on screen in real time (well, close to real, at fractions of a second for each exposure). To me it is very obvious as the seeing changes quickly due to the various impacts of atmosphere, heat currents, temperature changes of the imaging chain, etc. I found that reducing the number of "keepers" made an obvious improvement in resolution. I have experimented from keeping around 30% down as low as 2%. The trick is of course you need to shoot a lot longer video to make sure you have enough images stacked.
Also, it is critically important to have flawless focusing. I have used auto focusing as well as manual focusing by raising the contrast very high to make it obvious when best focus is achieved (make sure you reduce that contrast for the keeper video, of course). I found that I am able to get the best focus by manually focusing. I also found that SharpCap FWHM filtering is an excellent tool. It is also important that once you have everything cooled down and focused, you do the run immediately. Things change so fast that it isn't wise to think you can keep the same focus, etc. and reshoot multiple runs.
Finally, while there are advantages in larger apertures, I found that larger apertures can be a negative, as they are more sensitive to changes in seeing and temperature. Of course, really excellent optics are a huge advantage, so don't dismiss that factor. Alternative options are aperture masks or off-axis aperture masks for larger SCTs or Newts. At least one of the top planetary imagers (Damian Peach) uses an off-axis aperture mask to increase optical resolution on an SCT.
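Keeper selection like this can also be automated after capture. A minimal sketch, scoring each frame by the variance of its Laplacian (a common sharpness metric) and stacking the best few percent; the frame data here is synthetic, where in practice you would load frames decoded from your SER capture:

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-ins with shape (n_frames, height, width).
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 5.0, size=(200, 128, 128))

def sharpness(img):
    """Variance of the Laplacian: larger means more fine detail/contrast."""
    return ndimage.laplace(img).var()

scores = np.array([sharpness(f) for f in frames])
keep_frac = 0.02                                  # keep the best 2%
n_keep = max(1, int(len(frames) * keep_frac))
best = np.argsort(scores)[::-1][:n_keep]
stacked = frames[best].mean(axis=0)               # simple average stack
print(f"kept {n_keep} of {len(frames)} frames")
```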
Photographing PN is my reason for astrophotography. I use an 8" f5 newtonian and planetary camera. I use an SV705 now, which is the same sensor, but unfortunately I can't afford a cooled camera yet. I've used an ASI178MM but it's very difficult to calibrate the glow out because it isn't cooled. I prefer mono.
Good work Nico. I think some planetary lucky imaging lessons translate over and some don't. So here goes: 1. Don't be afraid to pump up the gain to help with shortening the exposure. Even at best 5% you'll be stacking so many frames that noise is nothing to fear. 2. I use ROI on my 462 to get the highest possible frame rates on planets, but ROI won't be a factor with DSO. That's because you won't be going anywhere near 100-500 fps (single-figure millisecond exposures) where the transfer rates and USB speed begin to matter.
No expert here, but my guess is that the main target is still too small in the frame to really benefit from lucky imaging. What might further improve the resolution is to use a mono camera. Also, using the scope without the focal reducer to "zoom in" as much as possible might be worth a try.
Great video, thanks. My equatorial 16" F4.5 is coming along and I'm just finalising the latest summer mods for this season, ready for my latest attempts at lucky-ish imaging. I'm also in Siril, particularly for registering tons of short exposures, which is soooo important for narrower fields of view with fewer bright stars. One thing I am trying is narrowband, but collecting SHO detail all at once (i.e. 3x more subs) using an Astronomik CLS-CCD filter, then colouring it with far fewer individual SHO images through typical narrowband filters; I should not need as many of those frames. Fingers crossed this works - quite a few have been scratching their heads about it, but some have agreed it makes sense for a big rig like mine (note: Bortle 4/5). See what you think and maybe add it to your list? Great you're doing stuff away from the norm... very helpful, thanks very much.
In professional astronomy land, ROI imaging is a well known technique for jacking up your frame rate. Having used it myself for high-speed photometry, I'm not aware of any downsides. It works well!
Great video, thanks!! One comment on the gear section: biggest, fastest scope. Really what you want is just aperture. That defines the number of photons you catch from the target object. A faster scope, and hence your focal reducer, shows more sky with more total photons and so is brighter, but for your target, that means it squeezes those photons onto fewer pixels. So, yes, those pixels are brighter, but the image is smaller, working against your desire for a small-pixel camera. The scope F/# gives you nothing useful here, because you are only interested in the image of a particular object. The way you get more photons is aperture and/or time.
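The aperture point above in numbers: the photon rate from a fixed target scales with aperture area, while the f-ratio (e.g. adding a reducer) only changes how many pixels those photons land on. A tiny sketch with illustrative apertures and focal lengths:

```python
# Photon rate from a fixed target scales with aperture area only.
def object_flux_vs_100mm(aperture_mm):
    return (aperture_mm / 100.0) ** 2

# Same 185 mm aperture at f/7 (native) and f/5.6 (reduced): identical
# object photons, just spread over fewer pixels in the reduced case.
for aperture_mm, focal_mm in [(100, 700), (185, 1295), (185, 1036)]:
    print(f"{aperture_mm:3d} mm at f/{focal_mm / aperture_mm:.1f}: "
          f"object flux x{object_flux_vs_100mm(aperture_mm):.2f}")
```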
Good to hear angular resolution and the Rayleigh criterion being mentioned.
Fun video! I am no expert but I think doing drizzle should definitely help resolve smaller details, even without purposely dithering, due to sub-pixel movement from the atmosphere. Definitely crop a ton before doing that obviously. And I would stack way fewer frames, like 1%, and perhaps compare that to 2% and 3% as well.
If you're running half-sec frames, what you could do is try 1/4-sec frames with the gain bumped up a little bit. You'll gather twice as much data and probably end up with the same result if your signal-to-noise ratio doesn't change too much.
I thought 30% was best (42 inch 4K monitor). I would do an experiment choosing from 15% to 50% at 5% intervals.
That's a good idea. I agree that 30% has the edge. Looking forward to seeing if I can get more of a difference to show with more data.
It comes down to 2 things, cost and data compression... we are relying on AI to solve issues instead of simply allowing a larger sensor, which would cost more and require more power to operate. The benefits outweigh the cons of using a larger collector along with a smaller sensor. The best course going forward would be to pair a good lens with a good sensor, but again, it comes down to budget. Trying to get this hobby into the hands of people new to it, for example, it would be difficult to sell those higher-costing but better setups - and the AI can help significantly to produce sharper and clearer images while staying within a reasonable budget. It can be overkill, leaning too far on the AI, however, so it's best to tread carefully ahead.
I would suggest that you try drizzle (150%, 200%, 300%) when stacking; with these images being undersampled, I would expect it to bring out more detail, especially in the 15% and 30% stacks.
Thank you for this! I have the same Touptek camera and am close to getting set up with an F4 10" Newtonian - both for planetary and smaller DSO capture. 😀
ROI with the 294 in bin 1 mode works for planetary. Fast readout, smaller file size.
Combining the long exposure and the short is a game changer and hope to replicate this after winning the lottery!
Best lucky imaging explanation I’ve ever watched. I was surprised though that right after you mentioned lucky imaging is best on bright images, you announce your target choice as the Cat’s Eye Nebula. A very small target and other than its very bright core, a very dim target. I assume you were challenging yourself. For a beginner, like me, should I start on the moon, Jupiter, or Saturn?
Again, great vid sir!
That's amazing. I never thought of using planetary photography methodology for deep sky photography, but if the target is bright enough and small, it should be worth it. In planetary photography, in order to "freeze" the atmosphere, the shutter speed is usually shorter than 0.01s, which is impractical for deep sky imaging. I really hope to see your test result from 0.5s shooting (because of more read noise, the final image is expected to be a bit noisier) and maybe post-processing with Autostakkert. In addition, I think in SharpCap it is feasible to use only part of the sensor to reduce file size.
I tried something similar one time with a 130mm f/5 Newt and an SVBony SV305C camera on the Dumbbell Nebula. I then compared that to a live stacking session that I did with the SVBony SV705C camera with much longer exposures (by much longer I mean about 20 seconds rather than, I think, 1 second) and I really can tell a difference in sharpness. Both cameras have the same size pixels, but I need to repeat this with the same camera and do some more testing.
I think it would have been nice to see what this nebula looks like with only the traditional method, and my question with this is, are you getting more benefit from the sharpness of the lucky imaging, or the HDR effect you are getting from combining two different exposure lengths?
With the traditional method on this particular target with my particular equipment, the core was blown out with 5 minute exposures and the dual narrowband filter. By blown out, I mean completely clipped to white, so I can't do the comparison. It would be a good idea to compare maybe 30 seconds to 1 second at equal integration. I've done that untracked with a lens, but never with a big telescope at sub-arcsecond pixel scale.
My Dad literally came to me today asking if I was interested in him buying a telescope. I was curious if there's one around $150-$200 that would accept a typical DSLR. I want to photograph stars without the typical violet and blue fringing you get, and we don't have enough for a more expensive telescope.
Your best bet with $150 to $200 to spend (I assume only for the scope and not for scope plus mount as there is no way, I'm afraid, you will achieve what you're looking for for that price if you're thinking scope plus mount = $200) is to try to pick up a used SVBony SV503 70ED. I got one new on eBay about 18 months ago for £195 BUT I haven't seen such a deal since. However, to then use it for photography, you will then need an equatorial mount (another few hundred $$ used) and various other bits and bobs. Astro has reduced in price substantially in the last 10 years or so but it still isn't "$200 and I get an entire Astrophotography rig which doesn't produce chromatic aberration".
Hi Nico, can you give any information on your camera settings? I have access to a big Cassegrain and want to try this. I would start with the native high gain mode settings of my QHY 268C. But since one takes so many images, higher gain would be possible, right? What was your approach to choosing the gain?
I've been using the IMX585 with the largest refractor I could afford (SVBony 122) with decent results. I've been using it for galaxies ranging from 4 to 11 arc minutes. I think this is literally the price we pay for preferring refractors while still wanting to image small targets.
Didn't read all the comments, but I would consider using an ROI to get a much higher frame rate, each frame with less sky (less storage and processing too), yet still capturing the full main subject. On such small subjects, similar to planets, you can focus on a much smaller region. What's the shortest frame length you think would be possible to capture the faint outer parts? And in what Bortle was this? Thx!
I'm a little confused.. the sensor size is one thing, but isn't it really the pixel size that matters? i.e. you can take a camera that has a sensor with the same size pixels but many more of them than this one, and it will resolve the same. Cropping into that image will give you the same output as using this camera, no?
The imaging circle will be identical, and it will hit the same number of pixels on this sensor vs. your cropped image.
That'd lead to a cool test actually... use this camera and then get a different camera with a different pixel size (like the standard IMX571). That camera has bigger pixels. The smaller one should theoretically give a sharper image, but it'd be interesting to see where diffraction creeps in.
Anyway, great video as always. Seriously love that scope of yours. I'm tempted to sell my Esprit 120 to buy the Askar 120 because of how small it is.
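The sampling math behind that crop-equivalence point, as a quick sketch (the pixel pitches are the published IMX585/IMX571 specs; the focal length is just an example):

```python
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm  # arcsec per pixel

focal_mm = 1000  # illustrative
for name, pitch_um in [("IMX585 (2.9 um)", 2.9), ("IMX571 (3.76 um)", 3.76)]:
    print(f'{name}: {pixel_scale(pitch_um, focal_mm):.2f}"/px at {focal_mm} mm')
# Sensor area only changes the field of view; arcsec-per-pixel sampling,
# and hence resolution on the target, depends on pixel size and focal length.
```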
It would also be interesting to see if you can guess which ones are 15%, 30%, 50% and 70%, letting the audience guess too, and seeing if we were both right or if neither of us is able to guess, as knowing which one is which could produce a confirmation bias.
Good idea!
Thank you for this video. I was researching similar telescopes. This video helps my research a lot.
Small object, big scope, and a reducer? I can see the benefit of a small sensor but not the reducer; guessing you didn't have the 1x flattener? At 1x you could have used a larger sensor and had about the same size image, maybe a better image if you can track it well. Why use the reducer is the better question.
As far as my knowledge goes, I think the layer masking feature in Photoshop may be a great option for combining the "lucky-imaged" and "conventional" images. I haven't tried it for deep sky yet (it proved itself by combining the earthshine and surface features on the moon, though), but I have seen people doing it, even in a tutorial from Deep Kosmos (though it is in German). Anyway, lucky imaging seems like a wonderful idea to try! Great video!🤩
8:52 For planetary, Damien Peach is a good source for more detailed info. I think Rory tried this with a galaxy, and found a study which said, theoretically, you could improve images over the seeing conditions by 3-4 times, but it required exposures in the microseconds. I think the conclusion was that 1 second was about the max to get any improvement. But of course, the shorter you can make the exposure, the better.
I knew it would be a tiny object given the big scope and small sensor. Thank you Nico for another adventure with you!
1… Nice introduction to the whole process including rationales and walking through step by step.
2… 100% zoom, then 500% zoom. Perhaps 1000% and 2000% zooms would make the differences between the percentages of collected images used more visible.
3… 15%, 30%, 50%, 70% of images used. With 5000 images, why not also try 5% and 1% of images used?
4… Half-second exposures would be interesting, as would comparing a sequence of exposure times all the way down to 10 ms.
5… If collecting 10 ms exposures makes images too dark or too noisy, can a multi-step process be used? i.e. first, align the images, eliminate those that cannot be aligned, and rank the rest by confidence/accuracy of the alignment; second, rank the images by sharpness and eliminate those with poor sharpness; third, stack the images with both good alignment and good sharpness. For example, starting with 1,000,000 images at 0.01 seconds exposure, the goal is to find the best 500 seconds' (or 200 seconds') worth of exposure to stack (50,000 frames). To identify the best frames, try weighting the importance of alignment versus sharpness: e.g. identify groups of frames such as top 1% in sharpness while also top 1% in alignment, then top 1% in sharpness while also top 2% in alignment, then top 2% in sharpness while also top 1% in alignment, top 0.5% in sharpness while also top 1% in alignment, etc. After a dozen such groups are identified, stack each group and look for the best result among them. It may be obvious that top 0.5% in sharpness is best, in which case try top 0.4% and 0.3% (see the sketch below).
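A rough sketch of that selection idea, assuming per-frame sharpness scores and alignment residuals have already been measured by some registration tool; both arrays here are random stand-ins, not real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 1_000_000
sharpness = rng.random(n_frames)       # higher = sharper (stand-in scores)
align_residual = rng.random(n_frames)  # lower = better alignment (stand-in)

def select(top_sharp_pct: float, top_align_pct: float) -> np.ndarray:
    """Indices of frames in both the sharpest X% and the best-aligned Y%."""
    sharp_cut = np.percentile(sharpness, 100 - top_sharp_pct)
    align_cut = np.percentile(align_residual, top_align_pct)
    return np.flatnonzero((sharpness >= sharp_cut) & (align_residual <= align_cut))

# Build the candidate groups described above, then stack each and compare:
for s_pct, a_pct in [(1, 1), (1, 2), (2, 1), (0.5, 1)]:
    idx = select(s_pct, a_pct)
    print(f"top {s_pct}% sharp & top {a_pct}% aligned -> {len(idx)} frames")
```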
Thank you for your videos; thanks to you I can see and compare what I will get in return for the cost. I have a few simple questions and suggestions. Could you compare a mirror telescope and a lens telescope of the same diameter and focal length? I know something similar has already been done, but a refractor of smaller diameter than a reflector will of course show less sharpness, which was visible in your photos. My second suggestion is to compare a cheap DIY achromat refractor for 70-90 dollars with an Askar 100mm apochromat. Sorry, Google Translate.
Great information Nico! I had no idea that Siril can process .ser files. I wish I'd known this earlier. I assume it can also process lunar or solar .ser files with a different stacking method? BTW, what do you think of that planetary camera with the 1.45 micron pixels? 8 MP resolution, tiny sensor; I think it's the 715 sensor or something. Any good for this type of stuff?
I bought an Altair 269C ProTec as a beginner's astro cam, using a Skywatcher Evolux 62ED. It's almost APS-C.
How about doing this lucky imaging in winter instead of spring/summer? The air would be drier and you might get pretty good seeing.
Thank you, Nico, for this interesting video! Have you tried live stacking in Sharpcap directly?
I would have used the ASI533MC Pro, I use that with my 8" EdgeHD for shooting galaxies. I pair that with a .7 reducer to bring my focal ratio down.
I think this technique is more suitable for M13. The Cat's Eye is a beautiful object if you also capture the faint outer shell.
Very cool Nico, I just happened to be in the midst of the same project. I have 71x10min subs taken with the L-Ultimate/ASI533mc/8" Edge. I also grabbed 60x30" shots for the core. Surprisingly it came out pretty good.. but not as much detail as your core. I was planning to try lucky imaging.. just need a break in the weather for it. I'm debating if I should also try shorter subs with the L-Ultimate and compare it to the broadband data for the core. Then there is the question on how to blend them. I don't think the HDRComposition is needed since the core isn't blown out with the 10 mins NB subs. I was thinking maybe layers in PS or that new Image Blend script in PI. Looking forward to seeing how you finish yours. CS!
Very interesting that the 10 min subs don't have a blown-out core with the L-Ultimate. I was using a pretty wide narrowband filter (SVBony SV220), so it makes sense that mine did clip. Last night I continued with the narrowband 5 minute subs and also took about 80 10-second subs through it just to see what those look like.
@@NebulaPhotos Yea, I was honestly surprised about those 10 min subs. I expected the core to blow out and then planned to just use HDRComposition to put everything together. I am at f/7... not sure if you're running a reducer... if so, that could also be part of it.
Yeah, I did use the reducer (f/5.6) once I saw how bright it was at 1 sec. I wondered if I should have instead used the flattener. With my seeing, I'm not sure how much it would have mattered, but I hope to keep experimenting with this.
@@NebulaPhotos Looks like there's a lot of potential content to mine out of this target :)
f/5.6 would certainly help with the outer bits. It's a very interesting and challenging target.
This was awesome. Thanks for sharing!
What's wrong with going with as short a sub exposure as possible, when it's the total integration time that matters? For example, 1200 images at 0.5 s vs. 3 images at 180 s vs. 2 images at 300 s. Provided there are no bad images, I should end up with the same result using the same equipment with all other variables constant. In fact, there's a downside to longer subs: for bright objects, the pixels saturate before all of the information is captured, and as a result the dynamic range is compromised per sub.
3 * 200s, 2 * 300s, and 1 * 600s will all look about the same if an identical stretch is applied. 1200 * 0.5s will not work as well unless the object is very bright, as the core of the Cat's Eye Nebula was in this video. 0.5s is typically not enough time with current cameras to sufficiently swamp the electronic noise of the sensor unless the signal is quite bright (daylit scene, nebula cores, star cores, etc.). In other words, you will be limited by the readout noise of the sensor. If we assume a perfect sensor, then yes, you are correct, but in the real world only million-dollar scientific sensors approach that kind of perfection in counting photons accurately. I have real-world examples of this, for example here: ruclips.net/video/mYucAuUrdTs/видео.html
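A back-of-envelope version of that read-noise argument, with made-up but plausible numbers; the signal rate, sky rate, and read noise here are assumptions for illustration, not measurements from the video:

```python
import math

def stack_snr(n_subs: int, t: float, signal_rate: float,
              sky_rate: float, read_noise: float) -> float:
    """SNR of a stack of n_subs exposures of t seconds each (shot + read noise)."""
    signal = n_subs * t * signal_rate
    noise = math.sqrt(n_subs * t * (signal_rate + sky_rate) + n_subs * read_noise**2)
    return signal / noise

# Faint detail: 0.2 e-/s signal, 1.0 e-/s sky, 1.5 e- read noise, 600 s total
print(stack_snr(2, 300, 0.2, 1.0, 1.5))     # ~4.5: read noise negligible
print(stack_snr(1200, 0.5, 0.2, 1.0, 1.5))  # ~2.1: read noise dominates
```

Same total integration, roughly half the SNR on the faint stuff; crank the signal rate up to nebula-core levels and the gap nearly vanishes, which is why lucky imaging works on a bright core.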
Maybe I'm missing something in this exercise. Why not use a longer focal length telescope with greater light capture? Increasing your frame rate works against magnification. Longer focal length, increased light capture, and higher magnification, but keeping the same 1 sec frame rate, would be a closer balance than doubling your frame rate on the same telescope, no? I must be missing something about using this method vs. greater light capture and magnification.
This is my longest focal length scope with the greatest light capture that I'm able to put on a tracking mount. 7" f/5.6 isn't terrible; 10" f/4 would be better, but I'm mostly a refractor guy for imaging. I'm not sure what you mean about frame rate vs. magnification. Changing frame rate doesn't change anything when it comes to pixel scale / resolution / magnification, whatever you want to call it. The reason for a faster frame rate is simply to give one more chances for 'luck' when the atmospheric conditions are favorable for a clearer view of the object; if the exposure is too long, the seeing blur will be baked in regardless. 1 second may have been too long, so I'll try 1/2 second next.
Love your methods, I'm learning so much. Thanks man. As always, beautiful images of the Creator's art. :)
Why not try lower percentages until you see a significant change in quality?
Perhaps I missed it: what gain did you use? I have an SVBony 585C, so the gain setting should carry over, maybe.
I used the lowest gain (100) in HCG mode.
Bro, edit your image: stretch it a bit manually in Siril, then run StarNet to separate the target from the stars, and edit that without worrying about the core being blown out. Export it as a 16-bit TIFF into Photoshop and bring in the star mask to blend. Then, back in Siril, reopen the stacked result file, stretch it just a hair manually to bring in the core details, and export that as a 16-bit TIFF too. Now you have 3 layers in Photoshop to process and mask together: the details, the unblown core, and nice stars.
Hey Nico, question time, lol. Can you use software like NINA or APT with a star tracker like the 2i, for plate solving or even focusing the DSLR?
Not something I’ve ever tried. The star tracker won’t interface with the computer, but if your DSLR does, it may be possible to use those programs for focus and platesolving.
Have you tried with an even smaller number of frames? My best planetary shots typically reject 95-98% of frames.
Hmm, might give this process a go with my C8 SCT at 2032mm with my ASI585MC planetary camera. Thanks.
Excellent video as always! Keep up the great work.
Hmm, I'd like to see you cover shoots spanning multiple nights. Your biases and darks are temperature-dependent and need to be shot the same night as the lights, and the dust on your lens will shift between sessions, so same thing there. Meaning you need to stack each night separately. But how do you combine them later? Just stack the individual results without calibration frames?
This would be my expectation too. However, I think you'd need to weight each stack by the integration time before doing the final stack, or perhaps even better, by SNR if the light pollution or moonlight etc. changes from night to night.
The reason is: stacking is basically an average. Stacking 100 frames means multiplying each frame by 1/100 and summing. If you were to stack the first 80 and last 20 separately, you would multiply the first by 1/80 and the last by 1/20. When you combine the two stacks, you need to skew the weights so that each original sub still gets a weight of 1/100: so 8/10 times the first stack and 2/10 times the second stack.
If there are different amounts of light pollution, you want to give more weight to the higher-SNR stacks. But it's probably not a big enough deal to worry about. I've tinkered with different weights like this in Siril and haven't noticed much of a difference visually.
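A minimal sketch of that recombination, assuming each night has already been calibrated and stacked to a mean image:

```python
import numpy as np

def combine_stacks(stacks, n_frames):
    """Weighted mean of per-night mean stacks, weighting by sub count.

    With nights of 80 and 20 subs, weights 0.8 and 0.2 give every original
    sub an effective weight of 1/100, matching one big 100-sub stack.
    """
    w = np.asarray(n_frames, dtype=float)
    w /= w.sum()
    return sum(wi * s for wi, s in zip(w, stacks))

night1 = np.random.rand(100, 100)  # stand-ins for real stacked images
night2 = np.random.rand(100, 100)
final = combine_stacks([night1, night2], [80, 20])
```

Swapping the sub counts for per-night SNR estimates would be one way to implement the light-pollution refinement mentioned above.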
@@redjr242 cool, thanks for the reply.
No problem!
What about lucky imaging from light-polluted cities? Could you even get some stars at 1s exposure with OSC?
Hi Nico! Did you have a chance to test the Altair SkyTech L-PRO MAX filter, which seems to be an analog of the Optolong L-Pro?
Hey, do you think you would be able to make a video about untracked DSO imaging with a telescope and phone?
Hey, are you doing anything for the parade of planets? If so, what setup do you think will be best?
Great video. This whole astrophotography thing is new to me (I didn't know it could even be done 2 months ago). I've looked at Sharpcap and I'm confused about exposure. You can set a frame rate of 30 FPS, but an exposure of 1 sec. How does that work? 30 FPS = 33 ms/frame. You were talking about doing 60 fps, but 1 sec exposures. Are they not mutually exclusive?
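They're not independent settings; broadly (an assumption about how most capture software behaves, not a Sharpcap specification), the achieved rate is capped by whichever is slower:

```python
def effective_fps(max_sensor_fps: float, exposure_s: float) -> float:
    """The camera can't finish frames faster than 1/exposure allows."""
    return min(max_sensor_fps, 1.0 / exposure_s)

print(effective_fps(30, 1.0))    # 1.0 fps: 1 s exposures cap you at 1 fps
print(effective_fps(30, 0.001))  # 30 fps: 1 ms exposures hit the sensor limit
```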
I can't even get a good shot of the moon, so..... :0)
Can't wait to check out the rest of your videos.
Cheers
Has anyone ever tried the opposite of this type of thing? I was thinking Milky Way core with a fast lens and a very wide field of view. Not sure if this is a dumb idea, but I thought it might be cool watching all the little nebulae pop out as the image was created.
Even something like the Spaghetti Nebula would be cool if it works like I'm thinking.
Hey Nico! Very interesting experiments :)
The effect is always going to be very subtle with this hardware, I think. The diffraction-limited resolution of a scope like the one you used lies on the order of half an arcsecond at best. If you factor in the optics and the fact you're shooting full-spectrum OSC on a refractor, I reckon your practical best optical resolution is closer to 0.75". Now when we add the 585 on this scope in its ≈1000mm configuration, we end up with a resolution of ≈0.60" per pixel for the camera sensor, which is a fine match. These are values that typically perform well under a normal 'good' seeing of around 1.5".
So if you want to use lucky imaging to go beyond the level of seeing you can already expect on a decent night, you're going to have to drizzle integrate at ~1.5x or larger. Or else you're simply not generating and interpolating the pixels you need to retrieve the potential extra detail. And even then, there's only a little wiggle room between your scope's potential and typical, non-lucky but still happy, seeing conditions. A factor of 2 at best, but probably a little less in practice (I did notice some chromatic aberration on one of the star images while you were showing your Siril workflow, for instance). At least, as far as I understand what to expect from this. I'm not sure if dithering is even necessary; I never consciously dither when doing planetary lucky imaging and I've never seen any artefact from not doing it.
Better still is more focal length. A C11 with a focal reducer would be an interesting OTA for this experiment.
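To make those numbers concrete, here's a small sketch of the standard formulas (Rayleigh criterion and pixel scale); the 180 mm aperture and 550 nm wavelength are illustrative assumptions for a roughly 7-inch refractor:

```python
import math

def diffraction_limit_arcsec(aperture_mm: float, wavelength_nm: float = 550) -> float:
    """Rayleigh criterion: theta = 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

print(diffraction_limit_arcsec(180))        # ~0.77" for a ~7" aperture
print(pixel_scale_arcsec(2.9, 1000))        # ~0.60"/px for the 585 at ~1000 mm
print(pixel_scale_arcsec(2.9, 1000) / 1.5)  # ~0.40"/px effective after 1.5x drizzle
```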
Thanks for the detailed comment Jules! You are right that many planetary imagers seem to drizzle without dithering, so I probably should try it. For normal (long exposure) deep sky I have run the experiment and definitely see little benefit of drizzling without dithering (equivalent to just upsampling), but this is a whole different ballgame. For example, those are typically much sparser datasets (50-100 subexposures). As you say, when dealing with so few pixels it's hard to see any differences at all at the default pixel scale of this gear. Thanks again for the input - very helpful!
@@NebulaPhotos no to artefacts, but yes to benefits from the un-dithered planetary drizzle!
On the imaging scale we're discussing (sub-arcsecond resolutions below the happy-not-lucky seeing reported by your local meteorologist), the necessary 'dithering' is caused by the atmosphere itself; it is the actual wiggle introduced by turbulent masses of air, which move our relevant details around by a few tenths of an arcsecond every single subframe.
That's also why successful 'lucky imaging' is not just dependent on picking the best frames; a large part of it is statistically retrieving the true location of contrast elements by means of deconvolution. That also happens to explain why actively dithering the scope becomes a necessary condition for a successful drizzle result when we're sampling at a coarser resolution of around or above 1-1.5". (I'm thinking aloud here, I hope you'll allow it.)
At the typically much coarser resolutions of long-exposure deep sky astro, the atmosphere itself usually isn't sufficiently wiggly and dithery any more. Besides, the typically much longer exposure times will remove all of the true 'lucky' aspect, which is why your suggestion of halving the exposure time is very relevant. I would even consider going to 0.25 seconds and seeing if the camera can cope with the gain, if just for the very brightest of details.
@@JulesStoop Very helpful again Jules. Makes good sense. On Saturday night I did a run of 5000 frames at 0.25s and will try stacking it with drizzle! If the results are interesting, I'll make a follow-up video.
@@NebulaPhotos I’m curious!
I love your videos, keep going dude!
Do you have a 3D printer running in the background?
Have you had any experience with Live stacking planetary or lunar/solar images? I saw Sharpcap implemented this, but I was curious if you have tried it.
I haven’t. I’m a total newbie when it comes to most of this stuff with Sharpcap. Learning a lot from the comments. Sounds like others have found the live stacking and FWHM filter work really well.
Makes sense! I've been doing PIPP, Autostakkert, and Registax/AstroSurface for the longest time, but last night I started playing with live stacking, with mixed results. I got a better stack with the old workflow, although it's too early to call. Definitely gonna keep playing with it.
199K!!! Almost there Nico, I wonder if you will be celebrating the 200K mark?
Would a global shutter make a difference, such as with QHY5III568M/C or QHY5III174M, versus a rolling shutter?
I do LI a bit differently: Sharpcap livestacking with guiding, dithering, and the FWHM filter on, only accepting the best frames. Say at 0.5 secs I get circa 2.1 FWHM; I'll set the cutoff to around 2.3-2.5 and stack, saving results every 5-15 minutes thereafter.
That's what works for my frac 115/800 with the 585.
On the C11... different story...
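The accept/reject logic is simple enough to show in a toy sketch (Sharpcap measures FWHM internally during live stacking; the measurement values below are invented for illustration):

```python
def accept_frame(fwhm_arcsec: float, cutoff: float = 2.4) -> bool:
    """Keep a frame only if its measured star FWHM beats the cutoff."""
    return fwhm_arcsec <= cutoff

# Best 0.5 s frames land around 2.1" per the comment above, so a
# 2.3-2.5" cutoff keeps the good tail and rejects the rest.
measured = [2.1, 2.6, 2.3, 3.0, 2.2]
print([f for f in measured if accept_frame(f)])  # [2.1, 2.3, 2.2]
```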
A new way to interpret the phrase “the sky’s the limit” 😁
Hey Nico, I ran into a question that has me baffled. I took out my Apertura 60mm f/6 telescope on my SA2i the other night, mainly to test it out. I have the field flattener and also a 0.8x reducer/flattener. First, it took forever to reach focus. But the issue I had was that even with the flattener on there, the stars all the way around the edges looked WORSE than when I used my 75-300mm kit lens. I am connecting the T3i to the telescope via a 1.25" T-adapter... I thought that when I unscrewed the 1.25" tube I would be able to screw the camera on directly, but apparently I don't have the right size. Any thoughts on why the stars are so bad while using a flattener?
You shouldn't be using any 1.25" accessories to attach your camera to the telescope. 1.25" is a smaller diameter than APS-C and isn't designed to give correct backspacing, so you have two issues there. You need a 48mm Canon T-adapter to attach the Canon T3i to the flattener or flattener/reducer. That will give you perfect backspacing and won't vignette.
@@NebulaPhotos That is what I thought the problem was. I already ordered the right T-mount from High Point Scientific the day after I went out. It was the only part that I could imagine being the issue, especially since it did the same thing with both flatteners. I will say, balancing the rig has been quite the challenge too. Hopefully I can test the new adapter this weekend. One last question: have you ever had an issue where your rig would only travel slightly more than halfway before the camera hits the tracker?
@@kevinashley478 No, I'm not sure what you mean. With the SWSA 2i you should be able to point anywhere in the sky without the camera hitting the tracker or tripod. Usually that is only a problem at zenith with very long scopes running into the tripod legs. With that tracker you don't have to worry about balance in both axes, only RA: balancing the scope side against the counterweight side. With that scope, I imagine you will want the counterweight all the way out or very close to all the way out.
@@NebulaPhotos Right, so I have the green bar (the piece that the round metal rod screws into) all the way down as far as it can go, and the weight all the way at the end of the metal rod. The declination part has an adapter on top to accept the Vixen dovetail. I am using about a 6-8 inch Vixen dovetail bar (I think it is a William Optics) on which the telescope is mounted, and the bar is as far forward as it can be. The telescope has a field flattener and my T3i on the back of it. When I release the clutch, it will spin left and right and is balanced, but as I spin it further down, just a bit past halfway, the camera hits the top of the tripod where the star tracker is mounted. Is that reasonable, or should that never happen? Basically, when you turn the payload and counterweight to horizontal for balancing, the top half is free and clear, but I can't move the camera much further without hitting the top of the tripod. Would it be better to send you a photo of what I am talking about?
@@kevinashley478 Is there any spot in the sky you can't aim at considering you can move the camera to both sides of the meridian? You can send a photo to nicocarver at gmail
70% looks much better on the darker details.
I've been into mirrorless imaging a long while, underwater nature mostly, and recently mirrorless astro and planetary, especially events; that's my main jam now, though it's probably a hard area to start in. The things that don't seem to get covered much for a relative newbie: the many connections, mount weight limitations, and imaging-friendly eyepieces (the Baader Hyperion series should be congratulated on that front). I have wasted so much money on not getting the right fit for the image train. The biggest surprise is that focal length doesn't necessarily mean wider equals wider view; the FOV and focal-reducer side of things is another galaxy compared to mirrorless photography. For imagers, I suggest at the very least dipping a toe into astro; it's so immersive. Love astro channels.
I love this!!
Hey Nico! Did you consider using Sharpcap’s FWHM filter during image acquisition? That way you would discard any images that don’t meet defined sharpness criteria already during acquisition and reduce the storage and processing requirements substantially. It is explained here starting at 31 minutes in: ruclips.net/video/khQOnZiz97s/видео.htmlfeature=shared
Didn’t know about it! Thanks so much for alerting me. This is exactly the kind of comment I hoped would surface.
Nico, can you do a review of the Askar 71F? It's a quadruplet astrograph for only $600; this is unheard of, and I would love to hear your honest opinion 🙏 thanks
It is called compensation. We all know that astronomers who favor big "tubes" have small "sensors" 😂
Nice vid. Nice scope.
Great video! Any astro clubs here in Massachusetts or New England you recommend?
Aldrich Astronomical Society (www.aldrich.club/) - Great club out of Central Mass. with an active astro imaging group. They have an observing field and club observatories for members, and hold member star parties there in the summer.
ATMOB (www.atmob.org/) - Very big club being the only one in the Boston area. They have a nice clubhouse and observing field in Westford, MA - not super dark there, but okay considering driving distance from Boston.
Dues at either are just $35/year!
@@NebulaPhotos Awesome ty! Any chance we ever meet in either one? Haha
@@richaellr I'm mostly imaging from New Hampshire these days, but do want to make it down to an Aldrich Star Party this summer if the timing/weather works out, as it's only about an hour from where I live in Western NH.
@@richaellr email me for more recommendations of places to go in New England - I do still do some mobile imaging at dark sites as I don't have a good view of the Milky Way core from home - nicocarver at gmail dot com
@@NebulaPhotos Cool, might join in. I'm in north-central Mass. Let me know if you ever need your countertops redone haha. Thanks for the info.
I know everyone's here for AP, but when I see a 6" - 7" refractor something in me yearns for a diagonal & eyepiece....
amazing
Great video. I tried the Cat's Eye Nebula as well and found the core was pretty blown out with 20 sec exposures (alt-az C8) compared to the rest of the image. Hard to stretch them together. I ended up making a composite as well. But mine didn't have as much detail as yours, so this is a great idea. Definitely going to revisit it now. Thanks!
Oh, and yes, you can use the ROI feature; the only downfall is the target might drift out of the FOV, but you can adjust it in real time as well. Another idea would be to use a camera with smaller pixels. I have the QHY5III715C, which has 1.45um pixels.
Oh, just realized another downfall of using the ROI feature is fewer stars, so you would need to adjust how you align the images. Siril has 1-2-3 star alignment, which sometimes works. Autostakkert can use different positions of the object itself for alignment.
The best exposure time for lucky imaging is 1/100 of a second.
No matter the size of sensor or telescope, there is always a target that needs that particular combination of gear, so it's like I should have 20 combos for different targets. So why go with this tiny sensor and not something like a 4/3" or 1" sensor and crop it to death? Even with this small sensor, if someone can't afford a 6"-8" refractor or a 10"-20" reflector, it won't help much; mostly, those who buy this camera don't have the budget for the big weapons anyway. And if I buy it after watching so many RUclipsrs talking about it, which of my scopes would be perfect for it?
Yes, I tried ROI, mainly for planetary and solar, and it's just great, so I will try it for DSO sometime and see how it goes. My camera is a 294M Pro, which can be binned or unlocked to 46 MP at 2.31 um pixel size. That is SO MUCH pixel there, so with ROI I can go really small while still having enough megapixels at that small pixel size. I know many people will say it has amp glow, but think about it: the amp glow doesn't show across the entire frame, mostly only along the edge of one side, so with ROI it is like you cut out the amp glow and even the vignetting. I haven't tried that yet, as I've spent the last 3-4 years only collecting gear instead of light; sooner or later I will put it all into use and see.
Love you from India
Excellent info, and probably the first time I've seen this on YT. I would suggest testing the QHY5III715C for such lucky imaging. Yes, it is an un-cooled planetary camera, but it has excellent low noise even after running for a long time. The key factor for the QHY5III715C is the very small 1.45um pixel size (about half of the 585's). I saw on C/N that a few people are using it as a main camera in a C8 + Hyperstar config and producing excellent results on small DSOs. I have the camera and am planning to test it on my C6 + Hyperstar for similar targets.
Huge Telescope. Tiny Sensor. Why? Astro gambling.
I noticed that a Sony camera became very much admired for its low-light capture abilities; you could capture the Milky Way in the night sky. The Sony camera had a full-frame sensor, and all manner of adapters were made to gobble up all that imagery. The telescope sensors had fallen behind and were too expensive. All that sensor gear for telescopes is now e-waste.
Kill me already if you're gonna play it like this.
Image train compatibility, weight limits, and connections are things for newbies to investigate..... otherwise: more astro mistakes!!!!
🐈⬛👀