M33, the Triangulum galaxy, distance about 2.5-3 million light years.
Why? I think it's because M33, the Triangulum galaxy, looks like a `logical next step' after M31, the great Andromeda Galaxy. Getting a decent M31 image isn't trivial, but it's bright enough that one can get something presentable without too much exposure time, and without having to work too hard at processing the data. M33 is different, however. Its surface brightness is lower, so it's significantly harder to get the dimmer, outer portions of the galaxy to look good. Even with a sensitive CCD camera, and with light frames calibrated using darks, flats, and biases, much of M33 can easily come out looking noisy and ugly. (I haven't totally overcome those issues in this image, but I think I've imaged the outer regions a bit better than before.) Unless you've got a very `fast' (i.e. numerically small f-ratio) imaging system, and/or a great deal of time, it's hard to get much out of M33. All of this makes M33 rather harder to acquire and process than M31.
I shot the data for this image at Calstar 2012. Calstar is a yearly get-together of Bay Area and SoCal amateur astronomers, at Lake San Antonio in inland Monterey County. This event is near and dear to many of our hearts, mostly due to its no-frills nature. The sky at LSA can get very dark, dark enough to see not only the gegenschein, but even a nearly horizon-to-horizon zodiacal `band'. I've spent many hours picking out individual objects in M31 and M33, through an 18" Dobsonian telescope. It's a great site for imaging and visual observing.
Although I shot these data in September 2012, it's taken until February 2013 to get them processed and posted on the blog. What an epic it's been! The main thing that ate up all this time was a seemingly-endless series of attempts to properly deconvolve (i.e. sharpen) the image, as described below.
Get the L out
The most unexpected thing about my processing workflow was how much data I ended up throwing away. I'd shot unbinned luminance in 2011, along with 2x2 binned color (and used those data to produce a previous version of M33). I also shot additional unbinned L and binned color in December 2011, and my 2012 data set included a lot of unbinned L, too.
In the end, I chucked all the binned R, G, and B, and all the unbinned L. In the former case, I never found a good way to combine the binned and unbinned color images. Maybe I should work that problem more, someday, but for now I've given up on it. And when I tried to make an LRGB image from all-unbinned data, it never looked any good. I have two ideas on why this was so:
1) I don't think my L image was any sharper than my RGB image, nor sharper than any of the individual (stacked) color images. It's certainly true that if one is shooting unbinned L and binned R,G,B, the former will have more detail than the latter. That's the whole idea behind that trick. But if the data are all unbinned, then it comes down to a matter of optics. The optical system had better make a Luminance image that's sharper than (or at least as sharp as) the RGB image. And with a refractor (like the Orion ED80 refractor I used), that's a tall order. Even the best refractors will have a tiny bit of chromatic aberration, which means that the R, G, and B components of the Luminance won't all be focused the same. So, I've come to suspect that the Luminance image will, in fact, be a tiny bit blurrier than the RGB image, and I think that's so in my case. I'm just better off shooting straight R, G, and B, unbinned.
2) The uselessness of unbinned LRGB is described by Juan Conejero, author of Pixinsight software, in a thread on the Pixinsight forum. Juan points out that adding luminance to an image reduces chrominance, and so it really doesn't do any good to try unbinned LRGB. As far as I'm concerned, goodbye Luminance. I think I'll mostly use my L filter for focusing, drift alignment, and framing. (I might use the unbinned-L-and-binned-RGB trick if I was shooting a large, diffuse nebula that doesn't have much small-scale detail, though.)
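To make those two points a bit more concrete, here's a rough numpy/scikit-image sketch (not anything from my actual workflow, and not PixInsight's LRGBCombination, just the conceptual shape of the operation) of what an LRGB combination does: the RGB image supplies the chrominance, and the L frame simply replaces the lightness. If that L is no sharper and no cleaner than the lightness the unbinned RGB data already imply, the swap gains nothing.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def synthetic_luminance(rgb, weights=(1/3, 1/3, 1/3)):
    """A luminance built from the unbinned R, G, and B stacks themselves.
    With all-unbinned data, this is roughly what a filtered L frame has to
    beat -- and chromatic aberration makes that a tall order for a refractor."""
    return (rgb * np.asarray(weights)).sum(axis=-1)

def lrgb_combine(rgb, lum):
    """Conceptual LRGB combination (assumes float images scaled 0..1).
    The RGB image contributes only the chrominance (a*, b*); the separately
    shot luminance replaces the lightness (L*)."""
    lab = rgb2lab(rgb)
    lab[..., 0] = np.clip(lum, 0.0, 1.0) * 100.0   # L* runs from 0 to 100
    return lab2rgb(lab)
```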
After all this, I think I'll try to acquire image data by means of simple, unbinned R, G, and B. This may require some sort of automated acquisition workflow, however, in order to get some data through each filter, during each imaging session. There's a lot of focusing, slewing, and framing involved!
Pixinsight processing workflow
I followed a fairly `basic' workflow in Pixinsight. I suppose you might call this the `non-multiscale' approach, because I didn't try the RBA-like processing steps I used in the previous M33 image. Maybe next time! (I'm particularly intrigued by Emanuele Todini's recent post about his `multi-scale-layer' workflow.) Here's what I did (the abbreviations will probably be familiar to PI enthusiasts):
- Calibration using the Batch Preprocessing script
- Cropping off the outermost part of the image
- Deconvolution of high-SNR areas (this took forever...)
- MMT-based noise reduction of low-SNR areas (not too hard, thankfully)
- Nonlinear stretch with HT
- Dimming the brightest regions a tiny bit with HDRMT
- A little bit of contrast boost with LHE
- Increasing overall color saturation with Curves (Lum mask in place - see the masking sketch just after this list)
- Increasing color saturation in the HII regions, blue spiral arms, and orange galaxy core with the ColorSaturation tool (Lum mask in place)
- A small amount of denoising of the 1- and 2-pixel-scale layers with ATWT
- Dimming stars a little bit with StarMask and MT
- Making a mask for the largest stars with ATWT, HT, MT, and Convolution
- Desaturating (and slightly dimming) the biggest stars with Curves and MT
- Tiny tweak to the black point with HT
- Color-space conversion, resampling, and saving as JPEG for web publishing
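Several of those steps lean on the same masking idea: build a (usually stretched) luminance mask, and let an adjustment through only where the mask is bright. Here's a loose Python/scikit-image illustration of that idea for the saturation step, assuming float RGB and a 2-D mask scaled 0..1; PixInsight does this internally, with far more control, whenever a mask is active on the target image.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def masked_saturation_boost(rgb, lum_mask, boost=1.3):
    """Increase color saturation, but let the change through only where
    the luminance mask is bright, so the dim, noisy background stays
    protected.  'rgb' is float in 0..1; 'lum_mask' is a 2-D float mask
    in 0..1 (white = fully affected, black = fully protected)."""
    hsv = rgb2hsv(rgb)
    hsv[..., 1] = np.clip(hsv[..., 1] * boost, 0.0, 1.0)
    boosted = hsv2rgb(hsv)
    m = lum_mask[..., None]                # broadcast the mask across channels
    return m * boosted + (1.0 - m) * rgb
```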
The Agony of Deconvolution - and a savior!
I spent the entire winter beating my head against Deconvolution. Whether it was on the RGB image, or on the (ultimately-not-used) Luminance image, I could not keep the stars from showing subtle dark `ringing' artifacts. I've successfully applied Deconvolution before, with results that pleased me, but this image just wouldn't deconvolve, for some reason. It drove me nuts for months.
In the end, it was something simple. It turned out to be the point-spread function I was using in the Deconvolution module. I knew that the parameters of the PSF were important, but I had no idea how important. I'd used Mike Schuster's excellent PSF evaluation script, but somehow that must have produced an averaged PSF that wasn't quite what Deconvolution wanted.
I came to realize this when I watched the Deconvolution videos in the new Pixinsight series by Warren Keller and Rogelio Bernal Andreo. There's some simple information in there, concerning how to measure one's PSF, and it did the trick! I don't want to give it away here, because I think Warren and RBA deserve to be rewarded for making the videos and helping people learn PI. I don't make any money off their videos, but I will say this... after feeling my months-long `Deconvolution Frustration' go away, I consider the money well spent! If, at any point during my long winter of frustration, someone had said to me "Your problem will go away if you spend an amount of money equivalent to Warren/RBA's `PI Part-1'", I'd have said "Where do I sign??"
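For readers who haven't seen the videos, here's a generic illustration (emphatically not their method, and not my PixInsight settings) of why the PSF you hand to deconvolution matters so much. PixInsight's Deconvolution is a regularized cousin of the classic Richardson-Lucy algorithm; the toy below uses scikit-image's plain Richardson-Lucy routine with a synthetic Gaussian PSF, and deconvolving with a PSF that's too wide for the data is one classic way to dig those dark rings around stars.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, fwhm=3.0):
    """Synthetic Gaussian PSF, 'fwhm' in pixels.  In practice the PSF should
    be measured from stars in the image; getting that measurement right is
    what finally fixed my ringing problem."""
    sigma = fwhm / 2.3548
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# Toy demo: a 'star field' on a faint background, blurred with a 3-pixel-FWHM
# PSF.  Deconvolving with the matching PSF behaves; deconvolving with a PSF
# twice as wide tends to dig dark rings around the stars.
rng = np.random.default_rng(1)
truth = np.full((128, 128), 0.05)
truth[rng.integers(10, 118, 30), rng.integers(10, 118, 30)] = 1.0

blurred = convolve2d(truth, gaussian_psf(fwhm=3.0), mode='same')
blurred += 0.001 * rng.standard_normal(blurred.shape)

matched  = richardson_lucy(blurred, gaussian_psf(fwhm=3.0), 30)
too_wide = richardson_lucy(blurred, gaussian_psf(fwhm=6.0), 30)
```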
Thank you for sharing your detailed steps here, Marek. I need to get into that video as well to see what else I can learn. Were you running Deconvolution on each channel or on the merged LRGB?