Sunday, June 10, 2012

The M87 Chain and the Pixinsight Zone System

One of the greatest euphemisms in the world has to be the phrase `learning experience'. How often do we sugar-coat our mistakes by calling them `learning experiences'? I'm sure I've done it many times. This image provides an example, but in this case there's a bit more to it than that...

A portion of the Virgo galaxy cluster, with the giant elliptical galaxy M87 at top left, and part of `Markarian's Chain' of galaxies at right.
Data Acquisition: Making the best of a bad situation

A few weeks ago, I was doing some backyard imaging, and the Virgo galaxy cluster seemed like the logical choice. Having shot a luminance image of the Leo Triplet not long before, I decided to do another one-night stand, with just luminance, but this time I wanted to shoot `Downtown Virgo'. (The origins of that term and its enthusiastic usage seem to go back to Jay Freeman and Jamie Dillon, two highly-accomplished Bay Area visual observers.) Specifically, I wanted to shoot the portion of the Virgo cluster called Markarian's Chain. It's a standard target, since it comprises a pretty, arcing chain of galaxies that stretches from M84 and M86 towards M88. Almost everyone works on an image of Markarian's Chain at some point. By planning it out in SkySafari Pro 3 on my iPad, I could see that if I rotated my camera just right, I could frame most of the chain pretty nicely on my ST-8300 sensor, using my ED80 f/7.5 refractor.

One thought nagged at me, though... What about conventions? As in sign conventions and angle conventions? SkySafari Pro 3 has a really nice slider tool for rotating the position angle of one's field-of-view overlay, relative to the sky. This allowed me to plan my framing really easily. And when I'm imaging, I can download a frame from the camera, and use MaximDL to plate-solve it, which gives me the image's position angle on the sky. This is really handy, but... what if these two pieces of software use different conventions for specifying the position angle? Hmm. I could wind up with a frame that's rotated 90 degrees from what I expect.

So, it wasn't a great shock when that's exactly what happened. Here's the framing I had planned on my iPad:



Here's how things actually worked out, since the two pieces of software treated the position angle differently:



Hrm. Rargh. What to do? I could have rotated my camera 90 degrees, but that would mean refocusing and probably re-doing the GOTO alignment. Given the couple of hours available for shooting Downtown Virgo before it went behind some trees, I didn't want to do that. So, I panned around in SSP 3 and looked for an alternative framing. Here's what I wound up with:



That seemed like the best compromise, since it caught part of Markarian's Chain, and included the giant elliptical galaxy M87, the real `heart' of the Virgo cluster. I shot a couple of hours of luminance (in 5-minute subexposures), and called it a night.

Processing: Pixinsight meets the Astro Zone System

A few weeks later, I had a little time to sit down with the data, and after using the very handy new preprocessing script in Pixinsight, I saw the following preliminary result (this is a closeup of two of the galaxies in the Chain):

Autostretched image of two galaxies in Markarian's Chain.

It's probably worth explaining what I mean by an `autostretched' image (also sometimes called an AutoSTF'ed image amongst Pixinsight enthusiasts). PI has a tool called `Screen Transfer Function' (STF), which stretches the brightness values of the image's pixels, solely for the purpose of displaying the image on the screen. It doesn't change the original pixel values in the image file; it essentially creates a temporary copy of the image for display, with the brightnesses changed so as to make the dim parts of the image more visible. The STF tool has an `Auto' button, which creates an image that nicely shows `what you got'. (I used one of these AutoSTF'ed images in my annotated Leo Triplet posting.) Such an image, though, usually doesn't make for a very pretty picture, since it shows just how noisy the dim background areas and dim parts of your target look. That graininess is a combination of instrumental noise and photon shot noise (the latter coming from both the target objects and from the sky).
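For the curious, the shape of such a screen stretch can be sketched in Python. The midtones transfer function below is the one PixInsight documents for its histogram tools; the way I estimate the shadow clipping point and midtone from the image's median and MAD is only a rough approximation of what the Auto button does, not its exact algorithm:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function, as documented for PixInsight's
    histogram tools: maps x in [0, 1] so that x == m lands at 0.5."""
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

def autostretch(img, shadow_sigmas=2.8, target_bkg=0.25):
    """Illustrative auto-stretch (NOT PixInsight's exact algorithm):
    clip shadows a few sigma-equivalents below the median, then pick
    the midtones balance that moves the median to target_bkg."""
    med = np.median(img)
    sigma = 1.4826 * np.median(np.abs(img - med))  # MAD -> sigma estimate
    shadow = max(0.0, med - shadow_sigmas * sigma)
    clipped = np.clip((img - shadow) / (1.0 - shadow), 0.0, 1.0)
    # mtf(t, c) returns the midtone m for which mtf(m, c) == t
    m = mtf(target_bkg, np.median(clipped))
    return mtf(m, clipped)
```

Applied to a dark, linear frame, this lifts the background to a comfortable screen brightness while leaving the data file itself untouched (you'd only ever display the result, never save it over your linear data).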

At this point, my big goal was to do some noise reduction, and try to make the noisy, grainy-looking parts of the image look a little better. In this I was aided by Jordi Gallego's new presentation on noise reduction in PI. There's a lot of good information in this document, but I was particularly intrigued by Jordi's slides 51 through 53, especially #53. In this slide, he shows that one can make masks for applying different noise reduction settings to different parts of the image, such as:

  • The dark background sky, which has the lowest signal-to-noise ratio (SNR), and is thus the `grainiest'-looking part of the image.
  • The dim parts of the deep-sky object(s), which have fairly low SNRs, and thus mostly need smoothing and noise reduction.
  • The bright parts of the deep-sky object(s), which have high SNRs, and thus can tolerate some sharpening, such as through deconvolution.

Aha! This is basically the same concept as Ron Wodaski's Astro Zone System. I borrowed a copy of this book from a fellow Bay Area observer a couple of years ago, and found it to be very interesting. Sadly, the book has been out of print for some time, but I was one of the lucky folks at the 2011 Advanced Imaging Conference who managed to get one of the copies Ron gave away. (Thanks, Ron!)

After a little fiddling around, I realized that PI's Range Selection tool works best on images that have already been stretched into a nonlinear state, so I made a copy of the image, applied its AutoSTF settings to Histogram Transformation, and applied that to the copy. I then used Range Selection on this stretched copy.
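The stretch-a-copy-then-select workflow boils down to slicing the brightness range into zones. Here's a minimal sketch, assuming a stretched image held as a numpy array scaled to [0, 1]; the threshold values and function name are my own, and PI's Range Selection tool adds refinements (fuzziness and smoothness controls) that a hard threshold like this lacks:

```python
import numpy as np

def zone_masks(stretched, low=0.15, high=0.5):
    """Split a stretched (nonlinear) image into three zone masks, in
    the spirit of the Astro Zone System. Every pixel lands in exactly
    one zone."""
    background = stretched < low                    # lowest SNR: heaviest smoothing
    mid = (stretched >= low) & (stretched < high)   # faint structure: gentle NR
    bright = stretched >= high                      # high SNR: safe to sharpen
    return background, mid, bright
```

Each boolean mask can then be used to confine a processing step to its own zone, which is the whole point of the exercise.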

First, I made a mask that covered up the stars and galaxies, leaving only the dark background sky to work on:



After a little fiddling around, I stumbled on some settings in Multiscale Median Transform that smoothed the background reasonably well:



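Conceptually, masked smoothing amounts to replacing pixels only where the mask is set. A crude stand-in, assuming a plain numpy array and a boolean background mask (a single 3×3 median rather than MMT's multiscale median layers, and names of my own invention):

```python
import numpy as np

def median3(img):
    """3x3 median filter in pure numpy, via a padded stack of the
    nine shifted copies of the image."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def masked_smooth(img, mask):
    """Smooth only the pixels under `mask`, leaving the rest alone --
    the basic idea behind running noise reduction through a
    background mask."""
    smoothed = median3(img)
    out = img.copy()
    out[mask] = smoothed[mask]
    return out
```

The real MMT settings matter a great deal, of course; the sketch just shows why the mask protects the stars and galaxies from being smoothed along with the background.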
I was pleased with this result! It's not perfectly smooth, but I'm calling this a win, so far. Then, I made a mask for the `mid-SNR' zone, which included the fainter outer parts of the galaxies:



And then, by pulling back on my MMT noise reduction settings, I was able to smooth those areas somewhat. Next I made a mask to isolate the cores of the galaxies, for sharpening via Deconvolution:



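The core of deconvolution can be sketched with the textbook Richardson-Lucy iteration. PI's Deconvolution tool offers regularized variants and proper PSF modeling; this pure-numpy version (helper names are my own) just shows the basic update, which you'd then blend back only under the high-SNR mask:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def convolve_same(img, psf):
    """Circular 'same'-size 2-D convolution via FFT (pure numpy)."""
    pad = np.zeros_like(img)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    # center the kernel at the origin so the result isn't shifted
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(ifft2(fft2(img) * fft2(pad)))

def richardson_lucy(img, psf, iterations=10):
    """Classic (unregularized) Richardson-Lucy deconvolution."""
    est = np.full_like(img, 0.5)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        ratio = img / np.maximum(convolve_same(est, psf), 1e-12)
        est *= convolve_same(ratio, psf_flip)
    return est
```

To confine the sharpening to the galaxy cores, you'd do something like `out = img.copy(); out[bright_mask] = richardson_lucy(img, psf)[bright_mask]`, which is exactly the role the third mask plays.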
After mid-SNR-range smoothing and high-SNR-range deconvolution, I had this image:



The brightness levels you see here are `Auto-STF' levels, and even with the noise reduction, they're not really good for posting on the web. So, since the image was still at a linear stage (i.e. not really brightness-stretched yet), it was time for a Histogram Transformation, some star shrinking, and a horizontal flip to match the correct appearance of this area on the sky:



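The permanent stretch plus the mirror step can be sketched in a few lines; the midtones transfer function is the documented PixInsight one, but the midtone value here is purely illustrative (in practice you'd carry over the AutoSTF parameters into Histogram Transformation, and star shrinking is a separate morphological step not shown):

```python
import numpy as np

def mtf(m, x):
    """PixInsight-style midtones transfer function."""
    return ((m - 1.0) * x) / (((2.0 * m - 1.0) * x) - m)

def finish(linear_img, midtone=0.02):
    """Permanently stretch a linear image, then flip it horizontally
    to match the sky's correct orientation. The midtone is an
    illustrative placeholder, not a recommended value."""
    stretched = mtf(midtone, np.clip(linear_img, 0.0, 1.0))
    return np.fliplr(stretched)
```

Unlike the STF display stretch, this one actually rewrites the pixel values, so it's the point of no return for the linear data.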
Room for Improvement:

I think this was a good proof-of-concept project, for the Range Selection / `Pixinsight Zone System' approach. My masks could use some work, though. When I examine the image closely, I can see that some of the dim parts of the galaxies got left out of the masking process. Also, the various processing steps left an artificial ring around M87. There really are such things as ring galaxies, but M87 isn't one of them. I'm very interested in refining my touch with Range Selection, and in trying out the new Adaptive Stretch tool! A week or so after shooting these data, I managed to shoot Markarian's Chain with proper framing, and so we'll see how things go with this new data set.



