Tuesday, August 21, 2012

The Phoenix Butterfly

I spent last week on an imaging trip near Lassen Peak, in northern California. It's a minor miracle that I got an image of the Butterfly Nebula (IC 1318), given how much forest-fire smoke was in the air. The last several years of Lassen trips have been blessed with clear, blue, gorgeous skies, for the most part. Forest fires are par for the course in the area, however, and it was only a matter of time before the dice came up snake-eyes, smoke-wise. In other words, I was bound to lose a Lassen trip to forest fires, someday. That someday was the August 2012 dark-moon cycle... almost. Despite all the smoke (and clouds), there was enough clear sky to image some of the nebulosity around the star Gamma Cygni. I like to think of this as `a butterfly rising like a phoenix from the ashes of a fire-plagued season'.

IC 1318 d and e and LDN 889, a.k.a. the Butterfly Nebula, imaged from Lassen Peak.
Click on the image for a larger version, or click here for full size.

Only a couple of nights in my week-long trip had worthwhile skies, so I had to abandon my plans to image the Swan nebula (M17) and the Triangulum galaxy (M33), and concentrate on a single object that would be near the zenith for most of the night. An object that appears near the overhead point in the sky (the zenith) is seen through the least possible atmosphere. In this case that meant through the least possible smoke, depending on how the smoke was being blown around by the wind.
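
As a rough sketch of why the zenith helps: in the simple plane-parallel approximation (ignoring refraction and the Earth's curvature), the amount of air you look through scales as one over the cosine of the zenith angle:

```python
import math

def airmass(altitude_deg: float) -> float:
    """Plane-parallel airmass estimate: 1 / cos(zenith angle).
    Good to a few percent above ~20 degrees altitude."""
    zenith_deg = 90.0 - altitude_deg
    return 1.0 / math.cos(math.radians(zenith_deg))

print(round(airmass(90.0), 3))  # 1.0: one atmosphere's worth of air (and smoke)
print(round(airmass(30.0), 3))  # 2.0: twice as much air at 30 degrees altitude
```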

During northern-hemisphere summer nights, the region of the zenith is dominated by Cygnus, the Swan. Also known as the `Northern Cross', Cygnus is a grand constellation, one of the few that really looks like its namesake. Right at the heart of the swan is the star Gamma Cygni (a.k.a. Sadr). A good deal of bright emission nebulosity and dark dust can be seen around Gamma Cygni, making it a popular target for imagers. I happened to pick up the September 2012 issue of Sky and Telescope right before my trip, and when I had to pick an imaging target in Cygnus, I thought of the Gamma Cygni area. Sue French and Steve Gottlieb had covered this region in two very nice articles in the September S&T, and Rob Gendler's image, accompanying Steve's article, really got me excited about this area.

According to Steve's article, the `butterfly' is formed by two portions of the IC 1318 emission-nebula complex (IC 1318 d and e), in front of which lies the Lynds dark nebula 889, a mass of dark absorbing dust. The bright emission nebulosity forms the wings of the butterfly, and LDN 889 forms the body, complete with a head that sports two antennae! Like other `emission' nebulae, the bright material glows because of the excitation of the hydrogen atoms of which it's made. IC 1318 is a star-forming region, and ultraviolet light from hot, massive, young stars causes the hydrogen atoms to glow, a little like a fluorescent light tube or a fluorescent mineral. LDN 889 consists of microscopic grains of interstellar dust, which absorb the light from the nebula. (The sky over the Lassen Peak region often contained clouds of smoke that dimmed the stars in much the same way.)

The Reading fire, one of the fires that turned the blue sky brown for much of this year's trip.
(Image credit: National Park Service, Lassen Volcanic National Park)

Data Acquisition

On two nights, the sky was acceptably transparent for imaging, and I managed to acquire three hours of data through a clear (`Luminance') filter, in 5-minute subexposures. The last night of the trip yielded a very nice sky, thanks to some fortuitous wind patterns, with the Milky Way blazing bright and `sugary' overhead. Two of my three hours of data were acquired under that sky.

I would have liked to shoot some color data, but equipment issues put an end to that idea. Perhaps foolishly, I decided to try and `drive' my mount from my laptop. Maxim DL was able to talk to the mount and order it to slew around the sky, but I kept having a problem with `backwards slews' in the western part of the sky. I'd have shot an additional 3 or 4 hours of data on the final, clear night if I hadn't been trying to debug this problem. Oh well, I'll get it sorted eventually, and at least I got three hours of luminance.

Pixinsight processing:

The data for this image followed my standard Pixinsight processing routine for a luminance-only image:

  1. Calibrate subexposures with the BatchPreprocessing script
  2. Register and stack the calibrated subexposures
  3. Sharpen the bright, high-signal-to-noise-ratio (high SNR) areas with Deconvolution
  4. Smooth the dark (low SNR) areas with Multiscale Median Transform
  5. Stretch the brightness values of the pixels with Histogram Transformation and Local Histogram Equalization
  6. Shrink (actually, more like dim) stars with StarMask and Morphological Transformation
  7. Crop, convert to a standard ICC color profile for web publishing, and save as JPEG
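
For anyone curious about what step 1 does under the hood: BatchPreprocessing automates the standard CCD calibration arithmetic. Here's a minimal numpy sketch of that arithmetic (illustrative only, not Pixinsight's actual code; the tiny toy frames are invented):

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Standard CCD calibration: subtract the dark frame (which includes
    the bias), then divide by the flat field normalized to unit mean to
    remove vignetting and dust shadows."""
    flat_norm = master_flat / master_flat.mean()
    return (light - master_dark) / flat_norm

# Toy 2x2 frames: one corner is vignetted (flat value 0.5).
light = np.array([[110.0, 110.0], [110.0, 60.0]])
dark = np.full((2, 2), 10.0)
flat = np.array([[1.0, 1.0], [1.0, 0.5]])
print(calibrate(light, dark, flat))
```

Note how the vignetted corner, which recorded only 60 counts, is restored to the same brightness as the rest of the frame after calibration.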

Room for Improvement

(Pixinsight geekery ahead...)

Naturally, I would have liked to acquire more data, including color data. Processing-wise, I noticed that some small-scale, `salt-and-pepper-like' noise was introduced somewhere in the processing. This probably happened during the Histogram Transformation or the Local Histogram Equalization, despite my use of a luminance mask. The luminance mask was made in the usual way, by applying an auto-STF to a copy of the image (via HT). I wonder if I should have done a more elaborate intensity transformation when I made the luminance mask, so as to protect the dark areas better, and to get a more effective deconvolution in the bright areas.

After the initial star-shrinking, which worked mostly on the small stars, I tried to build a new star mask for the larger, more bloated stars, but after a lot of experimentation, I hadn't gotten much of a result. I decided to post the image as-is, but I still dream of dealing with the large stars someday.

Sunday, July 15, 2012

Making `Adaptive' progress with Pixinsight noise reduction

Here's a short technical article for my fellow Pixinsight learners. It's about some progress I recently made in learning how to reduce background noise in astronomical images. These images, like any images made in low-light situations, have the potential to be plagued by a `grainy' appearance, particularly in the dark background areas. In the parlance of astronomers and amateur astro-imagers, we say that our images commonly exhibit `noise' in the `low-signal-to-noise-ratio (low SNR) areas'. This noise can be reduced by racking up as many hours of exposure time as possible, but there's a limit as to what our schedules (and the weather) will allow.

Noise can also be reduced somewhat in post-processing by using software routines, such as those in Pixinsight. This post is a journal entry of sorts, to record how I managed to smooth the noisy background areas of an image of a galaxy cluster. It allowed me to go from this:


To this:


For me, this was a much better result than I'd gotten before, and it happened because I managed to correctly tweak a setting in one of Pixinsight's noise-reduction routines. Details follow, for any other PI users who might find the information useful.


Image Acquisition

As I described in an earlier post, I'd already taken one stab at shooting Markarian's Chain, a prominent grouping of bright galaxies in the Virgo galaxy cluster. After my first attempt, I found a weeknight in late May 2012 when I could re-shoot the Chain, with proper framing this time! The data were acquired with my ED80 imaging rig: An Orion ED 80 f/7.5 refractor and an SBIG ST-8300M CCD camera, on a Losmandy G-11 mount. As I only had one night to acquire the data, I shot through a clear (`Luminance') filter, so as to make a black-and-white image. I managed to get 32 five-minute subexposures, for a total exposure time of 2 2/3 hours.

Ideally, I'd have liked to get at least several hours of exposure time on a target like this, so as to build up a decent SNR in the faint outer parts of the galaxies, and in the dark background areas. I knew that if I was going to make a final image that showed more than just the bright cores of the galaxies, it would take some wizardry with Pixinsight's noise-reduction settings.


Noise Reduction at the Linear Stage - The General Idea

I followed the same general strategy for this image as I had for the last several, namely to reduce noise (in the dim, low-SNR areas) and sharpen details (in the bright, high-SNR areas) while the image was still at the linear stage. In other words, the noise reduction and sharpening (`deconvolution') were to be done while the pixels still had their original brightness values. Nearly all of those values are too dark to show up well on a computer screen until they're mathematically `stretched', but stretching destroys the linear relationship between the pixel values and the true brightnesses of the objects in the scene, which is why this work comes first. This basic strategy was laid out by Pixinsight creator Juan Conejero. The ever-obliging Harry Page made a nice video of this type of workflow, and the technique was updated by Juan for a new version of the key tool.

The general idea is to use a powerful-yet-rather-mysterious tool called Multiscale Median Transform (MMT) to smooth the image to a greater or lesser degree. This smoothing can be (and needs to be) applied more strongly to the dimmer, noisier areas. Conversely, it can be (and needs to be) applied less strongly to the brighter areas. A copy of the image, called a luminance mask, is used in order to apply the process more to the dark areas, and less to the light areas - see Juan's posts and Harry's video for more information on luminance masks.
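
The `apply more here, less there' idea boils down to a mask-weighted blend between the original image and a smoothed copy. A minimal numpy sketch of what the mask accomplishes (an analogue, not PixInsight's internals; the pixel values are invented):

```python
import numpy as np

def masked_blend(original, smoothed, mask):
    """Apply noise reduction through a luminance mask: where the mask is
    near 1 (dim, noisy sky) the smoothed pixel wins; where it is near 0
    (bright, high-SNR areas) the original pixel is kept."""
    return mask * smoothed + (1.0 - mask) * original

original = np.array([0.02, 0.03, 0.90])    # two dim sky pixels, one bright star
smoothed = np.array([0.025, 0.025, 0.70])  # heavy smoothing would dim the star
mask     = np.array([1.0, 1.0, 0.0])       # inverted luminance: dark -> 1
print(masked_blend(original, smoothed, mask))  # the star pixel survives untouched
```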

So, to start at the beginning: I had calibrated my light frames with dark, flat, and bias frames, and registered and statistically combined the calibrated subexposures. Here's an autostretched closeup of the noise I had to deal with:


My goal was to try and smooth the noise, although I knew I wouldn't be able to do a perfect job of it. I wanted to smooth it enough, however, to make it worth stretching the image so as to bring out most or all of the faint outer parts of the galaxies. Part of this process involves protecting parts of the image from the noise-reduction tool, and this is the goal of luminance masking.


Luminance Masking - Juan Knows Best


In a recent post, I described my use of PI's Range Selection tool to make luminance masks. I thought it made a lot of sense to build at least two or three separate masks, and then to apply MMT noise reduction to the different zones that would be delineated by these masks. In my `M87 Chain' image, I tried treating three zones differently: 1) the dark, noisy background and 2) the dim, fairly noisy outer parts of the galaxies each got their own MMT noise-reduction settings, while 3) the bright core areas of the galaxies got Deconvolution sharpening rather than noise reduction.

Sometime later, I found myself thinking `wouldn't it be nice to be able to make just one mask, which would automatically apply more protection (from the noise-reduction routine) to the brighter areas, and smoothly reduce the amount of protection applied to the dimmer areas?' After a little while, I slapped my head and said `You fool, that's what Juan taught us to do in the first place! He uses an inverted copy of the image itself as the luminance mask, and this does the automatic masking you're looking for!' This is a really basic idea, and I felt silly for having concocted my separate-masks approach in the first place.

So, I made a copy of the image, inverted it, blurred it with the Convolution tool, and applied it to the image:


The redder areas have more protection applied to them, and the closer-to-black areas have less protection applied to them, so they will undergo more noise reduction, even with only one application of the MMT tool.
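
In numpy terms, building such a mask might look like this sketch (a 3x3 box blur stands in for PI's Convolution tool; the toy `image' is invented):

```python
import numpy as np

def box_blur3(a):
    """3x3 mean filter (edge-padded): just enough blur to keep the mask
    from tracing single-pixel noise."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def luminance_mask(image):
    """Inverted, blurred copy of the image: bright areas get values near 0
    (protected from noise reduction), dark sky gets values near 1."""
    return box_blur3(1.0 - image)

sky = np.zeros((4, 4))   # dark background...
sky[1, 1] = 1.0          # ...with one bright star
mask = luminance_mask(sky)
print(mask.round(2))     # ~0.89 around the star, 1.0 in the open sky
```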


First Attempt at MMT - Little Dark Blobs

I love Pixinsight because it's so powerful, and because the people who are really good at it are able to achieve some amazing results. I aspire to understand all of PI's tools at a `master' level someday, if that's even possible. However, some of those tools, like MMT, have a lot of settings to tweak, and it's hard to know what values to use for the various settings. As I describe what I did with MMT in this case, I'll assume the reader has examined the posts by Juan Conejero that I linked above.

When using MMT for noise reduction, one generally needs to check the Noise Reduction box for each of the wavelet layers. Additionally, it seems that MMT noise reduction should be applied more strongly for the small-scale layers, and less strongly for the large-scale layers. (As near as I can tell, this seems to mean using larger Threshold settings on the smaller-scale layers, although for the life of me I don't know what the Threshold numbers mean.) After some iterating, I arrived at these settings:


These settings did manage to smooth the background, but I was left with a number of little dark blobs scattered around the image - I think you can see them here:


Hmm. Close, but no cigar. If only there were a way to get rid of those little dark blobs!


`Adaptive' to the rescue

Casting about for a solution, I read the long tooltip for the `Adaptive' sliders in the MMT noise-reduction dialog. It contained this line: "Increase this parameter when you see isolated, high-contrast, relatively small structures that survive after finding an otherwise good noise threshold value." This sounded promising. But how to minimize the time I'd have to spend iterating the Adaptive values?

Here's what I did: I used PI's ExtractWaveletLayers script to break the image down into its constituent layers of detail. Zooming in closely to each layer, I noticed that the `dark blobs' seemed to be about 16-32 pixels in size, roughly speaking. So, I gently increased the Adaptive settings for the 16-pixel and 32-pixel wavelet layers in MMT:
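
The same scale-hunting idea can be illustrated with a toy multiscale decomposition. This sketch uses differences of moving averages instead of PI's actual wavelet/median transforms, but the principle is the same: find the layer where the offending structure's energy peaks, then adjust that layer's settings:

```python
import numpy as np

def smooth(signal, radius):
    """Moving average of width 2*radius + 1 (edges padded by repetition)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(signal, radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')

def detail_layers(signal, scales=(1, 2, 4, 8, 16)):
    """Split a 1-D signal into detail layers: the layer at scale s holds
    structure between the smoothing at the previous scale and at s."""
    layers, previous = {}, signal.astype(float)
    for s in scales:
        smoothed = smooth(signal.astype(float), s)
        layers[s] = previous - smoothed
        previous = smoothed
    return layers

# A flat-topped 'blob' 9 samples wide: its energy peaks in one particular
# layer, which tells us which MMT layer to target.
x = np.zeros(64)
x[28:37] = 1.0
layers = detail_layers(x)
strongest = max(layers, key=lambda s: np.abs(layers[s]).max())
print(strongest)  # -> 8
```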



Having done this, I got a better, smoother result:


It's a small victory, and I suppose it's nothing to brag about, but for a Pixinsight learner like me, it felt good to be able to smooth a noisy image this much, without the image looking too drastically over-smoothed. I was eventually able to use this noise reduction as one step in the overall processing of my Markarian's Chain image. That image will be the subject of the next post!

Wednesday, June 27, 2012

Globular Star Cluster M3

Harbinger of Summer - that's how I always think of the globular star cluster M3.

A little less than a hundred years ago, Harlow Shapley measured the distances to the globular clusters, and realized they form a spherical halo around a point that lies in the direction of the constellation Sagittarius. That was the beginning of the realization that our solar system is not at the center of the Milky Way galaxy. Globular clusters like M3 are classic summer objects; I've lost count of the number of times I've passed the short summer nights looking at them, through any number of different telescopes. Constellations like Sagittarius itself are rich hunting grounds for `globs' large and small, bright and dim. A trip to the southern hemisphere has, as one of its many treats, views of the huge, blazing Omega Centauri and 47 Tucanae globulars. Simply put, globular clusters are classic summer `eye candy'. Here's an image of M3 that I shot during the June 2012 dark-moon cycle:

Globular star cluster M3
8-1/3 hours total exposure time
Evenly split between unbinned R, G, and B, shot in 4-minute subexposures.
Click the image for larger version, or click here for full size.

M3 will always have a special place in my astro-heart, since it was the first object I ever saw through a large amateur telescope. That first look came just over 10 years ago, in April of 2002. I went to one of my first Bay Area observing events, at a local hilltop site. I had my little 5" Meade ETX-125, and I was ready and excited to see some deep-sky objects! To my amazement, Bruce Jensen set up an 18" Starmaster dobsonian next to me. I'd never even looked at a telescope that big, at such close range, let alone looked through one. Bruce showed me M3, which was still rising in the east, and I was blown away. There was no going back - aperture fever took hold of me for good! (I'm lucky enough to be able to enjoy my own views through an 18" scope these days, something for which I'm very grateful, even if the imaging rig gets most of my time now.)

M3 is one of the farther-west of the bright globulars, so we see it in the (northern-hemisphere) spring, before the other globs are well-placed for viewing in the summer sky. I'll always associate M3 with April, May, and June, when we're enjoying the galaxies of Coma Berenices and Virgo, taking peeks at globs like M3 and M5, and dreaming of the summer Milky Way...

Acquisition and Processing

I shot the data for this image on three nights during the June 2012 dark-moon period, from the same site where Bruce Jensen showed me M3 through his Starmaster all those years ago. I decided to shoot unbinned R, G, and B images, to try and maximize the resolution of the image, and to avoid having to match the histogram of a luminance image to that of an RGB image. In the end, over the three sessions, I got about 36 four-minute subexposures through each filter. As with my other recent images, I used my Orion ED80 f/7.5 refractor on a Losmandy G-11 mount, with a short-tube 80 refractor and StarShoot camera for autoguiding. My trusty SBIG ST-8300 monochrome CCD camera gathered the photons, with a chip cooled to -15C.

Pixinsight processing followed my usual workflow, with deconvolution (i.e. sharpening) of the innermost core of the cluster, as well as smoothing of the background, done while the image was still linear. A wee touch of HDR Multiscale Transform helped to `un-blow-out' the cluster's core. I pumped up the color saturation in the brightest part of the cluster, so as to bring out the differences between the blue and orange stars.

Pixinsight geekery: The main thing I learned while processing this image was the usefulness of the `Amount' slider in Multiscale Median Transform's noise-reduction routine. As with many of PI's tools, MMT is powerful yet somewhat hard to understand. I don't really know how to set the parameters for its noise-reduction routines, and I've always wanted to be able to increase the amount of noise reduction ever-so-slowly. Well, I should have guessed that the `Amount' sliders in the noise-reduction settings for each wavelet layer will do exactly that. I guessed at some Threshold values, starting with 4 for the first (1-pixel-scale) layer and decreasing roughly by half as I went from layer to layer. Then, having set those Threshold values, I set all of the Amount sliders to 0.1, and ran MMT. There was just the tiniest little bit of noise reduction in the background sky. (I used a luminance mask to protect the globular's stars.) By moving up the Amount sliders one little increment at a time, I could get what I wanted: a nice, moderate amount of noise reduction.

Room for Improvement

I could have set the black point a little lower, to suppress the remaining background noise a little better. I could also have tried to dim/shrink the bright, burned-out-looking foreground stars. They're a little distracting. But, since the deep-sky object in question is a star cluster, I couldn't find a good way to make a star mask that didn't include stars from the cluster. So, I just left the stars alone and decided to post what I had. I think the thing I like the best about this image is the halo of very faint stars that makes up the outermost part of the cluster. I doubt I can see those visually, even through a large telescope. That's one of the joys of imaging, going deeper than the eye can see!

Wednesday, June 13, 2012

A shout-out to the film folks

Film! I have a soft spot in my heart for film astrophotography, even though I use a CCD camera. Last night, while surfing the web, I checked to see if Jim Cormier, a modern-day film astrophotographer, had posted any new film images. He has, and they're really cool! I want to post some links to his images, so that more people will get a chance to see them.

I've done some film astrophotography - more on this in a bit - but I'm not one of the old-school `film guys' from back in the day. For well over a century, emulsion-based photography was photography, before sophisticated electronic sensors were developed. The art and science of emulsion-based astrophotography produced some beautiful results, through the heroic efforts of many, many research astronomers and amateur enthusiasts. These results depended on things like long single exposures, manual guiding, cold cameras, gas hypersensitization, and the envelope-pushing techniques that David Malin developed at the Anglo-Australian Observatory. Other than a few star-trail images, and a couple of short guided images of Halley's Comet in 1986, I didn't shoot film back in the day. (I was just a kid/teenager at the time, too.) But plenty of people did, and they left a rich, heroic legacy of astro-imaging on emulsion.

The advent of CCDs meant the `death of film', for the most part, since CCDs are so much more sensitive, and have a (generally linear) response to light that makes them more useful for measuring the brightnesses of things. The recent demise of Kodak is perhaps the best-publicized event in the long twilight of emulsion. However, not all amateur imagers have given up on film! There are a few folks out there who really enjoy shooting film, and enjoy the results they get. Naturally, there's some involvement with the digital realm, since we see their images on the web, after all. But at heart, their `sensors' are emulsion-coated materials, and I just think that's cool. They love film, and I admire them for it. I think that the world of film and processing will always have a special place in my heart, probably because I enjoyed darkroom work when I was a high-school student. I worked in the yearbook darkroom, and I set up a small B&W darkroom in my folks' house during high school. (I even developed a roll or two of slide film during graduate school, which was a hoot.)

If there's a `hero of film' in 2012, it's probably Jim Cormier from Maine. He mostly shoots wide-field images, and largely on Ektachrome 200, which seems to have been the `color astro film of choice' during the latter years of film's heyday. At present, his images can be found in several places on the web. Here are some recommended links:

For an image with a great `wow' factor, check out his latest 4-panel E200 Milky Way panorama.

Jim's Blogspot site also shows his images, and he's got a nice post about `My Most Productive Dark-Run Ever'. I love it! (Also note the `hand-corrected guiding'... John Henry, indeed!)

He has a photostream on Flickr, which is worth exploring. A highlight from his Flickr stream is his 2011 B&W project to shoot parts of the Milky Way, a la Edward Barnard's atlas. Very cool.

While you're at it, you might enjoy Christopher Barry's Kickstarter proposal, to shoot wide-field film images this summer. It looks like he made his funding goal! I eagerly await his results.

I can't quite describe why I get such a kick out of the work of these `film guys', but I just do. I'm really glad that they're sharing their work.

While I'm on the topic of film, I suppose I ought to post a film image of my own. There's a bit more backstory to this film enthusiasm of mine, as it turns out. I could probably write a long series of blog posts about this, but here's a short version: In the late summer and fall of 2011, I did a film-imaging project. I was finishing my MSc in astronomy, and my final project involved a comparison of film-based and CCD-based imaging techniques. The film side of the story got pretty epic, but to keep things short, here's an image of M31 that I shot on Ektachrome 200, using a Nikon FM camera body attached to my ED80 refractor. This is about 150 minutes of total exposure time (I forget the lengths of the subexposures), stacked and processed in Pixinsight:

M31, captured on Ektachrome 200 from a Bay Area hilltop site.
Click on the image for a larger version, or click here for full size.

You've probably noticed the curious flares coming off of the brighter stars. Those are actually due to the film scanner I used. (I've examined the slides under a microscope, and the flares aren't present in the slides.) One of these days I'd like to re-scan my slides and see if I can get a better result. Another issue that came up: The red LEDs from my light meter caused the slides to be badly light-struck. Next time I try shooting with my FM, I'm going to take out the light-meter batteries. Pixinsight's Dynamic Background Extraction routine was able to clean up most of this red mess, but it would have been nice if I hadn't had to deal with it.

Ektachrome 200 is basically gone now, but I was able to buy some on eBay, and a fellow astro-imager gave me several rolls. My leftover E200 is in my fridge, and one of these years I ought to shoot it. Some year, I should devote a fall and a winter to shooting the heck out of M31 and M42 on film. If I can find a 16-bit (or deeper) film scanner that doesn't produce those flares, I'd love to create the best `film-captured' M31 and M42 I can, with help from Pixinsight. Send that good ol' E200 out in one last blaze of glory!

Sunday, June 10, 2012

The M87 Chain and the Pixinsight Zone System

One of the greatest euphemisms in the world has to be the phrase `learning experience'. How often do we sugar-coat our mistakes by calling them `learning experiences'? I'm sure I've done it many times. This image provides an example, but in this case there's a bit more to it than that...

A portion of the Virgo galaxy cluster, with the giant elliptical galaxy M87 at top left, and part of `Markarian's Chain' of galaxies at right. Click the image for a larger version, or click here for full size.
Data Acquisition: Making the best of a bad situation

A few weeks ago, I was doing some backyard imaging, and the Virgo galaxy cluster seemed like the logical choice. Having shot a luminance image of the Leo Triplet not long before, I decided to do another one-night stand, with just luminance, but this time I wanted to shoot `Downtown Virgo'. (The origins of that term and its enthusiastic usage seem to go back to Jay Freeman and Jamie Dillon, two highly-accomplished Bay Area visual observers.) Specifically, I wanted to shoot the portion of the Virgo cluster called Markarian's Chain. It's a standard target, since it comprises a pretty, arcing chain of galaxies that stretches from M84 and M86 towards M88. Almost everyone works on an image of Markarian's Chain at some point. By planning it out in SkySafari Pro 3 on my iPad, I could see that if I rotated my camera just right, I could frame most of the chain pretty nicely on my ST-8300 sensor, using my ED80 f/7.5 refractor.

One thought nagged at me, though... What about conventions? As in sign conventions and angle conventions? SkySafari Pro 3 has a really nice slider tool for rotating the position angle of one's field-of-view overlay, relative to the sky. This allowed me to plan my framing really easily. And when I'm imaging, I can download a frame from the camera and use MaximDL to plate-solve it, which gives me the image's position angle on the sky. This is really handy, but... what if these two pieces of software use different conventions for specifying the position angle? Hmm. I could wind up with a frame that's rotated 90 degrees from what I expect.
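
I still don't know which convention each program uses, but a small hypothetical helper makes the sanity check concrete: compare the planned and plate-solved position angles, remembering that a 180-degree rotation gives the same footprint on the sky:

```python
def pa_offset(planned_deg: float, solved_deg: float) -> float:
    """Smallest rotation, in degrees, between two position angles,
    folded into [0, 90]: a 180-degree rotation covers the same patch
    of sky, so only the residual up to 90 degrees matters."""
    diff = (solved_deg - planned_deg) % 180.0
    return min(diff, 180.0 - diff)

print(pa_offset(35.0, 125.0))  # -> 90.0, the dreaded quarter-turn
print(pa_offset(35.0, 215.0))  # -> 0.0, same framing (camera upside down)
```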

So, it wasn't a great shock when that's exactly what happened. Here's the framing I had planned on my iPad:



Here's how things actually worked out, since the two pieces of software treated the position angle differently:



Hrm. Rargh. What to do? I could have rotated my camera 90 degrees, but that would mean refocusing and probably re-doing the GOTO alignment. Given the couple of hours available for shooting Downtown Virgo before it went behind some trees, I didn't want to do that. So, I panned around in SSP 3 and looked for an alternative framing. Here's what I wound up with:



That seemed like the best compromise, since it caught part of Markarian's Chain, and included the giant elliptical galaxy M87, the real `heart' of the Virgo cluster. I shot a couple of hours of luminance (in 5-minute subexposures), and called it a night.

Processing: Pixinsight meets the Astro Zone System

A few weeks later, I had a little time to sit down with the data, and after using the very handy new preprocessing script in Pixinsight, I saw the following preliminary result (this is a closeup of two of the galaxies in the Chain):

Autostretched image of two galaxies in Markarian's Chain.

It's probably worth explaining what I mean by an `autostretched' image (also sometimes called an AutoSTF'ed image amongst Pixinsight enthusiasts). PI has a tool called `Screen Transfer Function' (STF), which stretches the brightness values of the image's pixels, solely for the purpose of displaying the image on the screen. It doesn't change the original pixel values in the image file; it essentially creates a temporary copy of the image to display on the screen, with the brightnesses changed so as to make the dim parts of the image more visible. The STF tool has an `Auto' button, which creates an image that nicely shows `what you got'. (I used one of these AutoSTF'ed images in my annotated Leo Triplet posting.) Such an image, though, usually doesn't make for a very pretty picture, since it shows just how noisy the dim background areas and dim parts of your target look. That graininess is a combination of instrumental noise and unavoidable photon shot noise (the latter coming from both the target objects and from the sky).
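
For the curious: as I understand it, the stretch at the heart of an STF is built on PixInsight's midtones transfer function (MTF), which remaps pixel values so that a chosen `midtones balance' value m lands at 0.5. A sketch of that function (the real Auto-STF also computes shadow and highlight clipping points from image statistics, which I've left out):

```python
def mtf(m: float, x: float) -> float:
    """Midtones transfer function: maps x in [0, 1] so that the pixel
    value m comes out at 0.5. m = 0.5 is the identity; a small m lifts
    the dim values dramatically."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

print(mtf(0.5, 0.25))            # identity: 0.25 stays 0.25
print(round(mtf(0.01, 0.01), 6)) # a dim sky value of 0.01 is lifted to 0.5
```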

At this point, my big goal was to do some noise reduction, and try to make the noisy, grainy-looking parts of the image look a little better. In this I was aided by Jordi Gallego's new presentation on noise reduction in PI. There's a lot of good information in this document, but I was particularly intrigued by slides 51 through 53, especially #53. In this slide, he shows that one can make masks for applying different noise reduction settings to different parts of the image, such as:

  • The dark background sky, which has the lowest signal-to-noise ratio (SNR), and is thus the `grainiest'-looking part of the image.
  • The dim parts of the deep-sky object(s), which have fairly low SNRs, and thus mostly need smoothing and noise reduction.
  • The bright parts of the deep-sky object(s), which have high SNRs, and thus can tolerate some sharpening, such as through deconvolution.

Aha! This is basically the same concept as Ron Wodaski's Astro Zone System. I borrowed a copy of this book from a fellow Bay Area observer a couple of years ago, and found it to be very interesting. Sadly, the book has been out of print for some time, but I was one of the lucky folks at the 2011 Advanced Imaging Conference who managed to get one of the copies Ron gave away. (Thanks, Ron!)

After a little fiddling around, I realized that PI's Range Selection tool works best on images that have already been stretched into a nonlinear state, so I made a copy of the image, applied its AutoSTF settings to Histogram Transformation, and applied that to the copy. I then used Range Selection on this stretched copy.

First, I made a mask that covered up the stars and galaxies, leaving only the dark background sky to work on:



After a little fiddling around, I stumbled on some settings in Multiscale Median Transform that smoothed the background reasonably well:



I was pleased with this result! It's not perfectly smooth, but I'm calling this a win, so far. Then, I made a mask for the `mid-SNR' zone, which included the fainter outer parts of the galaxies:



And then, by pulling back on my MMT noise reduction settings, I was able to smooth those areas somewhat. Next I made a mask to isolate the cores of the galaxies, for sharpening via Deconvolution:



After mid-SNR-range smoothing and high-SNR-range deconvolution, I had this image:



The brightness levels you see here are `Auto-STF' levels, and even with the noise reduction, they're not really good for posting on the web. So, since the image was still at a linear stage (i.e. not really brightness-stretched yet), it was time for a Histogram Transformation, some star shrinking, and a horizontal flip to match the correct appearance of this area on the sky:



Room for Improvement:

I think this was a good proof-of-concept project, for the Range Selection / `Pixinsight Zone System' approach. My masks could use some work, though. When I examine the image closely, I can see that some of the dim parts of the galaxies got left out of the masking process. Also, the various processing steps left an artificial ring around M87. There really are such things as ring galaxies, but M87 isn't one of them. I'm very interested in refining my touch with Range Selection, and in trying out the new Adaptive Stretch tool! A week or so after shooting these data, I managed to shoot Markarian's Chain with proper framing, so we'll see how things go with this new data set.




Monday, June 4, 2012

Annotation Script - What did I capture in my image?

Here's another version of the Leo Triplet (luminance) image. This one has been overlaid with the results of Andres Pozo's plate-solving and annotation scripts. Thanks to Andres's hard work, I can take my image and `see what I captured':

The Leo Triplet, annotated. Click on the image for a larger version, or click here for full size.

Andres started a thread in the Pixinsight Forum back in March (see the link listed in the previous paragraph), and he's posted a number of updates to his scripts since the thread started. His scripts do two very useful things:

1) One script `plate-solves' the image. This basically means figuring out what part of the sky has been captured in the image, and assigning a set of on-sky coordinates to each pixel in the image. (This is nicely described in Chapter 9 of Berry and Burnell.) By attaching metadata to the image (as part of something called the `FITS header'), the plate-solving script allows the annotation script to look at the image, and figure out the exact location (on the sky) of each pixel in the image.
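Just to make the idea concrete, here's a toy flat-sky version of that pixel-to-coordinates mapping in Python. Real plate solutions store a proper tangent-plane projection as WCS keywords in the FITS header; this is not Andres's code, and every number below is invented:

```python
import math

# Toy illustration of what a plate solution encodes: a mapping from
# pixel coordinates to sky coordinates. Valid only for small fields.

def pixel_to_sky(x, y, ref_x, ref_y, ref_ra, ref_dec, scale_arcsec):
    """Map pixel (x, y) to (RA, Dec) in degrees, given a reference pixel,
    its sky coordinates, and a plate scale in arcsec/pixel."""
    scale_deg = scale_arcsec / 3600.0
    dec = ref_dec + (y - ref_y) * scale_deg
    # RA steps shrink by cos(dec) away from the celestial equator
    ra = ref_ra + (x - ref_x) * scale_deg / math.cos(math.radians(dec))
    return ra, dec

# The reference pixel maps back to the reference coordinates:
ra, dec = pixel_to_sky(1000, 1000, 1000, 1000, 169.73, 13.09, 3.5)
assert (ra, dec) == (169.73, 13.09)
```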

2) The next script looks up objects in a set of online catalogues, and overlays symbols and coordinate lines on the image.

The whole thing is very slick, and after only one false start, I got Andres's scripts to work. The Annotation script overlaid the locations of objects from these three catalogues:

The Messier catalogue: This is a list of nebulous-looking objects in the sky, compiled by the 18th-century comet hunter Charles Messier. It's a list of roughly 100 bright deep-sky objects visible from mid-northern latitudes. The two big, bright galaxies in my image are Messier objects 65 and 66.

The NGC and IC catalogues: These catalogues were first compiled by J.L.E. Dreyer in the 19th century, and they list thousands of objects beyond the Messier catalogue. The great 18th-19th-century astronomer William Herschel found about 2500 of the objects that provided the initial `nucleus' of the NGC. Amazingly, Steve Gottlieb (a Bay Area observer) and others have been double-checking the NGC/IC catalogues visually!

The Principal Galaxy Catalogue: This list of about 70,000 galaxies was published by a group of French astronomers in the 1980s. Many of the faint `field galaxies' that an imager is likely to capture will turn out to have PGC designations.

Looking at my Leo Triplet image, it seems like I got pretty much all of the overlaid PGC galaxies. In other images that I've shot recently, about which more anon, the boundary between `what I got' and `what I couldn't get' occurs in the PGC galaxies. This isn't really surprising, since a large catalogue like the PGC includes objects that span a large range of apparent brightnesses. If I had more time, it would be interesting to compile lists of the PGC galaxies that I did and didn't get, so as to characterize the depth of my image. How deep can a 3-inch f/7.5 refractor with an amateur CCD camera go in a night or two? Andres's script offers a way of estimating this.
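If I ever do that comparison, a crude depth estimate could be as simple as this sketch. The function and all the magnitudes are hypothetical, just to show the idea:

```python
# Estimate image depth from annotated galaxies: compare the magnitudes
# of catalogued galaxies that were and weren't detected in the image.
# All magnitudes below are invented for illustration.

def limiting_magnitude(detected_mags, missed_mags):
    """Estimate the limiting magnitude as the midpoint between the faintest
    detection and the brightest miss (larger magnitude = fainter)."""
    faintest_hit = max(detected_mags)
    brightest_miss = min(missed_mags)
    return (faintest_hit + brightest_miss) / 2.0

est = limiting_magnitude([14.2, 15.8, 16.9], [17.3, 18.1])
assert abs(est - 17.1) < 1e-9
```

A real estimate would also have to account for the fact that galaxy detectability depends on surface brightness, not just total magnitude.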

I'm pleasantly surprised at how much I enjoy looking at the image with the annotation overlays. They give me a sense of what's in this part of the sky, and somehow they add depth and richness to the image. Naturally, the `pretty picture' version of an image probably shouldn't have annotations like this on it, but it's nice to be able to make an annotated version easily. The two versions complement each other, I think.

Friday, June 1, 2012

The Silver Coin galaxy

Just a quick image posting today... I don't have as much time as I'd like to write about this object.

I was going through some files the other day, and I realized I had this image of NGC 253, the `Silver Coin' galaxy, sitting on my hard drive. Might as well add it to Photon Shot Noise!



At the moment, I don't have a lot of information about the details of how I acquired it and processed it. I definitely used my Orion ED80 refractor, and I was using a camera with the Kodak 8300 monochrome chip, shooting through a Luminance filter. I think the subexposures were 15 minutes long. I was using my Orion Sirius mount (which has since been retired to solar-observing duty at school), and I was pleasantly surprised that my polar alignment and autoguiding were good enough for 15-minute subs. I seem to recall acquiring the data on one or more cold nights at a Bay Area hilltop observing site, sometime in the last year or two.

As with the acquisition, the details of processing are a bit hazy at the moment. I'm fairly sure I processed this in Pixinsight, and I recall being pleased at how much detail I was able to bring out. This is due to the reasonably good SNR, which came from taking lots of relatively long subexposures over one or two nights. I either used Deconvolution for sharpening, or perhaps an ATWT-based sharpening and noise-reduction workflow.

If I recall correctly, one of the nice things about NGC 253 is its relatively sharp `edge'. It doesn't have much of an extended, low-surface-brightness halo around it, at least not in my subexposures. As a result, I wasn't tormented by the desire to bring out lots of surrounding faint stuff. Such faint stuff around a galaxy is often hard to make look decent, since it requires either a mountain of exposure time, or a miraculous touch with the noise-reduction routines. I seem to recall that NGC 253 pretty much ends where you see it ending here, and so I didn't have any significant `halo struggles'.

Monday, May 21, 2012

Ring(s) of Fire

I just got back from a great trip to see the 2012 annular eclipse. It was everything I'd hoped for! Nearly all of us in the northern California part of the path got lucky, and we saw the eclipse through mostly-clear skies. I've been interested in eclipses since 1984 (the partial version of which I saw during high school), and this was my first `central' eclipse. I'm still waiting to see a total eclipse - that'll be 2017, with a little luck - but this was a great `dry run' for that experience, I hope.


It might seem surprising that an aspiring astro-imager would only shoot a few iPhone images of the eclipse, but I decided to keep things simple and make this primarily a visual-observing experience. I knew a lot of other people would acquire great images and image sequences, so I decided to just observe the Sun through a safely filtered telescope, and to soak in the weirdness of the light all around me.

One of the highlights of the eclipse was the crescent- and ring-shaped images of the Sun that were cast onto the house where I observed the eclipse. These were produced by very small gaps between the leaves in nearby trees, which acted like hundreds of pinhole-projection setups. As the first partial phase got underway, we noticed a few crescents:
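Incidentally, the size of a pinhole-projected solar image is easy to predict: the Sun's angular diameter is about 0.53 degrees, so the image diameter is roughly the projection distance divided by 107. A quick sketch (the 5-meter distance is just an example, not a measurement from that yard):

```python
import math

# Size of a pinhole-projected image of the Sun, from its angular diameter.

def projected_sun_diameter(distance_m, angular_deg=0.53):
    """Diameter (in meters) of the Sun's pinhole image at the given
    pinhole-to-surface distance."""
    return distance_m * math.tan(math.radians(angular_deg))

# Leaf gaps 5 m above the ground cast solar images roughly 4.6 cm across:
assert abs(projected_sun_diameter(5.0) - 0.046) < 0.001
```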


As the Moon moved farther across the face of the Sun, the crescents became more obvious, and we started to notice the large ones cast by trees in a neighboring yard:


Even though annularity only lasted about 4 minutes, it was worth walking around the yard to see and photograph the rings shown in the first image. Having read Norm Sperling's piece about his `8-second law' for total eclipses, I knew it would be worth doing more than just staring at the annulus through the telescope. Also, I was happy to share the view through the telescope with some family, friends, and neighbors. Moving around, looking at the tree-projected images, looking through the telescope, and savoring the weird and wonderfully dim eclipse light, made the period of annularity really fun and memorable.

The thing I was most interested in, before the eclipse, was what the illumination around me would look like. I knew it wouldn't get nearly as dark as during a total eclipse, and from what I've read, that's not really quite a `nighttime' experience. It sounds like a total eclipse produces its own unique brand of day-meets-night. The light cast by the `ring of fire' Sun was wonderfully strange. The simplest way to describe it would be `much dimmer than usual', but that hardly says anything. I keep finding myself wanting to say things like `odd', `strange', and `weird', but in a good way. Perhaps the most noticeable thing was the lack of heat from the Sun. Prior to first contact, it was a pretty hot day, around 90F (about 32C). I was glad the backyard observing site had large shady areas in which to set up my telescope, before putting it in the sunlight. During the first part of the first partial phase, it was hot! But during the deep-crescent and annular stages, I'd describe it like this: `A warm-looking cool light'. The light didn't have a `cool color' like blue, but it *felt* cool, compared to the hot late afternoon we'd been experiencing a short time before.

If there's one imaging project I wish I'd undertaken, it would have been to try and photograph the light on the scene around me. I wish I could have used a DSLR on a tripod, running through a variety of exposure settings, with a grey card and a color card in the scene, to try and reproduce the appearance of the `eclipse light'. If I do imaging during a future eclipse, like 2017, I think that's what I'd like to do. I'll rely on others to image the Sun itself.

All in all, the 2012 annular eclipse was everything I could have hoped for. We sweated the weather all weekend, but it worked out just fine. A ridge of high pressure allowed me to image some galaxies and M5 on the Friday night (more on that anon), and to observe the sky visually on the Saturday night. On eclipse day, we got lucky! There was a bit of high cirrus during annularity, but it didn't materially affect the views or the experience. And during the last part of the second partial phase, thick high clouds rolled in for good and all - what luck! I plan to be as flexible and mobile as possible in 2017, but this time around, everything was great. I was glad to hear that so many other astro-friends had great experiences, too. Here's to the shadow of the Moon!

Wednesday, May 16, 2012

The (Leo) Luminance Triplet

For such a dry winter, California didn't have a lot of imaging-quality skies in early 2012. We had some late-season rain and mountain snow, which was good for our hydro balance, but not so good for the spring galaxy season. I finally got out in mid-May, and spent a couple of nights shooting M65, M66, and NGC 3628, otherwise known as the Leo Triplet. Here's the result, sized for a 15" MacBook Pro screen:



This is what's known as a `luminance' image, which means it was shot with a black-and-white (or `monochrome') CCD camera, through a clear (or `luminance') filter. In order to make a color image, I'll need to shoot it through 2 or 3 color filters. If all goes well, I hope to shoot it through Red, Green, and Blue filters before the spring season slips away. The subexposures for this image were each 5 minutes long, and I shot about 50 of them over two nights, for a total exposure time of about 4 hours. As always, the imaging scope was an Orion ED80 f/7.5 semi-apo refractor.

This was also the inaugural imaging run for my new (to me) Losmandy G-11 mount. I got a great deal on it from a fellow Bay Area imager, and I spent the April dark-moon period learning some of the ins and outs. I feel like I can polar align, acquire targets with the Gemini 1 (Level 4) goto system, and get pretty good autoguiding. During the nights when I shot these luminance frames, the RMS error on my guider corrections was running about 1/2 pixel in both RA and Dec.

I processed this image in Pixinsight, making use of the new Batch Preprocessing script. Very handy! Many thanks to the folks who wrote that script. Also many thanks to Mike Schuster for writing the PSF Estimation script, which auto-picked hundreds of stars and gave me the parameters of the point-spread function, which I used for Richardson-Lucy deconvolution. (Deconvolution is a sharpening routine that I used to bring out some of the details in the galaxies.) The hardest part of the whole processing workflow was the noise reduction, which I did with Multiscale Median Transform. Once I had the noise somewhat beaten down, I could get a halfway-decent stretched image from the Histogram Transformation. I did a bit of HDR Median Transform, but not nearly as much as I might use on, say, a large bright nebula.

I hope to be able to get out and shoot some RGB color data if I'm lucky; it would be nice to add color to these galaxies!

Monday, February 27, 2012

From the realm of the galaxies to the microcosm: A thin-section gigapan

I've been looking forward to posting this image (and linking to the corresponding GigaPan) for a long time.



(Click here for the zoomable GigaPan mosaic at gigapan.org)

The image in today's post is a 156-panel mosaic of something called a thin section; it's a `geology thing', in contrast to the `astronomy things' that I've been imaging so far. Basically, it's a piece of rock (from Vermont, in this case) that's been ground so thin that light will shine through it. When viewed with the right kind of microscope, thin sections commonly show bright colors, along with variations in light-vs-dark from grain to grain. The area shown in this image is about 12mm by 18mm. This thin section is one of a set that has been at De Anza College for many years. Students in our introductory geology course look at these thin sections when they're learning to identify the different rock textures. I don't know when these thin sections were made, but I'd guess they date from the 1960s or 1970s.

I've written a more detailed explanation in the `About This GigaPan' notes - if you're interested, you can scroll down below the panorama, and my notes are below the camera and image information.

Acquiring the image data:

Instead of a telescope, I had to use a microscope. I used a `trinocular petrographic microscope' that I got from The Microscope Store a few years back, when I was just dying to have my own petrographic microscope. The scope is `trinocular' because not only does it have a binocular viewer for viewing the magnified image with one's eyes, but it also has a third port for attaching a camera.

For photographing thin sections, I used a Canon 20D DSLR camera. This is the same one I used when I started digital astro-imaging, several years back. To attach it to the microscope, I kludged together an Orion 1.25" eyepiece-projection adapter, which fit right over the non-telescope-sized microscope eyepiece quite nicely! The image focuses on the camera sensor, and since it's a DSLR, I can check focus and framing right through the camera viewfinder.

The basic idea behind acquiring data for an image of this type is to shoot lots of adjacent frames, with some overlap. In that sense, it's like a large astronomical mosaic. However, since each frame only takes a fraction of a second to shoot, I can take hundreds of frames. The big difference between deep-sky imaging and microscopic imaging is that I have a nice bright light source (i.e. an incandescent bulb built into the scope). It's much easier to achieve a high signal-to-noise ratio that way!

The thin section is a small glass slide, and I moved the slide between frames. This was accomplished by means of a small slide positioner, which holds the slide and moves in two directions when I turn some small knobs. The positioner has a vernier scale on each axis, so I can make a precise 1.5 mm movement between adjacent frames in a given row, and then move 1 mm between rows. This gives the GigaPan Stitch software enough overlap to work with.
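For what it's worth, the overlap this produces is easy to work out from the step sizes and the field of view of each frame. The 1.8 mm x 1.2 mm field assumed below is an illustrative value for this microscope/camera combination, not a measurement:

```python
# Overlap between adjacent mosaic frames, given the per-frame field of
# view and the stage step size along one axis.

def overlap_fraction(field_mm, step_mm):
    """Fraction of each frame shared with its neighbor along one axis."""
    return (field_mm - step_mm) / field_mm

col_overlap = overlap_fraction(1.8, 1.5)   # between frames within a row
row_overlap = overlap_fraction(1.2, 1.0)   # between rows
# Both axes end up with about 17% overlap:
assert abs(col_overlap - 1 / 6) < 1e-9
assert abs(row_overlap - 1 / 6) < 1e-9
```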

Preliminary data processing:

I used Adobe Photoshop CS4 to batch-process the raw 16-bit CR2 files that came off of the camera. I used CS4's Camera Raw module to take out some slight chromatic aberration, and after converting the images to 8-bit JPEG format, I did a bit of sharpening and color saturation. The GigaPan stitching process seems to have desaturated the image a bit. Perhaps I should pre-saturate each image a bit more.

Assembling the mosaic:

This couldn't have been easier! I just turned GigaPan Stitch loose on the image files, and it made a mosaic, which is about 30,000 pixels wide! Simplicity itself. I ran the stitcher on a Mac Pro computer, which made short work of the operation. I had heard that GigaPans can take all night to run, but the Mac Pro banged this sucker out in under 15 minutes. Somewhere, Steve Jobs is smiling.


An Unsung Hero:

Somewhere in China is the real hero of this little technological story. The microscope I used is a Chinese-made import, and it's a mixed bag of build quality. Some things on the scope are perfectly serviceable, and other things could stand to be better. Centration of the objective lenses (something like collimating a telescope) is hard to do, and the objectives don't stay centered for very long. I took the whole objective turret apart, and saw that certain detents in a metal part appeared to have been cut into the metal rather haphazardly. The binocular viewer leaves something to be desired, too - it has some annoying internal reflections, and gives a partially-cross-polarized view even when the `analyzer' (one of the polarizing filters) is out of the optical path. However, this microscope really shines in one important area - the flatness and sharpness of the image field. Man, that thing is flat! By that I mean that there are almost no visible aberrations from the center of the field to the edge, at least in the low-power objective. And there are very few diffraction artifacts visible on high-contrast edge features, which is more than I can say for a more-expensive Japanese-made microscope at school, even when it's adjusted for Kohler illumination. (Sorry for this microscope geekery; interested readers may want to look at the Nikon or Olympus microscopy websites.)

Who knows if I'm correct in my speculations, but I can't help imagining an optical designer in China somewhere, making those objectives as perfect as possible, out of love for the craft of optical design. Somehow or another, they made those things so as to deliver a really remarkable level of performance (at least as judged by my eye), despite the relatively low price point. Whether by luck or by design, the image quality of those objectives largely makes up for the deficiencies in other parts of the microscope.

A Dream of Automation:

As I described in the `About This GigaPan' notes, I have this dream of motorizing the stage-positioner controls, so as to be able to automate the acquisition of the image data. I'd love to be busily grading papers, or surfing the web, or processing images, while the microscope and a computer are robotically churning out the data for another enormo-mosaic. It's quite a bit like my dream of having a robo-focus unit for my telescope, so that I could acquire image data automatically while doing visual observing. One can dream!

Friday, February 3, 2012

The Orion nebula: Reworking some year-old data

If you've not yet had the opportunity to look at M42, the great Orion nebula, through a telescope, you owe it to yourself to try and find an opportunity to do so. Even though today's entry is part of an `imaging blog', M42 is the kind of object that's beautiful any way you look at it. As long as you've got a clear sky, and are (hopefully) away from city lights, you can see this nearby star-forming complex in some way, regardless of the gear you've got.

I just now spent a moony evening reworking some unbinned R, G, and B data that I shot about a year ago. Every winter, it's the same old routine: Try to get some decent data on M42. Something always gets in the way, though. In early 2011, it was bad weather and camera issues... a story for another time. I managed to shoot some unbinned L, R, G, and B data, but not a heck of a lot. To the best of my memory, the data for this image don't amount to much more than several hours total. Since it's now February, and I'm not 100% sure if I'll get a decent M42 dataset in 2012, I thought I'd fool around with this old stuff from last year, and see if I could make something semi-presentable.



After an evening spent in front of the computer, I happened to go outside, and as I was walking back in, I looked up, and there he was: Orion, the hunter. The constellation was just passing across the meridian, with the bright gibbous moon due north of it. Even with lights in my eyes, and under a city sky, I could make out the bright stars that delineate the pattern: Betelgeuse, Rigel, Bellatrix, Saiph, Alnitak, Alnilam, Mintaka. And there was the sword of Orion, with the middle `star' being M42. This object is so bright that it (or at least the stars in and around it) can be seen under almost any sky, it seems.

Unlike most deep-sky objects, M42 is worth looking at with virtually any optical instrument. The belt and sword of Orion are great in binoculars. Small telescopes show the nebulosity. Large telescopes under dark skies provide one of the few `imaging-like' experiences in visual observing. A greenish color can even be seen in the brightest part of the nebula, in a big scope. The details just go on for days and days.

As an imaging project, M42 presents an almost limitless field of challenges and rewards. With modest equipment and short exposures, one can still get something. Advanced imagers have gotten some incredible results.

Processing in Pixinsight:

This image certainly isn't incredible, but I'm glad that I was able to squeeze a bit of detail out of such data as I had. I spent a fair amount of time on this in Pixinsight, and eventually I gave up on trying to combine the luminance data with the color data. Both my RGB image and my Luminance image were the result of high-dynamic-range combinations, for which I'd shot long- and short-exposure frames. Matching the histogram from the L image to the histogram from the RGB image seemed to be taking forever, with little end in sight. I bailed and just went for the RGB.
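For anyone curious what a high-dynamic-range combination involves: one common approach (not necessarily exactly what the Pixinsight tools do) is to swap short-exposure data, scaled by the exposure-time ratio, into the pixels where the long exposure clipped. The numbers here are illustrative:

```python
# Blend long- and short-exposure frames of the same scene: wherever the
# long exposure saturated, substitute scaled short-exposure data.
# The saturation threshold and exposure ratio are illustrative values.

def hdr_combine(long_exp, short_exp, ratio, sat_level=0.95):
    """Blend two linear exposures; `ratio' is t_long / t_short, and the
    pixels are 0..1 linear values."""
    return [s * ratio if l >= sat_level else l
            for l, s in zip(long_exp, short_exp)]

long_frame = [0.10, 0.50, 1.00, 1.00]    # last two pixels are clipped
short_frame = [0.01, 0.05, 0.12, 0.30]
result = hdr_combine(long_frame, short_frame, ratio=10.0)
assert result[:2] == [0.10, 0.50]        # unsaturated pixels untouched
```

The trade is that the substituted pixels carry the (noisier) short-exposure data, but in the bright core of M42 there's plenty of signal to spare.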

Getting a good color balance was really tricky, and I just couldn't get it quite right. The stars in the linear image had all sorts of blue and cyan issues, and by the time I got them to look semi-normal, the blue color in the nebula was pretty well gone. I could have (and should have) worked that problem harder, but since this was a `let's see what we can get out of this stuff without too much struggle' project, I didn't sweat it that hard.

I did a bit of Richardson-Lucy deconvolution while the image was still linear, but nothing drastic. I would have liked to have gotten a better sharpening result, but I found that I kept getting bright `wormy' artifacts if I wasn't careful. I think that a really good deconvolution would be a pretty substantial project, even with the help of Dynamic PSF.

After histogram stretching, I had to spend a fair amount of time finding the right parameters for an application of HDR Multiscale Transform, to knock down the over-brightness of the area around the Trapezium. Once I got that area tamed, it was rather washed out, as usual. Some additional luminance masking and an extra saturation boost in that area helped a bit, although it left some purple haze around the Trapezium stars.

There's plenty of room for improvement in this image, but I'm glad that I can at least post some sort of M42 image. I hardly feel like `an imager' without one. With a little luck, maybe I can finally get a decent set of data later this month and in March. It would be nice to really go deep on this thing, and under good seeing. We'll see how it goes!

As per usual, there's an amazing image of the object from the Hubble Space Telescope.

Wednesday, January 18, 2012

From the Pros: Multi-wavelength Eagle

When I say `the pros', what I'm really talking about are research astronomers. These are the folks who do fundamental research in astronomy, such as the people who make observations with the Herschel infrared space telescope and the XMM-Newton X-ray space telescope. Most of these big-budget research projects are pretty good about remembering the public-outreach part of their mission. Here's an example: Today's NASA Image of the Day is a view of the Eagle nebula, captured in two very different wavelengths - infrared and x-ray:


(Image credit: ESA/Herschel/PACS/SPIRE/Hill, Motte, HOBYS Key Programme Consortium)

Having recently spent so much time on processing the Eagle nebula, it's fun to see it in wavelengths that I can't capture with my CCD camera from the Earth's surface.

Naturally, I can't resist including the most famous `pro' shot of a part of the Eagle nebula:


(Image credit: NASA / STScI / Hubble Heritage Team)

I'll bet you've seen this image before. It was acquired by the Hubble Space Telescope in the 1990s, and has been very widely reproduced and distributed. It's probably one of the most famous `Hubble shots' of all time, if not the most famous. (I'll take a moment here to plug a friend's business, where you can buy prints of the Pillars.)

If you look closely in the Herschel / XMM-Newton image, and even in my Eagle image, you can make out these pillars. In fact, they're even visible to the eye, if you use a reasonably large telescope, and you're observing from a very dark site under good conditions. When I've taken my 18" (45cm) scope to observing sites in the northern California mountains in the summer, I've sometimes been able to make out the two largest pillars from the image above. It's tough, but with some practice, an OIII filter, and careful examination of a printed image (using very dim red light, so as not to spoil one's dark-adaptation), they're just visible. It's fun to be able to see something so well known with your own eyes! It's fun to be able to capture it with one's own telescope, too.

UPDATED a couple of hours later...

The Herschel and XMM-Newton missions are run by the European Space Agency, and they've got a nice webpage about these multi-wavelength observations of the Eagle nebula. It includes a video showing the various images and how they correspond to each other. Here's a summary image, which places the various Eagle images next to each other:


(Image Credit: European Space Agency, European Southern Observatory, NASA)

One of my favorites among the `Pillars' images is the near-infrared image; it's the center image in the right-hand column of the mosaic above. It was acquired using one of the giant 8-meter telescopes of the Very Large Telescope observatory at Paranal, Chile. I love the purple color palette of this image:


(Image Credit: VLT/ISAAC/McCaughrean & Andersen/AIP/ESO)

The team that made this image used an infrared camera/spectrograph called ISAAC to collect image data in three wavelength bands, all of which are in the infrared. This means that the wavelength of the `light' in each band is longer than that of visible light - it's beyond our eyes' ability to see. The dust that makes up the Pillars is mostly opaque at visible wavelengths, but infrared `light' can make it through a greater thickness of dust than visible light can. As a result, they can see deeper into the Pillars, or entirely through them in the case of the left-hand pillar. This allows for a clearer view of young stars that are forming out of these clouds of gas and dust.

Sunday, January 15, 2012

M33: Two nights at Dino

Here's M33, the Triangulum galaxy:



(There's probably an issue with orientation or `flipping' of the image, but since I've stared at it for so long in this orientation, this is becoming `how it looks to me'.)

This image has me thinking about two `themes':

1) The pleasures of imaging from a nice dark site, like Dinosaur Point.

2) The difficulties of getting good data on M33, the Triangulum Galaxy.


I shot these data on two successive Saturday evenings, October 22 and 29, 2011, from an observing site called Dinosaur Point. It's a boat ramp on the San Luis Reservoir. The reservoir is part of California's enormous system of water projects, which provide flood control, water supply, and electricity. One function of the San Luis Reservoir is, essentially, to act as a giant electrical storage battery. Water gets pumped uphill into the reservoir at night, when electric rates are low, and the water is drained downhill (through generators) during the day.

Dinosaur Point has long been a favorite winter dark-sky site for Bay Area observers. It tends to be too windy during the warm months. But in the late fall and winter, if the `tule fog' from the nearby Central Valley hasn't covered it, Dino can be a very dark site. I really enjoyed setting up there and imaging M33; the sky was nice and dark. One night, in the wee hours of the morning, we even saw the adaptive-optics laser beam from Lick Observatory, shooting towards some object in the south.

It's very important to note, though, that observing access to Dino is subject to some very specific conditions. If you're a Bay Area observer who hasn't been there, make quite sure that you've read and understood the `gatekeeper' access protocol! You can also check the TAC list and the TAC Observing Intents page to see if a gatekeeper is going. Don't just go there without checking all of these details first!

I acquired these data with the same rig as the last couple of shots - my Orion ED80 refractor (80mm f/7.5) with the SBIG ST-8300M CCD camera. I shot unbinned luminance data, and 2x2 binned color data through R, G, and B filters. If I recall correctly, I have a couple of hours through each filter. That would make for 8 or so hours of total exposure time, give or take.

I think M33 has some potential to be a frustrating object for beginning astro-imagers. A lot of us undergo a pattern like this: a) We get a CCD camera during the summer, and by autumn we have a basic understanding of how to use it. b) During the fall, we shoot M31, which is so bright that we can get a decent signal-to-noise ratio over most parts of the galaxy, without too much trouble. c) Next, we say to ourselves `Aha, look what's nearby - M33! There's another big bright galaxy just waiting to be shot!' As it turns out, however, M33 has a lower surface brightness than most of M31, and it's tough to build enough SNR to get a good image. Unless you're using an optical system with a very fast focal ratio, M33 is going to take a long time to build a decent dataset.
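The math behind that frustration is simple: in the sky-limited regime, SNR grows as the signal rate times the square root of exposure time, so matching SNR on a target that's a given number of magnitudes per square arcsecond fainter takes 10^(0.8 x delta) times the exposure. A sketch (the 1.5-magnitude difference is illustrative, not a measured M31-vs-M33 value):

```python
# Exposure-time penalty for lower surface brightness, in the sky-limited
# regime where SNR scales as (signal rate) * sqrt(t).

def exposure_ratio(delta_mu):
    """Exposure-time multiplier needed to match SNR on a surface that is
    delta_mu magnitudes per square arcsecond fainter."""
    return 10.0 ** (0.8 * delta_mu)

# A target 1.5 mag/arcsec^2 fainter needs roughly 16x the exposure time:
assert abs(exposure_ratio(1.5) - 10.0 ** 1.2) < 1e-6
```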

This dataset really isn't long enough, but I decided to go ahead and try to process it anyway. I probably won't be able to shoot M33 again until summer or fall 2012, so here's what I've got, so far. With a considerable amount of time invested in Pixinsight, I was able to get something semi-presentable.

Processing in Pixinsight:

I started with the usual calibration routine, using light, dark, bias, and flat-field frames, and I extracted the small amount of light-pollution gradient that one gets at Dino. This gave me linear (i.e. unstretched) luminance (L) and color (RGB) images. These images had the usual background-neutralization and color-calibration corrections applied to them. Then it was time to get a little more from the linear images. First, a bit of noise reduction using the Multiscale Median Transform tool. Then I used the new DynamicPSF module to build a model point-spread function for each image, and fed that PSF into a gentle application of regularized Richardson-Lucy deconvolution. This helped to bring out a bit more detail in the central part of the galaxy.

Then it was time to go non-linear with each image. I did this the easy way: For each image, I did an auto-STF (Screen Transfer Function), and applied each of those auto-STFs to instances of the Histogram Transformation tool. This gave me stretched images that had very similar histograms - and that's just what the LRGB combination tool wants.
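For the curious, the stretch underneath an auto-STF is PixInsight's midtones transfer function, which maps a chosen midtones-balance point m to an output of 0.5 while pinning 0 and 1 in place. A minimal sketch:

```python
def mtf(m, x):
    """Midtones transfer function: maps pixel value x (0..1) so that
    input level m lands at 0.5 output, with 0 and 1 held fixed."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

print(mtf(0.25, 0.25))  # the midtones point maps to 0.5
print(mtf(0.25, 1.0))   # white stays white: 1.0
```

Applying the same auto-computed m (plus shadow/highlight clipping) to the L and RGB images is what lines their histograms up for the LRGB combine.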

If I recall correctly, I did a bit of SCNR (Subtractive Chromatic Noise Reduction) to take out some of the `galaxy green' in the RGB image, before performing the LRGB combination. I increased the saturation a bit when making the LRGB image, and used Pixinsight's magic Chrominance Noise Reduction routine.

With the LRGB image in hand, it was time to perform two parallel lines of attack, which would later be combined:

1) Compress the dynamic range a bit with HDR wavelets, so as to take away some of the `over-bright dominance' (for lack of a better term) of the central part of the galaxy, and then punch up the contrast with Local Histogram Equalization.

2) Try my hand at the mystical `multiscale processing', a la Rogelio. I split a copy of the LRGB image into large-scale and small-scale components, following the general method of Rogelio's and Vicent's multiscale tutorials. I didn't do anything extra to the smallscale image; I just didn't have the mental energy. But I did apply some Histogram Transformation (and possibly HDRWT, IIRC) to the large-scale image, brightening the midtones and re-setting the black point. Then I combined everything back together with PixelMath:

a) The LRGB image
b) The LRGB image that had been HDRWavelets-ed and LHE-ed
c) The smallscale image
d) 0.25 * the stretched-even-more largescale image.
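I don't have the exact PixelMath expression written down anymore, so here's a hypothetical sketch of that kind of weighted recombination, with random arrays standing in for the four images:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the four stretched images (values in 0..1):
lrgb       = rng.random((4, 4))  # a) the LRGB image
lrgb_hdr   = rng.random((4, 4))  # b) after HDR wavelets + LHE
smallscale = rng.random((4, 4))  # c) the small-scale component
largescale = rng.random((4, 4))  # d) the further-stretched large-scale component

# Hypothetical recombination: sum the first three at full weight, add a
# quarter-weight of the large-scale image, then rescale back to 0..1
# (PixelMath's rescale() does the same job):
combined = lrgb + lrgb_hdr + smallscale + 0.25 * largescale
combined = (combined - combined.min()) / (combined.max() - combined.min())
print(combined.min(), combined.max())
```

The exact weights are the part you fiddle with endlessly; 0.25 on the large-scale term is just what I remember using.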

Following this recombination, I made a Star Mask (with default parameters), and used Morphological Transformation to dim/shrink the small and medium-sized stars. At that point, I said `Stick a fork in this sucker, it's done. Put it on the blog.'
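Roughly speaking, that StarMask/MorphologicalTransformation step blends an eroded copy of the image back in wherever the mask says `star'. Here's a small sketch of the idea - the mask threshold and erosion size are arbitrary stand-ins, not my actual PixInsight settings:

```python
import numpy as np
from scipy.ndimage import grey_erosion, gaussian_filter

def shrink_stars(image, star_mask, size=3):
    """Blend an eroded copy of the image back in, weighted by a star mask
    (1 on stars, 0 elsewhere) -- roughly what StarMask followed by a
    masked MorphologicalTransformation (erosion) does."""
    eroded = grey_erosion(image, size=(size, size))
    return star_mask * eroded + (1.0 - star_mask) * image

# Tiny demo: a blurred point source gets its peak dimmed.
img = np.zeros((21, 21))
img[10, 10] = 1.0
img = gaussian_filter(img, sigma=2.0)   # a fake, blurry star
mask = (img > 0.005).astype(float)      # crude stand-in for a star mask
print(shrink_stars(img, mask).max() < img.max())
```

In practice PixInsight's Morphological Selection operator (a blend of erosion and dilation) behaves better than pure erosion, which tends to leave stars looking chewed-on.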

Room for Improvement:

When I look at this image, it seems to me like it's still afflicted with a bit of `galaxy green', but when I applied an additional round of SCNR to it, it didn't seem to change. Some of the stars also wound up looking a bit pink, but at this point, I'm too tired to fight about it.

Next, there are the big, bloaty stars. These are the bane of all my images. My temptation is to blame them on the small aperture of my telescope. An 80mm scope will have a big, fat point-spread function, and if I want tiny stars, I'll need a bigger scope. That's probably true, to some extent, but I'll bet it's not the whole story. I am beginning to suspect that the big, halo-y stars are a consequence of the fairly severe stretching that the image has undergone. M33's dim, and it takes a lot of stretching. This probably brings the outer parts of the PSFs up to an objectionable brightness. With a longer total exposure time, I could probably get the faint parts of M33 to show up without as much stretching. (Of course, this raises the question of whether those outer portions of the PSFs would show up, too... hmm...) I'd love to figure out how to shrink those stars, so that it looks like I used a bigger scope. After a lot of fiddling around with Star Mask and Morphological Transformation, however, I haven't found a way. It remains a dream.

With more integration time, I think I could show more of the faint outer portions of M33. I'd love to get in night after night on this object, and really punch out every part of this galaxy. M33 is full of resolved stars and HII regions like NGC 604. I often think of M31 and M33 as the closest thing we've got to the Magellanic Clouds up here in the NoHem, and it would be nice to make the deepest, sharpest images of them that I can.

Naturally, many people have gotten some very nice, very deep images of M33. One of my favorites is this one by Stephane Guisard, because he shot it from the Atacama region of Chile - exactly the `wrong' place to get a good image of M33. Shows you how good places like Paranal are! And of course, there's a nice Hubble image of NGC 604, the most prominent star-forming region in M33. (In my image, the way I've got it oriented, NGC 604 is down and to the right of the galaxy's center, above two prominent, bloated orange field stars.)

Sunday, January 1, 2012

Eagle nebula in B&W H-alpha

Ever since this summer's imaging session at Lassen, I've wanted to process the hydrogen-alpha data that I acquired as a black-and-white image. I spent this evening working on the data in Pixinsight, and here's what I've come up with so far:


Acquisition:

I shot these data during two nights, using 15-minute subexposures through an Astrodon 3nm H-alpha filter, for a total exposure time of 5 hours. This was with the 80mm refractor and borrowed QSI camera that I described in a previous post.
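That works out to 20 subexposures, and stacking N subs buys you roughly a sqrt(N) improvement in SNR over a single sub (read noise aside):

```python
import math

sub_minutes = 15
total_hours = 5
n_subs = int(total_hours * 60 / sub_minutes)
print(n_subs)             # 20 subexposures
print(math.sqrt(n_subs))  # ~4.5x the SNR of a single 15-minute sub
```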

I've always enjoyed the look of black-and-white hydrogen-alpha images, and I wanted to try and make one myself. Images like this remind me of the days of heroic long exposures on gas-hypered Technical Pan 2415 film... days that I have to admit I didn't experience first-hand. And, frankly, I'm not too sorry about it, although it would make for some nice bragging rights. Me, I'm grateful for CCD cameras and autoguiders, which make the whole thing a lot more do-able, although it's still a fair amount of work.

The real key to an image like this is the narrowband hydrogen-alpha filter. I'm lucky that my friend from Cilice, who loaned me the camera, had invested in a filter with such a narrow bandpass. Besides bringing out all of the lovely emission nebulosity (which would look deep red in a color image), a filter like this makes the stars look very small! That's a very nice `perk', although it makes focusing and framing the image rather time-consuming. No need to shrink the stars in software when you have such tiny stars to begin with! I can't wait to get an H-alpha filter for my SBIG filter wheel, someday. I think I'll go with 3nm - it's worth the extra effort.

Processing in Pixinsight:

Like most CCD image processing, part of my workflow happened while the image was still linear, and then I took it to the non-linear realm with a histogram stretch, where I did further processing.

I started by using the A Trous Wavelet Transform (ATWT) tool to reduce noise, following the example from Juan Conejero's `tutorial post'. I used considerably less aggressive ATWT noise-reduction settings than Juan's example, though. Then I did some Richardson-Lucy deconvolution, again following a tutorial-like post by Juan. A real key to getting Deconvolution to work is the use of Dynamic PSF to model the telescope's point-spread function.

Once I had reduced noise with ATWT and applied a bit of deconvolution, I stretched the image into the non-linear realm using Histogram Transformation. (I just applied the stretch parameters from an AutoSTF into HistoTrans.) As per usual Pixinsight practice, I used the HDR Multiscale Median Transform (formerly HDRWT) to bring down the brightness in the central part of the nebula. I found that increasing the number of wavelet layers to 8 helped bring out detail nicely, and did the best job of `taming' the brightest areas. I did another moderate histogram stretch to increase contrast, and then applied the Local Histogram Equalization (LHE) tool, with a contrast limit of 2.0 and an Amount of 0.25.
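PixInsight's LHE is a close cousin of CLAHE (contrast-limited adaptive histogram equalization), so if you want to experiment outside PixInsight, scikit-image's version gives a feel for what the contrast limit does. The image here is just random stand-in data, not my H-alpha frame:

```python
import numpy as np
from skimage.exposure import equalize_adapthist

rng = np.random.default_rng(1)
# A low-contrast stand-in for the stretched nebula image (values 0..1):
image = 0.4 + 0.2 * rng.random((64, 64))

# clip_limit plays the role of LHE's contrast limit: higher values allow
# more aggressive local contrast enhancement.
boosted = equalize_adapthist(image, clip_limit=0.02)
print(image.std(), boosted.std())  # local equalization spreads the histogram
```

As with LHE, the cost of that extra local contrast is amplified noise in the dark areas, which is why a masked noise-reduction pass usually follows.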

A light application of ACDNR (Adaptive Contrast-Driven Noise Reduction) was the final step. I did this with the built-in lightness mask enabled, so as to apply it only to the darkest areas, which had had their noise increased a bit by LHE.

Assessment:

I'm reasonably pleased with how this image turned out. I like the way the deconvolution brought out detail around the `Pillars of Creation' and other dusty structures in the nebula. HDRMMT also helped to bring out a fair amount of detail, and LHE pumped up the contrast between adjacent light and dark areas.

Naturally, I'd love to get additional hours of data, to bring out even more nebulosity at a reasonably high signal-to-noise ratio. Maybe next summer!

Oh, I almost forgot: I flipped the image left-for-right, compared to my previous Eagle nebula image. I hadn't realized that the previous image was oriented incorrectly. I think this one matches the published `Pillars' images better.

Hmmm... I wonder... since LRGB combinations in Pixinsight are supposed to be assembled from non-linear images, I wonder if I could get the histogram of the RGB image into the right kind of shape to match this one, and use this B&W H-alpha image as the luminance for an LRGB combine?  Hmm... I ought to check that out.