Monday, October 23, 2017

Web Roundup for the 2017 Eclipse

It's been a long time since I updated this blog, mostly because I haven't been able to do much imaging. Starting in the fall of 2013, I took on a work schedule that has me teaching classes in the early mornings, and the resulting "early-to-bed" sleep schedule has put a damper on imaging and observing. But I've gotten out for the occasional session, such as imaging the Swan nebula (M17) from Lassen Peak in 2015 and 2016. And, best of all, in August 2017 my astronomy buddies and I saw the Great American Eclipse!

Insofar as my busy Fall 2017 work schedule has allowed, I've been working on a long writeup about the experience of my first total solar eclipse - which I'd been anticipating for 33 years! :-O That's going to take a while to complete, so here's a companion entry: A roundup of my favorite images, videos, and trip reports from the 2017 eclipse. I might update this entry from time to time, if I find more cool stuff.

Let's separate these links into categories:

Closeups of the Eclipsed Sun:

The leader in this category, far and away, has to be the video by Jun Ho Oh's group showing the limb of the Moon during totality. This video is simply mind-blowing!

Professor Oh's group appears to have used a Rainbow Astro telescope mount to make two circuits of the Moon's limb, from just before second contact (i.e. as the eclipse was becoming total) to just after third contact (the end of totality). The result is a tour de force. Part of what makes this so satisfying for me is that I saw a fair amount of this detail visually, through a telescope. The telescopic view was the biggest win for me.

Prof. Oh's group has some other eclipse videos on Vimeo as well: A less zoomed-in video of the eclipsed Sun, and a video of their eclipse day at Warm Springs, Oregon.

For still images of the totally-eclipsed Sun, there is nothing to equal the images of the 2017 eclipse acquired and processed by Miroslav Druckmuller and his collaborators. These are the ne plus ultra of eclipse images! It's well worth visiting his main page of eclipse images - prepare to have your mind blown!

Canada's Alan Dyer is another reliable eclipse imager, and he's made a high-dynamic-range composite of the corona, too. This image is linked from his eclipse-day writeup. Also make sure to check out his "Totality Over the Tetons" video!

My mind was really blown by Alex Roberts' gallery of closeup images! He captured some marvelous detail, especially in the chromosphere (the red layer right above the Sun's blazingly bright photosphere). His "Fiery Prominences" shot really rocks my world. Brings back such incredible memories of seeing that chromosphere, and those proms, through the eyepiece. Wow.

At the headquarters of DayStar, a company that makes solar filters, experienced eclipse chaser Fred Bruenjes shot video of totality. This is quite interesting, since it's from near the edge of the path of totality. Personally, I'd find it hard to watch a total eclipse from anywhere other than near the centerline, but now I kind of see why some people go near the edge. The Baily's beads are quite fascinating, the way they last a long time and `crawl' along the edge of the Moon. I'm glad I was on the centerline, but I'm also really glad they shot this video! What a unique view :-)

(Maybe if I live to see the 2052 totality, I'll try to watch it from Pensacola or Wakulla Springs! Bet they'll have some good Baily's beads and nice long diamond rings! That would be a good `probable last eclipse'... if I make it that long!)

My friend and fellow Bay Area astro-imager Steve Migol got a really nice shot of the chromosphere.

Another Bay Area imager, Rogelio Bernal Andreo, got his "Great Gig in the Sky" shot at Phillips Lake, Oregon. He has an epic (but very positive!) travel story about having car trouble on the way to the eclipse. But he made it!

People Watching the Eclipse:

I have to start with a short video of totality in Smith's Ferry, ID. My friends and I were several dozen yards behind the group shown here. What an experience!

There's something important to note about most of the videos you'll see like this: You may notice that they don't look dark until the last several seconds before totality. That's because modern video cameras (mostly in cell phones and GoPro-type cameras) do a good job of compensating for the falling light level. This makes things look pretty normal until right before second contact (C2). In fact, the real-life light level looks noticeably different during the last 15 minutes or so before C2, and the speed and intensity of its fall are incredible to witness - it's almost impossible to capture on video. This incredible falling light is part of what makes a total eclipse the `greatest show on Earth'!

Madras, Oregon was one of the most-publicized places to watch the eclipse. There's a nice video of the eclipse in Madras from Falling Rain Films.

A student of mine mentioned seeing the eclipse at the Symbiosis 2017 festival. Here's a video of the Symbiosis folks enjoying totality (with Pink Floyd's "Eclipse" playing in the background, of course!)

A very nice post-eclipse article is this one from the Salem, Oregon Statesman-Journal about how *little* adverse impact the eclipse-goers had on Oregon public lands. Way to go, chasers! Thousands of people visited the eclipse path and gave eclipse chasers a GOOD name.  :-D  Awesome!

Destin from Smarter Every Day has a great video of his group shooting a transit of the International Space Station across the Sun during the partial phase of the eclipse!

I can't forget the preliminary video from David Makepeace! He's one of my favorite eclipse-chasing bloggers. I'm looking forward to more writeups and videos from David about Totality 2017 :-)

(Oh, and I can't leave out Glenn Schneider's 2017 writeup or Fred Espenak's eclipse-day gallery! They're both legendary eclipse chasers.)

Here's a Flickr image pool, curated by NASA.

Mountaintop Eclipse Videos:

During my 33 long years of waiting for the eclipse, I often wondered if I'd try to see it from a mountaintop. I thought a lot about the Grand Teton range. I first learned about mountaineering from the Exum Guides in the Grand Tetons. (This was in 1982 and 1983, so basically imagine one of the kids from Stranger Things climbing the Grand, shortly before the events of Season 1.) Sometimes I contemplated trying to climb the Grand for the 2017 eclipse. Over the years, it seemed too risky, weather-wise. In the end, though, fortune favored the bold! Aaron Glasenapp's 360 video from the summit of the Grand is really something. I recommend standing up and watching it on a tablet device. Hold the tablet in front of your face and pivot around to see the view from different directions. Neat!

(This is a little bit like the 360 video of the 2016 Indonesian eclipse from a beach, shot by Daxon of 360 Thrill.) 

Aaron Grafing shot the 2017 eclipse from the Middle Teton. Note the nice views of the south side of the Grand!

Now I am really psyched to try and see totality from Spain's Picos de Europa in 2026! Then maybe a peak in the Canadian Cordillera in 2044? Or maybe Lassen Peak or Brokeoff Mountain in 2045?

After passing over the Tetons, the shadow went over the Wind River range. Here's a wonderfully sharp video from Gannett Peak, Wyoming's highest mountain. Austin Cousineau's GoPro footage can be viewed in full high definition, and I really enjoy watching it on my 2560x1440 computer monitor. This video does a great job of capturing the weird orange light that I remember seeing all over the landscape, just before and after totality.

(Astrophysics interlude: I assume the orange color - if I perceived it correctly - is due to: 1) the longer-wavelength blackbody peak of the limb-darkened crescent Sun, and 2) A greater proportional contribution of the reddish chromosphere to the light of the thin crescent.)
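Just to put rough numbers on idea 1: Wien's displacement law gives the peak wavelength of a blackbody at a given temperature. Here's a quick Python sketch - the ~4800 K figure for the limb-darkened crescent is purely my own illustrative guess, not a measured value:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_nm(temp_k):
    """Peak blackbody wavelength, in nanometers, for a temperature in kelvins."""
    return WIEN_B / temp_k * 1e9

# Full solar disk, effective temperature ~5772 K:
print(round(peak_wavelength_nm(5772)))  # 502 nm (blue-green)
# A strongly limb-darkened crescent, assumed ~4800 K (my illustrative guess):
print(round(peak_wavelength_nm(4800)))  # 604 nm (orange)
```

So a few hundred kelvins of limb darkening really does push the peak from green toward orange, consistent with what I remember seeing.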

Before the shadow hit Wyoming, it passed over Idaho's highest point, Borah Peak. That would have been another great mountain to climb on eclipse day. Glad some folks did just that! It was smokier around Borah Peak than in Smith's Ferry, but not smoky enough to spoil people's views. A. J. Frabbiele has a nice timelapse from Borah.

I assume Mark Huneycutt's video was shot on a mountaintop in the Appalachians somewhere. Loud audio in a few spots, but this is another great crowd reaction.

Images and Videos from the Air and from Space:

Alaska Airlines had at least one flight that went through the path of totality. Their video gives a nice sense of what it was like on board that aircraft. Two things stand out to me: 1) The sped-up view of the departing shadow at 0:52 is amazing! ... Off it goes towards the Oregon coast!  2) One of the pilots is wearing a GoPro on his head, and I wish I'd done that, too. I'd like to have an even better record for post-Monday-morning quarterbacking my eclipse experience.

This NBC News article highlights the eclipse flight, including planetary scientist Tanya Harrison and astronaut Michael Barratt.

Here's a picture of some F-16 fighter planes on the ground during totality.

Liem Bahnemann captured the umbra passing over central Oregon from a high-altitude balloon. What I find most interesting about this is how slowly the scene changes. It's a lot like the mountaintop videos, in which the shadow doesn't seem to move towards the mountain very quickly. That surprised me at first, until I realized the umbra-penumbra transition is so gradual that no obvious `edge of the shadow' is visible. If you could visualize the actual edge of the umbra, along with various % illumination contours in the penumbra, you'd see these things racing over the ground at 1000+ miles per hour.

NASA's satellite DSCOVR imaged the Moon's shadow passing across the Earth, and their Lunar Reconnaissance Orbiter saw it from lunar orbit!

And, of course, the astronauts on the ISS saw the umbra from orbit, as shown on this page and this page from NASA.

If I find more good links from the 2017 eclipse, I'll try to update this page. If you decide to try and see a future eclipse - clear skies to you!

Saturday, February 16, 2013

2012's M33, at last

Sometimes I think M33 is my Great White Whale. I'll probably keep shooting this thing every autumn - or nearly so - for as long as I can operate a CCD camera and telescope. I don't have the same vengeful feelings toward it that Ahab had toward Moby-Dick, but I am somewhat obsessed with it. In fact, it wouldn't surprise me if a lot of amateur imagers are, too.

M33, the Triangulum galaxy, distance about 2.5 - 3 million light years.
(You can click on the image for a larger version, or click here for Flickr.)

Why? I think it's because M33, the Triangulum galaxy, looks like a `logical next step' after M31, the great Andromeda Galaxy. Getting a decent M31 image isn't trivial, but it's bright enough that one can get something presentable without too much exposure time, and without having to work too hard at processing the data. M33 is different, however. Its surface brightness is lower, and consequently one has a significantly harder time getting the dimmer, outer portions of the galaxy to look good. Even when using a sensitive CCD camera, and when calibrating one's light frames with darks, flats, and biases, much of M33 can easily come out looking noisy and ugly. (I haven't totally overcome those issues in this image, but I think I've imaged the outer regions a bit better than before.) Unless you've got a very `fast' (i.e. numerically small f-ratio) imaging system, and/or a great deal of time, it's hard to get much out of M33. All of this results in M33 being a rather harder thing to acquire and process than M31.
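To see why low surface brightness is so punishing, here's a rough sky-limited SNR calculation in Python. All the rates are made-up illustrative numbers, not measurements from my gear:

```python
import math

def snr(signal_rate, sky_rate, t):
    """Shot-noise-limited SNR per pixel: object counts over the square root
    of total (object + sky) counts."""
    return signal_rate * t / math.sqrt((signal_rate + sky_rate) * t)

sky = 10.0     # sky electrons/pixel/second (made up)
hour = 3600.0
bright = 2.0   # e-/px/s, a stand-in for M31's disk
faint = 0.5    # e-/px/s, a stand-in for M33's outer arms (4x dimmer)

print(round(snr(bright, sky, hour), 1))      # 34.6
print(round(snr(faint, sky, hour), 1))       # 9.3
# In the sky-limited regime, matching the brighter target's SNR on a target
# 4x dimmer takes roughly 4**2 = 16x the exposure time:
print(round(snr(faint, sky, 16 * hour), 1))  # 37.0
```

That quadratic cost in exposure time (or, equivalently, the payoff from a `faster' f-ratio) is exactly why M33's outer regions are so much harder than M31's.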

I shot the data for this image at Calstar 2012. Calstar is a yearly get-together of Bay Area and SoCal amateur astronomers, at Lake San Antonio in inland Monterey County. This event is near and dear to many of our hearts, mostly due to its no-frills nature. The sky at LSA can get very dark, dark enough to see not only the gegenschein, but even a nearly horizon-to-horizon zodiacal `band'. I've spent many hours picking out individual objects in M31 and M33, through an 18" Dobsonian telescope. It's a great site for imaging and visual observing.

Although I shot these data in September 2012, it's taken until February 2013 to get them processed and posted on the blog. What an epic it's been! The main thing that ate up all this time was a seemingly-endless series of attempts to properly deconvolve (i.e. sharpen) the image, as described below.

Get the L out

The most unexpected thing about my processing workflow was how much data I ended up throwing away. I'd shot unbinned luminance in 2011, along with 2x2 binned color (and used those data to produce a previous version of M33). I also shot additional unbinned L and binned color in December 2011, and my 2012 data set included a lot of unbinned L, too.

In the end, I chucked all the binned R, G, and B, and all the unbinned L. In the former case, I never found a good way to combine the binned and unbinned color images. Maybe I should work that problem more, someday, but for now I've given up on it. And when I tried to make an LRGB image from all-unbinned data, it never looked any good. I have two ideas on why this was so:

1) I don't think my L image was any sharper than my RGB image, nor sharper than any of the individual (stacked) color images. It's certainly true that if one is shooting unbinned L and binned R,G,B, the former will have more detail than the latter. That's the whole idea behind that trick. But if the data are all unbinned, then it comes down to a matter of optics. The optical system had better make a Luminance image that's sharper than (or at least as sharp as) the RGB image. And with a refractor (like the Orion ED80 refractor I used), that's a tall order. Even the best refractors will have a tiny bit of chromatic aberration, which means that the R, G, and B components of the Luminance won't all be focused the same. So, I've come to suspect that the Luminance image will, in fact, be a tiny bit blurrier than the RGB image, and I think that's so in my case. I'm just better off shooting straight R, G, and B, unbinned.

2) The uselessness of unbinned LRGB is described by Juan Conejero, author of Pixinsight software, in a thread on the Pixinsight forum. Juan points out that adding luminance to an image reduces chrominance, and so it really doesn't do any good to try unbinned LRGB. As far as I'm concerned, goodbye Luminance. I think I'll mostly use my L filter for focusing, drift alignment, and framing. (I might use the unbinned-L-and-binned-RGB trick if I was shooting a large, diffuse nebula that doesn't have much small-scale detail, though.)

After all this, I think I'll try to acquire image data by means of simple, unbinned R, G, and B. This may require some sort of automated acquisition workflow, however, in order to get some data through each filter, during each imaging session. There's a lot of focusing, slewing, and framing involved!
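None of the following is tied to any real mount- or camera-control API - it's just a Python sketch of the scheduling logic I have in mind: cycle through the filters in short blocks, so that every session yields some data in each color instead of one filter per night.

```python
def build_plan(filters, block_size, sub_minutes, session_minutes):
    """Round-robin exposure plan: short blocks of each filter, repeated
    until the session's available time is used up."""
    plan, used, i = [], 0.0, 0
    while used + sub_minutes <= session_minutes:
        filt = filters[i % len(filters)]
        for _ in range(block_size):
            if used + sub_minutes > session_minutes:
                break
            plan.append(filt)
            used += sub_minutes
        i += 1
    return plan

# A 3-hour session of 4-minute subs, in blocks of 5 per filter:
plan = build_plan(["R", "G", "B"], block_size=5, sub_minutes=4.0,
                  session_minutes=180.0)
print(len(plan))                           # 45 subexposures fit
print({f: plan.count(f) for f in "RGB"})   # 15 each of R, G, and B
```

A real implementation would wrap each block with the refocus/slew/frame steps, but the bookkeeping above is the part I'd want automated.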

Pixinsight processing workflow

I followed a fairly `basic' workflow in Pixinsight. I suppose you might call this the `non-multiscale' approach, because I didn't try the RBA-like processing steps I used in the previous M33 image. Maybe next time! (I'm particularly intrigued by Emanuele Todini's recent post about his `multi-scale-layer' workflow.) Here's what I did (the abbreviations will probably be familiar to PI enthusiasts):

  1. Calibration using the Batch Preprocessing script
  2. Cropping off the outermost part of the image
  3. Deconvolution of high-SNR areas (this took forever...)
  4. MMT-based noise reduction of low-SNR areas (not too hard, thankfully)
  5. Nonlinear stretch with HT
  6. Dim the brightest regions a tiny bit with HDRMT
  7. A little bit of contrast boost with LHE
  8. Increase overall color saturation with Curves (Lum mask in place)
  9. Increase color saturation in HII regions, blue spiral arms, and orange galaxy core with ColorSaturation tool (Lum mask in place)
  10. Small amount of denoising the 1- and 2- pixel-scale layers with ATWT
  11. Dimming stars a little bit with StarMask and MT
  12. Making a mask for the largest stars with ATWT, HT, MT, and Convolution
  13. Desaturating (and slightly dimming) the biggest stars with Curves and MT
  14. Tiny tweak to the black point with HT
  15. Color-space conversion, resampling, and saving as JPEG for web publishing

The Agony of Deconvolution - and a savior!

I spent the entire winter beating my head against Deconvolution. Whether it was on the RGB image, or on the (ultimately-not-used) Luminance image, I could not keep the stars from showing subtle dark `ringing' artifacts. I've successfully applied Deconvolution before, with results that pleased me, but this image just wouldn't deconvolve, for some reason. It drove me nuts for months.

In the end, it was something simple. It turned out to be the point-spread function I was using in the Deconvolution module. I knew that the parameters of the PSF were important, but I had no idea how important. I'd used Mike Schuster's excellent PSF evaluation script, but somehow that must have produced an averaged PSF that wasn't quite what Deconvolution wanted.

I came to realize this when I watched the Deconvolution videos in the new Pixinsight series by Warren Keller and Rogelio Bernal Andreo. There's some simple information in there, concerning how to measure one's PSF, and it did the trick! I don't want to give it away here, because I think Warren and RBA deserve to be rewarded for making the videos and helping people learn PI. I don't make any money off their videos, but I will say this... after feeling my months-long `Deconvolution Frustration' go away, I consider the money well spent! If, at any point during my long winter of frustration, someone had said to me "Your problem will go away if you spend an amount of money equivalent to Warren and RBA's `PI Part-1'," I'd have said "Where do I sign??"

Tuesday, August 21, 2012

The Phoenix Butterfly

I spent last week on an imaging trip near Lassen Peak, in northern California. It's a minor miracle that I got an image of the Butterfly Nebula (IC 1318), given how much forest-fire smoke was in the air. The last several years of Lassen trips have been blessed with clear, blue, gorgeous skies, for the most part. Forest fires are par for the course in the area, however, and it was only a matter of time before the dice came up snake-eyes, smoke-wise. In other words, I was bound to lose a Lassen trip to forest fires, someday. That someday was the August 2012 dark-moon cycle... almost. Despite all the smoke (and clouds), there was enough clear sky to image some of the nebulosity around the star Gamma Cygni. I like to think of this as `a butterfly rising like a phoenix from the ashes of a fire-plagued season'.

IC 1318 d and e and LDN 889, a.k.a. the Butterfly Nebula, imaged from Lassen Peak.
Click on the image for a larger version, or click here for full size.

Only a couple of nights in my week-long trip had worthwhile skies, so I had to abandon my plans to image the Swan nebula (M17) and the Triangulum galaxy (M33), and concentrate on a single object that would be near the zenith for most of the night. An object near the zenith - the overhead point - is seen through the least possible atmosphere, which in this case meant through the least possible smoke, depending on how the smoke was being blown around by the wind.

During northern-hemisphere summer nights, the region of the zenith is dominated by Cygnus, the Swan. Also known as the `Northern Cross', Cygnus is a grand constellation, one of the few that really looks like its namesake. Right at the heart of the swan is the star Gamma Cygni (a.k.a. Sadr). A good deal of bright emission nebulosity and dark dust can be seen around Gamma Cygni, making it a popular target for imagers. I happened to pick up the September 2012 issue of Sky and Telescope right before my trip, and when I had to pick an imaging target in Cygnus, I thought of the Gamma Cygni area. Sue French and Steve Gottlieb had covered this region in two very nice articles in the September S&T, and Rob Gendler's image, accompanying Steve's article, really got me excited about this area.

According to Steve's article, the `butterfly' is formed by two portions of the IC 1318 emission-nebula complex (IC 1318 d and e), in front of which lies the Lynds dark nebula 889, a mass of dark absorbing dust. The bright emission nebulosity forms the wings of the butterfly, and LDN 889 forms the body, complete with a head that sports two antennae! Like other `emission' nebulae, the bright material glows because of the excitation of the hydrogen atoms of which it's made. IC 1318 is a star-forming region, and ultraviolet light from hot, massive, young stars causes the hydrogen atoms to glow, a little like a fluorescent light tube or a fluorescent mineral. LDN 889 consists of microscopic grains of interstellar dust, which absorb the light from the nebula. (The sky over the Lassen Peak region often contained clouds of smoke that dimmed the stars in much the same way.)

The Reading fire, one of the fires that turned the blue sky brown for much of this year's trip.
(Image credit: National Park Service, Lassen Volcanic National Park)

Data Acquisition

On two nights, the sky was acceptably transparent for imaging, and I managed to acquire three hours of data through a clear (`Luminance') filter, in 5-minute subexposures. The last night of the trip yielded a very nice sky, thanks to some fortuitous wind patterns, with the Milky Way blazing bright and `sugary' overhead. Two of my three hours of data were acquired under that sky.

I would have liked to shoot some color data, but equipment issues put an end to that idea.  Perhaps foolishly, I decided to try and `drive' my mount from my laptop. Maxim DL was able to talk to the mount and order it to slew around the sky, but I kept having a problem with `backwards slews' in the western part of the sky. I'd have shot an additional 3 or 4 hours of data on the final, clear night if I hadn't been trying to debug this problem. Oh well, I'll get it sorted eventually, and at least I got three hours of luminance.

Pixinsight processing:

The data for this image followed my standard Pixinsight processing routine for a luminance-only image:

  1. Calibrate subexposures with the BatchPreprocessing script
  2. Register and stack the calibrated subexposures
  3. Deconvolution to sharpen the bright, high-signal-to-noise-ratio (high SNR) areas
  4. Multiscale Median Transform to smooth the dark (low SNR) areas
  5. Stretch the brightness values of the pixels with Histogram Transformation and Local Histogram Equalization
  6. Shrinking (actually more like dimming) stars with StarMask and Morphological Transformation
  7. Cropping, conversion to standard ICC color profile for web publishing, and saving as JPEG.

Room for Improvement

(Pixinsight geekery ahead...)

Naturally, I would have liked to acquire more data, including color data. Processing-wise, I noticed that some small-scale, `salt-and-pepper-like' noise was introduced somewhere in the processing. This probably happened during the Histogram Transformation or the Local Histogram Equalization, despite my use of a luminance mask. The luminance mask was made in the usual way, by applying an auto-STF to a copy of the image (via HT). I wonder if I should have done a more elaborate intensity transformation when I made the luminance mask, so as to protect the dark areas better, and to get a more effective deconvolution in the bright areas.

After the initial star-shrinking, which worked mostly on the small stars, I tried to build a new star mask for the larger, more bloated stars, but after a lot of experimentation, I hadn't gotten much of a result. I decided to post the image as-is, but I still dream of dealing with the large stars someday.

Sunday, July 15, 2012

Making `Adaptive' progress with Pixinsight noise reduction

Here's a short technical article for my fellow Pixinsight learners. It's about some progress I recently made in learning how to reduce background noise in astronomical images. These images, like any images made in low-light situations, have the potential to be plagued by a `grainy' appearance, particularly in the dark background areas. In the parlance of astronomers and amateur astro-imagers, we say that our images commonly exhibit `noise' in the `low-signal-to-noise-ratio (low SNR) areas'. This noise can be reduced by racking up as many hours of exposure time as possible, but there's a limit as to what our schedules (and the weather) will allow.
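Here's a tiny simulation of the `more hours' effect: averaging N subexposures knocks the background noise down by roughly the square root of N. The noise level and frame counts are arbitrary illustrative values:

```python
import random
import statistics

random.seed(42)

def stacked_noise(n_subs, sigma=10.0, n_pixels=20000):
    """Standard deviation of a dark background pixel after averaging n_subs
    frames, each carrying Gaussian noise of the given sigma."""
    stacked = [
        sum(random.gauss(0.0, sigma) for _ in range(n_subs)) / n_subs
        for _ in range(n_pixels)
    ]
    return statistics.pstdev(stacked)

one = stacked_noise(1)
sixteen = stacked_noise(16)
print(round(one / sixteen, 1))  # ~4: 16x the subexposures, ~1/4 the noise
```

That square-root law is exactly why our schedules and the weather set a hard limit: halving the graininess again means quadrupling the hours.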

Noise can also be reduced somewhat in post-processing by using software routines, such as those in Pixinsight. This post is a journal entry of sorts, to record how I managed to smooth the noisy background areas of an image of a galaxy cluster. It allowed me to go from this:

To this:

For me, this was a much better result than I'd gotten before, and it happened because I managed to correctly tweak a setting in one of Pixinsight's noise-reduction routines. Details follow, for any other PI users who might find the information useful.

Image Acquisition

As I described in an earlier post, I'd already taken one stab at shooting Markarian's Chain, a prominent grouping of bright galaxies in the Virgo galaxy cluster. After my first attempt, I found a weeknight in late May 2012 when I could re-shoot the Chain, with proper framing this time! The data were acquired with my ED80 imaging rig: An Orion ED 80 f/7.5 refractor and an SBIG ST-8300M CCD camera, on a Losmandy G-11 mount. As I only had one night to acquire the data, I shot through a clear (`Luminance') filter, so as to make a black-and-white image. I managed to get 32 five-minute subexposures, for a total exposure time of 2 2/3 hours.

Ideally, I'd have liked to get at least several hours of exposure time on a target like this, so as to build up a decent SNR in the faint outer parts of the galaxies, and in the dark background areas. I knew that if I was going to make a final image that showed more than just the bright cores of the galaxies, it would take some wizardry with Pixinsight's noise-reduction settings.

Noise Reduction at the Linear Stage - The General Idea

I followed the same general strategy for this image as I had done with the last several images, namely to reduce noise (in the dim, low-SNR areas) and sharpen details (in the bright, high-SNR areas) while the image was still at the linear stage. In other words, the noise reduction and sharpening (`deconvolution') were done while the pixels still had their original brightness values - values that are nearly all too dark to show up well on the computer screen until they are mathematically `stretched'. Stretching destroys the linear relationship between the pixel values and the true brightness of the objects in the scene, which is why these steps come first. This basic strategy was laid out by Pixinsight creator Juan Conejero. The ever-obliging Harry Page made a nice video of this type of workflow, and the technique was updated by Juan for a new version of the key tool.
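For the curious, the classic nonlinear stretch in Pixinsight is built around a midtones transfer function. A quick Python sketch shows how it lifts faint pixels and, in doing so, destroys linearity (the parameter values here are arbitrary):

```python
def mtf(x, m):
    """Midtones transfer function (the form used in Pixinsight's stretches):
    maps 0 -> 0 and 1 -> 1, and pulls the midtones balance point m up to 0.5."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# A faint linear pixel value gets lifted dramatically...
print(mtf(0.01, 0.1))
# ...and linearity is gone: a pixel twice as bright is no longer twice as bright.
print(mtf(0.02, 0.1) / mtf(0.01, 0.1))
```

Once that ratio stops being 2, any noise reduction or deconvolution that assumes linear data is working with corrupted information - hence `do it at the linear stage'.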

The general idea is to use a powerful-yet-rather-mysterious tool called Multiscale Median Transform (MMT) to smooth the image to a greater or lesser degree. This smoothing can be (and needs to be) applied more strongly to the dimmer, noisier areas. Conversely, it can be (and needs to be) applied less strongly to the brighter areas. A copy of the image, called a luminance mask, is used in order to apply the process more to the dark areas, and less to the light areas - see Juan's posts and Harry's video for more information on luminance masks.
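To make the MMT idea a little less mysterious, here's a miniature multiscale-median decomposition in Python, built from scipy's median filter. It's a toy version of the concept, not PI's actual implementation, but it shows the two key properties: the layers reconstruct the image exactly, and thresholding the fine layers smooths the noise.

```python
import numpy as np
from scipy.ndimage import median_filter

def mmt_layers(img, n_scales=3):
    """Decompose an image into median-based detail layers plus a residual.
    Layer k holds structure at a scale of roughly 2**k pixels."""
    layers, smooth = [], img
    for k in range(n_scales):
        size = 2 * (2 ** k) + 1          # 3-, 5-, 9-pixel windows
        next_smooth = median_filter(smooth, size=size)
        layers.append(smooth - next_smooth)
        smooth = next_smooth
    return layers, smooth                 # detail layers + large-scale residual

def mmt_denoise(img, thresholds):
    """Keep only large-amplitude detail coefficients; the biggest threshold
    goes on the finest (noisiest) layer."""
    layers, residual = mmt_layers(img, n_scales=len(thresholds))
    out = residual.copy()
    for layer, t in zip(layers, thresholds):
        out += np.where(np.abs(layer) > t, layer, 0.0)
    return out

rng = np.random.default_rng(0)
noisy = 0.1 + rng.normal(0.0, 0.02, size=(64, 64))   # flat, noisy background
layers, residual = mmt_layers(noisy)
print(np.allclose(sum(layers) + residual, noisy))    # decomposition is exact
smoothed = mmt_denoise(noisy, thresholds=[0.06, 0.04, 0.02])
print(smoothed.std() < noisy.std())                  # background got smoother
```

This also makes the Threshold-per-layer settings a bit less baffling: they decide how large a coefficient has to be, at each scale, to be treated as real structure rather than noise.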

So, to start at the beginning: I had calibrated my light frames with dark, flat, and bias frames, and aligned, registered, and statistically combined the calibrated subexposures. Here's an autostretched closeup of the noise I had to deal with:

My goal was to try and smooth the noise, although I knew I wouldn't be able to do a perfect job of it. I wanted to smooth it enough, however, to make it worth stretching the image so as to bring out most or all of the faint outer parts of the galaxies. Part of this process involves protecting parts of the image from the noise-reduction tool, and this is the goal of luminance masking.

Luminance Masking - Juan Knows Best

In a recent post, I described my use of PI's Range Selection tool to make luminance masks. I thought it made a lot of sense to build at least two or three separate masks, and then to apply MMT noise reduction to the different zones that would be delineated by these masks. In my `M87 Chain' image, I treated three zones differently: 1) the dark, noisy background and 2) the dim, fairly noisy outer parts of the galaxies each got their own MMT noise-reduction settings, while 3) the bright core areas of the galaxies got Deconvolution sharpening instead of noise reduction.

Sometime later, I found myself thinking `wouldn't it be nice to be able to make just one mask, which would automatically apply more protection (from the noise-reduction routine) to the brighter areas, and smoothly reduce the amount of protection applied to the dimmer areas?' After a little while, I slapped my head and said `You fool, that's what Juan taught us to do in the first place! He uses an inverted copy of the image itself as the luminance mask, and this does the automatic masking you're looking for!' This is a really basic idea, and I felt silly for having concocted my separate-masks approach in the first place.

So, I made a copy of the image, inverted it, blurred it with the Convolution tool, and applied it to the image:

The redder areas have more protection applied to them, and the closer-to-black areas have less protection applied to them, so they will undergo more noise reduction, even with only one application of the MMT tool.
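Here's the whole masked-noise-reduction idea in miniature, in Python. A Gaussian blur stands in for the MMT step, and the image is a made-up galaxy-on-noisy-background frame:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Toy frame: a bright, flat "galaxy" disk on a dark, noisy background.
img = rng.normal(0.05, 0.01, size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
disk = (xx - 32) ** 2 + (yy - 32) ** 2 < 14 ** 2
img[disk] += 0.85

# The luminance mask: an inverted, blurred copy of the image itself.
norm = (img - img.min()) / (img.max() - img.min())
mask = gaussian_filter(1.0 - norm, sigma=3.0)  # ~1 on background, ~0 on the galaxy

# Blend in a smoothed version through the mask: strong noise reduction in
# the dark areas, hardly any on the bright disk.
smoothed = gaussian_filter(img, sigma=2.0)     # stand-in for the real MMT step
result = mask * smoothed + (1.0 - mask) * img

galaxy_change = float(np.abs(result - img)[28:36, 28:36].mean())
background_change = float(np.abs(result - img)[:10, :10].mean())
print(background_change > galaxy_change)  # the background gets most of the smoothing
```

One application of the smoothing tool, one self-made mask, and the protection falls off automatically from bright to dark - just as Juan taught us.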

First Attempt at MMT - Little Dark Blobs

I love Pixinsight because it's so powerful, and because the people who are really good at it are able to achieve some amazing results. I aspire to understand all of PI's tools at a `master' level someday, if that's even possible. However, some of those tools, like MMT, have a lot of settings to tweak, and it's hard to know what values to use for the various settings. As I describe what I did with MMT in this case, I'll assume the reader has examined the posts by Juan Conejero that I linked above.

When using MMT for noise reduction, one generally needs to check the Noise Reduction box for each of the wavelet layers. Additionally, it seems that MMT noise reduction should be applied more strongly for the small-scale layers, and less strongly for the large-scale layers. (As near as I can tell, this seems to mean using larger Threshold settings on the smaller-scale layers, although for the life of me I don't know what the Threshold numbers mean.) After some iterating, I arrived at these settings:

These settings did manage to smooth the background, but I was left with a number of little dark blobs scattered around the image - I think you can see them here:

Hmm. Close, but no cigar. If only there were a way to get rid of those little dark blobs!

`Adaptive' to the rescue

Casting about for a solution, I read the long tooltip for the `Adaptive' sliders in the MMT noise-reduction dialog. It contained this line: "Increase this parameter when you see isolated, high-contrast, relatively small structures that survive after finding an otherwise good noise threshold value." This sounded promising. But how to minimize the time I'd have to spend iterating the Adaptive values?

Here's what I did: I used PI's ExtractWaveletLayers script to break the image down into its constituent layers of detail. Zooming in closely to each layer, I noticed that the `dark blobs' seemed to be about 16-32 pixels in size, roughly speaking. So, I gently increased the Adaptive settings for the 16-pixel and 32-pixel wavelet layers in MMT:

Having done this, I got a better, smoother result:

It's a small victory, and I suppose it's nothing to brag about, but for a Pixinsight learner like me, it felt good to be able to smooth a noisy image this much, without the image looking too drastically over-smoothed. I was eventually able to use this noise reduction as one step in the overall processing of my Markarian's Chain image. That image will be the subject of the next post!

Wednesday, June 27, 2012

Globular Star Cluster M3

Harbinger of Summer - that's how I always think of the globular star cluster M3.

A little less than a hundred years ago, Harlow Shapley measured the distances to the globular clusters, and realized they form a spherical halo around a point that lies in the direction of the constellation Sagittarius. That was the beginning of the realization that our solar system is not at the center of the Milky Way galaxy. Globular clusters like M3 are classic summer objects; I've lost count of the number of times I've passed the short summer nights looking at them, through any number of different telescopes. Constellations like Sagittarius itself are rich hunting grounds for `globs' large and small, bright and dim. A trip to the southern hemisphere has, as one of its many treats, views of the huge, blazing Omega Centauri and 47 Tucanae globulars. Simply put, globular clusters are classic summer `eye candy'. Here's an image of M3 that I shot during the June 2012 dark-moon cycle:

Globular star cluster M3
8-1/3 hours total exposure time
Evenly split between unbinned R, G, and B, shot in 4-minute subexposures.
Click the image for larger version, or click here for full size.

M3 will always have a special place in my astro-heart, since it was the first object I ever saw through a large amateur telescope. That was just over 10 years ago, in April of 2002. I went to one of my first Bay Area observing events, at a local hilltop site. I had my little 5" Meade ETX-125, and I was ready and excited to see some deep-sky objects! To my amazement, Bruce Jensen set up an 18" Starmaster dobsonian next to me. I'd never even looked at a telescope that big, at such close range, let alone looked through one. Bruce showed me M3, which was still rising in the east, and I was blown away. There was no going back - aperture fever took hold of me for good! (I'm lucky enough to enjoy my own views through an 18" scope now, something for which I'm very grateful, even if I'm mostly using my imaging rig these days.)

M3 is one of the farther-west of the bright globulars, so we see it in the (northern-hemisphere) spring, before the other globs are well-placed for viewing in the summer sky. I'll always associate M3 with April, May, and June, when we're enjoying the galaxies of Coma Berenices and Virgo, taking peeks at globs like M3 and M5, and dreaming of the summer Milky Way...

Acquisition and Processing

I shot the data for this image on three nights during the June 2012 dark-moon period, from the same site where Bruce Jensen showed me M3 through his Starmaster all those years ago. I decided to shoot unbinned R, G, and B images, to try and maximize the resolution of the image, and to avoid having to match the histogram of a luminance image to that of an RGB image. In the end, over the three sessions, I got about 36 four-minute subexposures through each filter. As with my other recent images, I used my Orion ED80 f/7.5 refractor on a Losmandy G-11 mount, with a short-tube 80 refractor and StarShoot camera for autoguiding. My trusty SBIG ST-8300 monochrome CCD camera gathered the photons, with a chip cooled to -15C.

Pixinsight processing followed my usual workflow, with deconvolution (i.e. sharpening) of the innermost core of the cluster, as well as smoothing of the background, done while the image was still linear. A wee touch of HDR Multiscale Transform helped to `un-blow-out' the cluster's core. I pumped up the color saturation in the brightest part of the cluster, so as to bring out the differences between the blue and orange stars.

Pixinsight geekery: The main thing I learned while processing this image was the usefulness of the `Amount' slider in Multiscale Median Transform's noise-reduction routine. As with many of PI's tools, MMT is powerful yet somewhat hard to understand. I don't really know how to set the parameters for its noise-reduction routines, and I've always wanted to be able to increase the amount of noise reduction ever-so-slowly. Well, I should have guessed that the `Amount' sliders in the noise-reduction settings for each wavelet layer do exactly that. I guessed at some Threshold values, starting with 4 for the first (1-pixel-scale) layer and decreasing roughly by half as I went from layer to layer. Then, having set those Threshold values, I set all of the Amount sliders to 0.1, and ran MMT. There was just the tiniest little bit of noise reduction in the background sky. (I used a luminance mask to protect the globular's stars.) By moving the Amount sliders up one little increment at a time, I could get what I wanted: a nice, moderate amount of noise reduction.
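To make the Threshold/Amount interplay concrete, here's a hypothetical model of what a per-layer noise-reduction step might do - I want to stress that this is my guess at the concept, not PI's actual algorithm. Here `threshold' is in units of the layer's estimated noise sigma, and `amount' blends the thresholded layer back into the original:

```python
import numpy as np

def denoise_layer(layer, threshold, amount):
    """Hypothetical model of per-layer noise reduction: zero out wavelet
    coefficients smaller than `threshold` noise-sigmas, then blend the
    result with the original layer by `amount` (0 = no change, 1 = full)."""
    sigma = np.median(np.abs(layer)) / 0.6745  # robust noise estimate
    thresholded = np.where(np.abs(layer) < threshold * sigma, 0.0, layer)
    return (1.0 - amount) * layer + amount * thresholded
```

With amount set to 0.1, only a tenth of the smoothing is blended in - consistent with the tiny, incremental effect I saw when nudging the sliders.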

Room for Improvement

I could have set the black point a little lower, to suppress the remaining background noise a little better. I could also have tried to dim/shrink the bright, burned-out-looking foreground stars. They're a little distracting. But, since the deep-sky object in question is a star cluster, I couldn't find a good way to make a star mask that didn't include stars from the cluster. So, I just left the stars alone and decided to post what I had. I think the thing I like the best about this image is the halo of very faint stars that makes up the outermost part of the cluster. I doubt I can see those visually, even through a large telescope. That's one of the joys of imaging, going deeper than the eye can see!

Wednesday, June 13, 2012

A shout-out to the film folks

Film! I have a soft spot in my heart for film astrophotography, even though I use a CCD camera. Last night, while surfing the web, I checked to see if Jim Cormier, a modern-day film astrophotographer, had posted any new film images. He has, and they're really cool! I want to post some links to his images, so that more people will get a chance to see them.

I've done some film astrophotography - more on this in a bit - but I'm not one of the old-school `film guys' from back in the day. For well over a century, emulsion-based photography was photography, before sophisticated electronic sensors were developed. The art and science of emulsion-based astrophotography produced some beautiful results, through the heroic efforts of many, many research astronomers and amateur enthusiasts. These results depended on things like long single exposures, manual guiding, cold cameras, gas hypersensitization, and the envelope-pushing techniques that David Malin developed at the Anglo-Australian Observatory. Other than a few star-trail images, and a couple of short guided images of Halley's Comet in 1986, I didn't shoot film back in the day. (I was just a kid/teenager at the time, too.) But plenty of people did, and they left a rich, heroic legacy of astro-imaging on emulsion.

The advent of CCDs meant the `death of film', for the most part, since CCDs are so much more sensitive, and have a (generally linear) response to light that makes them more useful for measuring the brightnesses of things. The recent demise of Kodak is perhaps the best-publicized event in the long twilight of emulsion. However, not all amateur imagers have given up on film! There are a few folks out there who really enjoy shooting film, and enjoy the results they get. Naturally, there's some involvement with the digital realm, since we see their images on the web, after all. But at heart, their `sensors' are emulsion-coated materials, and I just think that's cool. They love film, and I admire them for it. I think that the world of film and processing will always have a special place in my heart, probably because I enjoyed darkroom work when I was a high-school student. I worked in the yearbook darkroom, and I set up a small B&W darkroom in my folks' house during high school. (I even developed a roll or two of slide film during graduate school, which was a hoot.)

If there's a `hero of film' in 2012, it's probably Jim Cormier from Maine. He mostly shoots wide-field images, and largely on Ektachrome 200, which seems to have been the `color astro film of choice' during the latter years of film's heyday. At present, his images can be found in several places on the web. Here are some recommended links:

For an image with a great `wow' factor, check out his latest 4-panel E200 Milky Way panorama.

Jim's Blogspot site also shows his images, and he's got a nice post about `My Most Productive Dark-Run Ever'. I love it! (Also note the `hand-corrected guiding'... John Henry, indeed!)

He has a photostream on Flickr, which is worth exploring. One highlight of his Flickr stream is his 2011 B&W project to shoot parts of the Milky Way, a la Edward Barnard's atlas. Very cool.

While you're at it, you might enjoy Christopher Barry's Kickstarter proposal, to shoot wide-field film images this summer. It looks like he made his funding goal! I eagerly await his results.

I can't quite describe why I get such a kick out of the work of these `film guys', but I just do. I'm really glad that they're sharing their work.

While I'm on the topic of film, I suppose I ought to post a film image of my own. There's a bit more backstory to this film enthusiasm of mine, as it turns out. I could probably write a long series of blog posts about this, but here's a short version: In the late summer and fall of 2011, I did a film-imaging project. I was finishing my MSc in astronomy, and my final project involved a comparison of film-based and CCD-based imaging techniques. The film side of the story got pretty epic, but to keep things short, here's an image of M31 that I shot on Ektachrome 200, using a Nikon FM camera body attached to my ED80 refractor. This is about 150 minutes of total exposure time (I forget the lengths of the subexposures), stacked and processed in Pixinsight:

M31, captured on Ektachrome 200 from a Bay Area hilltop site.
Click on the image for a larger version, or click here for full size.

You've probably noticed the curious flares coming off of the brighter stars. Those are actually due to the film scanner I used. (I've examined the slides under a microscope, and the flares aren't present in the slides.) One of these days I'd like to re-scan my slides and see if I can get a better result. Another issue that came up: The red LEDs from my light meter caused the slides to be badly light-struck. Next time I try shooting with my FM, I'm going to take out the light-meter batteries. Pixinsight's Dynamic Background Extraction routine was able to clean up most of this red mess, but it would have been nice if I hadn't had to deal with it.

Ektachrome 200 is basically gone now, but I was able to buy some on eBay, and a fellow astro-imager gave me several rolls. My leftover E200 is in my fridge, and one of these years I ought to shoot it. Some year, I should devote a fall and a winter to shooting the heck out of M31 and M42 on film. If I can find a 16-bit (or deeper) film scanner that doesn't produce those flares, I'd love to create the best `film-captured' M31 and M42 I can, with help from Pixinsight. Send that good ol' E200 out in one last blaze of glory!

Sunday, June 10, 2012

The M87 Chain and the Pixinsight Zone System

One of the greatest euphemisms in the world has to be the phrase `learning experience'. How often do we sugar-coat our mistakes by calling them `learning experiences'? I'm sure I've done it many times. This image provides an example, but in this case there's a bit more to it than that...

A portion of the Virgo galaxy cluster, with the giant elliptical galaxy M87 at top left, and part of `Markarian's Chain' of galaxies at right. Click the image for a larger version, or click here for full size.
Data Acquisition: Making the best of a bad situation

A few weeks ago, I was doing some backyard imaging, and the Virgo galaxy cluster seemed like the logical choice. Having shot a luminance image of the Leo Triplet not long before, I decided to do another one-night stand, with just luminance, but this time I wanted to shoot `Downtown Virgo'. (The origins of that term and its enthusiastic usage seem to go back to Jay Freeman and Jamie Dillon, two highly-accomplished Bay Area visual observers.) Specifically, I wanted to shoot the portion of the Virgo cluster called Markarian's Chain. It's a standard target, since it comprises a pretty, arcing chain of galaxies that stretches from M84 and M86 towards M88. Almost everyone works on an image of Markarian's Chain at some point. By planning it out in SkySafari Pro 3 on my iPad, I could see that if I rotated my camera just right, I could frame most of the chain pretty nicely on my ST-8300 sensor, using my ED80 f/7.5 refractor.

One thought nagged at me, though... What about conventions? As in sign conventions and angle conventions? SkySafari Pro 3 has a really nice slider tool for rotating the position angle of one's field-of-view overlay, relative to the sky. This allowed me to plan my framing really easily. And when I'm imaging, I can download a frame from the camera, and use MaximDL to plate-solve it, which gives me the image's position angle on the sky. This is really handy, but... what if these two pieces of software use different conventions for specifying the position angle? Hmm. I could wind up with a frame that's rotated 90 degrees from what I expect.
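I never did pin down exactly which convention each program uses, but one sanity check is cheap: for a rectangular sensor, a frame and the same frame rotated 180 degrees cover an identical footprint, so two position angles describe the same framing whenever they agree modulo 180. A little helper (hypothetical, in Python) makes the comparison:

```python
def pa_difference(pa1, pa2):
    """Smallest rotation (in degrees) between two camera position angles,
    treating angles 180 degrees apart as the same framing, since a
    rectangular sensor covers an identical footprint when rotated 180."""
    d = abs(pa1 - pa2) % 180.0
    return min(d, 180.0 - d)
```

A result near 90 is exactly the red flag in question: the two programs' angles differ by a quarter turn.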

So, it wasn't a great shock when that's exactly what happened. Here's the framing I had planned on my iPad:

Here's how things actually worked out, since the two pieces of software treated the position angle differently:

Hrm. Rargh. What to do? I could have rotated my camera 90 degrees, but that would mean refocusing and probably re-doing the GOTO alignment. Given the couple of hours available for shooting Downtown Virgo before it went behind some trees, I didn't want to do that. So, I panned around in SSP 3 and looked for an alternative framing. Here's what I wound up with:

That seemed like the best compromise, since it caught part of Markarian's Chain, and included the giant elliptical galaxy M87, the real `heart' of the Virgo cluster. I shot a couple of hours of luminance (in 5-minute subexposures), and called it a night.

Processing: Pixinsight meets the Astro Zone System

A few weeks later, I had a little time to sit down with the data, and after using the very handy new preprocessing script in Pixinsight, I saw the following preliminary result (this is a closeup of two of the galaxies in the Chain):

Autostretched image of two galaxies in Markarian's Chain.

It's probably worth explaining what I mean by an `autostretched' image (also sometimes called an AutoSTF'ed image amongst Pixinsight enthusiasts). PI has a tool called `Screen Transfer Function' (STF), which stretches the brightness values of the image's pixels, solely for the purpose of displaying the image on the screen. It doesn't change the original pixel values in the image file, but it basically creates a temporary copy of the image to display on the screen, with the brightnesses changed so as to make the dim parts of the image more visible. The STF tool has an `Auto' button, which creates an image that nicely shows `what you got'. (I used one of these AutoSTF'ed images in my annotated Leo Triplet posting.) Such an image, though, usually doesn't make for a very pretty picture, since it shows just how noisy the dim background areas and dim parts of your target look. That graininess is a combination of instrumental noise and photon shot noise (the latter coming from both the target objects and from the sky).
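For the curious, the flavor of an auto-stretch can be sketched in Python. This is only a rough approximation of the idea (clip the shadows a few sigmas below the median, then pick a midtones balance that drags the median up to a target background level) - PI's actual STF algorithm and defaults may well differ:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: maps x = m to 0.5, and fixes 0 and 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def autostretch(img, target_bg=0.25, shadow_sigma=2.8):
    """Rough sketch of an STF-style autostretch (not PI's exact algorithm):
    clip shadows a few sigmas below the median, then choose a midtones
    balance that moves the median to `target_bg`."""
    x = img.astype(float)
    x = (x - x.min()) / (x.max() - x.min())    # normalize to [0, 1]
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # MAD -> sigma estimate
    c0 = max(0.0, med - shadow_sigma * mad)    # shadow clipping point
    x = np.clip((x - c0) / (1.0 - c0), 0.0, 1.0)
    m = mtf(target_bg, np.median(x))           # solves mtf(m, median) = target_bg
    return mtf(m, x)
```

A handy property of the MTF is that it's its own `solver': m = mtf(t, v) is exactly the midtones balance for which mtf(m, v) = t.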

At this point, my big goal was to do some noise reduction, and try to make the noisy, grainy-looking parts of the image look a little better. In this I was aided by Jordi Gallego's new presentation on noise reduction in PI. There's a lot of good information in this document, but I was particularly intrigued by Jordi's slides 51 through 53, especially #53. In this slide, he shows that one can make masks for applying different noise reduction settings to different parts of the image, such as:

  • The dark background sky, which has the lowest signal-to-noise ratio (SNR), and is thus the `grainiest'-looking part of the image.
  • The dim parts of the deep-sky object(s), which have fairly low SNRs, and thus mostly need smoothing and noise reduction.
  • The bright parts of the deep-sky object(s), which have high SNRs, and thus can tolerate some sharpening, such as through deconvolution.

Aha! This is basically the same concept as Ron Wodaski's Astro Zone System. I borrowed a copy of this book from a fellow Bay Area observer a couple of years ago, and found it to be very interesting. Sadly, the book has been out of print for some time, but I was one of the lucky folks at the 2011 Advanced Imaging Conference who managed to get one of the copies Ron gave away. (Thanks, Ron!)

After a little fiddling around, I realized that PI's Range Selection tool works best on images that have already been stretched into a nonlinear state, so I made a copy of the image, applied its AutoSTF settings to Histogram Transformation, and applied that to the copy. I then used Range Selection on this stretched copy.
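In numpy terms (and with made-up cut levels - RangeSelection sets these interactively, with feathering and smoothing that this sketch omits), the three zones amount to simple thresholding of that stretched luminance copy:

```python
import numpy as np

def zone_masks(lum, bg_cut=0.15, bright_cut=0.6):
    """Split a stretched luminance image into three brightness `zones'.
    The cut levels here are made-up examples; in practice they'd be
    tuned per image."""
    background = lum < bg_cut    # lowest SNR: strongest smoothing
    bright = lum >= bright_cut   # highest SNR: can tolerate sharpening
    mid = ~background & ~bright  # in between: gentle smoothing
    return background, mid, bright
```

Each mask then gates a different strength of processing: heavy noise reduction through the background mask, gentler smoothing through the mid mask, and deconvolution through the bright mask.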

First, I made a mask that covered up the stars and galaxies, leaving only the dark background sky to work on:

After a little fiddling around, I stumbled on some settings in Multiscale Median Transform that smoothed the background reasonably well:

I was pleased with this result! It's not perfectly smooth, but I'm calling this a win, so far. Then, I made a mask for the `mid-SNR' zone, which included the fainter outer parts of the galaxies:

And then, by pulling back on my MMT noise reduction settings, I was able to smooth those areas somewhat. Next I made a mask to isolate the cores of the galaxies, for sharpening via Deconvolution:

After mid-SNR-range smoothing and high-SNR-range deconvolution, I had this image:

The brightness levels you see here are `Auto-STF' levels, and even with the noise reduction, they're not really good for posting on the web. So, since the image was still at a linear stage (i.e. not really brightness-stretched yet), it was time for a Histogram Transformation, some star shrinking, and a horizontal flip to match the correct appearance of this area on the sky:

Room for Improvement:

I think this was a good proof-of-concept project, for the Range Selection / `Pixinsight Zone System' approach. My masks could use some work, though. When I examine the image closely, I can see that some of the dim parts of the galaxies got left out of the masking process. Also, the various processing steps left an artificial ring around M87. There really are such things as ring galaxies, but M87 isn't one of them. I'm very interested in refining my touch with Range Selection, and in trying out the new Adaptive Stretch tool! A week or so after shooting these data, I managed to shoot Markarian's Chain with proper framing, and so we'll see how things go with this new data set.