Welcome to a journey into our Universe with Dr Dave, amateur astronomer and astrophotographer for over 40 years. Astro-imaging, image processing, space science, solar astronomy and public outreach are some of the stops in this journey!
The latest from Orion’s Belt Remote Observatory in Mayhill, NM includes this narrowband project just completed on IC 1318. This is a fairly large and rich emission and dark nebula region surrounding the bright 2nd-magnitude star Gamma Cygni at the center of the “Northern Cross” (the brightest star in the image). The region is often called the “Sadr Region,” after the star’s proper name. IC 1318 is also known as the “Butterfly Nebula”. It is a fascinating region of the Milky Way and I look forward to the fully processed result, which I hope to have within the next couple of weeks.
IC 1318 was taken with the platform shown above, including an FSQ106N, an SBIG STXL 16200 camera and a Paramount MX+ mount. No sooner had I completed the OIII channel than the filter wheel sensor failed and had to be sent off for repairs. Luckily I won’t be missing much, because the weather has turned for the worse and we are anticipating our “Monsoon Season” to start up in July. Typically during that time there are frequent thunderstorms, but generally off and on, so you might get a window of good viewing here and there!
Pier 1 has been very quiet, unfortunately. The combination of bad weather and bad seeing has rendered this platform barely usable, to be honest. I have been at this location for 3 1/2 years and the seeing just isn’t consistently good enough to support this platform. The 16″ scope has a focal length of 2800mm. The local seeing here is average. It’s plenty dark enough, yes, but for long focal length optics that’s not going to be enough. I have watched the seeing monitors in this astronomy community for years, and I’d put the average at about 2.5 arc sec over the year. That’s a tough ask for anything with more than perhaps 1500 to 2000mm of focal length. In contrast, my equipment on Pier 2 is only 500-600mm and it doesn’t care what the seeing is. My solution will be to move this scope to a telescope hosting site about 3+ hours from here, where the average seeing is close to a full arc sec better. That would be Pie Town, New Mexico, located in the far western part of the state and also further north (more on Pie Town in a future post). On Pier 1 I will then most likely install an imaging Newtonian with a focal length of only about 1300mm, which should be compatible with the local conditions here. Hopefully all of this happens in the next year or two. In the meantime I will try to finish a couple more projects here on the 16″, weather permitting.
That’s about it for now from Orion’s Belt Remote Observatory.
Deconvolution is one of the most confusing and poorly understood algorithms in all of image processing. I think most people get so frustrated with bad results, in the form of image artifacts, that they give up on it altogether. While it is true that it doesn’t always help, and for lower resolution wide field images it probably isn’t applicable, I think it is a mistake to avoid it entirely. You may have a great data set for it, and that is really the point in the workflow to address distortion issues.
Before getting into this further, just a word about image “enhancement” in general. I am certainly no expert, but I have been doing this long enough to become a firm believer in a “less is more” approach to image processing. I see so many folks trying desperately to create a great image from bad data, and that just isn’t possible. All you get from that is overprocessed bad data. Sadly, however, I have also seen overprocessing of good, even superb data. This is the worst combination of all. When you have excellent data you really do not have to do much at all. Just let the data “breathe” and preserve the natural beauty of the object. Don’t crush the life out of it with all kinds of sharpening tools, artificial intelligence apps and so on.
The above image is a full resolution sample from a superb data set of Omega Centauri, taken from an amateur hosting facility in Chile. You can see, especially when you click on the image, that the stars are peppered with a myriad of processing artifacts, in essence destroyed by overzealous application of sharpening and other enhancement tools.
What is deconvolution? It is a class of algorithms that attempts to correct for atmospheric distortion; think of it as a kind of focusing algorithm. Technically it is not “sharpening”, but the effect is essentially similar. It cannot create resolution that isn’t in the image: if you examine a raw image at full resolution and your galaxy dust lane is lacking detail, deconvolution is not going to add detail, but it can certainly decrease the distortion of the detail that is present. To accomplish this, the average point spread function (PSF) of the stars in your image is determined so that the overall degradation of the entire image can be modeled. Deconvolution can only be applied to an image that has not been stretched yet and is still in the “linear” stage, where the brightness value of every pixel is proportional to the photons received at the sensor’s corresponding pixel. Any “sharpening” or focusing you can do at this stage will be far better than after the image is stretched, which is why, when deconvolution works, it is a great tool.
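For the curious, the core Richardson-Lucy iteration that regularized deconvolution builds on is simple enough to sketch in a few lines of Python. This is the textbook algorithm with illustrative parameter values, not PixInsight's actual implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=10, eps=1e-12):
    """Classic (unregularized) Richardson-Lucy deconvolution."""
    psf = psf / psf.sum()              # the PSF must be normalized
    psf_mirror = psf[::-1, ::-1]       # flipped PSF for the correction step
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        # Forward model: blur the current estimate with the PSF
        reblurred = fftconvolve(estimate, psf, mode="same")
        # Ratio of the observed data to the model prediction
        ratio = blurred / np.maximum(reblurred, eps)
        # Multiplicative update, correlated with the flipped PSF
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Each iteration nudges the estimate so that, when re-blurred by the PSF, it better matches the observed image, which is exactly why iterating too long amplifies noise into artifacts.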
This was the case with the “Needle Galaxy” I recently processed, where deconvolution was applied with great success, which is why I decided to post this. I am using the program PixInsight for this, which is very popular but certainly not the only option out there. That’s OK, because the approach and basic ideas will be similar in other applications.
The basic plan, let’s say for a galaxy, is to enhance the galaxy’s features and perhaps tighten the stars without creating star or background artifacts. So we have to protect the stars and background from “collateral damage”. The workflow in PixInsight is:

1) Create a star mask to protect stars
2) Create a mask to protect the background, in this case what is called a luminance mask
3) Create a point spread function (PSF) so the entire image can be modeled
4) Apply the deconvolution process, adjusting a couple of variables to create the desired result
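Conceptually, mask protection amounts to a per-pixel blend between the processed and original images. PixInsight handles this internally, so the following is only an illustration of the idea (the function name is hypothetical, not a PixInsight API):

```python
import numpy as np

def blend_with_mask(original, processed, mask):
    """Per-pixel blend: mask=1 takes the processed value, mask=0 keeps the original.

    Illustrative only: PixInsight applies its masks internally rather
    than exposing a blend function like this.
    """
    mask = np.clip(mask, 0.0, 1.0)
    return mask * processed + (1.0 - mask) * original
```

Seen this way, a good mask is simply one that is near zero over everything you want left untouched.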
In PixInsight you can apply the StarMask process to your linear image to produce good protection of the brighter stars during deconvolution. Just use the default settings for this purpose; you don’t have to change any parameters.
You then want to make the star protection more effective by brightening the stars and increasing the contrast. I use the “auto clip highlights” button in the histogram transformation process to do this. This mask is not going to be applied directly to the image but will be used as a reference image for the process.
The next step is to create a mask to protect the background: first make a copy (steps shown above), then apply a permanent stretch to the copy to make it non-linear.
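The permanent stretch in PixInsight is built on its midtones transfer function (MTF), which maps 0 to 0, 1 to 1, and the midtones balance m to 0.5. A small Python sketch of that formula:

```python
def mtf(x, m):
    """PixInsight-style midtones transfer function.

    x : pixel value in [0, 1]; m : midtones balance in (0, 1).
    Maps 0 -> 0, 1 -> 1, and m -> 0.5; choosing m < 0.5 lifts the
    faint end, turning a linear image into a non-linear one.
    """
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
```

A stretch with m well below 0.5 brightens the faint signal strongly, which is what makes the stretched copy usable as a mask.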
The PSFImage script creates the PSF for image modeling, as shown above.
Open the PSF script, click on “evaluate”, wait until it’s done, and then click the “create” button to the right of it to produce the PSF which is basically a star image.
The PSF is shown above. The deconvolution process will use this file to model the whole image.
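If you want to experiment outside PixInsight, you can stand in for the PSF file with a synthetic model. A Gaussian is the simplest choice, though real star profiles are often better fit by a Moffat function; this sketch is purely illustrative:

```python
import numpy as np

def gaussian_psf(size=15, fwhm=3.0):
    """Synthetic Gaussian PSF, normalized to unit sum.

    A simplified stand-in for a measured PSF; real fits are
    often Moffat rather than Gaussian.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()
```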
When you open the deconvolution process, click on “external PSF” at the top and select the PSF file in the drop-down when it pops up.
The steps above show how to configure deconvolution to minimize artifacts. The star “support” mask is not directly applied to the image; the software refers to it internally to get the info it needs to carry out the “masking”. The other settings are left at their defaults, so under “algorithm” you should see Regularized Richardson-Lucy selected. The other option is Van Cittert, which we typically do not use for deep space images; it is better suited to planetary images. The regularization is wavelet-based, so “wavelet regularization” should be checked. I have not found adding layers beyond the default of 2 to be of any additional benefit. Also note the default iteration value of 10. This is a good starting test point. Typically I might do 15-25 for the finished product, but not more.
The last thing to do before actually starting is to protect the background. This is a mask directly applied to the image, so we take the stretched copy we made earlier and apply it as shown above to mask the background. Remember, this is a stretched, non-linear copy applied to the linear original image; a linear mask will not be effective. I have not typically made any adjustments to this with histogram transformation etc. Just apply it as is.
Now we are ready to begin deconvolution, but first we notice that the temporary screen stretch applied to the original image is a little overdone, as you can see in the right-side image. We want to be able to clearly see the effects of what we are doing. You can dial it down a tad in the screen transfer function shown at the top (white circle) by moving the midtone and black point sliders until you get a level you are comfortable with, arriving at the result on the left. Remember, this is NOT a permanent change and the image is still linear. It is just a way to see what you need to see.
The next step is to finally run deconvolution! At this point it is really about experimentation. Always select a small preview of an area of interest (Alt-N in PixInsight); this will make the process much faster while you are testing your settings. The only setting you are going to change at first is “Global dark”. I start around 0.01. If you get the so-called “raccoon eyes” around stars, your setting is too low. If you get ugly bright artifacts in multiple areas, your setting is too high. Once you have a result you are happy with, you can increase iterations until you see problems. Remember, it is very tempting to overdo it. If you get a nice result with, let’s say, 15 iterations, doing 30 or more is likely to create a problem that you may not detect until much further along in the processing flow, when it will be much harder to correct. Quit while you’re ahead!
And the final result is shown above! Before deconvolution is on the left and after is on the right. I think the key is producing a good star support mask and background protection with the stretched luminance copy, dialing in the correct global dark setting and not going too crazy. Remember less is more!
And finally it’s nice to be able to quantify the improvements we made, and these are shown above. Deconvolution reduced the average FWHM by close to 1.5 arc sec, which is the most I have ever seen doing this! Typically it’s more like 0.5 to 1 arc sec.
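As a sanity check on numbers like these: for a Gaussian star profile, FWHM relates to the fitted width by FWHM = 2√(2 ln 2)·σ ≈ 2.355σ, so converting a fitted sigma in pixels to arcseconds is a one-liner (the function name and inputs here are illustrative, not taken from the actual measurement):

```python
import numpy as np

def fwhm_arcsec(sigma_px, pixel_scale_arcsec):
    """FWHM in arcseconds from a fitted Gaussian sigma in pixels.

    pixel_scale_arcsec is the image scale in arcsec per pixel;
    FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian profile.
    """
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_px * pixel_scale_arcsec
```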
Anyway quite a bit to unpack here in this post! I hope at the very least you can get a sense of how deconvolution can work when it does work.