Posts Made By: Ron Wodaski

March 14, 2008 06:37 PM Forum: CCD Imaging and Processing/Deep Sky

M81-82 LRGB

Posted By Ron Wodaski

Really nice image. I can not only see a number of dimmer galaxies, but also what is presumably the high-latitude galactic dust that permeates this area (and I believe that is what causes those dark linear features over M81).

I think you may have a bit of a magenta cast to the image, but I don't currently have a calibrated monitor handy to say for sure. Looking at the histograms, it appears to be slight but noticeable. You can test my theory by increasing the green channel in Photoshop.

Ron Wodaski

March 14, 2008 06:53 PM Forum: CCD Imaging and Processing/Deep Sky

Opinions wanted please on the post processing

Posted By Ron Wodaski

I'll call the image attached to the message I am replying to "A" and the other one "B".

Image A:

* Better detail in low-contrast areas (e.g., pgc28757)
* More accurate color ("B" is quite magenta overall)
* A little less detail in the brighter parts of M81
* Core of M82 less blown out (more gradual transition from dim to bright)
* Less very dim detail on M82
* Better star colors
* More saturated color

Ron Wodaski

March 30, 2008 07:04 PM Forum: CCD Imaging and Processing/Deep Sky

NGC 3718

Posted By Ron Wodaski

Sweet image - and really good processing. Nice work!

Ron Wodaski

March 30, 2008 07:08 PM Forum: CCD Imaging and Processing/Deep Sky

Photoshop

Posted By Ron Wodaski

Every version of Photoshop added useful features. If you really want to save money, then you would want to get the oldest version that supports 16-bit image editing and filters; I believe that was either 5.5 or 6.0. Versions earlier than that are usable but they are a real chore; you have to do all of your levels and curves work in 16 bits, but then you have to convert to 8-bit color for many filters or for layering L and RGB. Doable, but if you make a mistake, you have to go back a L O N G way to correct it.

The more recent versions are especially useful; CS is probably where to aim for if you have enough money available to spring for it.

March 30, 2008 07:11 PM Forum: CCD Imaging and Processing/Deep Sky

M81 and M82

Posted By Ron Wodaski

You got a lot of detail considering there was so much moonlight. Moonlight makes color balancing a chore, however. Looks like you have too much blue, and not enough green, in the color balance.

Another tip for color balancing moon-lit images: always remove color bias (different black points) in the color channels first, THEN attempt actual color balancing.
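To make that ordering concrete, here is a minimal numpy sketch (my own illustration, not a tool from the post); the function name and the percentile-based background estimate are assumptions for the example.

```python
# A minimal sketch of the "black points first" tip, assuming the image is a
# float numpy array of shape (H, W, 3). The 1st-percentile background
# estimate is illustrative, not a prescription.
import numpy as np

def equalize_black_points(rgb):
    """Shift each channel so the sky background sits at the same level."""
    out = rgb.astype(np.float64)
    # Estimate each channel's background from a low percentile of its pixels.
    black = [np.percentile(out[..., c], 1.0) for c in range(3)]
    target = min(black)
    for c in range(3):
        out[..., c] -= black[c] - target  # align black points; no rescaling yet
    return out

# Only after this step would you scale the channels (the actual color
# balancing) to fix an excess of blue or a deficit of green.
```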

April 19, 2008 07:53 AM Forum: CCD Imaging and Processing/Deep Sky

M107

Posted By Ron Wodaski

I think that's a great image. You have really good color, and the resolution of the image is really good. It has almost a 3-D feeling to it. You've gotten a great image with your focal length - no need to apologize!

Ron Wodaski

April 19, 2008 07:55 AM Forum: CCD Imaging and Processing/Deep Sky

M101 again

Posted By Ron Wodaski

>> Probably pushed the saturation right to the edge of overdone.

Good resolution, good processing, but I do agree that the saturation looks to be pushed too hard for the amount of data you've got. Everything else looks really good.


April 19, 2008 08:00 AM Forum: CCD Imaging and Processing/Deep Sky

R, G and B

Posted By Ron Wodaski

It all comes down to how much signal you are collecting. If a particular color channel seems noisier, then you need longer exposures in that channel, or more of them. This is normal; it results from a combination of your CCD sensor's sensitivity at different wavelengths and how much light each of your color filters passes.

The simple correction is longer exposure times, so that you get a similar signal-to-noise ratio in all color channels.
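As a rough sketch of that correction (the rates and times below are made-up example numbers, not measurements): in a shot-noise-limited sub, S/N grows as the square root of rate times exposure, so a channel collecting at half the rate needs twice the time.

```python
# Back-of-envelope sketch: how much longer to expose a weak channel to
# match the S/N of the strongest one. Rates are hypothetical.
import math

rates = {"R": 100.0, "G": 120.0, "B": 60.0}  # e-/pixel/s, made up
t_ref = 300.0                                # seconds on the best channel
best = max(rates.values())

for name, rate in rates.items():
    # Shot-noise-limited S/N ~ sqrt(rate * t), so equal S/N needs t * best/rate.
    t = t_ref * best / rate
    print(f"{name}: {t:5.0f} s  (S/N ~ {math.sqrt(rate * t):.0f})")
```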

Unless you have a curve showing the focal point of various wavelengths for your scope, you would need to try the UV/IR blocking filters to see if they are of much benefit. Some scopes and correctors are fine well outside human vision; some are not. Generally, scopes that were designed originally for film are more likely to need UV/IR blocking. Scopes that were designed for the wider bandwidth of CCD sensors are more likely to handle UV/IR well.

April 19, 2008 11:33 AM Forum: CCD Imaging and Processing/Deep Sky

good exposures

Posted By Ron Wodaski

This is a B I G question.

First, let's move away from the "how much brighter..." framing, because the answer you are looking for isn't just about brightness (which is the signal). The real answer lies in your signal-to-noise ratio (S/N).

Here's the concept in short form:

* There are two primary sources of noise for you to deal with. One is read noise: the uncertainty in the values read out from the CCD sensor. The other is shot noise: the uncertainty in the photon counts arriving at the CCD sensor.

* Read noise is the uncertainty in the electron count as each pixel is read. Let's say, for the sake of simplicity, that the read noise is 10 electrons (10e- for short). This means that if you read 11,205 electrons out of a pixel, the actual value that was in there was most likely somewhere in the range from 11,195 to 11,215e- (within one standard deviation of 10e-). So if you had four pixels, all holding 11,205e-, they will probably read out with different values, such as 11,209; 11,201; 11,205; and 11,210. This variation in values creates the graininess that you see in an image.

* Shot noise is inherent in the physics of light. The uncertainty in the arriving photon flux is equal to the square root of the incoming signal. So if you have 400 incoming photons, the uncertainty is sqrt(400) = 20. In simplest terms, if you determine that 400 photons have been recorded in your pixel, you only know that the true value most likely lies somewhere between 380 and 420 photons.

* Noise combines. So if you have some read noise and some shot noise, these two noise sources will combine to give you the total noise in the image. Noise is not simply additive, however, which makes the calculation a little more involved: independent noise sources add in quadrature. To determine the total noise, sum the squares of the noise sources, then take the square root of that sum (a root-sum-square calculation; a quick numerical check follows this list). For example, we know that the shot noise is 20 and the read noise is 10. To determine the total noise:

sqrt(20^2 + 10^2) = sqrt(400 + 100) = sqrt(500) = 22.4 photons

(For the sake of simplicity, I've assumed that the camera's 10e- read noise translates one-for-one from electrons to photons - that is, the gain is 1.0.)

* Since read noise is constant, and shot noise grows with the signal level, eventually shot noise will dominate the read noise. The longer your exposure, the more the shot noise dominates. It is generally accepted that if you can get the read noise down to 5% or less of the total noise, such an exposure is said to be shot-noise limited: the shot noise limits the S/N because the read noise is essentially too small to matter.

* There are two sources of shot noise: the wanted signal from the object you are imaging, and the unwanted signal from sky background brightness. The brighter your sky, the greater the unwanted shot noise is. Unwanted shot noise, however, is just like the desired shot noise: it swamps the read noise, too.

* Once your shot noise in individual sub-exposures is large enough to swamp the read noise (that is, make the read noise such a small component of the total noise that it is not significant), you can add sub-exposures and get very nearly the same result you would get with a single very long exposure.
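Here is the quick numerical check promised above, using the same example numbers (10e- read noise, a 400-photon signal, gain 1.0); the extra signal levels in the loop are my own illustration of the "longer exposures swamp read noise" point.

```python
# Numerical check of the quadrature rule and the "read noise <= 5% of
# total noise" criterion, using the example numbers from the text.
import math

def total_noise(shot, read):
    # Independent noise sources add in quadrature (root-sum-square).
    return math.sqrt(shot**2 + read**2)

read = 10.0              # e- read noise
shot = math.sqrt(400.0)  # 20 photons of shot noise on a 400-photon signal
print(total_noise(shot, read))  # ~22.36, the 22.4 in the text

# Bigger signal -> bigger shot noise -> read noise matters less and less:
for signal in (400, 4_000, 40_000):
    s = math.sqrt(signal)
    frac = read / total_noise(s, read)
    print(f"signal {signal:>6} e-: read noise is {frac:.0%} of total noise")
```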

Given all of this information, it follows that you can come up with some optimal sub-exposure time for your particular combination of sky brightness and camera readout noise such that even the background of your images is shot-noise limited. There is a calculator for just this on the CCDWare web site under resources, which you can use to determine the optimal sub-exposure time for your particular conditions.
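I won't claim this is CCDWare's actual algorithm, but a hedged sketch of the underlying calculation might look like the following. Here "swamped" is taken to mean that read noise inflates the total background noise by at most 5% (calculators frame the threshold in different ways), and the read noise and sky rate are hypothetical example values.

```python
# Hedged sketch (not CCDWare's exact method): the shortest sub-exposure
# whose sky shot noise swamps the read noise, where "swamps" means read
# noise inflates the total noise by no more than `penalty`.

def min_sub_exposure(read_noise_e, sky_rate_e_per_s, penalty=0.05):
    """Seconds until sqrt(R^2 + S) <= (1 + penalty) * sqrt(S), with S = rate * t."""
    # R^2 + S <= (1+p)^2 * S  =>  S >= R^2 / ((1+p)^2 - 1)
    sky_needed = read_noise_e**2 / ((1 + penalty)**2 - 1)
    return sky_needed / sky_rate_e_per_s

# Hypothetical numbers: 10 e- read noise, 2 e-/pixel/s sky background.
t = min_sub_exposure(10.0, 2.0)
print(f"subs of ~{t:.0f} s are effectively shot-noise limited")  # ~488 s
```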

That's the Big Answer to the question implied by your request. If you want more detail, I can suggest getting a copy of my Zone System book. Anacortes stocks it.

Now, as for sum versus average: there is no difference! (This illustrates how non-intuitive noise really is...)

What is an average? It's simply a scaled sum! The only potential problem you run into is if the precision of your sum is such that the scaling reduces it. Most CCD cameras only have 14-15 bits of dynamic range, but you are using nothing smaller than 16-bit numbers to contain the scaled result. Only if you stack enough images to generate more than 16 bits of information do you need to worry. And even then the solution is extremely simple: just switch to floating point or 32-bit integers to hold your scaled result - then there is no difference whatsoever between a sum and an average.
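A small numpy demonstration of the scaled-sum point (the frame size, sky level, and sub count below are arbitrary choices for the example): accumulate in float64 and the sum and the average carry exactly the same relative noise.

```python
# Demonstration that an average is just a scaled sum: stack 16 simulated
# subs in a float64 accumulator and compare relative noise.
import numpy as np

rng = np.random.default_rng(0)
sky = np.full((512, 512), 1000.0)  # "true" scene, in electrons (arbitrary)
subs = [rng.poisson(sky).astype(np.uint16) for _ in range(16)]

stack_sum = np.sum(subs, axis=0, dtype=np.float64)  # wide accumulator
stack_avg = stack_sum / len(subs)                   # the same data, rescaled

# Identical relative noise, hence identical S/N:
print(stack_sum.std() / stack_sum.mean())  # ~0.0079
print(stack_avg.std() / stack_avg.mean())  # exactly the same value
```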

As it happens, as long as you retain the _real_ precision of your data (which is smaller than the 16-bit container your software typically uses), an average is the gold standard for lowest noise. (Except for outliers, which gets us into a WHOLE other area of discussion - and now my fingers are tired.)