neural networks and focus

Started by yahganlang, 11/02/2005 08:46AM
Posted 11/02/2005 08:46AM Opening Post
Hi- I'm a newbie to this particular forum, but interest in such things as night-vision eyepieces got me thinking about pre-processed images.

What I'm curious about here is focus. With the proper software (perhaps based on neural networks or parallel processing?), would it be possible, hypothetically, assuming pixel density was extremely high and individual elements very sensitive, to process out-of-focus optical images and return focused ones? The default, first-approximation assumption would be no astigmatism or chromatic aberration, but perhaps with color-sensitive elements and modified software even these issues might be addressed? Diffraction spikes would be a separate problem.

Any ideas?- all this just popped into my head.

Thanks,
Jess Tauber
Posted 11/02/2005 01:34PM #1
YES and NO.

And I do not know about "neural networks." But when things are messed up in images, it is because they are convolved with blur from turbulence, optical imperfections, and lack of focus.

If you know how the image was blurred (the point spread function), you can write an algorithm to "deconvolve" it. Richardson and Lucy, for instance, came up with a method that is now named after them. It makes stars more pinpoint. Early Hubble images were rescued this way: deconvolution compensated for the flawed mirror until corrective optics could be installed.

But there are a lot of assumptions in the process, so it is not as accurate as proper focus.
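The Richardson-Lucy iteration mentioned above can be sketched in a few lines of NumPy. This is a toy 1D version with a made-up Gaussian PSF and two artificial "stars", just to show the shape of the algorithm; real tools (e.g. skimage.restoration.richardson_lucy) work on 2D images and deal with noise and edge effects.

```python
# Toy 1D Richardson-Lucy deconvolution sketch (illustrative only).
# Assumes the blur (point spread function) is known exactly and noise-free.
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively estimate the un-blurred signal given a known PSF."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, 0.5)  # flat initial guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)  # guard against divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Two point-source "stars" blurred by a Gaussian PSF
psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.6
observed = np.convolve(truth, psf, mode="same")

restored = richardson_lucy(observed, psf)
# The restored peaks come out narrower and taller (more "pinpoint")
# than the blurred ones.
```

Note that the iteration relies on knowing the PSF; if the assumed blur is wrong, the "restoration" confidently produces artifacts, which is exactly the assumption problem mentioned above.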

Also, simply "sharpening" an image, i.e., increasing the contrast between adjacent pixels of different intensities, makes the picture look more focused.

But, if the data is not there, you are making up stuff, not actually "focusing."
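To make that last point concrete, here is a sketch of the contrast-boosting kind of "sharpening" described above, done as a simple unsharp mask on a soft 1D edge. Everything here (the box-blur kernel, the test signal) is an illustrative assumption, not any particular software's method.

```python
# Unsharp-mask sketch: "sharpen" by amplifying the difference between
# a signal and a smoothed copy of itself. This exaggerates existing
# edges (with characteristic overshoot) but adds no new information.
import numpy as np

def unsharp_mask(signal, radius=2, amount=1.0):
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # box blur
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

edge = np.concatenate([np.zeros(10), np.ones(10)])  # a step edge
sharpened = unsharp_mask(edge)
# The step now overshoots above 1 and undershoots below 0 near the
# transition, so it "looks" crisper, but nothing was recovered.
```

The overshoot is why oversharpened astrophotos show bright rings around stars: the algorithm is inventing contrast, not restoring data that was never recorded.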