Maybe a question I should have raised in a CHOICE course, but I think the issue could come up for all CCD observers. Since no CCD chip is perfect, maybe this has much broader interest.
MaxIm DL has a routine for "removal" of bad pixels, via a number of routes. One is an automated "search & destroy" method to "fix" any pixel above or below a set value. Another is to manually select bad pixels (or rows or columns) to create an accurate map of bad pixels. It may be telling that the symbol for removing bad pixels is a little broom -- as in, sweep away the data?
Question #1: Most -- but not all -- pixel issues are removed with calibration frames. If we're not getting all of them, is this an indication that the process needs improvement?
Question #2: If we use the Remove Bad Pixel command to "correct" for mapped pixels, what does this do to our measurements? In the current case, the offending hot pixel is in the sky annulus on some images, but over a time series of ~200 frames, ends up in the aperture for some of those frames.
For known bad pixels, we have the option of a slight offset in RA or Dec that will ensure the offender doesn't interfere with our measures. For stacked and averaged images, we can dither the images slightly and reduce their impact with an average combine, or eliminate the effect with a median combine. I understand that average is greatly preferred for scientific integrity, but would like to know more.
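The dither-and-combine idea above is easy to see in a toy numpy sketch (this is not MaxIm DL code, just a simulation of aligned, dithered frames where a fixed hot pixel lands at a different image location in each frame):

```python
import numpy as np

# Simulated stack of 5 aligned, dithered 5x5 frames: sky level ~100 ADU.
# After alignment to the stars, a hot pixel that is fixed on the sensor
# lands at a different image location in each frame.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(5, 5, 5))
for i in range(5):
    frames[i, i, i] = 5000.0  # the hot pixel, shifted per frame

avg = frames.mean(axis=0)        # outlier leaks ~(5000-100)/5 ADU into the average
med = np.median(frames, axis=0)  # a single outlier per location is rejected

print(float(avg.max()))  # ~1080: the hot pixel still visible in the average
print(float(med.max()))  # ~110: the hot pixel is gone in the median
```

With only one outlier per pixel location, the median combine rejects it completely, while the average combine dilutes it by the number of frames but never removes it.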
Question #3: Better to "correct" for bad pixels, or try to quantify their impact on an unaltered measurement?
Thank you kindly,
Brad Vietje
Newbury, VT
Hello Brad
I never fix bad pixels for photometry. The problem is that most of the fixing routines are non-linear operations. You are much better off letting the calibration flat take care of things. If you have a dead or full-on pixel, it's best to avoid it in the target or ref/comp star. If you are taking pretty pictures, that is another story; that is what those routines were designed for. I have no experience with how well they work for imaging.
Gary
Thanks, Gary -- that is the response I expected. I'll continue to shoot new dark, bias, and flat frames, and see if any of the misbehaving pixels go away, but if I remember correctly, they don't swing our measures much at all.
Just for fun, and for my own education, I'll create copies of a set of my science images and compare the photometry before and after removing the bad pixels. My expectation is very little change at all.
Clear skies,
Brad
Hi Brad,
Answering your questions:
#1 - say that you have a dead pixel (no response to light). Then no matter how you dark subtract or flat-field the image, that pixel will remain dead. You can't calibrate it out. Likewise, if you have a hot pixel and do a long exposure, that pixel may saturate, and no amount of dark subtraction will get it correctly calibrated. So there will often be pixels, groups of pixels, or bad columns that remain in an image after calibration. How many pixels fall into this category may depend on the operating temperature or the exposure time, and so may differ from image to image.
That said, you can often find ways to improve the dark-frame subtraction to lessen the effect of hot pixels, or you may save the long-exposure fields for the coldest nights when you can run the camera the coldest. There are ways of scaling darks to maximize the hot-pixel removal. So you can do better, though perhaps not on a consistent basis.
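The dark-scaling mentioned above is commonly done by removing the bias (which does not scale with exposure time) and scaling the remaining thermal signal by the exposure-time ratio. A minimal sketch, with illustrative names and toy values:

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Scale a master dark to a different exposure time.

    Dark current is roughly linear in exposure time, so after removing
    the bias the thermal signal can be scaled by the time ratio.
    This assumes a well-behaved sensor; hot pixels are often the least
    linear pixels, so the scaling is only approximate for them.
    """
    thermal = master_dark - master_bias
    return master_bias + thermal * (t_light / t_dark)

# Toy 2x2 frames: bias of 100 ADU, 60 s dark with one hot pixel.
bias = np.full((2, 2), 100.0)
dark_60s = bias + np.array([[1.0, 2.0], [3.0, 600.0]])
dark_30s = scale_dark(dark_60s, bias, t_dark=60.0, t_light=30.0)
print(dark_30s)  # thermal component halved: hot pixel 600 -> 300 above bias
```

Note that if the hot pixel saturates in the light frame, no scaling recovers it, which is exactly the case described above where calibration cannot help.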
#2 and #3 - the best way to handle bad pixels is to create a bad pixel mask. That is, a fake image that has "1" at all bad pixel locations and "0" at all good pixel locations, assuming those pixels are consistently good or bad. Later software can then decide what to do about them when performing photometry. The normal rule of thumb is that a bad pixel inside the measurement aperture is usually enough to discard that measurement. Yes, you can interpolate, but how good the interpolation is depends on many factors, and it is best to avoid the issue. If a bad pixel falls within a sky annulus, it is usually rejected anyway and has little effect. I often take multiple images and stack when going deep, to eliminate cosmic rays, images where the telescope trailed, bad pixels drifting through an unguided sequence, etc. During the stacking, you can median filter or use some fancier scheme to get rid of outliers.
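A bad-pixel mask of this kind, plus the rule-of-thumb check that discards a measurement when a flagged pixel falls inside the aperture, can be sketched in numpy. `aperture_has_bad_pixel` is a hypothetical helper, not from any particular photometry package:

```python
import numpy as np

def aperture_has_bad_pixel(mask, xc, yc, radius):
    """Return True if any masked (value 1) pixel falls inside a
    circular measurement aperture centred at (xc, yc)."""
    yy, xx = np.indices(mask.shape)
    in_aperture = (xx - xc) ** 2 + (yy - yc) ** 2 <= radius ** 2
    return bool(np.any(mask[in_aperture] == 1))

# Mask convention as described: 1 = bad pixel, 0 = good pixel.
mask = np.zeros((50, 50), dtype=np.uint8)
mask[20, 22] = 1  # one known hot pixel at x=22, y=20

print(aperture_has_bad_pixel(mask, xc=21, yc=20, radius=5))  # True: discard
print(aperture_has_bad_pixel(mask, xc=40, yc=40, radius=5))  # False: keep
```

In a time series where the star drifts, running this check per frame flags exactly those frames where the hot pixel enters the aperture, leaving the rest of the series untouched.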
Any time you modify an image, you need to be careful and understand what you are doing and what the effect will be. There is no magic bullet that solves all problems!
Arne
Hello Gary and Brad,
Why not fix bad pixels? It seems to me this is the only way to manage them. This is what I do in my own software, and in many years of photometry and analysis of measurement anomalies I have never seen a problem; to me it is a very minor issue. Bad pixels are pixels that are not functioning properly: shorted to ground or to Vdd, sometimes responding non-linearly, or having a very strong dark current. Flats and darks can't compensate for such defects.

The only way I know is to build an address table of those pixels during flat and dark preparation and then, when processing sky images, interpolate a new value for each of those pixels from the surrounding ones. This is also done in IRIS and probably a number of other packages. The error is usually very small, given that a stellar PSF covers a number of pixels, providing a good base for interpolation. The probability of such a case occurring is also very small, as the typical number of bad pixels is on the order of one hundred in a 24 Mpix image. Here I should say that I am using a DSLR with a CMOS sensor, not an astro CCD with a few very large pixels; maybe that is the issue? I should also say that I have implemented a secondary bad-pixel detection during the photometry process, with the result reported in the log. From time to time one or two such extra pixels turn up; they seem to be either new ones or more or less random ones.
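The interpolation described above, replacing each tabulated pixel with a value derived from its good neighbours, might look like this in numpy. This is a simplified sketch with a plain 8-neighbour median standing in for whatever scheme IRIS or Roger's own software actually uses:

```python
import numpy as np

def interpolate_bad_pixels(image, bad_coords):
    """Replace each listed (y, x) bad pixel with the median of its
    good 8-connected neighbours. Neighbours that are themselves in
    the bad-pixel table are skipped."""
    fixed = image.astype(float).copy()
    h, w = image.shape
    bad_set = set(map(tuple, bad_coords))
    for (y, x) in bad_coords:
        neighbours = [
            image[ny, nx]
            for ny in range(max(0, y - 1), min(h, y + 2))
            for nx in range(max(0, x - 1), min(w, x + 2))
            if (ny, nx) != (y, x) and (ny, nx) not in bad_set
        ]
        fixed[y, x] = np.median(neighbours)
    return fixed

img = np.full((5, 5), 100.0)
img[2, 2] = 60000.0  # hot pixel, known from the address table
out = interpolate_bad_pixels(img, [(2, 2)])
print(out[2, 2])  # 100.0, the median of the 8 good neighbours
```

On a flat background the replacement is exact; on the steep slope of a stellar PSF the interpolation error is small but non-zero, which is why the mask-and-discard approach above is the conservative choice for photometry.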
What other techniques are there? I am interested.
Clear skies!
Roger