Affiliation
American Association of Variable Star Observers (AAVSO)
Fri, 04/05/2013 - 19:21

I am planning to buy a new CCD to do some photometry. I have read that for optimal sampling the stellar FWHM should span roughly 2-3 pixels. I currently use two scopes, with 800 mm and 450 mm focal lengths, and I have chosen two CCD models from Atik with 4.54 and 3.69 micron square pixels. Taking into account the average seeing at my location (3.3 arcsec), the resulting sampling would be as follows:

- 800 mm FL: 2.82 pixels per FWHM (with 4.54 micron pixels) or 3.47 pixels per FWHM (with 3.69 micron pixels)

- 450 mm FL: 1.52 pixels per FWHM (with 4.54 micron pixels) or 1.90 pixels per FWHM (with 3.69 micron pixels)

Which of the above CCDs is better in your opinion from the sampling point of view?

Thank you for your help

Gianluca (RGN)

Affiliation
American Association of Variable Star Observers (AAVSO)
Matching CCDs & Scopes for Best Sampling

Gianluca,

Before I attempt to help you with your question, I need some additional information:

What is the diameter of your lens/mirror, i.e. the aperture?

What is your focal ratio, i.e. the f/number?

What is the array size of each of the CCD models you are considering (matched to the pixel size), and/or provide the specific CCD models, not just the manufacturer.

Along with the question of sampling, we also want to maximize your FOV with your CCD choice.

Tim Crawford, CTX

Mentoring Team

Affiliation
American Association of Variable Star Observers (AAVSO)
Matching CCDs & Scopes for Best Sampling

Here is the additional information:

- Scope 1: Newtonian, D = 200 mm, FL = 800 mm, f/4

- Scope 2: Apo refractor, D = 90 mm, FL = 450 mm, f/5

- CCD 1: Atik 460EX, 4.54 micron square pixels, 2700 x 2250 pixels in a 12.49 x 9.99 mm array, full well capacity 20,000 e-, readout noise 5 e-

- CCD 2: Atik 490EX, 3.69 micron square pixels, 3380 x 2704 pixels in a 12.49 x 9.99 mm array, full well capacity 18,000 e-, readout noise 5 e-

I will be doing the photometry mostly with scope 2. Typical seeing is 3.5 arc seconds at my primary observing location.

Affiliation
American Association of Variable Star Observers (AAVSO)
Optimal Sampling & Matching Up Equipment

Gianluca,

Thank you for the forum post with your equipment specifications.

I am glad you asked this question (regarding optimal sampling), as it is sometimes overlooked by observers when purchasing their first CCD.

The objective, as you are already aware, is to spread the seeing disk over a minimum of two pixels, with somewhere between two and three pixels being the typical goal. Less than two pixels results in undersampling, which can produce spurious values. Spreading the light over more than two to three pixels is called oversampling, and this is generally fine. [See Section 2.2 of the AAVSO CCD Manual for further discussion.]

Sampling (pixels) = local seeing (arcsec) / image scale (arcsec/pixel)

Your original post suggests your local seeing as ~ 3.3 arcsec while your post with additional data mentions local seeing as ~ 3.5 arcsec.

For this analysis I will use the 3.3 arcsec seeing.
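In case you want to check the arithmetic without the calculator, here is a minimal Python sketch of the standard small-angle relations behind the figures below. The scope and camera values are the ones quoted in this thread, and the FOV is taken from the quoted 12.49 x 9.99 mm array dimensions, so expect small rounding differences from the calculator output.

```python
# Minimal sketch of the sampling arithmetic (not Wodaski's calculator).
# All inputs are taken from the posts in this thread.

ARCSEC_PER_RAD = 206265.0

def image_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel for a given pixel size and focal length."""
    return pixel_um * 1e-3 / focal_mm * ARCSEC_PER_RAD

def fov_arcmin(sensor_mm, focal_mm):
    """Field of view along one sensor axis, in arcminutes."""
    return sensor_mm / focal_mm * ARCSEC_PER_RAD / 60.0

seeing = 3.3                                              # arcsec
scopes = {"Newton 200/800": 800.0, "Apo 90/450": 450.0}   # focal lengths, mm
cameras = {"Atik 460EX": 4.54, "Atik 490EX": 3.69}        # pixel sizes, microns
sensor_mm = (9.99, 12.49)                                 # same array size for both CCDs

for scope, fl in scopes.items():
    for cam, pix in cameras.items():
        scale = image_scale(pix, fl)
        sampling = seeing / scale        # pixels per FWHM
        limit_2px = 2.0 * scale          # seeing at which sampling drops to 2 pixels
        print(f"{scope} + {cam}: {scale:.2f} arcsec/px, "
              f"FOV {fov_arcmin(sensor_mm[0], fl):.1f} x {fov_arcmin(sensor_mm[1], fl):.1f} arcmin, "
              f"{sampling:.2f} px per FWHM, 2-px limit at {limit_2px:.2f} arcsec seeing")
```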

Analysis

Scope 1 with the Atik 460EX: 

Image Scale = 1.17 arcsec/pixel

FOV = 42.8 X 53.6 arcmin

Spreads seeing over  2.82 pixels (3.3/1.17)

Scope 1 with the Atik 490EX:

Image Scale = 0.95 arcsec/pixel

FOV = 42.8 X 53.5 arcmin

Spreads seeing over 3.47 pixels (3.3/0.95)

Scope 2 with the Atik 460EX:

Image Scale = 2.08 arcsec/pixel

FOV = 77.9 X 93.5 arcmin

Spreads seeing over  1.59 pixels

Scope 2 with Atik 490EX:

Image Scale = 1.69 arcsec/pixel

FOV = 76.1 X 95.2 arcmin

Spreads seeing over 1.95 pixels

The sampling is fine with either CCD on your Scope 1.

The sampling is probably too marginal with either CCD on Scope 2. As an example, if your seeing were to improve to, say, 2.9 arcsec, the seeing would be spread over only 1.71 pixels in the case of the Atik 490 on Scope 2. There is, however, an easy workaround with the 490 on Scope 2: simply defocus your image a bit, which will spread the light out over more pixels.

In all cases your field of view is to be envied by many of us, with close to a full degree for Scope 1 and even larger for Scope 2.

A word about limitations with Scope 1: if your seeing were to improve to 2.34 arcsec or better, you would be in danger of undersampling with the Atik 460; if it were to improve to 1.90 arcsec or better, you would be in danger of undersampling with the Atik 490.

The image scale and FOV figures above were computed with Ron Wodaski's CCD calculator, a free download available online:

http://www.newastro.com/book_new/camera_app.php

While it lets you simply select a scope diameter and f-ratio along with specific CCD models, if what you have is not on the drop-down menu you can manually enter all the required specifications; just look at the boxes on the calculator's main screen.

Ad Astra

Tim Crawford, CTX

Mentoring Team

Affiliation
American Association of Variable Star Observers (AAVSO)
Defocusing for optimal sampling

Tim,

Thank you for clarifying the subject of optimal sampling with my scopes. I have a question on defocusing. While defocusing allows optimal sampling when the seeing improves, it decreases SNR. That may not be a big issue with moderately bright stars, but it can be a problem with faint objects. In that case one should increase the effective exposure time by stacking two or more frames, since individual subs are likely to be affected by light pollution. Is this procedure correct?

Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
Uncertainty Considerations

Gianluca,

It is true, as you recognize, that when you defocus to spread the light over more pixels, the uncertainty (error) will increase. It's a tradeoff.

If your uncertainty gets much over 0.06 mag with the fainter objects (keep in mind that visual observers are generally comfortable with 0.1 mag changes), then I would suggest you observe brighter ones. You can't make a purse out of a sow's ear, as my grandmother used to say. Having presented this "limitation," I will confess that I have reported a few observations with uncertainties between 0.1 and 0.2. Generally these were really faint targets with little available data; occasionally it happened when the airmass was very high but the observation was urgently needed.

So there are no real hard-and-fast rules for uncertainty, except that the ideal is 0.01 mag.

It should also be acknowledged that it is a rare paper indeed in which the authors even bother to reference the uncertainties of the observations being used.

Yes, you can improve the uncertainty somewhat by stacking; just be careful that you do not saturate any of the comp stars if a summing routine is used rather than averaging. I don't think it is a question of whether one should increase exposure time by stacking, but rather whether one wants to for fainter targets within a given frame, and then examining the results if stacking is chosen, especially if those fainter targets are secondary to a brighter target on the same frame.

Now, if a fainter target is your primary interest, you will have to experiment with how many images you need to stack to accomplish your goals. Choose a modestly faint target, take a series of images, then divide them into stacks of various sizes to generate your own reference data.
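One way to do that bookkeeping, as a sketch only: it averages the per-image magnitudes rather than stacking pixels, which (per the quote below) behaves the same for signal-to-noise; the file name is hypothetical.

```python
# Sketch: split a series of per-image magnitudes into stacks of various sizes
# and watch the scatter of the stack averages shrink.
import numpy as np

mags = np.loadtxt("comp_star_series.txt")    # hypothetical file: one magnitude per image

for stack_size in (1, 2, 4, 8):
    n_stacks = len(mags) // stack_size
    binned = mags[:n_stacks * stack_size].reshape(n_stacks, stack_size).mean(axis=1)
    print(f"stacks of {stack_size}: scatter of stack averages = {binned.std(ddof=1):.4f} mag")
```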

The truth of the matter is, and I quote Richard Berry, Author of AIP4WIN:

“When you sum a bunch of images, you add the signal and the noise.  The signal adds directly, but noise adds in quadrature, which means it increases more slowly than the signal.  Thus as you add up images, the ratio between the signal and the noise increases (i.e. gets better).

Averaging involves dividing the summed signal by the number of images that were added together.  When you divide the signal, the noise is divided also.  Thus the signal to noise ratio is the same whether you sum a set of images or average them.”
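As a purely synthetic illustration of that quote (simulated measurements, not real frames), the sketch below shows that summing and averaging 16 noisy values give the same SNR, roughly sqrt(16) times better than a single value.

```python
# Synthetic check of "signal adds directly, noise adds in quadrature":
# SNR is the same for a sum and an average, ~sqrt(N) better than one image.
import numpy as np

rng = np.random.default_rng(1)
n_images, signal, noise = 16, 1000.0, 50.0                 # counts and 1-sigma noise per image

trials = signal + noise * rng.standard_normal((100_000, n_images))

summed, averaged = trials.sum(axis=1), trials.mean(axis=1)
print("single image SNR:", signal / noise)                       # 20
print("summed SNR      :", summed.mean() / summed.std())         # ~80 = 20 * sqrt(16)
print("averaged SNR    :", averaged.mean() / averaged.std())     # same ~80
```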

Sometimes, however, the results are not quite as anticipated, simply because of poor seeing, which is potentially not a lot different from defocusing an image; it depends.

Final advice is to experiment with your own setup, when you have it finalized, and then draw your own conclusions as to what works best for your equipment and seeing conditions.

Oh, you probably already know this, but when you order your CCD make sure you get at least a "V" filter with which to do your observations.

Ad Astra

Tim Crawford, CTX

Mentoring Team

Affiliation
American Association of Variable Star Observers (AAVSO)
Uncertainty Considerations

Tim and Gianluca,

Regarding increases in uncertainty:

1. If you are defocusing because of undersampling, the error determined by analytical means such as the CCD error equation will go up, but the actual empirical error may not. Undersampling is bad for a couple of reasons. The first is that, from image to image, different amounts of light fall on areas of the CCD that are not sensitive to light, which increases the error; when the light is spread over more than about 4 pixels, that effect is reduced. Also, when you undersample, any residual pixel-to-pixel variation in sensitivity that flat fielding doesn't eliminate is increased, but this should be a secondary effect unless your star image falls on a bad pixel cluster. Error estimates computed analytically will not take the effects of undersampling into account and can report a lower error for the focused image (because its signal-to-noise ratio is higher), even though the empirical error determined from the standard deviation of, say, 5 images is lower for the defocused image. Of course, I am assuming here that the timescale of variability is slow compared to your image cadence.

2. If you defocus, the light is spread out over more pixels. Unless the brightest pixels were already at or over the upper limit of your linearity range, you should be able to increase your exposure duration, as long as your mount will track or guide properly over the longer period and the variability is not so fast that you need the fastest image cadence you can manage. Even then, I would defocus slightly to avoid undersampling and accept the lower SNR, provided it isn't so low that detection of the target and comps becomes uncertain. Below an SNR of, say, 30 or 40, I have found that to be a problem from time to time and have had to manually place the measurement apertures image by image. Below 20, I sometimes can't even do that.

You must be careful if you stack images. Some stacking routines reallocate light among pixels to get the best alignment of centroids. That is great for "pretty pictures" but you don't want to do that for photometry. You want to just shift images but maintain pixel values. Many image processing programs give you a choice of which method.
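A sketch of that whole-pixel shift-and-stack approach follows; the file names and offsets are placeholders, and note that np.roll wraps pixels around the frame edge, so keep measurement apertures away from the edges or trim them.

```python
# Sketch: align frames by whole-pixel shifts only, so pixel values are preserved.
import numpy as np
from astropy.io import fits

frames  = ["img1.fits", "img2.fits", "img3.fits"]   # hypothetical file names
offsets = [(0, 0), (2, -1), (5, 3)]                 # measured whole-pixel (dy, dx) offsets

aligned = []
for name, (dy, dx) in zip(frames, offsets):
    data = fits.getdata(name).astype(float)
    # np.roll moves whole rows/columns; no light is redistributed between pixels,
    # but the wrapped edge rows/columns must stay out of any measuring aperture.
    aligned.append(np.roll(data, shift=(dy, dx), axis=(0, 1)))

stack = np.mean(aligned, axis=0)   # average; np.sum also works, but watch comp-star saturation
fits.writeto("stacked.fits", stack, overwrite=True)
```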

If you look at the mathematics involved in stacking, I think you will find that the result is the same as taking the mean of the photometric measurements of the same images and reporting the uncertainty of the mean (sigma/SQRT(N)) rather than the empirical uncertainty derived from the same images (sigma). With individual measurements, however, you have the added advantage of seeing the "scatter" of the data points that went into the mean, so you can better determine whether something unusual is going on in any of them, and you know that no unknown image processing has been done by the stacking program. Further, some stacking programs do strange things with image time: it becomes difficult to determine the midpoint in time of the images you are combining. I have found that a number of programs do not report the mean of the midpoints of the images; you have to go back to the individual image headers and do the math manually.
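That manual math can be as simple as the sketch below, assuming the common DATE-OBS (exposure start, UTC) and EXPTIME (seconds) keywords; check what your own acquisition software actually writes.

```python
# Sketch: mean mid-exposure time of a stack, computed from the individual headers.
import numpy as np
from astropy.io import fits
from astropy.time import Time

frames = ["img1.fits", "img2.fits", "img3.fits"]    # hypothetical file names

mid_jds = []
for name in frames:
    hdr = fits.getheader(name)
    start = Time(hdr["DATE-OBS"], format="isot", scale="utc")   # exposure start
    mid_jds.append(start.jd + 0.5 * float(hdr["EXPTIME"]) / 86400.0)

print("mean mid-exposure JD of the stack:", np.mean(mid_jds))
```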

Finally, unless you report an empirically determined uncertainty from bins of stacked images,  it is better to report sigma from the previous paragraph rather than sigma/SQRT(N) anyway, since sigma is the error of the sample from which you derived that reported mean value.

A worthwhile exercise to repeat a number of times, under different conditions and over a wide range of SNR, is to compare the empirically determined uncertainty to the analytically calculated uncertainty for individual images reported by your software. I find that in most cases the empirically derived uncertainty is larger.
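A rough sketch of such a comparison, assuming you have a handful of magnitudes of a constant check star plus the quantities that feed a simplified CCD error equation; all numbers below are placeholders.

```python
# Sketch: empirical scatter vs. sigma/sqrt(N) vs. a simplified analytical estimate.
import numpy as np

mags = np.array([12.412, 12.425, 12.408, 12.431, 12.419])   # placeholder check-star magnitudes

sigma = mags.std(ddof=1)                   # empirical scatter of the individual images
sigma_mean = sigma / np.sqrt(len(mags))    # uncertainty of the mean of those images

# Simplified CCD (signal-to-noise) error equation, sky/dark/read terms only:
star_e, sky_e_px, dark_e_px, read_e, npix = 50_000.0, 200.0, 5.0, 5.0, 80
snr = star_e / np.sqrt(star_e + npix * (sky_e_px + dark_e_px + read_e**2))
sigma_analytic = 1.0857 / snr              # 2.5/ln(10) converts SNR to an approx. mag error

print(f"empirical sigma     : {sigma:.4f} mag")
print(f"sigma of the mean   : {sigma_mean:.4f} mag")
print(f"analytical estimate : {sigma_analytic:.4f} mag")
```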

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Purposeful Defocus and Photometric Precision

Gianluca,

I can send you the paper on purposeful defocus and photometric precision written by one of my students, if you would like. I can send it offline in PDF format if you contact me. There is some useful stuff in there.

tcsmith<at>darkridgeobservatory<dot>org.

Thomas Smith

Dark Ridge Observatory

Purposeful Defocus and Photometric Precision

Thomas:
I too have been working with slightly to heavily defocused photometry. For most of my later work it was because I was using a short focal length scope and needed the improved PSF sampling. Earlier on, I did some occasional exoplanet work defocused up to about 15 px FWHM (about the maximum that Maxim would detect as a star) to get increased signal and reduced scintillation. So I too would be very interested in seeing the paper you mentioned.

Southworth 2009 (arXiv:0903.2139) has an interesting analysis of S/N for defocused photometry.

Rick Wagner

Affiliation
American Association of Variable Star Observers (AAVSO)
Measuring seeing for sampling

Hi all,

Thank you for all the contributions on this subject. It is now clear to me that for my scopes it is better to get the Atik 490EX, which minimizes the risk of undersampling. I will also have a look at the paper Thomas mentioned. I have another question regarding seeing variation during the night. Suppose the seeing improves after an hour: should I just check the FWHM of the stars in each individual frame to see whether I am in danger of undersampling? I am using AIP4WIN. Is there a recommended procedure to check seeing variation?

Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
Oversampling

I have another question. What if the seeing gets worse and I get stars with a FWHM of 5 or 6 pixels? In the case of undersampling, defocusing is a viable option to avoid spurious values, but is there any problem with the oversampling mentioned above?

Affiliation
American Association of Variable Star Observers (AAVSO)
Oversampling

As you say, defocusing can improve your results with brighter stars as well as in good seeing conditions. If your seeing gets worse, your FWHM will be larger and you may find that close stars get blended into the photometry annulus. This situation will affect your results, but if the blended stars are inside your photometric aperture in all of the time-series images, you should be okay. If the star field is fairly sparse in the area of interest, then the spread-out light should not be too bad.

My advice would be to try varying amounts of defocus, and then you will know just how the photometry responds. Use a well-known field like M67 and go from very tight focus to a large defocus in reasonable increments. Perform your normal aperture photometry and graph the resulting STDEV of the recorded flux to see how things respond. Also do the same test while varying your aperture radius and replot that data. I found that there is a "sweet spot" using the right combination of defocus and aperture radius.
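As a sketch of the aperture-radius part of that test (assumptions: guided frames of the same field, a fixed pixel position for one star, and the photutils package; substitute whatever photometry tool you normally use):

```python
# Sketch: scatter of a star's instrumental magnitude as a function of aperture radius.
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

frames  = ["m67_001.fits", "m67_002.fits", "m67_003.fits"]  # hypothetical file names
star_xy = [(512.3, 498.7)]                                  # hypothetical (x, y); assumes guided frames

for radius in (3, 5, 8, 12, 16):                            # aperture radii in pixels
    mags = []
    for name in frames:
        data = fits.getdata(name).astype(float)
        data -= np.median(data)                             # crude sky subtraction
        tbl = aperture_photometry(data, CircularAperture(star_xy, r=radius))
        mags.append(-2.5 * np.log10(tbl["aperture_sum"][0]))  # instrumental magnitude
    print(f"r = {radius:2d} px: scatter over frames = {np.std(mags, ddof=1):.4f} mag")
```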

Tom Smith

Dark Ridge Observatory

Affiliation
American Association of Variable Star Observers (AAVSO)
oversampling

RGN queried about poor seeing with oversampling.  As Tom Smith mentioned, poor seeing is similar to defocus, in that you run the risk of blending with nearby stars, as you generally use a larger aperture for your measurement.

We had 0.3-arcsec pixels at USNO-Flagstaff on the 1.55m telescope, with nominal seeing of about 1 arcsec. We stopped observing when the seeing deteriorated to 2.5 arcsec, not because the photometry or astrometry was compromised by the bloated images, but because of blending and the longer exposures required to reach decent signal/noise per pixel. A similar thing happens with amateur setups; it is just scaled. If your seeing gets into the 6-7 pixel range, then the light is spread over many more pixels; each has less flux and each contributes sky, dark and readout noise. You can certainly observe under those circumstances, but I'd recommend using longer exposures. That said, what I often found was that poor-seeing nights had variable seeing: it would stabilize for a few minutes, then deteriorate again. Increasing the exposure times then meant that some exposures were saturated.

I either closed down on poor-seeing nights with the 1.0m telescope I primarily used for photometry, or switched to very bright targets, ones that would typically saturate with the shortest normal exposures. By doing so, I could often get photometry of stars 2-3 magnitudes brighter than usual, and therefore continue bright-star research projects.

You will gain experience as to what is useful and what isn't.  Whenever conditions fall outside of the norms, take more care and examine things more closely, but don't automatically assume that the results will be poor.

Arne