Affiliation
American Association of Variable Star Observers (AAVSO)
Wed, 03/30/2022 - 15:10

My average seeing is between 1 and 3 arcseconds, and my equipment is a Celestron C8 (non-Edge) with a Starlight Xpress H35 (9 micron pixels). I seem to be having a hardware issue with 2x2 binning that I'm currently working out with SX, so I may only have 1x1 binning available to me. Is this going to be an issue? What would be the effect of adding a focal reducer to the optical train? Is it ill-advised?

 

Is anyone using the above setup for photometry?  

Affiliation
American Association of Variable Star Observers (AAVSO)
Focal reducers

If you use a decent-quality FR and are careful to get your spacing correct, you should be fine.

A 0.66x FR is pretty easy.  A 0.5x can be a bit cranky but should work with your 8" SCT.  I wouldn't go shorter than 0.5x.

FWIW I have used both Celestron and Meade reducers and they work.  I have used Optec for the last 10 years or so.
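For readers wondering why the spacing matters: to first order a focal reducer behaves like a thin positive lens, so its effective reduction depends on the reducer-to-sensor distance. A minimal Python sketch, assuming an illustrative reducer focal length of 285 mm (none of these numbers come from the posts above; check the specifications for your own unit):

```python
# Thin-lens sketch of why reducer-to-sensor spacing matters: the effective
# reduction is m = 1 - d / f_reducer.  The 285 mm focal length is only an
# illustrative value; check the specifications for your own reducer.
F_REDUCER_MM = 285.0

for d in (85.0, 105.0, 125.0):          # reducer-to-sensor spacing, mm
    m = 1.0 - d / F_REDUCER_MM          # effective reduction factor
    print(f"spacing {d:5.1f} mm -> {m:.2f}x")
```

Moving the sensor farther from the reducer gives stronger reduction (and, in practice, more vignetting and coma), which is why getting the designed spacing right matters.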

Jim

 

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Focal Reducer

Okay, Alfredo, let's do some math:

Your average seeing is between 1 and 3 arcseconds. (By the way, 1-arcsecond seeing is pretty extraordinary; most of us experience seeing in the 3-4 arcsecond range.) You have roughly a 2,000 mm focal length and 9 micron pixels, so your image scale is (9/2000)*206.265 = 0.9 arcsec/pixel, which means your seeing disk spans 1.1 to 3.3 pixels. Version 1.1 of the AAVSO CCD Photometry Guide (on page 18) says, "You are just looking for an approximate number of 2-3 pixels per FWHM."

This is exactly where you are (2-3 pixels per FWHM) with no binning and with no focal reducer.

If you were to bin 2x2, you would shift to 0.6 to 1.6 pixels for your seeing diameter. This is undersampled. The Guide says that undersampling is a much more serious problem than oversampling, so binning should be avoided.
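A quick Python sketch that reproduces this arithmetic (the focal length, pixel size, and seeing values are the ones quoted in this thread, not measurements of mine):

```python
# Image scale and pixels per seeing FWHM for a ~2000 mm C8 with 9 micron
# pixels, unbinned and binned 2x2.
FOCAL_LENGTH_MM = 2000.0
PIXEL_UM = 9.0

for binning in (1, 2):
    scale = 206.265 * PIXEL_UM * binning / FOCAL_LENGTH_MM   # arcsec per pixel
    for seeing in (1.0, 3.0):                                # seeing FWHM, arcsec
        print(f'{binning}x{binning}: {scale:.2f}"/px, {seeing:.0f}" seeing '
              f'-> {seeing / scale:.1f} px per FWHM')
```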

The only rationale for using a focal reducer in this situation is if you need a larger field of view. However, your camera has a 36mm x 24mm sensor, which gives you a FOV of 62 x 40 arcmin without the focal reducer. The updated Guide that's being worked on right now actually suggests that this FOV is borderline too big, perhaps necessitating differential extinction corrections at some elevation angles because of the large chunk of sky being imaged. A focal reducer would actually be problematic.
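And the corresponding fields of view, using the small-angle approximation; the 0.63x factor below is only an illustrative reducer value, not a recommendation:

```python
# Field of view of the H35's 36 x 24 mm sensor on the C8, with and without
# an illustrative 0.63x reducer (small-angle approximation).
import math

SENSOR_MM = (36.0, 24.0)
FOCAL_LENGTH_MM = 2000.0

for label, reduction in (("native f/10", 1.0), ("with 0.63x reducer", 0.63)):
    fl = FOCAL_LENGTH_MM * reduction
    fov_w, fov_h = (math.degrees(s / fl) * 60.0 for s in SENSOR_MM)  # arcmin
    print(f"{label}: {fov_w:.0f} x {fov_h:.0f} arcmin")
```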

You've got a camera well-matched to your telescope for the average seeing conditions with no binning and no focal reducer.

- Mark M

Affiliation
American Association of Variable Star Observers (AAVSO)
Mike,

Thank you so much for…

Mike,

Thank you so much for your well-articulated and informative reply. I've been doing some testing; someone told me my seeing was 1-3", but it does seem to be more like 3-4". Everything in your argument for not using a FR made sense to me. My question is: with the more likely 3-4" seeing, does the argument still apply? Would it still be better to be oversampled than to use the FR? Thanks so much for your time and advice.

Affiliation
American Association of Variable Star Observers (AAVSO)
Focal Reducer

Hello! With your C8 and H35, you get a FOV of 1 x 0.7 degrees, so I doubt there is a need for a focal reducer to increase your FOV. With the full-sized sensor, you would be dealing with a lot of vignetting if you choose to use a focal reducer, though it can be flat-fielded out. A 0.67x reducer gives a FOV of 1.5 x 1 degrees and 1.4 arcsec per pixel. Do you need such a large field for what you would like to image?

    With 1x1 binning, you get 0.9"/pixel; binned 2x2, 1.8"/pixel. At 3 pixels per FWHM with 1x1 binning, that corresponds to 2.7 arcsec. Is that about your seeing? Even if you run closer to four arcsec seeing, you would be only a little oversampled, but without the problems of binning, vignetting, etc. I doubt that you would see a difference in measurement error by running oversampled with your setup. Lots of folks run oversampled with the current generation of small pixels on new chips.
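A small sketch of these figures, turning the 2-3 pixels-per-FWHM guideline around to ask what seeing each configuration is matched to (the 0.67x value is the one quoted above; everything else is the quoted C8/H35 setup):

```python
# Seeing range matched to 2-3 pixels per FWHM for each configuration.
FOCAL_LENGTH_MM, PIXEL_UM = 2000.0, 9.0

configs = {
    "1x1, no reducer":    1.0,
    "2x2, no reducer":    2.0,
    "1x1, 0.67x reducer": 1.0 / 0.67,
}

for name, factor in configs.items():
    scale = 206.265 * PIXEL_UM / FOCAL_LENGTH_MM * factor    # arcsec per pixel
    print(f'{name}: {scale:.2f}"/px, matched seeing '
          f'{2 * scale:.1f}" to {3 * scale:.1f}" FWHM')
```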

    With binning, as Mark mentioned, you start to run into undersampling when you have better-than-average seeing conditions.

    You could image the same target with and without a focal reducer on different nights and calculate the measurement error difference between both setups. That way you could make an informed decision based on your local seeing conditions.
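One simple way to make that comparison, sketched below: measure the same check star repeatedly on each night and compare the scatter of its magnitudes. The magnitude lists here are placeholders that only show the shape of the calculation, not real data:

```python
# Compare the night-to-night scatter of a check star measured with and
# without the focal reducer.  Replace the placeholder lists with your own
# magnitudes from each configuration.
import statistics

mags_no_fr = [12.351, 12.348, 12.355, 12.343, 12.350]    # placeholder values
mags_with_fr = [12.349, 12.360, 12.341, 12.356, 12.346]  # placeholder values

for label, mags in (("no FR", mags_no_fr), ("with FR", mags_with_fr)):
    print(f"{label}: std dev = {statistics.stdev(mags):.4f} mag "
          f"over {len(mags)} measurements")
```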

    For comparison, I use a Meade 8" LX200 classic (2000 mm focal length) with an SBIG ST-402 (9 micron pixels). I have to use a focal reducer (about 0.6x) to get to a 12 x 18 arcmin FOV (8 x 12 without a focal reducer, which is small enough to significantly limit my potential targets). Even with such a small chip, I start to get vignetting, but fortunately that can be flat-fielded out with my setup. Since my seeing averages 3.5 to 4 arcsec, I get a good match with a focal reducer and enough FOV to be useful. Rarely are conditions excellent enough that I run into potential undersampling. Best regards.

Mike

 

Affiliation
American Association of Variable Star Observers (AAVSO)
hi mike, thanks so much…

Hi Mike,

Thanks so much for your time and advice. Your explanation also makes perfect sense to me. My issue is that, after some testing, I think my seeing averages more like 3-4". Do you think a focal reducer makes sense in this case, or is it better to be oversampled than to introduce the FR? When the clouds go away, I plan on doing as you mentioned: imaging both with and without the FR and analyzing the results.

Edit: After reading over your explanation again, I see you already mentioned it would still be better to be oversampled. I'll still do the comparison and update you with my results. Thanks again for all your time and help.

Affiliation
American Association of Variable Star Observers (AAVSO)
C8 and Focal Reducer

Hello! It seems that with your setup, you can get nearly perfect sampling with a FR or be mildly oversampled without a focal reducer. With the minor oversampling of your system without a focal reducer, I doubt that you would notice a difference in the quality/error/uncertainty of your data unless you are really pushing your equipment's capabilities for photometry of faint objects. I've never tried that since, with my equipment, that would mean long exposures - like over 10 minutes. With the mediocre tracking of my LX200 mount and my variable local seeing conditions, I would not get a good result anyway.

    To me, this comes down to the FOV you desire. If you can get all your targets with the smaller FOV that you would get without the use of a FR, then a FR would not add anything to your projects.

    If you can get all your targets without a FR, one advantage would be that you would not need to modify your setup on those rare nights when you do get nearly perfect seeing conditions. Unless you would take off the FR during such times, refocus, etc., you would get undersampling and your data on those near perfect nights would be problematic - the last thing desired for nights of near perfect seeing!

    One thing: the Nyquist theorem was developed for acoustics, and we apply it to photometry because it makes sense. I've not read how much work has been done to "prove" what the optimum coverage is for photometry. Perhaps oversampling is best for photometry even though the Nyquist theorem says otherwise for acoustics? Perhaps we can get by with 1.75 pixels per FWHM even though the Nyquist theorem says that would be undersampled for acoustics? Other folks can weigh in on this better than I can.
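For anyone who wants to poke at this question numerically, here is a hedged Monte Carlo sketch (not anything from the Guide): it bins a synthetic Gaussian star onto pixels at different samplings, adds Poisson and read noise, and compares the scatter of simple aperture sums. Every number in it is an illustrative assumption, and the sky level is assumed known rather than estimated from an annulus.

```python
# Monte Carlo sketch: photometric scatter versus pixels per FWHM for a
# synthetic Gaussian star, properly integrated over the pixels, with Poisson
# and read noise.  All values are illustrative assumptions.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(42)
STAR_FLUX, SKY, READ_NOISE, N_TRIALS = 50_000.0, 50.0, 10.0, 400

def gauss_in_pixels(centers, mu, sigma):
    """Fraction of a unit 1-D Gaussian falling in each unit-width pixel."""
    edges = centers[:, None] + np.array([-0.5, 0.5])
    e = erf((edges - mu) / (np.sqrt(2.0) * sigma))
    return 0.5 * (e[:, 1] - e[:, 0])

def fractional_scatter(pix_per_fwhm):
    sigma = pix_per_fwhm / 2.355                    # Gaussian sigma in pixels
    half = int(np.ceil(4 * sigma)) + 3
    c = np.arange(-half, half + 1, dtype=float)     # pixel centre coordinates
    yy, xx = np.meshgrid(c, c, indexing="ij")
    fluxes = []
    for _ in range(N_TRIALS):
        dx, dy = rng.uniform(-0.5, 0.5, 2)          # random sub-pixel centring
        star = STAR_FLUX * np.outer(gauss_in_pixels(c, dy, sigma),
                                    gauss_in_pixels(c, dx, sigma))
        img = rng.poisson(star + SKY) + rng.normal(0.0, READ_NOISE, star.shape)
        aperture = (xx - dx) ** 2 + (yy - dy) ** 2 <= (1.5 * pix_per_fwhm) ** 2
        fluxes.append(img[aperture].sum() - SKY * aperture.sum())  # true sky assumed known
    return np.std(fluxes) / STAR_FLUX

for ppf in (1.0, 1.75, 2.5, 4.0):
    print(f"{ppf:.2f} px/FWHM -> fractional scatter {fractional_scatter(ppf):.4f}")
```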

    If you do compare your results with and without a focal reducer, I would be very curious to hear what you find. Best regards.

Mike

Affiliation
Variable Stars South (VSS)
FWHM, undersampling, oversampling and the Nyquist theorem

Mike's second-to-last paragraph mentions these. The issue is matching pixels to the point-spread function, which is the heading of Section 1.5.4 on page 29 of "The Handbook of Astronomical Image Processing" by Berry and Burnell. The second paragraph in that section states:

'Applied to image sampling, the Nyquist theorem suggests that the size of a pixel must be half the diameter of the diffraction disk as defined by its full-width half-maximum dimension. Images sampled at this rate are called "critically sampled," because the image has been broken into just enough pixels to capture all detail in the image.'

I think this is the nub. The paragraph refers to sufficient resolution for astrophotography to capture all the detail in the image. It does not refer to photometry.

I have for some time been puzzled by the guideline of an image scale for photometry of 2 to 3 pixels across the FWHM, because I can't see why this optimises the signal-to-noise ratio.  The FWHM is constant across all well-focussed, non-saturated stars in an image. It is a definition of a mathematical parameter, not a physical reality, because the sizes of the seeing disks themselves (the actual images of the stars) vary with the magnitudes of the stars.

Even the faintest stars (depending on the equipment) may have, say, 16 pixels in the seeing disk (not across the seeing disk, but the total number of pixels making up the image). For brighter stars, it may be several dozen pixels or more. It seems to me that, provided seeing disks have these sorts of numbers of pixels or more, the signal-to-noise ratio is maximised when the number of photons per stellar image is as high as possible within the range of linearity of the sensor.

Roy

Affiliation
Variable Stars South (VSS)
FWHM, undersampling, oversampling and the Nyquist rate

No-one has replied in this Forum to my 'heretical' view expressed above. There has been an off-line private exchange of views, in part of which I wrote something like the following, which perhaps explains my thoughts better.

This comment relates to the 'dogma' (not meant as a pejorative term - I just can't think of a better one) that the current guideline of an image scale of 2-3 pixels across the FWHM for aperture photometry is based on the Nyquist rate. The Nyquist rate, applied initially to acoustic waveforms, is the sampling rate - twice the highest frequency present - at which the original signal can be faithfully reproduced (or identified, in Fourier analysis).

So we have this 2:1 ratio, which has been transposed into the guideline about image scale for photometry expressed in terms of the FWHM. OK, the ratio is the same, but there is no 'frequency' that the 2:1 ratio relates to - we have a static image, with seeing disks of various sizes.

I have never believed that oversampling is a problem (the AAVSO CCD observing guide implies that it is), provided that the conditions are right (for example, if an image is defocussed, it obviously spreads the seeing disks and adjacent stars can come to overlap. This clearly has to be avoided). Heavily defocussed (and therefore heavily oversampled) images have been used to yield high-precision time series photometry of exoplanet transits (Southworth J. et al, MNRAS Vol 396, Issue 2, June 2009, pages 1023-1031).

Finally, I suspect that 2-3 pixels per FWHM is a good guideline for avoiding undersampling. I suspect it is a good guideline because it works, not because it is based on the Nyquist rate. It probably works best for observers with CCDs with large pixels, imaging faint stars for photometry, where the seeing disks are small and composed of few pixels.

Roy

Affiliation
American Association of Variable Star Observers (AAVSO)
Imaging is NOT Photometry, Photometry is NOT Imaging.

The discussion of image sampling in Section 1.5.4 of Berry and Burnell is about effectively capturing all of the detail in the image at the focal plane of the telescope. Critical sampling is desirable in the context of what sort of images you are making. For wide-field imaging, you may need to undersample to include as much field as possible; in planetary imaging, oversampling can help catch detail smaller than the average blur disk in moments of extra-good seeing.

Photometry is about measuring the total flux from a star minus the light of the sky background. Basically, you are dealing with a cluster of pixels that are brighter than their surroundings and wish to determine the total flux in ADUs contained in that cluster. The cluster is typically brightest at the center and fades radially. Every pixel in the cluster also contains background sky light that must be estimated from the sky in the annulus and subtracted. It is essential that the brightest pixels in the center of the cluster not be saturated, and, because seeing varies, they should stay a safe distance below saturation. You control this through exposure time and, in some cases, by moderately defocusing the image. There is no need to defocus more than necessary to avoid pixel saturation. Focal length and image scale are ways to make the cluster of star pixels conveniently small but large enough to require a reasonable aperture radius. Aperture radii between 3 and 6 pixels are common.
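A minimal numpy sketch of that measurement, with illustrative radii (real tools such as VPhot, AstroImageJ, or photutils add centroiding, partial-pixel handling, and error propagation):

```python
# Sum the ADUs in a circular aperture around the star and subtract the
# per-pixel sky level estimated from the median of an annulus.
import numpy as np

def aperture_flux(image, x0, y0, r_ap=5.0, r_in=10.0, r_out=20.0):
    """Sky-subtracted flux (ADU) of a star centred at (x0, y0)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)           # distance of each pixel from the star
    aperture = r <= r_ap                     # pixels summed as star + sky
    annulus = (r_in <= r) & (r <= r_out)     # pixels used to estimate the sky
    sky_per_pixel = np.median(image[annulus])
    return image[aperture].sum() - sky_per_pixel * aperture.sum()
```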

Figure 10.1 in Berry and Burnell is misleading in that, for good photometry, the aperture should snuggle up to the star image, lying just at the toe of the star's PSF profile or slightly inside. In other words, the aperture in the figure is too large. I covered some of this in my "How-To" webinar in March. The optimum aperture and annulus radii are NOT intuitively obvious, but they follow directly from Equ. 10.8 through 10.14. The purpose of the aperture is to capture as much starlight as possible while including as little skylight as possible. If the aperture is too large, it includes sky pixels without any starlight, increasing the noise without increasing the signal. If it's too small, it includes less starlight than it could.

The purpose of the annulus is to allow the most accurate estimation of the sky background light. The more pixels it includes the better, up to the point that the annulus intercepts light from nearby stars. The suggested 4:1 annulus/aperture ratio is a guideline, not a rule. 

It is not intuitively obvious, but the highest signal-to-noise ratio occurs when the aperture includes about 80% of the total starlight; that radius gives the best balance between captured starlight and admitted skylight. The effect is most important for faint stars, where the skylight is an appreciable fraction of the light in the aperture. Most photometrists, including me, feel uncomfortable squeezing into the star that much, and opt to put the aperture at the toes of the star's PSF profile, at some cost in signal-to-noise ratio.
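To see that trade-off numerically, here is a sketch assuming a Gaussian PSF and the standard CCD noise model, SNR = S / sqrt(S + n_pix*(sky + RN^2)); the star flux, sky level, read noise, and FWHM are illustrative assumptions, not values from Berry and Burnell:

```python
# SNR versus aperture radius for a Gaussian star profile (faint star against
# moderate sky).  All numbers are illustrative assumptions.
import math

STAR_FLUX = 2_000.0   # total star electrons (assumed)
SKY = 100.0           # sky electrons per pixel (assumed)
READ_NOISE = 10.0     # electrons RMS (assumed)
FWHM = 3.0            # pixels
SIGMA = FWHM / 2.355

for r in [1.0 + 0.25 * i for i in range(13)]:            # aperture radius, pixels
    enclosed = 1.0 - math.exp(-r**2 / (2.0 * SIGMA**2))  # Gaussian enclosed fraction
    signal = STAR_FLUX * enclosed
    noise = math.sqrt(signal + math.pi * r**2 * (SKY + READ_NOISE**2))
    print(f"r = {r:4.2f} px  enclosed {enclosed:5.1%}  SNR {signal / noise:5.1f}")
```

With these particular assumed values, the SNR peaks near an aperture radius of roughly 0.8 x FWHM, where about 80% of the starlight is enclosed, consistent with the figure quoted above; for brighter stars or darker skies the optimum moves outward.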

One final note: photometry is not about making pretty pictures with sharp star images. Mushy Gaussian blobs are great. Exposures should be long enough that scintillation averages out, unless there is a need for high cadence to meet the goal of the object you are observing.

--Richard