Signal to noise ratio vs. brightness

Affiliation
American Association of Variable Star Observers (AAVSO)
Mon, 10/29/2012 - 10:23

I have a question regarding linearity of SNR; perhaps it is a dumb question. Is SNR directly proportional to the brightness of a star? For example, if I measure a mag 9.0 star with SNR = 100, should I get SNR = 251 for a mag 8.0 star, assuming I am working within the linearity range of the camera and under the same conditions (say, for instance, measuring a series of stars in the same frame)? Are any deviations in this regard indicative of a lack of linearity?

Thank you

Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
SNR vs brightness

Hi Gianluca,

The signal from the star goes up by a factor of about 2.512 when you go from mag 9.0 to mag 8.0, but the noise contribution also goes up.  For bright stars, where the dominant noise source is simply the photon arrival rate (Poisson noise), SNR goes as

SNR = signal / noise

SNR = signal / sqrt(signal)

SNR = sqrt(signal)

so if the SNR was 100 for your 9th magnitude star, that means it had 10,000 detected photons in the simple case.  For the 8th magnitude star, you would have about 25,120 detected photons.  That means

SNR = sqrt(25,120)

SNR = 158
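
The same arithmetic in Python, as a quick check (the variable names are mine, not from the thread):

    import math

    # Bright-star (Poisson-limited) case: SNR = sqrt(detected photons).
    # Numbers follow the mag 9.0 -> mag 8.0 example above.
    n9 = 100**2           # mag 9.0 star with SNR = 100 -> 10,000 photons
    ratio = 10**0.4       # flux ratio for a 1-magnitude step (~2.512)
    n8 = n9 * ratio       # ~25,120 photons for the mag 8.0 star

    print(math.sqrt(n9))  # 100.0
    print(math.sqrt(n8))  # ~158.5, not 251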

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
SNR for faint stars

Hi Gianluca,

The situation gets more complex for faint stars, as the photon arrival rate is no longer the dominant noise source; usually the sky background becomes important.  Noise sources add in quadrature, so the equation has more terms and is more complex. There is a nice signal/noise calculator online that can do a decent job of providing the signal/noise for various conditions:

http://www.tass-survey.org/richmond/signal.shtml

Michael Newberry has a more complex one that includes additional parameters, but for just getting a rough idea of how SNR goes as a function of magnitude, the calculator from Michael Richmond is a good starting point.
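
For reference, the standard CCD signal-to-noise equation that such calculators implement can be sketched in a few lines of Python; the function and the example numbers below are illustrative assumptions, not values taken from either calculator:

    import math

    def ccd_snr(n_star, n_pix, sky_per_pix, dark_per_pix, read_noise):
        """Standard CCD SNR equation: noise terms add in quadrature.

        n_star       -- net electrons detected from the star
        n_pix        -- pixels inside the measurement aperture
        sky_per_pix  -- sky electrons per pixel
        dark_per_pix -- dark-current electrons per pixel
        read_noise   -- read noise (RMS electrons) per pixel
        """
        noise = math.sqrt(n_star + n_pix * (sky_per_pix + dark_per_pix + read_noise**2))
        return n_star / noise

    # Illustrative values only:
    print(ccd_snr(n_star=10_000, n_pix=100, sky_per_pix=50,
                  dark_per_pix=2, read_noise=10))   # ~63, vs 100 for pure Poisson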

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
SNR and aperture annulus

Hi Arne,

thank you for the information. I have another question regarding SNR: what is the recommended aperture annulus to avoid a decrease in SNR? I have learned and experimented that if the annulus is too large, the SNR decreases a lot. If the annulus contains all of the star pixels and a small surrounding area of sky, the SNR decreases by more than 50%. According to some internet sources, the optimal aperture should lie between 0.68 and 1x FWHM. What is your recommendation? Can you also give me some advice on the gap and sky annulus?

Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
aperture size

Picking the optimum aperture size is a bit trickier.  In general, something around 4-5x FWHM in diameter does a pretty good job on most stars.  If you want more detail, VPHOT produces the "curve of growth" plot that shows you the optimum signal/noise, and Steve Howell's CCD book gives both a description of the procedure and a reference to his earlier paper on the topic.
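
If you want to see what a curve of growth looks like without real data, here is a minimal Python sketch on a synthetic Gaussian star; the brightness, FWHM, and sky level are invented for illustration:

    import numpy as np

    # Curve of growth: sky-subtracted flux versus aperture radius for a
    # synthetic Gaussian star (total flux 5000) on a flat sky background.
    size, fwhm, sky = 64, 4.0, 10.0
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size / 2, y - size / 2)
    image = 5000.0 / (2 * np.pi * sigma**2) * np.exp(-0.5 * (r / sigma) ** 2) + sky

    for radius in np.arange(1, 5) * fwhm:           # 1x to 4x FWHM radius
        mask = r <= radius
        net = image[mask].sum() - sky * mask.sum()  # sky-subtracted aperture sum
        print(f"r = {radius:4.1f} px  net flux = {net:8.1f}")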

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture Size

Arne, I assume you mean an aperture diameter of 4-5x FWHM, which would give an aperture radius of 2 to 2.5x the FWHM. I think I have seen a recommended measurement aperture radius of 1.5 to 2.0x FWHM elsewhere on the AAVSO site, which agrees closely with the radius you recommend, if I am interpreting correctly.

 

Gianluca's question could cause some confusion. The term "aperture" is usually used to mean the star measurement circle, the center circle of the three rings used by most photometry tools (though I have seen some that use only two rings). "Annulus" usually refers to the donut formed by the two outer rings, which is used to measure the sky background. From the context it seems clear that the question refers to the measurement circle.
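
To make the three-ring terminology concrete, here is a small numpy sketch of the geometry; the radii are example values, not a recommendation:

    import numpy as np

    # Three rings: measurement aperture (inner circle), a gap, and the sky
    # annulus (the donut between the two outer radii).
    size, r_ap, r_in, r_out = 33, 6.0, 10.0, 16.0   # pixels (examples)
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size // 2, y - size // 2)

    aperture = r <= r_ap                  # star measurement circle
    annulus = (r >= r_in) & (r <= r_out)  # sky background donut
    print(aperture.sum(), "aperture pixels;", annulus.sum(), "annulus pixels")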

For reference, attached are some growth curves and standard error estimates I made for the "dipper" asterism region of M67. These are curves made from a single image using the "error" output of the photometry software. This error estimate is based on the CCD error equation. The star IDs refer to the numbers on Arne's sequence chart for M67, which is also attached. I omitted the error curve for star 30. Its proximity to a much brighter star gave it an order of magnitude larger error at all apertures, which expanded the Y axis scale too much to allow the other curves to be seen clearly. Notice that the proximity of nearby stars seems to affect the location of the noise minima more than the magnitude of the star you are measuring. Star 52 has the second-highest magnitude, but its standard error keeps decreasing as the aperture increases. It is almost exactly the same magnitude as star 49, but star 49's standard error increases beyond an aperture radius of ~4.5. Star 49 has a much closer neighboring star than star 52.

The other thing to keep in mind is that these standard errors are measured using the CCD error equation in a single image. Seeing, which at just over 3 arcsec FWHM was below average for my location, favors use of a larger aperture. The standard error empirically determined from a series of images is generally much greater than the standard error determined in a single image from the CCD equation. For example, the standard V mag error for star 52 determined from photometry of 7 images at an aperture radius of 5.1 pixels was ±0.02 mag after extinction correction. That is a perfectly reasonable uncertainty for a 12.8 V magnitude star from 7 images spanning airmass 1.06 to 1.67 at 4-minute exposures in a 10" telescope under worse than average seeing conditions. A ±0.003 mag uncertainty estimate would be unreasonable to the point of making the photometry not credible. The variation due to seeing, changes in seeing over time, and changes in focus over time is why you normally need to use a much bigger aperture than a single-image growth curve based on the "error" output of your software package indicates. This is particularly true if you use a constant aperture for all images in a series, which I think most of us do, since unless you have an IRAF script, calculating a growth curve for each image and manually picking the best aperture for each image would be extremely time consuming.

 

Brad Walter
WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture size

Thank you for clarifying a few points. I must apologise for creating some confusion. I indeed meant the star measurement circle by "aperture". As regards aperture size, I have read a paper by K. Mighell at Kitt Peak Observatory that recommends a 0.68-1x FWHM radius for best results, that is, around 1.5-2x FWHM in diameter, but of course I will follow what you and Arne have recommended, as the above-mentioned paper probably applies only to professional telescopes and cameras.
Gianluca

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture Size

Don't compare me with Arne. I am to Arne as a first year apprentice is to a master craftsman. I am still at the stage of working hard to understand what I really know vs. what I only think I know. So I spend a lot of time checking and testing everything to be sure of what I really know.

I am familiar with the Mighell article. While this may work for sites with very good seeing, such as Kitt Peak, which frequently has sub-arcsecond seeing, it is not optimum for areas with the more normal 2 arcsecond (or worse) seeing. There are a bunch of "well behaved" star image assumptions upon which this conclusion is based. One of the major ones is a Gaussian point spread function. It implies, among other things, that star images are symmetrical about their centers and that you have the same point spread function across an image. When the seeing is poor, those assumptions break down.

You can think of the atmosphere as a multitude of layers of bubble wrap, composed of convex bubbles in a wide range of sizes, distorted shapes, and transparencies, joined together by a similar jumble of convex areas, with all the layers moving past your telescope at various speeds and in various directions. This mess results in stellar images of constantly changing size and shape, and the changes are not constant across the field of view of your telescope. I have a relatively long focal length scope at 3000 mm, f/12, and a small chip that affords only an 8' x 5.4' field of view at 0.63 arcseconds per pixel. Even over that small area, the point spread function varies from star to star, and the distribution of the variation changes from image to image. In other words, the stars are a little lumpy most of the time, they aren't all distorted in the same way, and the lumpiness changes from image to image. It isn't caused by a static misalignment in my imaging path.

Mighell points out that locating the centroid is critical with small apertures. Poor seeing makes that more difficult, and on top of that, the amount of light that isn't captured by the measurement aperture varies from star to star and image to image. That means you need a larger aperture to achieve maximum signal to noise for differential photometry than Mighell's theoretical analysis indicates, for several reasons:

1. You are comparing stars in different parts of your image that have somewhat different point spread functions and different spatial distortion.

2. Centroid location is less accurate because the stars are asymmetrical in an inconsistent manner.

3. Even if the centroid is located precisely, different proportions of light "leak" out of the measurement aperture for individual stars, because individual stars have differing asymmetry.

4. The variations among stars change from image to image.

So to minimize the error, you end up with a larger aperture, to minimize the variation in the proportion of the total stellar flux captured from star to star and image to image due to causes not taken into account in Mighell's analysis, causes which become increasingly important as the seeing becomes increasingly poor.

 

Try comparing the error growth curve that you get from the error output of your software for a single image to the standard deviation of the check star magnitude in a short time series of, say, a dozen images under seeing conditions of 2 arcseconds or worse. I think you will find that the radius that gives the minimum error based on the scatter (standard deviation) of the magnitudes in the time series is significantly larger than the one you get from a growth curve created from a single image. The error calculated from the time series is also likely to be larger than the errors the software calculates for individual images, for the same reasons that you need a larger aperture, plus spatially varying changes in focus over the time series.
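
A rough Python sketch of that comparison; the check-star magnitudes below are randomly generated stand-ins for a real (apertures x images) table of measurements:

    import numpy as np

    # Pick the aperture that minimizes the *empirical* scatter of the check
    # star across a series of images, rather than the single-image error.
    radii = np.array([3.5, 4.0, 4.5, 5.0, 5.5, 6.0])     # pixels (example)
    rng = np.random.default_rng(0)
    mags = 12.8 + rng.normal(0, 0.02, (radii.size, 12))  # fake data, 12 images

    scatter = mags.std(axis=1, ddof=1)   # std dev over the image series
    best = radii[scatter.argmin()]
    print(scatter.round(4), "best radius:", best)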

Another thing seeing affects is guiding. You will find that you need to decrease the aggressiveness of the guiding correction and either increase the guiding exposure duration or, perhaps, average multiple guiding exposures so that your guider isn't chasing the seeing. This gave me fits (as well as crummy-looking FITS images) until I realized what was happening.


Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture Size

11/6/2012: Excuse me, everyone, I need to correct my correction. I seem to be unable to type a simple formula properly into Excel. First let me state the result of the 2D Gaussian integral I am using so that anyone so inclined can check my math. The result is

2*PI()*(1 - e^(-0.5*r^2)), evaluated between r = 0 and r = R,

where R is the aperture radius in units of the standard deviation. This result comes from the polar-coordinate integral over a plane of the Gaussian function e^(-0.5*r^2), which is the same shape used in the Gaussian probability density function and was used by Stetson, although in rectangular-coordinate form, in his 1987 PASP paper on DAOPHOT. This distribution assumes a radially symmetric star profile, which is frequently not true. So, as stated in the original posting, the actual errors will be worse.

If you use a constant aperture radius that is equal to the FWHM of the comparison star and 93.5% of the FWHM of the target (the same aperture applied to stars of varying FWHM in the image), then the R values are 2.355 and 2.202, respectively. The proportions of flux captured are 93.75% and 91.15% (a 2.6% difference). The ratio of the fluxes is 0.9722, and the resulting magnitude error caused by the variation in captured flux (with no other causes considered) is 0.031 magnitudes {-2.5*LOG(0.9722)}. You get essentially the negative of this magnitude error if the situation is reversed.

If you use an aperture radius that is two times this size, the comparable values are 99.9985% and 99.9939% (a 0.0046% difference). The ratio of fluxes is 0.999954, and the magnitude error from this single cause is 0.00005. Other causes of error will dominate, and some, like sky background noise, may increase as the aperture increases.

If instead of a 6.5% difference in FWHM you use the extreme difference of 19% detected between two stars in one image, the R values are 2.355 and 1.908 standard deviations for R = 1 x FWHM and R = 0.81 x FWHM. The proportions of flux captured are 93.75% and 83.79%, respectively (a 9.97% difference). The ratio of fluxes becomes 0.8937, and the magnitude error from this single cause becomes 0.122.

With R double this size, the R values are 4.71 and 3.815 standard deviations. The proportions of flux captured are 99.9985% and 99.9309%, respectively (a 0.068% difference). The ratio of fluxes becomes 0.9993, and the magnitude error from this single cause becomes 0.0007. Again, other causes of error will dominate, and some, like sky background noise, may increase as the aperture increases.
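
For anyone checking the math, the enclosed-flux formula above is easy to evaluate in Python (a minimal sketch; only the function name is mine):

    import math

    def enclosed_fraction(R):
        # Fraction of a circular Gaussian's flux inside radius R (in sigma
        # units): integrating e^(-r^2/2) in polar coordinates gives
        # 1 - e^(-R^2/2).
        return 1.0 - math.exp(-0.5 * R * R)

    # Reproduce the 6.5% FWHM-difference case (R in sigma = 2.355 x R in FWHM):
    f1 = enclosed_fraction(2.355)        # aperture radius = 1.000 x FWHM
    f2 = enclosed_fraction(2.202)        # aperture radius = 0.935 x FWHM
    print(f1, f2)                        # ~0.9375, ~0.9115
    print(-2.5 * math.log10(f2 / f1))    # ~0.031 mag, matching the post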

Brad Walter

Original Message:

The attached spreadsheet illustrates the FWHM variations in pixels among various stars in a single image and among measurements of the same star in multiple images. The spreadsheet analyzes the FWHM measurements of 10 stars in 10 images of a portion of the standard photometric field in NGC 7790. The plate scale is 0.63 arc seconds per pixel. Also attached are a PDF image of the spreadsheet, the finder chart for the star field, and a screen shot of one image displayed in the photometry program.

In the top part of the spreadsheet each row contains data and calculations for the stars in a particular image. Each row of Columns A through G in this top portion contains statistics relating to the averages and spread of FWHM for all stars in a particular image.

The bottom part of the spreadsheet calculates statistics for the individual stars across all 10 images. Columns A through E of this bottom section calculate statistics across all stars for the average FWHM values of the individual stars calculated across all 10 images.

Results:

The smallest difference between max and min FWHM for stars measured in any of the images was approximately 0.32 pixels (0.2 arc seconds), which amounted to 6.5% of the average FWHM of all stars measured in image NGC7790-004V2H30.fit. The largest difference between min and max FWHM measured in an individual image was approximately 0.7 pixels (0.4 arc seconds), or about 19% of the average FWHM of the stars measured in image NGC7790-001V1H30.fit. Clearly, a measurement aperture with a radius of 0.68 to 1.0x FWHM would capture different amounts of flux from different stars in an image, and the difference would vary significantly from image to image. You might think that a 6.5% variation in FWHM is small and probably not significant, but it is, as illustrated in a calculation below. A 19% difference in FWHM is very large.

The other thing you can see is that the difference in FWHM is not simply due to a misalignment of the telescope or camera, or to field curvature. The changes in FWHM among the stars relative to the average FWHM of the image appear to be generally random. A star's FWHM rank is a clear indication of how a particular star's FWHM changes relative to the average FWHM of all stars in the image. The rank (1 is smallest FWHM, 10 is largest) of any particular star's FWHM changes dramatically over the series of images in an apparently random way.

Star 12 is the one exception, with a relatively consistent and poor rank. Although this star is the most consistent, it still fluctuates two places in rank over the series of images. Star 12 is close to the edge of the chip, but stars 18 and 03 are farther from the center than star 12 and do not have the same consistency. Star 18 is also almost as close to the edge of the chip in some images. Star 29 is almost the identical distance from the center of the chip as star 12 and is in the same quadrant. None of these other stars shows consistency of rank, and star 29 varies from the best FWHM to the second worst over the series in an apparently random manner.

How important is the random variation in FWHM to the selection of aperture? Assume that you set the aperture radius at 1 x FWHM, which amounts to 2.355 sigma in a radially symmetric Gaussian distribution. You capture 98.94% of the flux in your image from the star. If the star you are using for comparison has a 6.5% larger FWHM, you capture only 97.29%. You have introduced a random uncertainty of 1.6%. In comparison, suppose you use a radius of 2 x FWHM (4.71 sigma). Then the random uncertainty from the flux-capture differential is less than 0.001%. Imagine the error with an aperture radius of only 1 x FWHM if the FWHMs had varied by 19%. With a radius of 2 x FWHM, a 19% difference in FWHM adds less than 0.01% to the uncertainty.

Keep in mind that this is a greatly simplified analysis that underestimates the error because, like Mighell, I assumed a radially symmetric Gaussian flux distribution. The error will be worse when the stars are "lumpy."

From the discussion above, you might conclude that I advocate always using the largest aperture you can. But background noise comes into play. It doesn't make sense to expand the aperture beyond the point at which the star signal no longer exceeds the background; beyond that, you add more noise than signal.

Brad Walter, WBY

 

Added note: I just noticed that star 12 has a blended companion, which accounts for the consistently poor FWHM of this star.

WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
Hello Brad


Hello Brad

This is really good data.  I really like what you have done.

I also took things one step further, using your data.  I had Excel calculate the correlation between FWHM and brightness for each star.  The perfect answer for this would be -1 (higher FWHM gives less flux, and lower brightness).  The correlation coefficients turned out to be -.985, -.986, -.993, -.980, -.992, -.945, .964, -.985, -.968 and -.967 for these stars.  Pretty amazing.  It shows that most of the effect is FWHM.
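
The same correlation check can be done in Python with numpy; the FWHM and flux arrays below are placeholders rather than the actual spreadsheet data:

    import numpy as np

    # Correlate one star's FWHM with its measured brightness across a
    # series of images. A strongly negative coefficient means larger FWHM
    # goes with less captured flux.
    fwhm = np.array([3.1, 3.4, 3.0, 3.6, 3.2, 3.5, 3.3, 3.7, 3.1, 3.4])  # pixels
    flux = np.array([980, 890, 1010, 830, 950, 860, 920, 800, 975, 885]) # counts

    r = np.corrcoef(fwhm, flux)[0, 1]
    print(r)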

 

Gary Walker

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture size

Thanks, Gary.

Everyone seems to be aware of the results from the work of Steve Howell and others showing that the best SNR can be achieved with small measurement aperture radii of ~1 x FWHM. What they sometimes overlook is that you have to do growth-curve analysis and correction to use that technique. Very few amateurs do that because, like me, they are doing photometry more or less manually rather than with a fully automated reduction and analysis routine in IRAF, ImageJ, etc. You can't just pick a small constant aperture and apply it across an image, let alone a sequence of images, without doing the growth-curve analysis and correction and expect to get precise or accurate photometry. However, you want to take your images at night and send in the results the next morning, not fiddle around with data analysis and spreadsheets for several days before you send them in. I happen to be a spreadsheet junkie and don't mind spending the time as much as the average photon gatherer. However, with observing campaigns, fast reporting is often important.

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture

Hello Brad

Everyone is aware of Steve Howell's work and his findings; what most folks do not realize is that they apply to a single star on a single image.  As you put it, we are looking at multiple stars on multiple images, and it's a different but related problem.  Unfortunately, most folks take Steve Howell's results and run with them until they get into trouble.

 

I had a couple of questions about your spreadsheet.  How are the peak values determined?  Is that a single pixel value, or the peak of an aperture measurement?  Do the peak values have the background subtracted from them?  Which software was used to extract the FWHMs?

Do you have extracted magnitudes for these 10 images and these 10 stars?

 

Thanks

 

Gary Walker, WGR

Affiliation
American Association of Variable Star Observers (AAVSO)
Aperture

Gary,

Good Questions.

1. Background is subtracted before the peak is determined.

2. The peak value is the peak value of a 2D Gaussian fit to the pixel histogram. 

3. I have a dozen CSV files with the photometry of the stars. Each CSV file is the photometry of all stars on all of the images with a specific aperture. The aperture is stepped among files in half-pixel increments from 3.5 pixels to 9 pixels. With my imaging chain, a pixel is approximately 0.63 arcsec.

I have attached one of the files as a sample. Magnitudes are all "raw" instrumental magnitudes. No star was used as a comparison. If you would like them all I can forward them. They are small files.

My intention is to organize all of the measurements, derive growth curves for individual images, and apply a correction to each star on each image from its highest-SNR measurement. At highest SNR, individual stars may be measured at varying aperture diameters relative to FWHM. That means different stars will be corrected from different points on the growth curve for the image and will require different corrections. That is a bunch of work, but I want to work through this a few times manually rather than applying a "black box" program, because I want a first-hand feel for how well the correction works and the conditions under which it goes wrong. For example, it may not work well if star images are very irregular from tracking errors or wind gusts.

The file names contain the aperture configuration: -06.01016-0 means a 6.0 pixel measurement aperture radius, a 10 pixel inner ring radius for the background measurement annulus, a 16 pixel outer ring radius, and zero ellipticity of the aperture shapes.

Let me know if you want the photometry files. I will send them directly to you. I don't want to clog up the forum with that much detail.

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
SNR vs Magnitude

Gianluca,

For bright stars, meaning your measurements aren't strongly affected by sky background noise or camera readout noise, signal goes up linearly with the number of net photons captured from the star, but the noise (uncertainty) goes up as the square root of the photons, since light emission follows a Poisson distribution. For a Poisson distribution, the standard deviation of the population is the square root of the mean. So in your example, N/SQRT(N) = 100 for magnitude 9, where N is the number of photons you captured from the star. For magnitude 8 your signal to noise is

2.512*N/SQRT(2.512*N) = SQRT(2.512)*100 = 158.5.
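
The same scaling, wrapped in a small Python helper (a sketch; the function name and examples are mine):

    def snr_scaled(snr_ref, dmag):
        # Poisson-limited scaling: the flux ratio is 10^(0.4*dmag), and SNR
        # goes as its square root, i.e. 10^(0.2*dmag) for a star dmag brighter.
        return snr_ref * 10 ** (0.2 * dmag)

    print(snr_scaled(100, 1.0))   # mag 9 -> mag 8: ~158.5
    print(snr_scaled(100, -1.0))  # mag 9 -> mag 10: ~63.1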

The photons you capture from the star are the total of the photons in your measurement aperture minus the "average" value of the sky annulus pixels times the number of pixels in your measurement aperture. So you are counting only the photons from the star, not photons from the star plus the sky background.

I put "average" in quotation marks because most programs exclude from the average process an equal number of the brightest and faintest pixels in the sky annulus so that the annulus doesn't have to be completely free of faint background stars. To avoid introducing significant bias in your estimate of the sky background you have to exclude stars on the faint extreme as well as the brightest ones.

Steve Howell's 1989 PASP paper has an excellent discussion of the signal to noise ratio of stars (point sources). I have attached it for easy access. Steve's book, Handbook of CCD Astronomy, published by Cambridge University Press, is also an outstanding source of information. Steve has a number of other refereed papers on photometry in the literature as well. Just look up Howell, Steve B. in the NASA Astrophysics Data System (ADS) database.

Hope this helps

Brad Walter, WBY

Affiliation
American Association of Variable Star Observers (AAVSO)
SNR vs Magnitude

Gianluca,

I fumble-fingered the year when I created the file name for Howell's paper. The attachment is the 1989 PASP paper. This paper also explains the growth-curve analysis and correction that has to be done if you want to use small apertures. That technique provides better precision and accuracy, particularly for faint stars, but if you don't do the growth-curve analysis and correction, it usually makes things worse. Unless you have an automated routine to do the curve fitting and correction, you end up spending a lot of time applying the growth-curve method to images one at a time. I encourage you to try it a few times to see how it works for you and to get a feel for what is going on in the analysis of your images. However, without an automated photometric analysis chain, you will probably at most make a growth curve of the stars in a typical image of a group of images to figure out what the average FWHM is and how big the spread of FWHM is, and then pick a constant aperture radius in the range of 1.5 to 2.5x FWHM. You might even repeat the photometry at 1.5, 2.0 and 2.5x to see which one gives the least spread in the data. In many software packages that is quickly accomplished with a couple of mouse clicks.