Results with different software

Affiliation
American Association of Variable Star Observers (AAVSO)
Sat, 02/22/2014 - 22:12

Hi,

I would appreciate your opinions on the (somewhat different) photometry results I obtained by measuring the same image (attached) with different software: MaxIm DL, AIP4WIN and VPhot.

Note that the image was fully calibrated (biases, darks and flats) with MaxIm DL.

My target is T CrB. The sky was bright with the waning gibbous moon (hidden).

For equipment I used a 106 mm refractor (Tak FSQ106ED) and an ST-402 CCD with a V filter. The results were not transformed.

The AAVSO comparisons I used were: 99, 105 and 112. The check star was 102.

I set the aperture to a star radius of 3 pixels, a gap width of 2, and an annulus width of 3.

I tried to be as careful as possible, but it's possible I made a mistake.

John

 

**************************************************************************************

#TYPE=Extended
#OBSCODE=ONJ
#SOFTWARE=MaxIm DL Version 5.24
#DELIM=,
#DATE=JD
#OBSTYPE=CCD
#NAME,DATE,MAG,MAGERR,FILTER,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AIRMASS,GROUP,CHART,NOTES
T CrB,2456708.9281481481,10.243,0.010,V,NO,STD,ENSEMBLE,na,102,10.148,na,NA,13222ASO,na

#TYPE=EXTENDED
#OBSCODE=ONJ
#OBSLON=-70.95000
#OBSLAT=42.63333
#DELIM= ;
#DATE=JD
#OBSTYPE=CCD
#SOFTWARE=Magnitude Measuring Tool in AIP4Win v. 2.4.8
#NAME;DATE;VMAG;VERR;FILT;TRANS;MTYPE;CNAME;CMAG;KNAME;KMAG;AMASS;GROUP;CHART;NOTES
T CRB;2456708.92815;10.279;0.009;V;NO;STD;102;11.900;99;11.596;1.0486;na;13222ASO;na

#TYPE=EXTENDED
#OBSCODE=ONJ
#SOFTWARE=VPhot 3.1
#DELIM=,
#DATE=JD
#OBSTYPE=CCD
#NAME,DATE,MAG,MERR,FILT,TRANS,MTYPE,CNAME,CMAG,KNAME,KMAG,AMASS,GROUP,CHART,NOTES
T CrB,2456708.92815,10.238,0.060,V,NO,ABS,ENSEMBLE,na,102,10.141,na,na,13226CO,na

*********************************************************************************

 

Affiliation
American Association of Variable Star Observers (AAVSO)
3 software results

Hello John

You are using a short focus refractor (530 mm) and medium-size pixels (9 microns).  This results in a very undersampled image: 3.5 arc seconds per pixel.  You should be at 1 to 1.5 arc seconds per pixel for 3 arc second seeing, which is typical.  Your setup is great for imaging, but a compromise for photometry.
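For reference, here is a minimal Python sketch of that plate-scale and sampling arithmetic (9 micron pixels and 530 mm focal length as above; the 3 arc second seeing is an assumed typical value):

# Minimal sketch: image scale and sampling for the setup described above
# (9 micron pixels, 530 mm focal length; 3 arcsec seeing is assumed).

def plate_scale(pixel_size_um, focal_length_mm):
    # 206.265 converts (microns / mm) to arcseconds per pixel
    return 206.265 * pixel_size_um / focal_length_mm

def fwhm_in_pixels(seeing_arcsec, scale_arcsec_per_px):
    # FWHM of a seeing-limited star image, in pixels
    return seeing_arcsec / scale_arcsec_per_px

scale = plate_scale(9.0, 530.0)        # ~3.5 arcsec/pixel
fwhm = fwhm_in_pixels(3.0, scale)      # ~0.86 pixel: badly undersampled
print(f"scale = {scale:.2f} arcsec/px, FWHM = {fwhm:.2f} px")
# Proper sampling (2 or more pixels per FWHM) needs roughly 1 to 1.5 arcsec/px here.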

Good practice suggests using an aperture radius of at least 2 x the FWHM, and 3 x is better.  Your aperture is clipping the wings of the star profile.  Better results may be obtained with a bigger aperture.

I would suggest using an extender (2x) if you have one, or a longer focal length telescope, or smaller pixels.  The focal extender is the most cost-effective solution.  Don't worry about the system being too slow at f/10, as exposure times for star images are not a function of the f-ratio the way they are for nebulosity; the telescope aperture is the only thing that matters.  Of course, this will reduce your FOV, but as long as you have about 12-15 arc minutes, that should be fine for most photometry (PT).

I suspect that the undersampling and the small aperture combine to expose differences in the partial-pixel interpolation of these 3 software packages.  I would like to see the results from all 3 with an aperture radius of 5 or 6 pixels instead of 3 pixels.

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Not too bad

My opinion is that it's a pretty good set of 3 measurements.

Mean = 10.253 V

StdDev = 0.022

All fall well within the 95% confidence interval about the mean.
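
Those figures can be reproduced from the three reported V magnitudes with a quick Python check (the standard deviation is the n-1 sample value):

# Reproduce the summary statistics from the three reported V magnitudes.
import statistics

mags = [10.243, 10.279, 10.238]                    # MaxIm DL, AIP4WIN, VPhot
print(f"mean  = {statistics.mean(mags):.3f} V")    # 10.253
print(f"stdev = {statistics.stdev(mags):.3f}")     # 0.022 (sample, n-1)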

Affiliation
American Association of Variable Star Observers (AAVSO)
Not too bad

Hello Michael

The problem is that these are not 3 independent measurements; it is the data reduction of a single image.  Arne is suggesting that we should have all observers agree to 0.01 magnitude, and here is a case where 3 software packages/data reduction procedures result in 0.022 mag one sigma.

The undersampling can be dealt with, as most of the BSMs are also undersampled, so I am sure Arne has a solution.

Can't wait to see the results with a bigger aperture!

 

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Some thoughts

[quote=WGR]

The problem is that these are not 3 independent measurements; it is the data reduction of a single image.  Arne is suggesting that we should have all observers agree to 0.01 magnitude, and here is a case where 3 software packages/data reduction procedures result in 0.022 mag one sigma.

[/quote]

First, Arne mentioned that MaxIm DL used two worse comp stars, but I think he meant the (non-ensemble) AIP4Win? The MaxIm DL result is the closest of the 3 to the "mean".

Second, whether one can call the 3 separate software reductions of the same image 3 independent measurements depends a lot on how the packages handle the reduction process. If it's a fairly complex multi-step process with significantly different algorithms, then they could act as 3 "pseudo-random measurements" of that image data. If they work fairly similarly, then the differences should be mainly due to the systematic error of using different comp star magnitudes, and those differences could be calculated directly from the magnitudes; the mean and standard deviation really don't mean that much.

Third, given that a single observer used the same image and 3 software packages and got results that differ by well over 0.01 magnitude, and given that in real life observers all use different software, different detectors, different telescopes, different atmospheric seeing conditions, different darks/flats, different comp stars, etc., is it reasonable to hope that all observers could ever "agree to 0.01 magnitude"?

Mike LMK

Affiliation
American Association of Variable Star Observers (AAVSO)
Results for Different Software (pixel comparison)

Hi,

Following Gary's suggestion I have experimented with different apertures.

Please see the attached spreadsheet for the results for aperture radii from 3 to 6 pixels.

I redid all of the original 3-pixel aperture measurements. VPhot gave a different value (highlighted in the spreadsheet) for 3 pixels. I either made a mistake, or else had the check star annulus radius not 'synced' with the general star annulus (I am not sure why this is done separately).

By the way, I have an extender for the Tak 106 which gives a focal length of 850 mm, but with the small chip it either excludes the comparisons or puts them very near the edge of the field.

Many thanks for all the suggestions.

John

Affiliation
American Association of Variable Star Observers (AAVSO)
Different Software Results

John,

While VPhot and MaxIm DL show an ensemble solution (and both are within 0.005 of each other for the target, which is probably acceptable), your AIP4WIN solution is NOT AN ENSEMBLE SOLUTION!

Ensemble solutions, obviously, will give you a different answer than a single comp star solution does (this is not the place to argue the merits of either approach).

To generate an ensemble with AIP4WIN you have to check the last option on the right-hand side of the Report tab (Treat Comp Stars as an Ensemble).

AIP4WIN also permits you, with the MMT, to check your aperture profile and curve of growth when determining what your aperture (inner ring) should be.  Roughly, scale the inner and outer annuli as 1.5x the aperture radius and 2.5x the aperture radius, respectively.
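
As a small illustration of that rule of thumb (the 4-pixel aperture radius below is only an example value, not one taken from this thread):

# Sketch of the annulus rule of thumb above: scale the sky-annulus radii
# from the measurement aperture radius (all values in pixels).

def sky_annulus(aperture_radius_px):
    inner = 1.5 * aperture_radius_px
    outer = 2.5 * aperture_radius_px
    return inner, outer

print(sky_annulus(4.0))   # a 4-pixel aperture -> (6.0, 10.0) pixel sky annulus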

Now, as Gary Walker pointed out, your setup is fundamentally flawed in that you are badly undersampled (3.5 arcsec/pixel), and undersampling often results in spurious data (oversampling is OK); I doubt you can do photometry at f/5 with that CCD.  Even using your extender to achieve f/8, your image scale is still marginal regarding undersampling, at ~2.2 arcsec/pixel: if your seeing is better than ~4.4 arcsec you will still be undersampled.  The only solution with that specific scope, as far as I can figure out, would be to get a CCD with smaller pixels.

I am going to attempt to send you directly a short article on seeing and on using the CCD calculator to make your own sampling and FOV size calculations.

Please remember that there is no such thing as a dumb question.

Per Ardua Ad Astra,

Tim R Crawford, CTX

 

Affiliation
American Association of Variable Star Observers (AAVSO)
Extender for FSQ

Hello John

I don't agree that using the 1.6x extender is not an option.  You are currently getting a 30 arc minute field, and that is about 2x what most PT is done with.

If you use the 1.6x extender, you would have an 850 mm focal length, an 18 arc minute field, and 1.4 arc seconds per pixel, all of which are quite acceptable.  You would then use an "F" chart for your comp stars.  Most of the PT done out there is done with an "F" or a "G" chart.

This would be a nice way to go.  It also means you do not have to invest more money at this time; you have all the parts to try this.  I assume you have the BVI filter wheel for the ST-402?  It's different from the RGB filter wheel.

Thanks for running the software with bigger apertures.  I guess once the data is undersampled, there is no getting it back. 

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Extender for FSQ

Hello All

I made a mistake on the ferry coming over to Nantucket this afternoon, and the numbers in my previous post were in error.  Thanks to Tim for pointing this out to me.  The FSQ with the extender still gives 2.2 arc seconds per pixel, which is still a little undersampled for most applications.  The field of view is also bigger than the 18 arc minutes that I calculated.

I agree that the best solution is probably to defocus.  Some of the best PT on the planet is done this way.

 

Gary

Affiliation
American Association of Variable Star Observers (AAVSO)
Re: Results with different software

John,

If your images are undersampled, the simplest solution is to slightly defocus your telescope to get proper sampling.  For example, if your in-focus images have an FWHM of 2 pixels, then defocus slightly until the FWHM is at least 3 pixels.  Slightly more than 3 pixels is OK too, since T CrB is a moderately bright star and can tolerate some defocus without an SNR penalty.  I use this technique with my 80mm f/7 refractor and get good results.

Bob

Affiliation
American Association of Variable Star Observers (AAVSO)
software results

I like Bob's suggestion of defocus.  That keeps the field of view large, yet gets sufficient pixels in each star image to do good photometry.

The problem with the AIP4WIN result is that it uses the 99/102 star pair, and those magnitudes come from TASS.  The errors for those two stars are quite large, so by using them rather than the same stars that formed the ensemble for the other two methods, you are likely to get an offset between the results.  The photometry extraction may be good, but the results will differ.  So the only correct way to compare software packages is to use the same exact methods (single star differential vs. ensemble) and the same exact comparison stars and magnitudes.
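
To make the single-comp versus ensemble distinction concrete, here is a minimal Python sketch; the instrumental and comparison-star magnitudes are invented for illustration and are not the values from John's image:

# Minimal sketch, with invented numbers, of single-comp differential photometry
# versus a simple unweighted ensemble.  Differences in the comp-star catalogue
# magnitudes shift the single-comp answer relative to the ensemble answer.

def single_comp(target_inst, comp_inst, comp_catalog):
    # classic differential photometry against one comparison star
    return comp_catalog + (target_inst - comp_inst)

def ensemble(target_inst, comps):
    # unweighted ensemble: average the single-comp estimates over all comps
    estimates = [single_comp(target_inst, ci, cc) for ci, cc in comps]
    return sum(estimates) / len(estimates)

target_inst = -8.50                                      # hypothetical instrumental magnitude
comps = [(-9.05, 9.92), (-8.45, 10.50), (-8.10, 10.83)]  # hypothetical (instrumental, catalogue) pairs

print(f"single comp: {single_comp(target_inst, *comps[0]):.3f}")  # 10.470
print(f"ensemble:    {ensemble(target_inst, comps):.3f}")         # 10.450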

I would not worry about the differences that you found at this time, and could not say that one package was better than another based on this result.  It was good that you thought about the process and tried comparing the packages!

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
Software Comparison Results revised

Yes, Tim, I missed the 'ensemble' radio button in AIP4WIN. I attach a corrected version of my results spreadsheet. The exact same comparisons (i.e. 99, 105 and 112) and 102 check star were used in all three programs. The magnitudes now seem a little closer, but I wonder why the errors are different.

The measured FWHM of the target star (T CrB) from AIP4WIN and VPhot is given as 1.7 pixels. By the way, MaxIm DL gives 1.5 (all to the nearest 0.1 pixel).

So taking the FWHM as 1.7 pixels (presumably because of the seeing, as I was careful to get a good focus) and 3.5 arcsec/pixel equates to 5.95 arc seconds. I was using the rule of thumb of aperture radius = 2 x FWHM = 3.4 pixels; however, I rounded the result down rather than up.
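
In code form, that arithmetic looks like this (a sketch only, using the FWHM and image scale quoted above):

# Sketch of the arithmetic above: measured FWHM in pixels -> seeing in arcsec,
# and the rule-of-thumb aperture radius of 2 x FWHM, rounded up rather than down.
import math

fwhm_px = 1.7                 # FWHM reported by AIP4WIN and VPhot
scale = 3.5                   # arcsec per pixel

seeing_arcsec = fwhm_px * scale              # ~5.95 arcsec
aperture_radius_px = math.ceil(2 * fwhm_px)  # 2 x 1.7 = 3.4 -> 4 pixels
print(f"{seeing_arcsec:.2f} arcsec seeing, aperture radius {aperture_radius_px} px")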

I have the BVI photometric filters in the ST-402 CCD camera. I am worried about not being able to fit in the comparison stars, for T CrB in particular, if I use the 1.6x extender.

Thank you for all the suggestions and discussion.

John

Affiliation
American Association of Variable Star Observers (AAVSO)
comparison of multiple observers

Hi Mike,

First, if the same images are used for the different software extractions, there is no way that they can be considered independent measures.  Differences, assuming the same settings such as aperture sizes and comparison star magnitudes, can only be considered systematic uncertainties, not random ones.

If you want to see how well multiple observers compare, look at some of the recent ANS group papers on novae and supernovae (Ulisse Munari is always one of the authors); there may need to be small adjustments between observers, but the results are easily within the range I mention.  Another example, for members of the AAVSO, is to use the BSM Epoch Photometry Database on stars that are seen with both BSM_NM and BSM_South, such as XZ Cet or IO Aqr, where it is hard to separate the results from the two systems even though they use very different filters.  The trick in all multiple-observer photometry is fully understanding the process: transforming, avoiding pathological stars (or performing careful analysis of these with lots of overlap), and observing in a consistent manner.  The AAVSO is continuing to work towards simple solutions to these complex problems.

Arne

Affiliation
American Association of Variable Star Observers (AAVSO)
I by no means know as much as

I by no means know as much as Arne, but I want to reinforce what he has said.  This issue is why, when I want to look for < 0.1 mag fluctuations in SNe light curves, I ask people to send me their images.  The systems all have offsets from each other for various reasons (different pixel sizes, reference stars in fields of different size, etc.).  But it is hard to know which answer is "absolutely right," so having the images allows me to move everything to the same relative calibration.  This is what most novae and supernovae groups do.  They take your data, compute the offset, and apply it (whether they tell you or not).  This is why most self-reported brightnesses by non-pros diverge from what gets published in the literature by pros using their data after more careful analysis.

In the end, all photometry is relative.  As a CCD observer, keep in mind that just because your statistical random errors are precise to +/- 0.01 mag does not mean you are that accurate.  Systematic offsets are a big concern when comparing between telescopes, cameras, and filters, and not all of them can be characterized and communicated well.  This is something that people using the IAD should be aware of and do their best to live with.

I like to be an optimist about this.  All scientific measurements have to face the difference between precision and accuracy.  It is just more common in astronomy not to face it, because when you are happy with a factor-of-2 answer your precision is broader than your expected accuracy.  But it is by no means a new problem.  In the lab, physicists and chemists face this all the time, where they are dominated by systematic errors.  Our precision in photometry has reached the level where the difference between precision and accuracy is becoming more of an issue.  I am optimistic that we can begin to tackle it in the IAD.