Having multi-filter CCD observations transformed onto the standard system is important: it makes our data comparable across observers and useful to professional astronomers.
In 2014 we developed tools to make this easier to do, and documented the process. Check out the documentation here.
And now M67, an excellent calibration target, is in the perfect position for evening observations.
What are you waiting for? Now is the time to move to the next level!
And there is help available: volunteers are listed below to help you learn each step of the process.
Goal of the campaign:
- Increase the number of observers who are submitting transformed data to WebObs.
24 out of 134 observers submitted transformed observations in Jan-Feb 2015
Let's see if we can improve this metric!
- Use the tools developed this last year for transformation
TG: Transform Generator
TA: Transform Applier version 2.30 or better
Process:
--- Get those M67 images! It's a convenient evening target in the month of March. Use best practice to get them BDF (bias, dark, flat) calibrated.
--- Extract the instrumental mags.
VPHOT is the most convenient way to do this. It automatically identifies the standard stars in the field and TG is prepared to work with its output directly.
Ken Menzies is available to help with questions.
If you use some other tool for extracting the mags, then the issue will be using the proper labels for the stars. AUIDs are always preferred, and are available with the photometry from VSP.
--- Use TG to compute your transform coefficients
Installation and process details are available here.
Gordon Myers is available to help with questions.
You can get your coefficients from TG in a format compatible with TA (INI file) when you save your results.
--- Use TA to apply your transforms to your data
- You prepare your WebObs submission as you already do. The only adjustments to that process:
- Comp and Check stars should be identified by AUID. You may be able to use the VSP labels, but AUIDs are preferred.
- The ChartID in your observation records needs to be a photometry page, not the picture chart.
Installation and process details are available here.
- Get the latest version of TA here.
George Silvis is available to help with questions.
Before you start submitting TA transformed data, you should review the results. TA has a feature built in to help you. If you check "Test TC" the transform process will be applied to your Check Star data. Review the results in the Report tab. If your observations can reliably match the transformed standard magnitude of the Check star, then you can submit your variable star measurements with confidence.
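If you prefer to script the same sanity check outside TA, the idea is simple. A minimal sketch in Python, with invented numbers (the check star's standard magnitude comes from your photometry table):

```python
import statistics

# Transformed check-star magnitudes from several images (invented values)
check_obs = [12.512, 12.498, 12.505, 12.521, 12.509]
check_std = 12.506  # check star's standard V magnitude from the photometry table

offset = statistics.mean(check_obs) - check_std  # systematic offset from standard
scatter = statistics.stdev(check_obs)            # image-to-image scatter

# If |offset| is comfortably within the scatter, the transforms are behaving
print(f"offset = {offset:+.4f} mag, scatter = {scatter:.4f} mag")
```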
--- Tell us how you're doing by posting comments to this thread. What needs to change in this process to make it easier? We'll fix it!
Cheers,
George
SGEO
I am embarking on creating transformation coefficients for my CCD equipment. As I read various documents, including this thread, it is clear this is not a trivial topic. So I decided I wanted to be able to test my results. To my surprise, the errors listed for the M67 standards are tens of millimags. However, I note that Landolt stars are available from VSP and have errors an order of magnitude smaller.
So is a comparison of my transformed magnitudes against the Landolt stars a valid test?
And if the magnitudes of the Landolt stars are so much more accurate than the standard fields like M67, why not use them?
Thanks in advance for forgiving a newbie's questions.
best regards,
Cliff Kotnik
First, the question of why not use Landolt fields instead of standard cluster fields. M67, NGC 7790 and the few other standard cluster fields have many more stars than Landolt fields within a small field of view. That makes the results of the regression much more accurate (precise and true). As to the reported errors for M67 and the other standard clusters, Arne has said a number of times that a big part of the errors shown (and also for APASS comp stars) is systematic rather than random. Since most of the systematic error is simply an additive constant, it doesn't affect transformation coefficients.
Another thing to keep in mind is that the transformation process improves the trueness of your measurements but always increases the analytically determined uncertainty (though probably not the empirically determined uncertainty from multiple measurements), since every calculation you perform adds terms to the error propagation equation. This is true in general, not just when transforming. Consider ensemble photometry compared to single comp star photometry. It can make your measured values considerably closer to the true value and can reduce the empirical uncertainty, but it adds a zero point error term in quadrature with the CCD error equation terms when analytically calculating measurement error. That doesn't mean that ensemble photometry is less precise. Usually (but not always) it means the analytically determined uncertainty from the ensemble photometry more closely reflects the actual uncertainty of your measurement. The analytically determined "error" values provided by most programs more often than not significantly underestimate the real uncertainty of differential photometry measurements.
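To make the quadrature point concrete, here is a minimal sketch (invented numbers) of how an ensemble zero-point term enters the analytically propagated error alongside the usual CCD error equation terms:

```python
import math

sigma_target = 0.008     # CCD error equation estimate for the target (mag)
sigma_comp_mean = 0.004  # uncertainty of the ensemble mean comp magnitude
sigma_zeropoint = 0.010  # scatter of the per-comp zero points (ensemble ZP term)

# Each independent term adds in quadrature to the propagated uncertainty
sigma_total = math.sqrt(sigma_target**2 + sigma_comp_mean**2 + sigma_zeropoint**2)
print(f"propagated uncertainty = {sigma_total:.4f} mag")  # larger than any single term
```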
There are all-sky means of doing transformations using Landolt fields but they are much more difficult and probably less accurate because of the extinction corrections that have to be included.
In some posts Arne mentioned that trueness can be improved, primarily for B and B-V, for normal fields of view less than 1 degree by including second order extinction when calculating the Tbv and Tb-bv transformation coefficients. HOWEVER, that means you must apply 2nd order extinction correction to all instrumental magnitudes to which you apply Tbv and Tb-bv, which causes a large increase in the complexity of your magnitude calculations. Consider for a minute if you are doing B,V,I photometry: you are now using 2nd order extinction corrected instrumental b and v mags for B and B-V, but probably not for V, and not for the instrumental v and i mags used for V-I and I.
If you want to make things easier for yourself, I suggest that you get a copy of Astronomical Photometry by Henden and Kaitchuck. Read sections 1.6-1.9, Chapter 2, Chapter 4 (skip dead time correction unless you are using a PEP) and appendices G and H. If you don't know anything about statistics or error propagation, Chapter 3 is a good introduction, and Appendix K goes into more detail.
If at some point you really get into data analysis then I suggest you get a copy of Data Reduction and Error Analysis by Philip R Bevington and D. Keith Robinson. This text is definitely not easy reading, but in a concise form it contains almost everything you will ever need to know about statistics, statistical distributions, regression, error analysis and error propagation.
Of all the books I have related to our hobby, these two sit on my desktop. The others are on the bookshelf.
If you get into using VStar, Grant Foster's book Analyzing Light Curves: A Practical Guide is the secret decoder ring that makes all of the mysterious processes understandable. It is almost as though VStar was created to turn the book into a computer program.
Brad Walter, WBY
Hello,
IMHO Landolt fields are certainly the way to go if one uses a computerized telescope (locating those areas can be done very quickly) and is located at "mid-latitudes" (my +58.5N is definitely not such a location). There are plenty of stars in Landolt fields, both bright and faint, and typically a fine colour range as well. A newer addition are the standard areas close to +45 declination, though many stars there have not been observed as many times as the equatorial ones.
Observing two fields allows one to determine the extinction coefficient(s) as well. But when using multiple Landolt areas, it's not that convenient to reduce the data using e.g. spreadsheets, and I think it's even impossible when using TG (?). Normally, determination of the coefficients (k', k'', T_X, Z_X, maybe CI^2 terms) from such data involves solving a set of (linear) equations via the (weighted) least squares method, fitting all the parameters simultaneously.
Or even better, using data from multiple nights and setting physical constraints on the algorithm (for the simple logic see e.g. Sterken & Manfroid 1992, "Astronomical Photometry. A Guide", §10.2-§10.5). Further developments are realized nowadays as the Ubercalibration scheme etc. Still, I am not sure if any amateur astronomer has applied those algorithms to their data, or if such ready-made software even exists.
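For anyone curious what that simultaneous fit looks like, here is a minimal sketch with synthetic, unweighted data (a real reduction would add weights, second-order terms, and multiple nights):

```python
import numpy as np

# Synthetic standard-field observations: instrumental v, catalog V and (B-V), airmass X
v  = np.array([11.82, 12.40, 13.05, 12.10, 12.95, 11.64])  # instrumental mags
V  = np.array([11.50, 12.10, 12.70, 11.75, 12.65, 11.35])  # catalog mags
BV = np.array([ 0.35,  0.62,  1.10,  0.48,  0.95,  0.20])  # catalog colours
X  = np.array([ 1.12,  1.12,  1.45,  1.45,  2.05,  2.05])  # airmasses

# Model: v - V = Z + k'*X + c*(B-V); c absorbs the colour transformation term.
# All three parameters are solved in one least-squares pass.
A = np.column_stack([np.ones_like(X), X, BV])
(Z, k1, c), *_ = np.linalg.lstsq(A, v - V, rcond=None)
print(f"zero point {Z:+.3f}, extinction k' {k1:+.3f}, colour term {c:+.3f}")
```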
Of course you can use Landolt fields, and they are in general the most accurately measured except for the primary photometric reference stars. However, with a typical FOV radius of 30 minutes or less, you have many fewer stars in the field to use in your regression. My telescope is only 250mm aperture, so I use stars of magnitude 13.9 and brighter. If I recall correctly, for magnitude 14.0 and brighter the greatest number of stars I found in the Landolt equatorial fields in a 30 arcminute FOV is in SA 98, with about 20. By comparison, there are 46 at magnitude 13.9 or brighter in a 25 arcminute radius FOV in NGC 7790. Of course you can image and regress more than one field per night and either average the resulting transformation coefficients, or correct for extinction and regress them together. Even if you average the transformations, you should probably correct for 2nd order extinction, unless you use data from the different fields taken at almost identical airmasses. To give an idea of the effect, the 2nd order extinction coefficients I derived for my equipment from 4 nights of data are the following:
k" Coefficients
Uncertainties
k"b-v
-0.03842
k"b-v
0.005724
k"b-bv
-0.05055
k"b-bv
0.005126
k"v-bv
-0.01213
k"v-bv
0.003061
k"v-r
-0.02177
k"v-r
0.01093
k"v-vr
-0.02073
k"v-vr
0.006426
k"r-vr
0.001043
k"r-vr
0.006388
k"v-i
-0.01495
k"v-i
0.00473
k"v-vi
-0.01162
k"v-vi
0.003256
k"i-vi
0.003336
k"i-vi
0.003716
k"r-i
-0.00052
k"r-i
0.010398
k"r-ri
0.005335
k"r-ri
0.006908
k"i-ri
0.005856
k"i-ri
0.007803
As with transformation coefficients, you should use the 2nd order extinction coefficient corresponding to the color index you are using as the basis for correction. So if you are using X(b-v) as the basis, you use a different 2nd order extinction coefficient to calculate v0 than if you are using X(v-r), and you end up with a slightly different v0 from different color indexes, which complicates the transformation process. You can also get the data you need for extinction correction by imaging the standard field several times over a ∆ airmass range of at least 1.0. Aside from the much more complicated process of calculating transformations, the extra imaging time to get the extinction data is another reason why most amateurs simply image a standard cluster near airmass 1.0 on several nights and average the resulting transformation coefficients. You absolutely need to do it for more than one night and average the results. I suggest a minimum of 4 nights.
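As an illustration of applying the tabulated coefficients, here is a sketch of the differential form, in which first-order extinction cancels for two stars in the same small field (sign conventions vary between texts, so check against your own reduction):

```python
# Second-order coefficients from the table above (per airmass per mag of colour)
K2 = {
    "b-v": -0.03842, "b-bv": -0.05055, "v-bv": -0.01213,
    "v-r": -0.02177, "v-vr": -0.02073, "r-vr": 0.001043,
    "v-i": -0.01495, "v-vi": -0.01162, "i-vi": 0.003336,
    "r-i": -0.00052, "r-ri": 0.005335, "i-ri": 0.005856,
}

def corrected_dcolor(dcolor, index, airmass):
    """Differential colour corrected for 2nd order extinction:
    d(b-v)0 = d(b-v) * (1 - k''*X). The first-order term cancels
    between two stars in the same small field of view."""
    return dcolor * (1 - K2[index] * airmass)

print(corrected_dcolor(1.20, "b-v", 1.4))  # ~1.265: about a 5% change at X = 1.4
```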
If you like noodling around with spreadsheets or writing programs (R is one handy, free, well-supported language and environment that works well, but Python, C or whatever you happen to know and have available will work well too), you can get much fancier and more refined in doing photometric data corrections and analysis. I would not suggest making your first dive into the transformation process from the 10 meter platform. I would suggest that you start with the low board doing simple transformations, get comfortable, and then add sophistication.
You cannot use TG with anything but M67 and NGC 7790 at present.
Brad Walter, WBY
The FOV sizes should have been stated as diameter not radius.
Brad Walter, WBY
Hi Brad,
I look in on this topic at times, but I think much of it is too complex for the good of what we're all trying to achieve. One thing which intrigues me is that for CCD photometrists, if a standard star is outside a particular FOV it doesn't exist and isn't useful. But in the old PEP days all measures were sequential - it might take an hour or more to measure 20 or so stars in an E Region to determine the three transformations we used in those days (U-B, B-V, V) - so we'd pick a comparison star and relate everything to it, measuring it three or four times during the procedure. Then we'd adjust the intensities in proportion and do the reductions and calculations, which are much simpler than what seems to be described in this series of posts. So what is wrong with applying the same technique to CCD transformations and photometry in general? Maybe it would help CCD photometry approach what was standard accuracy with PEP work.
Primary extinction should not be a factor, and secondary extinction is more reliably measured over a season, but that's not the point I'm making here.
Regards, Stan
I really had a simple question, but appear to have clouded that with a second question.
We are all attempting to provide accurate, scientifically valid data with known estimates of uncertainty. The transformations addressed in this thread add to a system of equipment, software and a workflow. What are the recommended ways to verify this is all working?
My thought was to measure a known source and see if the result matches the catalog value. That is what the check star is for, but the catalog uncertainty for most stars in the AAVSO sequences seems to exceed the accuracy we are trying to reach. So what about doing a check, perhaps periodically - perhaps nightly, with the Landolt star fields where the uncertainty is a few millimags?
This can't be a novel idea. What do other differential photometrists do? What stars do you use? What sort of results do you get?
thanks,
Cliff
Absolutely you can use Landolt fields as a check of the quality of your photometry. It's a common practice to image a Landolt field at several points over the full range of airmasses at which you observe your targets for the night. Among other things, it helps to determine whether you need to include extinction correction to reach the accuracy you are aiming for. The check star is a good way to check uncertainty and to look for variability in your comp stars. As you mentioned, the check star is significantly less good than Landolt standard stars for determining the "trueness" (how close they are to the "true" magnitude) of your measurements.
At your location the Landolt N 50 fields are a welcome addition to the equatorial fields. In case you don't have those, a copy is attached.
Brad Walter, WBY
Thanks!
Cliff
Brad, Tonis,
Thank you very much for your suggestions. I certainly will consider those books as additions to my library. My approach is to get a baseline of knowledge of the science and develop my observational techniques. Then to deepen the science knowledge and improve my technique - sort of iterative.
So at my next opportunity I will image M67 along with the two Landolt fields around RA 7hr. I'll create my coefficients from M67 and then apply them when calculating the standard magnitude from a few of the stars in each Landolt field. Since I will know the standard mags of these Landolt stars, I can test how well my whole system + workflow is operating, including the transformations.
Make sense?
Cliff Kotnik
A couple of notes. Yes, first-order extinction can often be ignored in a CCD field of view, depending on the size of the field and the zenith distance. However, not including it when determining transformation coefficients means that you have potentially added a systematic offset. For many of the DSLR observers, or those that have BSM-like wide fields, extinction becomes important.
Second-order extinction is, well, a second-order correction. The term has the form coef*airmass*color, so assuming a small field, in differential photometry it becomes coef*airmass*(color difference). Say that your target is a Mira, with color ~2.5; the comparison star has color ~0.7. You are observing at 2 airmasses, and the second-order extinction coefficient is -0.03. Then the correction is -0.03 * 2.0 * 1.8 = 0.1 magnitudes. So it cannot be ignored when transforming to the standard system, even if you are using a small field of view, since it is not the airmass difference, but the absolute value of airmass that is the critical factor. -0.03 is a typical second-order coefficient for the (B-V) term; most of the other coefficients are zero because of the small change in extinction across the bandpass of the filter. This is another reason why you typically like to have the color of the comparison star match the color of the target star.
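That arithmetic as a one-liner, for plugging in your own numbers (the coefficient and colors are from the example above):

```python
def second_order_term(k2, airmass, color_target, color_comp):
    # correction = coefficient * airmass * (colour difference)
    return k2 * airmass * (color_target - color_comp)

print(second_order_term(-0.03, 2.0, 2.5, 0.7))  # -0.108 mag, the ~0.1 mag quoted
```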
Arne
Hi Arne:
So now that I've crawled out on a limb and pulled George along, I need some more specific help with the transformation generation w/o extinction correction issue.
I suspect you have read my arguments about (b-v) and (bx-vx) above. Where is the logic failing concerning slope? Let's assume in this discussion that one takes images of M67 near the meridian for a total period of 30 minutes. Average airmass ~1.4 for us folks about 40 N. FOV = 30 minutes for each image.
In this case, why does one have to correct instrumental magnitudes for extinction at all? Examples and discussion would be helpful. I do acknowledge that a change in airmass occurs over the duration of image collection, but I convinced myself that as long as it is not too long, the change in airmass and the change in extinction can be ignored, since differential extinction is small. Again, does not the transform coefficient come from the slope, so that shifts in intercept should not matter?
If you are only addressing DSLR and wide FOV (BSM) systems, I understand, but even there, at what point is the FOV too big to ignore extinction for M67, which is a small object covering a small change in airmass, similar to that for a CCD during imaging?
Last issue: I don't think any of our current tools (VPhot/TG) apply any extinction corrections to instrumental magnitudes. What to do?
Thanks for your help. I can handle any embarrassment so don't hold back. We all want to understand!
Ken
Arne, doesn't this mean that we must always correct for 2nd order extinction in our observations before calculating our transformation coefficients, because we want to use as wide a range of star colors as possible while avoiding flickering? That is probably somewhere in the range of 1.5 < ∆(B-V) < 2.0. It means Tbv and its analogs will be wrong by a factor of 1/(1 - k''X) if we just use (b-v) instead of the (b-v)o adjusted for 2nd order extinction, where k'' is the 2nd order extinction coefficient. The Tv_bv transformation coefficient will be off by a factor of ∆(V-v)/[∆(V-v) - k''∆(b-v)X], and by analogous factors for other color indexes, provided I didn't mess up the algebra. At a k'' of -0.03 that is significant even at airmass 1.0.
Doesn't this mean then that we have to image M67 at two airmasses about 1 airmass or more apart and use the deltas of the color indexes of stars between the two airmasses? If so, what is the best way to go about that? Do I calculate the differences of several stars from the most blue or most red star? Do I use just a few stars, particularly those towards the extremes of the color range, or should I use all the stars that I will use in calculating the transformation coefficients? It seems to me this is similar to running the regression on the plots for the transformation coefficients themselves - the more the better, and the more color spread the better.
You stated that except for the B-V term, the 2nd order extinction coefficients are essentially zero. I need to clarify what you mean by the B-V term. Do you mean the transformation of (b-v)o to B-V, the transformation of vo into V, both, or something else? Does this mean the 2nd order extinction for V-R, V-I and R-I can be ignored, or just that it is a smaller effect?
I know this e-mail has a lot of questions, but I am about to take images and recalculate transformation coefficients, and I want to do it just once, the best way I can. I did my last set on the meridian, and since I am at 30N they were very close to airmass 1.0, but now I think that some coefficients may still have a systematic error of as much as 3% due to 2nd order extinction. Stupid of me for not thinking 2nd order. Once wrong is too much. My inclination is to calculate 2nd order for every single color index and correct for it. If I am taking that additional step, I want to do it using "best practice."
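For what it's worth, that 3% follows directly from the factor above; a quick numeric check (assuming k'' = -0.03 and coefficients derived near airmass 1.0):

```python
k2, X = -0.03, 1.0
factor = 1 / (1 - k2 * X)  # true Tbv = fitted Tbv * factor (see algebra above)
print(f"factor = {factor:.4f}: the uncorrected fit is ~{abs(1 - factor) * 100:.1f}% off")
```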
Thanks for your assistance.
Brad Walter, WBY
Hi Ken,
You are basically right about ignoring extinction during coefficient determination, for the 30 arcmin fields you are describing. Those using wide-field systems or DSLR systems are unlikely to use most of the M67 stars that are familiar to you, as they are too crowded and/or too faint. If you select a wide-field chart, say 2 degrees, you will find that there are additional bright "standard" stars that are part of the basic M67 calibration field. These were calibrated by the BSM systems. Using these, you may need to pay attention to extinction. Likewise, not everyone is at +40N; there are a number of Canadians who are CCD observers; there are people for whom trees/houses get in the way and they have to observe off of the meridian; there are the southern observers who haven't been given a standard field (yet). For all of these, they might get into a regime where extinction is important, so my suggestion is to program for the worst case.
IF first-order extinction is not an issue, and IF you have accounted for second-order extinction, then I agree that having the same extinction on each star would yield the same slope. The devil is in the details. For you and most observers, you won't see significant changes in your coefficients by ignoring extinction - but you should be aware of what assumptions you are making. I'd think that 0.01 mag differences on the stars you are using would be insignificant, if you are using a fair number of stars to calculate your coefficients and if your coefficients are typical for CCD systems with Johnson/Cousins filters. That would mean something like a 0.1 airmass difference across your field is acceptable.
Arne
Recently I added an Ic filter to my B and V filters. First thing: transformation coefficients. I took images of M67 through all of them.
Then I had a lot of fun determining transformation coefficients using the new tools and capabilities (yes, I really mean it since I am comparing to my old way with spread-sheets...): TG and VPhot, and TA.
Thanks to the guys/gals who created these time savers - you are my heroes!! If there is anybody left who has not used these tools yet - jump right on them, you will never look back!
Apparently the coefficients are determined from pairs of filters, here in my case from B and V and also V and I. That means, if I transform my observations in, e.g. VPhot, I get a transformed V-value from observations in B and V, but also a transformed V-value from observations in V and I.
These two transformed V-values are not necessarily the same, they are slightly different. As an example I have attached a pdf-file with the last 200 days of SU Vir, my own observation (AHM) as blue crosses. You can see the two slightly different values of the transformed V-values on the right side.
How do we deal with these multiple transformed values (here V, but the same challenge will occur with any other filter, too, depending on how the transforms were determined)?
Do we average, do we pick the one with the smallest or the largest error to avoid multiple reports? Or are the multiple reports for one filter actually justified since the transforms were determined by different means?
Hmm, suggestions/discussions will be appreciated! (I apologize if this has already been discussed, but I could not find anything about it...)
Cheers,
Helmar (AHM)
Greetings Helmar,
Good question.
If you have BVI data to transform, you should use a tool that can do that. TA can handle 3 filter transforms, VPhot cannot.
Data submitted to AID that is transformed is supposed to be tied together by a common GROUP designation. This is important if researchers are going to understand the data they view.
Doing as you suggest, a BV and then a VI transformation, means you end up with two values for V, as you pointed out. But you are not supposed to submit the same data twice; WebObs should prevent this. And averaging the V's wrecks the GROUP trail.
I've explored these issues with Matt and Arne. I suggested that transform groups could be expanded by reusing observations. They said don't do it; you end up with mud. Matt was also firm that if you submit an observation as transformed, you should not submit it as un-transformed too.
This has all to do with the design of AID. It should hold observations. If they are transformed, they should be in identifiable groups. And no duplicates.
This is my understanding of how it all should work.
George
Thanks George,
For your valuable input (online and offline)!
As you pointed out, TA can deal with three-filter transforms, and thus I reprocessed my former two-filter transformations that I had processed with VPhot.
TA version 2.36 worked very well (except sometimes I had to manually copy/paste the AUIDs to make it work).
Just for the record (not sure if I should mention that here or at all...): I improved/fixed my B-V-Ic observation reports in AID of
R Com, R Vir, RU Vir, S Hya, S Leo, SU Vir, T CVn, U Vir, V Leo, W Leo
Thanks again,
Helmar (AHM)
I just ran a photometry transform test using data on AM Her from 18 April 2015 UT. I shot this object concurrently with two telescopes about 5 meters apart. One was a Celestron 11 with an ST-8XE equipped with an old glass Schuler B filter, an Astrodon enhanced green filter (used as V) and a glass Ir filter purchased 15 years ago from an optical company no longer in business. The 2nd telescope, my main one right now, is a 13" classical Cassegrain with an ST-10XME using Astrodon V and B dichroics and Rc & Ic Schuler glass filters. I used identical exposures and binning (15 seconds, 2x2 bin) while shooting through both systems. An earlier set (05 Apr 2015) of images of M67 was made on each scope to derive transform coefficients for each system.
I wanted to see how close their transformed V-magnitudes came to one another. After acquiring, calibrating, and reducing raw magnitudes in Maxim V5.23 and running them through TG5.6 and TA2.37, I got the following data:
http://www.astroimage.info/files/C11_vs_CC13_Transform Comparison2a.xls
The average difference between the C11/CC13 temporally matched data was 0.033 mag. This compares with the C-11 average V-mag error of +/-0.032 and the CC13 V-mag error of +/-0.019. Please note that the CC13 is both faster than the C-11 (F/7.5 vs F/10.1) and has no corrective lenses, which add dispersion to the C-11 optical train.
I'm going to run 1 or 2 more nights of concurrent data on AM Her between the two systems, but I believe this test validates the use of transforming data to get closer to "real" V-mags, as opposed to sending un-transformed v-magnitudes to the AAVSO.
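One sanity check on those numbers: if the two systems' errors are independent, the expected scatter of the matched differences is the quadrature sum of the per-scope errors, which is close to the 0.033 observed:

```python
import math

sigma_c11, sigma_cc13 = 0.032, 0.019  # per-scope V-mag errors quoted above
expected = math.sqrt(sigma_c11**2 + sigma_cc13**2)
print(f"expected scatter of differences ~ {expected:.3f} mag (observed 0.033)")
```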
James
James, maybe the lightcurve merging method used by Munari and his coworkers could be interesting to you as well. See e.g. http://arxiv.org/abs/1209.4692, section 3.
Best wishes,
Tõnis
Hi Brad,
This is correct: linear solutions for the transformation coefficients will be in error due to the exclusion of second-order extinction. The effect is as if you had non-linear coefficients - the fit bends a little from a linear solution. However, since you calculate your coefficients without 2nd order extinction, and then apply them without 2nd order extinction, the slight slope change in your coefficient calculation is mostly removed when you apply the coefficients. That is, the systematic error in the slope calculation is offset by the systematic error of calculating magnitudes without 2nd order extinction. So the net effect is really small, and probably below the level of your accuracy.
IF you have the opportunity to determine 2nd order extinction, I recommend that you do it. The easiest way is to use a red-blue star pair and follow it from the zenith to about airmass 3, but only on pristine nights. I'd do it for 3 nights, and average the values. Then use these coefficients until you change your camera or your filters. A good table of red-blue pairs was extracted from the Tycho catalog by Richard Miles several years ago, and is in one of the appendices for the BAA CCD manual.
My usual way of transforming Johnson/Cousins data is to use the traditional V + color formulation, so solving for V, (B-V), (V-Rc), (Rc-Ic), (V-Ic). For that set, 2nd order extinction is only important for the calculation of (B-V). This is because the blue wavelengths suffer much more extinction than the red wavelengths, and stars change their slopes much more in the blue than the red. So by the time you get to Rc or Ic, the response change across the filter bandwidth is very minimal. That said, I include a 2nd order coefficient for each filter and each color in my calculations to do a complete, proper formulation, and then set most of those coefficients to zero. That way I never have to remember to add 2nd order when it is important, such as for high airmass observations.
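As a sketch of that bookkeeping (the coefficient values below are placeholders, not Arne's; only the b/(b-v) second-order term is non-zero by default, per the paragraph above):

```python
# One k'' per (filter, colour index) pair; all but the b/(b-v) entry set to zero
K2 = {("b", "b-v"): -0.03, ("v", "b-v"): 0.0, ("v", "v-r"): 0.0,
      ("r", "r-i"): 0.0, ("i", "v-i"): 0.0}

def outside_atmosphere(m_inst, filt, index, color_inst, airmass, k1):
    """m0 = m - k'X - k''*X*(colour). The full equation is always applied,
    so high-airmass cases are never silently mishandled."""
    return m_inst - k1 * airmass - K2[(filt, index)] * airmass * color_inst

b0 = outside_atmosphere(12.80, "b", "b-v", 1.1, 2.3, k1=0.25)
print(f"b0 = {b0:.3f}")
```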
Arne
Thanks, Arne.
That makes me feel a bit less of a schlemiel.
One small point that I want to clarify to make sure I understand. Doesn't second order also come into play in calculating v0, because b-v is used in the calculation
v01 - v02 = v1 - v2 + k''v[(b-v)2 - (b-v)1]
or is k''v so small that it is one of those effectively equal to zero?
I think I am going to calculate them all once, just to see that they all come out as expected, with most effectively zero. That will give me some confidence that the k''b-v value is good.
Brad Walter
Isn't it more like the following:
Thanks for your comments.
First of all, I was interrupted when sending the message and left out the X. What I meant to write is the following, for stars 1 and 2 with very different color indexes in the same field of view, as Arne mentioned in his last e-mail:
v01 - v02 = v1 - v2 - k''v[(b-v)2 - (b-v)1]X. Rewriting to get Y = mX + B gives
∆v = v1 - v2 = k''v[(b-v)2 - (b-v)1]X + ∆v0, since stars 1 and 2 are in the same field of view. Then, to get the slope,
k''v = [∆vA1 - ∆vA2] / [∆(b-v)A1XA1 - ∆(b-v)A2XA2]
where the A1 and A2 subscripts stand for airmass 1 and airmass 2 of two image FOVs within a range covering airmass 1 to 2 (and more than that if you are doing photometry over a larger airmass range than 0 < z < 60°).
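To make the slope formula concrete, here is a sketch with invented numbers for a red-blue pair imaged near airmass 1 and again near airmass 2:

```python
# Differential v and (b-v) of a red-blue pair at two airmasses (invented values)
dv_A1, dv_A2 = 0.512, 0.520  # v1 - v2 near X ~ 1 and near X ~ 2
dbv_A1 = dbv_A2 = 1.80       # (b-v)2 - (b-v)1; the colour itself doesn't change
X_A1, X_A2 = 1.02, 2.01

k2v = (dv_A1 - dv_A2) / (dbv_A1 * X_A1 - dbv_A2 * X_A2)
print(f"k''v = {k2v:+.4f}")  # small, as expected for the v filter
```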
There are quite a number of relatively close pairs of stars with very different color indexes available from several sources. You can even use stars in M67 and NGC 7790.
Technically you are correct that X1 and X2 of the two stars are not the same for all-sky photometry, or if you have a wide field. But for those of us doing differential photometry with a field of view significantly less than 1 degree (mine is about 30 arcmin), X1 and X2 can just be taken as X, the midpoint between the two stars with large color difference you are observing to obtain second order extinction. That means the first order extinction drops out, because X1 and X2 just equal X and net to zero. If you are using a big format camera on a short focal length, wide-field telescope and the stars are far apart in the field, you might have to use different X values for the two stars in the same field, but that is not typical for amateur photometry. If the stars are half my field of view apart, 15 arcminutes, with the highest altitude star at airmass 2, the difference over 15 arcminutes is only 0.016 airmasses, 0.8% of airmass 2. However, in looking at the resulting full expression for k'', the effect of a 0.8% difference in airmass at the second, high-airmass observing point isn't obvious. The denominator becomes
∆(b-v)A1XA1 - X1A2[(b-v)1A2 - 1.008(b-v)2A2],
where the A1 subscript denotes the common airmass near 1.0 for stars 1 and 2, the subscripts 1A2 and 2A2 denote the different stars in the same FOV at slightly different airmasses near airmass 2.0, and the 1.008 coefficient for (b-v)2A2 accounts for the extra 0.8% airmass for star 2, 15 arcminutes beyond airmass 2.0.
The effect on the denominator depends on the relative sizes of the terms, and I don't have enough experience with 2nd order extinction to know typical values for them. There are people who do know, and two of them wrote a pretty good book that includes an extensive discussion of extinction. Notwithstanding my inability to assess the effect on the formula, according to Astronomical Photometry by Henden and Kaitchuck, the effect of slightly different airmass in the same FOV is not significant for second order extinction under the field of view conditions I outlined for differential photometry.
Does this make more sense now?
I disagree with one thing in your formula. For extinction you don't use the transformed magnitudes (B-V)1 and (B-V)2; you use the instrumental magnitudes (b-v)1 and (b-v)2, because extinction corrections are applied before transformation, and B-V for any given non-varying star should be constant, not vary with airmass. I suspect this is just a typo in your e-mail.
Chapter 4, section 4c of Astronomical Photometry also gave me the answer to my question as to whether k"v is significantly different from zero. It states:
"From Experience and theory, the second-order codfficient for v is essentially negligible."
So I guess unless you have extremely high precision data, you punt on k''v. I will calculate it once anyway, just as a check.
Brad Walter, WBY
Hello Brad,
I just thought that you had accidentally made a mistake with that equation. I agree with what you said in your following message, except for the following quoted part:
[quote=WBY]
I disagree with one thing in your formula. For extinction you don't use the transformed magnitudes (B-V)1 and (B-V)2; you use the instrumental magnitudes (b-v)1 and (b-v)2, because extinction corrections are applied before transformation, and B-V for any given non-varying star should be constant, not vary with airmass. I suspect this is just a typo in your e-mail.
[/quote]
If one is going to measure k'', (s)he tries to find coefficients for the (typically linear) function k = k(color). I don't know why that problematic process (very small coefficients) should be made even more problematic - there are plenty of (primary or secondary) standard stars in the sky, with a decent range of colors. Why not use them for this process? Of course one could use any colorful pairs of stars, but in that case the colors have to be determined as well, most probably in an iterative way. I have done that once, but I really didn't feel comfortable about it. :-D
I agree that for the V filter, k'' is very small. I have determined transformation coefficients mainly using an "all-sky" approach - many primary standards at very different airmasses (~1 .. ~2.7), and only on really perfect nights. My tests have shown that when adding or removing k''V from the solution, it is not possible to detect any statistically significant difference. In some sources (e.g. the "IAPPP guide for smaller observatories", if I remember correctly) k''V is postulated to be 0. IMHO, to be able to measure it, true millimagnitude photometry of (many) hundreds of stars would be needed. And in such a case, with wideband photometry, the "true" (?) transformation equations could even be somewhat non-linear.
Best wishes,
Tõnis
Arne:
Could you elaborate on what constitutes a "pristine" night? Excellent transparency? Highly stable (not necessarily excellent) transparency? Excellent seeing?
Observationally, how would a PEP observer decide that the night had been pristine?
Tom
Hi Tom,
The usual requirement for a pristine night is stable transparency (with cloudless conditions) and stable seeing, so that your photometry processes with minimal error. For photoelectric work, I usually followed an extinction star from the zenith to ~20 degrees above the horizon during the course of a night; if I had time, I'd follow one star for the first half of the night and another star for the second half. Often I'd choose one star that was setting and one star that was rising, to cover both sides of the sky. You can look at the smoothness of the resultant curve to decide whether the night was good or not.
I don't think anyone has come up with a formal definition in hard quantities, as it depends on the observing needs of the researcher. For most continental sites, you will get a couple of dozen of these nights in the course of a year. In AZ/NM, I saw around 100 such nights per year. The problem, of course, is that those really excellent nights are highly prized for both calibration and science!
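For those who want to quantify that smoothness check, a sketch with synthetic data: fit m = m0 + k'X to the extinction-star run and inspect the residuals.

```python
import numpy as np

X = np.array([1.05, 1.20, 1.45, 1.80, 2.30, 2.90])        # airmass through the night
m = np.array([11.62, 11.66, 11.72, 11.81, 11.94, 12.09])  # instrumental magnitudes

k1, m0 = np.polyfit(X, m, 1)  # slope is the first-order extinction coefficient
resid = m - (m0 + k1 * X)
print(f"k' = {k1:.3f} mag/airmass, residual rms = {resid.std():.4f} mag")
# A pristine night gives a clean straight line; bumps in the residuals betray
# passing cirrus or changing transparency.
```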
Arne
Hi Arne and Tom,
Using an SSP-4 J and H photometer on Betelgeuse for the last 12 months (aka "noob"), I have come to the conclusion that I cannot judge any particular night other than the very obvious (cloud, wind, etc., and even then I must be cautious) until I see the numbers. When observing at longer than visible wavelengths, I have been surprised at the results given what the visual conditions are. I have concluded that I am kidding myself if I think I can pick the near-IR conditions. Oddly enough, none of us can see too well in the near IR.
In my case I have adopted a simple pair of on-the-night quality indicators:
On a recent "pristine" night for Betelgeuse I got a J-H of 1.4 with low s.d., and then a J-H of 0.9-ish with low s.d. I got the larger J-H at lower airmass and the 0.9 a couple of hours later at higher airmass. I do multiple sequences (SjShCjChVjVhVjVhCjChSjShVjVhVjVhCjChSjSh), particularly where I get less-than-believable results. It's a precision vs accuracy thing, I guess. The following, less "pristine" night I got excellent results. It would be interesting when that happens again to measure stars over more of the sky per Arne's post and see what the resulting curve looks like.
In summary, I now have a go even if conditions look marginal visually (22 degree moon halo near Orion included), watch the numbers, and if they're consistent I carry on, average and submit the results. If not, I go find something else to do. But at least I've stopped kidding myself that I can pick the near-IR conditions by eye.
- Carl.
Hello
I am having trouble running TA 2.34 on my BL Boo data from last night. I get a
"BL Boo,2457134.6041782405,15.0124,0.0145,B,NO,STD,140,15.1988,141,15.4757,1.177,na,14742BRV,na
# Duplicate label in chart. No CREFMAG available. Possibly bad chart reference Duplicate label in chart. No KREFMAG available. Possibly bad chart reference.
There is no duplicate that I can find. Just 133, 140, 141, and 146. I tried to get a standards chart to see if there were other hidden dups - no standard chart. Internet is working to get to this Forum, so that's not it. This version of TA has worked many times before.
Is there some problem with the online access to these ref values? Any suggestions? I had the same problem about a week ago, and then it started working again.
Gary
The chart you are using, 14742BRV, does have duplicates:
000-BKP-273 | 14:06:08.99 | +28:32:58.1 | 133 | | | | 14.371 |
000-BKP-270 | 14:05:16.97 | +28:25:46.5 | 140 | | | | 14.558 |
000-BKP-271 | 14:05:04.32 | +28:23:18 | 140 | | | | 14.730 |
000-BKP-272 | 14:05:24.82 | +28:20:50.7 | 141 | | | | 14.755
000-BKP-269 | 14:05:15.09 | +28:26:52.1 | 141 | | | | 14.887
000-BKP-268 | 14:05:44.53 | +28:31:13.4 | 146 | | | | 15.590
Be careful to review your chart for duplicates. Some suggestions:
- request a smaller field of view for the chart; an E chart works.
- use AUIDs
George
Or could you not use the AUID instead of label? Then duplicate labels don't matter.
Brad Walter, WBY
Hello George and All
Thanks for looking at this. Not buying it! There is a bug somewhere! I was already using an "F" chart.
I have an "F" chart in front of me, a PT 0.15417 degrees around BL Boo. It has 4 entries on it: 133, 140, 141 and 146. I printed it 4/20 and I have a hard copy. No dupes.
If I go to VSP this morning and ask for BL Boo and an F chart, I get 4 comps on the screen: the 133, 140, 141 and 146. It gives me 14766CR as the chart field.
Now if I ask VSP for photometry chart 14766CR, guess what: I get 6 comps, and the 140 and 141 are dupes.
If I now go back and ask for a PT for an "F" chart for BL Boo, without any chart ref, I get 4 comps, no dupes.
Is there something I don't understand?
Gary
There are some subtleties to the VSP query mechanism.
Note that a chartid points to either a chart or a photometry table, not both. If you are looking at a chart and click "show photometry", the photometry page will have a new chartid.
http://www.aavso.org/cgi-bin/vsp.pl?chartid=14766CR is a photometry table.
Something odd is going on; I'll start a conversation with Will.
Meanwhile, you can test your chartid with the URL listed above. This is what TA uses.
George
Hello George
I am well aware of the difference between a chart id and a PT id. Been there, done that one.
I am talking PT ids here only. "This sequence is called 147BRV" is the one I am talking about. It comes at the top of the Photometry Table, not in the upper right-hand corner of the chart. Much confusion about that in the past. Glad to hear that you understand that there is a problem.
One should certainly get the same photometry table for these two seemingly nearly identical inquiries.
Gary
When I enter 147BRV into VSP, asking for a PT or a chart in the quick chart section at the top of the page, and entering 147BRV into the specific chart ID box in the Advanced Options section, I get the message
Sorry, we cannot find a Chart ID of 147BRV in our records. If you feel this is an error, please e-mail aavso@aavso.org. Thanks.
Is it possible that the ID is missing a digit or a character? It looks short for a recent ID.
However, if I enter BL Boo in the object designation box in the quick chart section and specify 147BRV in the specific chart ID box in the Advanced Options section, I get a 3.0 degree finder chart (sorry, I misread the scale at first). See the attached 147BRV and a newly generated 3.0 degree finder chart.
There definitely seems to be a bug here.
I just tried plotting a table for BL Boo with chart size set to F and got 133 / 000-BKP-273, 140 / 000-BKP-270, 141 / 000-BKP-269 and 146 / 000-BKP-268.
Then I asked for an F CCD finder chart, 14771ZB, with the same 4 stars. Then I tried a table again and got the same 4 stars on table 14771AAZ.
I then specified chart 14771ZB in the VSP Advanced Options and got the same chart. One thing I noticed is that when I asked for a specific chart, I did not get a selection at the top of the page to print a photometry table for that chart; but if I go back and request a new finder chart for BL Boo with the same characteristics as 14771ZB, I get a new chart ID and hot-link text asking me if I want a photometry table. Just thought this was curious; it isn't a problem unless you have been using a finder chart and want the table that went with it.
Brad Walter, WBY
Hello Brad
Sorry, I missed a couple of digits. It should read "14742BVR", not 147BVR.
Gary
Gary,
I thought the ID might be missing a couple of digits. I think, however, I may have uncovered some other weirdness in the way VSP searches for old charts. Why should I get a "not found" if I just ask for an old chart ID, but if I enter the chart ID and a star name I get one? It may have something to do with before and after the conversion to on-line chart generation. The chart that came up for 147BVR looks like it is one of the old hand-created ones.
It is also interesting that you can't tell from the chart when it was created. It appears that the header information is up to date. I can tell when I printed it, and, at least to the year, when I downloaded it, from the copyright statement at the bottom. Here is the problem: if the header information (period, coordinates, magnitude range, etc.) is up to date but the chart is old, then how do we know what is current and what isn't? I looked at the guide for VSP, and its language implies that the chart is as it was originally created.1 Yet the VSX revision history shows that period, magnitude range, epoch, rise duration, and spectral type were updated in 2011, and the header information on the reprint of the old chart and the newly created chart ARE THE SAME. This is misleading, and the guide information needs to be updated so a user knows it isn't entirely the same chart; only the finder plot and labels are the same.
By the way, why do we use such an illegible font for the chart scale, the VSP URL and the copyright statement? You have to download in high resolution for them to be legible. The chart scale is easily misread at the default download resolution. I guess I am going to start downloading at 300 dpi to avoid misreading the chart scale.
Brad Walter, WBY
1. From the VSP Help Guide subsection "DO YOU HAVE A CHART ID": "If you would like to replot a lost chart, just type in the chart id here and the chart will be replicated using all the settings you used to plot it the first time." Bold face is added for emphasis.
Hi all, I'm just popping in to say that I'm looking into the problems you've all mentioned with VSP. As soon as I have a solution I'll let everyone know.
Hello
I have been experimenting with various strategies for deleting standard stars from the transformation computation using TG5.6. There appear to be systematic errors in some stars, and they are deleted for almost every coefficient. There is a similar pattern in a couple of plots that other observers have posted. We are seeing X-x values between 0.5 and 2.0 mags.
Here is a list of those stars. I would like to hear if other observers are finding that these stars are often deleted to reduce the errors and get a better fit. These suspect stars are: AUID 000-BLG-922, 926, 915, 909, 894, 920, 893, 907, 912, 919, 959, 943, and 941.
Why are these X-x values so large? Wouldn't we expect them all to be less than 0.1 mags? I am using VPhot for the instrumental mags and am using stars with SNR greater than 150 before making any of these edits, so SNR is not the complete answer. Aperture placement and sky annulus overlap in VPhot may be a cause. Are there others? Do you find yourself deleting these same stars? These effects seem to happen on multiple nights. This seems to support the contention that only 20-30 stars in M67 are really worthy of use for determining transformation coefficients.
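One reproducible alternative to deleting points by eye is iterative sigma clipping about the fitted line; a minimal sketch (the 3-sigma cut and the helper name are arbitrary choices, not TG settings):

```python
import numpy as np

def sigma_clip_fit(color, resid, nsigma=3.0, max_iter=5):
    """Fit resid = a*color + b, iteratively dropping points more than
    nsigma standard deviations from the current fit."""
    keep = np.ones(len(color), dtype=bool)
    a = b = 0.0
    for _ in range(max_iter):
        a, b = np.polyfit(color[keep], resid[keep], 1)
        dev = resid - (a * color + b)
        new_keep = np.abs(dev) < nsigma * dev[keep].std()
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return a, b, keep  # slope a is the coefficient; ~keep marks rejected stars
```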
Gary
WGR
Hi Gary,
These differences are far outside the normal realm, and when you are talking about deleting a substantial fraction of the possible stars, that gets me suspicious. Is this with TG or a spreadsheet? Can you give me a table of M67 stars with AUID, RA/DEC, Vstd, V-v, for both the 20-30 stars you are using, as well as the 13 you are rejecting? I can take a look at that photometry and a field image from NOFS to see why you might be getting such incorrect values.
When doing coefficient determination, I never delete more than 10% of my stars, and often no more than one or two. Otherwise, something is not working right.
Arne
Gary,
If you share your images on VPHOT I'll take a look. - id MGW
Gordon
Hello Gordon
I shared 4 images with you on VPhot. Each of these 4 images is a stack of approximately 25 images, chosen from the 35 taken that night.
Gary
Hello Arne
Thanks for the comments. I am using TG to delete these stars. I have attached the initial plot from TG (Figure 1); it has 85 usable stars - I think this was for stars with SNR 100. The pattern I see here has been seen in most of my plots, and I have seen it on this forum. I have also attached, from TG, a Figure 4, which has 65 stars remaining. This reduces the error and it also increases the R^2 value. TA uses the error value in its computation of the total error and reports it to AID when you submit data. So it matters. I have attached the V file output from VPhot; it does not give AUIDs, but uses a version of the label that shows up on star charts. I don't have RA and Dec for these. I only have the AUIDs of the deleted stars mentioned.
Hello Arne
There is a post on page 1 of this forum, posted March 8, 2015 at 2:57pm by our observer nmi, who has very similar TG plots to the ones that I posted above. I will try to post them below, but am not sure that will work. It does work.
Anyone else care to post their Tb_bv or Tv_bv plots from TG5.6? I am anxious to see what others look like.
Gary
[Attached TG plots: figure_Ti_ri.png, figure_Ti_vi.png, figure_Tr_ri.png, figure_Tr_vi.png, figure_Tr_vr.png, figure_Tri.png, figure_Tv_vi.png, figure_Tv_vr.png, figure_Tvi.png, figure_Tvr.png]
Gary,
I'll process your images later today. I took a quick look at my data and attached plots of Tb_bv, Tv_bv, and Tv_vr. The points are tighter than yours. I haven't gone back to individually analyze the cause of the outliers.
Gordon
Gary,
Using your images I created the Tv-bv transform plot. It shows one star off scale high - 000-BLG-922. (Plot attached)
When running VPHOT I used an aperture of 4, an annulus inner ring of 5, and a width of 3 on your images. In the VPHOT V image download, this star (000-BLG-922) is labeled 126 (file attached). Notice the instrumental magnitude jump compared to adjacent stars with similar standard field magnitudes (shown to the right).
VPHOT slid over and primarily measured a very close brighter star - 000-BLG-893. In the attached image from your scope they lie almost exactly on top of each other. I also attached the standard field AAVSO chart with a 7.5 arc minute field. The two stars are in the center, labeled 110 and 126. You can see how close they are - about 9 arc seconds.
I'm pretty sure the problems you highlighted on other transforms have a similar cause.
Hello Gordon and Arne
Thanks for tracking this bug down. I agree that this is the cause of the errant point. 000-BLG-922 is errant on almost all of my plots. I noticed that it was also the most deviant on plots by other observers. Since there are numerous standards with this color, why not eliminate stars like 922 from the AAVSO standard sequence, as it seems to cause problems? Perhaps we should also look at stars like 879, 891, 892, 893, 896, 899, 901, 920, 922, 907, 912, 915, 926, 953, 943, 941 and 919, and make sure that VPhot works flawlessly with them. There are almost 100 stars available in these images, so taking 20 out of the mix does not seem like a problem. This has to be done with regard to color also.
Perhaps we need to keep the full standard sequence for those who need it, but a trimmed sequence that works with the VPhot tool would help members obtain transformations without glitches. Those who want to do the full IRAF reduction of the full sequence would be welcome to do so - but it's not for everyone.
Gary
Hello
I just did an interesting experiment. I took some later/better images of M67, pulled them into VPhot and did a PT on the V image using the AAVSO standard sequence. The results were FWHM of 2.3 to 2.5 for most of the stars. That's at a sea level observatory, and is a pretty good night (it would be poor for a mountain top). VPhot designated 86 stars to measure. I examined them closely. I was using Rap = 5 px, Rinner = 10 px and a sky width of 5 px (10 is the default, I think). I examined the resulting apertures closely under the zoom feature. I had lots and lots of overlaps.
I then tightened the radius to 3 px (2 FWHM, which is the lower limit of the recommendation), reduced the Rinner to 5 px, which is 3 FWHM, and the width to 3 px. I refreshed the plot and it looked somewhat better. I decided to count how many of the 86 designated stars were doubles in the aperture, or had significant stars in the sky annulus. (Does VPhot have a rejection algorithm for the sky, or does it just average? Will it ignore a star in the sky annulus? Does it have to be a faint star?)
So anyway, assuming that there is no sky-star rejection, and it's a little hard to keep track, I could clearly reject 23 of the measurements as being potentially contaminated. My setup is 3000mm focal length, 24 micron pixels, and 1.65 arc seconds per pixel, which is close to being undersampled on this night, but is fine for 3 arc second FWHM nights, which is most of what we get.
I would be curious what others are getting. This number of stars rejected is consistent with what I have to do to get a clean plot in TG with a low error and a good R squared. If I were using a source extractor that deals with crowded fields, like Arne, I might not have to reject so many stars.
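That counting can be automated if you have positions for the sequence stars. A sketch (small-angle approximation; the function and its inputs are hypothetical, fed from whatever your photometry table provides):

```python
import math

def contaminated(stars, r_outer_arcsec):
    """Flag stars with a neighbour closer than the outer sky-annulus radius.

    stars: list of (label, ra_deg, dec_deg) tuples.
    """
    flagged = set()
    for i, (lab1, ra1, dec1) in enumerate(stars):
        for lab2, ra2, dec2 in stars[i + 1:]:
            dra = (ra1 - ra2) * math.cos(math.radians(dec1)) * 3600.0
            ddec = (dec1 - dec2) * 3600.0
            if math.hypot(dra, ddec) < r_outer_arcsec:
                flagged.update((lab1, lab2))
    return flagged

# With the first setup above (outer annulus radius 15 px at 1.65 arcsec/px,
# about 25 arcsec), the 9 arcsec pair Gordon found would be flagged.
```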
Gary
Hi Gary:
I usually have about 60 comps left after I delete comps that are too close to each other on my images (obvious plus my subjective call).
As I mentioned in my talk at the Fall Meeting, I found it best to tighten up my apertures (I use the VPhot graphs of SNR to help with this decision, and the apertures are the same as I normally use with this scope), to remove the standard comps that overlap or show obvious "issues" on the image, and to save that subset of comps as my own M67 sequence in my sequence list. I do not go crazy removing stars from the sky aperture; VPhot does deal with them, although I don't remember exactly what the protocol is. Subsequently, I never have to start over again with the selection process. I always use my saved M67 sequence.
The final selection clearly depends on the scope/CCD system, and its specific plate scale and FOV, so each observer will have a slightly different number of good comps. Sixty is a more than reasonable number of comps with a wide range of color. I get a good/small error (~0.02) and r^2 of 0.97+ for the color transform coeffs (1).
BTW, I usually see slightly better results (errors) with NGC 7790 than with M67, but the coeffs were similar (within error) for both clusters from Spring to Fall.
Ken
Hello Ken
Thanks for that info. 60 comps is about what I am left with (86 - 23 = 63, close enough). Glad to hear that VPhot rejects sky defects/stars. We need to hear more about that. Is it good enough for this task?
You mentioned: " I get a good/small error (~0.02) and r^2 of 0.97+ for the color transform coeffs (1)."
I assume you are talking about the Txy coefs when you say color. I have no problem getting 0.99 or 1.0 for those, with errors smaller than 0.005 mags.
The real trick is to get small errors for the Tx_xy coefs, which are the ones that TA appears to use. These coefs are usually less than 0.1, but their errors are not usually 1% of that. I have often seen 0.05 mags on these if I am not careful - and TA propagates that right into my estimates. I bust my hump to do 0.005, and the TA'd error on my estimates can be 0.050 - ugh.
Gary
Note from Gary:
[quote=WGR]
I assume you are talking about the Txy coefs when you say color. I have no problem getting 0.99 or 1.0 for those, with errors smaller than 0.005 mags.
The real trick is to get small errors for the Tx_xy coefs, which are the ones that TA appears to use. These coefs are usually less than 0.1, but their errors are not usually 1% of that. I have often seen 0.05 mags on these if I am not careful - and TA propagates that right into my estimates. I bust my hump to do 0.005, and the TA'd error on my estimates can be 0.050 - ugh.
[/quote]
I would like to see a discussion about the relative merits of using the magnitude coefficients (of the form Tx_yz, used in the AAVSO recommended transform scheme) versus the color coefficients (of the form Txy, used in TA's alternate transform scheme).
As Gary pointed out, for most amateur setups you get much better error values for the color coefficients than for the magnitude coefficients.
Thoughts?
George