Having multi-filter CCD observations transformed onto the standard system is important: it makes our data comparable across observers and useful to professional astronomers.
In 2014 we developed tools, and documentation for them, to make this easier to do. Check out the documentation here.
And now M67, an excellent calibration target, is in the perfect position for evening observations.
What are you waiting for? Now is the time to move to the next level!
And there is help available: volunteers are listed below to help you learn each step of the process.
Goal of the campaign:
- Increase the number of observers who are submitting transformed data to WebObs.
24 out of 134 observers submitted transformed observations in Jan-Feb 2015
Let's see if we can improve this metric!
- Use the tools developed this last year for transformation
TG: Transform Generator
TA: Transform Applier version 2.30 or better
--- Get those M67 images! It's a convenient evening target in the month of March. Use best practice to get them BDF (bias, dark, flat) calibrated.
--- Extract the instrumental mags.
VPHOT is the most convenient way to do this. It automatically identifies the standard stars in the field and TG is prepared to work with its output directly.
Ken Menzies is available to help with questions.
If you use some other tool for extracting the mags, the issue will be using the proper labels for the stars. AUIDs are always preferred and are available
with the photometry from VSP.
--- Use TG to compute your transform coefficients
Installation and process details are available here.
Gordon Myers is available to help with questions.
You can get your coefficients from TG in a format compatible with TA (INI file) when you save your results.
--- Use TA to apply your transform to your data
- Prepare your WebObs submission as you already do. The only adjustments to that process are:
- Comp and Check stars should be identified by AUID. You may be able to use the VSP labels, but AUIDs are preferred.
- The ChartID in your observation records needs to be a photometry page, not the picture chart.
Installation and process details are available here.
- Get the latest version of TA here.
George Silvis is available to help with questions.
Before you start submitting TA transformed data, you should review the results. TA has a feature built in to help you. If you check "Test TC" the transform process will be applied to your Check Star data. Review the results in the Report tab. If your observations can reliably match the transformed standard magnitude of the Check star, then you can submit your variable star measurements with confidence.
--- Tell us how you're doing by posting comments to this thread. What needs to be changed in this process to make it easier? We'll fix it!
Thank you to George, Ken and Gordon for this initiative. I can’t stress enough how important transforming CCD data is for a complete and self-consistent light curve! As a light curve consists of the collective work of various observers with different instruments (that have different responses and filters), all data need to be on the same system in order to be science-ready. It may look like an extra step in the data reduction process, but it is a very important step nonetheless!
I would like to extend the Transformation Team’s invitation to all to join this effort. M67 is a fun target to observe, with a rich suite of variable and standard stars. Let us know of your progress and of your experience with the Transform Generator (TG). And thank you in advance for all your hard work!
Best wishes - clear skies,
How many images, and how many filters are ideal? I've taken B and V images, and started the process, but have BVRI filters available, so I assume I should get images in each. Should I stack multiple images for this process, say a minimum of 3 of each? Is Lunar interference an issue?
Also, should I use all the standard stars, or select only those stars that don't show up in their neighbor's sky annulus?
Thank you kindly,
You might consider 4 replicate images of each filter in the following order:
2I, 2R, 2V, 2B, 2B, 2V, 2R, 2I to minimize extinction issues.
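As a quick sanity check on why this symmetric ordering helps, here is a minimal sketch (the exposure times are hypothetical and equally spaced, purely for illustration): with a symmetric order, each filter's mean observation time, and hence its mean airmass, comes out the same, so first-order differential extinction between filters cancels.

```python
# Symmetric exposure order: 2I, 2R, 2V, 2B, 2B, 2V, 2R, 2I.
# Times are hypothetical and equally spaced, purely for illustration.
order = ["I", "I", "R", "R", "V", "V", "B", "B",
         "B", "B", "V", "V", "R", "R", "I", "I"]
times = list(range(len(order)))

# Mean exposure time per filter; with the symmetric order they all coincide,
# so every filter samples (to first order) the same mean airmass.
mean_time = {f: sum(t for t, o in zip(times, order) if o == f) / order.count(f)
             for f in set(order)}
print(mean_time)  # every filter centers on the same mid-time (7.5 here)
```

A straight BBVVRRII run would instead put each filter at a different mean airmass.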
Stack all 4 of each filter in VPhot; it improves SNR. It's a question of averaging before or after measurement, but the two are not quite the same.
Select a subset of comps that work best for your setup and save your sequence in VPhot.
Collect images as close to the meridian as possible and in the best sky you can get.
Hope this helps, Ken
Ken, I have a question: How important is it to select your stars for a standard field destined to be used in TG?
TG is able to show you the data points collected on a graph where you can weed out the bad points very easily. Sure, taking out the stars with apertures overlapping is good practice, but in this case we will be looking to refine what will clearly be a linear relationship. The graph-and-eye combination is pretty powerful.
It's a balance between the time spent refining and saving the star list and trusting that you will be able to catch the bad data in the end.
Hi George and Participants in the Transformation Campaign:
Question - How important is it to select your stars for a standard field destined to be used in TG?
As you noted, TG allows one to easily remove (just a mouse click) bad comps visually during calculation of the slope of the linear best fit curve. I do not disagree with that observation.
My rationale for creating a subset of standard comps for a specific scope and ccd system results from your other comment: "Sure, taking out the stars with apertures overlapping is good practice".
I think it is best practice for every photometrist to regularly inspect their images, select the best comps for their images (e.g., remove comps that overlap or are at the very edge of the FOV), and save the sequence for future use. In my case, it meant I used only about 90 out of 100 comps.
Therefore, is it necessary? No. Is it important? I'll leave that to each observer. I might offer that each observer try this experiment and compare the results. In reality, this experiment is much more important for regular target photometry than for transformation with TG. So is it important, y....?, n.....? ;-)
BUT, this does take more time, and I don't like to do unnecessary things either. So for other readers: George and I have a slight difference of opinion about this, BUT it works well either way, so take your choice.
Our hope is that you generate your transformation coefficients and use them to transform all of your photometry in the future.
You should collect data for all BVRI. Might as well while you're there!
I don't believe stacking is necessary. The TG program will average your data if you load it with multiple sets of, say, V data. I would recommend a minimum of 3 images in each filter.
The goal is images of the standard field as good as you can make them. So, if you regularly process by stacking, don't change that process.
Yeah, the moon is right in the way. Best to wait a couple days so you get good images.
First of all I have to mention that working with TG using the VPhot capabilities is amazingly easy and pleasant. One can review all the saved data sets and manipulate them, removing all obvious outliers.
I have just finished the TG calculations. Last night I took 6 series of the M67 standard star field with my BVIc and g’r’i’ filter sets. I agree with Ken’s remark that the “best practice for every photometrist is to regularly inspect their images and save the sequence for future use”. I have done the same, because there were lots of stars with overlapping annuli, and it gives me the creeps when I see something like that. In my case I finished with about 45 stars (see the screenshot from VPhot).
The TG results shown in the second screenshot are obtained in three different ways:
1st column: The 6 separate series were calculated by TG and the results were averaged.
2nd column: The images from 1 to 3 and from 4 to 6 (for every BVIc filter) were stacked by VPhot, the resulting 2 stacked series were calculated by TG, and the results were averaged.
3rd column: All 6 images for every BVIc filter were stacked by VPhot and the resulting series was calculated by TG.
The resulting transformation coefficients are similar although not equal. Any suggestions which way is better will help me to decide which coefficients to use in my transformations.
As I have both BVIc and g’r’i’ filter sets, for now only the BVIc transformation calculations are finished. The reason is that M67 has standard magnitudes only for the UBVRcIc filters. Although it is not recommended, I will prepare my own sequence with g’r’i’ and calculate the corresponding transformation coefficients. Until the data from APASS have been revised, it is the only opportunity to do that. Lately I image only with the g’r’i’ (Astrodon) filter sets, and in my experience they give better results for my imaging setup.
Thanks for sharing that great experiment.
I'm going to answer your question ("Any suggestions which way is better will help me to decide which coefficients to use in my transformations.") with a question. (Note: Some people are profoundly annoyed by this teaching technique, if so I apologize!) ;-)
What information on your TG coeff table allows you to compare the three coeffs (e.g., Tbv)? If you know and I think you do, what do they say about the difference between the three coeffs? So which is better? What could you do with the three values?
This is very interesting. In particular, why are the 1st and 3rd different? By over 1%. I'm not sure averaging everything together is the right answer.
Could you share your VPHOT output files from the above experiments? Either in forum or directly by email to SGEO@GASilvis.net. I'd like to understand what's going on. As a programmer, I want to know why expected results don't happen. Thanks,
Also, for your second filter set, I can work with you to sort out a strategy so that TA will be able to handle the transformation process. Either a code option or a workaround.
I should have looked at the error information provided in the comparison. All three results are effectively the same, as they are within each other's errors. But the stacking results have the better error estimates.
So I'm changing my vote: stacking the images in VPhot is a better strategy than leaving the averaging to TG. And it's easiest to have VPhot stack them all rather than in groups. I say use #3.
Averaging 1-3 would be wrong since the same data is in each option.
Thanks George, Ken and Velimir -- I understand this better now. I'll shoot a series with each filter, load them into VPhot and stack them, and go through the process with TG, eliminating outliers for the best fit.
I've already saved a sequence of non-overlapping standard stars, and I'll bet my selections are almost exactly the same as Velimir's.
Thanks again to all,
Hold on a second, guys. I think more investigation and thought is needed before jumping on the "less error means better" bandwagon. That is true only if the uncertainties are calculated the same way, which isn't true in this case. The two methods don't calculate the standard error the same way. In one case you are calculating the standard error of something derived from the average of 6 individual regressions, presumably using the normal error propagation formulae. In the other you have the standard error of one regression. Further, the uncertainty of the measurements in that single regression amounts to the standard error of a mean rather than the standard deviation of a sample.
How does VPhot calculate the standard deviation of a combined image? Since it is making only one measurement per star on the combined image, it is probably using the CCD error equation on the stacked image. That is tantamount to calculating the standard error of the mean of the six images rather than the standard deviation of the six-image sample. Therefore, just because you are comparing apples to oranges, you expect the standard error of the stacked image to be smaller by a factor of 1/SQRT(6), and that percolates through the whole calculation process. So you would expect the uncertainty of the stacked images to be about 40% of the uncertainty derived from the individual image sets, simply because they are based on different statistical measures. (Note: this is a very rough estimate that assumes the sigmas of any star are the same in all 6 images, which isn't true, of course.) From the AAVSO CHOICE course Uncertainty About Uncertainty, you know that you want to base your error estimates on the standard deviation of the sample, NOT the standard error of the mean.
Also, you have airmass effects to consider. In what order were the images taken? How do the times of the image center points in different colors compare for the single sequences and the stacked images? There may be differences in systematic error, and that won't be reflected in the stochastic uncertainty.
Another thing you are neglecting is the R^2 values. These are the ratios of the variance of the dependent variable explained by the regression to the total variance of the dependent variable. Higher R^2 is good for the coefficients that should be close to 1, and lower R^2 is better for the coefficients that should be close to zero (if the slope T is exactly zero, R^2 = 0). The Tfilter_colorindex coefficients (e.g. Tv_bv), which are supposed to be close to zero, all have much lower R^2 for the TG-averaged data. The Tcolorindex coefficients (e.g. Tbv) are not as high for the TG-averaged data as for the stacked data, but the differences are not as dramatic as for the Tfilter_colorindex transformations.
So based on very incomplete information I would have to disagree, and say that I would go with the TG averaged data rather than the stacked data.
I think one needs to see all of the relevant information about the data: the sigmas of individual star measurements vs. stacked measurements, the center times of the individual and stacked images, and how the underlying uncertainties that are the inputs to the calculation of the uncertainty of the slope are determined; and then work through the math (not calculating everything out, but understanding the formulas and how the measured uncertainties carry through the calculations). I think you will find that the biggest difference between the standard errors of the coefficients from the stacked images vs. the averaged ones is primarily due to one being calculated from the standard error of the mean and the other from the standard deviation of the sample. You want the latter. There may also be differences in systematic errors, which won't show up in the uncertainties but affect the accuracy of the transformations. Looking at the time center points and the corresponding airmasses of the images and stacked images will help in analyzing potential systematic errors.
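The sample-vs-mean distinction here can be illustrated numerically. This is a hypothetical sketch with fabricated magnitudes, not real photometry: the standard error of the mean of six measurements is smaller than the sample standard deviation by exactly 1/sqrt(6), which is the factor a stacked image's CCD-equation error effectively picks up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 6 repeat measurements of one star's instrumental magnitude.
mags = rng.normal(loc=12.500, scale=0.020, size=6)

sample_sd = np.std(mags, ddof=1)       # scatter of the 6-image sample
sem = sample_sd / np.sqrt(len(mags))   # standard error of their mean

# The SEM is smaller by exactly 1/sqrt(6) ~ 0.41, the "about 40%" figure:
# an error estimate based on it looks better without the data being better.
print(f"sample sd = {sample_sd:.4f}, sem = {sem:.4f}, ratio = {sem/sample_sd:.3f}")
```

The photometry is the same either way; only the reported statistic differs.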
I don't know if I have explained this clearly enough. Am I making sense to anyone?
Brad Walter, WBY
So based on very incomplete information I would have to disagree, and say that I would go with the TG averaged data rather than the stacked data.
Brad, what about transformation data from multiple nights, all having different extinction etc? IMHO in that case you can't do much except average your coefficients. Well, to cover such scenarios, there are much more sophisticated algorithms, too...
But when averaging coefficients - how to treat individual values (typically they have also uncertainties) correctly?
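For what it's worth, one standard statistical approach to combining coefficient values that each carry their own uncertainty is an inverse-variance weighted mean. This is not a built-in TG/TA feature, and the numbers below are made up:

```python
import numpy as np

# Hypothetical Tbv determinations from three nights, each with an uncertainty.
coeffs = np.array([1.032, 1.018, 1.041])
sigmas = np.array([0.010, 0.015, 0.012])

# Weight each night by 1/sigma^2: better-determined nights count for more.
weights = 1.0 / sigmas**2
mean = np.sum(weights * coeffs) / np.sum(weights)
err = np.sqrt(1.0 / np.sum(weights))  # uncertainty of the weighted mean

print(f"Tbv = {mean:.4f} +/- {err:.4f}")
```

If the nightly values scatter by much more than their individual error bars suggest, the sample standard deviation of the nightly values is the more honest uncertainty to quote.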
I agree completely. Airmass and sky conditions also vary in a single night over a string of 24 exposures. How much depends on how close you are to the meridian, the length of the exposures, the order in which you take them, and the quality of the sky that night.
I was not arguing against the TG averaging. I was arguing that just because the averaging gives bigger error values doesn't mean the averaged coefficients are worse. The regression error estimation for the stacked images uses the sigma of the 6 images combined, which is smaller by a factor of 1/SQRT(6) than the sigmas of individual images. The TG averaging process uses the SNRs of stars calculated from single-image sigmas, which is what you want to use for error estimation. You should always get a significantly lower error from the stacked images simply because you are using a different statistic as the underlying measurement uncertainty.
When you take 6 measurements of something and report the average, which is better to report as the uncertainty: the standard deviation calculated empirically from the 6 measurements, or the CCD-equation error of the 6 images stacked? You want to submit the former, because it is the standard deviation of your sample; the latter is analogous to reporting the standard error of the mean. As long as you do your stacking correctly, the photometry is equally good with both methods 99%+ of the time, because in CCD photometry you are just manipulating numbers either in a program or in a spreadsheet (one does it before you measure net counts, the other after, so stacking may be better at very low SNR). The only difference is which statistic is used to estimate the uncertainty. I suggest that the TG averaging method reports the preferred (and more conservative) uncertainty.
The lesser point I was making is that unless you know the mid-exposure times of the images and the corresponding airmasses, you don't really know the best way to group measurements. For example, were the images taken BVRI BVRI BVRI etc., IRVBBVRI IRVBBVRI etc., all in one big continuous sequence IIIRRRVVVBBBBBBVVVRRRIII, or in some other sequence? Were there gaps between sets to refocus, etc.? If you don't know that, you don't know how you should group your images in different filters to minimize systematic error. If the sky quality is good and your set of 6 images spans the meridian, it probably doesn't matter. If you started at airmass 1.3 on the west side of the meridian, it probably does matter somewhat.
Hi Ken and George,
The frames were taken under strong moonlight; the Moon was only 21 degrees away from M67. I reckon this session an exercise only. Before including the transformation coefficients in my telescope setup in the VPhot Admin section, I will repeat everything carefully on a photometric night.
Honestly speaking, I will choose the 3rd result, as the 6-frames-per-filter stacked results obviously have a better S/N ratio. What is more, doing it this way we get only one result, which spares us the indecision over which result to choose (see my previous post).
Seriously, depending on the imaging system and the target, one has to find a balance in how many images to stack. In my case, because in my VPhot sequence I use mainly stars with magnitudes between 11 and 14, I can adjust the exposure time to get an optimal S/N ratio on every image, and I will probably finish with 3 frames per filter. Then I will stack them and calculate transformation coefficients with TG. And just to pass the time, I will add some uncertainty by including a second 3-frames-per-filter session, and probably I will wonder again which one is better.
George, attached for your investigations you will find ZIP files with VPhot data for all the 6x6 series, the 2x3 stacked series, and the 1x6 stacked series. You have to use SAVE LINK AS and, after downloading, change the file extension from TXT to ZIP; otherwise it was impossible to attach the ZIP to the post.
Just a couple notes for PVEA
"Before including the transformation coefficients in my telescope setup in the VPhot Admin section, I will repeat everything carefully on a photometric night."
- VPhot does transforms, but only 2-color. For 3-color you will need TA.
"And just to pass the time, I will add some uncertainty by including a second 3-frames-per-filter session, and probably I will wonder again which one is better"
- That's good practice. Since it's a new set of data, you can combine it with your first set using TG's facility.
That’s me again, because I just realized that TG does not work with Sloan-type filters. I have manually created VPhot sequences for my g’r’i’ Sloan-type filters, but TG refuses to calculate the transformation coefficients. I just saw your note from the previous post, so let’s go, let’s do it. I am ready to collaborate with you.
It looks like VPhot is set up for handling the Sloan filters and transform coefficients. But these will be 2-color transforms.
For TA to do it, so you can do 3-color transforms:
First pass, I would suggest a workaround where we map the Sloan filters into VRI, run TA, and then switch back. The process would look like:
- In TA, create an INI file with the Sloan coefficients and their errors loaded in by hand. SG data maps to V, SR data to R, and SI to I; e.g., for Tvi you would load what should really be called Tsgsi. Since you are creating your own coefficients with a spreadsheet, be very careful in how you define the coefficients. The forms are:
- Color coefficients are of the form Txy and are defined as 1/slope of (x-y) vs (X-Y).
You would be doing this for (x, y) = (SG, SR) and (SR, SI).
- Magnitude coefficients are of the form Tx_yz and are defined as the slope of (X-x) vs (Y-Z).
You would be doing this for (x, y, z) = (SG, SG, SR), (SR, SR, SI), and (SI, SR, SI). There are others you can do; take a look at the coefficients tab of TA.
- Now the data. I assume you would have an extended-format file all ready for WebObs with the filters labeled as SG, SR, SI. With a text editor, do a global replace of ";SG;" with ";V;" and so on.
- You will need to provide the comp star data, since TA will not find it in VSP. TA has a mechanism for that: you will need to add some lines to your file, one line for each filter/comp star combination. This would be: "#CREFMAG= <CMAG> <CERR> <KMAG> <KERR>", where you replace the <..> notation with the numeric values. Place these before the first observation line that will need them. If you are doing 3 colors and use the same comp/check, then you need to insert only 3 lines.
- Now run TA and review the data, keeping in mind the renaming!
- Finally, take the result file and switch the labels back with your editor, replacing ";V;" with ";SG;" and so forth.
Whew. Not a tidy process and at risk of getting confused. But it should work.
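The two coefficient definitions above can be sketched with a small linear fit. Everything here is made up for illustration: the magnitudes are fabricated and constructed to be exactly linear, so it only shows the mechanics of "Txy = 1/slope of (x-y) vs (X-Y)" and "Tx_yz = slope of (X-x) vs (Y-Z)".

```python
import numpy as np

# Hypothetical standard (upper case) and instrumental (lower case) magnitudes
# for five standard stars in two filters, X and Y.
X = np.array([11.2, 12.0, 12.7, 13.4, 14.1])   # standard mags, filter X
Y = np.array([10.8, 11.7, 12.5, 13.0, 13.9])   # standard mags, filter Y
x = X + 0.05 * (X - Y) + 20.0                  # instrumental mags, filter X
y = Y - 0.03 * (X - Y) + 20.3                  # instrumental mags, filter Y

# Color coefficient: Txy = 1 / slope of (x - y) vs (X - Y).
slope_color = np.polyfit(X - Y, x - y, 1)[0]
Txy = 1.0 / slope_color

# Magnitude coefficient: Tx_yz = slope of (X - x) vs (Y - Z);
# here with (y, z) = (x, y), i.e. Tx_xy = slope of (X - x) vs (X - Y).
Tx_xy = np.polyfit(X - Y, X - x, 1)[0]

# By construction, Txy = 1/1.08 (about 0.926) and Tx_xy = -0.05.
print(f"Txy = {Txy:.4f}, Tx_xy = {Tx_xy:.4f}")
```

With real data the points scatter about the line, and the slope uncertainty is what TG reports alongside each coefficient.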
Meanwhile I'll start thinking about extending TA to other filter sets. That will be a significant project, so I'll wait to see what the demand is.
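The rename-run-rename workaround above could also be scripted instead of done in a text editor. This is a hypothetical sketch (the report line is fabricated); note that a blind global replace is only safe here because the filter codes are delimited by semicolons.

```python
# Map Sloan filter labels to VRI for TA, then reverse the mapping afterwards.
FORWARD = {";SG;": ";V;", ";SR;": ";R;", ";SI;": ";I;"}
REVERSE = {new: old for old, new in FORWARD.items()}

def swap_filters(text, mapping):
    """Apply each delimited filter-label substitution to the report text."""
    for old, new in mapping.items():
        text = text.replace(old, new)
    return text

# Fabricated extended-format observation line with an SG filter code.
line = "M67-V1;2457100.5;12.345;0.012;SG;NO;STD;110;-4.567;111;-4.321;1.23;na;X16101ABC;notes"
for_ta = swap_filters(line, FORWARD)       # SG now appears to TA as V
restored = swap_filters(for_ta, REVERSE)   # and maps cleanly back again

assert ";V;" in for_ta and ";SG;" not in for_ta
assert restored == line
```

Running the whole file through `swap_filters` before TA, and the TA output through the reverse map afterwards, automates the two editor passes.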
Everything about the procedure you mentioned to map the Sloan filters and then switch back is clear to me. OK, we will trick TA into applying the transformation coefficients. Afterwards we can map the filters back to their original state, and we are ready to report our transformed Sloan-type data to WebObs.
But let’s start with a simple example. Let’s make TA to work.
As I understood:
From my last imaging session of M67 I have prepared short (6-point) time-series AAVSO reports for my BVI filter sets (note that the R filter is missing). In TA I have created an INI file with the transformation coefficients calculated by TG:
And I started my attempts to put TA into operation.
Your poster format:
My AAVSO format:
The difficulties and the extra work:
#CREFMAG=13.129 0.021 13.349 0.020
So here we are:
(see the attached screenshot and my AAVSO extended format VPhot files for BVI filters)
So I need your comments and help.
Velimir presented an example of prepping data for TA. Here are some tips to make it easier:
- Comp and check stars SHOULD be identified with AUIDs. That's the WebObs rule, but most submissions in fact use chart labels. TA will accept chart labels and will try to convert them to AUIDs for you. This works 95% of the time. But if there are duplicate labels on the chart, TA will complain and you will have to fix that reference manually, replacing the label with the AUID in the submission.
- #CREFMAG line. This is only needed when TA will not be able to find the standard data in VSP. If the observation uses an AUID, or a label that TA can translate to an AUID, then you don't need a #CREFMAG line.
- "One should manually change the sequence number to the AUID and to be careful because VPhot sequence numbers of Comp stars ALMOST ALWAYS ARE GENERATED DIFFERENTLY."
Oh, I hope that is not so. That undermines a lot of basic assumptions used in WebObs and TA. Please provide an example! In cases of duplicate labels, VPhot disambiguates by adding a suffix to the label (e.g. 110_2), but the label that star would have in the VSP chart would be 110. This added suffix probably applies only to the duplicate stars that appear in that specific chart, so a wider view of the same area might end up with different suffix assignments. But it's a bedrock assumption that a unique label found in a photometry chart will give you exactly the correct AUID.
So, I believe you can put your data straight from VPhot into TA and get the same result as you did when you prepped the files with hand-translated names and CREFMAGs. Please tell us how it comes out! If there is any further difficulty, please post the raw VPhot data.
Hope this helps
I will do more exercises with TA and raw (unchanged) VPhot data later.
Now I am just sending the screenshots of how VPhot loads the standard stars of the M67 field. Every time you load the AAVSO standard stars, many of the labels have changed and the results are unpredictable.
See the screenshots and the attached GIF animation. My favorite is the number 137 – dancing all the time.
And how can one recognize the right label from the AAVSO photometric table when there are so many stars with equal numbers? We can only do this by comparing the magnitudes from the VPhot labels and the photometric table.
You point out the weakness of working with labels on charts. AUIDs are unique, but they are long and would clutter up a chart display, making it unreadable. That's why charts use labels.
I don't think this is an example of VPhot getting the label wrong. This is just confusion over duplicates. For example, if VPhot says the label is 127_3, you can be sure that it's referring to one of the 127s in the photometry table.
Where does that leave us? If you're feeding VPhot data into TA for fields that are not too dense, it should work fine. If TA is confused by a label like 127_2, you will need to resolve those by hand.
As I believe I have said before (?), you should download the AAVSO Standard Comps into one image and save this sequence as your own. You can choose to save them all or remove bad comps at that point as you desire. When you open the next image use your own saved sequence NOT the AAVSO Standard Comps from the catalog. Your saved sequence will have one and only one set of comps which do not change names. You will not run into different names which do cause confusion and yelling! (The reason that some AAVSO Standard Comps change between images is that the very similar comps (e.g., 127, 127_1, 127_2, etc) may exhibit a slightly different order because of noise in the image.)
Try this out and see if it helps. Ken
I carried out more tests with my raw M67 VPhot data. As you said, TA works very well, but only after manually adding the info line #CREFMAG= <CMAG> <CERR> <KMAG> <KERR>. With the funny VPhot labels it was impossible and TA was confused. I was able to submit the transformed result to WebObs without any difficulties (see the screenshots).
I think that you have created a great tool for applying transformation coefficients to multiple filters data and your work on TA is a cause worthy of support.
With respect to the labels that are placed on the star fields by VPhot, I have some recommendations.
I agree that working with labels in a crowded field is difficult, but only if one wants to use a printed chart; I never do this in the digital era. My opinion is that if stars identified as standards have unique AUID numbers, then those should be used as the labels in VPhot. Checking the data of a particular star is easy in VPhot. We can use either of two ways: click on the star and read the info in the pop-up window, or click on the left panel with all the loaded stars, which leads to a separate tab with the measurement details. We can also zoom the image in and out for better viewing.
If you don’t have a 64-bit computer, visit https://store.continuum.io/cshop/anaconda/ to locate a compatible installation file.
This is my case, my PC run Win7 at 32 bits
After erring a bit, I thought I had found what I needed
Windows 32-bit — Python 2.7 — Graphical Installer
I live in the countryside with a slow connection; after about one hour I had the whole download (310 MB) and proceeded to the installation.
Then I created the "Photometry" directory under Anaconda, where I copied
But, then no sign that TG works.
Any help would be appreciated.
One way to get a handle on Python problems is to execute the script from a DOS prompt. When you click on a script from Windows and it faults, the messages it was giving you pass by in the blink of an eye. But if you execute the script from DOS (i.e. "python TransformGenerator5.5.pyw"), then if there is a problem the messages will be left on your screen. Try this, and then communicate directly with Gordon on installation problems.
You may need more detail to test the TG Python installation. As George suggested, try the following two-command sequence in a command prompt window and let me know what you see.
If this does not work, can you send me a screen capture of the c:/Anaconda and c:/Anaconda/Photometry directories?
(You can email me directly at email@example.com)
This is not the case, as I know this very well. I work with VPhot every single day. The star labels do not confuse me, as I always use my own or reworked sequences and always use standard stars I have checked myself. I am just trying to point out some discrepancies and perhaps help to resolve them.
I just realized that in my finished example (see the WebObs from my previous post) the TA-transformed data contain instrumental magnitudes for the Comp and Check stars. Obviously, when one uses only one Comp and Check star (not ensemble photometry), VPhot reports the Comp and Check magnitudes only as instrumental magnitudes.
So please give me advice on how to get VPhot data with magnitudes calculated as standard values; otherwise it would be wrong to submit incorrect data to WebObs.
When I do ensemble photometry there is no problem: all magnitudes are calculated as standard values. We only have to find a way to use ensembles in TA for applying transformation coefficients.
Sorry if I'm a little confused by this thread. Now that I know you are using saved sequences and that you do not get comp names bouncing around, I'm happy. So, I guess you wanted to point out this issue with VPhot and propose an alternative, like using AUID names in VPhot. Is this correct? My only response is that it probably won't happen, since it would take a re-write of VPhot that is unlikely. And the issue can easily be prevented by the work-around of using your saved sequence.
So on to my second confused issue. See your comment and question below and my response/question:
"I just realized that in my finished example (see the WebObs from my previous post) the TA-transformed data contain instrumental magnitudes for the Comp and Check stars. Obviously, when one uses only one Comp and Check star (not ensemble photometry), VPhot reports the Comp and Check magnitudes only as instrumental magnitudes.
So please give me advice on how to get VPhot data with magnitudes calculated as standard values; otherwise it would be wrong to submit incorrect data to WebObs."
Response/Question: The AAVSO Extended Format "requires" instrumental magnitude for comp and check when only one comp is used. This cannot/will not be changed since it has been determined that it is the best choice for use by researchers. Ensemble photometry cannot report the magnitude (instrumental or otherwise) of multiple comps so the AAVSO Extended Format requires/uses the word "ensemble" and happens to report the calculated magnitude of the check.
So do you think that you are reporting "incorrect" data when you report "instrumental magnitudes" for single comp and check situations? That is not the case.
Velimir/George: On to the last issue of helping TA transform ensemble photometry. The only alternative that I can think of is to use the check star as the comp star. The check star is normally used to confirm that the comp is constant. I also think it can be used to confirm that the check star is yielding the correct known magnitude. In other words it is considered another target. In ensemble photometry, I think it is unlikely that the variation in the comps is an issue since there are so many. So I think we could use the check star and its known magnitudes/color to correct the target to the transformed magnitude.
Does this make ANY sense? I'm not sure! ;-(
I'll answer my own question shown below: "On to the last issue of helping TA transform ensemble photometry. The only alternative that I can think of is to use the check star as the comp star. So I think we could use the check star and its known magnitudes/color to correct the target to the transformed magnitude. Does this make ANY sense? I'm not sure!"
NO! Trying to correct the target magnitude, calculated on the basis of several comps (ensemble), to the standard system on the basis of one other comp (i.e., check star) would not provide an accurate standard magnitude unless the check had the same color index as the average ensemble comps. Very unlikely or impossible to know for certain. So, would need to transform each target magnitude for each ensemble comp and then average the individual transformed magnitudes. Which is why TA doesn't do this. Oops! ;-(
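The per-comp-then-average procedure described above (transform the target against each ensemble comp individually, then average the individual transformed magnitudes) can be sketched as follows. This is only an illustration of the idea, not TA's actual code; the data layout and all numbers are invented.

```python
from statistics import mean

def transform_per_comp(m_t, bv_t, comps, t):
    """Transform the target against EACH ensemble comp using the
    differential relation M_t = M_c + (m_t - m_c) + t*((B-V)_t - (B-V)_c),
    then average the individual transformed magnitudes.
    comps: list of (instrumental mag, standard mag, B-V) tuples (hypothetical layout)."""
    per_comp = [M_c + (m_t - m_c) + t * (bv_t - bv_c)
                for (m_c, M_c, bv_c) in comps]
    return mean(per_comp)

# Invented example: two comps bracketing the target in color
comps = [(-6.0, 11.0, 0.5), (-5.5, 11.5, 0.9)]
m_avg = transform_per_comp(-5.0, 0.7, comps, t=0.1)
```

The per-comp color terms here are +0.02 and -0.02 mag, so they cancel in the average; with asymmetric comp colors they would not.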
TA is expecting instrumental magnitudes for comp and check stars, same as WebObs. I believe you thought the values in the poster's example were standard magnitudes; they were not, they were instrumental. My zero point is just different from VPhot's. But it's not a problem for TA as long as it is consistent.
You can't transform ensemble observations because you don't have a standard magnitude for the ensemble. If you did you could use TA with its CREFMAG method for presenting the standard mag data.
You asked: "So please advise me how to get VPhot data with magnitudes calculated as standard values; otherwise it would be wrong to submit incorrect data to WebObs." You don't want standard values. Comp and check are instrumental. That's what VPhot gives and what WebObs wants. Only the target star magnitude is "standardized" in a WebObs record, by referencing it to the comp: Vstd = (Vins - Cins) + Cstd
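That last relation is a one-liner in code. A minimal sketch with invented numbers (not real observation data):

```python
def standardize(v_ins_target, v_ins_comp, v_std_comp):
    """Single-comp standardization (untransformed):
    Vstd = (Vins - Cins) + Cstd."""
    return (v_ins_target - v_ins_comp) + v_std_comp

# Made-up instrumental mags plus the comp's catalog V magnitude:
# (-5.123 - (-6.001)) + 11.250, roughly 12.128
v_std = standardize(-5.123, -6.001, 11.250)
```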
If someone is using Sloan filters:
1- What is the best source for the stars in the standard fields, the SDSS Navigation tool or APASS?
2- Would it be best to get the transformation coefficients for the Sloan filters and then convert to the Johnson photometric system through a Sloan-Johnson transformation?
Sorry for the question; I only have Sloan g', r', i', not Johnson.
Do not be sorry for using g'r'i' filters. I mainly use my Astrodon Sloan-type g'r'i' filters and very rarely BVRI. I reckon the Sloan-type filters are more advanced, since they have a better response to the corresponding parts of the spectrum.
I always use g'r'i' APASS data, or the UCAC4 catalog based on it. See the differences between the APASS (UCAC4) and SDSS data in the following example:
Star: GSC 02977-00937
Source   B       V       SU      SG      SR      SI      SZ
APASS    12.794  12.149  -       12.427  11.969  11.841  -
SDSS     -       -       15.31   12.76   12.06   11.95   13.19
APASS is the only reliable source now for Sloan type photometric standards. More about APASS:
I do not think that you have to convert your observations to the UBVRcIc system. That would add more uncertainty to the final results.
Hi George and Ken,
Thinking about the transformations a question hit me:
Why should I transform my ENSEMBLE photometric results?
When we do single Comp-Check star photometry it is mandatory to transform the results using the relevant transformation coefficients. The reason is obvious: the differences in color indices between the target and the Comp/Check stars. Our instruments have different responses to different parts of the spectrum (colors of stars), so for a star of the same magnitude we will get different values when using a single-star comparison. We can only compensate for this uncertainty by transforming, converting the results to standard values according to our instrument's response (transformation coefficients).
What happens when we use ensemble photometry? We use many standard Comp stars (the more the better), plus an additional target, a standard star, as a Check star. All of our Comp stars have different colors and hence different color indices. Let's assume that 1/3 are bluer, 1/3 are redder, and 1/3 are almost the same color as our target. In the ensemble photometry procedure this should decrease the uncertainty and compensate for the different instrument responses. Then our final ensemble photometric results should be very close to the standard values, and it should not be necessary to transform.
As George said: “You can't transform ensemble observations because you don't have a standard magnitude for the ensemble. If you did you could use TA with its CREFMAG method for presenting the standard mag data.”
After several attempts and a search for additional information, I came to the conclusion that it is actually not possible to transform ensemble photometric data, as we can't get the ONLY ONE resulting standard magnitude and hence the ONLY ONE color index for the ensemble. If we try to do this, it will be so artificial that it will mislead us a lot.
My final decision: I will not try to transform my ENSEMBLE photometric data, and I will submit it to WebObs as it is.
Do you think this makes sense, or are my last thoughts totally confused?
Here is the differential transformation equation: M1-M2 = m1-m2 + t*[(B-V)1 - (B-V)2]
(Zero point and extinction terms are removed (subtracted out) since the target and comp are in the same small field.)
The transformation coefficient term (t*...) equals zero if the color indices of the target and comp are identical (or nearly so), no matter what the value of t is (e.g., 1.0 or 1.1). IF one selected/used a single comp with the identical color index as the target, one would not need to transform. However, in most cases this may be difficult to know up-front or is difficult to achieve due to the selection of available comps. Also, the variable target may change its color during its pulsation cycle.
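That differential transformation can be sketched as code (illustrative only; the symbols follow the equation above, and the example values are invented):

```python
def transform_diff(m_t, m_c, M_c, bv_t, bv_c, t):
    """Differential transformation:
    M_t = M_c + (m_t - m_c) + t*((B-V)_t - (B-V)_c).
    When the color indices match, the t term vanishes no matter
    what the value of t is, as noted above."""
    return M_c + (m_t - m_c) + t * (bv_t - bv_c)

# Identical colors: the coefficient is irrelevant (result is 12.0 either way)
same = transform_diff(-5.0, -6.0, 11.0, 0.7, 0.7, t=1.1)
# Target redder than the comp by 0.2 in B-V: t now matters (about 12.22)
red = transform_diff(-5.0, -6.0, 11.0, 0.9, 0.7, t=1.1)
```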
Your stated assumption for ensemble photometry is that "1/3 are bluer, 1/3 are redder and 1/3 are almost the same color as our target. In the ensemble photometry procedure this should lead to decreasing of uncertainty and for compensation of the different instrument responses. Then our final ensemble photometric results have to be very close to the standard values and it is not necessary to transform."
Ensemble photometry does generally improve the accuracy of the untransformed magnitude of the target, since one has many (rather than one) measurements of the target magnitude, which are then averaged. If one comp is not very good (close to its true magnitude), the others correct for that individual error in accuracy. Unless, of course, they are all biased in the same direction. Not impossible if they come from one catalog?
However, the more important question is: does it improve the agreement (reduce the difference) between the untransformed magnitude and the transformed standard magnitude? IF 1/3 of the colors are the same (i.e., identical to the target or close), then the answer from the equation is Yes (the difference is zero). IF 1/3 are bluer and 1/3 are redder, by similar amounts, again I think the answer is Yes: the difference is - in one case and + in the other, and it zeros out in the average of all magnitudes? If they are not, the answer is of course No. If one has only a few comps in the ensemble, the risk of failure of the assumption is greater. If many comps are used, I think the equation would say the disagreement between the transformed and untransformed magnitudes would shrink. What the number is (10?) is the wrinkle, and it would depend on the range of comp colors. Stating that "final ensemble results have to be very close to the standard values" is THE big assumption/question.
I tend to think yes, BUT as Arne would say, "it depends". It would be great for Arne to provide an expert opinion. This also does NOT mean that transformation would not be the most accurate. I think it is more of a practical decision based on what catalogs and tools are available? It would still be necessary to call them untransformed, and the future researcher would also have your check value to allow some assessment of the accuracy of your target magnitude.
One other very important issue, that I wanted to note but forgot, is the fact that many variables are redder (to the right of the HR diagram main sequence) than most stable main sequence stars which would be selected for comps in an ensemble. Therefore, the comps in the ensemble might not have an equal number of comps that are redder or bluer than most targets. So zeroing out the difference in comp colors will be more difficult?
Thank you Ken,
Your thoughts seem correct and lead to conclusions very similar to those I have come to myself. Of course the conditions when choosing comparison stars may not be ideal (half bluer, half redder, and in the same quantity and quality ratio), but as you rightly pointed out: "Ensemble photometry does generally improve the accuracy of the untransformed magnitude of the target since one has many (rather than one) measurements of the target magnitude which are then averaged."
In my photometric work (I image only in Sloan-type filters) I usually use 12 comparison stars and one check star. Why 12+1=13? Good question, but there is no special reason; I just discovered that in my FOV it is easy to find 13 stars (lucky number) with magnitudes similar to the variation (min and max) of my target. Due to the lack of VPhot sequences with Sloan-type standard magnitudes, I prepare everything manually using the UCAC4 catalogue (or APASS data). Thirteen stars are not such extremely hard work, and my goal is to choose stars with different colors. Naturally they cannot be selected to meet the requirement for a perfect color-index ratio, but the final ensemble photometric results should still be closer to the standard magnitudes, as you already know.
So I patiently and hopefully wait for the time when Sloan-type magnitude sequences will be added to the AAVSO database.
As you said: “It would be great for Arne to provide an expert opinion.”
I see Velimir had a good set of transformed magnitudes using TA, but since I'm using VPhot, I was wondering if the reduction and transforming can be done there without going to TA. One thing that is very annoying is that VPhot uses transform coefficient names like T(subscript R), T(subscript V), and T(subscript I), which have no equivalent in TG5.5. I'm guessing that TR = Tr_ri when doing the IR(Tri) transform and TV = Tv_bv when doing the BV(Tbv) transform? I get reasonable values for all BVI mags until I try to compute R with any other filter transform; it gives a 0 or negative R magnitude when downloading the data from the transform table screen. I guess I'll have to do what Velimir does and apply the transform coefficients separately in TA... a pity, since doing all the work in as few programs as possible would encourage more people to transform their data and lessen the complexity of the process. BTW, here are my latest coefficients (from TG5.5):
James Foster - FJQ
VPhot uses the older coefficient naming scheme, but when you define the magnitude coefficient in the Admin section you have to set the color index too. If you use B-V when setting the B coefficient, then you know that VPhot's TB is equivalent to Tb_bv in the TG/TA schema.
I think the only problem with VPhot's transform process is that it is limited to two filters at a time. So you can't do BVI, e.g.
Hopefully we can help. There are a lot of issues mentioned in your post; perhaps I can make some comments on each item. In some cases I'm not exactly sure what you're asking, but let's try.
"I see Velimir had a good set of transformed magnitudes using TA, but since I'm using VPhot, I was wondering if the reduction and transforming can be done their without going to TA ."
Velimir has generally been talking about generating his transformation coefficients with TG. For BVRI Johnson-Cousins filters, he has used TG to generate a good set of transform coeffs (close to 0 or 1). Since he mainly uses Sloan filters, he has not been able to generate the corresponding Sloan transform coeffs, since TG doesn't use those filter types yet. In fact, it is more because Sloan standard magnitudes are not available for M67 yet that this cannot be done. You're correct in your observation that VPhot does not generate transformation coeffs (if that is what you asked). It wasn't designed to do that. I know of only three software tools (TG, Canopus, and LesvePhotometry) that do this. That is why, up until now, most people have used a manual spreadsheet process.
VPhot can transform single-target, two-filter images only, and only two at a time. Three-or-more-filter, time-series transformation can be done with TA. Unless or until AAVSO can find a programmer to support Geir's work, that will not change. And an argument can be made that, since TA is now an available AAVSO tool, it may not change.
"One thing that is very annoying, is that VPhot uses transform coefficient names like T(subscript R), T(subscript V), and T(subscript I) which has no equivalent in TG5.5. I'm guessing that TR = Tr_ri when doing the IR(Tri) transform and TV = Tv_bv when doing the BV(Tbv) transform?"
I can only offer that I do not find it "very annoying". I think it is more common to see Tv than Tv_bv for the magnitude coeff in many equations in texts. Tv_bv is a more "complete" definition, since it tells the reader which pair of filters was used to calculate the slope that generated the coeff. Multiple pairs can be used to generate the Tv coeff. In fact, VPhot knows that, and on your telescope setup page you can enter a value for your Tv (or other) coeff for each pair of filters. This permits VPhot to use the correct pair depending on which pair of filters you used for your images. Yes, VPhot shortens the coeff label to Tv rather than, e.g., Tv_bv. I suspect the driving reason was not only space but the expectation that most people would use that pair. So yes, you do need to read something into the label, but I don't feel that is a big problem? That is subjective, of course.
"I get reasonable values for all BVI mags until I try to compute R with any other filter transform; it gives a 0 or negative R magnitude when downloading the data from the transform table screen."
I'm not sure I understand this? Do you mean reasonable values for transformation coeffs or the resulting transformed magnitudes for targets? An example would help? I was going to add a bit more here but I don't understand. Sorry.
"I guess I'll have to do what Velimir does and apply the transform coefficients separately in TA....a pity since doing all the work in as few data programs as possible would encourage more people to transform their data and lessen the complexity of the process."
I think Velimir's issue relates to his use of Sloan filters. For this filter set, one must add the CREF line since no Sloan magnitudes exist in the AAVSO comp data yet. I almost always use VPhot for my photometry. I use TG to generate my transform coeffs, every 6 months or so. I use TA to transform my time series with two or more filters. I guess having one program do it all might be nice, but that is true of much that we do in astronomy. I use SkyX, Maxim, FocusMax, and CCDAutopilot to collect my images. Of course, they do interact with each other, mainly because of CCDAutopilot (or ACP). It's a relative annoyance! ;-)
"Btw, here are my latest coefficients (from TGA5.5)"
I think that several coeffs are a bit larger than normal. Most filter coeffs (e.g., Tv) should be closer to 0.9 to 1.1. I think you can get r^2 closer to 0.99. Did you have a wide range of colors? I assume you took out the outliers? Can you share an image of the TG plot?
I hope this helps. It is a lot! I hope it does not turn you off or dissuade you from trying to transform. If you have these comments perhaps others do? I hope to hear from you.
Would you take a look at the attachments, which show my coefficients taken this week (with the Moon in all its glory) using the M67 field? I would have included a summary table of the coefficients if I knew how to paste an image.
They look reasonable. What I especially would like is anyone's comments on those data points I chose to remove. I fear dry-labbing, but if I don't remove these data points it really impacts the coefficients? Here is an AUID list of them: 000-BLG-879, 891, 892, 893, 896, 899, and 901. I believe I can justify their removal because they are in a B-V range where there are many other data points in agreement with each other, and I have 20 or more data points in the calculations with a good range of B-V values.
I processed the images in VPHOT, so I don't have a way to identify them in my data file. Where can I find the AUIDs for the M67 field? I'd like to do some additional troubleshooting.
Thanks in advance, especially to George and yourself, for giving this capability to the AAVSO.
"I processed the images in VPHOT and so I don't have a way to identify them in my data file. Where can I find the AUID for the M67 field? I'd like to do some additional troubleshooting."
You can find AUID's by plotting a standard field chart or photometry table using VSP (e.g., enter EV Cnc and select "Standard field chart" at the bottom of the page).
My first observation is that your coeff plots look reasonable and the values are reasonable. Your choice to remove outliers inside the 3 sigma lines is a subjective call. I think that statistics yields a good estimate/representation of the truth but that your eyes/brain can/may IMHO yield a better result. I know the slope would change but did in fact the coeffs go outside the error range when you left a few more in?
That doesn't mean there are not a few weird things. Why in the Ti coeffs are the outliers clustered at the middle of the color range? Can you see anything in the images that might cause this (location of comp, overlap, magnitude)?
Can you take longer images to get fainter comps and more comps (>50?)? More comps and especially comps at the color extremes really help define the line slope. Collect/include comps down to an SNR of 10 (stack in VPhot to help this) but let TG remove those by setting that filter during its calculation. (20 is default?) BTW, did you take B images?
Ken (PS: I'll look into auid question)
My experience has shown that there are actually just a limited number of very good standard stars in M67: those that have a broad range of colours, are visible in all the filters used (e.g., UBVRI), do not deviate because of something unknown, and do not have close neighbours near the stellar aperture. Not more than about 20 of them, I'd say.
Thank you Tõnis
Do you have the AUIDs for these stars?
No, unfortunately not AUIDs, but I have coordinates of them. I usually process and measure my data using IRAF, so no use for AUIDs. Still, if you query from VSP for M67 standard table, you'll get both AUIDs and coordinates.
Thanks to a post by CTX, here...
Posted: March 25, 2014 - 10:47am
"Transformation Coefficients and M67.
Enter the coordinates RA 08:51:24 +11:45:00 and then select Yes, at the bottom of the VSP chart options, to 'Would You Like A Standard Field Chart?'"
I was able to obtain the AUID list
The attached file shows the Boulder ids and corresponding AUID's (plus RA and DEC)
If the AUID is 000-00-000, then I was not able to find a match.
So I used an example of one of the points that appears to be an outlier, 000-BLG-896. When I calculate V-Ic using the AUID database I get 1.079. I also looked at my VPHOT report, and the standard data there agrees that V-Ic is 1.079. However, in the attached chart the highlighted (red) point is plotted on the V-I axis at roughly 0.65. In fact, I don't see any points that agree between the AUID data and TG. I found a chart of B-V vs. V-I, and it suggests that there is roughly a 1:1 ratio between them up to 1.5. The AUID data for the stars supports this ratio. What am I missing?
From the AUID database: (values in attached table)
From my TG V and I data sets: (values in attached table; agrees with the AUID value)
V-I TG plots: see attached.
If you get a chance this week maybe we could e-mail/talk off line
Oh, BTW, I am still recovering from replacing my scope, so I'll be working with the VRI filters for the immediate future; no B filter.
As we now both agree, weird! So let's talk off-line and when resolved report on forum.