I'm new to EB work and have just run my first two time series on ST Tri. In the Time Series Analysis report, the error bars on the Target Magnitude Graph are quite small, but I also see a Std number for the Target Magnitude of 0.258, which is pretty large. Is this 'Std' number considered to be the 'error' for the target observations in a time series? If so, are there different standards for evaluating time series results versus the following, which I believe apply to variable star analysis in general:
error < 0.015 - excellent
error < 0.050 - good
0.05 < error < 0.10 - fair and OK to report
error > 0.10 - poor and not reportable
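Just to make the bands above concrete, here is a quick sketch in Python (the function name and wording of the labels are my own, not anything official):

```python
def quality(err):
    """Map a reported magnitude error to the rough quality bands above."""
    if err < 0.015:
        return "excellent"
    elif err < 0.050:
        return "good"
    elif err < 0.10:
        return "fair (OK to report)"
    else:
        return "poor (not reportable)"

print(quality(0.258))  # the Std from the report falls in the poorest band
```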
Thank you for any feedback on the above or related topics!
Best Wishes for 2023....
I see that ST Tri swings about 0.8 magnitude in half a day, so your data may span a range of about 0.8 magnitude. Plug that into a standard deviation formula (Wikipedia has one) and see what you get.
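To show what I mean, here's a toy calculation with made-up magnitudes sweeping through an ~0.8 mag eclipse (the numbers are invented, not your data):

```python
import statistics

# Toy light curve: descent into and out of an ~0.8 mag eclipse
mags = [14.0, 14.1, 14.3, 14.6, 14.8, 14.6, 14.3, 14.1, 14.0]

print(round(statistics.stdev(mags), 3))  # ~0.29, comparable to the 0.258 you saw
```

So a standard deviation in the 0.25-0.30 range is exactly what you'd expect for a data set that really varies by 0.8 mag.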
Hunting through the various help files, I don't see anything about how Std is calculated in VPHOT, so we can assume it uses a (sorry, couldn't resist) standard calculation.
If the min and max values given in the report show a big change, you would expect the standard deviation for the data set to be large; if they show a small change, you would expect it to be small.
If V0680 Per and PU Per are in your field of view on those images, you might see a small Std for PU Per and a somewhat smaller Std for V0680 Per (13.48 - 13.98 V). Your comps should not change very much, so they would have a small Std. Be aware that a few rare comps do change.
Std seems to be a measure of how scattered the data set is, rather than the error bar on each data point. The manual gives the formula for the error bars, which uses the signal-to-noise ratio.
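For what it's worth, the usual CCD photometry relation between SNR and magnitude error (which I'm assuming is what the manual's formula boils down to; I haven't checked VPHOT's exact expression) looks like this:

```python
import math

def mag_error(snr):
    """Approximate per-point magnitude error from signal-to-noise ratio:
    sigma_mag ~ 2.5 / (ln(10) * SNR) ~ 1.0857 / SNR."""
    return 2.5 / (math.log(10) * snr)

print(round(mag_error(100), 4))  # SNR of 100 -> ~0.0109 mag
```

So a decent SNR gives small error bars even while the Std of the whole series stays large.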
Hope I'm not confused and am of some small help to you.
Your Std is large because your EB is varying through the time series. To put it another way, the Std is not a measure of your error but a measure of the total variation in magnitude over the time series. Large change in magnitude, large Std; small change in magnitude, small Std. It has nothing to do with the errors of your individual measures. Have a look at your report: each measure is reported individually, and you can see what VPhot reports as the uncertainty for each one. This is different from the Std reported when you do not expect magnitude variation, for example, in a nightly measure of an LPV.
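A little simulation makes the distinction obvious (all numbers invented): an EB-like curve with a real 0.8 mag dip but only 0.01 mag of per-point noise still has a large Std.

```python
import random
import statistics

random.seed(1)

# "True" magnitudes: a real 0.8 mag eclipse
true_mags = [14.0, 14.2, 14.5, 14.8, 14.5, 14.2, 14.0]

# Observed magnitudes: add small per-point measurement noise (0.01 mag)
obs = [m + random.gauss(0, 0.01) for m in true_mags]

# Std is ~0.3, driven by the eclipse, not by the 0.01 mag measurement error
print(round(statistics.stdev(obs), 2))
```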
The quality of the time series you report has more to do with the scatter of your data when you look at the light curve, and with the quality of a time of minimum (ToM) compared to the most current light elements that predict what your ToM should be. It is the familiar O-C analysis: your observed value, O, compared to the computed value, C, which comes from the light curve elements. If you look at Gerry Samolyk's reports in the JAAVSO you will see those O-C values and errors. The EB section has some instructions on how to determine your ToM. It's pretty easy to do, and I would be glad to assist if needed.
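If it helps, the arithmetic behind O-C is just this (the epoch, period, and observed ToM below are placeholder numbers for illustration, not the real ST Tri elements):

```python
# Light elements: HJD_min = epoch + period * E, where E is the cycle count.
epoch, period = 2450000.0000, 0.5000000   # placeholder elements
observed_tom = 2459950.1234               # placeholder measured time of minimum

E = round((observed_tom - epoch) / period)  # nearest whole cycle number
computed_tom = epoch + period * E           # C: the predicted ToM
o_minus_c = observed_tom - computed_tom     # O - C

print(E, round(o_minus_c, 4))
```

A consistently drifting O-C over many minima is what tells you the published elements need updating.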
Ad astra, Ed Wiley