Wed, 11/27/2019 - 14:40
During testing of the upcoming VStar release, we realized that there is a difference of a second or two between the results of VStar's JD to HJD conversion and other algorithms available. While the VStar algorithm can certainly be updated, we wonder if that is the best use of development time.
What do people think? Are the times in the datasets you work with in VStar accurate to the sub-second level?
thanks for any feedback,
Cliff
Hi Cliff:
1. Take my comment with a grain of salt! I rarely use VStar for JD to HJD conversion.
2. IF VStar gives an HJD that is off by 2 seconds from "other" algorithms (which agree with each other?), then I think you should find out why and fix it. IMHO, two seconds is quite large for precise time-of-minimum (TOM) assessments over years?
3. Consider asking an expert like Joe Patterson, who looks for small shifts in his CVs?
Ken
Ken,
Thanks for taking the time to comment. It is not so much a matter of fixing the software as of making it incorporate more sources of variation. Before undertaking that work, we wanted to gauge the level of interest.
I'm going to conclude from the level of response that, among VStar users, sub-second time is not widely a concern.
best regards,
Cliff
I use Dimension 4. It reports that it is good to about 400 milliseconds. But then a fit needs to be made to the datasets to obtain a period. IMHO, the fits to my data don't have one-second accuracy. On scraggly datasets I might not miss an hour.
Ray
I never use VStar for the specific purpose of converting JD data to HJD.
If HJD light curve plots in VStar are out by about 2 seconds, that would not be a problem for me, personally.
That said, if a routine I used for heliocentric conversion was found to be out by 2 seconds, I would look for a more accurate one.
Roy
I do EB eclipse timing... The reported uncertainties of the times of minima are very rarely smaller than 0.0001 day, i.e. 8.6 seconds. And those are debatable: O-C diagrams suggest the true uncertainty is often quite a bit greater.
I suggest that if someone needs accuracies better than a few seconds, for starters they should probably be using BJD, not HJD, and it may be that their times should not be referenced to UTC (as Dimension 4's are). It is a complex topic, and one I am not an expert in, especially for cases where UTC is not appropriate.
I use astropy for conversions from UTC to HJD and BJD.
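For what it's worth, here is a minimal sketch of the kind of conversion I mean, using astropy (the observatory location, target coordinates and date below are just made-up examples; substitute your own):

```python
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

# Made-up observatory location and target position -- substitute your own.
site = EarthLocation(lat=51.0 * u.deg, lon=-114.0 * u.deg, height=1100 * u.m)
target = SkyCoord(ra=83.82 * u.deg, dec=-5.39 * u.deg)

t = Time("2019-11-27 14:40:00", scale="utc", location=site)

# Light travel time from the observatory to the heliocentre / barycentre.
ltt_helio = t.light_travel_time(target, kind="heliocentric")
ltt_bary = t.light_travel_time(target, kind="barycentric")

hjd_utc = (t + ltt_helio).jd      # HJD, still on the UTC scale
bjd_tdb = (t.tdb + ltt_bary).jd   # BJD_TDB, the usual standard for precise timing
print(hjd_utc, bjd_tdb)
```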
Gary Billings
I'm not sure there is a problem with UTC per se.
Astropy has been mentioned, and there is a problem with Python and its usual UTC implementation: Python's core datetime package only allows second values of 0-59, so it cannot represent the leap seconds that otherwise keep UTC and certain other timescales in sync.
Astropy is much smarter about this. And recently I've discovered Brandon Rhodes' easier-to-use skyfield Python package, which can also do the UTC-to-other conversions carefully, using up-to-date Earth-rotation and leap-second corrections (downloaded automatically).
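To illustrate the point, a quick sketch (assuming astropy is installed):

```python
from datetime import datetime
from astropy.time import Time

# Python's core datetime refuses the leap second 23:59:60 ...
try:
    datetime(2016, 12, 31, 23, 59, 60)
except ValueError as err:
    print("datetime:", err)

# ... whereas astropy's Time accepts it on the UTC scale and converts it
# correctly, using its bundled (and updatable) leap-second table.
t = Time("2016-12-31T23:59:60.5", scale="utc")
print(t.tai.iso)
```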
If the issue is a +/- 2 second difference in the HJD conversion, it probably is not an issue for the current photometry. If it is a 2 second difference that accumulates, that is another issue. (O-C) diagrams require precise timing in order to be accurate. Correlations with X-ray data also require precise timing to determine whether deviant points are real or noise. Future high-cadence time series might need precise timing using GPS and BJD for correlation.
My basic premise is that anything you can do with programming to increase accuracy, should be done. Find a better algorithm and install it. Then these questions are moot!
Arne
Hi all
I use sub-second times for O-C and occultation work, and an error of a couple of seconds would be an issue. That said, I do not use VStar for HJD determination, but other software.
It would be preferable that VStar (a fantastic piece of software in all other ways) were accurate, in case someone who does not realise the issue uses it for HJD determination. It would be better to remove the HJD determination plug-in until it is fixed.
Robert Jenkins
Robert: If you need sub-second times for your work, I'm not sure you should be using HJD. For really demanding O-C plots, I would have thought that the vertical axis should be computed using (earth-orbit-corrected) TAI, not JD or HJD.
Since HJD is tied to the UTC timescale, not to the TAI timescale, won't you miss all the leap seconds, which add up to a lot more than a couple of seconds? Physics (including eclipsing binaries) ticks to TAI, not to UTC (or to its expression as JD).
Or maybe VStar already does that, but it's not clear to me.
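Just to put a number on it, a quick sketch with astropy (the dates are arbitrary) showing how the leap seconds add up:

```python
from astropy.time import Time

for year in (1980, 2000, 2019):
    t = Time(f"{year}-01-01T00:00:00", scale="utc")
    offset = (t.tai.datetime - t.utc.datetime).total_seconds()
    print(year, offset, "s")   # TAI-UTC: 19 s, 32 s, 37 s
```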
Thanks Eric
You may be right, but the group I am working with have requested HJD, so I am complying. I will explore the TAI/HJD issue a lot more.
Robert
HJD (and JD for that matter) per se is not tied to UTC. To be complete, one would have to specify the time standard in addition, like "JD(UTC)" or "HJD(TT)", e.g. see
https://en.wikipedia.org/wiki/Heliocentric_Julian_Day
and anyway, strictly speaking:
to complete the confusion.
Cheers
HB
Good point. And it is interesting to note this is not exactly clear in the data we upload either.
https://www.aavso.org/aavso-extended-file-format
I think all we can assume is UTC.
Cliff
Indeed one can express other timescales as Julian Date (JD). But in the absence of leap-second data or of a differently specified timescale, JD commonly expresses UTC. UTC and JD(UTC) are what PCs write into FITS files, so that's what generally gets transmitted to AAVSO. JD(UTC) is also what AAVSO's own JD/UTC converter tool delivers, even though it says only JD.
And that's fine: UTC or JD(UTC) are perfectly good for recording past datetimes in databases like AID and for visually inspecting lightcurves. There's no ambiguity in storing *past* datetimes as UTC or JD(UTC)--they can be converted to TAI etc unambiguously since we know the leap second record. It's when *analyzing* the lightcurves to extract very precise periods, O-C, etc that the leap seconds matter, and thus that the type of timescale, and yes the type of JD, really matters.
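In practice that just means attaching the timescale explicitly when pulling a JD out of a database or FITS header before analysis; a minimal sketch with astropy (the JD value is arbitrary):

```python
from astropy.time import Time

jd_from_header = 2458815.111       # a JD(UTC) as typically stored
t = Time(jd_from_header, format="jd", scale="utc")
print(t.tdb.jd)                    # the same instant as JD(TDB), a uniform, leap-second-free scale
```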
Struth
The more I read the more confused I am!!! I may go back to sundial time - even at night..
I think what helped me was to realize there are two components. One is time as measured at a particular place, allowing for the light travel time involved: Earth, Sun, or solar system barycenter. The other component is the time scale. For the scale, I found this source helpful: http://www.hartrao.ac.za/nccsdoc/slalib/sun67.htx/node217.html
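To see the scale component on its own, here is a small sketch using astropy (the date is just an example) that prints the same instant on several timescales:

```python
from astropy.time import Time

t = Time("2019-11-27T14:40:00", scale="utc")
for scale in ("utc", "tai", "tt", "tdb"):
    print(scale, getattr(t, scale).iso)
# In late 2019: TAI is 37 s ahead of UTC, TT is 69.184 s ahead,
# and TDB stays within about 2 ms of TT.
```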
I hope you sleep in a nice soft bed tonight and not a damp, cold cave.
Cliff
This thread reminds me how reading about time and its measurement led me to an interest in astronomy as an amateur observer.
I came across the terms Right Ascension and Declination and realized they represented the 'longitude' and 'latitude' of the sky. I went outside on a clear southern summer night, looked up, and said to myself that I must learn something about these stars. For example, that grouping, there, high up, over a large area of sky, must have a name.
My knowledge of the sky then was meagre. I was looking at the constellation Orion!
Roy
Here is an HJD and BJD paper I found that may be of interest.
Robert Jenkins
Thanks. That is a very readable description of the problem even if you do not need the ultimate precision described.
cliff
Hi all
Thanks to everyone for all the informative comments and pointers to papers, and to Cliff for coordinating.
In the current release, the HJD conversion takes place when loading a set of non-HJD observations in the presence of an already loaded HJD dataset. In the forthcoming release, this will be removed from VStar's core and become an optional plugin that allows more control over which datasets to convert. This had already been done during development before Cliff raised this issue during system testing.
The two-second error is very likely due to the current implementation using a lower-accuracy method from Meeus for part of the calculation, but I'm checking that and looking at the effort involved in moving to the higher-accuracy method.
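For anyone following along, the geometric part of the heliocentric correction is the same in both cases; what differs between the lower- and higher-accuracy methods is how precisely the Sun's position and distance are computed. A rough sketch of that common part (the function and parameter names are just for illustration, not VStar's actual code):

```python
from math import sin, cos, radians

AU_LIGHT_DAYS = 499.004784 / 86400.0   # light travel time over 1 AU, in days

def hjd_from_jd(jd, ra_star, dec_star, ra_sun, dec_sun, r_au):
    """All angles in degrees; r_au is the Earth-Sun distance in AU."""
    a, d = radians(ra_star), radians(dec_star)
    a0, d0 = radians(ra_sun), radians(dec_sun)
    # Cosine of the angular separation between the star and the Sun.
    cos_theta = sin(d) * sin(d0) + cos(d) * cos(d0) * cos(a - a0)
    # Subtract the projected light travel time: at opposition (cos_theta = -1)
    # this adds ~8.3 minutes, at conjunction it subtracts ~8.3 minutes.
    return jd - r_au * AU_LIGHT_DAYS * cos_theta
```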
Cliff has also suggested using AstroImageJ's Java code for this, since it doesn't exhibit the discrepancy. Licensing considerations need to be taken into account before going down that track though. AIJ's licence (as with VStar's) requires contributing back any changes we make to the code, which is fine of course. Also, the effort to move to the higher-accuracy method may not be onerous and, if not, I'd like to consider that option first.
Arne asked whether the effect is cumulative. It is not: the discrepancy is isolated to each HJD conversion.
I certainly agree that this should be fixed, and that in general as much as possible should be done to optimise accuracy and performance.
Even if a drop-in replacement is used, unit and system testing is still required of course. VStar has a unit test suite to alert us to possible problems when code is modified and to allow code to be modified with some confidence. Additionally, Cliff and Dave have been doing a great job of system testing.
As per Cliff's first post in this topic, we're interested in determining priority relative to other tickets/issues.
Thanks.
David