While collecting archival photometry, especially from the 19th century, I have wondered how the (purely visual) magnitude scale was extended beyond naked-eye limits as commonly available telescopes became progressively larger. Yes, I know that the scale (in terms of flux) was poorly defined. And even after the gradual adoption of the Pogson scale, substantial scale errors remained at the faint end well into the 20th century, until photoelectric photometers attached to big telescopes became available.
The specific case in point is R CrB, which folks observed regularly near maximum at roughly mag 6, but which of course more-or-less disappeared during the deep obscuration events. By roughly 1850 people were following it to a claimed mag 12 or 13, but what sorts of references were used to establish this? Could it have been rather fainter? (No Landolt standards!) Some decades later one had Zöllner-type (polarizing) visual photometers, with which one could get a start on extending the magnitudes beyond the BD (which has its own scale error at the faint end), but the need was surely felt well before then, in the 1820s and '30s. What did observers do? What I'm hoping to find is some papers where someone pondered this, or a series over many years where this got worked out by the community. Just now I think there could be some details about this in the old 'Handbuch der Astrophysik', which luckily we have in our library, but I will have to check another day. Any other leads will be welcome.
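For concreteness, here is what those numbers imply once the Pogson convention (exactly 100x in flux per 5 magnitudes) is adopted. A minimal Python sketch, with only the R CrB magnitudes above taken from the discussion:

import math

def flux_ratio(delta_m):
    # Pogson scale: delta_m magnitudes correspond to a flux ratio of 10**(0.4 * delta_m)
    return 10 ** (0.4 * delta_m)

def delta_mag(f_bright, f_faint):
    # Inverse relation: magnitude difference from a flux ratio
    return 2.5 * math.log10(f_bright / f_faint)

# R CrB near maximum (~mag 6) versus the claimed minima of mag 12-13:
print(flux_ratio(12 - 6))   # ~251x fainter
print(flux_ratio(13 - 6))   # ~631x fainter

So the claimed minima imply the star dropping by a factor of several hundred in flux, which is why the choice of faint references mattered so much.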
\Brian
Hi Brian,
To the best of my knowledge, until wedge and prism photometers came into use around 1850, the faint end of the magnitude system was an extrapolation of the naked-eye system. The BD, for example, assumed that the faintest stars visible in its telescopes were about magnitude 9, and interpolated from there to the faint end of the naked-eye sequences.
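A toy illustration of that kind of interpolation (all numbers below are invented, except the assumed magnitude-9 telescopic limit mentioned above): anchor on the faintest naked-eye sequence star, peg the telescope's limiting stars at the assumed magnitude, and distribute the stars in between over judged equal brightness steps:

# Toy sketch of extrapolating a sequence from a naked-eye anchor to an
# assumed telescopic limit (anchor value and step count invented).
anchor_mag, limit_mag = 6.0, 9.0   # faintest naked-eye star; assumed telescope limit
n_steps = 6                        # observer judges 6 equal steps in between
step = (limit_mag - anchor_mag) / n_steps
sequence = [anchor_mag + i * step for i in range(n_steps + 1)]
print(sequence)                    # [6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0]

Any error in the assumed limit propagates directly into the faint end of such a sequence, which is one way the scale errors Brian mentions would arise.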
A good reference for early work is John Hearnshaw's book, The Measurement of Starlight.
Arne
Thanks for these added points. From collecting a fair amount of the old data into machine-readable form, it seems that the various visual photometers were not common, and most folks worked with the traditional Argelander step method (a toy reduction is sketched below). Starting in the 1890s, John Parkhurst, working at Yerkes, used a Pickering wedge-type photometer to measure comp stars, but only occasionally used it on the Miras he followed:
https://ui.adsabs.harvard.edu/abs/1906rspd.book.....P/abstract
These data are in the AID without any zero-point or scale correction, and I think he ended up with a significant scale error at the faint end despite his efforts to avoid it (he was going fainter than he thought).
Similarly, the heroic observer George van Biesbroeck made mostly visual step estimates in the large variable-stars monograph here:
https://ui.adsabs.harvard.edu/abs/1914AnOBN..13..175V/abstract
(mostly unreduced, so it is a difficult dataset, but the data for R CrB _are_ reduced to magnitudes, which I have keyed in)
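For anyone who hasn't worked with them: an Argelander step estimate records the variable as so many steps fainter than one comparison star and brighter than another, and reduction to magnitudes requires an adopted step value. A toy sketch of that reduction, with comparison magnitudes and step counts invented for illustration:

# Observation "a 2 v 4 b": variable judged 2 steps fainter than comp a,
# 4 steps brighter than comp b (all numbers invented for illustration).
comp_a_mag, comp_b_mag = 6.3, 6.9   # adopted magnitudes of the comparison stars
steps_a, steps_b = 2, 4

# Step value implied by this single estimate: total interval / total steps
step_value = (comp_b_mag - comp_a_mag) / (steps_a + steps_b)

var_mag = comp_a_mag + steps_a * step_value
print(round(step_value, 2), round(var_mag, 2))   # 0.1 mag/step, variable at 6.5

This is why unreduced step data are so awkward: without adopted comparison-star magnitudes the steps float free of any zero point and scale.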
It looks as though there's a Handbuch der Astrophysik monograph (200+ pages, in German) by Hassenstein about visual photometry. I'll be able to look at that on Monday in our library.
\Brian