These tests showed that aperture photometry was significantly affected by the higher fraction of dead pixels, but KAMPhot photometry was not, for typical scans and the current operating parameters (e.g., aperture size and thresholds). The slight degradation of KAMPhot results is still overwhelmed by the effects of interpixel response structure (modelled as a central hole of 0.075 arcsec radius and a square border of 0.075 arcsec width, within which the responsivity is zero).
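For scale, a minimal sketch (Python) of the dead-area fraction implied by that model geometry; the 2.0 arcsec pixel size is an assumed nominal value for illustration, not a parameter taken from the tests:

    import math

    def dead_fraction(pixel_size=2.0, hole_radius=0.075, border_width=0.075):
        # Fraction of pixel area with zero responsivity under the model:
        # a central dead hole plus a dead square border (all sizes in arcsec).
        hole_area = math.pi * hole_radius ** 2
        live_inner = (pixel_size - 2.0 * border_width) ** 2  # responsive region inside the border
        border_area = pixel_size ** 2 - live_inner
        return (hole_area + border_area) / pixel_size ** 2

    print(dead_fraction())  # roughly 0.15 for an assumed 2.0 arcsec pixel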
T. Chester reported that T. Jarrett has a program that examines coadded images and (among other things) determines an effective PSF. This could, in principle, be used to generate time-tagged PSF FWHM values, but it would require special code and pipeline modifications to fit it in. The effort would not produce code portable to the 2MAPPS pipeline, which runs counter to a guideline urged by J. Fowler for new software developed at this time.
G. Kopan and J. Fowler had already looked at the possibility of getting PSF FWHM values for KAMPhot to use from the existing PSF and PSFGRID programs. These can run on the DAOPhot aperture-photometry results used in the frame-offsets code, but the best results in the past have come from using KAMPhot output, whose significantly better centroids are needed; without them, the DAOPhot input would effectively convolve the PSFs with a centroiding-error distribution that appears too difficult to deconvolve. In addition, PSF and PSFGRID have been used only on scans ideal for PSF determination, i.e., those with a high density of point sources but little confusion. It is not clear that results from sparse or confused scans would be usable by KAMPhot, and a second pass through the data would be required.
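As a rough illustration of the centroiding-error effect (not the project's algorithm): if both the PSF and the centroid scatter are approximated as Gaussians, the measured width is broadened in quadrature, e.g.:

    import math

    FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.3548 for a Gaussian

    def broadened_fwhm(true_fwhm, centroid_sigma):
        # Apparent FWHM after convolving a Gaussian PSF with Gaussian
        # centroiding errors of rms centroid_sigma (same units, e.g., arcsec).
        return math.sqrt(true_fwhm ** 2 + (FWHM_PER_SIGMA * centroid_sigma) ** 2)

    # e.g., a 2.5 arcsec PSF with 0.5 arcsec rms centroid errors appears ~2.76 arcsec wide
    print(broadened_fwhm(2.5, 0.5))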
Since KAMPhot has code to compute its own PSF from the scan data it is processing, it could conceivably be made to provide time-dependent PSF modelling. The disadvantages of this approach are that it requires very high CPU usage and cannot be trusted to produce stable estimates. Code to stabilize its PSF computation would not be usable in 2MAPPS, in which a separate method is planned.
The 2MAPPS design for PSF estimation could be attempted, but this means rushing the development of a capability that is not scheduled to be underway at this time. The method involves the STATS subsystem estimating the PSF FWHM from the same data it is using for frame-offset computation. Because a statistically useful number of point sources must be examined to obtain a stable estimate, the plan is to output a time-tagged estimate once every N (TBD) point sources; this allows higher-frequency time tracking of the PSF in dense regions, and while the tracking frequency will be lower in sparser regions, a smaller number of point sources will be affected there. What STATS actually estimates is the value of a one-parameter seeing model that describes the optical PSF convolved with a seeing disk; the parameter is expected to be derivable from the FWHM estimate, and the whole algorithm will be constrained to keep the estimate stable. This, however, is a lot to develop and test before May.
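A minimal sketch of the kind of bookkeeping described above (Python); the Gaussian seeing kernel, the fixed optical FWHM, the median combination, and the choice of N are illustrative assumptions, not the planned STATS algorithm:

    import math

    FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))

    def seeing_sigma(measured_fwhm, optical_fwhm):
        # One-parameter seeing value: Gaussian seeing sigma such that the optical
        # PSF convolved with the seeing disk reproduces the measured FWHM.
        excess = max(measured_fwhm ** 2 - optical_fwhm ** 2, 0.0)
        return math.sqrt(excess) / FWHM_PER_SIGMA

    def track_seeing(sources, optical_fwhm=2.0, n_per_estimate=50):
        # Emit one time-tagged seeing estimate per n_per_estimate point sources;
        # sources is an iterable of (time, measured_fwhm) pairs in scan order.
        buffer = []
        for t, fwhm in sources:
            buffer.append((t, fwhm))
            if len(buffer) == n_per_estimate:
                mid_time = buffer[len(buffer) // 2][0]
                median_fwhm = sorted(f for _, f in buffer)[len(buffer) // 2]
                yield mid_time, seeing_sigma(median_fwhm, optical_fwhm)
                buffer = []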
KAMPhot obtains its PSF model in a sufficiently modular way that patching in the results of various algorithms does not appear to involve significant resources or risk. G. Kopan is currently working on the STATS subsystem, but on the frame-offset determination part, not the PSF-estimation part. Nevertheless, it was decided that he and B. Light will pursue modifying his code to include the computation of a 90%-encircled-light value to be passed to KAMPhot for use in setting up its PSF model. If this does not run into serious obstacles, it will be used to obtain the desired PSF time dependence; otherwise, we will have to rethink how to handle this concern.
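For illustration only, a sketch of how a 90%-encircled-light radius can be measured from a background-subtracted stellar cutout (Python/NumPy); the function name, the cutout-based approach, and the handoff to KAMPhot are hypothetical, since the actual computation will live in G. Kopan's STATS code:

    import numpy as np

    def encircled_light_radius(cutout, xc, yc, fraction=0.90):
        # Radius (pixels) enclosing the given fraction of the total flux of a
        # positive, background-subtracted stellar image centered at (xc, yc).
        y, x = np.indices(cutout.shape)
        r = np.hypot(x - xc, y - yc).ravel()
        order = np.argsort(r)
        cumflux = np.cumsum(cutout.ravel()[order])
        idx = np.searchsorted(cumflux, fraction * cumflux[-1])
        return r[order][idx]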
J. Fowler pointed out that some form of PSF RTB (Regression Test Baseline) will need to be set up so that we will know what the effects of various code changes are. A subset of existing protocam scans will be identified for this purpose. These will cover the variations in density and confusion needed but will be kept small enough not to introduce disk storage problems.