### Variation of daily average emission

**The well-established enhancement of Ca II** emission at times of high meteor activity (Vallance Jones 1956) suggested to us that there might be a correlation between MgI emission and meteor showers. The Orionid shower has a maximum of activity around October 20th and the Lyrid shower about April 21st. The September and October 1971 data show little significant variation in emission from night to night, having a mean level of 0.31 Rayleighs. The April 1972 data show a generally lower emission rate of about 0.1 Rayleighs, with some interesting structure beginning on the night of April 14/15, coinciding with the onset of the Lyrid meteor shower.

**Two scans recorded near morning twilight** on April 15th showed an unusually strong emission of about 0.7 Rayleighs. This declined over the next few nights until another enhancement occurred on April 20/21, corresponding to the time of maximum activity of the Lyrid shower. Unfortunately our data do not extend beyond April 21st. It is worth noting that the ‘enhancements’ in late April 1972 merely bring the emission rate up to the September-October 1971 level. In this respect it is not clear whether the emission really underwent an enhancement in late April or was merely in a depressed state in early April! We believe the enhancement to be real, but the difference in ambient emission rates between September-October 1971 and April 1972 may be an artefact of the calibration technique and of modifications to the instrument’s fore-optics between observation periods, though every effort was made to correct for these factors.

Enhancement of the emission at the time of the Lyrid shower suggests that at least part of the atmospheric magnesium is of meteoric origin. No such enhancement occurred during the October 1971 Orionid shower, however.

**Figure 3.10 Mean night-time emission as a function of time. The vertical arrows indicate upper limits, and the gap in data corresponds to the period of Full Moon.**

### Variation of emission throughout the night

**Depending on which mechanism produces the magnesium emission,** different variations of intensity throughout a night could result, so an attempt was made to determine from the data an hourly estimate of intensity averaged over a number of nights. The process was attempted only for the September-October period, since a greater number of scans was available.

**The result is shown in Figure 3.11.** The error bar has been calculated from the probable errors in the individual estimates, assuming a normal distribution of these about a mean value. Each point represents the average of about 20 measurements from individual scans. It is seen that there is a significant increase in intensity near twilight in the morning and evening. There is present in the observations, however, a strong observational selection effect. Most of the scans recorded near sunset or sunrise were at low altitude, and airglow phenomena are known to increase in brightness near the horizon (the van Rhijn effect), so these scans might be expected to show a greater intensity of emission than those around midnight.
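The hourly binning and the error estimate described above can be sketched as follows. This is a minimal illustration in Python, assuming the standard error of the mean as the error estimate; the function name and inputs are hypothetical.

```python
import numpy as np

def hourly_averages(times_h, intensities):
    """Bin scan intensities by hour of night, returning for each hour the
    mean and the standard error of the mean (assuming, as in the text, a
    normal scatter of individual estimates about the mean).
    times_h are decimal hours; intensities are per-scan values."""
    times_h = np.asarray(times_h, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    hours = np.floor(times_h).astype(int)
    results = {}
    for hour in np.unique(hours):
        vals = intensities[hours == hour]
        mean = vals.mean()
        # standard error of the mean; undefined for a single measurement
        sem = vals.std(ddof=1) / np.sqrt(len(vals)) if len(vals) > 1 else np.nan
        results[int(hour)] = (mean, sem)
    return results
```

With about 20 scans per hourly bin, as in Figure 3.11, the standard error shrinks roughly as 1/√20 relative to the scatter of individual scans.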

**Figure 3.11 Hourly averages of MgI emission intensity throughout the night.**

**Figure 3.12 Simple model of an airglow layer.**

**A simple model of an airglow effect is shown in Figure 3.12**. If the emitting layer is optically thin, the total intensity of emission recorded on the ground is proportional to the line-of-sight length of path in the emitting layer, and thus …

Emission ∝ sec θ, where θ is the zenith distance.

**The length of the absorbing path is also roughly proportional to sec θ**, but more precisely is proportional to a function f(θ) defined using the air mass figures tabulated in Allen’s Astrophysical Quantities (paragraph 54). The intensity transmitted is thus

I = A sec θ · e^(−k f(θ)),

where k is a constant for a given air type, and A a constant governed by the physical mechanism of the emission, and determinable by experiment. If k f(θ) is small,

I ≈ A sec θ (1 − k f(θ)).

The approximation (kf small) is good only as long as the minimum transmission is about 80%, which is just realised at the lowest elongations under discussion. Using the tabulated values, the relative strengths of emission at two different altitudes can be found.

**To predict what the effect of observational selection** might have been, the scans providing the information for points A and B in Figure 3.11 were individually checked for zenith distance. For each scan the relative strengths from a typical airglow distribution were calculated as above, and the average values were adjusted accordingly. The result was that, to compensate for the observational selection effect due to zenith distance, the value at point B needed to be multiplied by 1.44. As can be seen, this brings point B to roughly the same level as point A, and suggests that the true variation of intensity throughout a night is small.
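Under the simple layer model above, the relative-strength calculation and the selection correction might be sketched as follows. This is an illustrative Python fragment, not the original procedure: the extinction constant k and the use of sec z as a stand-in for the tabulated air-mass function f(θ) are assumptions for the example.

```python
import numpy as np

def relative_strength(zenith_deg, k=0.1, airmass=None):
    """Relative observed emission I ∝ sec(z) * (1 - k*f(z)) for an optically
    thin emitting layer, valid while transmission stays above ~80%.
    f(z) defaults to sec(z) here; tabulated air-mass values could be
    passed in via `airmass` instead."""
    z = np.radians(zenith_deg)
    sec_z = 1.0 / np.cos(z)
    f = sec_z if airmass is None else airmass
    return sec_z * (1.0 - k * f)

def selection_correction(zenith_A_deg, zenith_B_deg, k=0.1):
    """Factor by which the mean of group B must be multiplied to put it on
    the same footing as group A (cf. the factor 1.44 found in the text),
    given the zenith distances of the scans in each group."""
    mean_A = np.mean([relative_strength(z, k) for z in zenith_A_deg])
    mean_B = np.mean([relative_strength(z, k) for z in zenith_B_deg])
    return mean_A / mean_B
```

Applied to the actual zenith distances of the scans behind points A and B, this kind of calculation yields the correction factor quoted in the text.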

### Final comments on MgI emission

**The fact that the MgI emission does** not disappear late in the night, when the 90-100 km region of the Earth’s atmosphere, where most other metallic ions are observed (Narcisi 1968; Narcisi and Bailey 1965), is completely in shadow, makes it unlikely that any fluorescence mechanism is a source, and indeed it is hard to see how fluorescence could excite the triplet 3p ³P₂ – 4s ³S transition. Radiative recombination of Mg⁺ with an electron, the mechanism suggested by Anderson and Barth (1971) as governing the daytime balance between Mg and Mg⁺, would seem to be the most likely cause of the emission. It might be expected that the other lines of the triplet, at 5172.68 Å and 5167.33 Å, would also be present. The intercombination line at 4571.1 Å, connecting the ³P to the ¹S state, should also be present. I know of no observational searches for these lines, though more recent studies seem to show that neutral Mg emission is highly variable over a day, with a maximum at dusk (Gardner et al. 1995).

Attention was now turned back to the absorption line, at 5183.62 Å, for the purposes of determining Doppler shifts as originally planned, and for this purpose the small emission core, once subtracted, was ignored.

## Averaging

**The process of using the observed absorption line spectra** to discriminate between theoretical dust cloud models is complex. Even if we confine our attention to a ZL produced exclusively by a circumsolar cloud of material, with no other components, there are still many variables. The spectral profile of the absorption line may be expected to depend upon elongation from the Sun, and possibly the position of the Sun on the celestial sphere (reflecting the Earth’s position with respect to the Zodiacal Cloud). There may also be short, medium and long period variations due to solar activity, or due to changes in the density and composition of the cloud with time, perhaps brought about by the visits of comets to the inner Solar System. Because of the way in which the data were collected, it was considered practical to approach the analysis purely as a function of elongation, but to keep the two sets of observations (from 1971 and 1972) separate, in order to highlight any differences between the two periods.

**The ecliptic plane was** divided into 10-degree intervals (or 5 degrees for low elongations), and an average spectrum was computed for each group of scans in the interval. The method was to set up a number of wavelength ‘channels’ of width one ‘lamp’, and for each channel, add up the relevant number of counts recorded for each scan, finally dividing the total by the number of scans. Care was taken to avoid introducing artificial noise by averaging over a different number of scans for different points. Where no value existed in a given channel, an interpolated value was used. Because of the finite width of a channel, this method produced a final spectrum which has been convolved with a ‘box-car’ function, one lamp in width. However the effect of this is quite negligible compared with that of the instrumental function width.
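The channel-averaging scheme described above can be sketched as follows. This is a hypothetical Python rendering: integer channel indices stand in for the ‘lamp’-width wavelength channels, and linear interpolation fills any channel a scan did not record, so that every channel is averaged over the same number of scans.

```python
import numpy as np

def average_scans(scans, n_channels):
    """Average several scans channel by channel. Each scan is a list of
    (channel_index, counts) pairs. Channels missing from a given scan are
    filled by linear interpolation first, avoiding the artificial noise
    that averaging different numbers of scans per point would introduce."""
    grid = np.arange(n_channels)
    filled = []
    for scan in scans:
        idx = np.array([c for c, _ in scan], dtype=float)
        cts = np.array([v for _, v in scan], dtype=float)
        filled.append(np.interp(grid, idx, cts))  # interpolate gaps
    return np.mean(filled, axis=0)
```

As noted in the text, averaging over channels of finite width amounts to convolving the spectrum with a box-car one channel wide, a negligible effect next to the instrumental function width.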

**A typical averaged scan,** in this case an average of 8 individual scans, is shown in Figure 3.7e. Most of these derived spectra exhibit what we might call a ‘well-behaved’ absorption feature. All averages were done on the central four-and-a-half angstroms of the scans only. Attention was then turned towards fitting curves to the experimental points, in order to extract parameters which can be compared with theory. I attempted first to fit simple polynomial shapes to the points.

## Polynomial fits

### General

**In 1973,** the sole computing power available at Imperial College for processing data was contained in a single ‘main frame’ Atlas computer, housed in the Mechanical Engineering building, and taking up a whole floor of space. Input was by means of punched cards or 5-hole paper tape, and due to the demand, and the slow processing speed, it was normally possible to run a programme just once or twice a day. It was thus a very time-consuming business to ‘debug’ a new program, and much patience was needed. It was possible to output on a printer, but the choice could be made to also output on card or paper tape media, and, in this form, the result could be used directly as input to a subsequent program. This was the approach I used, producing decks of cards of the averaged scans and fitted curves for input later to a graph-plotting programme. Beginning with no assumptions about the shape of the line, I computed a best-fit polynomial curve for each averaged scan. The computer program used, LSQFIT, was an adaptation of a CERNUP library program which fits, by the method of orthogonal polynomials, a polynomial of any degree to a set of points.

**Figure 3.13 A ‘difficult’ scan. Least-squares polynomials of order 1 to 8, fitted by LSQFIT to the averaged spectrum for 135° West (morning), September-October 1971.**

The fit minimises the sum of the squares of the residuals, and may be used with suitable weighting according to the standard error on each experimental point. In practice, weighting was found to make very little difference to the fits, and in the end, equal weighting was used in all cases.

**Depending on input format,** the program printed out various information on the fits. One of the most useful items is the "F-Ratio", which gives a measure of how good a fit a polynomial of a given order is, relative to that of a polynomial of the next order up. As would be expected from the shape of the curves, in most cases a fourth order polynomial gave a good fit (three possible turning points). Figure 3.13 shows a scan for which it was particularly hard to find a simple fit; it provides a good illustration of the way the shape of the curve can vary with successive orders of the polynomial.
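As a rough illustration of the F-ratio idea (not LSQFIT itself, whose exact statistic may differ in detail), one can compare the residual sums of squares of fits of successive polynomial orders:

```python
import numpy as np

def f_ratio(x, y, order):
    """Compare a least-squares polynomial of degree `order` with one of
    degree order+1: the drop in residual sum of squares bought by the
    extra term, scaled by the residual variance of the higher-order fit.
    A value near 1 suggests the extra term gains nothing; a large value
    favours the higher order."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def rss(n):
        coeffs = np.polynomial.polynomial.polyfit(x, y, n)
        resid = y - np.polynomial.polynomial.polyval(x, coeffs)
        return float(resid @ resid)

    rss_lo, rss_hi = rss(order), rss(order + 1)
    dof = len(x) - (order + 2)  # points minus parameters of the larger fit
    return (rss_lo - rss_hi) / (rss_hi / dof)
```

For data with an essentially quadratic trend, the ratio is very large going from first to second order and drops towards unity at higher orders, mirroring the behaviour described for the averaged scans.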

**Figure 3.13,** like others in this report, is from a Calcomp plot. Program LSQFIT was equipped with a card output giving the ‘x’ and ‘y’ values of the experimental points, and the coefficients of the fitted polynomial. A program POLYPT was written to accept these as input, to plot the experimental points on the correct axes, and to trace out the fitted curve, calculated at one-twentieth angstrom intervals from the coefficients. (Printouts of the programs used are shown in the Appendices at the end of this thesis.) These curves, fitted to the data, give an immediate eyeball estimate of the shift exhibited by the absorption line profile, as compared with the profile in direct sunlight. But, to remove human error from these estimates, methods were devised to extract an average shift parameter in each case. In this context, the word ‘average’ means a best estimate of the shift of the line profile. From Zodiacal Cloud models, as we shall see, a corresponding ‘average shift’ can be computed: the average of the shifts of the various ‘cells’ of dust particles along the line of sight integral for a particular elongation, for comparison with this observational ‘average shift’ (see Figure 1.3). I used two separate methods for extracting a best estimate of the shift from the fitted polynomial curves:

**1)** A determination of the minimum point of the curves (program MINIPT).

**2)** A determination of the wavelength half-way between the wavelengths of the two inflexion points (program FLEXPT).

It would also be possible to make an estimate of the shift by finding an ordinate which bisected the area contained between the spectrum curve and an estimated continuum level.
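The two shift estimators can be sketched directly from the fitted coefficients. The following is an illustrative Python version (not the original MINIPT/FLEXPT programs), assuming a quartic fit with ascending-power coefficients a₀…a₄ and wavelength measured from the unshifted line position.

```python
import numpy as np

def shift_from_minimum(coeffs):
    """MINIPT-style estimate: the real root of dy/dx = 0 closest to x = 0,
    i.e. the turning point corresponding to the bottom of the line."""
    dcoeffs = np.polynomial.polynomial.polyder(coeffs)
    roots = np.polynomial.polynomial.polyroots(dcoeffs)
    real = roots[np.isclose(roots.imag, 0)].real
    return float(real[np.argmin(np.abs(real))])

def shift_from_inflexions(coeffs):
    """FLEXPT-style estimate: the wavelength half-way between the two
    inflexion points, i.e. the midpoint of the roots of d2y/dx2 = 0.
    For a quartic this midpoint is simply -a3/(4*a4)."""
    d2 = np.polynomial.polynomial.polyder(coeffs, 2)
    roots = np.polynomial.polynomial.polyroots(d2)
    return float(np.mean(roots.real))
```

For a symmetric line profile the two estimates coincide; any difference between them is a crude measure of the asymmetry of the fitted profile.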

### Computation of the minimum point (Program MINIPT)

**Consider a fourth order polynomial**. If the curve is represented by

y = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴,

where x is reckoned from a zero position at the unshifted line wavelength, and a₀ … a₄ are the coefficients determined by the fit, then the gradient is

dy/dx = a₁ + 2a₂x + 3a₃x² + 4a₄x³,

which must be zero at the minimum point. Since all the curves have the general shape of an absorption line slightly shifted from zero position, we can assume that the turning point we want is the one closest to x = 0. So any value close to zero can be used as a first approximation in an iterative process (Newton’s method).

**Figure 3.14 Illustrating Newton’s method of successive approximation to solve the equation dy/dx = 0, giving the minimum point of the fitted curve.**

**Referring to Figure 3.14**, given any first approximation ‘root’ to the true root, we can find a better approximation by finding where the tangent to the curve at this point crosses the zero line. Writing g(x) for dy/dx, the geometry of the tangent gives

delta = g(root) / g′(root),

delta being the correction to be subtracted from ‘root’ to give the value of ‘new root’. Therefore

new root = root − delta,

and the whole process is repeated, using ‘new root’ as the new starting point, and so on, iteratively. We can stop the iterative process when delta < 0.001 Å.

**Now,** differentiating again,

d²y/dx² = 2a₂ + 6a₃x + 12a₄x².

**So,** using these values, the minimum point can be found for each curve, the distance of these from the zero position giving the shift directly. See topic 3 for the program MINIPT used to implement this operation. The figure below is included mainly as a reminder to myself!
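A minimal rendering of the Newton iteration described above (not the original MINIPT, which was written for the Atlas) might be:

```python
def minipt(coeffs, x0=0.0, tol=1e-3, max_iter=50):
    """Newton's method for the minimum of y = a0 + a1*x + ... + a4*x^4,
    following the scheme in the text: iterate root -= g(root)/g'(root),
    where g = dy/dx, stopping when the correction delta falls below tol
    (0.001 Angstrom in the original program)."""
    a0, a1, a2, a3, a4 = coeffs
    g  = lambda x: a1 + 2*a2*x + 3*a3*x**2 + 4*a4*x**3   # dy/dx
    gp = lambda x: 2*a2 + 6*a3*x + 12*a4*x**2            # d2y/dx2
    root = x0
    for _ in range(max_iter):
        delta = g(root) / gp(root)  # tangent crossing of g(x)
        root -= delta
        if abs(delta) < tol:
            return root
    raise RuntimeError("Newton iteration did not converge")
```

Starting from x₀ = 0, the iteration converges in a handful of steps for any well-behaved profile, since the wanted turning point lies close to the zero position.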