PHYSICAL REVIEW D
Vol. 65 (2002), 042003 (18 pp.)
Data analysis of gravitational-wave signals
from spinning neutron stars
Kazimierz M. Borkowski
Center for Astronomy, Nicolaus Copernicus University, Gagarina 11, 87-100 Toruń, Poland
Institute of Theoretical Physics, University of Białystok, Lipowa 41, 15-424 Białystok, Poland
Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, 00-950 Warsaw, Poland
(Received 29 December 2000, published 25 January 2002)
We develop a set of data analysis tools for a realistic all-sky search for continuous gravitational-wave signals and we test our tools against simulated data. The aim of the paper is to prepare for the analysis of real data from the EXPLORER bar detector; however, the methods that we present apply both to data from the resonant bar detectors that are currently in operation and to data from the laser interferometric detectors that are in the final stages of construction and commissioning. With our techniques we shall be able to perform an all-sky coherent search of 2 days of data from the EXPLORER detector over a frequency bandwidth of 0.76 Hz in one month with 250 Mflops of computing power. This search will detect all the continuous gravitational-wave signals with dimensionless amplitude larger than 2.8×10^-23 with 99% confidence, assuming that the noise in the detector is Gaussian.
PACS number(s): 95.55.Ym, 04.80.Nn, 95.75.Pq, 97.60.Gb
Table of contents
I. Introduction and summary
II. Detector's noise-free response
III. Optimal data analysis method
IV. The frequency domain database
V. Position and velocity of the detector with respect to the solar system barycenter
   A. Dealing with time scales
   B. Topocentric coordinates
   C. Barycentric coordinates of the Earth (JPL ephemeris)
VI. A linear filter
VII. Spacing of filters, the search algorithm, and computational requirements
   A. Grid of templates
   B. Monte Carlo simulations
   C. Estimation of parameters of the signal
   D. Amplitude modulation
VIII. A narrowband all-sky search of EXPLORER data
Appendix A: Beam-pattern functions
   1. Interferometric beam-pattern functions
   2. Bar beam-pattern functions
Appendix B: The time averages <a²> and <b²>
   1. Laser interferometric detector
   2. Resonant bar detector
Appendix C: The Top2Bary procedure
I. INTRODUCTION AND SUMMARY
It was shown that an all-sky, full frequency bandwidth coherent search of data of many months duration for continuous gravitational-wave signals is computationally prohibitive with presently available computing power [1,2]. Consequently two alternative approaches emerge. One, that we shall call tracking, involves tracking of lines in the time-frequency plane built from the Fourier transforms of one day long stretches of data [3,4]. The other, that we shall call stacking, involves dividing the data into day-long stretches, searching each stretch for signals, and enhancing the detectability by incoherently summing the Fourier transforms of the data stretches. In this paper we develop data analysis tools for the first stage of the stacking approach, namely for the coherent search of a few days long stretches of data. These tools involve storage of the data in a Fourier domain database, calculation of the precise position of the detector with respect to the solar system barycenter using the JPL ephemeris, calculation of the detection thresholds, approximation of the signal by a simple linear model, and construction of a grid of templates in the parameter space ensuring that the loss of signals due to the finite spacing of the grid is minimized.
In this theoretical work we have in mind a particular set of data, namely the data collected by the EXPLORER bar detector. This detector is most sensitive over two narrow bands, each about 1 Hz wide, at frequencies around 1 kHz. To make the search computationally feasible we propose an all-sky search of a few days long stretches of data in the narrow band where the detector has the best sensitivity. We find that with 250 Mflops of computing power we can carry out a complete all-sky search in around a month for signals within a bandwidth of 0.76 Hz at a frequency of 922 Hz and for an observation time of 2 days. We assume that the minimum characteristic time for the change of the signal's frequency is 1000 years. With our search we shall be able to detect all the continuous gravitational-wave signals with the parameters given above and with dimensionless amplitude larger than 2.8×10^-23 with 99% confidence.
The tools developed in this work can be applied not only to the analysis of the bar data but also to the interferometer data. In the case of the wide-band interferometer one can divide the data into many narrow bands and analyze each band using the algorithms presented in this paper.
The plan of the paper is as follows. In Sec. II we present responses of a laser interferometer and a resonant bar to a continuous gravitational wave, including the effects of both amplitude and frequency modulation of the response. The optimal data processing method is described in Sec. III. In Sec. IV we introduce the frequency domain database and in Sec. V we discuss how to calculate the precise position of the detector with respect to the solar system barycenter. In Sec. VI we introduce an approximate model of the detector's response called the linear model and we describe its performance. In Sec. VII we construct a grid of templates in the parameter space, we calculate the computational requirements to do the search, and we perform Monte Carlo simulations to test the performance of our grid and our search algorithm. In Sec. VIII we discuss and choose parameters for our planned all-sky search of the EXPLORER detector data. Several details are presented in the Appendices.
Many methods presented in this paper rely on general analytic tools developed in a previous paper of this series that we shall call Paper III.
II. DETECTOR'S NOISE-FREE RESPONSE
The dimensionless noise-free response function h of a gravitational-wave detector to a weak plane gravitational wave in the long wavelength approximation [i.e., when the size of the detector is much smaller than the reduced wavelength λ/(2π) of the wave] can be written as a linear combination of the wave polarization functions h+ and h×:

h(t) = F+(t) h+(t) + F×(t) h×(t),     (2.1)

where F+ and F× are the detector's beam-pattern functions.
It is convenient to decompose the response h of the gravitational-wave detector (both resonant bar and laser interferometer) described by Eqs. (2.1), (2.2), and (2.4) [cf. also Eqs. (A1) and (A10) in Appendix A] into a linear combination of four components:

h(t) = A1 h1(t) + A2 h2(t) + A3 h3(t) + A4 h4(t).     (2.5)

The functions hi do not depend on the mechanism generating the gravitational wave and they have the form

h1(t) = a(t) cos Ψ(t),   h2(t) = b(t) cos Ψ(t),
h3(t) = a(t) sin Ψ(t),   h4(t) = b(t) sin Ψ(t),

where Ψ is the phase of the signal and a, b are the amplitude modulation functions.
The amplitude modulation, determined by the functions a and b, splits the power spectrum of the signal into five lines corresponding to angular frequencies ω0 – 2Ωr, ω0 – Ωr, ω0, ω0 + Ωr, and ω0 + 2Ωr, where Ωr is the rotational angular velocity of the Earth. Moreover the maximum of the signal's power can be in any of these components depending on the position of the source in the sky. Therefore we need to take the amplitude modulation into account in the design of a matched filter since a filter of constant amplitude and matched only to the phase would mean a drastic decrease of the signal-to-noise ratio.
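The five-line splitting can be seen directly in a toy numerical spectrum. The sketch below (with arbitrary illustrative frequencies, not detector values) amplitude-modulates a carrier at f0 with harmonics of a "rotation" frequency fr and locates the resulting spectral lines:

```python
import numpy as np

# Toy illustration: an amplitude-modulated carrier at f0, with modulation
# at fr and 2*fr, splits into five spectral lines at
# f0 - 2fr, f0 - fr, f0, f0 + fr, f0 + 2fr.
fs, T = 512, 1.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f0, fr = 50.0, 3.0                    # carrier and "rotation" frequencies
a = 1.0 + 0.5 * np.cos(2 * np.pi * fr * t) + 0.25 * np.cos(2 * np.pi * 2 * fr * t)
h = a * np.cos(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(h))
lines = np.flatnonzero(spec > 20)     # bins are exactly 1 Hz wide here
print(lines)                          # -> [44 47 50 53 56]
```

Because the product of cosines turns each modulation harmonic into a pair of sidebands, the power appears at exactly the five frequencies listed above, and the line carrying the most power depends on the relative sizes of the modulation coefficients.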
III. OPTIMAL DATA ANALYSIS METHOD
The signal given by Eq. (2.5) will be buried in the noise of a detector. Thus we are faced with the problem of detecting the signal and estimating its parameters. A standard method is the method of maximum likelihood (ML) detection that consists of maximizing the likelihood function, which we shall denote by Λ, with respect to the parameters of the signal. If the maximum of Λ exceeds a certain threshold calculated from the false alarm probability that we can afford, we say that the signal is detected. The values of the parameters that maximize Λ are said to be the maximum likelihood estimators of the parameters of the signal. The magnitude of the maximum of Λ determines the probability of detection of the signal.
In the current paper we consider the gravitational-wave signal consisting of one narrow-band component. The ML detection of a more general signal consisting of N narrow-band components was studied in detail in Sec. III of Paper III. We only need here to recall the main results derived in Paper III and specialize them to the case of a one-component signal. A related approach to signal detection has also been developed by Owen, Mohanty and Dhurandhar, and Mohanty.
We assume that the noise in the detector is an additive, stationary, Gaussian, and zero-mean continuous random process. We also assume that over the frequency bandwidth of the signal the (one-sided) noise spectral density Sh(f) is nearly constant and equal to Sh(f0), where f0 is the frequency of the signal measured at the SSB at t = 0. Moreover, to simplify analytic formulas, we restrict ourselves to an observation time To being an integer multiple of one sidereal day, i.e., To = n(2π/Ωr) for some positive integer n.
Then the analytic formulas for the ML estimators of the amplitudes Ai, cf.
Eq. (2.5), are given by (for derivation, cf. Sec. III A of Paper III)
The reduced log likelihood function F can be written as
When the signal is absent and the phase parameters are known, 2F has a χ² distribution with four degrees of freedom, and when the signal is present it has a noncentral χ² distribution with four degrees of freedom and noncentrality parameter equal to the optimal signal-to-noise ratio d. When the phase parameters are unknown we can approximate F by a homogeneous χ² random field and divide the parameter space into elementary cells defined by the correlation function of F (consult Sec. III B of Paper III for details). The false alarm probability, i.e., the probability PTF that F exceeds the threshold Fo in one or more cells, is given by

PTF(Fo) = 1 - [1 - PF(Fo)]^Nc,

where Nc is the number of elementary cells and PF(Fo) = e^(-Fo)(1 + Fo) is the probability that F exceeds Fo in a single cell.
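The single-cell statistics of 2F can be checked numerically. The sketch below uses the standard closed form of the χ² survival function with four degrees of freedom; the threshold Fo and the number of cells Nc are illustrative values, not the paper's:

```python
import math
from scipy.stats import chi2

# For four degrees of freedom, the probability that a chi-squared variable
# exceeds 2*Fo has the closed form exp(-Fo)*(1 + Fo); the false alarm
# probability over Nc independent cells follows from the per-cell value.
Fo = 20.0
p_cell = chi2.sf(2 * Fo, df=4)          # survival function of chi^2, 4 dof
assert abs(p_cell - math.exp(-Fo) * (1 + Fo)) < 1e-12

Nc = 1.0e9                               # illustrative number of cells
p_false_alarm = 1.0 - (1.0 - p_cell) ** Nc
print(p_cell, p_false_alarm)
```

In practice one inverts this relation: given an affordable total false alarm probability and the number of cells, one solves for the threshold Fo.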
It will very often be the case that the filter we use to extract the signal from
the noise is not optimal. This may be the case when we do not know the exact
form of the signal (this is almost always the case in practice) or we choose a
suboptimal filter to reduce the computational cost and simplify the analysis.
For such a case in Sec. VI of Paper III we have developed a theory of
suboptimal filtering. The suboptimal statistics Fsub has the same form as
the optimal statistics F and the data analysis procedure consists of
maximizing the value of the suboptimal statistics. One can show that
The formulas for the false alarm and detection probabilities have the same form as in the case of the optimal filter, except that the optimal signal-to-noise ratio is replaced by the suboptimal one. Also, for the suboptimal filter the number of cells Nsc may differ from that for the optimal one. Thus the false alarm probability is given by

PTF(Fo) = 1 - [1 - PF(Fo)]^Nsc,

where PF(Fo) = e^(-Fo)(1 + Fo).
IV. THE FREQUENCY DOMAIN DATABASE
The data collected by the detector constitute a time series with a sampling interval Δt. This time domain sequence can also be stored in the frequency domain without any loss of information. Organizing the data in the frequency domain in a suitable way provides a very flexible database that is very useful both for searching the data for gravitational-wave signals and for characterizing the noise of the detector. The first step in constructing the frequency domain database consists simply of taking a fast Fourier transform (FFT) of 2N data points and storing the first N points of the FFT. To improve the quality of the time-domain data that will eventually be extracted from the database, the original data are windowed and overlapped. Each FFT is then normalized and calibrated, where the calibration process takes into account the transfer function of the detector so that the units of the FFT are strain/√Hz. Moreover, the quality of the data represented by each FFT is characterized so that it is possible to choose thresholds for vetoing the data or criteria to weight them. Both the calibration and the data characterization information are stored in the header that is attached to each FFT.
It is very important to choose a suitable number N of data points in each FFT. This depends on the type of signal search that one wants to perform. In the case of the search for continuous sources, which is the subject of this work, we choose N in such a way that the maximum expected change of the Doppler shift, due to the motion of the detector with respect to the SSB, over the duration of a single FFT is less than the width of one frequency bin. In the case of the EXPLORER data this leads to N = 2^16. For a sampling time Δt = 0.018176 s this means a length of data for each FFT of around 2/3 of an hour. Another important criterion for the choice of the length of the data interval for each FFT is that within the interval the noise is stationary. In the case of the EXPLORER detector stationarity is in general preserved over 2/3 of an hour intervals.
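An order-of-magnitude check of this criterion can be sketched as follows, using approximate physical constants; the acceleration values and the resulting margin are rough estimates, not the paper's exact numbers:

```python
import math

# During a single FFT stretch the signal frequency must not drift by more
# than one frequency bin.  Rough numbers: the drift rate is set by the
# detector's acceleration (Earth spin plus orbital motion) projected
# through the Doppler factor f0/c.
dt = 0.018176            # sampling interval (s)
n_fft = 2 ** 17          # 2N data points per FFT
T_fft = n_fft * dt       # FFT stretch length: about 2382 s, i.e. ~2/3 hour
bin_width = 1.0 / T_fft  # Fourier bin width (Hz)

f0 = 922.0               # signal frequency (Hz)
c = 2.998e8              # speed of light (m/s)
a_rot = (7.292e-5) ** 2 * 6.378e6    # centripetal acceleration, Earth spin
a_orb = 2.0e-7 * 2.978e4             # centripetal acceleration, orbital motion
drift = f0 * (a_rot + a_orb) / c * T_fft   # worst-case drift over one FFT

print(T_fft / 3600, bin_width, drift)
assert drift < bin_width
```

With these numbers the worst-case drift over one stretch stays below the bin width, so within a single FFT the signal is effectively monochromatic.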
The detailed steps to create the frequency domain database and to extract narrowband time series from the base are given in Ref. .
V. POSITION AND VELOCITY OF THE DETECTOR WITH RESPECT TO THE SOLAR SYSTEM BARYCENTER
A. Dealing with time scales
The data acquisition system of the detector is synchronized to the Coordinated
Universal Time (UTC) as disseminated by international time services. Thus it is
assumed that each datum corresponds to a given UTC. The UTC scale is
essentially uniform, except for occasional 1 s steps introduced internationally
to compensate for the variable Earth rotation. When these steps are taken into
account, the UTC is reduced to the International Atomic Time (TAI) scale, which
is uniform and differs from the Terrestrial Dynamical Time (TDT or TT, normally
used to describe celestial phenomena by astronomers) only by a constant term.
In principle, the time argument of positions of celestial objects is the Barycentric Dynamical Time (TDB); however, it differs from the TDT only by a very small additive term (less than 0.001 s in magnitude), thus for all practical purposes they do not have to be distinguished. So we have

TDB ≈ TT = TAI + 32.184 s = UTC + ΔAT + 32.184 s,

where ΔAT is the accumulated number of leap seconds.
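A minimal sketch of this chain of time scales, assuming the leap-second count is supplied externally (the value 32 used below is only an example, valid during 1999-2005):

```python
# Sketch of the time-scale chain: UTC -> TAI (add leap seconds) -> TT
# (add the constant offset).  The leap-second count must come from an
# up-to-date table such as the IERS bulletins.
TT_MINUS_TAI = 32.184      # constant offset TT - TAI, in seconds

def utc_to_tt(utc_seconds: float, leap_seconds: int) -> float:
    """Convert a UTC epoch (in seconds) to TT: TT = UTC + leaps + 32.184 s."""
    tai = utc_seconds + leap_seconds     # remove the accumulated 1 s steps
    return tai + TT_MINUS_TAI            # TAI and TT differ by a constant

# Example: with 32 leap seconds, TT leads UTC by 64.184 s.
print(utc_to_tt(0.0, 32))
```

For barycentric work one would further add the sub-millisecond TDB-TT term, which, as noted above, can be neglected for the present purposes.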
However, to relate the position of a point on the Earth to the barycenter of the Earth (the latter being obtained from the solar system ephemeris) one has to use yet another time scale — the rotational time scale UT1, which is nonuniform and is determined from astronomical observations. The difference UT1 – UTC, which is maintained within ±0.90 s, is taken from the IERS tabulations of daily values.
B. Topocentric coordinates
To be able to relate a point on the Earth's surface to the solar system barycenter it is necessary to know the orientation of the Earth in space. The primary effects that should be taken into account are: diurnal (variable) rotation, precession and nutation of the Earth's rotational axis, and polar motion. The precession and nutation can be accounted for by applying standard astronomical theories. The remaining two effects cannot be predicted far in advance, so observational data must be used.
The rotational angle of the Earth is included through conversion of UTC to UT1
(the quantity UT1 – UTC is tabulated in the IERS
files). UT1 serves to calculate the apparent sidereal time
θ, which is essentially equal to the mean sidereal time given by Eqs. (A2) and (A11) plus the nutation in longitude projected onto the celestial equator. The sidereal time in
turn is used to find rectangular coordinates of the point (the detector) in the
IERS celestial reference frame:
The polar motion (Px and Py) and UT1 – UTC quantities are linearly interpolated between the daily IERS values.
Since these coordinates are naturally referred to the epoch of date, they are further precessed back to the standard epoch J2000. The precessed Cartesian coordinates (xE,yE,zE)2000 may now be straightforwardly added to coordinates of the Earth barycenter with respect to the solar system barycenter (which are described in the next paragraph) to obtain the desired position vector of the detector.
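The geometry of this step can be sketched as follows, under strong simplifications (spherical Earth, no polar motion, nutation, or precession; all names are illustrative):

```python
import math

# Rotate the detector's Earth-fixed position by the local sidereal angle
# to get equatorial Cartesian coordinates of date.
def detector_xyz(r: float, lat: float, lon: float, gst: float):
    """r: geocentric distance (m), lat: geocentric latitude (rad),
    lon: east longitude (rad), gst: Greenwich sidereal angle (rad)."""
    theta = gst + lon                       # local sidereal angle
    x = r * math.cos(lat) * math.cos(theta)
    y = r * math.cos(lat) * math.sin(theta)
    z = r * math.sin(lat)
    return x, y, z

# Sanity check: the vector length is independent of the rotation angle.
x, y, z = detector_xyz(6.371e6, math.radians(45.0), math.radians(18.6), 1.23)
assert abs(math.hypot(x, math.hypot(y, z)) - 6.371e6) < 1e-3
```

The full procedure described in the text additionally applies the polar-motion rotation and precesses the result back to J2000 before adding the barycentric position of the Earth.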
C. Barycentric coordinates of the Earth (JPL ephemeris)
For computing the coordinates of the Earth barycenter relative to the SSB, use is made of the latest JPL Planetary and Lunar Ephemerides, "DE405/LE405" or just "DE405", created in 1997 and described in detail by Standish. It represents an improvement over its predecessor, DE403. DE405 is based upon the International Celestial Reference Frame (ICRF); an earlier popular ephemeris, DE200, which has been the basis of the Astronomical Almanac since 1984, is within 0.01 arcseconds of the frame of the ICRF. It consists of a set of Chebyshev polynomials fit with full precision to a numerical integration over 1600 AD to 2200 AD. Over this interval the interpolating accuracy is not worse than 25 meters for any planet and not worse than 1 meter for the Moon. The JPL package allows a professional user to obtain the rectangular coordinates of the Sun, Moon, and nine major planets anywhere between JED 2305424.50 (1599 DEC 09) and JED 2525008.50 (2201 FEB 20).
In the application described in this paper we have used only a 21-year (1990 to 2010) subset of the original ephemeris. The ephemeris gives separately the position of the Earth-Moon barycenter and the Moon's position relative to this barycenter. The Earth position vector is thus calculated as a fraction (involving the masses of the two bodies) of the Moon's position vector, taken in the direction opposite to the latter.
Finally, the displacement of the Sun towards its apex (with a speed of 20 km/s) between J2000 and the epoch of observation is added to the Earth's barycentric position. The direction of the solar apex is assumed to be at 18h in right ascension and 30° in declination at the epoch J1900; this direction is precessed to J2000. What is commonly known as the solar apex refers rather to the solar system barycenter apex. However, since the apex motion is known only approximately, there is really no need to distinguish between the motions of the Sun and of the barycenter.
Although the primary concern in this project is to convert positions, the velocity of the gravitational-wave detector relative to the SSB may prove to be useful in future analyses of spectra obtained from the acquired data.
The velocity of the detector is the sum of diurnal rotational motion of the
Earth, Earth motion in space relative to the SSB, and motion of the solar system
itself towards the apex. The last of the named components, towards the apex,
has been described in the previous paragraph. The Earth barycentric velocity
vector is obtained directly from the JPL ephemeris along with the position
vector. Finally, the detector motion relative to the Earth barycenter is represented by a vector of constant length vo := Ωr r cos φ (where r is the geocentric distance of the detector and φ its geocentric latitude), directed always towards the east in the topocentric reference frame. Thus this diurnal velocity vector has the following Cartesian components (in the barycentric reference frame):
A short description of the FORTRAN code that generates both the position and the velocity of a detector with respect to the SSB is given in Appendix C.
VI. A LINEAR FILTER
The phase of the gravitational-wave signal contains terms arising from the
motion of the detector with respect to the SSB. These terms consist of two
contributions, one which comes from the motion of the Earth barycenter with
respect to the SSB, and the other which is due to the diurnal motion of the
detector with respect to the Earth barycenter. The first contribution has a
period of one year and thus for shorter observation times can be well
approximated by a few terms of the Taylor expansion. The second term has a
period of 1 sidereal day and to a very good accuracy can be approximated by a
circular motion. We thus propose the following approximate simple model of the phase of the gravitational-wave signal:

Ψs(t) = p0 t + p1 t^2 + ... + ps t^(s+1) + A cos(Ωr t) + B sin(Ωr t),     (6.1)

where p0 is the frequency parameter, pk are the spin-down parameters, and the amplitudes A and B describe the diurnal modulation of the phase.
The phase model (6.1) has the property that it is a linear function of the parameters. We have shown in Paper III that for linear phase models the optimal statistics is a homogeneous random field and consequently the statistical theory of signal detection described in Sec. III of the present paper applies to this case. The polynomial in time part of the phase Ψs [cf. Eq. (6.1)] contains two contributions. The first one comes from the intrinsic frequency drift of the gravitational waves emitted by a source. For example, if the source is a spinning neutron star, the frequency of the gravitational waves it emits can evolve as the frequency of the revolution of the star. In general the star will lose its energy and will spin down. This evolution of the frequency can be approximated by a Taylor series. The second contribution comes from the Taylor expansion of the motion of the Earth around the Sun. It is clear that the longer the observation time of the signal the more terms of the Taylor expansion we need to include in order to accurately approximate the true signal.
In order to verify the accuracy of the linear model
(6.1) we have calculated the fitting factor FF between
the linear model and the accurate model of the signal. As the
accurate model of the phase of the signal we have taken the following model:
For the case when the signal is narrow-band around some frequency
ω0 the formula (3.15)
for the fitting factor can be approximated by
We have used the above prescription to calculate the fitting factor numerically. Once we have found the parameters of the extremum (6.7) by solving Eqs. (6.8), we have used these values as input to an optimization routine (based on the Nelder-Mead simplex algorithm) to find an accurate value of the fitting factor directly from the formula (6.4), without the Taylor expansion. The Nelder-Mead method is an algorithm for finding an extremum of a function of n variables [13,14]. It involves calculation of the function to be maximized at the vertices of certain simplexes in the parameter space, but not of the function's derivatives.
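The refinement step can be sketched with a derivative-free Nelder-Mead search; the quadratic test function below merely stands in for the actual fitting-factor formula (6.4), and all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Two-step refinement: start a derivative-free Nelder-Mead search from an
# approximate extremum (here, from a guessed starting point).
def neg_fitting_factor(p):
    # toy surrogate with maximum 1.0 at p = (0.3, -1.2)
    return -(1.0 - (p[0] - 0.3) ** 2 - (p[1] + 1.2) ** 2)

p_approx = np.array([0.25, -1.0])          # e.g. from the Taylor-expansion step
res = minimize(neg_fitting_factor, p_approx, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
assert res.success
assert np.allclose(res.x, [0.3, -1.2], atol=1e-4)
print(-res.fun)      # best surrogate value found, close to 1.0
```

Because the simplex method needs only function values, it is well suited to objectives like the fitting factor, whose derivatives are awkward to obtain analytically.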
Our results are summarized in Tables I, II, and III. For each grid of positions of the source in the sky we have found the minimum value FFmin of the fitting factor (the worst case). In the tables we have given the maximum length of the observation time for which FFmin is greater than 0.9, 0.9^(1/3), and 0.999. The conservative value of the fitting factor equal to 0.9^(1/3) ~ 0.967 comes from the arguments of Apostolatos, by which such a fitting factor leads to an affordable 10% loss of events. The ultraconservative value of the fitting factor equal to 0.999 gives a negligible loss of 0.3% of events.
From the results presented in Tables I, II, and III it follows that the monochromatic linear model is adequate for a few hours of the observation time, 1-spindown model for a few days of the observation time, and 2-spindown model for around 1 week of the observation time. For several months of the observation time we do not expect to fit an adequate linear model however we know that for such long observation times a coherent all-sky search is computationally prohibitive [1,2]. Thus we conclude that in realistic (i.e. computationally feasible) coherent searches of not more than 1 week duration a satisfactory linear model can be chosen.
VII. SPACING OF FILTERS, THE SEARCH ALGORITHM, AND COMPUTATIONAL REQUIREMENTS
A. Grid of templates
In this section we shall present a construction of a grid in the parameter space on which the statistics F will be calculated in order to search for signals. As the filter (or template) used to calculate the statistics we shall use an approximation of the signal by a suitable linear model studied in Sec. VI. We would like to choose the grid in such a way that the additional loss of potential gravitational-wave signals due to the finite spacing of the grid is negligible. In order to determine an appropriate grid of templates we shall use the correlation function of the statistics F. We shall choose the grid in such a way that the correlation function for two filters evaluated at two neighboring points of the grid is not less than a certain specified value.
This method is conceptually different from the approach that uses the Fisher matrix as a metric on the parameter space. However, when the Fisher matrix is constant, i.e., independent of the values of the parameters, which is the case for the linear model, the two approaches coincide. The metric approach is thus more general than ours, as it does not require constancy of the parameter-space metric.
The correlation function C of the optimum statistics F for signal
(7.1) can be approximated using Taylor expansion around
τ = 0. Keeping terms at most quadratic in
τi one gets
The statistics F can be expressed as a Fourier transform where the parameter p0 corresponds to angular frequency, and consequently F can be calculated using the FFT algorithm. However, the FFT gives the values of the statistics on a certain grid of frequencies called Fourier frequencies. These frequencies in normalized units are separated by 2π. Thus when the true frequency falls between two Fourier frequencies we cannot achieve the theoretical maximum of the optimal statistics F. The worst case is when the frequency falls halfway between the Fourier frequencies. One can easily find that in such a case the signal-to-noise ratio calculated approximately by means of the Taylor expansion of the correlation function is equal to only 0.42 of the optimal signal-to-noise ratio. This would lead to a drastic loss of signals. Therefore we need a finer grid. A way to achieve this and still take advantage of the speed of the FFT is to pad the time series with zeros. Padding with zeros of the length of the original time series gives a grid that is twice as fine as the Fourier grid, i.e., the difference between the Fourier frequencies is equal to π. Then in the worst case the signal-to-noise ratio is 0.89 of the optimal one. As we shall see later from Monte Carlo simulations, this choice of spacing of the frequencies leads to a search algorithm where there is no additional loss of events due to bad grid spacing, and the rms errors of the estimators of parameters are close to the theoretical minimum. It is also useful to note that the signal-to-noise ratio calculated from the exact correlation function formula yields fractions of the optimal signal-to-noise ratio equal to 0.63 and 0.90 for the difference between the Fourier frequencies equal to 2π and π, respectively. This indicates the limits of the validity of the Taylor expansion.
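The quoted worst-case fractions can be reproduced numerically. The sketch below measures the peak |FFT| of an off-bin complex sinusoid with and without zero-padding; peak fractions here play the role of fractions of the optimal signal-to-noise ratio:

```python
import numpy as np

# Worst-case loss for a sinusoid whose frequency falls between Fourier
# bins, analyzed with and without zero-padding.
N = 4096
n = np.arange(N)

def peak_fraction(delta, pad):
    """Signal offset by `delta` bins; `pad` multiplies the FFT length."""
    x = np.exp(2j * np.pi * (N // 4 + delta) * n / N)   # off-bin sinusoid
    X = np.fft.fft(x, pad * N)                          # zero-padded FFT
    return np.abs(X).max() / N                          # 1.0 when on-bin

worst_plain = peak_fraction(0.5, pad=1)    # halfway between Fourier bins
worst_padded = peak_fraction(0.25, pad=2)  # halfway between the finer bins
print(worst_plain, worst_padded)           # about 0.64 and 0.90

assert abs(worst_plain - 2 / np.pi) < 1e-3       # sinc(1/2) = 2/pi = 0.637
assert abs(worst_padded - 0.9003) < 1e-3         # sinc(1/4) = 0.9003
```

The exact fractions are given by the Dirichlet-kernel (sinc-like) envelope, which matches the exact-correlation values 0.63 and 0.90 quoted above.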
We introduce a convenient normalization of the spin-down parameters
pk of the linear model (6.1), namely
It is clear that we cannot fill the parameter space completely with hyperellipsoids. Such a filling can, however, be achieved by means of prisms constructed with the aid of the correlation hyperellipsoid. We shall illustrate our construction in the special case of the 1-spindown (i.e., s = 1) linear phase model (7.5).
In the first step we choose the spacing in the frequency parameter p‾0 to be equal to π. Then we compute the value C0 of the correlation at the surface of the 4-dimensional correlation hyperellipsoid (7.6) by requiring that the p‾0 axis intersects this surface at the points τ = (±Δp‾0/2, 0, 0, 0). Substituting these points into Eq. (7.6), one gets
We first construct the elementary cell of the grid in the filter space
(which is the space spanned by the parameters p‾1, A, and B).
We consider the p‾0 = 0 cross section
of the 4-dimensional hyperellipsoid (7.9). This cross
section defines the 3-dimensional ellipsoid in the (p‾1,A,B) space:
We construct the elementary cell in the 4-dimensional space (p‾0,p‾1,A,B) as follows. The 3-dimensional prism of maximal volume, described above, which lies in the p‾0 = 0 subspace, is translated in parallel in four dimensions, in two opposite directions, into the subspaces p‾0 = –π/2 and p‾0 = +π/2. As the direction of translation we choose one of the principal directions of the hyperellipsoid (7.9). The elementary cell so constructed is a 4-dimensional prism whose bases are two adjacent 3-dimensional hexagonal prisms lying in the p‾0 = –π/2 and p‾0 = +π/2 subspaces.
It is clear from the construction that our elementary cell will stick out of the 4-dimensional hyperellipsoid (7.9). For the case of n = 2 we have calculated the correlation function
If as a base of the elementary cell in the filter space we choose a square instead of a regular hexagon the volume of the cell (independently of the value of n) decreases by a factor of 3√3/4 ~ 1.3.
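The quoted factor follows from elementary geometry: for a circle of radius r, the inscribed regular hexagon has area (3√3/2)r² while the inscribed square has area 2r². A one-line numerical check:

```python
import math

# Replacing the regular hexagon inscribed in the correlation circle by an
# inscribed square shrinks the base area by 3*sqrt(3)/4, independently of r.
r = 1.0
area_hexagon = 3 * math.sqrt(3) / 2 * r ** 2   # regular hexagon in circle r
area_square = 2 * r ** 2                       # square inscribed in circle r
ratio = area_hexagon / area_square
print(ratio)    # about 1.299
assert abs(ratio - 3 * math.sqrt(3) / 4) < 1e-12
```

Since the prisms differ only in their base cross-section, the same factor applies to the cell volumes for any n.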
B. Monte Carlo simulations
In order to test the effectiveness of the chosen grid we have made Monte Carlo
simulations of the detection of our signal in the noise and estimation of its
parameters. In the simulations we have taken the signal s to be 1-spindown
linear model with a constant amplitude h0:
The procedure described above differs from another two-step hierarchical algorithm proposed by Mohanty and Dhurandhar [8,9]. The first step of the two procedures is the same, but in the second step Mohanty and Dhurandhar propose to use a fine grid in the parameter space around the maximum obtained from the coarse search, whereas we propose to use a maximization routine. However, like any optimization routine, the Nelder-Mead algorithm may fail to find the global maximum. Therefore in the real data search we may have to use a combination of both hierarchical procedures: whenever our maximization routine fails, we use as the second step the fine search proposed by Mohanty and Dhurandhar.
We have performed Monte Carlo simulations by generating a signal s(t) given by Eq. (7.15) and adding white Gaussian noise n(t). By adjusting the amplitude h0 we have generated a signal with a chosen optimal signal-to-noise ratio d. In our simulations we have chosen the observation time to be n = 2. We have done 1000 runs for several values of d. We have compared the standard deviations of the parameter estimators obtained from the simulations with the theoretical ones calculated from the Cramér-Rao bound. We have also compared the probability of detection of the signal obtained from the simulations with the theoretical one calculated from Eq. (3.8). In our simulations we have used the hexagonal grid constructed in Sec. VII A.
The results of our computations are depicted in Fig. 1. We have observed that above a certain signal-to-noise ratio (that we shall call the threshold signal-to-noise ratio) the results of the Monte Carlo simulations agree very well with the calculations of the rms errors from the covariance matrix. However below the threshold signal-to-noise ratio they differ by a large factor. This threshold effect is well-known in signal processing  and has also been observed in numerical simulations for the case of a coalescing binary chirp signal [17,18]. As was explained in Sec. VII of Paper III this effect arises because sometimes the global maximum of the functional Fs occurs as a result of noise and not the signal. This happens the more often the lower the signal-to-noise ratio. Following the theory of this effect developed in Paper III we have calculated the approximate rms errors of the estimators of the parameters. They are shown as thick lines in Fig. 1.
The comparison of the probability of detection obtained from the simulations with the theoretical formula (3.8) shows that for lower signal-to-noise ratios we have more detections than expected. This is because the theoretical formula assumes that when the signal is detected its parameters are located in the cell corresponding to the true parameters of the signal. However, in practice for lower values of d, as a result of the noise, the signal may drift to neighboring cells. Thus for a threshold signal-to-noise ratio of 7 we find that the simulated probability of detection is 5% greater than the theoretical one, whereas for signal-to-noise ratios of 8 and greater the simulated and the theoretical probabilities of detection agree to within 0.5%.
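The theoretical detection probability can be cross-checked by direct sampling. The sketch below draws 2F values from a noncentral χ² with four degrees of freedom; the threshold and the noncentrality parameter are illustrative, not the paper's values:

```python
import numpy as np
from scipy.stats import ncx2

# With a signal present, 2F is noncentral chi-squared with 4 degrees of
# freedom; `lam` denotes the noncentrality parameter (set by the optimal
# signal-to-noise ratio).
rng = np.random.default_rng(0)
lam, Fo, n_trials = 64.0, 20.0, 200_000

samples = rng.noncentral_chisquare(df=4, nonc=lam, size=n_trials)  # 2F values
p_empirical = np.mean(samples > 2 * Fo)
p_theory = ncx2.sf(2 * Fo, df=4, nc=lam)

print(p_empirical, p_theory)
assert abs(p_empirical - p_theory) < 5e-3
```

This only checks the single-cell statistics; the cell-drift effect described above shows up when the full grid search, rather than the distribution of 2F alone, is simulated.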
C. Estimation of parameters of the signal
Once the optimal statistics, calculated with the linear filter of Sec. VI, crosses a chosen threshold we register the estimates of the parameters p0, p1, A, and B. We would like to estimate physically interesting parameters: frequency, spin down, declination and right ascension of the source. We propose the following algorithm to achieve this. From Eqs. (7.18) and least-squares fit of the linear model to the accurate model of the signal we calculate the approximate values of parameters ω0, ω1, δ, α. Then using the accurate model of the signal as a filter we search for accurate estimates of the parameters on a small grid around the approximate estimates.
We have made Monte Carlo simulations of the above procedure by making 100 runs for one position of the source in the sky and for five signal-to-noise ratios: 8, 12, 16, 20, and 24. First of all we have found that the probability of detection of the signal obtained from the simulations agrees to within 0.5% with the theoretical one. Concerning the accuracy of the parameter estimation, we have found that the rms errors for declination and right ascension were very close to their Cramér-Rao bounds, whereas the rms errors for frequency and spin down were worse. The mean biases in the estimators of the parameters ω0, ω1, δ, and α were less than 0.1%, 0.5%, 0.005%, and 0.01%, respectively.
D. Amplitude modulation
The approximation of the continuous gravitational-wave signal by a simple linear model, and consequently the approximation of the optimal statistics F, Eq. (3.3), by a homogeneous random field, were studied under the assumption that the amplitude of the signal is constant. However, as the modulation of the amplitude of the signal is very slow (it changes on a time scale of one sidereal day) in comparison with the rapid modulation of the phase, we expect that our linear model of the phase is a satisfactory approximation of the phase of the optimal filters given by Eqs. (3.4). We also expect that the calculation of the number of elementary cells (except for a factor of 2, see below) and the construction of the grid in the parameter space given in Sec. VII A above remain valid for the amplitude modulated signal.
To calculate the optimal statistics F, we first need to correct the time series for the amplitude modulation, i.e. multiply the time series by the functions a(t) and b(t). Our grid is on the space parameterized by p0, p1, A, and B, whereas the functions a(t) and b(t) depend on the declination δ and the right ascension α. From Eqs. (6.2) we have the following expressions for δ and α in terms of A and B.
We see that the declination δ of the source, Eq. (7.18a), depends on the angular frequency ω0 of the signal, which we do not know before the signal is detected. The uncertainty in δ affects the constant amplitudes in front of the time dependent modulation factors. However, for the case of the EXPLORER detector data (which we plan to analyze employing the methods developed above) the frequency f0 is around 1 kHz within a band B ~ 1 Hz (see Sec. VIII). Thus by choosing in the equation for the declination δ the frequency equal to the middle frequency of the band to be analyzed, instead of the unknown frequency ω0, we shall lose at most only a fraction ~ B/(2f0) ~ 0.0005 of the signal-to-noise ratio.
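The size of this loss follows from the round numbers quoted in the text (a trivial numerical check):

```python
# Worst-case fractional SNR loss from fixing delta with the mid-band
# frequency instead of the unknown omega_0 (numbers from the text).
f0 = 1000.0  # carrier frequency, Hz (~1 kHz)
B = 1.0      # analyzed bandwidth, Hz (~1 Hz)
loss = B / (2 * f0)
print(loss)  # 0.0005
```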
E. Computational requirements
To estimate the computational requirements of the search we calculate the number of FFTs that one needs to compute. This number depends on the total volume VF of the filter space, which is given by
The number of grid points at which the optimal statistics F, Eq. (3.3), should be calculated is obtained by dividing the volume VF of the filter space by the volume Vgr of the elementary cell of the chosen grid in the filter space. In the previous subsection we have found that for each set of parameters A and B there are two sets of parameters δ and α [see Eqs. (7.18)]. Moreover, the optimal statistics involves the calculation of two FFTs, Fa and Fb [see Eqs. (3.3) and (3.4)]. Thus the total number NFFT of FFTs is 4 times the number of grid points, and it is given by
Assuming that the data processing rate should be comparable to the data acquisition rate and that the data processing consists only of computing the FFTs, the number P of floating point operations per second (flops) can be estimated by
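The chain of estimates above can be written down schematically. The sketch below is only illustrative: the counting of 4 FFTs per grid point is taken from the text, but the N log2 N flop count per FFT is an order-of-magnitude convention, and the exact expressions are the paper's Eqs. (7.22) and (7.23).

```python
import math

def n_fft(v_filter, v_cell):
    """Total number of FFTs: the number of grid points V_F / V_gr,
    times 2 for the two (delta, alpha) sets per (A, B) pair,
    times 2 for the two FFTs F_a and F_b evaluated per grid point."""
    return 4 * v_filter / v_cell

def flops_estimate(nfft, n_points, t_obs):
    """Rough processing rate P if the search must keep pace with
    data acquisition and the cost is dominated by FFTs of n_points
    samples, counted as N log2 N operations per FFT."""
    return nfft * n_points * math.log2(n_points) / t_obs
```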
VIII. A NARROWBAND ALL-SKY SEARCH OF EXPLORER DATA
The data analysis tools for searching for continuous gravitational-wave signals that we have developed in the previous sections of the present work can be applied both to the resonant bar and to the laser interferometric detectors. We plan to apply these tools to the data from the resonant bar detector EXPLORER.9 The detector has already collected many years of data with a high duty cycle (e.g. in 1991 the duty cycle was 75%). Our primary objective is to carry out an all-sky search. It is a unique property of gravitational-wave detectors that with a single time series one can search for signals coming from all sky directions. In the case of other instruments, like optical and radio telescopes, covering the whole or even a part of the sky requires a large amount of expensive telescope time. A directed search of the galactic center has already been carried out and limits on the amplitude of the gravitational waves have been established .
The EXPLORER detector is most sensitive over two narrow bandwidths (called the minus and plus modes), each about 1 Hz wide, at two frequencies around 1 kHz. To make the search computationally feasible we propose an all-sky search of a few days of data in the narrow band where the detector has its best sensitivity. By narrowing the bandwidth of the search we can shorten the length of the data to be analyzed, as we need to sample the data at only twice the bandwidth, and thus we reduce the computational time. To reduce the parameter space to be searched we restrict ourselves to only one spin-down parameter. We would also like to use the linear filter model as the search template. Then, as our frequency is around 1 kHz, from Table II we read that we can consider up to 2 days of coherent observation time in order that the fitting factor is greater than 0.9. For the sake of the FFT algorithm it is best to keep the length of the data a power of 2. Consequently we choose the number of data points to analyze to be N = 2^18. Then for To = 2 days of observation time the bandwidth Δν of the data will be Δν = N/(2To) ~ 0.76 Hz. We also choose to analyze the data for the plus mode, which has a frequency around 922 Hz. With these parameters we can calculate the number of filters that we need to compute. From Eq. (7.22), and for the hexagonal grid over the sky, we get NFFT ~ 3.7×10^8. Assuming that the data processing rate should be comparable to the data acquisition rate, the computing power required [calculated by means of Eq. (7.23)] is around 7.7 Gflops. If we allow a month for off-line processing of the data, with the above parameters we need only around 250 Mflops of computing power.
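The bandwidth arithmetic quoted above is easy to reproduce (a sketch using only the paper's numbers):

```python
# Bandwidth of the proposed EXPLORER narrowband search.
N = 2 ** 18          # number of data points, a power of 2 for the FFT
To = 2 * 86400       # observation time: 2 days, in seconds
dnu = N / (2 * To)   # sampling at twice the bandwidth
print(round(dnu, 2)) # 0.76  (Hz)
```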
From Eq. (3.11) we can calculate the threshold that we need to set. For example, choosing the false alarm probability to be 1%, we find by inverting Eq. (3.11) and using Eq. (3.10) that for the parameters chosen above the threshold signal-to-noise ratio is 8.3. However, we know that in the coarse search we are using a suboptimal filter and are losing signal-to-noise ratio. For the parameters of our search we find that in the worst case the fitting factor is 0.94. Moreover, owing to our discrete grid in the parameter space, the signal-to-noise ratio can decrease in the worst case by an additional factor of √0.77 ~ 0.88. Thus in the worst case the signal-to-noise ratio of our coarse search can be a factor of 0.83 of the optimal one. Hence, in order not to lose any signals, we lower the signal-to-noise threshold by a factor of 0.83. In lowering the threshold we must make sure that the number of false alarms does not increase excessively. From Eq. (3.12) we find that the expected number of false alarms with the lower threshold is ~310. This is certainly a manageable number that will increase the computational time of the total search only insignificantly. We can consider lowering the threshold even further in order to have more candidate events that can later be verified using longer stretches of data. For example, by lowering the threshold by a factor of 0.8 the expected number of false alarms is around 1600, which will still be manageable to verify.
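The combined worst-case loss factor follows directly from the two numbers quoted in the text (a one-line check):

```python
import math

ff = 0.94               # worst-case fitting factor of the linear filter
grid = math.sqrt(0.77)  # worst-case loss from the discrete grid, ~0.88
total = ff * grid       # combined worst-case factor, ~0.83
```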
The minimum detectable amplitude h0, i.e. the amplitude for which the signal-to-noise ratio is equal to 1, is given by
ACKNOWLEDGMENTS
One of us (A.K.) would like to thank the ROG group at the Istituto Nazionale di Fisica Nucleare (INFN), Frascati, where part of this work was done, for hospitality, and the Consiglio Nazionale delle Ricerche (CNR) for support within the exchange agreement with the Polish Academy of Sciences. This work was supported in part by the KBN Grant No. 2 P03B 094 17 (K.M.B., P.J., and A.K.). We would also like to thank the Interdisciplinary Center for Mathematical and Computational Modeling of Warsaw University for computing time.
APPENDIX A: BEAM-PATTERN FUNCTIONS
1. Interferometric beam-pattern functions
The noise-free response function h of a laser interferometric detector is defined as the difference between the wave-induced relative length changes of the two interferometer arms. A derivation of formula (2.1) in the case of the laser interferometer can be found, e.g., in Sec. II A of Ref. .
2. Bar beam-pattern functions
We consider here the response of a bar detector to a weak plane gravitational wave in the long wavelength approximation. Moreover, we assume that the frequency spectrum of the gravitational wave which hits the bar lies entirely within the sensitivity band of the detector. Under these assumptions the dimensionless response function h of the bar detector can be computed from the formula (cf. Sec. 9.5.2 in Ref. )
Because of the diurnal motion of the Earth the beam patterns F+ and F× are periodic functions of time with a period equal to one sidereal day. We want to express F+ and F× as functions of the right ascension α and the declination δ of the gravitational-wave source and the polarization angle ψ (the angles α, δ, and ψ determine the orientation of the wave reference frame with respect to the celestial reference frame defined below). We represent the matrix M of Eq. (A6) as
Equivalent explicit formulas for the functions a and b from Eqs. (A11) can be found in Ref.  where different angles describing the position of the gravitational-wave source in the sky and the orientation of the detector on the Earth are used.10
APPENDIX B: THE TIME AVERAGES ⟨a²⟩ AND ⟨b²⟩
1. Laser interferometric detector
The time averages ⟨a²⟩ and ⟨b²⟩ entering Eqs. (3.1), for the observation time To chosen as an integer number of sidereal days, take for the laser interferometric detector the form (here n is a positive integer):
We see that ⟨a²⟩ and ⟨b²⟩ depend on only one unknown parameter of the signal, the declination δ of the gravitational-wave source. They also depend on the latitude φ of the detector's location and on the orientation γ of the detector's arms with respect to the local geographical directions.
2. Resonant bar detector
For the resonant bar detector the time averages ⟨a²⟩ and ⟨b²⟩, the observation time To being an integer number of sidereal days, have the form:
As in the case of the laser interferometric detector, the time averages ⟨a²⟩ and ⟨b²⟩ depend on the declination δ of the gravitational-wave source, the latitude φ of the detector's location, and the orientation γ of the detector's axis of symmetry with respect to the local geographical directions.
APPENDIX C: THE Top2Bary PROCEDURE
The algorithms described in Sec. V were implemented in an easily callable subroutine named Top2Bary, consisting of about 900 lines of FORTRAN code. The ephemeris files (DE405, tai-utc.dat, and the yearly eopc04.yy) required by the program must be kept in the directory \Top2Bary\EphData on the current disk. The procedure header is shown below.
 M. A. Papa, P. Astone, S. Frasca, and B. F. Schutz, in Proceedings of the 2nd Workshop on Gravitational Wave Data Analysis, edited by M. Davier and P. Hello (Éditions Frontières, 1998), pp. 241-246.
 M. A. Papa and B. F. Schutz, in Proceedings of the XXXIVth Rencontres de Moriond: Gravitational Waves and Experimental Relativity, edited by J. Trân Thanh Vân, J. Dumarchez, S. Reynaud, C. Salomon, S. Thorsett, and J.-Y. Vinet (World Publishers, Hanoi, 2000), pp. 199-205.
 J. Middleditch, W. Deich, and S. Kulkarni, in Isolated Pulsars: Proceedings of the Los Alamos Workshop, held in Taos, New Mexico, February 23-28, 1991, edited by C. Ho, R. Epstein, and K. A. Van Riper (Cambridge University Press, Cambridge, 1993).
1F is the log likelihood function Λ with amplitudes Ai replaced by their ML estimators Âi.
2For the narrow-band signals considered in our paper the scalar product (h|h') can be approximated as (h|h') ≈ [2/Sh(f0)] ∫₀^To h(t) h'(t) dt.
4They are available as eopc04.yy files, where yy stands for a two digit year number (e.g. 99 for 1999, and 00 for 2000); see hpiers.obspm.fr/iers/eop/eopc04.
5For past years, since 1962, the data necessary for reduction are included in eopc04.yy files mentioned in footnote 4.
6It is available via the Internet (anonymous ftp: navigator.jpl.nasa.gov, the directory: ephem/export) or on CD (from the publisher: Willmann-Bell, Inc.; www.willbell.com/software/jpl.htm).
7Padding the time series with zeros of the length of the original time series in a real signal search means that we would need to calculate FFTs of twice the length of the original data. This would double the computational time and the computer memory used. To avoid this, pulsar astronomers have invented special interpolation algorithms that work in the Fourier domain and give a twice finer Fourier grid from the FFT of the original data. In the analysis of the EXPLORER data we shall use one such algorithm, provided to us by Duncan Lorimer (the details of which can be found in Ref. ).
8A reasonable result is obtained only when translating along this principal direction of the hyperellipsoid (7.9), which almost coincides with the p̄0 axis.
9The EXPLORER detector is operated by the ROG collaboration located in Italian Istituto Nazionale di Fisica Nucleare (INFN); see www.roma1.infn.it/rog/explorer/explorer.html.
10The functions a and b from Eqs. (A11) are identical with the functions S+ and S× from Eqs. (2.90) and (2.91) of Ref. , provided the following identification of the variables η, α, θ, Ψ used there with our variables Ωr, φr, α, δ, φ, γ is made: η → –(α – φr – Ωrt), α → π/2 – δ, θ → π/2 – φ, Ψ → γ – π/2.