Overview: The RR Lyrae photometry we have acquired is a series of magnitude measurements taken at different times. They appear as a number of TXDUMP (*.txd) files in a certain directory for each star. You probably tried making light curves for your stars using magdiff.cl, and you may have been disappointed by the ratty appearance of the light curves (lots of scatter). Don't despair! Experience shows that about half of the stars come out this way. It's not a problem with the photometric measurements you have made, but with the period you used to "fold" the data into a light curve. The period you took from the stars.prio file is the published value, and may have been derived 30 years or more ago. RR Lyrae stars change period slightly over time, so we need to determine the periods of your stars today. The following are some guidelines specific to the data we have taken at BGSU.
1) In one window, start up IRAF and cd to the directory with TXDUMP files (*.txd) from many nights for a single star, for example:
% cl
cl> cd /data/jdoe/PHOT/ALL/UY_Boo
2) In this directory, have two windows open, one running IRAF and one in UNIX. In the UNIX window, start SuperMongo (it'll open an SM window for you, and leave your XTERM window free). You should also start an EMACS session running from the UNIX window, so you can type in any notes you make along the way.
% sm
% emacs period_notes.UY_Boo &
3) Copy some important files into this directory. Note: there is a file called vfe.UY_Boo which is the data file used to create this example. You may want to try running through the example using this data file to help you recognize what is happening, before you tackle your own data.
cl> cp /data/layden/BGTEL/PERIOD/* .
4) Choose a comparison star to use. It should be fairly bright and fairly close to the variable. You will probably want to look at the finder chart to decide this. Or, use the first comparison star listed in your magdiff file (magdiff.UY_Boo in this example). While you are here, jot down the period of the star (first line):
cl> !page magdiff.UY_Boo
5) Use the IRAF script stdlc_prep.cl to prepare a single data file from the various *.txd files. For each TXDUMP file, this script will read the date (HJD) of observation, the magnitude and error of the variable star, and the magnitude and error of the specified comparison star. It computes the magnitude difference (var-comp) and error in this quantity. It looks in the file 'stdlc_per10.runs', determines what "run" it was observed in (eg, Summer 1999, or Spring 2000) and assigns it an integer number that SuperMongo will use to determine the plot symbol (so when you plot the light curve, you can identify which run a given point belongs to). It writes the data from each TXDUMP file into one line of the resulting file, which is called 'vfe.dat'.
cl> cl < stdlc_prep.cl
Select a comparison star to use (e.g., 3): 1
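The differential magnitude and its error that stdlc_prep.cl computes follow standard error propagation for independent measurements. A minimal Python sketch (the function name diff_mag is ours, not part of the script):

```python
import math

def diff_mag(m_var, e_var, m_comp, e_comp):
    """Differential magnitude (var - comp) and its propagated error.

    The two errors add in quadrature because the measurements are independent.
    """
    dm = m_var - m_comp
    e_dm = math.sqrt(e_var ** 2 + e_comp ** 2)
    return dm, e_dm

# Example: variable at 14.230 +/- 0.012 mag, comparison at 13.850 +/- 0.008 mag
dm, e_dm = diff_mag(14.230, 0.012, 13.850, 0.008)
```

Because the comparison star is (we assume) constant, any change in dm from night to night reflects the variable star alone, which is why the choice of a bright, nearby comparison star matters.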
The next step allows you to set up the data file with either V or I mags in the third column. Subsequent programs will estimate periods and make light curves using this column only. You can try estimating the period from I data later, and see if you get the same value.
For period search, use V data or I data (1 or 0): 1
...etc...
156 TXDUMP files processed
6) Now we are ready to search for the actual period of the star. A good place to start is the estimated period from the magdiff file (eg, magdiff.UY_Boo). It's a good bet the actual period is somewhere between 5% below and 5% above this value, so we will start by searching over that range. Run the period-finding program and answer the questions:
cl> !/data/layden/BGTEL/PROGS/stdlc_per10.e
It tells you what file it has read the template light curves from:
Reading STD LCs from: stdlc.all10
And how many templates there are (Nstd) and how many data points are in each template (Npts,slc):
Nstd, Npts,slc = 10 51
It reminds you how many observing runs there have been (take a look at stdlc_per10.runs to see how they are defined):
Number of observing runs = 3
It then asks you, run by run, if you want to use the data from that run. Runs are specified as a beginning and ending time, where time=HJD-2451000. Generally, you will want to include the data from all the runs (though see tips), so answer 1 to each question:
Run(i), Time(begin), Time(end) = 1 200.0 400.0
Use data from this run? (1=yes, 0=no): 1
Run(i), Time(begin), Time(end) = 2 400.0 500.0
Use data from this run? (1=yes, 0=no): 1
Run(i), Time(begin), Time(end) = 3 550.0 650.0
Use data from this run? (1=yes, 0=no): 1
It asks you for the name of the data file containing HJDs and magnitudes of the observations. This is the file you just created using stdlc_prep.cl:
Enter name of file w/ HJD Vmag Imag (eg, vfe.dat): vfe.dat
It tells you how many magnitude values it has read from vfe.dat. This is probably less than the number of TXDUMP files read by stdlc_prep.cl, since some of those TXDUMP files will be from I observations, and some will be from V observations:
READ 126 mags from variable star data file
It now asks you for the name of the output file. You can call it whatever you want. I suggest something that reminds you of what it is, eg "per62_69" might indicate a period search over the range 0.62 to 0.69 days. Anyway, keep notes on what you did ... this program may take a while to chug (2-30 min!), so you don't want to waste time redoing stuff:
Enter name for output file: per1
Now it asks how chatty you want the program to be as it is going about its business. 2 means it will tell you gory, cryptic details about everything it is doing; 1 chats a bit less; and 0 only tells you minimal stuff. You may try 2 or 1 to see what they look like, but it slows down the program a lot, so you will probably want to use 0 most of the time:
Output verbosity in reporting indiv solns (hi/lo/no = 2/1/0): 0
Now it asks for the period range (low and high ends) over which you want to search -- compute the starting period as 5% below the period estimate from magdiff (0.95*P), and compute the ending period as 5% above the magdiff period (1.05*P). It also asks for a Fractional Increment. This sets how large an interval you want between successive trial periods (i.e., the next period, P1, is computed from the current period, P0, according to P1=(1.0+INCR)*P0). A smaller number means you try more periods over the range you have specified (i.e., you are more thorough), but it takes longer to run. So there is some vaguely defined "best value" which depends on your data set. I suggest 0.0002, though you may want to experiment with your data based on the tips at the end of this document:
Enter Starting and Ending Periods, and Fractional Increment (e.g., 0.2 1.0 0.001): 0.62 0.69 0.0002
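The trial periods therefore form a geometric sequence. A short Python sketch of how such a grid could be generated (the function name is illustrative, not part of stdlc_per10):

```python
def period_grid(p_start, p_end, incr):
    """Trial periods from p_start to p_end, each (1 + incr) times the last."""
    periods = []
    p = p_start
    while p <= p_end:
        periods.append(p)
        p *= 1.0 + incr
    return periods

# The search range from the example above:
grid = period_grid(0.62, 0.69, 0.0002)
# With INCR = 0.0002, the spacing near P = 0.65 d is about 0.00013 d.
```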
Now it's going to sit and think for a while. If you chose verbosity 0, it will just say WORKING at first, and will update you every 300 period trials with the period it is currently working on (i, per(i)).
It finishes with some stuff reminding you what file you wrote out, what the column format of that file is, the largest chi-squared value it encountered in the fits, and some instructions for plotting the results with SuperMongo.
7) In the SuperMongo window, copy/paste the macro and plt commands:
: macro read sm.per1
: plt
You get a plot showing curves for each of the first 6 templates (RRab light curve shapes). For example, the top plot is for Template #1, and it shows the chi-squared value obtained in each fit (Y-axis) plotted as a function of the period at which the fit was tried (X-axis). For most periods, when you "fold" the observed data you get a scattering of points in the light curve plane (phase vs. magnitude plot), and you can't fit them well with the template -- the chi-squared value is large. However, when you are near the correct period, the points will line up in the phase vs. magnitude plot and the template will do a good job of fitting them -- chi-squared will be small. So, we are looking for MINIMA in the chi-squared plots ... in theory, the lowest minimum gives the best fit, and indicates the true period of the star.
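The folding and chi-squared comparison described above can be pictured in a few lines of Python. This is a toy sketch, not the fitting code the program actually uses (the real program also fits the template's amplitude, phase, and mean level):

```python
import math

def fold(times, period):
    """Phase of each observation: elapsed time modulo the period, as a fraction."""
    return [math.fmod(t, period) / period for t in times]

def chisq(phases, mags, errs, model):
    """Chi-squared of the observed mags against a model curve evaluated at each phase."""
    return sum(((m - model(ph)) / e) ** 2
               for ph, m, e in zip(phases, mags, errs))

def model(ph):
    # Toy template: a pure sinusoid in phase.
    return math.sin(2 * math.pi * ph)

# Toy data: a sinusoid with true period 0.65 d, sampled every 0.13 d
true_p = 0.65
times = [0.13 * i for i in range(60)]
mags = [math.sin(2 * math.pi * t / true_p) for t in times]
errs = [0.05] * len(mags)

good = chisq(fold(times, true_p), mags, errs, model)  # folding at the true period
bad = chisq(fold(times, 0.70), mags, errs, model)     # folding at a wrong period
# good is tiny, bad is large: the chi-squared minimum marks the true period.
```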
It can be a little tricky reading the minimum off the graph. There are a couple programs to help. Here are some tips for choosing.
8) stdlc_peranal.e -- After viewing the plot, you decide you want to find the exact period corresponding to a particular minimum. Specify the lower and upper bounds of the period range you wish to search. The program returns the minimum chi-squared value (chimin) in each of the 10 templates (col) and the period at which it occurs, per(chimin).
cl> !/data/layden/BGTEL/PROGS/stdlc_peranal.e
Enter name of STDLC_PER output: per1
Enter range of periods to consider (Pmin Pmax): 0.645 0.655
Col chimin per(chimin) = 1 0.0016310 0.6508784
Col chimin per(chimin) = 2 0.0030370 0.6508784
Col chimin per(chimin) = 3 0.0010350 0.6508784
Col chimin per(chimin) = 4 0.0035580 0.6508784
Col chimin per(chimin) = 5 0.0008820 0.6508784
Col chimin per(chimin) = 6 0.0034570 0.6508784
Col chimin per(chimin) = 7 0.0050800 0.6510085
Col chimin per(chimin) = 8 0.0067620 0.6511388
Col chimin per(chimin) = 9 0.0180200 0.6523120
Col chimin per(chimin) = 10 0.0177280 0.6523120
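For each template column, what stdlc_peranal.e does amounts to a windowed minimum over the chi-squared-vs-period trials. A sketch (the data values below are made up for illustration):

```python
def min_in_range(periods, chis, p_min, p_max):
    """Smallest chi-squared, and its period, among trials inside [p_min, p_max]."""
    pairs = [(c, p) for p, c in zip(periods, chis) if p_min <= p <= p_max]
    chimin, per = min(pairs)  # tuples compare by chi-squared first
    return chimin, per

# Made-up (period, chi-squared) trials for one template column:
periods = [0.640, 0.645, 0.6509, 0.655, 0.660]
chis    = [0.020, 0.009, 0.0009, 0.012, 0.025]
chimin, per = min_in_range(periods, chis, 0.645, 0.655)
```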
In this example, the template with the lowest chi-square value is #5 (col=5), where chimin=0.0008820 at a period of 0.6508784 days (in fact all 6 RRab templates agree on this period!). It is a good candidate period for the variable star. You may want to repeat this for several different minima, and compare the appearances of their light curves at these fixed periods using stdlc.cl. I always visually inspect each light curve this way, to ensure I am not misled by a coincidentally-small chi-square value.
9) stdlc_peranal2.e -- Sometimes, there will be a series of minima over a small range of periods (a "packet" of possible periods). For a given template, you may want to know which points fall below some maximum value of chi-squared (which you specify).
cl> !/data/layden/BGTEL/PROGS/stdlc_peranal2.e
Enter name of STDLC_PER output: per1
Which x^2 column should I consider (1,2,...,ncol): 5
Find minima below this value of x^2 (eg 0.05): 0.01
chimin per(chimin) = 0.0087000 0.6477615
chimin per(chimin) = 0.0043320 0.6493180
chimin per(chimin) = 0.0074790 0.6495779
chimin per(chimin) = 0.0008820 0.6508784
chimin per(chimin) = 0.0034830 0.6511388
chimin per(chimin) = 0.0098640 0.6513993
chimin per(chimin) = 0.0018990 0.6524425
chimin per(chimin) = 0.0047700 0.6527035
chimin per(chimin) = 0.0096280 0.6540103
In this example, we find all the periods for which chimin is less than 0.01 using template #5.
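In the same spirit as the previous sketch, stdlc_peranal2 can be pictured as listing every local minimum of the chi-squared curve that falls below your cutoff. A small sketch with toy numbers (not the program's actual algorithm):

```python
def minima_below(periods, chis, cutoff):
    """Local minima of the chi-squared curve that lie below the cutoff."""
    out = []
    for i in range(1, len(chis) - 1):
        # A local minimum is lower than both of its neighbors.
        if chis[i] < cutoff and chis[i] < chis[i - 1] and chis[i] < chis[i + 1]:
            out.append((chis[i], periods[i]))
    return out

# Made-up trials: two dips below the 0.01 cutoff, one shallow bump above it
periods = [0.648, 0.649, 0.650, 0.651, 0.652, 0.653]
chis    = [0.020, 0.004, 0.015, 0.0009, 0.008, 0.030]
hits = minima_below(periods, chis, 0.01)
```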
10) So now you have one or more candidate periods. Are they correct? That is, when you "fold" the observed data by this period, does it produce a good light curve? Let's find out.
cl> cl < stdlc.cl
The program reminds you which file it is reading the template light curves from:
Reading STD LCs from stdlc.all10
It now asks for a file name to store the results in. At this point, you probably don't care about this (though it may be of interest later), so enter a random name, "junk". It will create 2 output files called "rd.junk" and "fd.junk".
Enter root name for output data files (rd.* & fd.*): junk
Now it wants the name of the file with all the photometry, i.e., the one you created using stdlc_prep.cl:
Enter name of input phot file (eg vfe.dat; q=quit): vfe.dat
Now it tells you the names of some other files it's going to create ... not important to you:
Best fitted curve in avfe.dat
Best 4 fitted LCs in bvfe.dat
Fitted coefficients in cvfe.dat
4xPhi-shifted obs dat in dvfe.dat
4xVfit SM driver in svfe.dat
Final fitted curve in fvfe.dat
Final photometry in gvfe.dat
It asks for the period. Enter a candidate period you obtained using stdlc_per10:
Enter Period of variable star: 0.6508784
Now it chugs some stuff out on the screen describing the fit it has performed with each of the 10 templates. It has picked the best 4 templates and arranged them in order (best = top = lowest chi-squared value):
Islc, A, P0, Vi, chisq = 5 0.904 0.698 0.660 0.8822E-03
Islc, A, P0, Vi, chisq = 3 0.950 0.724 0.669 0.1035E-02
Islc, A, P0, Vi, chisq = 1 1.062 0.729 0.699 0.1631E-02
Islc, A, P0, Vi, chisq = 2 0.844 0.737 0.703 0.3037E-02
It has also made a plot for you to look at, to see if you really like the best fitting template. Go ahead and plot it in the SM window:
: macro read svfe.dat
: plt
Decide which light curve you like best. Usually it is number 1 (the upper-left one ... notice the numbers in the corner of each plot), the one with the smallest chi-squared value. Enter the plot number (1-4, NOT the template number, 1-10) of the light curve you want to use:
Enter plot number (1-4) for final LC: 1
It spits out some data on the light curve based on the observed points and fitted templates. They are not important now:
Light Curve Parameters from Observed Data Points:
outfile Vmaxo Vmino Amplo dPro Emaxo Nobs
vfe.dat -0.317 0.677 0.994 0.264 368.06553 126
Light Curve Parameters from FIT:
infile Vmaxf Vminf Amplf dPrf Emaxf Nobs slc RMS Pshft
vfe.dat -0.245 0.660 0.905 0.240 0.00000 126 5 0.030 0.698
Finally, it prompts you to plot up the final light curve. Here, it shows both V and I light curves assuming the period you entered:
: macro read zvfe.dat
: plt
You are done for this star/period. The program prompts you for another data file and period, or type 'q' to quit. You can make a plot of the light curve by typing the following in the SM window:
: dev postscript
: plt
: dev x11
Or make an encapsulated postscript file called "filename" (or whatever name you like):
: dev postencap filename
: plt
: dev x11
You may have trouble figuring out which of several candidate periods is best. It's not always easy, or even possible, to identify a single best period. Ask ACL if you have any questions.
When you are done, show me the results -- I always like to see a light curve. We can then decide if your star needs more observations, or whether it is time to start measuring parameters from the light curve, in particular the V-I color at minimum light (phase between 0.5 and 0.8).
ACL -- 2000 May 25
Tips:
1) Should you always use data from all the runs?
Generally, yes -- the more data the better. However, sometimes if the data series has large time gaps (for example, no data was taken on this star for a few months or years), there can be confusion. The plots of chi-squared versus period get "packets" -- regions where there are many equal-depth minima tightly packed together. In these cases, it can be useful to omit the data from one or more runs at first, to narrow down the number of minima you need to investigate.
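The spacing of the minima in such a packet is set by the total time baseline of the data: two trial periods become hard to distinguish when they differ by roughly P^2/T, where T is the time span of the observations. A back-of-the-envelope check (the numbers are illustrative):

```python
def packet_spacing(period, baseline):
    """Approximate spacing (days) between adjacent chi-squared minima.

    Two periods P and P' blur together once |1/P - 1/P'| ~ 1/T,
    which gives delta-P ~ P**2 / T for a data set spanning T days.
    """
    return period ** 2 / baseline

# A 0.65 d star observed over a ~450-day span (roughly runs 1-3 above):
dp = packet_spacing(0.65, 450.0)
# dp is about 0.0009 d -- the right order of magnitude for the
# spacing between neighboring minima in the step-9 example output.
```

This also explains why omitting a run can help: a shorter baseline T makes the minima wider and more widely spaced, so there are fewer of them to sort through.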
2) Choosing the Fractional Increment (FI):
This is a question of resolution. You could make the FI very small and get very high resolution, but it takes longer for the computer to do the extra computations. If you look at the plot of chi-squared vs. period and zoom way in to inspect a minimum, and you find it has a very smooth, rounded bottom, you have probably set the FI too small: you have oversampled the data. This doesn't hurt anything, but takes longer.
On the other hand, if you zoom in on several minima and their bottoms look all jaggedy, you have undersampled the data, and you need to make FI smaller. If you don't, the period you derive will be less accurate than it could be, and the light curves you produce will have more scatter than they should.
On the whole, oversampling is better than undersampling ... it gets the period correct, but may cost you some time. If stdlc_per10 is taking more than 10 min (running on baade), you may be way oversampling.
The "best" value of FI depends on your data set. If you have 100's of data points taken over the course of years, you will need a smaller value of FI to sample the data correctly (i.e., you can determine the period very precisely). I would guess FI = 0.00001 is a good first guess. If you have 10's of data points taken over the course of a month, FI=0.001 is probably good. For most of our data, I suspect FI=0.0002 is good, though this estimate may change as we gain experience.
3) Different templates for different light curve shapes.
The 10 templates we use for fitting our observations are selected to represent a variety of light curve shapes, and even different types of variable stars. The first six templates are derived from ab-type RR Lyrae (the type we observe in our program). RRab as a class all have a fast rise from minimum light (faintest points) to maximum light (brightest points) and a slow descent. The six RRab templates represent different shapes and rise-rates.
Template #7 is derived from c-type RR Lyrae (they pulsate faster, have smaller amplitudes, and a more sinusoidal-shaped curve than the RRab). About 1/10 of all RR Lyrae are RRc, but we are not studying them in our BGSU program.
Template #8 is a simple cosine curve. It is there as an alternate, or middle ground between the RRc curve (#7) and the W Ursae Majoris curve (#9).
Template #9 is of a generic contact eclipsing binary (two stars orbiting so closely around their common center of mass the outer atmospheres of the stars are merged -- so you don't see sharp edges during eclipse). This class of binary is often called W Ursae Majoris (W UMa) after the prototype system.
Template #10 is of a generic detached eclipsing binary (two stars orbiting farther apart, so not in contact -- however, we are in the orbital plane of the system, so we see light variations as one star passes in front of another). These stars are often called Algols after the prototype (a naked-eye variable in Perseus).
Since all the stars we are observing are already known to be RRab type, it would be surprising (distressing!) if you found one of Templates #7-10 to be the best fit for your star.
4) Choosing which minimum is the best.
This is tricky, and requires experience. Sometimes there will be one obvious period (it may even be the same for all the RRab templates (#1-6) as in the example above). Sometimes there will be many minima which look equally good (or bad). If you have any doubt, talk with me and we will work on it together.