Contents: Basics || Parameter Files || Images || Shortcuts || Statistics || File Sharing ||
The Image Reduction and Analysis Facility (IRAF) is a suite of sophisticated software designed for processing and analyzing astronomical images. Like Unix, it uses a "command line" interface, where you type commands into a line on the screen in a sentence-like format (commands are the verbs, image and file names are the nouns, and "parameters" act like adjectives & adverbs to modify the function of the command). Because IRAF is so large, it is divided into different "packages" suited to different jobs. Within each package are a variety of "tasks" (commands) that you can run to accomplish different jobs.
You must enter IRAF from your home directory (it reads a file there called "login.cl" to set some basic parameters; without them, it may not work properly). Once in your home directory, enter IRAF by typing "cl" (for "command language"):
% cd [jump to your home directory]
% cl [start up IRAF]
(FYI: You should get a "Welcome to IRAF" message and a list of available packages; if you get a short warning, you were probably not in your home directory.) IRAF leaves you with the "cl>" prompt to remind you that you are in the IRAF (as opposed to Unix) environment. The ">" denotes IRAF, and the first two letters indicate which package you have open (they will change as you open and close packages). You type commands at the cl> prompt.
Many of the commands from Unix have analogs in IRAF, and "wildcard" characters work the same way:
cl> dir [list the contents of current directory, like Unix "ls"]
cl> cd PHOT [move down one directory into an existing directory named "PHOT"]
cl> mkdir RAW [make a new directory named "RAW"]
cl> page bob.dat [list the contents of the file named "bob.dat", like Unix "more"]
cl> copy bob.dat junk3 [make a new, identical copy of an existing file, like Unix "cp"]
cl> rename junk3 junk4 [change the name of a file from "junk3" to "junk4", like Unix "mv"]
cl> del junk4 [delete the file, like Unix "rm" -- I have it set to ask you (y/n) whether to permanently delete the file]
You can run any Unix command from inside IRAF by preceding it with an exclamation point:
cl> !more bob.dat [list the contents of the file named "bob.dat", tends to behave better than IRAF "page" so use it instead]
There are many other IRAF packages and tasks. You can get detailed info about them by using "help" followed by the package or task name. For tasks (e.g., "imheader") you get a full description of the task and its parameters in gory but potentially helpful detail. For packages (e.g., "images"), it lists the tasks and/or packages available with short descriptions.
cl> help imhead [view help manual for task "imheader"]
cl> help images [view sub-packages available within the package "images"]
To enter a new package, simply type the package name. To exit the current package, type "bye":
cl> images [enter the "images" package -- it lists the available packages/tasks and leaves you with the "im>" prompt]
im> bye [exit the package "images"]
To exit IRAF altogether and return to Unix, type "logout":
cl> logout [exit IRAF]
Each IRAF task (and some packages) have a number of "parameters" that can be adjusted to vary the function of the task. To see the parameters for a given task, type "lpar" and then the task name, for example:
cl> lpar imheader
cl> lpar hselect
In each case, you should see a list of parameter names in the left column, their current settings in the middle, and a brief description of the parameter along the right (longer descriptions are included in the help file).
The simplest way to change parameters is to specify them explicitly in the command line when the task is run. For example, the default setting of the parameter "longheader" in imhead is "no", so we get a short header when we type:
cl> imhead bob.imh
cl> imhead bob.imh long=yes
cl> imhead bob.imh
cl> imhead bob.imh l+
By adding "long=yes", we modify the parameter setting for this particular run of imhead so it gives us a long header listing. The third command in the set of four above demonstrates that the default setting is still "no". For parameters that are yes/no (IRAF refers to them as Boolean), you can use the shortcut shown in the fourth command: "l" is the minimum number of characters to uniquely specify "longheader", and "+" signifies "yes" ("-" means "no").
Notice that some parameters in the listing are surrounded by parentheses, while others are not. The ones with parentheses are "hidden" parameters that you do not need to include on the command line unless you specifically want to change them (otherwise the current setting in lpar will be used) -- if you include them you need to specify them via "parameter=setting" or ("param+") on the command line. The non-parentheses (or "query") parameters are essential to the task and must be included on the command line (above, how would we know which images to imhead if we didn't specify it?) -- put them in the order they appear in the lpar listing (or you can use the "parameter=setting" format if you put them out of order).
Play around with hselect to get the feel for setting parameters:
cl> lpar hselect
cl> hselect bob.imh $I,exptime,title yes
Another way to set parameters of a task is by editing the parameter file using the "epar" command:
cl> epar hselect
It appears much like in lpar, but you can use the arrow keys to move around and can delete/add text to change the settings of the different parameters. To exit, type ":q" and you can run the task at the command line (or type ":g" to exit epar and run the task in one step).
Using epar changes the parameter settings so that each subsequent use of the task will use the values you set in epar. These values are saved in files in the "uparm" (U're parameters?) directory. If you want to set them back to the default values, use "unlearn":
cl> dir uparm [list the files in the uparm directory; each file records current params for a different task]
cl> unlearn imhead [return the parameters in this task to their default values]
The last way to edit task parameters is useful if you are doing a task many times and you want to be quick, accurate (no typos) and consistent (use the same parameters every time), as we do with image processing. I have set the parameters for all the tasks we use with epar, then saved the settings to files using "dpar" (I name the files dpar.taskname so they are easily recognizable). Each time we run a task, we set its parameters from that dpar file. For example, try setting the parameters for hselect using dpar.hselect:
cl> !more dpar.hselect
cl> cl < dpar.hselect
I make it a habit to review the parameter settings of any task before I use it, either via lpar, epar, or "!more dpar.taskname", to ensure I know the settings and that there are no hidden surprises.
Much of what we do in IRAF involves inspecting, processing and analyzing images. There are two image formats we usually work with: FITS (Flexible Image Transport System) format, and IRAF format. They have different suffixes to distinguish between them.
A FITS image (e.g., "AA_Aqr_1i1.fit" might be an image taken with our CCD) is a single file containing a mix of regular (ASCII) characters (the "header" that contains info on the "what/where/when" of the image) and binary characters (that actually hold the data -- the intensity value of each pixel). You can use IRAF commands like copy, rename, and delete (and their Unix analogs) directly on .fits files.
An IRAF image consists of two files, an ASCII file whose name ends in a ".imh" suffix (e.g., "AA_Aqr_1i1.imh") that holds the header info, and a binary file whose name ends in a ".pix" suffix (e.g., "AA_Aqr_1i1.pix") that contains the data. Usually the .pix file lives in a directory named PIX (or pix) immediately below the .imh file.
Back in earlier days of computing, this 2-file system had advantages, and since I wrote most of the reduction cookbooks we use back then, we do most of our work on IRAF format images. The commands below refer to .imh images, but should work on .fits images just as well.
Because an IRAF format image actually consists of two files (.imh and .pix), there are special commands to handle both files simultaneously (note: the .imh suffix can be left off if desired; it is implied):
cl> imcopy bob.imh junk3.imh [copy the image]
cl> imrename junk3.imh junk4.imh [change the name of an existing image]
cl> imdelete junk4.imh [delete the image -- I have it set to ask you (y/n) whether to permanently delete the file]
An important command to learn about the contents of an image is "imheader", type:
cl> imhead bob.imh [view the short header of an image]
The line of data it returns contains the name of the image, its size [X,Y] in pixels, the data type ([real] means the pixel intensities can take decimal formats like 15.325 while [short] means they are restricted to integers like 15), and the image title. This is the "short header" -- to see all of the info in the image header, type:
cl> imhead bob.imh l+ [view the long header of an image, names and contents of all header "keywords"]
You get lots of information and may have to scroll back to see it all. Each line contains a "header keyword" and its value (e.g., "DATE-OBS= '2006-07-06'", where DATE-OBS is the keyword, and '2006-07-06' is its value).
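If you are curious what those header lines look like to a program, here is a rough (non-IRAF) Python sketch that pulls keyword/value pairs out of header text. The parsing rule is my own simplification for illustration, not the official FITS reader:

```python
import re

def parse_header_cards(text):
    """Parse lines of the form KEYWORD= 'value' or KEYWORD= 123
    into a dict, mimicking what "imhead l+" prints.
    (Simplified: real FITS cards are fixed 80-character records.)"""
    cards = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Z0-9_-]+)\s*=\s*(.+)", line)
        if not m:
            continue
        key, val = m.group(1), m.group(2).strip()
        cards[key] = val.strip("'\" ")  # drop surrounding quotes
    return cards

header = """DATE-OBS= '2006-07-06'
TIME-OBS= '05:04:36'
EXPTIME =  150.0"""
cards = parse_header_cards(header)
print(cards["DATE-OBS"])  # 2006-07-06
```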
In some cases, you may want to pull out values of header keywords for a selection of images. The task "hselect" does the job. In the next line, we will select header keywords from all images that match the "b*.imh" wildcard, and will write out the values of the keywords $I (code for "image name"), date-obs, time-obs, and exptime. Take the "yes" to mean "just do it". The resulting output is shown in the following line.
cl> hselect b*.imh $I,date-obs,time-obs,exptime yes
bob.imh 2006-07-06 05:04:36 150.00000000000000
You can use the hedit command to edit the contents of the header in any of three ways: (a) changing the value of an existing keyword, (b) deleting an existing keyword and its value, (c) adding a new keyword and its value. The following command does the latter; you can do help hedit to learn how to do (a) and (b).
cl> hedit bob*.imh notice "test data" add+ [add a keyword "notice" with value "test data" to the header of bob.imh -- hit return after each query]
cl> imhead bob.imh l+ [find the new keyword]
cl> hselect bob*.imh $I,notice yes
Perhaps the most important thing you will do is look at your image. We do this in an ximtool window using the "display" command. If you don't have one open, open one:
cl> !ximtool &
The window can be adjusted for different image sizes. Your IRAF account is set so the default is 1024x1024 pixel images, the size of our BGSU CCD images, so you will only need to change it if you are working on images from different CCDs. For example, to set the image size for 2048x2048 pixel images, type:
cl> set stdimage=imt2048
Other available settings include imt800, imt2048, imt4096, etc.
To display your image on the ximtool window, type:
cl> display bob.imh 1
The ximtool window can display up to 16 images in different "buffers", and allows you to flip between them to compare images. The "1" in the command above indicates we are viewing it in buffer#1. Display automatically selects a "color map" that it thinks is appropriate for your image -- it maps different pixel intensity values (numbers, like 695.23) into shades of gray. The numbers that popped up when you ran display (z1=659.1442 z2=830.4911) tell you that pixels with intensities <659 will appear black and those with >830 will appear white, and pixels between will be scaled shades of grey. You can override this automatic map by specifying z1 and z2 yourself, like:
cl> display bob.imh 2 zra- zsca- z1=650 z2=3000
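The z1/z2 mapping is simple enough to sketch in a few lines of (non-IRAF) Python. This is just an illustration of the linear scaling idea, not the actual display code:

```python
def to_gray(value, z1, z2, levels=256):
    """Map a pixel intensity to a display gray level:
    <= z1 -> 0 (black), >= z2 -> levels-1 (white), linear in between."""
    if value <= z1:
        return 0
    if value >= z2:
        return levels - 1
    frac = (value - z1) / (z2 - z1)
    return int(frac * (levels - 1))

print(to_gray(650, 650, 3000))   # 0 (black)
print(to_gray(3000, 650, 3000))  # 255 (white)
print(to_gray(1825, 650, 3000))  # 127 (mid-gray)
```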
Poke around ximtool to see what the different buttons do. Open the control panel window by clicking on the rectangle-shaped button near the upper right -- play around with these until you become comfortable.
We can quantitatively examine a displayed image using the imexamine task:
cl> imexamine [interactively examine the images in the ximtool window]
Put the cursor over a star and hit the "r" key. A new window will appear and you will get a graph showing the star's radial profile (pixel intensities as a function of distance from the star's center). Put the cursor over another star and hit "r" again (the shape and width of the profile are good ways to assess the focus and seeing on your image). There are lots of other functions (I use the red ones the most) -- do "help imexamine" for the full list.
It is often useful to mark the position of objects on your image once it has been displayed. For example, you can mark the stars you have detected as variable, or the known variable stars in a cluster. For example, the following command will put yellow (?) circles with radii of 9 pixels around the (X,Y) positions listed in the file named xypos.dat:
cl> tvmark 1 xypos.dat mark=circle radi=9 color=207
cl> tvmark 1 xypos.dat mark=circle radi=9 color=207 nxoff=8 nyoff=8 label+
The second version adds labels (centered 8 pix to the right and 8 pix above the X,Y position in the file). The 1 in both commands tells IRAF to add the marks to buffer 1 of the ximtool (you can do 1-16). The file should contain 3 space-separated columns containing (1) X position, (2) Y position, and (3) label (can be numbers, letters, and some special characters), like this:
152.09 281.97 V3
729.83 349.12 39
Do "help tvmark" or "epar tvmark" to get more options and info.
Another command that can be useful in analyzing images is implot (do help implot to learn its functions):
cl> implot bob.imh [analyze an image]
And finally a command to give statistics on images:
cl> imstat bob.imh [get stats on pixel intensities in an image]
# IMAGE NPIX MEAN STDDEV MIN MAX MIDPT
bob.imh 160801 1005. 23714. 450. 2.259E6 1636.
You can specify rectangular sub-regions of the image by appending [Xleft:Xright, Ybottom:Ytop] to the end of the file name, where Xleft etc are the coordinates of the sides of the rectangle you want statistics on. Finally, in our processed images, we often have "bad pixels" set to a large, "out of bounds" number and this can skew the statistics -- use nclip=5 to help clip these pixels out of the stats.
cl> imstat bob.imh[100:241,153:352] nclip=5 [get stats on a sub-region in the image, using clipping]
# IMAGE NPIX MEAN STDDEV MIN MAX MIDPT
bob.imh[100:241,153:352] 27838 744.3 17.38 680.5 809.2 743.7
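To see what clipping is doing, here is a rough Python sketch of iterative sigma-clipped statistics. This is my own simplified version for illustration; the actual imstat algorithm and its default clipping thresholds may differ:

```python
import statistics

def clipped_stats(pixels, nclip=5, sigma=3.0):
    """Iteratively reject pixels more than `sigma` standard deviations
    from the mean, for up to `nclip` passes, then report the stats."""
    data = list(pixels)
    for _ in range(nclip):
        mean = statistics.fmean(data)
        sd = statistics.pstdev(data)
        kept = [p for p in data if abs(p - mean) <= sigma * sd]
        if len(kept) == len(data):
            break  # nothing rejected this pass; converged
        data = kept
    return {"npix": len(data),
            "mean": statistics.fmean(data),
            "stddev": statistics.pstdev(data),
            "min": min(data), "max": max(data)}

# 98 normal sky pixels plus two "out of bounds" bad pixels
pixels = [744.0] * 98 + [65000.0, 64000.0]
stats = clipped_stats(pixels)
print(stats["npix"], stats["mean"])  # 98 744.0 -- bad pixels clipped out
```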
One of the disadvantages of command line interfaces (compared to point-and-click GUIs) is they require lots of typing. If you are like me, lots of typing means lots of typos, so I try to minimize typing by using shortcuts.
1) You need not write all of a command, just enough of it to be uniquely identified. For example, imhead instead of imheader, or imexa instead of imexamine.
2) I use the mouse to copy/paste whenever I can (particularly repeating commands from a web-page or existing file to the IRAF command line). On Sun/SGI computers, drag across the text with the left mouse button to highlight it (it goes into a buffer), then put the cursor where you want it to go and click the middle mouse button.
3) IRAF keeps a history of recent commands. You can access it by typing "e" and then use the arrow keys to go back/forward through the list (up/down arrows) and left/right within a command to add/delete text and thereby change the command's function (most useful!).
4) You can repeat commands you have done recently by using the ^ symbol (the caret, shift-6; FYI, you can do this in Unix too, but the symbol is ! instead of ^). For example:
cl> imhead bob.imh l+ [do a long header listing of "bob.imh"]
cl> ^ | page [recall the last command (above) and "pipe" the output to the PAGE task]
cl> ^hsel [recall the last hselect command; it may have been several commands ago]
5) Sometimes it is convenient to "pipe" output from a command directly to another command, as we did above ("|" is the pipe symbol). A variation is "redirecting" output of a task to a file rather than to the screen (">>" is the redirect symbol; it will create a new file if one does not exist, or append the output to the end of an existing file):
cl> imhead bob* >> imhlist [redirect the short headers of all images matching the wildcard to a file called "imhlist"]
The "files" command is useful in this regard: it simply lists the names of files, so when used as follows it makes a list of filenames that can be fed into a task or used by a script.
cl> files bob* >> flist [make a list of files whose names match the wildcard and write it to a file called "flist"]
6) The output of a "files" listing can be used as input by some tasks (e.g., ccdproc). By editing the file, you can have great control over which files are operated on, but still do it "in a batch". We use this a lot in processing our CCD images. When running the command, you precede the filename with an "@" sign to let the task (ccdproc) know that it will be working not on a single file (like bob.imh) but a list of image names -- I often refer to the file as an "at-file" (this example may give an error if you have not opened the imred/ccdred packages... don't worry, it's just a sample):
cc> ccdproc @flist [process each of the images listed in the file "flist"]
7) A script is a set of commands strung together into a single file -- we use these a lot in our image processing too. Each command goes on a separate line, Unix commands can be included too if preceded by a "!", and you can even run executable programs like fortran and C++ within a script. I encourage you to "page" the scripts in the image processing cookbook to see what they look like and what they do -- you may even want to write your own scripts to further simplify image processing. I always name scripts with the suffix ".cl" so I can recognize them. To run them, use this format:
cl> cl < scriptname.cl [runs a script named "scriptname.cl"]
During our image processing (and in many other applications), it is useful to get basic statistics on a set of data. This can include determining the mean value and its error (the RMS, essentially the standard deviation, is the scatter of a typical point about the mean; the standard error of the mean indicates the uncertainty in the determination of the mean). Another task is fitting a line (or a higher-order function, a curve) to a set of (X,Y) data and determining the RMS and uncertainties in the resulting coefficients. The IRAF task "curfit" allows us to do both.
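As a quick non-IRAF illustration of these statistics, here is a short Python sketch computing the mean, RMS, and standard error of the mean for a handful of sample measurements:

```python
import math
import statistics

# A few sample measurements (same Y values used in the curfit example below)
y = [281., 349., 218., 187., 397., 311., 195.]

mean = statistics.fmean(y)
rms = statistics.stdev(y)            # sample std. dev.: scatter of one point about the mean
stderr = rms / math.sqrt(len(y))     # standard error of the mean
print(f"mean = {mean:.1f} +/- {stderr:.1f}  (RMS = {rms:.1f})")
```

Note how the standard error shrinks as you collect more points (by 1/sqrt(N)), while the RMS stays roughly the same.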
Curfit takes as input a text file containing a table of data with X in the first column and Y in the second column (an optional third column can contain the Yerrors, if weighting is desired). For example:
cl> page xy.dat
15.0 281. 9.3
19.2 349. 12.1
12.9 218. 11.0
9.7 187. 8.3
22.2 397. 13.9
17.3 311. 10.6
10.3 195. 9.5
We can get the mean of the data in the Y column by doing a fit using one coefficient (the parameter "order" determines the number of coefficients):
cl> curfit xy.dat order=1
Curfit will draw a plot of Y vs. X and leave the graphics cursor on the screen so you can interact with the plot: delete or undelete points ("d", "u") and refit ("f").
Notice that as you delete/replace points and refit, the statistics in the upper part of the window change (in particular, the number of points and the RMS). When you quit, curfit spits much of this info to the screen, including the value of the coefficient (the mean) and its error. You can have curfit weight the fit by the errors (in column#3 of xy.dat) by setting the parameter weight to "user":
cl> curfit xy.dat order=1 weight=user
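Conceptually, weighting by errors amounts to a weighted mean with weights 1/error^2. Here is a plain Python sketch of that idea (the 1/error^2 weighting is my assumption about what weight=user does; check "help curfit" for the exact definition):

```python
# X, Y, and Y-error columns from the xy.dat example above
vals = [281., 349., 218., 187., 397., 311., 195.]
errs = [9.3, 12.1, 11.0, 8.3, 13.9, 10.6, 9.5]

# Inverse-variance weights: points with small errors count more
w = [1.0 / e**2 for e in errs]
wmean = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
werr = (1.0 / sum(w)) ** 0.5  # standard error of the weighted mean
print(f"weighted mean = {wmean:.1f} +/- {werr:.2f}")
```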
To fit a line or higher-order curve to the data, simply specify a larger order (with each data point given equal or uniform weight):
cl> curfit xy.dat order=2 weight=uniform
Again, you can delete/replace points (d,u) as desired and refit (f). Quit and see the coefficients and errors. A word of warning here: curfit fits a Legendre polynomial, and the coefficients it solves for describe a polynomial of that form. To get coefficients for the more common "power law" polynomial (Y = c0 + c1*X + c2*X^2 + ... + cN*X^N), set the parameter "power" to yes (I usually set it in epar too for the future):
cl> curfit xy.dat order=2 weight=uniform power+
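For an order=2 (straight line) fit in the power form, you can check the answer with a quick least-squares calculation of your own. Here is a self-contained Python sketch using the xy.dat numbers above (an unweighted fit; the values curfit reports may differ slightly depending on its weighting and settings):

```python
import math

# X and Y columns from xy.dat above
x = [15.0, 19.2, 12.9, 9.7, 22.2, 17.3, 10.3]
y = [281., 349., 218., 187., 397., 311., 195.]
n = len(x)

# Closed-form least squares for the line y = c0 + c1*x
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
c0 = (sy - c1 * sx) / n                          # intercept

# RMS scatter of the points about the fitted line (n-2 degrees of freedom)
resid = [yi - (c0 + c1 * xi) for xi, yi in zip(x, y)]
rms = math.sqrt(sum(r * r for r in resid) / (n - 2))

# Standard error of the slope
sxx_c = sxx - sx * sx / n
c1_err = rms / math.sqrt(sxx_c)
print(f"c0 = {c0:.2f}, c1 = {c1:.3f} +/- {c1_err:.3f}, RMS = {rms:.2f}")
```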
Within the fit, you can change orders by typing ":order 3" and then refitting with "f". There are many other options you can explore (do "?" from within the graphing window). Try fitting the data with ":order 1" -- note the RMS and appearance of the fit. Then increase it to ":order 2" and upward, noting the RMS and appearance of the fit at each order. Which is best?
This is a question every student confronts -- what level of complexity in our fit is enough, and what is too much? The fit cannot have more coefficients than it has data points, but even 1:1 is too much -- the curve passes through every data point perfectly, as if there were no experimental error associated with the data points (unlikely!). Qualitatively, one should choose the lowest-order polynomial that adequately fits the data -- this is an expression of Occam's Razor (the simplest model that explains your observations is preferable). Similarly, one should not delete data unless one has reason to question the validity of the data point (an outlier may be telling you something about your data that you would not fully recognize unless you had 3-10 times more data points).
Quantitatively, there are statistical tests that can be used to help you decide whether adding another term to your polynomial is warranted. The simplest may be the "t-test" -- a coefficient adds meaningful information to the model if the ratio (value/error) for the coefficient is larger than 2 (or less than -2). See Andy if you have questions about polynomial fitting.
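The t-test check is easy to automate once you have the coefficients and errors from a fit. Here is a tiny Python sketch using made-up (hypothetical) coefficient values and errors, just to show the ratio test:

```python
def significant(coeffs):
    """Apply the simple t-test: a coefficient adds meaningful
    information if |value / error| > 2."""
    return {name: abs(val / err) > 2 for name, (val, err) in coeffs.items()}

# Hypothetical fit output: {coefficient: (value, error)}
fit = {"c0": (13.5, 10.2), "c1": (17.3, 0.85), "c2": (0.02, 0.05)}
print(significant(fit))  # only c1 passes the test
```

In this made-up example, only the slope term c1 is clearly warranted; the order-3 term c2 would be dropped.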
Sometimes it is useful to switch between image formats, say from IRAF format to FITS, or vice versa -- this can be especially useful when writing data to magnetic tape or CD for archiving.
cl> wfits bob.imh roberta [writes a FITS file named roberta.fits from the IRAF format file bob.imh]
cl> rfits roberta.fits "" junk old+ [reads the FITS file back into an IRAF format image named "junk"]
Tar is another archiving tool you might encounter. It gathers a group of files (of any type, not just images) into a single file (called a tarfile, or more colorfully, a tarball) or onto a magnetic tape. The three main options are:
cl> !tar -cvf bob.tar bob* ["c" = Create a new tar file called bob.tar that contains all files in the current directory that match the wildcard bob*]
cl> !tar -tvf bob.tar ["t" = Table of contents of bob.tar -- list the files in it]
cl> !tar -xvf bob.tar ["x" = eXpand the tarfile back into the original files]
I often trade research information with colleagues using tar: I will pack up a bunch of files (text, images, plots in the form of post-script or jpeg files, PDFs, etc) into a tarfile, compress it, and send it by email or place it on my website so my colleague can download it. Or, I'll receive one from a colleague. Often, the compression program is "gzip" (indicated by a ".gz" suffix):
cl> !gzip bob.tar [compress a file called bob.tar]
cl> !gunzip jane.tar.gz [uncompress jane.tar.gz back into jane.tar -- once uncompressed, I can expand the tarfile into its individual content files]
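If you ever need to do the tar/gzip steps from a program instead of the command line, Python's standard library can do the same job. Here is a self-contained sketch (the scratch directory and file names are invented for the demo):

```python
import gzip
import os
import shutil
import tarfile
import tempfile

# Make a scratch directory with a couple of sample files
tmp = tempfile.mkdtemp()
for name in ("bob1.txt", "bob2.txt"):
    with open(os.path.join(tmp, name), "w") as f:
        f.write("sample data\n")

# "tar -cvf" analog: pack the files into bob.tar
tarpath = os.path.join(tmp, "bob.tar")
with tarfile.open(tarpath, "w") as tar:
    for name in ("bob1.txt", "bob2.txt"):
        tar.add(os.path.join(tmp, name), arcname=name)

# "tar -tvf" analog: list the table of contents
with tarfile.open(tarpath) as tar:
    print(tar.getnames())  # ['bob1.txt', 'bob2.txt']

# "gzip" analog: compress bob.tar into bob.tar.gz
with open(tarpath, "rb") as src, gzip.open(tarpath + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
print(os.path.exists(tarpath + ".gz"))  # True
```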
You can learn more about these and other data input/output tasks in the dataio tutorial (from Jeanette Barnes at NOAO) located in exercises in your home directory.
Visit http://iraf.noao.edu/ and http://iraf.net/ (searching for commands using "apropos" can be helpful in finding names of tasks by words describing their function). Thanks to Jeanette Barnes at NOAO for her series of IRAF tutorials, which you can explore (see exercises in your home directory), and on which I leaned heavily in preparing this webpage.
Andy Layden, 2007 May 31.