Progress?

I have made slight headway regarding my installation of CESM. It still isn’t running, but now it’s not running for a different reason than previously! Progress!

It appears that, at some point while porting, I mangled the scripts/ccsm_utils/Machines/mkbatch.kate file for my machine such that the actual call to launch the model wasn’t getting copied from mkbatch.kate to test.kate.run. A bit of trial and error fixed that problem.
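
For the record, the relevant piece of mkbatch.kate is a heredoc that appends the launch command to the generated run script. Here is a rough sketch of what that fragment should look like (variable names approximated from memory, not copied from the real script):

cat >> ${CASEROOT}/${CASE}.${MACH}.run << EOF1
# this is the line that was getting lost in my porting edits;
# \$LID is escaped so it expands at run time, not here
mpiexec -n 16 ./ccsm.exe >&! ccsm.log.\$LID
EOF1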

I finally got Torque working. The only reason that jobs were getting stuck in the queue was that I didn’t start the pbs_sched daemon! It turns out that qsub isn’t related to the problems I was having, and isn’t necessary to run the model, but it’s nice to have it working just in case I need it in the future.
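
For anyone else who hits this: Torque needs all three of its daemons running before submitted jobs will leave the queue. A minimal sketch for a single-machine setup (on Ubuntu the torque packages may wrap these in init scripts):

sudo pbs_server   # manages the queue
sudo pbs_mom      # executes jobs on the compute node
sudo pbs_sched    # the scheduler daemon I had forgotten to start
qstat -q          # confirm the queue is now being serviced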

So, with the relevant call in test.kate.run as

mpiexec -n 16 ./ccsm.exe >&! ccsm.log.$LID

the command line output is

Wed July 6 11:02:33 EDT 2011 -- CSM EXECUTION BEGINS HERE
Wed July 6 11:02:34 EDT 2011 -- CSM EXECUTION HAS FINISHED
ls: No match.
Model did not complete - no cpl.log file present - exiting

The only log file created is ccsm.log, and it is completely empty.

I have MPICH2 installed, the command mpiexec seems to work fine, and I have mpd running. Regardless, I tried taking out mpiexec and calling the executable directly in test.kate.run:

./ccsm.exe >&! ccsm.log.$LID

The command line output becomes

Wed July 6 11:02:33 EDT 2011 -- CSM EXECUTION BEGINS HERE
Segmentation fault.
Wed July 6 11:02:34 EDT 2011 -- CSM EXECUTION HAS FINISHED
ls: No match.
Model did not complete - no cpl.log file present - exiting

Again, ccsm.log is empty, and there seems to be no trace of why the model is failing to launch beyond Segmentation fault. The CESM guide recommends setting the stack size to unlimited, which I did to no avail. Submitting test.kate.run using qsub produces the same messages, but in the output and error files, rather than the terminal.
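
For reference, the stack size change is a one-liner. The run scripts are csh (hence the >&! redirect above), but the bash form is shown too:

limit stacksize unlimited    # csh/tcsh
ulimit -s unlimited          # bash equivalent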

Thoughts?

Climate Models on Ubuntu

Part 1: Model E

I felt a bit over my head attempting to port CESM, so I asked a grad student, who had done his Master’s on climate modelling, for help. He looked at the documentation, scratched his head, and suggested I start with NASA’s Model E instead, because it was easier to install. And was it ever! We had it up and running within an hour or so. It was probably so much easier because Model E comes with gfortran support, while CESM only has scripts written for commercial compilers like Intel or PGI.

Strangely, when using Model E, no matter what dates the rundeck sets for the simulation start and end, the subsequently generated I file always has December 1, 1949 as the start date and December 2, 1949 as the end date. We edited the I files after they were created, which seemed to fix the problem, but it was still kind of weird.
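
For anyone trying the same workaround: the dates live in the I file’s INPUTZ namelist, so a search-and-replace does the trick. This is purely illustrative – the parameter names (YEARI, YEARE, and friends) are from my memory of the Model E rundeck format, so check your own I file first:

# push the end date out to get a ten-year run (names assumed):
sed -i 's/YEARE=1949/YEARE=1959/' I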

I set up Model E to run a ten-year simulation with fixed atmospheric concentration (really, I just picked a rundeck at random) over the weekend. It took it about 3 days to complete, so just over 7 hours per year of simulation time…not bad for a 32-bit desktop!

However, I’m having some weird problems with the output – after configuring the model to output files in NetCDF format and opening them in Panoply, only the file with all the sea ice variables worked. All the others either gave a blank map (array full of N/A’s) or threw errors when Panoply tried to read them. Perhaps the model isn’t enjoying having the I file edited?

Part 2: CESM

After exploring Model E, I felt like trying my hand at CESM again. Steve managed to port it onto his Macbook last year, and took detailed notes. Editing the scripts didn’t seem so ominous this time!

The CESM code can be downloaded using Subversion (instructions here) after a quick registration. Using the Ubuntu Software Center, I downloaded some necessary packages: libnetcdf-dev, mpich2, and torque-scheduler. I already had gfortran, which is sort of essential.
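
In terminal form, that whole setup stage is only a couple of commands. The Subversion URL below is the CESM 1.0 release repository as I remember it, so double-check it against the registration instructions:

svn co https://svn-ccsm-release.cgd.ucar.edu/model_versions/cesm1_0 cesm1_0
sudo apt-get install libnetcdf-dev mpich2 torque-scheduler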

I used the Porting via user defined machine files method to configure the model for my machine, using the Hadley scripts as a starting point. The variables in config_machines.xml are explained in Appendices D through H of the user’s guide (links in chapter 7). Mostly, you’re just pointing to folders where you want to store data and files. Here are a few exceptions (a sketch of the resulting entry follows the list):

  • DOUT_L_HTAR: I stuck with "TRUE", as that was the default.
  • CCSM_CPRNC: this tool already exists in the CESM source code, in /models/atm/cam/tools/cprnc.
  • BATCHQUERY and BATCHSUBMIT: the Hadley entry had “qstat” and “qsub”, respectively, so I Googled these terms to find out which batch submission software they referred to (Torque, which is freely available in the torque-scheduler package) and downloaded it so I could keep the commands the same!
  • GMAKE_J: this sets how many parallel processes make uses during the build (its -j flag). I wasn’t sure how many cores this machine had, so I just put “1”.
  • MAX_TASKS_PER_NODE: I chose "8", which the user’s guide had mentioned as an example.
  • MPISERIAL_SUPPORT: the default is “FALSE”.
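
Putting those choices together, a machine entry ends up looking something like the sketch below. This is not my actual entry – the attribute layout is approximated from the generic Linux entries that ship with CESM, and I’ve left out all the path variables:

<machine MACH="kate"
         DESC="32-bit Ubuntu desktop"
         BATCHQUERY="qstat"
         BATCHSUBMIT="qsub"
         GMAKE_J="1"
         MAX_TASKS_PER_NODE="8"
         MPISERIAL_SUPPORT="FALSE"
         DOUT_L_HTAR="TRUE" />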

The only file that I really needed to edit was Macros.<machine name>. The env_machopts.<machine name> file ended up being empty for me. I spent a while confused by the modules declarations, which turned out to refer to the Environment Modules software. Once I realized that, for this software to be helpful, I would have to write five or six modulefiles in a language I didn’t know, I decided that it probably wasn’t worth the effort, and took these declarations out. I left mkbatch.<machine name> alone, except for the first line which sets the machine, and then turned my attention to Macros.

“Getting this to work will be an iterative process”, the user’s guide says, and it certainly was (and still is). It’s never a good sign when the installation guide reminds you to be patient! Here is the sequence of each iteration (a consolidated transcript follows the list):

  1. Edit the Macros file as best I can.
  2. Open up the terminal, cd to cesm1_0/scripts, and create a new case as follows: ./create_newcase -case test -res f19_g16 -compset X -mach <machine name>
  3. If this works, cd to test, and run configure: ./configure -case
  4. If all is well, try to build the case: ./test.<machine name>.build
  5. See where it fails and read the build log file it refers to for ideas as to what went wrong. Search on Google for what certain errors mean. Do some other work for a while, to let the ideas simmer.
  6. Set up for the next case: ./test.<machine name>.clean_build, cd .., and rm -rf test. This clears out old files so you can safely build a new case with the same name.
  7. See step 1.
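
For anyone following along, here is one full iteration as a plain transcript, with my machine name (“kate”, as in the file names above) filled in:

cd cesm1_0/scripts
./create_newcase -case test -res f19_g16 -compset X -mach kate
cd test
./configure -case
./test.kate.build         # when this fails, read the build log it points to
./test.kate.clean_build   # clean up before the next attempt
cd ..
rm -rf test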

I wasn’t really sure what the program paths were, as I couldn’t find a nicely contained folder for each one (like Windows has in “Program Files”), but I soon stumbled upon a nice little trick: look up the package on Ubuntu Package Manager, and click on “list of files” under the Download section. That should tell you what path the program used as its root.
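
Incidentally, once a package is installed, the same file list is available offline – dpkg’s -L option prints every path a package owns:

dpkg -L libnetcdf-dev    # lists every file the package installed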

I also discovered that setting FC and CC to gfortran and gcc, respectively, in the Macros file will throw errors. Instead, leave the variables as mpif90 and mpicc, which are linked to the GNU compilers. For example, when I type mpif90 in the terminal, the result is gfortran: no input files, just as if I had typed gfortran. For some reason, though, the errors go away.
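
You can see what the wrapper actually does with MPICH2’s -show option, which prints the underlying compiler command instead of running it:

mpif90 -show    # prints the gfortran command line the wrapper would run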

As soon as I made it past building the mct and pio libraries, the build logs for each component (eg atm, ice) started saying gmake: command not found. This is one of the pitfalls of building on Linux: GNU make is installed simply as make, while many other Unix systems (evidently including the machines CESM is developed on) call the very same program gmake. So I needed to find and edit all the scripts that called gmake, or generated other scripts that called it, and so on. “There must be a way to automate this,” I thought, and from this article I found out how. In the terminal, cd to the CESM source code folder, and type the following:

grep -lr -e 'gmake' * | xargs sed -i 's/gmake/make/g'

You should only have to do this once. It’s case sensitive, so it will leave the xml variable GMAKE_J alone.
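
An alternative sketch I haven’t tested: leave the scripts alone and give GNU make a second name on the system path instead (undo it with rm if it causes trouble):

sudo ln -s /usr/bin/make /usr/local/bin/gmake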

Then I turned my attention to compiler flags, which Steve chronicled quite well in his notes (see link above). I made most of the same changes that he did, except I didn’t need to change -DLINUX to -DDarwin. I still needed a few more flags, though. In the terminal, man gfortran brings up a list of all of gfortran’s options, which was helpful.

The ccsm build log had hundreds of undefined reference errors as soon as it started to compile Fortran. The way I understand it, gfortran appends an underscore to each Fortran symbol name when it generates object code, so when one file references a routine that was compiled under a different naming convention, the linker can’t match the two names up! You can suppress the underscores with the flag -fno-underscoring.
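
You can watch the name mangling happen with a two-line experiment (demo.f90 is just a throwaway file):

echo 'subroutine hello; end' > demo.f90
gfortran -c demo.f90 && nm demo.o                     # symbol appears as hello_
gfortran -c -fno-underscoring demo.f90 && nm demo.o   # symbol appears as hello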

Now I am stuck on a new error. It looks like the ccsm script is almost reaching the end, as it’s using ld, the GNU linker, to tie all the object files together. Then the build log says:

/usr/bin/ld: seq_domain_mct.o(.debug_info+0x1c32): unresolvable R_386_32 relocation against symbol 'mpi_fortran_argv_null'
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status

I’m having trouble finding articles on the internet about similar errors, and the gcc and ld manpages are so long that trying every compiler flag isn’t really an option. Any ideas?

Update: Fixed it! In scripts/ccsm_utils/Build/Makefile, I changed LD := $(F90) to LD := gcc -shared. The build was finally successful! Now off to try and run it…

The good thing is that, since I restarted this project a few days ago, I haven’t spent very long stuck on any one error. I’m constantly having problems, but I move through them pretty quickly! In the meantime, I’m learning a lot about the model and how everything fits together during installation. I’ve also come a long way with Linux programming in general. Considering that when I first installed Ubuntu a few months ago I sheepishly called my friend to ask where to find the command line, I’m quite proud of my progress!

I hope this article will help future Ubuntu users install CESM, as it seems to have a few quirks that even Mac OS X doesn’t experience (eg make vs gmake). For the rest of you, apologies if I have bored you to tears!

Models and Books

Working as a summer student continues to be rewarding. I get to spend all day reading interesting things and playing with scientific software. What a great deal!

Over the weekend, I ran the “Global Warming_01” simulation from EdGCM, which is an old climate model from NASA with a graphical user interface. Strangely, they don’t support Linux, as their target audience is educators – I doubt there are very many high school teachers running open-source operating systems! So I ran the Windows version on my laptop, and it took about 36 hours. It all felt very authentic.

Unfortunately, as their Windows 7 support is fairly new, there were some bugs in the output. It refused to give me any maps at all! The terminal popped up for a few seconds, but it didn’t output any files. All I could get were zonal averages (and then only from January-March) and time series. Also, for some reason, none of the time series graphs had units on the Y axis. Anyway, here are some I found interesting:

CO2 concentrations increase linearly from 1958 to 2000, and then exponentially until 2100, with a doubling of CO2 (with respect to 1958) around 2062. (This data was output as a spreadsheet, and I got Excel to generate the graph, so it looks nicer than the others.)

Global cloud cover held steady until around 2070, when it decreased. I can’t figure out why this would be, as the water vapour content of the air should be increasing with warming – wouldn’t there be more clouds forming, not fewer?

Global precipitation increased, as I expected. This is an instance where I wish the maps would have worked, because it would be neat to look at how precipitation amount varied by location. I’ve been pretty interested in subtropical drought recently.

Albedo decreased about 1% – a nice example of the ice-albedo feedback (I presume) in action.

I also ran a simulation of the Last Glacial Maximum, 21 thousand years ago. This run was much quicker than the first: since it was modelling a stable climate, it only simulated a decade rather than 150 years. It took a few hours, and the same output bugs were apparent. Time series graphs are less useful when studying stable conditions, but I found the albedo graph interesting:

Up a few percent from modern values, as expected.

It’s fairly expensive to purchase a licence for EdGCM, but they offer a free 30-day trial that I would recommend. I expect that it would run better on a Mac, as that’s what they do most of the software development and testing on.

Now that I’ve played around with EdGCM, I’m working on porting CESM to a Linux machine. There’s been trial and error at every step, but everything went pretty smoothly until I reached the “build” phase, which requires the user to edit some of the scripts to suit the local machine (step 3 of the user guide). I’m still pretty new to Linux, so I’m having trouble working out the correct program paths, environment variables, modules, and so on. Oh well, the more difficult it is to get working, the more exciting it is when success finally comes!

I am also doing lots of background reading, as my project for the summer will probably be “some sort of written something-or-other” about climate models. Steve has a great collection of books about climate change, and keeps handing me interesting things to read. I’m really enjoying The Warming Papers, edited by David Archer and Ray Pierrehumbert. The book is a collection of landmark papers in climate science, with commentary from the editors. It’s pretty neat to read all the great works – from Fourier to Broecker to Hansen – in one place. Next on my list is A Vast Machine by Paul Edwards, which I’m very excited about.

A quick question, unrelated to my work – why do thunderstorms tend to happen at night? Perhaps it’s just a fluke, but we’ve had a lot of them recently, none of which have been in the daytime. Thoughts?