Running CAM3 on yellowstone

eaton

CSEG and Liaisons
I have gotten CAM3 (from the cam3.1.p2 release) to run in MPI mode on
yellowstone.  I began by following the standard CAM3 build process, that
is, letting the Makefile use the default compilers (pgf90 and pgcc for
linux) and specifying the locations of the MPI header files and library.  I
got a successful build, but encountered strange MPI failures when trying to
use the executable.  After some experimentation I've concluded that the
best approach is to use the compiler wrapper scripts supported by CISL
(which is of course what they would recommend).  In a nutshell, this means
using module commands to set up the programming environment and using
mpif90 and mpicc to build and link the executable.  Note that this deals
not only with the MPI link issues, but with the NetCDF library as well.
Since the vast majority of our testing on linux platforms at the time CAM3
was released used the PGI compilers, I chose to stick with PGI.
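For concreteness, here is roughly what that looks like in practice.  The
module names below are my illustration, not taken from CISL documentation,
so check "module avail" on the system for the actual names and versions:

    # Set up the PGI programming environment (module names are illustrative).
    module purge
    module load ncarenv pgi ncarcompilers netcdf

    # Build and link with the wrappers rather than pgf90/pgcc directly;
    # the wrappers add the MPI and NetCDF include and link flags for you.
    mpif90 -c some_module.F90
    mpif90 -o cam *.o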

In order to use mpif90/mpicc without specifying any MPI or NetCDF paths,
several changes were required in the configure script and the Makefile
template.  I also modified the ioFileMod.F90 file to remove the option of
using msread to fetch a file from the MSS.  Since msread is only used to
retrieve boundary datasets from the MSS, nothing is lost as long as all the
boundary datasets are on the local filesystem.
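The intent of the configure/Makefile changes, sketched as a hypothetical
invocation (the actual option names accepted by the modified configure
script may differ):

    # Point the build at the wrappers, and omit the explicit MPI and
    # NetCDF path options (e.g. -mpi_inc/-mpi_lib/-nc_inc/-nc_lib),
    # since the wrappers now supply those flags themselves.
    ./configure -fc mpif90 -cc mpicc
    gmake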

I've included a sample bash script for running on yellowstone along with
the modified files in an attached zip file.
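The attached script is the one to use; for a rough idea of its shape, a
minimal LSF batch script for yellowstone might look like the following
(the project code, queue, task count, and file names are placeholders):

    #!/bin/bash
    #BSUB -P PROJECT0001      # placeholder project code
    #BSUB -q regular          # batch queue
    #BSUB -n 32               # number of MPI tasks
    #BSUB -W 1:00             # wall-clock limit
    #BSUB -J cam3
    #BSUB -o cam3.%J.out
    #BSUB -e cam3.%J.err

    # Load the same programming environment used for the build
    # (module names are illustrative).
    module load ncarenv pgi ncarcompilers netcdf

    # Launch the executable under LSF's MPI job launcher.
    mpirun.lsf ./cam < namelist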
 

mai

Member
I tried the equivalent procedure on the cam3.0.p1 code and found that, in order to run beyond 8743 time steps, I needed to include the following among the MPI environment variable settings in the run script:

export MP_EUIDEVELOP=min

With this setting, I can use four nodes and get reasonable speed (about 4.5 sec/day).
 