Libraries and utilities needing to be built when porting CESM2.1

I am installing cesm2.1.3-rc.01-0-g0596a97

The XML config files refer to mpi-serial and to cprnc, so I think I need to build these.

1. Is there any other utility/library in the installation that I should also build before I can meaningfully run tests, please?

2. mpi-serial
I see the code in my_cesm_sandbox/cime/src/externals/mct/mpi-serial

I use an Intel 2017 compiler, and note that the README says 4 bytes is the default, so is the following likely to be correct?

CC=icc FC=ifort ./configure --enable-fort-real=8 --enable-fort-double=16
(Do you happen to know if any other flags are desirable for Intel compilation here?)

make
make tests

3. cprnc
Can this run with the parallel netcdf library I had built, or do I need a sequential one for this?
(the README refers to mpi-serial - hence I assume I need to build mpi-serial first.)

Thanks
 

jedwards

CSEG and Liaisons
Staff member
mpi-serial will be built automatically when you build a case with this feature; you don't need to do anything special for it.
cprnc should be built and installed in a location pointed to by the CCSM_CPRNC variable in config_machines.xml. There is a README in the cprnc directory with build instructions. It is a serial tool.
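For illustration, a minimal sketch of what that entry might look like inside your machine's block in config_machines.xml (the path here is hypothetical; point it at wherever the built cprnc executable lives):

<CCSM_CPRNC>/path/to/tools/cprnc/cprnc</CCSM_CPRNC>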
 
Thanks for explaining mpi-serial

README says: "To change the compiler or MPI library used, or to enable debugging options, set the COMPILER, MPILIB, or DEBUG environment variables prior to running configure."

I am building the model with Intel 2017 (with only parallel netCDF built), but I have a non-parallel netCDF library for Intel 2016, and I'd like to use that to avoid rebuilding HDF5 and netCDF for serial I/O.

If I were building CPRNC outside of CIME, then I'd do something like:

module load intel2016
module load intel2016-netcdf
CC=icc FC=ifort ./configure etc
make


a) Do these modules have to be added to my config_machines file?

b) Can the CPRNC utility then be used from a model environment where I'd have a different intel module loaded?


Thanks
 

jedwards

CSEG and Liaisons
Staff member
There is a readme in the cprnc directory explaining the process to build it.

On cime supported systems you can generate a Macros file using the following
(assuming you are running the command from the directory cime/tools/cprnc):

CIMEROOT=../.. ../configure --macros-format=Makefile --mpilib=mpi-serial

To change the compiler or MPI library used, or to enable debugging options,
set the COMPILER, MPILIB, or DEBUG environment variables prior to running
configure.

Next, run make to build cprnc. For instance, using sh/bash as a login shell:

CIMEROOT=../.. source ./.env_mach_specific.sh && make

Finally, put the resulting executable in CCSM_CPRNC as defined in
config_machines.xml.
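
Putting those steps together, a sketch of the whole sequence might look like the following (the compiler choice is just an example; adjust for your machine):

cd cime/tools/cprnc
export COMPILER=intel
CIMEROOT=../.. ../configure --macros-format=Makefile --mpilib=mpi-serial
CIMEROOT=../.. source ./.env_mach_specific.sh && make

Then set CCSM_CPRNC in config_machines.xml to the path of the resulting cprnc executable.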
 
In case anyone else runs into the same confusion I had when I sent this query in: I'm porting CESM to a cluster and still learning how CESM is put together.

Getting to the status of "cime supported" was therefore where I was struggling: working out which modules were needed, and how they are controlled, so that the utility builds as described.

I did the following and cprnc seems happily built and tested.

1. Using the Intel 2017 compilers, I built serial versions of the libraries I had previously built for parallel netCDF (netcdf-f 4.5.3, netcdf-c 4.7.4, hdf 1.8.9) and made a corresponding module, seq_netcdff/4.5.3; for the sequential libraries I used a shared library for netcdff.

2. In my config_machines.xml I added a branch of modules for mpi-serial:
<modules mpilib="intelmpi">
  <command name="rm">intel</command>
  <command name="load">intel/2017u4</command>
  <command name="use">/exports/csce/eddie/geos/groups/cesd/modules/intel_2017_u4</command>
  <command name="rm">parallel_netcdff/4.5.3</command>
  <command name="rm">seq_netcdff/4.5.3</command>
  <command name="load">parallel_netcdff/4.5.3</command>
</modules>
<modules mpilib="mpi-serial">
  <command name="use">/exports/csce/eddie/geos/groups/cesd/modules/intel_2017_u4</command>
  <command name="rm">parallel_netcdff/4.5.3</command>
  <command name="rm">intel</command>
  <command name="load">intel/2017u4</command>
  <command name="load">seq_netcdff/4.5.3</command>
  <command name="load">lapack/3.9.0</command>
</modules>
And I added mpi-serial to the MPILIBS element.
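For illustration, a sketch of what that element might look like (the exact list of MPI libraries depends on your machine entry):

<MPILIBS>intelmpi,mpi-serial</MPILIBS>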

3. In my config_compilers.xml I distinguished between mpi-serial and intelmpi, making sure the serial case avoided the Intel MKL libraries that call into MPI.
<SLIBS>
  <append MPILIB="intelmpi"> -L${NETCDF_FORTRAN_PATH}/lib -lnetcdff</append>
  <append MPILIB="intelmpi"> -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread </append>
  <append MPILIB="intelmpi"> $SHELL{${NETCDF_C_PATH}/bin/nc-config --libs}</append>
  <append MPILIB="mpi-serial"> -L${LAPACK_LIBDIR} -lrefblas -llapack</append>
</SLIBS>

I saw that the system config_compilers has notes about mpi-serial, explaining how the make finds the mpi-serial code.

4. I also had to find dtypes.h and copy it from a model case to the cprnc directory, which did puzzle me.
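Roughly what I did, with hypothetical paths (dtypes.h turned up under an existing case's build directory):

cp $(find /path/to/some_case/bld -name dtypes.h | head -n 1) my_cesm_sandbox/cime/tools/cprnc/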

Then it built as documented for a cime-supported system.
 

jedwards

CSEG and Liaisons
Staff member
Thanks - the dtypes.h file should have been found when building cprnc; I'm not sure why it was not.
 