I got something to work. It may or may not be ideal, but it seems to do the job for now. Process documented below.
-----------------------------------------
Oct 4, 2018
geyser and caldera no longer support multi-process runs
pronghorn does not have esmf built with mpi
The process below gets this working on cheyenne, running the mapping tool on multiple processors in an interactive batch session
-----------------------------------------
- Checked out cime master (9ca93cab67369e1d40d15) on Oct 3, 2018 and copied out mapping tool
git clone https://github.com/ESMCI/cime cime.181003
cp -p -r cime.181003/tools/mapping mapping_cime
cd mapping_cime
-----------------------------------------
- Build check_maps, on cheyenne login node
module load esmf_libs/7.0.0
module load esmf-7.1.0r-ncdfio-uni-O
cd check_maps/src
gmake VERBOSE=TRUE
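- (optional) the checker can also be run by hand on finished map files later; a rough sketch, from the
check_maps directory, assuming the wrapper script is check_map.sh and using the aave map file name from
the run step below (script name, argument style, and any ESMF environment it needs are from memory, so
verify against your checkout):
./check_map.sh ../gen_mapping_files/map_ar2v3_TO_TL319_aave.181004.nc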
-----------------------------------------
- Build gen_domain, on cheyenne login node
cd gen_domain_files/src
setenv USER_FFLAGS -DAIX
gmake
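- quick check that the executable landed where the run step below expects it (../gen_domain relative to src):
ls -l ../gen_domain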
-----------------------------------------
- Modify gen_mapping tool, on cheyenne login node
cd gen_mapping_files
- edit gen_cesm_maps.sh
add "--large_file" to line 117 of gen_cesm_maps.sh where $make_map_exe is defined (see the sketch below, after the create_ESMF_map.sh edit)
- edit gen_ESMF_mapping_file/create_ESMF_map.sh
change at about line 327
../../../configure --clean
../../../configure --mpilib mpi-serial
. .env_mach_specific.sh
module swap mpt mpt/2.15f
module swap esmf-7.0.0-defio-mpi-O esmf-7.0.0-ncdfio-mpi-O
to
# ../../../configure --clean
# ../../../configure --mpilib mpi-serial
# . .env_mach_specific.sh
# module swap mpt mpt/2.15f
# module swap esmf-7.0.0-defio-mpi-O esmf-7.0.0-ncdfio-mpi-O
module load esmf_libs/7.0.0
module load esmf-7.1.0r-ncdfio-mpi-O
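- for the line 117 edit above, the change just appends the flag to the existing $make_map_exe invocation;
illustrative only, the surrounding arguments are placeholders rather than the literal file contents:
$make_map_exe <existing arguments>
to
$make_map_exe <existing arguments> --large_file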
-----------------------------------------
- Run gen_mapping tool on a cheyenne batch node
- from cheyenne login node, start interactive session on batch node,
qsub -I -l select=1:ncpus=36:mpiprocs=36 -l walltime=01:00:00 -q regular -A [account]
cd mapping_cime/gen_mapping_files
set fatm = "/glade/p/cesmdata/cseg/mapping/grids/TL319.151007.nc"
set tatm = global
set natm = TL319
set focn = "/glade/p/cesmdata/cseg/mapping/grids/ar2v3_150330.nc"
set tocn = regional
set nocn = ar2v3
setenv MPIEXEC "mpiexec_mpt -np 36"
./gen_cesm_maps.sh -fatm $fatm -focn $focn -natm $natm -nocn $nocn -tatm $tatm -tocn $tocn
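- before logging out, a quick check that the map files were written (the aave file name should match the
one passed to gen_domain below):
ls -l map_*.nc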
- logout of batch node
exit
-----------------------------------------
- Run gen_domain if needed
cd gen_domain_files
./gen_domain -m ../gen_mapping_files/map_ar2v3_TO_TL319_aave.181004.nc -o ar2v3 -l TL319
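- quick look at what gen_domain wrote; the domain.* names below follow the tool's usual convention and the
exact file names/date stamp may differ:
ls -l domain.*.nc
ncdump -h domain.ocn.ar2v3.181004.nc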
-----------------------------------------
- Review test_map and domain files
via whatever process you prefer; I happen to use ferret to look at the fields (sketch below)
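- a minimal ferret session for a first look; ferret availability on cheyenne and the file/variable names
here (frac is typical for domain files) are assumptions, so start from "show data" and adjust:
ferret
yes? use "domain.ocn.ar2v3.181004.nc"
yes? show data
yes? shade frac
yes? quit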
-----------------------------------------------------------------------
Notes:
-----------------------------------------
- This worked for me on Oct 4, 2018. There may be other, better ways to do the same.
- Have not tested serial capability on geyser, caldera, pronghorn, or cheyenne
- Have not tested batch capability
"-b" option in gen_cesm_maps.sh is rejected even though -h says its valid. There
seems to be no support for batch runs in the current implementation. It should be
possible (easy?) to run this in batch mode rather than via an interactive batch
session.
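- untested sketch of a non-interactive batch submission, using the same resources and commands as the
interactive session above; the script path, account string, and whether any modules need loading inside
the job are the parts most likely to need adjustment:
#!/bin/tcsh
#PBS -N gen_cesm_maps
#PBS -A [account]
#PBS -q regular
#PBS -l select=1:ncpus=36:mpiprocs=36
#PBS -l walltime=01:00:00
#PBS -j oe
cd /path/to/mapping_cime/gen_mapping_files
setenv MPIEXEC "mpiexec_mpt -np 36"
./gen_cesm_maps.sh -fatm /glade/p/cesmdata/cseg/mapping/grids/TL319.151007.nc \
                   -focn /glade/p/cesmdata/cseg/mapping/grids/ar2v3_150330.nc \
                   -natm TL319 -nocn ar2v3 -tatm global -tocn regional
submit with: qsub <scriptname>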
-----------------------------------------
My cheyenne interactive login node modules were
>module list
Currently Loaded Modules:
1) ncarenv/1.2 3) ncarcompilers/0.4.1 5) netcdf/4.6.1 7) esmf-7.1.0r-ncdfio-uni-O
2) intel/17.0.1 4) mpt/2.15f 6) esmf_libs/7.0.0
-----------------------------------------
My cheyenne batch node modules were
>module list
Currently Loaded Modules:
1) ncarenv/1.2 2) intel/17.0.1 3) ncarcompilers/0.4.1 4) mpt/2.15f 5) netcdf/4.6.1
-----------------------------------------