Hi,
I'm trying to reproduce Abbot et al. (2008, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2007GL032286), but with a recent release of CESM (v2.1.3) configured as a single-column model (SCAM). As I understand it, the FSCAM compset provided with the release uses prescribed SSTs and sea ice, so I modified the compset to include an interactive thermodynamic sea ice model and a slab ocean, using the following long name:
2000_CAM60%SCAM_CLM50%SP_CICE_DOCN%SOMAQP_SROF_SGLC_SWAV
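In case it helps, I created the case along these lines (the case name is just a placeholder, and T42_T42 is my assumption for the standard SCAM grid; the compset is the long name above):

./create_newcase --case scam_som_test --compset 2000_CAM60%SCAM_CLM50%SP_CICE_DOCN%SOMAQP_SROF_SGLC_SWAV --res T42_T42 --run-unsupported
./case.setup
./case.build
./case.submit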
However, when I run this on Cheyenne, I get the following error in the run/cesm.log file:
Invalid PIO rearranger comm max pend req (comp2io), 0
Resetting PIO rearranger comm max pend req (comp2io) to 64
PIO rearranger options:
comm type =
p2p
comm fcd =
2denable
max pend req (comp2io) = 0
enable_hs (comp2io) = T
enable_isend (comp2io) = F
max pend req (io2comp) = 64
enable_hs (io2comp) = F
enable_isend (io2comp) = T
(seq_comm_setcomm) init ID ( 1 GLOBAL ) pelist = 0 0 1 ( npes = 1) ( nthreads = 1)( suffix =)
(seq_comm_setcomm) init ID ( 2 CPL ) pelist = 0 0 1 ( npes = 1) ( nthreads = 1)( suffix =
(seq_comm_setcomm) init ID ( 5 ATM ) pelist = 0 0 1 ( npes = 1) ( nthreads = 1)( suffix =)
(seq_comm_joincomm) init ID ( 6 CPLATM ) join IDs = 2 5 ( npes = 1) ( nthreads = 1)
(seq_comm_jcommarr) init ID ( 3 ALLATMID ) join multiple comp IDs ( npes = 1) ( nthreads = 1)
(seq_comm_joincomm) init ID ( 4 CPLALLATMID ) join IDs = 2 3 ( npes = 1) ( nthreads = 1)
(seq_comm_setcomm) init ID ( 9 LND ) pelist = 0 0 1 ( npes = 1) ( nthreads = 1)( suffix =)
(seq_comm_joincomm) init ID ( 10 CPLLND ) join IDs = 2 9 ( npes = 1) ( nthreads = 1)
(seq_comm_jcommarr) init ID ( 7 ALLLNDID ) join multiple comp IDs ( npes = 1) ( nthreads = 1)
(seq_comm_joincomm) init ID ( 8 CPLALLLNDID ) join IDs = 2 7 ( npes = 1) ( nthreads = 1)
MPI_Group_range_incl: more than 1 proc in group
forrtl: error (76): Abort trap signal
The problem appears to be related to the number of processors assigned to the job, but I don't know how to interpret this error message any further. I naively tried changing potentially relevant case XML variables such as NTASKS and NTHRDS (roughly as shown below), but so far varying them between 1 and 2 has not helped. I would appreciate any guidance if someone knows how to proceed.
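For reference, this is what a typical attempt looked like (the values shown are one combination I tried; I re-ran case.setup and rebuilt before each submission):

./xmlchange NTASKS=1,NTHRDS=1
./xmlquery NTASKS,NTHRDS
./case.setup --reset
./case.build
./case.submit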
Thank you,
Osamu