
CESM 1.2.2.1 with B_1850_CN throws NetCDF errors, then crashes


With compset B_1850_CN, CESM runs fail shortly after starting, and the output contains many NetCDF runtime errors like the following:



Code:
NetCDF: Variable not found
NetCDF: Invalid dimension ID or name
NetCDF: Attribute not found




This does not occur with compset X, which runs problem-free; I will soon try my luck with other compsets.



Is there a workaround for this apparent incompatibility with B_1850_CN, and perhaps with other compsets?


 

jedwards

CSEG and Liaisons
Staff member
There are normally many warnings printed from NetCDF as the model navigates input files: some attributes, variables, and dimensions may or may not be present in a given input file, and querying for them generates these messages from the NetCDF library. This is probably not what is causing your run to fail.
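For example, here is a minimal sketch using the NetCDF C API (the file name "inputdata.nc" and variable name "FSNO" are placeholders, not names from your run). Probing for a variable that is not in the file returns NC_ENOTVAR, whose error string is exactly "NetCDF: Variable not found", and the caller is free to log that and carry on:

Code:
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, varid, status;

    if ((status = nc_open("inputdata.nc", NC_NOWRITE, &ncid)) != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        return 1;
    }

    /* Probe for an optional variable; its absence is not fatal. */
    status = nc_inq_varid(ncid, "FSNO", &varid);
    if (status == NC_ENOTVAR)
        printf("%s\n", nc_strerror(status)); /* "NetCDF: Variable not found" */

    nc_close(ncid);
    return 0;
}

NC_EBADDIM and NC_ENOTATT produce the other two messages you quoted, "NetCDF: Invalid dimension ID or name" and "NetCDF: Attribute not found", in the same way.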
 
I greatly appreciate your quick response, but the example in the CESM1.2 User's Guide suggests that B_1850_CN should work. Is there a workaround, or is the documentation simply incorrect?
 

jedwards

CSEG and Liaisons
Staff member
I am not arguing that you don't have an error, only that the NetCDF messages you are seeing are not the problem. You need to revisit the cesm.log file and identify the true error.
 
This is the first error in cesm.log.*:


Code:
MCT::m_Router::initp_: GSMap indices not increasing...Will correct
MCT::m_Router::initp_: RGSMap indices not increasing...Will correct
MCT::m_Router::initp_: RGSMap indices not increasing...Will correct
MCT::m_Router::initp_: GSMap indices not increasing...Will correct

 Warning: Departure points out of bounds in remap
 my_task, i, j =          58          11          94
 dpx, dpy =  -4.73558955121005       -5299032.57915055
 HTN(i,j), HTN(i+1,j) =   13754.2596494342        13886.8891159210
 HTE(i,j), HTE(i,j+1) =   54938.2805929320        55218.1871305735
 istep1, my_task, iblk =           1          58           2
 Global block:         123
 Global i and j:         270         381
(shr_sys_abort) ERROR: remap transport: bad departure points
(shr_sys_abort) WARNING: calling shr_mpi_abort() and stopping
application called MPI_Abort(MPI_COMM_WORLD, 1001) - process 58

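To put those numbers in scale: dpy is about 5.3e6 m against a cell height (HTE) of about 5.5e4 m, so the computed departure point lies nearly 100 grid cells away after a single step. A rough sketch of the kind of bounds check that aborts here (the real check is in CICE's Fortran remap transport code; this C fragment only reproduces the arithmetic with the values from the log):

Code:
#include <math.h>
#include <stdio.h>

int main(void) {
    double dpx = -4.73558955121005;  /* departure distance in x (m), from the log */
    double dpy = -5299032.57915055;  /* departure distance in y (m), from the log */
    double HTN = 13754.2596494342;   /* length of the cell's north edge (m) */
    double HTE = 54938.2805929320;   /* length of the cell's east edge (m)  */

    /* A departure point farther away than the local cell dimensions
     * cannot be remapped, so the model calls shr_sys_abort. */
    if (fabs(dpx) > HTN || fabs(dpy) > HTE)
        printf("bad departure point: |dpy|/HTE = %.1f cells\n", fabs(dpy) / HTE);
    return 0;
}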


My PE layout is NTASKS_*=64, NTHRDS_*=1, ROOTPE_*=0, with MAX_TASKS_PER_NODE=64 and PES_PER_NODE=64.
I built with parallel-netcdf/4.3.3.1 and pnetcdf/1.8.1 (PIO_TYPENAME=pnetcdf); the run also fails with netcdf/4.3.3.1, and with netcdf/4.3.3.1 built with "--disable-netcdf-4", in both cases with PIO_TYPENAME=netcdf.



Does that suggest anything?
 