Hi, I'm new to CESM1.0.3. When I submit the $CASE.$MACH.run script, the run aborts with the following error in ccsm.log:

(seq_io_init) pio init parameters: before nml read
(seq_io_init) pio_stride = -99
(seq_io_init) pio_root = -99
(seq_io_init) pio_typename = nothing
(seq_io_init) pio_numtasks = -99
(seq_io_init) pio_debug_level = 0
 pio_async_interface = F
(seq_io_init) pio init parameters: after nml read
(seq_io_init) pio_stride = -1
(seq_io_init) pio_root = 1
(seq_io_init) pio_typename = netcdf
(seq_io_init) pio_numtasks = -1
(seq_io_init) pio init parameters:
(seq_io_init) pio_stride = 4
(seq_io_init) pio_root = 1
(seq_io_init) pio_typename = NETCDF
(seq_io_init) pio_numtasks = 27
(seq_io_init) pio_debug_level = 0
 pio_async_interface = F
(seq_comm_setcomm) initialize ID (  7 GLOBAL ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  2 ATM    ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  1 LND    ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  4 ICE    ) pelist = 0 99 1 ( npes = 100) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  5 GLC    ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  3 OCN    ) pelist = 0 99 1 ( npes = 100) ( nthreads = 1)
(seq_comm_setcomm) initialize ID (  6 CPL    ) pelist = 0 107 1 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID (  8 CPLATM ) join IDs = 6 2 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID (  9 CPLLND ) join IDs = 6 1 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 10 CPLICE ) join IDs = 6 4 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 11 CPLOCN ) join IDs = 6 3 ( npes = 108) ( nthreads = 1)
(seq_comm_joincomm) initialize ID ( 12 CPLGLC ) join IDs = 6 5 ( npes = 108) ( nthreads = 1)
[c06n04:31992] *** An error occurred in MPI_Gather
[c06n04:31992] *** on communicator MPI COMMUNICATOR 5 CREATE FROM 0
[c06n04:31992] *** MPI_ERR_TYPE: invalid datatype
[c06n04:31992] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 74 with PID 31963 on node c06n07
exiting without calling "finalize". This may have caused other processes
in the application to be terminated by signals sent by mpirun (as
reported here).
--------------------------------------------------------------------------
[c06n05:04048] 107 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[c06n05:04048] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
The MPI library is openmpi1.4.3_ifort11.1 and the NetCDF library is NetCDF4.1.3.ifort11.1.
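In case it helps narrow things down, here is a minimal standalone MPI_Gather test (a sketch of my own in C, not part of CESM; the file name gather_test.c is just illustrative) that exercises the same collective that aborts in the log. If this also fails with MPI_ERR_TYPE under this OpenMPI install, the MPI build itself would be suspect; if it passes, the problem is more likely in how CESM/PIO was built against this MPI and NetCDF.

/* gather_test.c -- minimal MPI_Gather smoke test (not part of CESM).
 * Build and run, e.g.:
 *   mpicc gather_test.c -o gather_test
 *   mpirun -np 4 ./gather_test
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    int sendval;
    int *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendval = rank;  /* each rank contributes its own rank id */
    if (rank == 0)
        recvbuf = (int *) malloc(size * sizeof(int));  /* root collects one int per rank */

    /* Same collective that aborts in the ccsm.log output above. */
    MPI_Gather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("gathered:");
        for (i = 0; i < size; i++)
            printf(" %d", recvbuf[i]);
        printf("\n");
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}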
Leo