Hi,
I am trying to run CTSM with FATES at a single point in Siberia. I intend to use 1901-1930 GSWP3 data for a 300-year spinup simulation. My case was set up and modified as follows:
./create_newcase --case Site1 --res f09_g16 --compset I2000Clm50FatesRs --run-unsupported
./xmlchange ROOTPE=0,PTS_MODE=TRUE,PTS_LAT=72.5354,PTS_LON=102.2782
./xmlchange MPILIB=mpi-serial,NTASKS=1
./xmlchange CLM_ACCELERATED_SPINUP="on",STOP_OPTION=nyears,STOP_N=50,DOUT_S=FALSE
./xmlchange CLM_FORCE_COLDSTART=on,RUN_TYPE=startup,RUN_STARTDATE=0001-01-01
The initial 50-year run finished successfully. However, when I submitted the second 50-year segment (with CONTINUE_RUN=TRUE), the case failed with the error "NetCDF: Start+count exceeds dimension bound". From searching online resources, this error seems to be caused by corrupted or truncated restart files. The log file is attached.
I also tested a global run with the same settings but the default MPI library and PE layout, and it was able to continue from the previous run without any issues.
Since a global run costs roughly 5800 times the computing resources of a single-point run, I would much prefer to run my case at the single-point scale. Can anyone suggest a solution to this issue? Thanks!
YCE