Ocean model in cesm1_2_0 crashed during the run

Hi, I created a case to run the ocean model in cesm1_2_0:

  create_newcase -case ../case_output/ciaf_T62_g16_ctl -compset CIAF -res T62_g16 -mach grex

I set env_run.xml as below:

  id="RUN_STARTDATE"   value="1948-01-01"
  id="STOP_OPTION"     value="nyears"
  id="STOP_N"          value="62"
  id="REST_OPTION"     value="$STOP_OPTION"
  id="REST_N"          value="1"

The model ran successfully at first and output nc files from 1948-01 to 1952-11, but then it crashed. Below is the error information from cesm.log.131122-062046:

 Overflow: Weddell Sea              Product adjacent mask at global (ij)=316   21
 Overflow: Weddell Sea              Product adjacent mask at global (ij)=316   22
 Overflow: Weddell Sea              Product adjacent mask at global (ij)=318   21
 Overflow: Weddell Sea              Product adjacent mask at global (ij)=318   22
 Overflow: Weddell Sea              Product adjacent mask at global (ij)=318   23
 ovf_loc_prd: nsteps_total=           1  ovf=           1  swap ovf UV old/new
 prd set old/new=           1           7
 ovf_loc_prd: nsteps_total=           1  ovf=           2  swap ovf UV old/new
 prd set old/new=           1           6
 ovf_loc_prd: nsteps_total=           1  ovf=           3  swap ovf UV old/new
 prd set old/new=           1           3
 ovf_loc_prd: nsteps_total=         268  ovf=           1  swap ovf UV old/new
 prd set old/new=           7           6
 ovf_loc_prd: nsteps_total=         815  ovf=           1  swap ovf UV old/new
 prd set old/new=           6           5
 ovf_loc_prd: nsteps_total=         840  ovf=           3  swap ovf UV old/new
 prd set old/new=           3           2
 ovf_loc_prd: nsteps_total=        3346  ovf=           1  swap ovf UV old/new
 prd set old/new=           5           4
 ovf_loc_prd: nsteps_total=        4320  ovf=           1  swap ovf UV old/new
 prd set old/new=           4           3
 ovf_loc_prd: nsteps_total=        5765  ovf=           2  swap ovf UV old/new
 prd set old/new=           6           5
 ovf_loc_prd: nsteps_total=       10505  ovf=           2  swap ovf UV old/new
 prd set old/new=           5           4
 ovf_loc_prd: nsteps_total=       15396  ovf=           2  swap ovf UV old/new
 prd set old/new=           4           3
 ovf_loc_prd: nsteps_total=       21522  ovf=           3  swap ovf UV old/new
 prd set old/new=           2           1
POP_HaloUpdate3DR8: error allocating buffers
step: error updating halo for UVEL
------------------------------------------------------------------------

POP aborting...
 ERROR in step

------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 0.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 19980 on
node n315 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
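(For reference, the env_run.xml settings at the top of this post correspond to xmlchange commands of roughly this form, run from the case directory; this is only a sketch assuming the standard CESM 1.2 case scripts:)

  cd ../case_output/ciaf_T62_g16_ctl
  ./xmlchange -file env_run.xml -id RUN_STARTDATE -val 1948-01-01
  ./xmlchange -file env_run.xml -id STOP_OPTION   -val nyears
  ./xmlchange -file env_run.xml -id STOP_N        -val 62
  # REST_OPTION follows STOP_OPTION here, so nyears is equivalent to $STOP_OPTION
  ./xmlchange -file env_run.xml -id REST_OPTION   -val nyears
  ./xmlchange -file env_run.xml -id REST_N        -val 1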


Thank you very much. Best regards, yao zhixiong
 

njn01

Member
The error message:

POP_HaloUpdate3DR8: error allocating buffers
step: error updating halo for UVEL
------------------------------------------------------------------------
POP aborting...
 ERROR in step

means that the POP model shut down when it determined that an allocation request failed. Your $CASE probably ran out of memory during the run. You could work around this by running in shorter run segments, or by using more ocean processors in your $CASE. See the CESM Users Guide for information on how to do this.
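For example (a sketch only, assuming the standard CESM 1.2 xmlchange/cesm_setup workflow; the task count below is purely illustrative):

  # Option 1: shorter run segments that resubmit automatically
  # (e.g. 31 segments x 2 years = 62 years total)
  ./xmlchange -file env_run.xml -id STOP_N   -val 2
  ./xmlchange -file env_run.xml -id RESUBMIT -val 30

  # Option 2: give the ocean component more tasks, then re-run setup and rebuild
  ./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 128
  ./cesm_setup -clean
  ./cesm_setup
  ./ciaf_T62_g16_ctl.clean_build
  ./ciaf_T62_g16_ctl.build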
 

Thank you for the reply. I modified "init_ts_file_fmt" to nc according to this thread by santos:
http://bb.cgd.ucar.edu/issue-can-cause-pop2-crash-restart
I restarted the model after the modification, and it now continues to run.
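(For anyone who hits the same crash: the change is a POP2 namelist override. Here is a sketch of one way it could be applied from the case directory, assuming CESM 1.2's user_nl_pop2 mechanism and a continue run from the existing restart files:)

  # add the override to the POP2 user namelist
  echo "init_ts_file_fmt = 'nc'" >> user_nl_pop2
  ./preview_namelists    # optional: check that pop2_in picks up the setting

  # continue from the last restart set and resubmit
  ./xmlchange -file env_run.xml -id CONTINUE_RUN -val TRUE
  ./ciaf_T62_g16_ctl.submit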
 