meir02ster@gmail_com
Member
Dear all,
I was trying to run a single-point case with PTS_MODE in CLM4.0, but it failed.
When I created the case, I set max_tasks_per_node to be 8.
I checked the env_mach_pes.xml and it is as follows:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%
So it does not support parallel mode because there is only one grid cell. I also set USE_MPISERIAL to FALSE (otherwise it would complain), and everything goes smoothly until the run.
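For reference, the settings I changed before building were roughly as follows. I am recalling the exact values and file locations from memory (including which env file holds USE_MPISERIAL in my version), so please read this as a sketch of what I did rather than a paste of my case:
%%%%%%%%%%%%%%%%
# CESM1.x-style xmlchange calls, run from the case directory
./xmlchange -file env_mach_pes.xml -id MAX_TASKS_PER_NODE -val 8
# one task per component, since PTS_MODE has only a single grid cell
# (and similarly for the other components)
./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 1
./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 1
./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 1
# USE_MPISERIAL set back to FALSE (in env_conf.xml in my version, if I remember right)
./xmlchange -file env_conf.xml -id USE_MPISERIAL -val FALSE
%%%%%%%%%%%%%%%%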
Here is the error message:
%%%%%%%%%%%%%%%%
(seq_comm_printcomms) ID layout : global pes vs local pe for each ID
gpe LND ATM OCN ICE GLC CPL GLOBAL CPLATM CPLLND CPLICE CPLOCN CPLGLC nthrds
--- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
0 : 0 0 0 0 0 0 0 0 0 0 0 0 1
(t_initf) Read in prof_inparm namelist from: drv_in
1 pes participating in computation for CLM
-----------------------------------
NODE# NAME
( 0) water
application called MPI_Abort(comm=0x84000002, 1) - process 0
rank 0 in job 3 water_42640 caused collective abort of all ranks
exit status of rank 0: killed by signal 9
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In the run script, I tried both "mpirun -np 1 ./ccsm.exe >&! ccsm.log.$LID"
and "./ccsm.exe >&! ccsm.log.$LID", but neither works.
I would appreciate any advice anyone can offer.
Thanks,
Rui