
How to submit the job to the batch system in CLM5.0?

Dear all, I have a problem submitting the job to the batch system in CLM5.0. When I type ./case.submit, no cesm.log.xxxx appears in $RUNDIR and no cesm.stdout.xxxx appears in $CASEROOT. However, when I directly type bsub -W 23:59 < .case.run, cesm.log.xxxx and cesm.stdout.xxxx are generated. I guess there is a wrong setting in config_batch.xml or another file, but I cannot find the exact cause. Can anybody give me some suggestions? I would really appreciate it!

The following is some information:

./create_newcase --case test_case2 --res f19_g16 --compset X -mach cern --run-unsupported

./preview_run

CASE INFO:
nodes: 13
total tasks: 312
tasks per node: 24
thread count: 1

BATCH INFO:
FOR JOB: case.run
ENV:
Setting Environment OMP_STACKSIZE=256M
Setting Environment OMP_NUM_THREADS=1

SUBMIT CMD:
bsub "all, ARGS_FOR_SCRIPT=--resubmit" < .case.run

MPIRUN (job=case.run):
mpijob.intelmpi /work2/cern1426/clm5/test_case2/bld/cesm.exe >> cesm.log.$LID 2>&1

FOR JOB: case.st_archive
ENV:
Setting Environment OMP_STACKSIZE=256M
Setting Environment OMP_NUM_THREADS=1

SUBMIT CMD:
bsub -w 'done(0)' "all, ARGS_FOR_SCRIPT=--resubmit" < case.st_archive
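The SUBMIT CMD above looks suspicious: the quoted environment string has no option flag in front of it, so the shell hands it to bsub as a positional argument. A minimal stub (a sketch only — the shell function here just shadows the name bsub for illustration, it is not the real LSF command) shows what bsub actually receives:

```shell
# Stub that shadows bsub just to print what it receives; NOT the real LSF bsub.
bsub() { printf 'argc=%s\nargv1=%s\n' "$#" "$1"; }

# Same shape as the generated submit command (stdin redirected from /dev/null
# here instead of .case.run):
bsub "all, ARGS_FOR_SCRIPT=--resubmit" < /dev/null
# argc=1
# argv1=all, ARGS_FOR_SCRIPT=--resubmit
```

The real bsub would likely treat that first positional argument as the command to run instead of reading the job script from stdin, which could explain why no cesm.log.xxxx ever appears.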

config_batch.xml

<batch_system type="lsf">
  <batch_query args=" -w">bjobs</batch_query>
  <batch_submit>bsub</batch_submit>
  <batch_cancel>bkill</batch_cancel>
  <batch_redirect>&lt;</batch_redirect>
  <batch_env> </batch_env>
  <batch_directive>#BSUB</batch_directive>
  <jobid_pattern>&lt;(\d+)&gt;</jobid_pattern>
  <depend_string> -w 'done(jobid)'</depend_string>
  <depend_allow_string> -w 'ended(jobid)'</depend_allow_string>
  <depend_separator>&amp;&amp;</depend_separator>
  <directives>
    <directive> -J {{ job_id }} </directive>
    <directive> -n {{ total_tasks }} </directive>
    <directive> -W $JOB_WALLCLOCK_TIME </directive>
    <directive default="cesm.stdout"> -o {{ job_id }}.%J </directive>
    <directive default="cesm.stderr"> -e {{ job_id }}.%J </directive>
    <directive> -R "span[ptile={{ tasks_per_node }}]"</directive>
  </directives>
  <queues>
    <queue walltimemax="23:59">cpuII</queue>
  </queues>
</batch_system>
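One detail worth checking against the empty <batch_env> element above: LSF's bsub has an -env option for passing a list of environment variables to the job, so a hedged guess is that this element is meant to carry that flag (a sketch only — please check the bsub man page for your site's LSF version):

```xml
<batch_env>-env</batch_env>
```

With that in place, the generated submit command would read bsub -env "all, ARGS_FOR_SCRIPT=--resubmit" < .case.run, so the quoted string becomes the argument of -env rather than a stray positional argument.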
 