
Build succeeded, but submit failed; cesm.log contains only the usage message of ./mpiexec.

Albert
New Member
Hi all,
I'm new to CESM and I'm using CESM2.1.1. After porting to my machine, the build succeeds but the submit fails. The cesm.log contains only the usage message of ./mpiexec. Could you give me some advice? Here is the log:

Usage: ./mpiexec [global opts] [local opts for exec1] [exec1] [exec1 args] : [local opts for exec2] [exec2] [exec2 args] : ...

Global options (passed to all executables):

Global environment options:
-genv {name} {value} environment variable name and value
-genvlist {env1,env2,...} environment variable list to pass
-genvnone do not pass any environment variables
-genvall pass all environment variables not managed
by the launcher (default)

Other global options:
-f {name} file containing the host names
-hosts {host list} comma separated host list


Local options (passed to individual executables):

Other local options:
-n/-np {value} number of processes
{exec_name} {args} executable name and arguments


Hydra specific options (treated as global):

Launch options:
-launcher launcher to use (ssh slurm rsh ll sge pbsdsh pdsh srun lsf blaunch qrsh)
-launcher-exec executable to use to launch processes
-enable-x/-disable-x enable or disable X forwarding

Resource management kernel options:
-rmk resource management kernel to use (slurm ll lsf sge pbs cobalt)

Processor topology options:
-bind-to process binding
-map-by process mapping
-membind memory binding policy

Other Hydra options:
-verbose verbose mode
-info build information
-print-all-exitcodes print exit codes of all processes
-ppn processes per node
-prepend-rank prepend rank to output
-prepend-pattern prepend pattern to output
-outfile-pattern direct stdout to file
-errfile-pattern direct stderr to file
-nameserver name server information (host:port format)
-disable-auto-cleanup don't cleanup processes on error
-disable-hostname-propagation let MPICH auto-detect the hostname
-localhost local hostname for the launching node
-usize universe size (SYSTEM, INFINITE, <value>)

Intel(R) MPI Library specific options:

Global options:
-aps Intel(R) Application Performance Snapshot profile
-mps Intel(R) Application Performance Snapshot profile (MPI, OpenMP only)
-gtool tool and rank set
-gtoolfile file containing tool and rank set

Other Hydra options:
-iface network interface to use
-s <spec> redirect stdin to all or 1,2 or 2-4,6 MPI processes (0 by default)

Intel(R) MPI Library, Version 2019 Update 1 Build 20181016 (id: 1f6a76f43)
Copyright (C) 2003-2018 Intel Corporation. All rights reserved.
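This usage text is what mpiexec prints when it is invoked with no arguments (or arguments it cannot parse), which usually means the launch command generated for the case is incomplete. One way to see the exact commands CIME will use before submitting (a minimal sketch; the case path below is a placeholder) is:
Code:
cd /path/to/your/case   # placeholder: your CESM case directory
./preview_run           # shows the batch submit line and the mpirun/mpiexec command CIME will run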
 

Attachments

  • cesm.log.6447.200705-234545.txt
    3.1 KB

Albert
New Member
Here is the result of running cat CaseStatus:
Code:
2020-07-05 23:45:31: case.build success
---------------------------------------------------
2020-07-05 23:45:43: case.submit starting
---------------------------------------------------
2020-07-05 23:45:45: case.submit success case.run:6447, case.st_archive:6448
---------------------------------------------------
2020-07-05 23:45:45: case.run starting
---------------------------------------------------
2020-07-05 23:45:47: model execution starting
---------------------------------------------------
2020-07-05 23:45:47: model execution success
---------------------------------------------------
2020-07-05 23:45:47: case.run error
ERROR: Model did not complete - see /share/home/liujunzhi/liujunzhi/apps/cesm_datafile/scratch/23_Day2Brazil/run/cpl.log.6447.200705-234545
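When case.run fails like this, the underlying error is usually in the run-directory logs rather than in CaseStatus itself. A minimal way to look at them (using the run directory named in the error message; the exact log file names come from the job id and timestamp) would be:
Code:
cd /share/home/liujunzhi/liujunzhi/apps/cesm_datafile/scratch/23_Day2Brazil/run
tail -n 50 cpl.log.6447.200705-234545    # coupler log referenced by the error above
tail -n 50 cesm.log.6447.200705-234545   # model log; in this case it held the mpiexec usage text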
 

Albert
New Member
jedwards said:
What is the result of running ./preview_run from the case directory?
Hi Jedwards,
Thanks for your reply. My problem has been solved. The error was caused by MPI: the Intel MPI environment variables were not fully set up. After adding the commands:
Code:
source /share/apps/intel2019/impi/2019.1.144/intel64/bin/mpivars.sh
source /share/apps/intel2019/compilers_and_libraries_2019.1.144/linux/bin/compilervars.sh intel64
to my .bashrc, the problem was solved.
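As a quick sanity check before resubmitting (a minimal sketch; the paths are the same ones sourced above and will differ on other systems), the Intel MPI environment can be confirmed in a fresh shell:
Code:
source ~/.bashrc        # or open a new login shell
which mpiexec           # should resolve inside .../impi/2019.1.144/intel64/bin
echo $I_MPI_ROOT        # mpivars.sh exports I_MPI_ROOT
./case.submit           # resubmit from the case directory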
 