Hello
I have run into trouble while porting CESM to a new server. An error occurs as soon as I try to create a new case, as shown below:
[hejx@login01 scripts]$ ./create_newcase --case con2015 --res f09_g16 --compset BC5L45BGC --run-unsupport --mach hulei
Compset longname is 2000_CAM50_CLM45%BGC_CICE_POP2_MOSART_SGLC_SWAV
Compset specification file is /public/home/hejx/CESM2.1.1/cime/../cime_config/config_compsets.xml
Compset forcing is 1972-2004
ATM component is CAM cam5 physics:
LND component is clm4.5:BGC (vert. resol. CN and methane):
ICE component is Sea ICE (cice) model version 5
OCN component is POP2
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is Stub glacier (land ice) component
WAV component is Stub wave component
ESP component is
Pes specification file is /public/home/hejx/CESM2.1.1/cime/../cime_config/config_pes.xml
Compset specific settings: name is RUN_STARTDATE and value is 0001-01-01
Could not find machine match for 'login01' or 'login01'
Machine is hulei
ERROR: Expected one child
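For what it's worth, my understanding (a sketch, not CIME's actual code) is that "Expected one child" comes from CIME's generic XML lookup, which requires exactly one matching node for the requested machine entry. The snippet below is a hypothetical illustration of that kind of check; the `find_machine` helper and the inline XML are my own examples, not part of CIME:

```python
# Hypothetical sketch of the "expected one child" style of lookup:
# resolve a <machine MACH="..."> entry and insist on exactly one match.
import xml.etree.ElementTree as ET

CONFIG = """<?xml version="1.0"?>
<config_machines>
  <machine MACH="hulei">
    <DESC>test entry</DESC>
  </machine>
</config_machines>"""

def find_machine(xml_text, mach):
    root = ET.fromstring(xml_text)
    matches = [m for m in root.findall("machine") if m.get("MACH") == mach]
    if len(matches) != 1:
        # analogous to CIME's "Expected one child" failure
        raise RuntimeError("Expected one child, found %d" % len(matches))
    return matches[0]

node = find_machine(CONFIG, "hulei")
print(node.find("DESC").text)  # -> test entry
```

So I suspect the parser is not finding (or is finding more than one) matching entry for "hulei" in my file, but I cannot see why.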
Please give me some advice on solving this problem; the machine and compiler definition files are attached below.
config_machine.xml:
<?xml version="1.0"?>
<config_machines>
<machine MACH="hulei">
<!-- customize these fields as appropriate for your system (max tasks) and
desired layout (change '${HOME}/projects' to your
preferred location). -->
<DESC>ORNL XE6, os is CNL, 32 pes/node, batch system is PBS</DESC>
<NODENAME_REGEX> something.matching.your.machine.hostname </NODENAME_REGEX>
<OS>CNL</OS>
<COMPILERS>intel,pgi,cray,gnu</COMPILERS>
<MPILIBS>mpich</MPILIBS>
<CIME_OUTPUT_ROOT>/public/home/hejx/CESM2.1.1/cime/scripts/output</CIME_OUTPUT_ROOT>
<DIN_LOC_ROOT>/data/hejx/xudsh/lsw/cas_esm2/input/rootDirectory/inputdata</DIN_LOC_ROOT>
<DIN_LOC_ROOT_CLMFORC>/data/hejx/xudsh/lsw/cas_esm2/input/rootDirectory/inputdata/atm/datm7</DIN_LOC_ROOT_CLMFORC>
<DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
<BASELINE_ROOT>/public/home/hejx/CESM2.1.1/baselines</BASELINE_ROOT>
<CCSM_CPRNC>/public/home/hejx/CESM2.1.1/cime/tools/cprnc</CCSM_CPRNC>
<GMAKE_J>8</GMAKE_J>
<BATCH_SYSTEM>pbs</BATCH_SYSTEM>
<SUPPORTED_BY>cseg</SUPPORTED_BY>
<MAX_TASKS_PER_NODE>32</MAX_TASKS_PER_NODE>
<MAX_MPITASKS_PER_NODE>16</MAX_MPITASKS_PER_NODE>
<PROJECT_REQUIRED>TRUE</PROJECT_REQUIRED>
<mpirun mpilib="default">
<executable>mpirun</executable>
<arguments>
<arg name="num_tasks"> -np $TOTALPES</arg>
</arguments>
</mpirun>
<module_system type="none"/>
<!--
<init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
<init_path lang="python">/usr/share/Modules/init/python.py</init_path>
<init_path lang="csh">/usr/share/Modules/init/csh</init_path>
<init_path lang="sh">/usr/share/Modules/init/sh</init_path>
<cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
<cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
<cmd_path lang="sh">module</cmd_path>
<cmd_path lang="csh">module</cmd_path>
<modules>
<command name="purge"/>
</modules>
<modules compiler="intel">
<command name="load">/public/software/modules/compiler/intel/intel-compiler-2017.5.239</command>
</modules>
<modules mpilib="intelmpi">
<command name="load">mpi/intelmpi/2017.4.239</command>
</modules>
<modules mpilib="netcdf">
<command name="load">/public/software/modules/mathlib/netcdf/4.7.1/impi_pnetcdf</command>
</modules>
</module_system>
-->
<environment_variables>
<env name="OMP_STACKSIZE">256M</env>
<env name="PATH">$ENV{HOME}/bin:$ENV{PATH}</env>
<env name="NETCDF_PATH">/public/software/mathlib/netcdf/intel/4.7.4/include/netcdf</env>
</environment_variables>
</machine>
</config_machines>
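One thing I noticed: the log also says "Could not find machine match for 'login01'". My understanding is that CIME auto-detects the machine by matching the hostname against NODENAME_REGEX, and the placeholder I left in the file above cannot match `login01`. The snippet below just demonstrates the matching behavior; the `login\d+` pattern is a hypothetical example, not necessarily right for this cluster:

```python
# Demonstrate hostname matching against NODENAME_REGEX-style patterns.
import re

placeholder = r"something.matching.your.machine.hostname"  # left as shipped
candidate = r"login\d+"  # hypothetical pattern for nodes login01, login02, ...

hostname = "login01"
print(bool(re.match(placeholder, hostname)))  # -> False
print(bool(re.match(candidate, hostname)))    # -> True
```

Since I pass `--mach hulei` explicitly, I assumed the regex would not matter, but perhaps it still does.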
config_compilers.xml:
<?xml version="1.0"?>
<config_compilers version="2.0">
<!-- customize these fields as appropriate for your
system. Examples are provided for Mac OS X systems with
homebrew and macports. -->
<compiler COMPILER="intel" MACH="hulei">
<CFLAGS>
<base> -qno-opt-dynamic-align -fp-model precise -std=gnu99 </base>
<append compile_threaded="true"> -qopenmp </append>
<append DEBUG="FALSE"> -O2 -debug minimal </append>
<append DEBUG="TRUE"> -O0 -g </append>
</CFLAGS>
<CPPDEFS>
<!-- Technical Library -->
<append> -DFORTRANUNDERSCORE -DCPRINTEL</append>
</CPPDEFS>
<CXX_LDFLAGS>
<base> -cxxlib </base>
</CXX_LDFLAGS>
<CXX_LINKER>FORTRAN</CXX_LINKER>
<FC_AUTO_R8>
<base> -r8 </base>
</FC_AUTO_R8>
<FFLAGS>
<base> -qno-opt-dynamic-align -convert big_endian -assume byterecl -ftz -traceback -assume realloc_lhs -fp-model source </base>
<append compile_threaded="true"> -qopenmp </append>
<append DEBUG="TRUE"> -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created </append>
<append DEBUG="FALSE"> -O2 -debug minimal </append>
</FFLAGS>
<FFLAGS_NOOPT>
<base> -O0 </base>
<append compile_threaded="true"> -qopenmp </append>
</FFLAGS_NOOPT>
<FIXEDFLAGS>
<base> -fixed </base>
</FIXEDFLAGS>
<FREEFLAGS>
<base> -free </base>
</FREEFLAGS>
<LDFLAGS>
<append compile_threaded="true"> -qopenmp </append>
</LDFLAGS>
<MPICC> mpiicc </MPICC>
<MPICXX> mpiicpc </MPICXX>
<MPIFC> mpiifort </MPIFC>
<SCC> icc </SCC>
<SCXX> icpc </SCXX>
<SFC> ifort </SFC>
<MPI_PATH>/public/software/mpi/intelmpi/2017.4.239/intel64</MPI_PATH>
<SLIBS>
<base>-L/public/software/mathlib/netcdf/intel/4.7.4/lib -lnetcdf -lnetcdff -L/public/software/mathlib/libs-gcc/lapack/3.9.1/lib -llapack -lblas -mkl</base>
<append MPILIB="mpich"> -mkl=cluster </append>
<append MPILIB="mpich2"> -mkl=cluster </append>
<append MPILIB="mvapich"> -mkl=cluster </append>
<append MPILIB="mvapich2"> -mkl=cluster </append>
<append MPILIB="mpt"> -mkl=cluster </append>
<append MPILIB="openmpi"> -mkl=cluster </append>
<append MPILIB="impi"> -mkl=cluster </append>
<append MPILIB="mpi-serial"> -mkl </append>
</SLIBS>
<SUPPORTS_CXX>TRUE</SUPPORTS_CXX>
<LAPACK_LIBDIR>/public/software/mathlib/libs-gcc/lapack/3.9.1/lib</LAPACK_LIBDIR>
</compiler>
</config_compilers>
config_batch.xml: