
Error when running ./case.build in CESM 2.2.0

Yincheng
New Member
Hi Helpers,

I am using my school's cluster to configure and run CESM2. Everything went well until an error occurred when running ./case.build. Here is the output:
Bash:
Building case in directory /share/home/czhou171/ycliu/models/CESM/cases/X.test
sharedlib_only is False
model_only is False
Setting Environment MKL_PATH=/share/apps/intel/intel_2019u5/compilers_and_libraries/linux/mkl
Setting Environment NETCDF_HOME=/share/apps/netcdf/
Setting Environment NETCDF_PATH=/share/apps/netcdf/
Generating component namelists as part of build
Creating component namelists
  2022-03-01 16:38:32 atm
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xatm/cime_config/buildnml
  2022-03-01 16:38:32 lnd
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xlnd/cime_config/buildnml
  2022-03-01 16:38:32 ice
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xice/cime_config/buildnml
  2022-03-01 16:38:32 ocn
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xocn/cime_config/buildnml
  2022-03-01 16:38:32 rof
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xrof/cime_config/buildnml
  2022-03-01 16:38:32 glc
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xglc/cime_config/buildnml
  2022-03-01 16:38:32 wav
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/xcpl_comps_mct/xwav/cime_config/buildnml
  2022-03-01 16:38:32 iac
   Running /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/stub_comps_mct/siac/cime_config/buildnml
  2022-03-01 16:38:32 esp
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/components/stub_comps_mct/sesp/cime_config/buildnml
  2022-03-01 16:38:32 cpl
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/drivers/mct/cime_config/buildnml
Building gptl with output to file /share/home/czhou171/ycliu/models/CESM/scratch/X.test/bld/gptl.bldlog.220301-163832
   Calling /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/build_scripts/buildlib.gptl
ERROR: /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/build_scripts/buildlib.gptl FAILED, cat /share/home/czhou171/ycliu/models/CESM/scratch/X.test/bld/gptl.bldlog.20301-163357

I had a look at the log file (i.e., gptl.bldlog.20301-163357; see it attached to this thread), and it seems to be a make problem in /share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox/cime/src/share/timing/, but I am not sure. I would appreciate it if anyone could tell me what happened and how to fix it.
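In case it helps, this is roughly how I inspected the failure (the exact log file name differs between runs, so I globbed it):
Bash:
# look at the full gptl build log for the first real error
cat /share/home/czhou171/ycliu/models/CESM/scratch/X.test/bld/gptl.bldlog.*

# basic sanity check that a C compiler is visible in my login shell
# (CESM builds with the compiler from config_machines.xml/config_compilers.xml,
# so this only rules out the most obvious problem)
which gcc icc mpicc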

PS: Here are my environment variables and the versions of some relevant software on the Linux server:
Bash:
# env vars
export USER=/share/home/czhou171/ycliu
export CESM_CASE=/share/home/czhou171/ycliu/models/CESM/cases
export CESM_PATH=/share/home/czhou171/ycliu/models/CESM/2.2.0/my_cesm_sandbox
# module load
module load python/3.6.8
module load intel/2019u5
module load ncview
module load nco/nco-4.6.3
module load hdf/hdf5-1.8.20_intel
module load gcc/gcc-9.3.0
module load netcdf/4.4_intel
export NETCDF_PATH=/share/apps/netcdf/4.4_intel
# No ESMF
# LAPACK and BLAS provided by MKL
module load cmake/3.16.0
# pnetcdf in local
export PATH="/share/home/czhou171/usr/local/pnetcdf_1.8.1/bin:$PATH"   
export LD_LIBRARY_PATH="/share/home/czhou171/usr/local/pnetcdf_1.8.1/lib:$LD_LIBRARY_PATH"
export INCLUDE="/share/home/czhou171/usr/local/pnetcdf_1.8.1/include:$INCLUDE"
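
To confirm these module loads actually expose the toolchain, I can run a quick check (nc-config ships with the netCDF installation):
Bash:
which gcc icc ifort   # compilers from the gcc and intel modules
gcc --version
nc-config --version   # reports the netCDF version the module provides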

Thanks in advance!
Yincheng
 

Attachments

  • gptl.bldlog.zip (985 bytes)

jedwards

CSEG and Liaisons
Staff member
CESM ignores your shell environment and loads its own environment as defined in config_machines.xml for your machine. In this case it cannot find a C compiler. How are you defining your machine?
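From the case directory you can check what CIME resolved and which environment it will actually use at build time:
Bash:
# the machine, compiler, and mpilib CIME picked for this case
./xmlquery MACH,COMPILER,MPILIB

# the module loads and environment variables CIME applies during the build,
# recorded in the case itself, independent of your interactive shell
cat env_mach_specific.xml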
 

Yincheng
New Member
jedwards said:
CESM ignores your shell environment and loads its own environment as defined in config_machines.xml for your machine. In this case it cannot find a C compiler. How are you defining your machine?
Thank you for your timely reply; here is my machine config:
XML:
<machine MACH="HPCC">
    <DESC>
      Example port to centos7 linux system with gcc, netcdf, pnetcdf and mpich
      using modules from http://www.admin-magazine.com/HPC/Articles/Environment-Modules
    </DESC>
    <!-- regex matching this machine's hostname; running "hostname" here returns c01n01 -->
    <NODENAME_REGEX>c01n01</NODENAME_REGEX>
    <OS>RedHat</OS>
    <!-- COMPILERS: compilers supported on this machine, comma-separated list, first is default -->
    <COMPILERS>intel,gnu</COMPILERS>
    <!-- MPILIBS: mpilibs supported on this machine, comma-separated list;
        first is default, mpi-serial is assumed and not required in this list -->
    <MPILIBS>impi, openmpi, mpich</MPILIBS>
    <!-- PROJECT: a project or account number used for batch jobs.
      This value is used for directory names. If it differs from the
      actual accounting project id, use CHARGE_ACCOUNT.
      Can be overridden in the environment or in $HOME/.cime/config -->
    <PROJECT>czhou171</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <!-- CIME_OUTPUT_ROOT: base directory for case output;
      the case/bld and case/run directories are written below here -->
    <CIME_OUTPUT_ROOT>$USER/models/CESM/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$USER/models/CESM/inputdata</DIN_LOC_ROOT>
    <!-- may need to modify this path -->
    <DIN_LOC_ROOT_CLMFORC>$USER/models/CESM/inputdata/lmwg</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$USER/models/CESM/archive/$CASE</DOUT_S_ROOT>
    <BASELINE_ROOT>$USER/models/CESM/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>$USER/models/CESM/tools/cime/tools/cprnc/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <!-- check the cluster's documentation to determine the batch system -->
    <BATCH_SYSTEM>lsf</BATCH_SYSTEM>
    <SUPPORTED_BY>ycliu@smail.nju.edu.cn</SUPPORTED_BY>
    <!-- check the cluster's documentation for the maximum number of cores per node -->
    <MAX_TASKS_PER_NODE>12</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>12</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="impi">
      <executable>bsub</executable>
      <arguments>
          <arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <!-- modules are already loaded manually -->
    <module_system type="none"/>
    <!-- <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules> -->
    <!-- <command name="purge"/>
        </modules>
        <modules compiler="gnu">
    <command name="load">compiler/gnu/8.2.0</command>
    <command name="load">mpi/3.3/gcc-8.2.0</command>
    <command name="load">tool/netcdf/4.6.1/gcc-8.1.0</command>
        </modules>
      </module_system> -->
    <environment_variables>
      <!-- <env name="OMP_STACKSIZE">256M</env> -->
      <env name="MKL_PATH">/share/apps/intel/intel_2019u5/compilers_and_libraries/linux/mkl</env>
      <env name="NETCDF_HOME">/share/apps/netcdf/</env>
      <env name="NETCDF_PATH">/share/apps/netcdf/</env>
    </environment_variables>
    <!-- <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits> -->
  </machine>
Should I write the shell environment into config_machines.xml? Thanks again!
 

Yincheng
New Member
jedwards said:
Yes, your case is trying to load module compiler/gnu/8.2.0, which doesn't exist on your system.
You need to follow the porting guide: Case Control System Part 1: Basic Usage — CIME master documentation
Thank you for your reply. I followed your guidance and the porting guide to modify my config_machines.xml, but I still get the same error output. Did I use the wrong C compiler? Here is my config_machines.xml:

XML:
<machine MACH="HPCC">
    <DESC>
      Example port to centos7 linux system with gcc, netcdf, pnetcdf and mpich
      using modules from http://www.admin-magazine.com/HPC/Articles/Environment-Modules
    </DESC>
    <NODENAME_REGEX>c01n01</NODENAME_REGEX>
    <OS>RedHat</OS>
    <COMPILERS>intel,gnu</COMPILERS>
    <MPILIBS>impi, openmpi, mpich</MPILIBS>
    <PROJECT>czhou171</PROJECT>
    <SAVE_TIMING_DIR> </SAVE_TIMING_DIR>
    <CIME_OUTPUT_ROOT>$USER/models/CESM/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$USER/models/CESM/inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>$USER/models/CESM/inputdata/lmwg</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$USER/models/CESM/archive/$CASE</DOUT_S_ROOT>
    <BASELINE_ROOT>$USER/models/CESM/cesm_baselines</BASELINE_ROOT>
    <CCSM_CPRNC>$USER/models/CESM/tools/cime/tools/cprnc/cprnc</CCSM_CPRNC>
    <GMAKE>make</GMAKE>
    <GMAKE_J>8</GMAKE_J>
    <BATCH_SYSTEM>lsf</BATCH_SYSTEM>
    <SUPPORTED_BY>ycliu@smail.nju.edu.cn</SUPPORTED_BY>
    <MAX_TASKS_PER_NODE>12</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>12</MAX_MPITASKS_PER_NODE>
    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
    <mpirun mpilib="impi">
      <executable>bsub</executable>
      <arguments>
        <arg name="ntasks"> -np {{ total_tasks }} </arg>
      </arguments>
    </mpirun>
    <!-- <module_system type="none"/> -->
    <module_system type="module" allow_error="true">
      <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
      <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
      <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
      <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
      <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
      <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
      <cmd_path lang="sh">module</cmd_path>
      <cmd_path lang="csh">module</cmd_path>
      <modules compiler="intel">
      <command name="load">intel/2019u5</command>
      <command name="load">gcc/gcc-9.3.0</command>
      <command name="load">netcdf/4.4_intel</command>
      <command name="load">hdf5-1.8.20_intel</command>
      <command name="load">python/3.6.8</command>
      <command name="load">nco/nco-4.6.3</command>
      <command name="load">ncview</command>
      <command name="load">cmake/3.16.0</command>
      </modules>
    </module_system>
    <environment_variables>
      <!-- <env name="OMP_STACKSIZE">256M</env> -->
      <env name="MKL_PATH">/share/apps/intel/intel_2019u5/compilers_and_libraries/linux/mkl</env>
      <env name="NETCDF_HOME">/share/apps/netcdf/</env>
      <env name="NETCDF_PATH">/share/apps/netcdf/</env>
    </environment_variables>
    <!-- <resource_limits>
      <resource name="RLIMIT_STACK">-1</resource>
    </resource_limits> -->
  </machine>
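
One thing I still need to double-check (just my guess, not a confirmed diagnosis): each <command name="load"> entry has to match a module name exactly as module avail prints it. My shell script above loads hdf/hdf5-1.8.20_intel, while this XML loads hdf5-1.8.20_intel:
Bash:
# list the modules the cluster actually provides and compare the names
# character-for-character with the <command name="load"> entries
module avail 2>&1 | grep -i -e intel -e netcdf -e hdf5 -e gcc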
 