
CESM build process problem

tingting
New Member
What version of the code are you using?
CESM2.2.2

Describe every step you took leading up to the problem:
export PERL5LIB=/rhome/tzhu/.conda/envs/perl_env/lib/perl5/site_perl/5.22.0/x86_64-linux-thread-multi
conda activate perl_env
source /etc/profile.d/hpcc_modules.sh
module unload openmpi
module load cmake
module load intel

source /opt/linux/centos/7.x/x86_64/pkgs/intel/2018/compilers_and_libraries_2018.0.128/linux/bin/compilervars.sh intel64
source /opt/linux/centos/7.x/x86_64/pkgs/intel/2018/compilers_and_libraries_2018.0.128/linux/mkl/bin/mklvars.sh intel64
source /opt/linux/centos/7.x/x86_64/pkgs/intel/2018/compilers_and_libraries_2018.0.128/linux/mpi/intel64/bin/mpivars.sh intel64
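
A quick sanity check that the Intel tool chain resolved after sourcing those scripts (nothing CESM-specific, just the usual lookups):

which ifort mpiifort   # both should point into the 2018 Intel tree
ifort --version        # should report the 18.0.x compiler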
# HDF5
export TACC_HDF5_LIB=/opt/linux/centos/7.x/x86_64/pkgs/hdf5/1.8.18_intel/lib
export LD_LIBRARY_PATH=$TACC_HDF5_LIB:$LD_LIBRARY_PATH
export LD_INCLUDE_PATH=/opt/linux/centos/7.x/x86_64/pkgs/netcdf/4.4.1.1_intel/include:/opt/linux/centos/7.x/x86_64/pkgs/netcdf-fortran/4.4.4_intel/include:$LD_INCLUDE_PATH
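
(One aside I am not certain about: as far as I know the Intel compilers read CPATH, not LD_INCLUDE_PATH, for extra include directories.) To verify the HDF5 and netCDF paths actually exist, quick checks against the same paths as above:

ls $TACC_HDF5_LIB/libhdf5.so*                                                # HDF5 runtime libraries
ls /opt/linux/centos/7.x/x86_64/pkgs/netcdf/4.4.1.1_intel/include/netcdf.h   # netCDF C header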
# Set NETCDF_PATH environment variable (optional, modify based on actual installation path)
module load netcdf/4.4.1.1_intel
module load netcdf-fortran/4.4.4_intel
export NETCDF_PATH=/opt/linux/centos/7.x/x86_64/pkgs/netcdf-fortran/4.4.4_intel/
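
The netCDF config helpers can confirm what the modules actually point at (assuming the modules put nc-config/nf-config on PATH):

nc-config --prefix   # install root of netCDF-C
nf-config --prefix   # install root of netCDF-Fortran; should match NETCDF_PATH above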
cd ..../my_cesm2.2.2_sandbox/cime/scripts

./create_newcase --case .../cesm2_runs/case/CESM-F2000-ts --res f09_f09_mg17 --compset F2000climo --mach ucr-hpcc --compiler intel

./xmlchange --file env_run.xml --id STOP_OPTION --val nyear
./xmlchange --file env_run.xml --id STOP_N --val 1
./xmlchange --file env_run.xml --id REST_N --val 1
./xmlchange --file env_run.xml --id RESUBMIT --val 0
./xmlchange RUN_TYPE="startup"
./xmlchange --file env_run.xml --id RUN_STARTDATE --val 0000-01-01
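
To confirm the xmlchange calls took effect, the values can be listed back with xmlquery from the case directory:

./xmlquery STOP_OPTION,STOP_N,REST_N,RESUBMIT,RUN_TYPE,RUN_STARTDATE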

./case.setup
./check_input_data --download
./case.build
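
If the build has to be retried after an environment fix, my understanding is that the failed build should be cleaned first:

./case.build --clean-all   # clear out the failed build
./case.build               # then rebuild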


If this is a port to a new machine: Please attach any files you added or changed for the machine port (e.g., config_compilers.xml, config_machines.xml, and config_batch.xml) and tell us the compiler version you are using on this machine.
Please attach any log files showing error messages or other useful information.

I changed the following config files:

In config_compilers.xml I added:
<compiler COMPILER="intel" MACH="ucr-hpcc">
  <MPI_LIB_NAME>mpich</MPI_LIB_NAME>
  <MPICC>mpiicc</MPICC>
  <MPIFC>mpiifort</MPIFC>
  <NETCDF_PATH>/opt/linux/centos/7.x/x86_64/pkgs/netcdf-fortran/4.4.4_intel</NETCDF_PATH>
</compiler>

In config_machines.xml I added:
<machine MACH="ucr-hpcc">
  <DESC>UCR HPCC cluster with appropriate modules</DESC>
  <NODENAME_REGEX>r27</NODENAME_REGEX> <!-- Adjust to match the UCR HPCC hostname -->
  <OS>LINUX</OS>
  <COMPILERS>intel</COMPILERS> <!-- Modify based on the available compiler, e.g., intel or gnu -->
  <MPILIBS>mpich</MPILIBS> <!-- Modify based on your MPI library -->
  <PROJECT>none</PROJECT>
  <CIME_OUTPUT_ROOT>/bigdata/wliulab/tzhu/cesm2_runs/output</CIME_OUTPUT_ROOT>
  <DIN_LOC_ROOT>/bigdata/wliulab/tzhu/cesm2_runs/input/$CASE</DIN_LOC_ROOT>
  <DIN_LOC_ROOT_CLMFORC>/bigdata/wliulab/tzhu/cesm2_runs/input/atm/datm7</DIN_LOC_ROOT_CLMFORC>
  <DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
  <GMAKE>gmake</GMAKE>
  <GMAKE_J>8</GMAKE_J>
  <BATCH_SYSTEM>slurm</BATCH_SYSTEM> <!-- Modify this if the cluster uses a specific batch system -->
  <SUPPORTED_BY>user@ucr.edu</SUPPORTED_BY>
  <MAX_TASKS_PER_NODE>32</MAX_TASKS_PER_NODE> <!-- Modify based on your system's configuration -->
  <MAX_MPITASKS_PER_NODE>32</MAX_MPITASKS_PER_NODE>
  <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
  <mpirun mpilib="default">
    <executable>mpirun</executable>
    <arguments>
      <arg name="ntasks"> -np {{ total_tasks }} </arg>
    </arguments>
  </mpirun>
  <module_system type="module">
    <!-- Load necessary modules for UCR HPCC -->
  </module_system>
</machine>
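
One gap I am unsure about: the module_system block above is empty, so as far as I understand the case scripts will not load any modules themselves when case.build or case.submit runs; they only inherit my interactive shell. If it should be filled in, I believe the structure is along these lines (the init_path is a guess for this cluster, and the module names just mirror what I load by hand above; untested):

<module_system type="module">
  <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
  <cmd_path lang="sh">module</cmd_path>
  <modules>
    <command name="unload">openmpi</command>
    <command name="load">cmake</command>
    <command name="load">intel</command>
    <command name="load">netcdf/4.4.1.1_intel</command>
    <command name="load">netcdf-fortran/4.4.4_intel</command>
  </modules>
</module_system>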

In config_batch.xml I added:
<batch_system MACH="ucr-hpcc" type="slurm">
  <directives>
    <directive>-l nodes={{ num_nodes }}:ppn={{ tasks_per_node }}</directive>
    <directive default="/bin/bash" > -S {{ shell }} </directive>
  </directives>
  <queues>
    <queue walltimemax="36:00:00" default="true">batch</queue>
  </queues>
</batch_system>
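
A doubt about the directives above: -l nodes=...:ppn=... and -S are PBS/Torque options, so if the scheduler really is Slurm these may be rejected or ignored. If so, Slurm-style directives would look more like this (a sketch, untested on this cluster):

<directives>
  <directive>--nodes={{ num_nodes }}</directive>
  <directive>--ntasks-per-node={{ tasks_per_node }}</directive>
</directives>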

When I run ./case.build, it fails with errors. I have attached the full build log; the key messages are:
cat: Filepath: No such file or directory
cat: Srcfiles: No such file or directory
cmake: error while loading shared libraries: librhash.so.0: cannot open shared object file: No such file or directory
gmake: *** [/bigdata/*/cesm2_runs/output/CESM-F2000-ts/bld/intel/mpich/nodebug/nothreads/mct/pio/pio2/Makefile] Error 127
ERROR: cat: Filepath: No such file or directory
cat: Srcfiles: No such file or directory
cmake: error while loading shared libraries: librhash.so.0: cannot open shared object file: No such file or directory
gmake: *** [/bigdata/*/cesm2_runs/output/CESM-F2000-ts/bld/intel/mpich/nodebug/nothreads/mct/pio/pio2/Makefile] Error 127

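Since cmake itself is what fails (it cannot resolve librhash.so.0 before it ever reads the PIO sources), this looks to me like an environment problem rather than a CESM one, possibly the conda environment or a library path shadowing the module's cmake. Standard Linux commands to narrow it down:

which cmake                               # the module's cmake, or one from the conda env?
ldd "$(which cmake)" | grep "not found"   # list any unresolved shared libraries
echo $LD_LIBRARY_PATH                     # check what is on the library search path
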
Attachments

  • pio.bldlog.240920-131206.txt