
Case.submit issue - a quick one

pansah

Peter Ansah
New Member
Hi,
When I run case.submit I get the error:
/path/to/bld/cesm.exe: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /path/to/hdf5/lib/libhdf5.so.310)
/path/to/bld/cesm.exe: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /path/to/hdf5/lib/libhdf5.so.310)
/path/to/bld/cesm.exe: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /path/to/hdf5/lib/libhdf5.so.310)
/path/to/bld/cesm.exe: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /path/to/hdf5/lib/libhdf5.so.310)

I am not using batch. The only environment variable I have specified in my env_mach_specific.xml script is NETCDF.
The case run points to the HDF5 library libhdf5.so.310. I want it to point to a different HDF5 library, libhdf5.so.200, which is linked against the right GNU C library as far as I can tell. How do I specify this in the env_mach_specific.xml script so that the run picks up the right HDF5 library?

Also, I have both GCC and Intel compilers on the system, but the default compiler is GCC; I actually built the case with Intel. Alternatively, what can I specify in env_mach_specific.xml so that the case uses the Intel compiler's libraries instead of the system GLIBC?
 

jedwards

CSEG and Liaisons
Staff member
This is a good argument for building and using a module system like lmod on your machine.
Modules automate the process of making sure that the right compiler is in the path and that the
LD_LIBRARY_PATH is set correctly to find the proper version of libraries such as hdf5 and glibc.

I recommend spack for installing modules and a build environment.

If you are not using a module system then you must do these steps by hand.
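For example (just a sketch, with placeholder paths you would replace with your actual install locations), the by-hand equivalent in the environment_variables block of env_mach_specific.xml could look like the following. I am also assuming that $ENV{...} expansion works in these values the way it does in config_machines.xml, so verify that on your machine:

<environment_variables>
  <!-- placeholder paths: point these at the Intel compiler and the hdf5 build you actually want -->
  <env name="PATH">/path/to/intel/compiler/bin:$ENV{PATH}</env>
  <!-- the dynamic loader searches these directories left to right, so the preferred hdf5 lib dir goes first -->
  <env name="LD_LIBRARY_PATH">/path/to/preferred/hdf5/lib:/path/to/intel/compiler/lib:$ENV{LD_LIBRARY_PATH}</env>
</environment_variables>

The same values can also simply be exported in your shell before running case.build and case.submit.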
 

pansah

Peter Ansah
New Member
Thanks, Edward. How do I set that by hand? In the env_mach_specific.xml script? How do I specify that in there?
My env_mach_specific.xml script:
<?xml version="1.0"?>
<file id="env_mach_specific.xml" version="2.0">
  <header>
    These variables control the machine dependent environment including
    the paths to compilers and libraries external to cime such as netcdf,
    environment variables for use in the running job should also be set here.
  </header>
  <group id="compliant_values">
    <entry id="run_exe" value="${EXEROOT}/cesm.exe ">
      <type>char</type>
      <desc>executable name</desc>
    </entry>
    <entry id="run_misc_suffix" value=" &gt;&gt; cesm.log.$LID 2&gt;&amp;1 ">
      <type>char</type>
      <desc>redirect for job output</desc>
    </entry>
  </group>
  <module_system type="none"/>
  <environment_variables>
    <env name="NETCDF">/network/rit/misc/software/netcdf-sandybridge</env>
  </environment_variables>
  <mpirun mpilib="openmpi">
    <executable>mpirun</executable>
  </mpirun>
</file>

I had a problem with "modules" during porting, so I set it to none and just loaded the modules manually.
The problem I encountered was a ./case.setup error:
/modulecmd python load intel-2024/tbb/latest intel-2024/compiler-rt/2024.0.2 intel-2024/oclfpga/latest intel-2024/compiler/latest intel-2024/mkl openmpi/5.1.0 netcdf/latest failed with message: openmpi 5.1.0 (intel-2022 build) ready Loaded netcdf 4.9.2 and netcdf-fortran 4.6.1".
At the time, I had set module_system to:

<module_system type="module">
  <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
  <init_path lang="csh">/usr/share/Modules/init/csh</init_path>
  <init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
  <init_path lang="python">/usr/share/Modules/init/python.py</init_path>
  <cmd_path lang="sh">module</cmd_path>
  <cmd_path lang="csh">module</cmd_path>
  <cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
  <cmd_path lang="python">/usr/bin/modulecmd python</cmd_path>
  <modules>
    <command name="purge"></command>
  </modules>
  <modules compiler="intel">
    <command name="load">intel-2024/tbb/latest</command>
    <command name="load">intel-2024/compiler-rt/latest</command>
    <command name="load">intel-2024/oclfpga/latest</command>
    <command name="load">intel-2024/compiler</command>
    <command name="load">intel-2024/mkl</command>
  </modules>
  <modules mpilib="openmpi">
    <command name="load">openmpi/5.1.0</command>
    <command name="load">netcdf/latest</command>
  </modules>
</module_system>
 

jedwards

CSEG and Liaisons
Staff member
Maybe you should have asked this question first. Some module implementations write to stderr, which causes cime to stop and report an error. You can override this with an attribute in the config_machines.xml file:
<module_system type="module" allow_error="true">

Try adding the allow_error="true" attribute.
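Applied to the module_system block you posted, only the opening tag needs to change (a sketch; the init_path, cmd_path, and modules entries stay exactly as they are):

<module_system type="module" allow_error="true">
  <init_path lang="sh">/usr/share/Modules/init/sh</init_path>
  <!-- ... rest of the block unchanged ... -->
</module_system>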
 

pansah

Peter Ansah
New Member
Hi Edward,
I have resolved the GLIBC issue. However, a new problem has come up with case.submit --no-batch:
/path/to/bld/cesm.exe: symbol lookup error: /path/to/bld/cesm.exe: undefined symbol: __libm_matherr .
As I mentioned earlier, the default compiler on the system is gcc but I am using the intel compiler.

My LD_LIBRARY_PATH:

LD_LIBRARY_PATH=/path/to/lib:/path/to/intel-2022/compiler/2022.1.0/linux/lib/oclfpga/host/linux64/lib:/path/to/intel-2022/tbb/2021.6.0/lib/intel64/gcc4.8:/path/to/intel-2022/compiler/2022.1.0/linux/lib:/path/to/intel-2022/compiler/2022.1.0/linux/lib/x64:/path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin:/path/to/intel-2024/oneapi/mpi/2021.11/opt/mpi/libfabric/lib:/path/to/intel-2024/oneapi/mpi/2021.11/lib:/path/to/intel-2024/oneapi/compiler/2024.0/opt/oclfpga/host/linux64/lib:/path/to/intel-2024/oneapi/compiler/2024.0/opt/compiler/lib:/path/to/intel-2024/oneapi/compiler/2024.0/lib:/path/to/intel-2024/oneapi/tbb/2021.11/lib:/path/to/libpng-intel/1.6.42/lib:/path/to/zlib-intel/1.3.1/lib:/path/to/jasper-intel/1.900.29/lib:/path/to/netcdf4-intel2022/lib:/path/to/compilers_and_libraries/linux/mkl/lib/intel64/lib:/path/to/intel-2024/oneapi/mkl/2024.0/lib:/path/to/hdf5/1.14.3/lib:/path/to/hdf4/4.2.16-2/lib:

ldd /path/to/bld/cesm.exe shows the following dependencies (NB: anything not starting with "/path/to" is a gcc library):

linux-vdso.so.1 => (0x00007ffc77dfc000)
libnetcdff.so.7 => /path/to/lib/libnetcdff.so.7 (0x00007fed3e05a000)
libnetcdf.so.19 => /path/to/lib/libnetcdf.so.19 (0x00007fed3de01000)
libopenblas.so.0 => /path/to/lib/libopenblas.so.0 (0x00007fed3bb86000)
libhdf5.so.310 => /path/to/hdf5/1.14.3/lib/libhdf5.so.310 (0x00007fed3b51e000)
libmkl_intel_lp64.so.2 => /path/to/lib/libmkl_intel_lp64.so.2 (0x00007fed3a00f000)
libmkl_intel_thread.so.2 => /path/to/lib/libmkl_intel_thread.so.2 (0x00007fed37b77000)
libmkl_core.so.2 => /path/to/intel-2024/oneapi/mkl/2024.0/lib/libmkl_core.so.2 (0x00007fed33a45000)
libiomp5.so => /path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libiomp5.so (0x00007fed3360c000)
libmpifort.so.12 => /path/to/intel-2024/oneapi/mpi/2021.11/lib/libmpifort.so.12 (0x00007fed33255000)
libmpi.so.12 => /path/to/intel-2024/oneapi/mpi/2021.11/lib/libmpi.so.12 (0x00007fed3171d000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fed31519000)
librt.so.1 => /lib64/librt.so.1 (0x00007fed31311000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fed310f5000)
libm.so.6 => /lib64/libm.so.6 (0x00007fed30df3000)
libc.so.6 => /lib64/libc.so.6 (0x00007fed30a25000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fed3080f000)
libhdf5_hl.so.200 => /path/to/lib/libhdf5_hl.so.200 (0x00007fed305ed000)
libhdf5.so.200 => /path/to/lib/libhdf5.so.200 (0x00007fed2fef9000)
libz.so.1 => /path/to/zlib-intel/1.3.1/lib/libz.so.1 (0x00007fed2fcde000)
libcurl.so.4 => /lib64/libcurl.so.4 (0x00007fed2fa74000)
libnetcdf.so.18 => /path/to/lib/libnetcdf.so.18 (0x00007fed2f738000)
libgfortran.so.5 => /lib64/libgfortran.so.5 (0x00007fed2f2c0000)
libmpi_usempif08.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libmpi_usempif08.so.40 (0x00007fed2f08a000)
libmpi_usempi_ignore_tkr.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libmpi_usempi_ignore_tkr.so.40 (0x00007fed2ee7f000)
libmpi_mpifh.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libmpi_mpifh.so.40 (0x00007fed2ec23000)
libmpi.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libmpi.so.40 (0x00007fed2e927000)
libquadmath.so.0 => /lib64/libquadmath.so.0 (0x00007fed2e6eb000)
libhdf5_hl.so.310 => /path/to/hdf5/1.14.3/lib/libhdf5_hl.so.310 (0x00007fed2e4c8000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x00007fed2e2b8000)
libzstd.so.1 => /lib64/libzstd.so.1 (0x00007fed2dffd000)
libxml2.so.2 => /lib64/libxml2.so.2 (0x00007fed2dc93000)
/lib64/ld-linux-x86-64.so.2 (0x00007fed3e2d3000)
libimf.so => /path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libimf.so (0x00007fed2d605000)
libsvml.so => /path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libsvml.so (0x00007fed2b647000)
libirng.so => /path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libirng.so (0x00007fed2b2dd000)
libintlc.so.5 => /path/to/intel-2022/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libintlc.so.5 (0x00007fed2b065000)
libidn.so.11 => /lib64/libidn.so.11 (0x00007fed2ae32000)
libssh2.so.1 => /lib64/libssh2.so.1 (0x00007fed2ac05000)
libssl3.so => /lib64/libssl3.so (0x00007fed2a9a0000)
libsmime3.so => /lib64/libsmime3.so (0x00007fed2a778000)
libnss3.so => /lib64/libnss3.so (0x00007fed2a43e000)
libnssutil3.so => /lib64/libnssutil3.so (0x00007fed2a20d000)
libplds4.so => /lib64/libplds4.so (0x00007fed2a009000)
libplc4.so => /lib64/libplc4.so (0x00007fed29e04000)
libnspr4.so => /lib64/libnspr4.so (0x00007fed29bc5000)
libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007fed29978000)
libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007fed2968f000)
libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007fed2945c000)
libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007fed29258000)
liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x00007fed29049000)
libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x00007fed28df4000)
libopen-rte.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libopen-rte.so.40 (0x00007fed28b3e000)
libopen-pal.so.40 => /opt/openmpi-v3.1/3.1.4/lib/libopen-pal.so.40 (0x00007fed2885f000)
libnuma.so.1 => /lib64/libnuma.so.1 (0x00007fed28653000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007fed28450000)
libevent-2.0.so.5 => /lib64/libevent-2.0.so.5 (0x00007fed28209000)
libevent_pthreads-2.0.so.5 => /lib64/libevent_pthreads-2.0.so.5 (0x00007fed28006000)
liblzma.so.5 => /lib64/liblzma.so.5 (0x00007fed27de0000)
libssl.so.10 => /lib64/libssl.so.10 (0x00007fed27b6e000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fed2770b000)
libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007fed274fb000)
libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007fed272f7000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fed270dd000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007fed26ec0000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fed26c99000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007fed26a62000)
libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fed26800000)
libfreebl3.so => /lib64/libfreebl3.so (0x00007fed265fd000)
 