
for_diags_intel.c:(.text+0x1552): additional relocation overflows omitted from the output make: *** [cesm.exe] Error 1

Lumoss

Member
Greetings!

Error after running ./case.build:

ERROR: BUILD FAIL: buildexe failed, cat /home/src_cesm2_3_beta08/projects/scratch/mycase_test1/bld/cesm.bldlog.231008-192605

and cesm.bldlog.231008-192605:

Bash:
/home/src_cesm2_3_beta08/cime/scripts/mycase/mycase_test1/Tools/Makefile:8: "Variable MODEL is deprecated, please use COMP_NAME instead"
/home/src_cesm2_3_beta08/components/cmeps/cime_config/../mediator/med_io_mod.F90(126): warning #6843: A dummy argument with an explicit INTENT(OUT) declaration is not given an explicit value.   [RC]
  subroutine med_io_init(gcomp, rc)
--------------------------------^
/opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o): In function `for__io_return':
for_diags_intel.c:(.text+0xce5): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0xe86): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x1055): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x1064): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x11d0): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x11df): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x1246): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x12b9): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x139e): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x1543): relocation truncated to fit: R_X86_64_PC32 against symbol `message_catalog' defined in COMMON section in /opt/intel/oneapi/compiler/2021.1.1/linux/compiler/lib/intel64_lin/libifcoremt.a(for_diags_intel.o)
for_diags_intel.c:(.text+0x1552): additional relocation overflows omitted from the output
make: *** [/home/src_cesm2_3_beta08/projects/scratch/mycase_test1/bld/cesm.exe] Error 1

Does anyone have ideas about what might be going wrong here? Maybe there is some extra ifort flag I need to add somewhere?

Thanks for looking!

-Lumos
 

jedwards

CSEG and Liaisons
Staff member
This error often occurs if you attempt to build with NTASKS values that are too small for the domain size. You didn't provide enough information to determine if that is the case here.
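For reference, the task counts can be inspected and changed from inside the case directory with CIME's standard tools (a sketch only; the value 128 below is illustrative, not a recommendation for any particular grid):

```
# From inside the case directory: inspect the current task layout
./xmlquery NTASKS
./pelayout

# Raise the task counts (128 is only an illustrative value)
./xmlchange NTASKS=128

# Re-run setup and rebuild so the new layout takes effect
./case.setup --reset
./case.build
```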
 

Lumoss

Member
jedwards said:
This error often occurs if you attempt to build with NTASKS values that are too small for the domain size. You didn't provide enough information to determine if that is the case here.

Thank you for your reply.

For the time being, I am trying to port CESM2_3_beta08 to a virtual machine and build a MUSICAv0 case (CAM-chem with the CONUS grid).

My config_compilers.xml:

XML:
<?xml version="1.0"?>
<config_compilers version="2.0">
   <!-- customize these fields as appropriate for your
        system. Examples are provided for Mac OS X systems with
        homebrew and macports. -->
   <compiler COMPILER="intel" MACH="Lumos">
      <!-- homebrew -->
      <CFLAGS>
        <base>  -qno-opt-dynamic-align -fp-model precise -std=gnu99 </base>
        <append compile_threaded="true"> -qopenmp </append>
        <append DEBUG="FALSE"> -O2 -debug minimal </append>
        <append DEBUG="TRUE"> -O0 -g </append>
      </CFLAGS>
      <!--MPI_LIB_NAME MPILIB="mpich">mpich</MPI_LIB_NAME-->
      <CPPDEFS>
    <append>-DFORTRANUNDERSCORE -DCPRINTEL</append>
      </CPPDEFS>
      <CXX_LDFLAGS>
        <base> -cxxlib </base>
      </CXX_LDFLAGS>
      <CXX_LINKER>FORTRAN</CXX_LINKER>
      <FC_AUTO_R8>
        <base> -r8 </base>
      </FC_AUTO_R8>
      <FFLAGS>
        <base> -qno-opt-dynamic-align  -convert big_endian -assume byterecl -ftz -traceback -assume realloc_lhs -fp-model source  </base>
        <append compile_threaded="true"> -qopenmp </append>
        <append DEBUG="TRUE"> -O0 -g -check uninit -check bounds -check pointers -fpe0 -check noarg_temp_created </append>
        <append DEBUG="FALSE"> -O2 -debug minimal </append>
        <append MPILIB="impi"> -mcmodel medium </append>
      </FFLAGS>
      <FFLAGS_NOOPT>
        <base> -O0 </base>
        <append compile_threaded="true"> -qopenmp </append>
      </FFLAGS_NOOPT>
      <FIXEDFLAGS>
        <base> -fixed </base>
      </FIXEDFLAGS>
      <FREEFLAGS>
        <base> -free </base>
      </FREEFLAGS>
      <LDFLAGS>
    <append compile_threaded="TRUE"> -qopenmp </append>
        <append> -mkl </append>
      </LDFLAGS>
      <!-- brew install gcc without-multilib cmake mpich hdf5 enable-fortran netcdf enable-fortran -->
      <SFC> ifort </SFC>
      <SCC> icc </SCC>
      <SCXX> icpc </SCXX>
      <MPIFC> mpiifort </MPIFC>
      <MPICC> mpiicc  </MPICC>
      <MPICXX> mpiicpc </MPICXX>
      <SUPPORTS_CXX>TRUE</SUPPORTS_CXX>
      <NETCDF_PATH>/home/LIBRARIESICC/netcdf</NETCDF_PATH>
      <SLIBS>
        <append> -L/home/LIBRARIESICC/netcdf/lib -lnetcdff -lnetcdf</append>
        <append> -L/opt/intel/oneapi/mkl/2021.1.1/lib/intel64 -lmkl_rt</append>
        <!--append> -L/home/pc/LIBRARIESICC/lapack-3.11 -llapack -lblas</append-->
      </SLIBS>


   </compiler>


</config_compilers>
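One detail that may be relevant to the relocation overflows (an observation, not a confirmed diagnosis): in the FFLAGS above, `-mcmodel medium` is only appended when MPILIB is `impi`, while config_machines.xml declares the MPI library as `intelmpi`, so that flag may never actually be applied. If the large-model link errors return, a sketch of one thing to try, assuming the MPILIB names should match:

```xml
<FFLAGS>
  <base> -qno-opt-dynamic-align  -convert big_endian -assume byterecl -ftz -traceback -assume realloc_lhs -fp-model source  </base>
  <append compile_threaded="true"> -qopenmp </append>
  <!-- match the MPI library name declared in config_machines.xml -->
  <append MPILIB="intelmpi"> -mcmodel medium </append>
</FFLAGS>
```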

My config_machines.xml:

XML:
<?xml version="1.0"?>
<config_machines>
   <machine MACH="Lumos">
      <!-- customize these fields as appropriate for your system (max tasks) and
           desired layout (change '${HOME}/projects' to your
           preferred location). -->
      <DESC>__USEFUL_DESCRIPTION__</DESC>
      <OS>LINUX</OS>
      <NODENAME_REGEX> Lumos </NODENAME_REGEX>
      <COMPILERS>intel</COMPILERS>
      <MPILIBS>intelmpi</MPILIBS>
      <CIME_OUTPUT_ROOT>/home/src_cesm2_3_beta08/projects/scratch</CIME_OUTPUT_ROOT>
      <DIN_LOC_ROOT>/mnt/hgfs/cesminputdata</DIN_LOC_ROOT>
      <DIN_LOC_ROOT_CLMFORC>/home/src_cesm2_3_beta08/projects/ptclm-data</DIN_LOC_ROOT_CLMFORC>
      <DOUT_S_ROOT>/home/src_cesm2_3_beta08/projects/scratch/archive/$CASE</DOUT_S_ROOT>
      <BASELINE_ROOT>/home/src_cesm2_3_beta08/projects/baselines</BASELINE_ROOT>
      <CCSM_CPRNC>/home/src_cesm2_3_beta08/cime/tools/cprnc</CCSM_CPRNC>
      <GMAKE>make</GMAKE>
      <GMAKE_J>2</GMAKE_J>
      <BATCH_SYSTEM>none</BATCH_SYSTEM>
      <SUPPORTED_BY>Lumos</SUPPORTED_BY>
      <MAX_TASKS_PER_NODE>1</MAX_TASKS_PER_NODE>
      <MAX_MPITASKS_PER_NODE>1</MAX_MPITASKS_PER_NODE>
      <mpirun mpilib="intelmpi">
    <executable>mpirun</executable>
    <arguments>
          <arg name="ntasks"> -np {{ total_tasks }} </arg>
          <!--
           <arg name="anum_tasks"> -np $TOTALPES</arg>
      <arg name="labelstdout">-prepend-rank</arg>
          -->
    </arguments>
      </mpirun>
      <module_system type="none"/>
      <environment_variables comp_interface="nuopc">
        <env name="OMP_STACKSIZE">256M</env>
        <env name="MKL_PATH">/opt/intel/oneapi/mkl/2021.1.1/lib/intel64</env>
        <env name="ESMFMKFILE">/home/LIBRARIESICC/esmfinstall8_split_imkl/lib/libg/Linux.intel.64.intelmpi.default/esmf.mk</env>
        <env name="NETCDF_PATH">/home/LIBRARIESICC/netcdf</env>
        <env name="NETCDF_HOME">/home/LIBRARIESICC/netcdf</env>
        <env name="HDF5_PATH">/home/LIBRARIESICC/netcdf</env>
        <env name="ZLIB_PATH">/home/LIBRARIESICC/netcdf</env>
      </environment_variables>
      <resource_limits>
        <resource name="RLIMIT_STACK">-1</resource>
      </resource_limits>
   </machine>


  <default_run_suffix>
    <default_run_exe>${EXEROOT}/cesm.exe </default_run_exe>
    <default_run_misc_suffix> >> cesm.log.$LID 2>&amp;1 </default_run_misc_suffix>
  </default_run_suffix>


</config_machines>
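A possible inconsistency worth noting (an assumption, not a verified fix): with `MAX_TASKS_PER_NODE` and `MAX_MPITASKS_PER_NODE` both set to 1, CIME would treat a multi-task layout as spanning one node per task, which is unlikely to be the intent on a single virtual machine. The usual approach is to set both to the number of cores actually available to the VM, for example:

```xml
<!-- illustrative values only: set these to the VM's actual core count -->
<MAX_TASKS_PER_NODE>4</MAX_TASKS_PER_NODE>
<MAX_MPITASKS_PER_NODE>4</MAX_MPITASKS_PER_NODE>
```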

Thanks again.

--Lumos
 

Lumoss

Member
Greetings!

I used the default total task count (135), reran ./case.build, and the problem was solved.

Thank you very much!

--Lumos