
error with create_newcase

xgao304

Member
I was able to port the previous CESM version 1.2.2 on our machine. However, when I tried to port CESM2 following http://esmci.github.io/cime/users_guide/porting-cime.html, I got some error messages when creating a new case. Here is the relevant configuration for our machine:

config_compilers.xml:
  $(NETCDF)
  $(shell $(NETCDF_PATH)/bin/nc-config --flibs) -llapack -lblas
  openmpi
  $(INC_MPI)/..

config_machines.xml:
    MIT Svante-Login, batch system is SLURM
    LINUX
    pgi
    openmpi

config_batch.xml:
    sbatch
    -S {{ shell }}

Here are the steps I have taken:
1. When I run "xmllint", the results indicate that config_machines.xml validates.
2. When I run "scripts_regression_tests.py", I get the following result:
Ran 129 tests in 100.785s
FAILED (failures=32, errors=64, skipped=9)

Some of the error messages look like: "Could not find a matching MPI for attributes: {'mpilib': 'openmpi', 'threaded': False, 'compiler': 'pgi'}"

3. When I run "create_newcase" as follows, I get a similar error to the one from "scripts_regression_tests.py":
Code:
./create_newcase --case ../../../cases/test2000 --compset I2000Clm50SpGs --res hcru_hcru --machine svante --run-unsupported

Compset longname is 2000_DATM%GSWP3v1_CLM50%SP_SICE_SOCN_MOSART_SGLC_SWAV
Compset specification file is /net/fs05/d1/xgao/cesm/cesm2/cime/../components/clm//cime_config/config_compsets.xml
Compset forcing is 1972-2004
ATM component is  Data driven ATM  GSWP3v1 data set
LND component is clm5.0:Satellite phenology:
ICE component is Stub ice component
OCN component is Stub ocn component
ROF component is MOSART: MOdel for Scale Adaptive River Transport
GLC component is Stub glacier (land ice) component
WAV component is Stub wave component
ESP component is
Pes     specification file is /net/fs05/d1/xgao/cesm/cesm2/cime/../components/clm//cime_config/config_pes.xml
Could not find machine match for 'c062.svante.mit.edu' or 'c062.svante.mit.edu'
Machine is svante
Pes setting: grid          is a%360x720cru_l%360x720cru_oi%null_r%r05_g%null_w%null_m%null
Pes setting: compset       is 2000_DATM%GSWP3v1_CLM50%SP_SICE_SOCN_MOSART_SGLC_SWAV
Pes setting: tasks       is {'NTASKS_ATM': -1, 'NTASKS_ICE': -1, 'NTASKS_CPL': -1, 'NTASKS_LND': -1, 'NTASKS_WAV': -1, 'NTASKS_ROF': -1, 'NTASKS_OCN': -1, 'NTASKS_GLC': -1}
Pes setting: threads     is {'NTHRDS_ICE': 1, 'NTHRDS_ATM': 1, 'NTHRDS_ROF': 1, 'NTHRDS_LND': 1, 'NTHRDS_WAV': 1, 'NTHRDS_OCN': 1, 'NTHRDS_CPL': 1, 'NTHRDS_GLC': 1}
Pes setting: rootpe      is {'ROOTPE_OCN': 0, 'ROOTPE_LND': 0, 'ROOTPE_ATM': 0, 'ROOTPE_ICE': 0, 'ROOTPE_WAV': 0, 'ROOTPE_CPL': 0, 'ROOTPE_ROF': 0, 'ROOTPE_GLC': 0}
Pes setting: pstrid      is {}
Pes other settings: {}
Pes comments: none
 Compset is: 2000_DATM%GSWP3v1_CLM50%SP_SICE_SOCN_MOSART_SGLC_SWAV
 Grid is: a%360x720cru_l%360x720cru_oi%null_r%r05_g%null_w%null_m%null
 Components in compset are: ['datm', 'clm', 'sice', 'socn', 'mosart', 'sglc', 'swav', 'sesp', 'drv', 'dart']
No project info available
No charge_account info available, using value from PROJECT
No project info available
cesm model version found: cesm2.0.0
starting mpirun_nodes
BAR BAR BAR
('here is an attribute: ', 'mpilib')
('here is an attribute: ', 'openmpi')
('here is an attribute: ', 'threaded')
('here is an attribute: ', False)
('here is an attribute: ', 'compiler')
('here is an attribute: ', 'pgi')
ERROR: Could not find a matching MPI for attributes: {'mpilib': 'openmpi', 'threaded': False, 'compiler': 'pgi'}

Any information is appreciated about how to solve the problem.

Thanks.
 

jedwards

CSEG and Liaisons
Staff member
Could you run the create_newcase command with the --debug flag and send the resulting create_newcase.log file?   
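For example, reusing the command from your post with --debug appended:
Code:
./create_newcase --case ../../../cases/test2000 --compset I2000Clm50SpGs --res hcru_hcru --machine svante --run-unsupported --debug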
 

xgao304

Member
Please see the attached log file. The command hangs as follows:

> /net/fs05/d1/xgao/cesm/cesm2/cime/scripts/lib/CIME/utils.py(126)expect()
-> try:
(Pdb)

Thanks,
Xiang
 

xgao304

Member
I did attach the file, but I'm not sure why it did not show up. I used Ctrl-C to end the command; I just wanted to pass that information along to you.
 

jedwards

CSEG and Liaisons
Staff member
Funny, your attachment didn't contain any errors. What Python version are you using?
 

jedwards

CSEG and Liaisons
Staff member
Could you create a git fork of cime and a branch with your changes?   I haven't figured out the issue and I want to try to reproduce the problem.    
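If it helps, a rough sketch of the steps (after forking ESMCI/cime on GitHub; the branch name below is just an example):
Code:
git clone https://github.com/<your-github-username>/cime.git
cd cime
git checkout -b svante-port
# copy in your edited config_batch.xml, config_compilers.xml, config_machines.xml
git add config/cesm/machines
git commit -m "Add svante machine definition"
git push origin svante-port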
 

xgao304

Member
I am not very familiar with git, so I'm not sure how to do that. I only changed three files in cime/config/cesm/machines: config_batch.xml, config_compilers.xml, and config_machines.xml. I have attached these three files. Please let me know if that helps.
 

jedwards

CSEG and Liaisons
Staff member
Okay, the problem is that your machine definition is incomplete. You need to have the <mpirun> section. You need to have a <module_system>, even if type='none'. And you need to correct the case in <BATCH_SYSTEM>: it should be 'slurm', not 'SLURM'.
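Roughly something like this inside your <machine> entry (a minimal sketch only; adjust the executable and any arguments for your system):
Code:
<BATCH_SYSTEM>slurm</BATCH_SYSTEM>
<mpirun mpilib="openmpi">
  <executable>mpirun</executable>
</mpirun>
<module_system type="none"/>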
 

mbagerji

MohammadBagher
New Member
I have followed the initial setup instructions, but I encountered an error during the `./case.setup` stage. When I run the command, I receive the following error message:
~/cases/b.e20.B1850.f19_g17.test$ ./case.setup
Could not write to 'replay.sh' script
ERROR: Could not find a matching MPI for attributes: {'compiler': 'intel', 'mpilib': 'mpich2', 'threaded': False}


For context, I created the case using the following command:
./create_newcase --case /home/$USER/test_case1/b.day1.0 --res f19_g17 --compset B1850 --machine athena --run-unsupported --project nn2806

I would greatly appreciate any guidance you can provide to help me resolve this issue.
 

jedwards

CSEG and Liaisons
Staff member
The error "Could not write to 'replay.sh' script" indicates that you cannot create a simple file in the case directory. You need write permission and available disk space for /home/$USER/test_case1/b.day1.0 (according to your create_newcase command) or ~/cases/b.e20.B1850.f19_g17.test (according to where you ran case.setup). The MPI error may be related to the write error, or it may be that MPI as specified in your config_machines.xml file is not in the path.
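A few quick checks you could run (paths taken from your commands above; adjust as needed):
Code:
# can the case directory be written to?
touch /home/$USER/test_case1/b.day1.0/write_test && rm /home/$USER/test_case1/b.day1.0/write_test
# is there disk space left?
df -h /home/$USER
# is the MPI launcher named in your config_machines.xml actually on your PATH?
which mpirun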
 

mbagerji

MohammadBagher
New Member
I solved the "Could not write to 'replay.sh' script" part.
But this still fails:

./case.setup
ERROR: Could not find a matching MPI for attributes: {'compiler': 'intel', 'mpilib': 'mpich2', 'threaded': False}
 

mbagerji

MohammadBagher
New Member
Here is the machine entry for athena from my config_machines.xml:
<machine MACH="athena">
<DESC> CMCC IBM iDataPlex, os is Linux, 16 pes/node, batch system is LSFd mpich</DESC>
<OS>LINUX</OS>
<COMPILERS>intel,intel15</COMPILERS>
<MPILIBS>mpich2</MPILIBS>
<CIME_OUTPUT_ROOT>/work/$USER/CESM2</CIME_OUTPUT_ROOT>
<DIN_LOC_ROOT>/users/home/dp16116/CESM2/inputdata</DIN_LOC_ROOT>
<DIN_LOC_ROOT_CLMFORC>$DIN_LOC_ROOT/atm/datm7</DIN_LOC_ROOT_CLMFORC>
<DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
<BASELINE_ROOT>$ENV{CESMDATAROOT}/ccsm_baselines</BASELINE_ROOT>
<CCSM_CPRNC>/users/home/dp16116/CESM2/cesm2.0.1/cime/tools/cprnc/cprnc</CCSM_CPRNC>
<PERL5LIB>/usr/lib64/perl5:/usr/share/perl5</PERL5LIB>
<GMAKE_J>8</GMAKE_J>
<BATCH_SYSTEM>lsf</BATCH_SYSTEM>
<SUPPORTED_BY> </SUPPORTED_BY>
<MAX_TASKS_PER_NODE>30</MAX_TASKS_PER_NODE>
<MAX_MPITASKS_PER_NODE>15</MAX_MPITASKS_PER_NODE>
<PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
<mpirun mpilib="default">
<executable> mpirun_Impi5 </executable>
</mpirun>
<module_system type="module">
<init_path lang="perl">/usr/share/Modules/init/perl.pm</init_path>
<init_path lang="python">/usr/share/Modules/init/python.py</init_path>
<init_path lang="csh">/usr/share/Modules/init/csh</init_path>
<init_path lang="sh">/usr/share/Modules/init/sh</init_path>
<cmd_path lang="perl">/usr/bin/modulecmd perl</cmd_path>
<cmd_path lang="python">/usr/bin/modulecmd python </cmd_path>
<cmd_path lang="sh">module</cmd_path>
<cmd_path lang="csh">module</cmd_path>
<modules>
<command name="purge"/>
</modules>
<modules compiler="intel">
<command name="load">ANACONDA2/python2.7</command>
<command name="load">INTEL/intel_xe_2015.3.187</command>
<command name="load">SZIP/szip-2.1_int15</command>
</modules>
<modules compiler="intel" mpilib="!mpi-serial" DEBUG="TRUE">
<command name="load">ESMF/esmf-6.3.0rp1-intelmpi-64-g_int15</command>
</modules>
<modules compiler="intel" mpilib="!mpi-serial" DEBUG="FALSE">
<command name="load">ESMF/esmf-6.3.0rp1-intelmpi-64-O_int15</command>
</modules>
<modules compiler="intel" mpilib="mpi-serial" DEBUG="TRUE">
<command name="load">ESMF/esmf-6.3.0rp1-mpiuni-64-g_int15</command>
</modules>
<modules compiler="intel" mpilib="mpi-serial" DEBUG="FALSE">
<command name="load">ESMF/esmf-6.3.0rp1-mpiuni-64-O_int15</command>
</modules>
<modules mpilib="mpi-serial">
<command name="load">HDF5/hdf5-1.8.15-patch1</command>
<command name="load">NETCDF/netcdf-C_4.3.3.1-F_4.4.2_C++_4.2.1</command>
</modules>
<modules mpilib="!mpi-serial">
<command name="load">HDF5/hdf5-1.8.15-patch1_parallel</command>
<command name="load">NETCDF/netcdf-C_4.3.3.1-F_4.4.2_C++_4.2.1_parallel</command>
<command name="load">PARALLEL_NETCDF/parallel-netcdf-1.6.1</command>
</modules>
<modules>
<command name="load">CMAKE/cmake-3.3.0-rc1</command>
</modules>
<modules compiler="intel">
<command name="unload">INTEL/intel_xe_2013.5.192</command>
<command name="unload">INTEL/intel_xe_2013</command>
<command name="unload">HDF5/hdf5-1.8.10-patch1</command>
<command name="load">INTEL/intel_xe_2015.3.187</command>
</modules>
</module_system>
<environment_variables>
<env name="OMP_STACKSIZE">256M</env>
</environment_variables>
<environment_variables compiler="intel">
<env name="I_MPI_EXTRA_FILESYSTEM_LIST">gpfs</env>
<env name="I_MPI_EXTRA_FILESYSTEM">on</env>
<env name="I_MPI_PLATFORM">snb</env>
<env name="I_MPI_HYDRA_BOOTSTRAP">lsf</env>
<env name="I_MPI_LSF_USE_COLLECTIVE_LAUNCH">1</env>
<env name="I_MPI_DAPL_UD">on</env>
<env name="I_MPI_DAPL_SCALABLE_PROGRESS">on</env>
<env name="XIOS_PATH">/users/home/models/nemo/xios-cmip6/intel_xe_2013</env>
</environment_variables>
</machine>
 

jedwards

CSEG and Liaisons
Staff member
I'm not sure; nothing is obviously wrong. Try changing
Code:
<mpirun mpilib="default">
to
Code:
<mpirun mpilib="mpich2">
and make sure that MPI is in your path. I don't see a module load specific to MPI in your list of modules.
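For example, from a login shell on athena (mpirun_Impi5 is the executable named in your <mpirun> entry above):
Code:
which mpirun_Impi5 || echo "mpirun_Impi5 not found in PATH - add the corresponding MPI module load to config_machines.xml"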
 