
mpi

  1. Y

    CESM2 case run error when porting to a new machine

    Hello all, I am porting CESM2.2.2 to a new machine, Sherlock. Using the attached configuration files, I've successfully passed the create_newcase, case.setup, and case.build stages, but hit errors at runtime for the test case "b.e20.B1850.f19_g17.test" (compset=B1850, res=f19_g17). It was built...
  2. J

    Gather data from global grid to be used in a physics parameterization

    Hello, I am trying to modify CAM (based on git tag `cam6_3_139`) to implement an updated gravity wave parameterization in `src/physics/cam/gw_drag.F90`. As part of my work I need to operate on some variables using the global grid rather than the chunked grid that has been scattered to... (see the Fortran gather sketch after this list)
  3. H

    Using multiple CPUs of an Ubuntu server to run CLM5.0

    Dear all, I have a question about how to use multiple CPUs of an Ubuntu server to run CLM5.0. The machine I use to run CLM is an Ubuntu server (no batch system) with 40 Intel(R) Xeon(R) Silver 4210 CPUs. One CPU has 10 cores and 20 threads. I wonder how to set MAX_TASKS_PER_NODE and...
  4. A

    Error while running case.submit: mpiexec does not support recursive calls

    Hello everyone, I was recently able to port CESM using an example for the compset QPC4. Currently I am working on another example for the compset I1850Clm50Sp, and I ran the following: ./create_newcase --case ~/clm_tutorial_cases/I1850CLM50_004 --res f19_g17 --compset I1850Clm50Sp --machine...
  5. marcinkupilas

    Please help with debugging (MPI issue): writing diagnostics to the log file as code executes (not the field list)

    Hello, thanks in advance for your time. I am running cesm2.2.0 and trying to write out a variable to the atm.log file from the gw_drag module. I only wish to know its values for a single vertical column (the exact location doesn't matter), and the code that I have written at the end of the module... (see the logging sketch after this list)
  6. J

    CLM run crashes after first resubmit

    Hi all, I'm doing a regional CLM-only run over Africa with anomaly forcing from DATM. To run from 2015-2100 I separate the run into 17-year blocks and set RESUBMIT=4. The model runs fine for the initial set of 17 years and writes history files as expected. However, when the first resubmitted...
  7. M

    Errors when calling shr_mpi_bcast

    Hello, I am trying to add some code to cime_comp_mod.F90 to read some data, modeled on the code the coupler uses to read its restart file. But when I add the following line, the program reports an error. I have ensured that the parameters passed into shr_mpi_bcast are consistent... (see the shr_mpi_bcast sketch after this list)
  8. hywang

    Error when running case.setup in CESM2.2

    Hello everyone, I got an error when I run ./case.setup, and the error info is: ERROR: Could not find a matching MPI for attributes: {'mpilib': 'impi', 'threaded': False, 'compiler': 'intel'} I think this may be caused by a wrong setting in my machine or compiler config file. However, about...
  9. F

    MPI issues with scripts_regression_tests.py script file

    Dear CESM community, I'm a beginner with CLM5.0, but I have already spent several months trying to make it work, without success. I'm now really looking for some help or advice. In fact, I'm not able to complete a run, since issues appear at the ./case.submit step. As far as I have...
  10. V

    execute cesm.exe

    Hello, I am trying to run CESM without using case.submit (or case.run), I mean directly executing cesm.exe, but that does not work at all. My porting is otherwise correct, since a "normal" ./case.submit is successful, and I do not understand what is missing. All the needed files seem to be...
  11. liyue1

    MPI error (MPI_File_write_at_all) : I/O error (CESM2.1.1 on Cheyenne)

    Hello there! Recently I have been running CESM2.1.1 on Cheyenne with a piControl compset. The simulation successfully ran for 55 years, then got stuck with an MPI error last weekend. The MPI error in the log file is shown below: 1: Opened file...
  12. D

    WACCM-X runtime error

    My environment: when I run the WACCM-X case, it is automatically cancelled. So I checked the log; the key error information is shown below. My CESM version is 2.1.3, and you can see the details in the file named version_info.txt. You can also see the changes I made to the XML files in the bash...
  13. A

    known problem with mpiexec_mpt on cheyenne

    Affected release: CESM2.0.0. Affected machine: cheyenne.ucar.edu. There is a known problem with launching MPI jobs on cheyenne such that the user may be required to resubmit the model using the case.submit script repeatedly before the model run is launched successfully. The problem will produce...
  14. J

    relocation R_X86_64_PC32 (INTEL COMPILER)

    This error indicates that the model's size requirement is too large to fit the default memory model; solutions vary depending on the case. If you are using netcdf 4.6.0 or newer with pio 1_8_12 or older, there is an incompatibility that presents this error. You may update to pio1_8_14. If you are...
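
A few recurring patterns from the threads above, sketched in Fortran. For the global-gather question in item 2: CAM's physics grid is decomposed into chunks, and CAM provides its own gather routines in the phys_grid module, but the underlying MPI pattern is a variable-count gather. Below is a minimal, self-contained sketch of that pattern; all names (`field_local`, `nlocal`, and so on) are illustrative, not CAM identifiers.

```fortran
program gather_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nranks, i, nlocal, nglobal
  integer, allocatable :: counts(:), displs(:)
  real(8), allocatable :: field_local(:), field_global(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

  ! Stand-in for an uneven column decomposition: each rank owns a
  ! different number of "columns".
  nlocal = 10 + rank
  allocate(field_local(nlocal))
  field_local(:) = real(rank, 8)

  ! The root needs the per-rank counts to size the global array.
  allocate(counts(nranks), displs(nranks))
  call MPI_Gather(nlocal, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, &
                  0, MPI_COMM_WORLD, ierr)

  if (rank == 0) then
     displs(1) = 0
     do i = 2, nranks
        displs(i) = displs(i-1) + counts(i-1)
     end do
     nglobal = sum(counts)
     allocate(field_global(nglobal))
  else
     allocate(field_global(1))  ! unused except on the root
  end if

  ! Gather the uneven local chunks into one global array on rank 0,
  ! where code can operate on the full grid.
  call MPI_Gatherv(field_local, nlocal, MPI_DOUBLE_PRECISION, &
                   field_global, counts, displs, MPI_DOUBLE_PRECISION, &
                   0, MPI_COMM_WORLD, ierr)

  if (rank == 0) write(*,*) 'gathered ', nglobal, ' columns'
  call MPI_Finalize(ierr)
end program gather_sketch
```

If every task needs the global field, as is common for a parameterization, the same counts and displacements work with MPI_Allgatherv in place of MPI_Gatherv.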
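For the log-file diagnostic in item 5: CAM routes its log output through the unit `iulog` from `cam_logfile`, and `spmd_utils` provides `masterproc`, which is true only on the root task. Guarding the write with `masterproc` avoids printing one copy of the line per MPI task, a frequent source of garbled atm.log output. A sketch of the pattern follows; `utgw` stands in for whatever array you want to inspect, and the fragment only compiles inside the CAM source tree.

```fortran
! Inside a physics routine such as those in gw_drag (illustrative):
use cam_logfile, only: iulog       ! unit connected to atm.log
use spmd_utils,  only: masterproc  ! .true. on the root task only

! ...after the field of interest has been computed...
if (masterproc) then
   ! Print one vertical column; since the exact location does not
   ! matter, column 1 of whatever chunk the root task owns will do.
   write(iulog,*) 'gw_drag diagnostic, column 1: ', utgw(1,:)
end if
```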
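For the shr_mpi_bcast error in item 7: `shr_mpi_bcast` in the CESM share code (`shr_mpi_mod`) is a generic interface resolved by the type and rank of its first argument, so the variable must be declared with a type and rank that match one of the provided specifics, and must match on every task. A sketch of a typical call; `nrec` and the communicator name `mpicom` are illustrative, and the fragment only compiles against CESM's share library.

```fortran
! Illustrative; mirrors how the driver broadcasts values it reads.
use shr_mpi_mod, only: shr_mpi_bcast

integer :: nrec   ! read from file on the root task, needed everywhere

! Every task makes the same call; the optional string is echoed in
! error messages if the underlying MPI_Bcast fails.
call shr_mpi_bcast(nrec, mpicom, 'cime_comp_mod: bcast nrec')
```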