
Error Using Prescribed MODIS LAI with 0.05° Surface Dataset in CLM5.2

koushanm

Koushan Mohammadi
New Member
I am trying to run the model for a single year using the 2000_DATM%GSWP3v1_CLM50%SP_SICE_SOCN_MOSART_SGLC_SWAV compset. The region I am focusing on is the Central U.S. (latitude 33–46°, longitude 250–272°), and I am using a high spatial resolution of 0.05 degrees for the atmospheric forcing data.
For the surface dataset, I used:

/glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.0/GLOBAL/surfdata_0.05x0.05-hires_hist_2005_78pfts_c240425.nc

because I want to run the model at a fine resolution. Since the monthly LAI data in the surface dataset follows the same annual cycle each year, I initially enabled the use of external LAI by setting use_lai_streams = .true. in user_nl_clm.
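(For reference, the full set of LAI-stream entries in user_nl_clm typically looks something like the following; the stream_year_* names are from the standard CLM5 namelist and the values here are purely illustrative, not the exact settings used in this case:)

use_lai_streams = .true.
stream_year_first_lai = 2001
stream_year_last_lai = 2013
model_year_align_lai = 2001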
However, after submitting the case, I encountered an error.

/glade/derecho/scratch/koushanm/April_30_USA_MODIS_SP/run> vi cesm.log.9370208.desched1.250430-213018

dec1906.hsn.de.hpc.ucar.edu 175: # of NaNs = 1
dec1906.hsn.de.hpc.ucar.edu 175: Which are NaNs = F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec1906.hsn.de.hpc.ucar.edu 175: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec1906.hsn.de.hpc.ucar.edu 175: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec1906.hsn.de.hpc.ucar.edu 175: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec1906.hsn.de.hpc.ucar.edu 175: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F T F F F
dec1906.hsn.de.hpc.ucar.edu 175: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec1906.hsn.de.hpc.ucar.edu 175: NaN found in field Sl_t at gridcell index/lon/lat: 188
dec1906.hsn.de.hpc.ucar.edu 175: 271.575000000000 43.8750000000000
dec1906.hsn.de.hpc.ucar.edu 175: ERROR: ERROR: One or more of the CTSM cap export_1D fields are NaN
dec1906.hsn.de.hpc.ucar.edu 175: Image PC Routine Line Source
dec1906.hsn.de.hpc.ucar.edu 175: cesm.exe 000000000107373D shr_abort_mod_mp_ 114 shr_abort_mod.F90
dec1906.hsn.de.hpc.ucar.edu 175: cesm.exe 00000000005A3E3D lnd_import_export 169 lnd_import_export_utils.F90
dec1906.hsn.de.hpc.ucar.edu 175: cesm.exe 00000000005A3040 lnd_import_export 1193 lnd_import_export.F90
dec1906.hsn.de.hpc.ucar.edu 175: cesm.exe 00000000005A0618 lnd_import_export 780 lnd_import_export.F90
dec1906.hsn.de.hpc.ucar.edu 175: cesm.exe 0000000000590083 lnd_comp_nuopc_mp 912 lnd_comp_nuopc.F90


In the next step, I attempted to crop the MODIS LAI stream data using the following command:

ncks -O -d lat,33,46 -d lon,250,272 /glade/u/home/koushanm/MODISPFTLAI_0.5x0.5_c140711.nc MODISPFTLAI_cropped.nc
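(Note: ncks treats purely integer -d bounds as zero-based indices rather than coordinate values, so a coordinate-based crop of this region would likely need decimal points, along the lines of:

ncks -O -d lat,33.0,46.0 -d lon,250.0,272.0 /glade/u/home/koushanm/MODISPFTLAI_0.5x0.5_c140711.nc MODISPFTLAI_cropped.nc

The integer form above instead selects latitude/longitude indices 33–46 and 250–272 of the global grid rather than the intended coordinate range.)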

I also used a mesh file created from the surface dataset and added the following settings to user_nl_clm:

stream_fldfilename_lai = '/glade/u/home/koushanm/MODISPFTLAI_cropped.nc'
stream_meshfile_lai = '/glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/lnd_mesh.nc'

After submitting the case again, I encountered another error. The new log file is located at:

/glade/derecho/scratch/koushanm/April_30_USA_MODIS_SP/run/cesm.log.9370835.desched1.250430-222056

dec0253.hsn.de.hpc.ucar.edu 0: PIO2 pio_file.c retry NETCDF
dec0139.hsn.de.hpc.ucar.edu 502: Error dstmbl: pft= 174 lnd_frc_mbl(p)= 6.576454038243204E+031
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: local patch index = 174
dec0253.hsn.de.hpc.ucar.edu 89: Error dstmbl: pft= 58 lnd_frc_mbl(p)= 2.753951402068762E+020
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: local patch index = 58
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: global patch index = 186684
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: global column index = 108680
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: global patch index = 51815
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: global landunit index = 64616
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: global column index = 29242
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: global landunit index = 17570
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: global gridcell index = 26087
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: gridcell longitude = 256.3250000
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: global gridcell index = 7107
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: gridcell latitude = 35.9750000
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: gridcell longitude = 253.3250000
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: pft type = 0
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: gridcell latitude = 33.8250000
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: column type = 1
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: pft type = 0
dec0139.hsn.de.hpc.ucar.edu 502: iam = 502: landunit type = 1
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: column type = 1
dec0253.hsn.de.hpc.ucar.edu 89: iam = 89: landunit type = 1
dec0139.hsn.de.hpc.ucar.edu 502: ENDRUN:
dec0085.hsn.de.hpc.ucar.edu 296: Error dstmbl: pft= 81 lnd_frc_mbl(p)= 3.774403431464960E+015
dec0139.hsn.de.hpc.ucar.edu 502: ERROR: ERROR in DUSTMod.F90 at line 365
dec0253.hsn.de.hpc.ucar.edu 89: ENDRUN:
dec0085.hsn.de.hpc.ucar.edu 296: iam = 296: local patch index = 81
dec0253.hsn.de.hpc.ucar.edu 89: ERROR: ERROR in DUSTMod.F90 at line 365
dec0085.hsn.de.hpc.ucar.edu 296: iam = 296: global patch index = 83305
dec0081.hsn.de.hpc.ucar.edu 226: Error dstmbl: pft= 5 lnd_frc_mbl(p)= 321979374166.511
dec0081.hsn.de.hpc.ucar.edu 226: iam = 226: local patch index = 5
dec0085.hsn.de.hpc.ucar.edu 296: iam = 296: global column index = 46898
dec0139.hsn.de.hpc.ucar.edu 386: Error dstmbl: pft= 5 lnd_frc_mbl(p)= 221199400734.272
dec0253.hsn.de.hpc.ucar.edu 3: Error dstmbl: pft= 63 lnd_frc_mbl(p)= 28693065948.9600
dec0081.hsn.de.hpc.ucar.edu 226: iam = 226: global patch index = 10654


Could the issue be due to the mismatch in spatial resolution between the MODIS LAI data (0.5°) and the surface dataset (0.05°)? Is it possible to regrid the MODIS LAI data to match the 0.05° resolution? Additionally, the surface dataset is configured with 78 plant functional types (PFTs), while the MODIS LAI data uses only 16 PFTs. Could this PFT mismatch also be causing the error?

My goal is to use monthly prescribed LAI that varies interannually while retaining the fine spatial resolution. Is there any way to make this configuration work? Alternatively, is there a surface dataset available that provides interannually varying LAI at high spatial resolution?
 

Attachments

  • cesm.log.9370208.desched1.250430-213018.PNG (114.4 KB)
  • cesm.log.9370835.desched1.250430-222056.PNG (121.1 KB)

slevis

Moderator
Staff member
My suggested troubleshooting approach would be to start from a test where you make sure you can run the model without the custom data. Then I would introduce custom elements one at a time to see what works and what doesn't.
 

koushanm

Koushan Mohammadi
New Member
Thank you for your response.

I was able to successfully run the model using only the LAI included in the surface dataset, which repeats the same annual cycle each year. There were no errors in the log files from that run (/glade/derecho/scratch/koushanm/archive/April_30_USA_SP).

However, when I set use_lai_streams = .true. and submitted the model, it ran for a few minutes before failing with an error. The log file for that run is located at:

/glade/derecho/scratch/koushanm/April_30_USA_MODIS_SP/run/cesm.log.9370208.desched1.250430-213018. I attached the log file for your reference.

The MODIS LAI (/glade/campaign/cesm/cesmdata/inputdata/lnd/clm2/lai_streams/MODISPFTLAI_0.5x0.5_c140711.nc) is global, at 0.5° spatial resolution, and includes 16 PFTs. In contrast, my surface dataset is at 0.05° resolution, contains 78 PFTs, and is cropped for the Midwest U.S. When I attempted to crop the MODIS data to match my domain, the model failed immediately without starting the simulation. That log file is here:

/glade/derecho/scratch/koushanm/April_30_USA_MODIS_SP/run/cesm.log.9370835.desched1.250430-222056

Given these differences in spatial resolution and PFT structure, I’d like to ask:
  • Is it possible to run CTSM with a surface dataset and LAI stream that differ in resolution and number of PFTs?
  • Can the model internally reconcile these differences (e.g., through interpolation or remapping)?
  • Is there a way to convert 16-PFT LAI data to be compatible with a 78-PFT surface dataset?
Any guidance would be greatly appreciated.
 

Attachments

  • cesm.log.9370208.desched1.txt (486 KB)

oleson

Keith Oleson
CSEG and Liaisons
Staff member
The subsetted LAI streams file you've created seems to have latitude values that are different from the region you are attempting to simulate (-73.25 to -66.75), and the "mask" variable is zero everywhere. That is likely causing your error in that particular run.
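(A quick way to confirm what ended up in the cropped file is to dump its coordinates and mask, e.g. something like:

ncdump -v lat,lon,mask MODISPFTLAI_cropped.nc | less
)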
That being said, I personally don't think I've tried running a regional simulation with the global lai streams file as you did in your initial attempt, but I would think it should work. The 78 pft surface dataset should be collapsed to a single generic crop type (you can verify that in your lnd log) and the lai streams should then be applied to the 16 pfts appropriately in this SP case you've set up.
The error you got in that original case indicated a NaN in the temperature field (Sl_t), which often points to a problem with the atmospheric forcing data. The mesh file you are using to describe the atmospheric data:

/glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/lnd_mesh.nc

seems to have different coordinates than the forcing files do. For example, the lower left gridcell in the mesh file has coordinates
centerCoords =
250.025, 33.025, ...

while the forcing file has coordinates of -109.95, 33.0. Those should match; they don't, even after converting the forcing-file longitude to the 0–360 convention.

Have you tried your regional simulation without the lai streams on?
 

koushanm

Koushan Mohammadi
New Member
Thank you so much for your input.

I had also noticed the longitude mismatch, where the surface dataset uses longitudes in the 0–360 range and the atmospheric forcing data is in -180 to 180. There’s also a slight offset, even if I convert the forcing data to 0–360. I initially didn’t think this would cause issues, since my previous runs with similar configurations completed without errors.

I was able to successfully run the SP model for the Central U.S. (latitude 33–46°, longitude 250–272°) for the year 2005 using only the LAI from the surface dataset (with the same annual cycle each year). That run completed without errors (/glade/derecho/scratch/koushanm/archive/April_30_USA_SP). I also successfully ran the BGC model for the same region using the 2000_DATM%GSWP3v1_CLM50%BGC-CROP_SICE_SOCN_MOSART_SGLC_SWAV compset with the surfdata_0.05x0.05-hires_noSP_hist_2005_78pfts_c240516.nc surface dataset. Since both runs completed without issues, I initially assumed that the model could automatically handle the coordinate and resolution mismatches between the surface and forcing data.

I will work on aligning the coordinates and attempt the SP run with prescribed LAI again.

Another question I have is: if I need to modify MODISPFTLAI_0.5x0.5_c140711.nc for my case, such as cropping it to match my region, do I also need to modify the corresponding mesh file used in stream_meshfile_lai (i.e., /glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/lnd_mesh.nc)?
Or can I just keep using the same mesh file after cropping the LAI stream? Or should I point to the surface-data mesh file by setting:
stream_meshfile_lai = '/glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/lnd_mesh.nc'
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
Right, the model should be able to handle the differences in coordinates between the surface data and the atmospheric forcing data. What I was concerned about was the mismatch between the forcing data coordinates and the coordinates in the mesh file you are using to describe the forcing data. The mesh file you are using for that is in datm.streams.xml:

<meshfile>/glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/lnd_mesh.nc</meshfile>

But if you are saying you can run successfully with that mismatch between the forcing data coordinates and that mesh file, then maybe that's ok, although they should match. But I would check your model output (history fields) to make sure they look like you would expect.

If you crop the lai streams file then you also would need a new mesh file. You can't use lnd_mesh.nc because its coordinates don't match the cropped lai stream file. E.g., your cropped lai streams file has latitudes:


lat = -73.25, -72.75, -72.25, -71.75, -71.25, -70.75, -70.25, -69.75,
-69.25, -68.75, -68.25, -67.75, -67.25, -66.75

i.e., 0.5deg resolution, while the mesh file you are using for the lai streams appears to be 0.05deg resolution.
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
I cloned your case, using my checkout of ctsm5.2.005, set use_lai_streams = .false., and it ran fine for a month, as you noted for yourself.
I then set use_lai_streams = .true., using the default global lai streams file and mesh, and that also ran for a month successfully, which I think is what I would expect.
My case is here for you to compare with your own; maybe there is something different that was causing your original simulation to fail (or possibly I'm missing something that is in yours):

/glade/work/oleson/ctsm5.2.005/cime/scripts/April_30_USA_MODIS_SP
 

koushanm

Koushan Mohammadi
New Member
Thank you so much for letting me know.

I also tried running the model for one month using the default global LAI streams file and mesh:

./xmlchange STOP_OPTION=nmonths
./xmlchange STOP_N=1
./xmlchange RUN_STARTDATE=2005-01-01

This run completed successfully (/glade/derecho/scratch/koushanm/archive/May_6_USA_MODIS_SP/logs/atm.log.9444078.desched1.250506-175532.gz). The same happened when I set RUN_STARTDATE=2005-02-01.

However, when I changed the start date to 2005-03-01, the model crashed with the same error as before:

/glade/derecho/scratch/koushanm/May_6_USA_MODIS_SP/run> vi cesm.log.9448502.desched1.250506-233546

dec2003.hsn.de.hpc.ucar.edu 0: sysmem size=1319.7 MB rss=376.6 MB share=83.2 MB text=20.0 MB datastack=0.0 MB
dec2003.hsn.de.hpc.ucar.edu 0: sysmem size=1319.7 MB rss=376.6 MB share=83.2 MB text=20.0 MB datastack=0.0 MB
dec2003.hsn.de.hpc.ucar.edu 0: sysmem size=1319.7 MB rss=376.6 MB share=83.2 MB text=20.0 MB datastack=0.0 MB
dec2003.hsn.de.hpc.ucar.edu 0: sysmem size=1319.7 MB rss=376.6 MB share=83.2 MB text=20.0 MB datastack=0.0 MB
dec2032.hsn.de.hpc.ucar.edu 313: # of NaNs = 1
dec2032.hsn.de.hpc.ucar.edu 313: Which are NaNs = F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 313: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 313: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 313: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 313: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F T F F F
dec2032.hsn.de.hpc.ucar.edu 313: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 313: NaN found in field Sl_t at gridcell index/lon/lat: 188
dec2032.hsn.de.hpc.ucar.edu 313: 271.625000000000 43.9750000000000
dec2032.hsn.de.hpc.ucar.edu 313: ERROR: ERROR: One or more of the CTSM cap export_1D fields are NaN
dec2032.hsn.de.hpc.ucar.edu 382: # of NaNs = 1
dec2032.hsn.de.hpc.ucar.edu 382: Which are NaNs = F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: T F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F F
dec2032.hsn.de.hpc.ucar.edu 382: NaN found in field Sl_t at gridcell index/lon/lat: 192
dec2032.hsn.de.hpc.ucar.edu 382: 271.625000000000 44.0250000000000
dec2032.hsn.de.hpc.ucar.edu 382: ERROR: ERROR: One or more of the CTSM cap export_1D fields are NaN
dec2032.hsn.de.hpc.ucar.edu 382: Image PC Routine Line Source
dec2032.hsn.de.hpc.ucar.edu 382: cesm.exe 000000000107373D shr_abort_mod_mp_ 114 shr_abort_mod.F90
dec2032.hsn.de.hpc.ucar.edu 382: cesm.exe 00000000005A3E3D lnd_import_export 169 lnd_import_export_utils.F90
dec2032.hsn.de.hpc.ucar.edu 382: cesm.exe 00000000005A3040 lnd_import_export 1193 lnd_import_export.F90
dec2032.hsn.de.hpc.ucar.edu 382: cesm.exe 00000000005A0618 lnd_import_export 780 lnd_import_export.F90
dec2032.hsn.de.hpc.ucar.edu 382: cesm.exe 0000000000590083 lnd_comp_nuopc_mp 912 lnd_comp_nuopc.F90
dec2032.hsn.de.hpc.ucar.edu 382: libesmf.so 000014FBB26A1247 _ZN5ESMCI11Method Unknown Unknown
dec2032.hsn.de.hpc.ucar.edu 382: libesmf.so 000014FBB26A10E5 c_esmc_methodtabl Unknown Unknown
dec2032.hsn.de.hpc.ucar.edu 382: libesmf.so 000014FBB28536AE esmf_attachmethod Unknown Unknown
dec2032.hsn.de.hpc.ucar.edu 382: libesmf.so 000014FBB3144180 nuopc_modelbase_m Unknown Unknown


The same issue occurred for other months as well (except January and February). By checking the atm.log file from the failed run (e.g., /glade/derecho/scratch/koushanm/May_6_USA_MODIS_SP/run/atm.log.9448502.desched1.250506-233546), I noticed that the model advances through a few time steps and then crashes due to a NaN in the Sl_t field at a grid cell.
This behavior is confusing, since the model runs without issue for the full year when use_lai_streams = .false., and also runs successfully for the first two months when use_lai_streams = .true.
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
I see. So I tried to run the first three months continuously and it crashed March 28 with the same error you show above. Same latitude/longitude.
I guess my next suggestion then would be to run three months continuously but in DEBUG mode; that may point you to the line of code where the calculation is going bad. I believe that Sl_t is calculated as a function of upward longwave, so presumably that is what is causing the bad Sl_t.
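(For reference, switching to a debug build is typically just the following in the case directory; the flag change requires a clean rebuild before resubmitting:

./xmlchange DEBUG=TRUE
./case.build --clean-all
./case.build
./case.submit
)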
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
With DEBUG on in my simulation, the error is occurring at this line in CanopyFluxesMod.F90:

z0mv(p) = exp(egvf * log(z0mv(p)) + (1._r8 - egvf) * log(z0mg(c)))

So, this means that either z0mv or z0mg is zero. I don't see how z0mg can be zero; however, z0mv could possibly be zero if htop (canopy top height) were zero while lai was non-zero. In the surface dataset that is being used:

/glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/surfdata_test_April_SP_29_hist_2000_78pfts_c250429.nc

I do see a very small region where MONTHLY_HEIGHT_TOP (and MONTHLY_HEIGHT_BOT) are zero for pfts 1-16. This region is located at the lat/lon where the error occurs.
I filled that region with similar values from nearby gridcells. Using that modified surface dataset, the simulation ran past the error.

There must be some inconsistency between lai and htop when lai_streams is switched on, probably due to the large mismatch in spatial resolution between the surface dataset and the lai streams file.

So, please try this surface dataset in your simulation:

/glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.005/surfdata_test_April_SP_29_hist_2000_78pfts_c250508.nc

Also, keep in mind that there may be other problems associated with this very high resolution surface dataset as it is generated from source data that is much coarser in resolution. For one thing, there's going to be some blockiness in some of the fields.
 

koushanm

Koushan Mohammadi
New Member
Thank you for your detailed explanation and for sharing the updated surface dataset. I’ll use the dataset you provided and walk through the adjustments you suggested. It’s very possible that something went wrong when I cropped the surface dataset for my region. I remember the process took a long time.

Before seeing your message, I had enabled DEBUG = TRUE and ran the model with my current surface dataset for the first three months. The crash occurred with the error trace pointing to the canopy flux calculation, which is consistent with what you found.

dec2013.hsn.de.hpc.ucar.edu 0: shr_file_mod.F90 913
dec2013.hsn.de.hpc.ucar.edu 0: This routine is depricated - use shr_log_setLogUnit instead -132
dec2013.hsn.de.hpc.ucar.edu 0: shr_file_mod.F90 913
dec2013.hsn.de.hpc.ucar.edu 0: This routine is depricated - use shr_log_setLogUnit instead -131
dec2013.hsn.de.hpc.ucar.edu 0: shr_file_mod.F90 913
dec2013.hsn.de.hpc.ucar.edu 0: This routine is depricated - use shr_log_setLogUnit instead -132
dec2046.hsn.de.hpc.ucar.edu 175: forrtl: error (73): floating divide by zero
dec2046.hsn.de.hpc.ucar.edu 175: Image PC Routine Line Source
dec2046.hsn.de.hpc.ucar.edu 175: libpthread-2.31.s 000014E26907E8C0 Unknown Unknown Unknown
dec2046.hsn.de.hpc.ucar.edu 175: cesm.exe 000000000489F069 Unknown Unknown Unknown
dec2046.hsn.de.hpc.ucar.edu 175: cesm.exe 0000000001576669 canopyfluxesmod_m 896 CanopyFluxesMod.F90
dec2046.hsn.de.hpc.ucar.edu 175: cesm.exe 0000000000A26139 clm_driver_mp_clm 718 clm_driver.F90


I also tested the model without DEBUG mode (DEBUG=FALSE) for November 2005, using RUN_STARTDATE=2005-11-01 and STOP_N=1, and the model failed. However, when I set RUN_STARTDATE=2005-12-01, the run completed successfully. This is interesting, since the one-month runs starting 2005-01-01 and 2005-02-01 were also successful.
It seems that the model runs without issues during the winter months (December, January, and February), perhaps because vegetation fluxes are largely inactive then? I would be interested to hear whether this behavior aligns with your expectations.

I also had a follow-up question. In the lnd.log, I saw the following:

collapse_crop_types irrigate=.F., so merging irrigated pfts with rainfed
collapse_crop_types merging crops into C3 generic crops

I wanted to check how the model converts the 78 PFTs in the surface dataset to the 16 PFTs used in the MODIS LAI stream. To understand how the model handles this, I viewed Table 2.2.1 (Plant Functional Types) in the CLM5 tech note (link). I noticed that it doesn’t explicitly list a "C3 generic crop." I assume this corresponds to PFT 15, which maps to LAI_15 in the MODIS file. I also noticed that LAI_16 was zero throughout the year at the gridcell that caused the crash. I would appreciate your clarification on how this mapping is handled internally.

Again, thank you for the surface dataset you provided. I will use it and will go through your steps for other regions as well.
 

koushanm

Koushan Mohammadi
New Member
I also tried to copy the surface dataset you provided to my directory, but I don’t have the necessary permissions.
 

koushanm

Koushan Mohammadi
New Member
Try it now please.
It is still not available
cp -r /glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.005/surfdata_test_April_SP_29_hist_2000_78pfts_c250508.nc /glade/work/koushanm/ctsm5.2.005/CTSM/tools/site_and_regional/subset_data_regional/surfdata_oleson_test_April_SP_29_hist_2000_78pfts_c250429.nc

cp: cannot stat '/glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.005/surfdata_test_April_SP_29_hist_2000_78pfts_c250508.nc': Permission denied
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
It seems like the permissions should be adequate:

-rwxr-xr-x+ 1 oleson cgdtss 3997623248 May 8 11:07 /glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.005/surfdata_test_April_SP_29_hist_2000_78pfts_c250508.nc

Why don't you try just pointing to it in your user_nl_clm for now.
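For example, something along these lines in user_nl_clm (fsurdat is the standard namelist variable for the surface dataset):

fsurdat = '/glade/campaign/cgd/tss/people/oleson/CLM5_datasets/ctsm5.2.005/surfdata_test_April_SP_29_hist_2000_78pfts_c250508.nc'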
 