
Trouble Running FATES in Fully Coupled CESM Configuration

KumarR

Kumar Roy
New Member
Hello,

I'm trying to run a fully coupled CESM simulation with FATES included in CLM. Since there is no standard compset that includes both FATES and a fully coupled configuration, I created the following custom compset:

1850_CAM60_CLM50%FATES_CICE_POP2%ECO_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD

with the resolution f19_g17.
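For context, a case with a long custom compset name like this is typically created by passing the full name to CIME's create_newcase, together with --run-unsupported since no supported alias exists. A sketch (the script path and case name below are assumptions):

```shell
# Sketch only: script location and case name are assumptions.
# --run-unsupported is required because this compset is not a
# supported out-of-the-box configuration.
./cime/scripts/create_newcase \
    --case fates_fully_coupled_test \
    --compset 1850_CAM60_CLM50%FATES_CICE_POP2%ECO_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD \
    --res f19_g17 \
    --run-unsupported
```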

The model builds successfully, but it fails during the run with the following error related to solar radiation balance:
###########################
WARNING:: BalanceCheck, solar radiation balance error (W/m2)
nstep = 98
errsol = 206.567107646436
clm model is stopping - error is greater than 1e-5 (W/m2)
fsa = 589.650018473141
fsr = 58.8546527003309
forc_solad(1) = 109.840006664536
forc_solad(2) = 130.015489733791
forc_solai(1) = 120.094534830304
forc_solai(2) = 81.9875322984055
forc_tot = 441.937563527036
clm model is stopping
...
###############################
The error repeats for several gridcell indices, always stopping due to a large solar radiation imbalance.

Has anyone successfully run FATES in a fully coupled CESM setup? Is there an issue with this specific compset or configuration when using FATES?

Any guidance on how to fix or debug this would be greatly appreciated.

Thanks,
Kumar
 

erik

Erik Kluzek
CSEG and Liaisons
Staff member
This is a known problem with running FATES coupled with CAM; we know it won't work right now. Work is underway to get it going, but we aren't quite there yet. CTSM in NorESM has working cases, but I'm not sure they have been scientifically vetted yet. So if you can, I suggest waiting until we've fully figured this out, vetted it, and brought it into CESM. I can't give a timeline for when that will happen, only that it will be a priority after CESM3.0 is released.
 

Jeline

New Member
Hi Erik. I encountered a problem similar to Kumar's. Thank you for your explanation. Does this mean that dynamic vegetation cannot currently be enabled in the fully coupled CESM model? Under what circumstances can dynamic vegetation be used normally? Thanks very much.
 

wvsi3w

wvsi3w
Member
Dear Erik,
I have looked at the website and searched online, but it says the release date for CESM3 is summer 2025. Do you know exactly when it will be released?
 

slevis

Moderator
Staff member
@Jeline our model releases are always intended for scientific research. When we release these models, they have been tested rigorously. Certain configurations we even consider supported, which indicates higher confidence on our part. Despite all that, you as the researcher need to decide on the quality of your research and, therefore, the extent of testing and validation you will perform to feel confident in your results and conclusions.
 

robbert02

Robbert Kouwenhoven
New Member
I am encountering the same problem (a solar radiation balance error) in LND-only simulations, using the following compset:

I2000Clm60Fates

with resolution f09_g17 on model version cesm3.0 beta06.

Given that I don't run coupled simulations, is there a way to bypass this problem? Or am I also supposed to wait for the full release of CESM3?
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
I would expect the I-cases with Fates to at least run, as we have testing for that. Can you provide more information on your case setup and errors?

 

robbert02

Robbert Kouwenhoven
New Member
model version:
cesm3_0_beta06-0-g637b593

changes to files in source tree:
- relaxed the error threshold in the check_for_errors subroutine of lnd_import_export_utils.F90 (in my tree under $CESM_ROOT/components/clm/src/cpl/utils/ ) to try to avoid this issue.

steps taken that led to this problem:
- I'm trying to do a 30-year run (starting in 2010) with modified fsurdat and fates_params files, to look at the influence of initial tree density on climate variables like soil moisture and temperature.
- I use the compset I2000Clm60Fates at f09_g17 resolution.
- I have set use_fates_fixed_biogeog = .true.
- This case, and similar cases, keep crashing at the same point in Oct/Nov 2021.

The error I'm getting in my lnd.log* file is the following:
Code:
 hist_htapes_wrapup : Writing current time sample to local history file
 ./FATES_5yr_run_2010_20251028-0907.clm2.h0.2010-01-31-00000.nc at nstep =
      207360  for history time interval beginning at    4290.00000000000
  and ending at    4320.00000000000


 (cpl:utils:check_for_errors) ERROR: Longwave down sent from the atmosphere model is negative or zero

As said before, I've tried to 'ignore' this issue by making the abort threshold more and more lenient, but that does not seem to help. Also, if I push further, other parts of CLM start crashing (UrbanRadiationMod, atm2lndMod).
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
This indicates a data atmospheric forcing (datm) problem, not a radiation balance error problem. The default atmospheric forcing data for that compset is GSWP3. It is global (valid values at every grid cell), so under default conditions you shouldn't get that error. If you are not running on Derecho, it could indicate you have a corrupted forcing file. I would check the atm log file to see what file(s) you are reading in when the error occurs and check those files. The files should have names like clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2010-01.nc. The longwave variable that is being read in is FLDS.
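One way to screen a suspect forcing file is to pull the FLDS variable out (e.g. with the netCDF4 Python package's Dataset class) and run a simple sanity check on it. The sketch below shows only the check on an in-memory array; the function name and the 10 W/m2 floor are my own assumptions, not anything from CLM:

```python
import numpy as np

def flds_is_suspect(flds, min_valid=10.0):
    """Return True if a downward-longwave field (FLDS, W/m2) looks corrupted,
    i.e. it would trip the coupler's "Longwave down ... is negative or zero"
    abort. The 10 W/m2 sanity floor is an assumption, not a CLM constant;
    real-world FLDS is roughly 80-450 W/m2."""
    arr = np.asarray(flds, dtype=float)
    return bool((arr <= 0.0).any() or arr.max() < min_valid)

# A month of all-zero FLDS (like a corrupted file) is flagged;
# plausible values pass.
print(flds_is_suspect(np.zeros((8, 4, 4))))        # True
print(flds_is_suspect(np.full((8, 4, 4), 320.0)))  # False
```

Running a check like this over each monthly TPQWL file should isolate which month is bad.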
 

robbert02

Robbert Kouwenhoven
New Member
The last few entries of the atm log file are as follows:

Code:
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Precip/clmforc.GSWP3.c2011.0.5x0.5.Prec.2001-11.nc     240
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/TPHWL/clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2001-11.nc     240
 atm : model date     20211130       70200
 atm : model date     20211130       72000
 atm : model date     20211130       73800
(shr_strdata_readstrm) close  : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Solar/clmforc.GSWP3.c2011.0.5x0.5.Solr.2001-11.nc
(shr_strdata_readstrm) opening   : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Solar/clmforc.GSWP3.c2011.0.5x0.5.Solr.2001-12.nc
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Solar/clmforc.GSWP3.c2011.0.5x0.5.Solr.2001-12.nc       1
 atm : model date     20211130       75600
 atm : model date     20211130       77400
 atm : model date     20211130       79200
(shr_strdata_readstrm) close  : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Precip/clmforc.GSWP3.c2011.0.5x0.5.Prec.2001-11.nc
(shr_strdata_readstrm) opening   : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Precip/clmforc.GSWP3.c2011.0.5x0.5.Prec.2001-12.nc
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Precip/clmforc.GSWP3.c2011.0.5x0.5.Prec.2001-12.nc       1
(shr_strdata_readstrm) close  : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/TPHWL/clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2001-11.nc
(shr_strdata_readstrm) opening   : /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/TPHWL/clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2001-12.nc
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/TPHWL/clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2001-12.nc       1
 atm : model date     20211130       81000
 atm : model date     20211130       82800
 atm : model date     20211130       84600
(shr_strdata_readstrm) reading file ub: /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/Solar/clmforc.GSWP3.c2011.0.5x0.5.Solr.2001-12.nc       2
 atm : model date     20211201           0

following the same naming convention as you suggested.

Upon inspecting the TPHWL file (which contains the FLDS variable) for December 2001, I noticed that it is filled with zeroes, unlike previous months. This dataset was downloaded by default when I created the case. How do I replace it with a working dataset?
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
You could try ./check_input_data --download, I think. You might have to remove the corrupted file first.
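Concretely, that might look like the sketch below (the file path is the one from the atm log earlier in the thread; $CASEROOT stands in for your case directory):

```shell
# Sketch: remove the corrupted forcing file, then let CIME re-fetch it.
cd $CASEROOT
rm /capstor/scratch/cscs/rkouwenh/cesm3_0_beta01/inputdata/atm/datm7/atm_forcing.datm7.GSWP3.0.5d.v1.c170516/TPHWL/clmforc.GSWP3.c2011.0.5x0.5.TPQWL.2001-12.nc
./check_input_data --download
```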
If that doesn't work, I could put it on our ftp site.
 