
How to select a specific region to run CLM from the global model?

jteymoori

Javad Teymoori
Member
This indicates that you are using manage_externals but fates is using git-fleximod. Try removing the .gitmodules file from the fates directory and see if that solves the problem, or go to the CTSM/src/fates directory and manually check out the fates code indicated by the top-level Externals_CLM.cfg file.

There is no file named gitmodules or Externals_CLM.cfg in the fates directory.
How can I remove them?

1736362575690.png
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
The file name has a period in the front of it, i.e., ".gitmodules".
Don't remove Externals_CLM.cfg
 

jteymoori

Javad Teymoori
Member
Hi
I applied all the suggestions to solve the error, including removing ".gitmodules" from the CTSM/src/fates directory, but I am facing the error again.
Do you know how I can solve it?
1737399599125.png
 

slevis

Moderator
Staff member
Given the confusion all around (yours and mine and possibly everyone's), I would recommend @jedwards and @oleson's advice of starting over clean, i.e. checking out a new copy of ctsm5.2.005 in a new directory.
 

slevis

Moderator
Staff member
I expect that from the 10 previous pages of support from mostly @oleson (and a bit from me and @jedwards), plus anything you picked up from CLM's User's Guide and Technote, you can recreate the steps that you need.
 

jteymoori

Javad Teymoori
Member
Dear @slevis and @oleson
Thank you so much for your helpful suggestions; I was able to solve the error related to the CTSM version and manage_externals.
The name of the case I created is newland_clm50_ctsm520_0.05x0.05conus_GSWP3V1_2000.
I created and submitted this case, but it did not run. I checked the run directory, but there are no errors in the lnd.log and atm.log files.
How can I find the errors and fix them so the case runs?
 

wwieder

Will Wieder
Member
Looking at the cesm.log, I see this error:

dec2318.hsn.de.hpc.ucar.edu 4: (shr_strdata_advance) ERROR: for stream 1 and calendar GREGORIAN
dec2318.hsn.de.hpc.ucar.edu 4: (shr_strdata_advance) ERROR: dtime, dtmax, dtmin, dtlimit =
dec2318.hsn.de.hpc.ucar.edu 4: 0.125000000000000 5479.12500000000 0.125000000000000


I agree this isn't enough to determine the issue, but digging a bit into the atm.log file, it seems it was trying to read the following file:

/glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Solar/clmforc.Daymet4.0.05x0.05.Solar.2005-01.nc
An ncdump -h on this file shows the following time metadata:

float time(time) ;
time:_FillValue = NaNf ;
time:standard_name = "time" ;
time:long_name = "time" ;
time:description = "time stamp in UTC, middle of each 3-hour" ;
time:bounds = "time_bnds" ;
time:units = "days since 1950-01-01 00:00:00" ;
time:calendar = "standard" ;


My hunch is that your input data's time metadata needs to be either GREGORIAN or NO_LEAP; these are the only two calendar options that CESM supports. I'd modify your input data accordingly and see if that avoids the issue.
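The check behind that cesm.log abort can be sketched roughly as follows (a toy illustration of the idea only; the function name and dtlimit value here are hypothetical, not the actual shr_strdata code): datm measures the spacing between consecutive forcing time slices and stops when the ratio of the largest to smallest spacing exceeds dtlimit.

```python
# Toy sketch of the stream time-axis consistency check (hypothetical,
# not the actual shr_strdata implementation).

def check_stream_spacing(times_days, dtlimit=1.5):
    """times_days: forcing time stamps in days since some reference.
    Returns (dtmin, dtmax, ok), where ok is False when the ratio of the
    largest to smallest spacing exceeds dtlimit."""
    dts = [b - a for a, b in zip(times_days, times_days[1:])]
    dtmin, dtmax = min(dts), max(dts)
    return dtmin, dtmax, dtmax / dtmin <= dtlimit

# A clean 3-hourly axis: every spacing is 0.125 days, so the ratio is 1.0.
clean = [i * 0.125 for i in range(248)]
print(check_stream_spacing(clean))   # spacing ratio 1.0 -> ok

# An axis with a large gap (e.g. a stream that jumps back to its start):
# dtmax blows up relative to dtmin and trips the limit, which is the
# "dtime, dtmax, dtmin, dtlimit" abort seen in the cesm.log above.
gappy = clean + [clean[-1] + 5479.125]
print(check_stream_spacing(gappy))
```

The 0.125 in the log is exactly a 3-hour step expressed in days, which is why a healthy 3-hourly stream reports dtmax = dtmin = 0.125.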

More broadly, I'd like to ask that you spend a bit more time investigating issues on your own before posting to the bulletin board. I appreciate that running CLM is difficult and that this is likely new to you, but the log files typically provide enough of a thread that can be helpful in diagnosing issues like this.
 

jteymoori

Javad Teymoori
Member
I changed the time calendar of my atmospheric forcing data to "NO_LEAP" and ran the model for one month (Jan 2005), but I hit the same error as before in the cesm.log file in the run directory.
Why am I still getting this error?
Does my atmospheric forcing data have a problem?
I have attached a screenshot of the error.
I ran the model for just one month (Jan 2005); the case name is: Jan_clm50_ctsm520_0.05x0.05conus_GSWP3V1_2000
1738960764114.png
 

wwieder

Will Wieder
Member
It looks like you have issues with your time intervals. For example:

ncdump -h /glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Solar/clmforc.Daymet4.0.05x0.05.Solar.2005-01.nc shows there are 248 time steps in your file.

ncdump -h /glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Solar/clmforc.Daymet4.0.05x0.05.Solar.2005-02.nc ALSO shows there are 248 time steps in your file.

Thus, one issue is that Jan and Feb don't have the same number of days, so their datm files shouldn't have the same number of time steps.

The second issue (which actually seems to be causing the error) is that your time data starts on Jan 15, 2005.


Both of these issues need to be fixed in your datm files.

----

I just opened your dataset in xarray and saw this:

import xarray as xr

ds = xr.open_dataset('/glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Solar/clmforc.Daymet4.0.05x0.05.Solar.2005-01.nc',
                     decode_times=True)
ds.time

<xarray.DataArray 'time' (time: 248)>
array([cftime.DatetimeNoLeap(2005, 1, 15, 1, 30, 0, 0, has_year_zero=True),
       cftime.DatetimeNoLeap(2005, 1, 15, 4, 30, 0, 0, has_year_zero=True),
       cftime.DatetimeNoLeap(2005, 1, 15, 7, 30, 0, 0, has_year_zero=True),
       ...,
       cftime.DatetimeNoLeap(2005, 2, 14, 16, 30, 0, 0, has_year_zero=True),
       cftime.DatetimeNoLeap(2005, 2, 14, 19, 30, 0, 0, has_year_zero=True),
       cftime.DatetimeNoLeap(2005, 2, 14, 22, 30, 0, 0, has_year_zero=True)],
      dtype=object)
Coordinates:
  * time     (time) object 2005-01-15 01:30:00 ... 2005-02-14 22:30:00
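A quick way to sanity-check a monthly 3-hourly forcing file is to compare its time axis against what it should look like. The sketch below builds the expected mid-interval stamps for January 2005 in plain Python (January has 31 days in both the Gregorian and no-leap calendars, so stdlib datetime arithmetic is safe here; in practice you would compare against ds.time from xarray as above):

```python
from datetime import datetime, timedelta

# Expected time axis for a 3-hourly, mid-interval January 2005 file.
start = datetime(2005, 1, 1, 1, 30)    # middle of the first 3-hour interval
step = timedelta(hours=3)
expected = [start + i * step for i in range(31 * 8)]   # 8 slices per day

print(len(expected))    # 248 time steps for January
print(expected[0])      # 2005-01-01 01:30:00
print(expected[-1])     # 2005-01-31 22:30:00

# The file inspected above also has 248 steps, but it starts at
# 2005-01-15 01:30 and runs into February: right length, wrong window.
```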

 

jteymoori

Javad Teymoori
Member
Thank you for your response.
I fixed the issue for Jan 2005; the data now starts on Jan 1st, 2005, as you can see below:
/glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Solar_Jan/clmforc.Daymet4.0.05x0.05.Solar.2005-01.nc
/glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/Precip_Jan/clmforc.Daymet4.0.05x0.05.Prec.2005-01.nc
/glade/derecho/scratch/jteymoori/Daymet4/Data/atm/datm7/TPHWL_Jan/clmforc.Daymet4.0.05x0.05.TPQWL.2005-01.nc
I only tried to submit the model for one month (Jan 2005), and the time calendar for these data is "NO_LEAP," but after submitting, I hit the same error in the cesm.log file in the run directory.

How can I solve this error?
Do you think my data are incorrect?

1738976033328.png
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
In your datm.streams.xml I see:

<year_first>1991</year_first>
<year_last>2010</year_last>
<year_align>1991</year_align>

These years need to match the available data you have (which currently is year 2005).
So, to fix that:

./xmlchange DATM_YR_START=2005
./xmlchange DATM_YR_END=2005

Furthermore, since you have just one month of data right now (2005-01), you are going to get a dtlimit error even with the fix I suggested.
To fix that, you would need to set taxmode to "extend" the data instead of "cycle" (which you have right now). Otherwise, datm will try to use the last time slice of 2005-01.nc for interpolation.
So, you need to add the following to your user_nl_datm_streams:

CLMGSWP3v1.Solar:taxmode=extend
CLMGSWP3v1.Solar:dtlimit=1.e30

CLMGSWP3v1.Precip:taxmode=extend
CLMGSWP3v1.Precip:dtlimit=1.e30

CLMGSWP3v1.TPQW:taxmode=extend
CLMGSWP3v1.TPQW:dtlimit=1.e30

It's possible just setting dtlimit to 1.e30 alone would fix it, but I haven't tested that.

Once you have a full year of forcing data, you won't need to do that second fix.

With these fixes, I was able to run a clone of your case for one month:

/glade/work/oleson/ctsm_runs/Only_Jan_clm50_ctsm520_0.05x0.05conus_GSWP3V1_2000
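The difference between taxmode "cycle" and "extend" can be illustrated with a toy slice lookup (a hypothetical sketch, not the actual datm code; the function name is invented): with "cycle", a model time past the end of the stream wraps back toward the start of the data, while with "extend" the last available slice is simply reused.

```python
# Toy sketch of how a forcing stream maps a requested model time to a
# data slice under the two taxmode settings (hypothetical, not the
# actual datm implementation). Times are in days.

def pick_slice(times, t, taxmode="cycle"):
    """Index of the stream time slice used at model time t (t >= times[0])."""
    if t > times[-1]:
        if taxmode == "extend":
            t = times[-1]                            # clamp: reuse last slice
        else:  # "cycle"
            period = times[-1] - times[0]
            t = times[0] + (t - times[0]) % period   # wrap into the stream
    # Nearest earlier (or equal) slice.
    return max(i for i, tt in enumerate(times) if tt <= t)

jan = [i * 0.125 for i in range(248)]   # one month of 3-hourly slices

# Asking for a time in February (day 35):
print(pick_slice(jan, 35.0, "extend"))  # 247: the last January slice is reused
print(pick_slice(jan, 35.0, "cycle"))   # wraps back to an early-January slice
```

With only one month in the stream, "cycle" pairs a February model time with a January slice, producing the huge dtime that trips the dtlimit check; "extend" (with dtlimit relaxed) avoids that.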
 

jteymoori

Javad Teymoori
Member
Dear all,
I have run the CTSM with my own atmospheric forcing data (Daymet4), but I have a problem:
  1. Both the CTSM model outputs and the Daymet4 data are in UTC. As you can see in the pictures, for a specific region in the central US (lat: 39-40, lon: 270-271) from January 1 to January 2, 2005, for the variable FSDS (total incident solar radiation), the positive values in Daymet4 start at 13:30 UTC, while in the CESM outputs they start at 16:00 UTC. This means there is no solar radiation until 16:00 UTC (approximately 12:00 local time). What do you think went wrong? Also, why do the model outputs and forcing data differ in time?
  2. In the CESM outputs, after peaking at 23:30, FSDS suddenly falls to zero at 00:00, but this is not the case in Daymet, where the decrease is gradual. It seems that in the model output FSDS is forced to zero after midnight. I could not determine what went wrong.


Daymet4-FSDS.png CTSM-FSDS.png
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
It looks like there is an offset between the forcing data and the model, i.e., the forcing data and model times are not lined up properly. Per the User's Guide:

"In CLMGSWP3v1 mode the GSWP3 NCEP forcing dataset is used and all of it’s data is on a 3-hourly interval. Like CLM_QIAN the dataset is divided into those three data streams: solar, precipitation, and everything else (temperature, pressure, humidity, Long-Wave down and wind). The time-stamps of the data were also adjusted so that they are the beginning of the interval for solar, and the middle for the other two. Because, of this the offset is set to zero, and the tintalgo is: coszen, nearest, and linear for the solar, precipitation and other data respectively."

Your solar forcing data has:

time:units = "hours since 2005-01-01 01:30:00.000000" ;

So your data is set to the middle of the interval for solar. Perhaps changing it to the beginning of the interval will help alleviate the problem.
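Shifting the solar stamps from mid-interval to beginning-of-interval means moving them back by half an interval (90 minutes for 3-hourly data). A minimal pure-Python sketch of that shift, assuming the January axis shown earlier (in practice you would modify the time variable with xarray or NCO rather than by hand):

```python
from datetime import datetime, timedelta

# Shift 3-hourly mid-interval solar time stamps (01:30, 04:30, ...)
# back by half an interval so they mark the beginning of each
# interval (00:00, 03:00, ...).
half_interval = timedelta(hours=1, minutes=30)

mid = [datetime(2005, 1, 1, 1, 30) + i * timedelta(hours=3) for i in range(248)]
begin = [t - half_interval for t in mid]

print(begin[0])     # 2005-01-01 00:00:00
print(begin[-1])    # 2005-01-31 21:00:00
```

Equivalently, changing the units string from "hours since 2005-01-01 01:30:00" to "hours since 2005-01-01 00:00:00" while keeping the offsets would have the same effect.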
 