CLM4 (& datm) dtlimit error for single-point simulation with my own atm forcing

Dear all.

I'm working on a single-point simulation with CLM4. The test case for Vancouver works on my Linux machine, so I am trying to run a simulation for a site of my interest (in the Amazon). I made atm forcing data similar to the data used in the Vancouver example.

At run time, I get this error message:

(datm_comp_run) atm: model date 10101 0s
(datm_comp_init) datm_comp_init done
(datm_comp_run) atm: model date 10101 0s
(datm_comp_init) datm_comp_init done
(datm_comp_run) atm: model date 10101 1800s
(datm_comp_run) atm: model date 10101 3600s
(shr_dmodel_readLBUB) reading file: /home/sakaguch/cesm1_0/inputdata/atm/datm7/CLM1PT_data/1x1pt_f19_TapajosKM67/2002-01.nc 2
(shr_strdata_advance) ERROR: dt limit1 8.331018518518518E-002 4.166666666666666E-002 1.50000000000000
(shr_strdata_advance) ERROR: dt limit2 10101 3600 10101 7200
(shr_sys_abort) ERROR: (shr_strdata_advance) ERROR dt limit
(shr_sys_abort) WARNING: calling shr_mpi_abort() and stopping


The time step for CLM is the default (same as in the Vancouver case, 1800 s), and the time vectors in my atm forcing and in the Vancouver example are almost the same, at one-hour resolution:

Vancouver: time = 0, 0.04166667, 0.08333334, 0.125, 0.1666667, 0.2083333, 0.25,
0.2916667, 0.3333333, 0.375, 0.4166666, 0.4583333, 0.4999999, 0.5416666,
0.5833333, 0.625, 0.6666667, 0.7083334, 0.7500001, 0.7916667, 0.8333334, ...

my forcing data: time = 0, 0.04166667, 0.08333334, 0.125, 0.1666667, 0.2083333, 0.25,
0.2916667, 0.3333333, 0.375, 0.4166667, 0.4583333, 0.5, 0.5416667,
0.5833334, 0.625, 0.6666667, 0.7083334, 0.75, 0.7916667, 0.8333334, ...

and the attributes of the time variable are also very similar:

Vancouver:
float time(time) ;
time:long_name = "observation time" ;
time:units = " days since 1992-08-12 20:00:00" ;
time:calendar = "noleap" ;
my data:
float time(time) ;
time:_FillValue = -999.f ;
time:calendar = "noleap" ;
time:units = "days since 2002-01-01 01:00:00" ;
time:long_name = "observation time" ;


I could not quite understand what the shr_strdata_advance routine is doing there (in $CESMroot/models/csm_share/shr/shr_strdata_mod.F90).
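
A rough sketch of what I think the check amounts to (not the actual datm code; the file name and dtlimit value below are just placeholders):

# Rough approximation of the datm "dt limit" check on one forcing file's time axis.
# Not the real shr_strdata_advance logic - just my reading of the error message.
import numpy as np
from netCDF4 import Dataset

path = "2002-01.nc"   # placeholder forcing file
dtlimit = 1.5         # the limit printed in the error output

with Dataset(path) as nc:
    time = np.asarray(nc.variables["time"][:], dtype=float)  # days since the base date

dt = np.diff(time)            # spacing between records, in days
ratio = dt.max() / dt.min()   # compare with the first two numbers printed as "dt limit1"
print("min dt =", dt.min(), "days, max dt =", dt.max(), "days, ratio =", ratio)
if ratio > dtlimit:
    print("this would trip the dt limit check (ratio > dtlimit)")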

Please let me know if anyone has suggestions.

Thank you,

Koichi
 

erik

Erik Kluzek
CSEG and Liaisons
Staff member
Koichi

The problem is that there is limited atmospheric forcing data for this site and, worse, it's less than a year, so you can only run over a limited number of days. If you look in the models/lnd/clm/doc/KnownLimitations file you'll see some documentation on this issue. Basically, once the data model gets to the end of the data it loops back to the beginning, but it registers the time difference back to that first date -- hence it appears to have a big jump in time compared to the spacing in the rest of the data.

You can look at the data files to figure out what the valid data range is, or just run over a very short period of a few days.
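
If it helps, here is a quick sketch of one way to print each forcing file's time coverage so you can see the valid range (the directory name is a placeholder for your own forcing directory; this is just a convenience script, not part of CLM):

# Print the time coverage of each monthly CLM1PT forcing file.
import glob
from netCDF4 import Dataset, num2date

for path in sorted(glob.glob("1x1pt_f19_TapajosKM67/*.nc")):  # placeholder directory
    with Dataset(path) as nc:
        t = nc.variables["time"]
        cal = getattr(t, "calendar", "standard")
        vals = t[:]                                   # e.g. days since the file's base date
        first, last = num2date([vals[0], vals[-1]], t.units, calendar=cal)
    print(path, ":", first, "..", last, "(", len(vals), "records )")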

I'm in the process of adding instructions on how to set the time range to run over. Here are the instructions I've got right now...

# Set the start date
> setenv RUN_STARTDATE `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist default_settings -silent -var run_startdate -justvalue`
> @ START_YEAR = $RUN_STARTDATE / 10000
> ./xmlchange -file env_conf.xml -id RUN_STARTDATE -val $RUN_STARTDATE
# Set the run length and start time of day
> ./xmlchange -file env_run.xml -id STOP_OPTION -val `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist seq_timemgr_inparm -silent -var stop_option -justvalue`
> ./xmlchange -file env_run.xml -id STOP_N -val `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist seq_timemgr_inparm -silent -var stop_n -justvalue`
> ./xmlchange -file env_run.xml -id START_TOD -val `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist seq_timemgr_inparm -silent -var start_tod -justvalue`
# Set datm start and end range...
> ./xmlchange -file env_conf.xml -id DATM_CLMNCEP_YR_START -val `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist default_settings -silent -var datm_cycle_beg_year -justvalue`
> ./xmlchange -file env_conf.xml -id DATM_CLMNCEP_YR_END -val `../../models/lnd/clm/bld/queryDefaultNamelist.pl -res 1x1_vancouverCAN -namelist default_settings -silent -var datm_cycle_end_year -justvalue`
# Set the align year to the start year as defined above
> ./xmlchange -file env_conf.xml -id DATM_CLMNCEP_YR_ALIGN -val $START_YEAR

If you try this -- let me know if it works or not.

Thanks -- good luck!
 
Thanks Erik!

We are trying to apply your suggestion to my own test case for the Amazon. I found that the starting time of our simulation and of the atm forcing were offset by one hour, so I will set them to be the same. I'm also looking at shr_stream_mod.F90 and shr_strdata_mod.F90 - not quite sure whether the dtime (or lower bound) calculation is done correctly there.

I will post again when I get the result.

Best,

Koichi
 
Hi Erik,

By setting the start year, date, and time of the simulation and the atm forcing to match (as you showed in your previous post), I can run for one month (say, January) without the error above. This is a case with my own atm forcing, run from the CESM scripts (CESM 1.0.2).

But when it gets to February, I get this error:

(shr_dmodel_readLBUB) reading file: /home/sakaguch/cesm1_0/inputdata/atm/datm7/CLM1PT_data/1x1pt_f19_TapajosKM67/2002-01.nc 744
(datm_comp_run) atm: model date 10131 82800s
(datm_comp_run) atm: model date 10131 84600s
(datm_comp_run) atm: model date 10201 0s
(shr_stream_verifyTCoord) ERROR: elapsed seconds on a date must be strickly increasing
(shr_stream_verifyTCoord) secs(n), secs(n+1) = 00000000 00000000
(shr_sys_abort) ERROR: (shr_stream_verifyTCoord) ERROR: elapsed seconds must be increasing
(shr_sys_abort) WARNING: calling shr_mpi_abort() and stopping


The time vector starts from zero in each month of my atm forcing data, just like the Qian data. However, I do not have this problem when I run the same single-point case (same domain and land forcing) with the Qian data.

I can work around this issue by making the time vector continuous and monotonically increasing across all the atm forcing files, but that is not very convenient. Besides, since the Qian data works with the time vector resetting each month, I'd like to figure out why the same does not work with my own atm forcing.

I'm looking at the log files, and next the namelist files, to find any differences in settings/options between the Qian case and my own atm forcing case. If you know anything about this, please let me know.


Thank you,

Koichi
 
The previous issue at the end of the month was simply a problem in my time vector values. The very last value of the time vector in January was 31.04167, which corresponds to the same time as the first value in February (0.04167), hence the error "(shr_stream_verifyTCoord) ERROR: elapsed seconds on a date must be strickly increasing".
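
A rough sketch of a check that would catch this kind of boundary problem (the directory name is a placeholder; this is just a diagnostic idea, not anything from CESM) - it converts each file's time axis to absolute dates and verifies the sequence keeps strictly increasing across the monthly file boundaries:

# Check that timestamps strictly increase across monthly forcing files,
# i.e. no file's last record repeats or overlaps the next file's first record.
import glob
from netCDF4 import Dataset, num2date

prev = None
for path in sorted(glob.glob("1x1pt_f19_TapajosKM67/*.nc")):  # placeholder directory
    with Dataset(path) as nc:
        t = nc.variables["time"]
        cal = getattr(t, "calendar", "standard")
        dates = num2date(t[:], t.units, calendar=cal)
    if prev is not None and dates[0] <= prev:
        print("non-increasing time at start of", path, ":", prev, "->", dates[0])
    prev = dates[-1]
print("check complete")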

Now I can run for a year, but I am getting the same dtime issue at the end of the year:

(shr_dmodel_readLBUB) reading file: /home/sakaguch/cesm1_0/inputdata/atm/datm7/CLM1PT_data/1x1pt_f19_TapajosKM67/2002-01.nc 1
(shr_strdata_advance) ERROR: dt limit1 8.333333333333337E-002 4.166666666666663E-002 1.50000000000000
(shr_strdata_advance) ERROR: dt limit2 11231 82800 20101 3600
(shr_sys_abort) ERROR: (shr_strdata_advance) ERROR dt limit
(shr_sys_abort) WARNING: calling shr_mpi_abort() and stopping


This is exactly the case Erik explained in his first reply. Here I'm cycling the same 12 months of atm forcing data. Now I am stuck....


Koichi
 

erik

Erik Kluzek
CSEG and Liaisons
Staff member
Koichi

We think we have a fix for you to try.

Here's a change made to the datm namelist...


In your case's Buildconf/datm.buildnml.csh file,

add the following options for tintalgo and dtlimit to the shr_strdata_nml namelist...

&shr_strdata_nml
.
.
.
tintalgo = 'nearest','linear'
dtlimit = 25000.,1.5
/

tintalgo is the time-interpolation algorithm to use; dtlimit is the maximum allowed ratio of a time difference to the previous time difference. This sets it up to use the closest point in time and hold at that value, so it would NOT be appropriate if you have a big gap in your data.
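
As a rough illustration of those numbers (just a back-of-the-envelope reading of the "dt limit1" line above, not the datm source):

# The "dt limit1" line appears to print the largest and smallest data time
# intervals (in days) followed by dtlimit; the abort fires when their ratio
# exceeds dtlimit.
dt_max = 8.331018518518518e-02   # ~2 hours, the big interval seen at the boundary
dt_min = 4.166666666666666e-02   # 1 hour, the normal forcing interval
dtlimit = 1.5                    # the limit printed for this stream

print(dt_max / dt_min)           # ~2.0, which is > 1.5, so datm aborts
print(dt_max / dt_min > dtlimit) # True; dtlimit = 25000. effectively disables the check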

Let us know if it works.
 
Hi Erik,

Thank you for your continuous help.
I tried your fix, and it worked!

I'm still trying to understand how the data model calculates the lower and upper bounds, so that I really understand what I should be careful about with this fix (per your caution about data with a long missing period).

But anyway, thank you again for your help!

Koichi
 

akhtert

Tanjila Akhter
New Member
Hello, I am running CLM5 with the aquifer and ERA5 forcing for 2000-2020. The model ran from 2000-2012; however, it stops at 2013 with a similar error. Below are the last few lines of the atm log file.

(shr_dmodel_readstrm) file ub: /glade/campaign/univ/umsu0014/ERA5/Precip_re/ERA5_Precip.2013-10.nc 248
(shr_dmodel_readstrm) file ub: /glade/campaign/univ/umsu0014/ERA5/TPQWL_re/ERA5_TPQWL.2013-10.nc 248
(datm_comp_run) atm: model date 20131031 72000s
(datm_comp_run) atm: model date 20131031 73800s
(datm_comp_run) atm: model date 20131031 75600s
(shr_dmodel_readstrm) close : /glade/campaign/univ/umsu0014/ERA5/Solar_re/ERA5_solar.2013-10.nc
(shr_dmodel_readstrm) open : /glade/campaign/univ/umsu0014/ERA5/Solar_re/ERA5_solar.2013-11.nc
(shr_dmodel_readstrm) file ub: /glade/campaign/univ/umsu0014/ERA5/Solar_re/ERA5_solar.2013-11.nc 1
(datm_comp_run) atm: model date 20131031 77400s
(datm_comp_run) atm: model date 20131031 79200s
(datm_comp_run) atm: model date 20131031 81000s
(shr_dmodel_readstrm) close : /glade/campaign/univ/umsu0014/ERA5/Precip_re/ERA5_Precip.2013-10.nc
(shr_dmodel_readstrm) open : /glade/campaign/univ/umsu0014/ERA5/Precip_re/ERA5_Precip.2013-11.nc
(shr_dmodel_readstrm) file ub: /glade/campaign/univ/umsu0014/ERA5/Precip_re/ERA5_Precip.2013-11.nc 1
(shr_stream_verifyTCoord) ERROR: elapsed seconds on a date must be strickly increasing
(shr_stream_verifyTCoord) secs(n), secs(n+1) = 00000000 00000000
ERROR: (shr_stream_verifyTCoord) ERROR: elapsed seconds must be increasing

I tried to set
tintalgo = 'nearest','linear'
dtlimit = 25000.,1.5
in shr_strdata_nml, which I could find in the Buildconf/datmconf/datm_in file. I copied the datm_in file into the case directory with user_ as a prefix and put in the suggested values, but it did not work: the file changes back to the defaults after I submit the run.
I am also attaching the atm log file here. I would appreciate any help or suggestions regarding the error. Thank you.
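
For what it's worth, a hedged sketch of how such namelist changes are usually applied in CIME-based cases (assuming dtlimit and tintalgo are accepted in user_nl_datm for this model version): the edits go in user_nl_datm in the case directory rather than in Buildconf/datmconf/datm_in, which is regenerated when the case is submitted. The values are per-stream, in the same order the streams appear in datm_in, so the number of entries below is only illustrative:

! user_nl_datm (in the case directory) - illustrative sketch only;
! adjust the number of entries to match the streams listed in datm_in
tintalgo = 'nearest','linear'
dtlimit = 25000.,1.5

Running ./preview_namelists afterwards should show whether the values were picked up in Buildconf/datmconf/datm_in.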
 

akhtert

Tanjila Akhter
New Member
It seems the file did not upload in my last comment.

Also, I tried opening a new case and running from 2013 using the default GSWP forcing, and the model ran. In the same case, I changed the forcing to ERA5 and it gets the same error. I checked the timestamps of the 2013 GSWP data and ERA5; both have the same timestamps for October and November 2013, and for the rest of the dates as well.
 

akhtert

Tanjila Akhter
New Member
I am really sorry; the atm log file still did not upload last time.
 

Attachments

  • atm.log.7326062.desched1.241226-161338.gz (78.6 KB)