
Initialization of CAM6 in the T31 dyncore

pdinezio

Pedro DiNezio
New Member
I'm trying to do a run with CAM6 using the T31 dyncore. There are no standard cami files for this dynamical core under "/glade/p/cesmdata/cseg/inputdata/atm/cam/inic/", but I see that there might be an option to initialize by setting the "ncdata" namelist variable to:
atm/cam/inic/cam_vcoords_L32_c180105.nc for: {'analytic_ic': '1', 'nlev': '32'}.
Is this possible?
 

islas

Member
Hi Pedro, I don't think it's going to work to use the analytic_ic. That's for Held-Suarez to start from an isothermal atmosphere that is at rest. Held-Suarez is the only physics option that is supported with the Eulerian dynamical core at this point. To use CAM6 physics with T31 you'll need to have an ic file containing all the fields that are needed. I'm afraid I don't have any experience with that, but I imagine you could attempt to interpolate all the fields from the ic for the f09_f09 grid onto the T31 grid. I have no idea whether that will actually work. Hopefully someone else has the answer.
 

jet

Member
Hi Pedro and Isla (thanks for the cc):

If Pedro would like to try an unsupported EUL option, here is the procedure for setting up with an analytic us_std_atmosphere start. What you will find, though, is that other boundary datasets will be needed at T31 to run the model, and generating all of those datasets is painful. If you are interested in proceeding I can point you in the right direction. Here is the procedure for activating the unsupported analytic start for Eul.

create your case
cd casename
./case.setup
./xmlchange --append CAM_CONFIG_OPTS='-analytic_ic'
edit user_nl_cam and add:
ncdata = 'cam_vcoords_L32_c180105.nc'
analytic_ic_type = 'us_standard_atmosphere'
./case.build
./case.submit
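
For concreteness, an end-to-end sketch of those commands might look like the following. The case name and compset are placeholders (I have not tested this exact combination), and a T31 CAM6 case will almost certainly need --run-unsupported:

./create_newcase --case my_T31_case --res T31_g37 --compset <your_compset> --run-unsupported
cd my_T31_case
./case.setup
./xmlchange --append CAM_CONFIG_OPTS='-analytic_ic'
# add ncdata and analytic_ic_type to user_nl_cam as above
./case.build
./case.submit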

In theory, this might run, but it has never been tried with the Eul dycore and, as Isla stated, is unsupported, so YMMV. If it does start up, it's possible that the model will blow up after a few timesteps due to the shock of starting from an analytic atmospheric state. In that case, you could try adding the atm_in namelist option
divdampn = 2
in user_nl_cam and resubmitting. This might damp out the gravity waves from the initial shock.

divdampn
Number of days (from timestep 0) to run the divergence damper. Use only if the spectral
model becomes dynamically unstable during initialization. Suggested value: 2.
(Value must be >= 0.)
Default: 0.

If you're lucky and the model runs, you can set the namelist to write out an initial condition file after running for a while, which gives you a more balanced state to start from. If you are starting from a balanced IC you shouldn't need to keep the divdampn setting, even if you needed it for the analytic start. Note that divdampn is only available with the Eulerian dycore.
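
If it helps, the CAM namelist variable that controls writing of initial-condition files is inithist; a minimal user_nl_cam addition (pick whichever output frequency suits your run) would be something like:

inithist = 'MONTHLY'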

jt


 

pdinezio

Pedro DiNezio
New Member
I followed the steps, but I get the following error:
ERROR: in validate_variable_value (package Build::Namelist): Variable name analytic_ic_type has values that does NOT match any of the valid values: 'none' 'held_suarez_1994' 'moist_baroclinic_wave_dcmip2016' 'dry_baroclinic_wave_dcmip2016' 'dry_baroclinic_wave_jw2006'.
 

jet

Member
Hmmmm. I do see some changes in the latest analytic-IC code that are not part of the CESM2.2 release. It could be that the standard-atmosphere analytic state wasn't working properly, or hadn't been vetted, by the time of the release. It should be part of the next release, but I'm not sure when that will be. The error comes from us_standard_atmosphere not being listed as one of the valid values of analytic_ic_type in the file namelist_definition.xml. You could add it to the file yourself and get past the initial check, but if there are other issues or incompatibilities you are on your own to get things working.
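
For reference, that check lives in the valid_values list of the analytic_ic_type entry in namelist_definition.xml (typically under bld/namelist_files/ in the CAM source tree). The exact markup and attributes vary by CAM version, so the snippet below is only a rough illustration of the edit, appending the new value to the existing list:

<entry id="analytic_ic_type" ...
       valid_values="none,held_suarez_1994,moist_baroclinic_wave_dcmip2016,dry_baroclinic_wave_dcmip2016,dry_baroclinic_wave_jw2006,us_standard_atmosphere">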

Another approach would be to interpolate an existing initial condition file to T31. You could try this if you have access to Cheyenne and Glade. I don't think these modules are available on Casper.

module load gnu
module load esmf_libs
module load esmf-8.0.0-ncdfio-mpi-O
module load nco/4.7.9

set srcgrid=T42
set dstgrid=T31
set srcgridfile=/glade/p/cesmdata/inputdata/share/scripgrids/T42_001005.nc
set dstgridfile=/glade/p/cesmdata/inputdata/share/scripgrids/T31_040122.nc
set srcinitfile=/glade/p/cesmdata/inputdata/atm/cam/inic/gaus/cami_0000-01-01_64x128_L32_c170510.nc
set dstinitfile=/glade/scratch/$USER/cami_0000-01-01_64x128_L32_c170510.regrid.T31.nc
cd /glade/scratch/$USER

#create the map file
ESMF_RegridWeightGen --ignore_unmapped -m conserve -w map_${srcgrid}_to_${dstgrid}_aave.nc -s ${srcgridfile} -d ${dstgridfile}

#use the mapfile to remap srcinitfile to dstinitfile
ncremap -m ./map_${srcgrid}_to_${dstgrid}_aave.nc-i ${srcinitfile} -o ${dstinitfile}


You can use the same procedure to remap other atmosphere boundary data, such as the surface deposition files, that you may also need. Sometimes I interpolate from the following FV atmsrf file to generate one for a new destination grid:

/glade/p/cesmdata/inputdata/atm/cam/chem/trop_mam/atmsrf_0.23x0.31_181018.nc

In this case you would modify srcgrid, srcgridfile, and srcinitfile to correspond to the new source file, give dstinitfile a reasonable name, and then create a new map and call ncremap.
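
As a rough example of what those reassignments could look like for the atmsrf case (the source SCRIP grid file and the output name below are placeholders; you would need to substitute the actual SCRIP file for the 0.23x0.31 FV grid):

set srcgrid=fv0.23x0.31
set dstgrid=T31
set srcgridfile=<SCRIP grid file for the 0.23x0.31 FV grid>
set dstgridfile=/glade/p/cesmdata/inputdata/share/scripgrids/T31_040122.nc
set srcinitfile=/glade/p/cesmdata/inputdata/atm/cam/chem/trop_mam/atmsrf_0.23x0.31_181018.nc
set dstinitfile=/glade/scratch/$USER/atmsrf_T31.nc

Then regenerate the map with ESMF_RegridWeightGen and run ncremap exactly as above.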




 

jet

Member
The ncremap line above is missing a space before the -i option. It should read:
ncremap -m ./map_${srcgrid}_to_${dstgrid}_aave.nc -i ${srcinitfile} -o ${dstinitfile}
 

pdinezio

Pedro DiNezio
New Member
Hi John, thank you so much, it worked, although I bumped into another error, so I still don't know whether CAM will blow up with the new initial conditions.
 

jet

Member
Since you seemed to have a recent run under /glade/scratch I took a look at it (/glade/scratch/pdinezio/b.e21.T31_g37.Carib.001/run/cpl.log.9198830.chadmin1.ib0.cheyenne.ucar.edu.210630-162509)

Looks like your domain areas are a little different between atm and lnd. I recently found a bug in gen_domain where some of the real(r8) parameters, like pi, are declared like this:
real(r8),parameter :: pi = 3.14159265358979323846
which will give only single precision for pi. To get all the digits it needs to be declared as
real(r8),parameter :: pi = 3.14159265358979323846_r8
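
As a standalone illustration of that precision loss (this is not code from gen_domain itself, just a small test program), compiling and running something like the following shows the difference:

program pi_precision
  implicit none
  ! double-precision kind, analogous to SHR_KIND_R8 in CESM's shr_kind_mod
  integer, parameter :: r8 = selected_real_kind(12)
  ! a literal without a kind suffix is a default (single precision) constant,
  ! so the extra digits are lost before the assignment happens
  real(r8), parameter :: pi_bad  = 3.14159265358979323846
  ! the _r8 suffix keeps the full double-precision value
  real(r8), parameter :: pi_good = 3.14159265358979323846_r8
  print *, 'without _r8: ', pi_bad
  print *, 'with    _r8: ', pi_good
end program pi_precision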

If you created domain files using that program, that could be where your single-precision accuracy errors are coming from. In any case, the error is close to the tolerance, so you could just bump the tolerance up by a factor of 10 and see if the model runs. The EPS tolerances are part of env_run.xml in your case directory.

In your case directory, you can execute
./xmlchange EPS_AAREA="9.0e-06"
and rerun to see if it gets further.
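
If you want to see what EPS_AAREA is currently set to before changing it, you can query it from the case directory:

./xmlquery EPS_AAREA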
 

pdinezio

Pedro DiNezio
New Member
John, thank you so much for your message. I had always wondered why my customized domain/mapping files were giving those types of errors. The model is running.
 