
How to run with multiple forcing files (one per year) in data_table

Spencer Jones
I am running OM4_025.JRA (the exact run from the MOM6-examples repo) and would like to extend beyond 1958. I know that I need to download the JRA data, pad the files using adcroft/pad_JRA (a Makefile to create "padded" versions of the JRA-55-do forcing files), and regrid friver and licalvf onto the model grid using adcroft/regrid_runoff (conservatively re-grids runoff data to the nearest coastal wet-point of a MOM6 ocean grid). But how do I format the data_table so it knows to use multiple files (one for each year of the run)?

Thanks for your continued patience and help!
 

Raphael Dussin
I create a template data_table with <YEAR> in place of the year in the filenames, and my script updates it.
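For example, a template entry could look roughly like the sketch below. The field names and file paths here are only placeholders; keep the entries from your existing OM4_025.JRA data_table and just replace the year in each filename with <YEAR>.

# Illustrative data_table.template line (FMS data_override columns:
# component, field name in the model, field name in the file,
# file path, interpolation method, scale factor):
"ATM", "p_surf", "psl", "./INPUT/psl.<YEAR>.nc", "bilinear", 1.0

# The job script then expands the placeholder for the year it is running, e.g.:
sed -e "s/<YEAR>/1959/g" data_table.template > data_table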
This might be useful:

cat mom.sub


#!/bin/bash
#SBATCH -n 252
#SBATCH -J OM4p5_bp
#SBATCH --error=OM4p5.err
#SBATCH --output=OM4p5.out
#SBATCH --exclusive
#SBATCH --time=10:00:00
#SBATCH --qos=urgent
#SBATCH --partition=batch
#SBATCH --clusters=c4
## obviously use your group account:
#SBATCH --account=gfdl_o

njobs=40

#--------------------------------- system settings ---------------------------------
module load cray-netcdf

export NC_BLKSZ=1M
ulimit -s unlimited

# setup the run directory
if [ ! -d RESTART ] ; then mkdir RESTART ; fi
if [ ! -d outputs_raw ] ; then mkdir outputs_raw ; fi
if [ ! -d restarts_raw ] ; then mkdir restarts_raw ; fi
if [ ! -d logs ] ; then mkdir logs ; fi

#--------------------------------- prepare input files ---------------------------

ctrldir=$( pwd )
subscript=mom.sub

if [ ! -f jobscompleted ] ; then touch jobscompleted ; fi

lastjob=$( tail -1 jobscompleted )
thisjob=$(( $lastjob + 1 )) # if file empty, takes job number one

echo 'starting job #' $thisjob

# at first job, replace restart by cold start
if [[ $thisjob == 1 ]] ; then
sed -i -e "s/input_filename = 'r'/input_filename = 'n'/g" input.nml
# at second job, replace init by restart
elif [[ $thisjob == 2 ]] ; then
sed -i -e "s/input_filename = 'n'/input_filename = 'r'/g" input.nml
fi

# grep the first year of the run
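# (current_date in input.nml is year,month,day,hour,min,sec; with the commas
#  replaced by spaces, the year is the third whitespace-separated field)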
yearbeg=$( grep "current_date" input.nml | sed -e "s/,/ /g" | awk '{print $3}' )
thisyear=$(( $yearbeg + $thisjob -1 ))

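# build this segment's data_table by substituting the year into every <YEAR> placeholder in the template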
cat data_table.template | sed -e "s/<YEAR>/$thisyear/g" > data_table

#--------------------------------- run the model -----------------------------------

srun --cpu_bind=rank -n 252 ./MOM6

#--------------------------------- check status of run -----------------------------

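# a segment that finished cleanly prints a 'Total runtime' line near the end of its stdout;
# if that line is missing, treat the run as failed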
runok=$( tail -200 OM4p5.out | grep -i "Total runtime" )
if [[ $runok != '' ]] ; then
# move outputs
mv *.nc ./outputs_raw/.
mv *.nc.???? ./outputs_raw/.
# initiate transfer
#sbatch transfer.sub $thisyear
# save restarts and move to input
tar -cvf restarts.$thisjob RESTART/*
mv restarts.$thisjob ./restarts_raw
mv RESTART/* INPUT/.
# move logs
tar -cvf logs.tar.$thisjob MOM_parameter_doc.* SIS_parameter_doc.* OM4p5.err OM4p5.out ocean.stats* logfile.000000.out available_diags.000000 seaice.stats SIS.available_diags SIS_fast.available_diags ocean_stats*
mv logs.tar.$thisjob ./logs/.

# notify completion
echo $thisjob >> $ctrldir/jobscompleted
# test for resubmission
if (( $thisjob < $njobs )) ; then
cd $ctrldir ; sbatch ./$subscript
else
# final job
echo this is the last job
fi
else
# run blew up
echo "run blew up"
exit 1
fi
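To start the chain, submit the script once with sbatch mom.sub. Each segment that finishes cleanly appends its job number to jobscompleted and resubmits the script, so the next segment picks up the following year and regenerates the data_table from the template, until njobs segments have run.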
 