Cam5-som

bing.pu@...

Hi all,

I'm a new user of CAM5 and I have a few questions about running it with the slab ocean model:

1. Can CAM5 (standalone) run with the mixed layer ocean model (DOCN-SOM), or must it be run under the CESM framework (e.g., with an "E" compset)?
2. If CAM can run standalone with SOM, what changes do I need to make when configuring CAM?
3. Is pop_frc.1x1d.090130.nc the right input data for the slab ocean model? In the CAM namelist, what should I set 'bndtvs' to?

I've been struggling with these questions for a few days. Has anyone run CAM5-SOM before? Any suggestions? Thanks in advance!

eaton

The CAM standalone scripts don't support running in SOM mode. The only supported way to do this is to use the CESM scripts with an "E" compset.
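
In outline, setting up such a case with the CESM1.0 scripts looks something like this (the case name, resolution, compset, and machine below are placeholders; check what is available on your system before using them):
===============================
# Sketch of an E-compset (CAM + slab ocean) case; all names are illustrative.
cd scripts
./create_newcase -case ~/cases/e_som_test -res f19_g16 -compset E_2000 -mach mymachine
cd ~/cases/e_som_test
./configure -case
./e_som_test.mymachine.build
./e_som_test.mymachine.run     # or submit it through your batch system
===============================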

bing.pu@...

Thank you very much, eaton!

I have started running CESM1.0.2 with an "E" compset, and I have a problem with restart files.

Whether I set the restart options in env_run.xml (REST_DATE, REST_N, and REST_OPTION) to specific values, e.g., every 6 months, or simply use the default values (REST_DATE=$STOP_DATE, REST_OPTION=$STOP_OPTION), the simulation stops right after producing one restart file, XXX.docn.r.yyyy-mm-dd-hhmmss.nc, and the history files for the month before the restart month are missing (history files are written out every month by default).
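
For reference, this is roughly the kind of change I mean, made with xmlchange from the case directory (the values here are just an illustration of asking for a restart every 6 months):
===============================
# REST_OPTION and REST_N control how often restart files are written (illustrative values).
./xmlchange -file env_run.xml -id REST_OPTION -val nmonths
./xmlchange -file env_run.xml -id REST_N -val 6
===============================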

The error message in the ccsm.log file is:
===========================
[cli_9]: aborting job:
Fatal error in MPI_Info_free: Invalid argument, error stack:
MPI_Info_free(102): MPI_Info_free(info=0x73109fc) failed
MPI_Info_free(62).: Invalid MPI_Info
[cli_10]: aborting job:
Fatal error in MPI_Info_free: Invalid argument, error stack:
MPI_Info_free(102): MPI_Info_free(info=0x73109fc) failed
MPI_Info_free(62).: Invalid MPI_Info
….
============================
Is there anything wrong with my simulation settings? What should I do to get monthly restart files?

Thanks a bunch!

eaton

Errors encountered while writing restart files are often due to hitting a memory limit. Try running with more tasks and on more nodes to increase the total available memory and decrease the memory per task.
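
For example, you can raise the per-component task counts in env_mach_pes.xml in the case directory, something along these lines (the counts below are illustrative; after changing the PE layout you have to reconfigure and rebuild the case, so check the CESM1.0 user's guide for the exact clean/configure steps on your system):
===============================
# Illustrative: spread memory over more MPI tasks by raising the component task counts.
./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 64
./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 64
./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 64
./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 64
./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 64
===============================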

bing.pu@...

Thanks, eaton! I increased from 32 processors (on 4 nodes) to 64 (on 8 nodes), the maximum in our cluster, and I still get the same error message. Our IT support said that each node has 24 GB of memory. How can I tell if the run is hitting the limit?

Here is the memory usage info from the ccsm.log file:

===============================
.....
Memory block size conversion in bytes is 4094.00
8 MB memory alloc in MB is 8.00
8 MB memory dealloc in MB is 8.00
Memory block size conversion in bytes is 4094.00
8 MB memory alloc in MB is 8.00
8 MB memory dealloc in MB is 8.00
Memory block size conversion in bytes is 4094.00
64 pes participating in computation
===============================

Thank you very much!!!

eaton

It seems that you have plenty of available memory.

The error message indicates a problem with an mpi_info_free call. I only see one reference to that routine in the source code, and it's in the pio code. What compiler (and version) are you using?

jedwards

This is fixed in a later version of pio. You can update to the latest version using svn from here:
http://parallelio.googlecode.com/svn/trunk_tags/pio1_2_6/pio

or (this is much easier) you can simply comment out the mpi_info_free call in
models/utils/pio/piolib_mod.F90
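
Either way, the change is roughly this (paths are relative to the top of the CESM source tree; the exact form of the call inside piolib_mod.F90 may differ in your version, so check it before editing), followed by a clean rebuild of the case:
===============================
# Option 1 (sketch): swap in the pio1_2_6 tag (back up the old directory first).
svn co http://parallelio.googlecode.com/svn/trunk_tags/pio1_2_6/pio models/utils/pio

# Option 2 (easier): locate the call and comment it out with a leading '!' in the Fortran source.
grep -in "mpi_info_free" models/utils/pio/piolib_mod.F90
===============================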

CESM Software Engineer

bing.pu@...

I commented out the mpi_info_free call, and it works! Thank you all!!!

1475922792@...

Hello all,

Is pop_frc.1x1d.090130.nc the right input data for the slab ocean model? Can I do parameterization intercomparisons using slab-ocean model simulations? I hope someone can help.

Thank you!

Jocelyn
