
Nodes/cores/storage for CLM simulations

Status: Not open for further replies.

Aber

New Member
Hi, I apologize for the complete newbie question (I asked this for CESM in another part of the forum, but I also need to ask for running CLM offline):
I've never run CLM before, and I'm trying to estimate roughly how many resources I would need on my local university HPC cluster to do so. What resources (in terms of number of nodes/cores) are typically required to run a global CLM simulation at a decent resolution (e.g., 1 deg) for a few decades, within an acceptable amount of time? I'd also be very grateful if you could describe how much space is needed to store the model output in that case (for a typical number of variables and temporal resolutions).
Thanks so much!
 

oleson

Keith Oleson
CSEG and Liaisons
Staff member
You could look here at the timing tables.


These are for our machine cheyenne, which was mothballed early this year; our new machine is a bit faster.
For example, this land-only CLM5 BGC-CROP compset at 1deg (1850_DATM%GSWP3v1_CLM50%BGC-CROP_SICE_SOCN_MOSART_CISM2%NOEVOLVE_SWAV_SIAC_SESP) uses 1836 cores and gets about 230 model years per wall-clock day.
I imagine your results will depend on your system configuration.
Currently, a standard CLM monthly average history file is about 285 MB in size.
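The numbers above can be turned into a back-of-envelope estimate. The sketch below uses the throughput and file size quoted in this thread (1836 cores, ~230 model years per wall-clock day, ~285 MB per monthly history file); the 30-year simulation length and the assumption of a single monthly history stream are illustrative, not from the thread, and your own system's throughput will differ.

```python
# Rough resource estimate for a land-only CLM run, using the
# cheyenne figures quoted above. The 30-year length and the
# single monthly history stream are assumptions for illustration.

SIM_YEARS = 30                    # assumed simulation length (decades)
YEARS_PER_WALLCLOCK_DAY = 230     # throughput quoted for 1836 cores on cheyenne
MONTHLY_FILE_MB = 285             # size of one monthly-average history file

wallclock_hours = SIM_YEARS / YEARS_PER_WALLCLOCK_DAY * 24
storage_gb = SIM_YEARS * 12 * MONTHLY_FILE_MB / 1024

print(f"~{wallclock_hours:.1f} wall-clock hours on 1836 cores")
print(f"~{storage_gb:.0f} GB of monthly history output")
```

So a few decades at 1 deg is on the order of a few wall-clock hours at that core count, and roughly 100 GB of monthly output; adding daily or sub-daily history streams would increase the storage substantially.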
 