Hi, I apologize for the complete newbie question; this must have been answered before, but I couldn't find clear info on it here or on the web. I'm not even sure my question is precise enough to make sense... but here it is:
I've never run CESM before, and I'm trying to determine what resources I would need on my local university HPC cluster to do so (possibly purchasing them to get priority access). To run a fully coupled (ocean/atmosphere) CESM simulation, say a historical or transient climate change experiment, in an acceptable amount of time (a few weeks, I suppose?), what computing resources are needed in terms of number of nodes and cores? I am a bit (a lot) out of my depth here, so don't hesitate to ELI5 (explain like I'm 5). I'd also be very grateful if you could describe how much disk space is needed to store the model output in that case (for a typical set of variables and spatial/temporal resolution). And what about a land/atmosphere-only (prescribed SST) case?
Perhaps the answer for someone like me is simply to ask to run simulations on NCAR machines, but I've heard you can only do that with an NSF grant.
Thank you!