
Cores, memory, and network interconnect for CESM2

agon

New Member
I would like to purchase nodes on my institution's supercomputers. Can someone estimate how many cores and how much memory are needed, as well as how fast the network interconnect should be, for CESM2 simulations at 0.25 degree resolution for about 30 simulated years?
 

jedwards

CSEG and Liaisons
Staff member
Considering your system here: Systems & Equipment | High Performance Computing
The Mellanox EDR (100 Gbps) switch is fine. You currently have Intel Skylake nodes on the system, so increasing the number of Skylake nodes is an option; following the Intel roadmap to a newer technology may also be an option. Most recently, AMD nodes have been outperforming Intel in price/performance, but it is probably not practical for you to add AMD nodes to your existing cluster. For CESM2 at ne120 (0.25 degree) resolution, we are getting up to nearly 10 ypd (simulated years per wall-clock day) on the TACC Frontera system (Cascade Lake) using up to 45K cores.
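For rough planning, throughput in simulated years per wall-clock day (ypd) converts directly into wall time and core-hour cost. A minimal sketch of that arithmetic (the ypd and core figures are the Frontera numbers above; the 30-year target is from the original question; the rest is illustrative):

```python
# Back-of-envelope cost estimate for the CESM2 ne120 (0.25 degree) case.
# The 10 ypd / 45,000-core data point is the Frontera figure quoted above.

sim_years = 30        # desired simulation length (years)
ypd = 10.0            # throughput: simulated years per wall-clock day
cores = 45_000        # core count used to reach that throughput

wall_days = sim_years / ypd            # 3.0 wall-clock days
core_hours = cores * wall_days * 24    # about 3.2 million core-hours

print(f"{wall_days:.1f} wall-clock days, {core_hours:,.0f} core-hours")
```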
 

agon

New Member
Thanks so much for your help!
 

agon

New Member
How many cores are needed to run at 10 ypd? 45,000 cores seems out of the range of what we could get. Did you mean 4,500 cores instead? @jedwards
 

jedwards

CSEG and Liaisons
Staff member
10 ypd for ne120 is an impressive performance number that can be achieved on only a few current HPC systems. I did mean 45,000 cores.
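If strong scaling were roughly linear (an optimistic simplification; real scaling depends heavily on the machine and model configuration), the core count could be scaled down in proportion to the throughput target, which is where a figure like 4,500 cores for 1 ypd would come from:

```python
# Hypothetical linear-scaling estimate from the Frontera data point
# (10 ypd on 45,000 cores) down to a 1 ypd target. Actual scaling is
# not linear, so treat this only as a rough first guess.

ref_ypd, ref_cores = 10.0, 45_000
target_ypd = 1.0

est_cores = ref_cores * (target_ypd / ref_ypd)   # 4,500 cores
print(f"~{est_cores:,.0f} cores for {target_ypd} ypd (linear-scaling assumption)")
```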
 

agon

New Member
Thanks for that information. I guess I could run it at 1 ypd, since my resources do not allow for such high performance. I could also run at 1 degree resolution, which I think is standard these days.
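At 1 ypd, the 30-year target works out to roughly a month of wall-clock time, which in practice means splitting the run into many resubmitted job segments (CESM's CIME scripts handle this via the RESUBMIT setting in env_run.xml). A sketch of that breakdown, assuming a hypothetical 12-hour per-job queue limit:

```python
# Segmenting a 30-year run at 1 ypd into resubmitted jobs.
# The 12-hour queue limit is a hypothetical value; check your site's policy.

sim_years, ypd = 30, 1.0
queue_limit_hours = 12                         # hypothetical per-job cap

total_hours = (sim_years / ypd) * 24           # 720 wall-clock hours (~30 days)
years_per_job = ypd * queue_limit_hours / 24   # 0.5 simulated years per job
n_jobs = int(total_hours / queue_limit_hours)  # 60 job segments

print(f"{total_hours:.0f} h total, {years_per_job} years per segment, {n_jobs} segments")
```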
 