
cam5 FV dynamical core

jedwards

CSEG and Liaisons
Staff member
Hi Dan, ... build larger blocks out of all the chunks stored in an MPI node. The upcoming PIO2 library will do this.
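For readers who have not seen the rearrangement step, the sketch below shows the general idea in mpi4py terms: gather the small chunks held by the ranks on one node onto a per-node leader rank so that it holds a single larger contiguous block. This is only an illustration of the concept, not the PIO2 API; the node-local gather and the array sizes are assumptions.

```python
# Minimal sketch (not PIO2 itself): combine the chunks held by all MPI ranks
# on one node into a single larger block on a per-node "leader" rank.
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
# Split the world communicator into one communicator per shared-memory node.
node_comm = world.Split_type(MPI.COMM_TYPE_SHARED, key=world.rank)

# Each rank owns a small, spatially scattered chunk of columns (illustrative sizes).
my_chunk = np.full(16, world.rank, dtype=np.float64)

# Gather every rank's chunk onto the node leader (node_comm rank 0),
# producing one larger contiguous block per node.
chunks = node_comm.gather(my_chunk, root=0)
if node_comm.rank == 0:
    node_block = np.concatenate(chunks)
    # node_block could now be handed to an I/O or in-situ analysis layer.
    print(f"node leader (world rank {world.rank}) holds {node_block.size} values")
```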
 

jedwards

CSEG and Liaisons
Staff member
Hopefully by the end of the year. If I understand what you are trying to do, and I'm not sure yet that I do, it would involve modifying the API a bit so that you can intercept the data between the rearrangement and the write steps. We can discuss details via email if you like. - Jim
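A hypothetical sketch of the kind of hook Jim describes; none of these names are real PIO functions, they only show the shape of "intercept the data between the rearrangement and the write steps".

```python
# Hypothetical hook: expose the already-rearranged block to an in-situ
# consumer before the normal write proceeds. All names here are made up.
from typing import Callable, Optional
import numpy as np

def write_darray(rearranged_block: np.ndarray,
                 write_fn: Callable[[np.ndarray], None],
                 intercept: Optional[Callable[[np.ndarray], None]] = None) -> None:
    """Write an already-rearranged block, optionally letting a caller see it first."""
    if intercept is not None:
        intercept(rearranged_block)  # e.g. hand the block to a Catalyst adapter
    write_fn(rearranged_block)       # then carry on with the normal write
```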
 

Hi Jim,

The goal of ParaView Catalyst is to enable post-processing on the compute nodes. This contrasts with the traditional post-processing workflow: save derived data, move the data to a different machine, and create visualizations there. To do this we link a slimmed-down version of ParaView into the simulation code, pass the data on each compute node to the ParaView instance running on that node, and perform post-processing with any VTK/ParaView pipeline, written in Python. We can save processed datasets or visualizations, or send processed data to a remote ParaView.

The challenge for cam5 (fv, cam5, trop_mam3) is that the computational load balancing (chunking) performed by the physics module seems to imply spatial non-coherence between the data stored on different compute nodes (columns are not next to each other on one compute node).

Our current project is limited in goals (and funds), so I will not pursue the PIO route further at the moment. Just out of curiosity though, why do you want to build larger blocks on the compute nodes? That would help us for visualization, but the physics module does not seem to need it.

Thank you,
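As a rough illustration of the "pass the data on each compute node to ParaView" step, here is a minimal sketch assuming the node-local columns arrive as flat NumPy arrays (lons, lats, and a field such as temperature). It is not the project's actual Catalyst adapter, just the kind of VTK dataset a node-local VTK/ParaView pipeline could consume.

```python
# Build a simple VTK point set from node-local column data (illustrative values).
import numpy as np
import vtk
from vtk.util import numpy_support

# In reality these arrays would come from the physics chunks on this node.
lons = np.linspace(0.0, 90.0, 64)
lats = np.linspace(-45.0, 45.0, 64)
temperature = 250.0 + 40.0 * np.random.rand(64)

# One point per column.
points = vtk.vtkPoints()
for lon, lat in zip(lons, lats):
    points.InsertNextPoint(lon, lat, 0.0)

poly = vtk.vtkPolyData()
poly.SetPoints(points)

# Attach the field as a named point array so a Python pipeline can use it.
arr = numpy_support.numpy_to_vtk(temperature, deep=True)
arr.SetName("T")
poly.GetPointData().AddArray(arr)
# 'poly' is the sort of dataset a Catalyst-style script would post-process in situ.
```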
 

santos

Member
Regarding "spatial non-coherence between data stored on different compute nodes": I should point out that this is something of an understatement; the most common load-balancing strategies deliberately try to minimize spatial coherence, in order to improve the chances that each task has a mix of columns from "expensive" regions and from "cheap" regions (e.g. daytime is more expensive than night, and cloudy sky is more expensive than clear sky).
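A small illustrative sketch of this point, assuming a toy cost model; it is not CAM's actual phys_grid code. Dealing columns out round-robin gives every task a mix of expensive and cheap columns, at the price of each task owning columns scattered across the globe.

```python
# Toy example: round-robin column assignment balances cost but destroys
# spatial coherence. Cost model and sizes are made up for illustration.
import numpy as np

ncols, ntasks = 32, 4
# Pretend the first half of the columns are daytime and cost twice as much.
cost = np.where(np.arange(ncols) < ncols // 2, 2.0, 1.0)

owner = np.arange(ncols) % ntasks  # round-robin assignment of columns to tasks
for t in range(ntasks):
    mine = np.where(owner == t)[0]
    print(f"task {t}: columns {mine[:4]}... total cost {cost[mine].sum():.1f}")
# Every task ends up with (nearly) the same total cost, but its columns are
# interleaved with other tasks' columns rather than spatially contiguous.
```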
 