
Zonally homogenizing parameters at each timestep

minminfu

Member
Hello All,

I am currently trying to zonally average parameters such as surface flux and radiative heating within each timestep in CAM4. I have written some code to attempt this using the gather_chunk_to_field and scatter_field_to_chunk subroutines, but I realized that what I did does not solve the problem, since I am calculating these averages after tphysbc has finished. I need a way to calculate these fields, replace the interactive values, and then integrate them, all in the middle of the parallelized tphysbc section. Do you have any suggestions about the best way to do this? I think I would have to first gather all of the chunks to the master processor, calculate the averages, and then scatter them back to all of the other processors, perhaps using something like mpi_gather. Anyway, this seems a bit tricky, so if anybody has some example code or could point me in the right direction, that would be very helpful. Thank you!

Regards,
Minmin
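
Below is a minimal Fortran sketch of one way to do this without routing everything through the master task: each MPI task accumulates per-latitude partial sums over its own chunks, and a single mpi_allreduce combines them, so every task ends up holding the same zonal means and no explicit scatter step is needed. The module names (shr_kind_mod, ppgrid, phys_grid, mpishorthand) and accessors (get_ncols_p, get_lat_all_p) follow CAM conventions but should be checked against your model version, and the routine must be called from single-threaded code, outside the per-chunk physics loop.

    ! Sketch only: replace a chunked surface field with its zonal mean.
    ! All CAM module/routine names below are assumptions to verify locally.
    subroutine zonal_homogenize(fld, nlats)
       use shr_kind_mod, only: r8 => shr_kind_r8
       use ppgrid,       only: pcols, begchunk, endchunk
       use phys_grid,    only: get_ncols_p, get_lat_all_p
       use mpishorthand, only: mpicom      ! CAM's MPI communicator shorthand
       implicit none
       include 'mpif.h'

       integer,  intent(in)    :: nlats                          ! number of global latitudes
       real(r8), intent(inout) :: fld(pcols,begchunk:endchunk)   ! chunked surface field

       real(r8) :: psum(nlats), gsum(nlats)   ! local / global per-latitude sums
       real(r8) :: pcnt(nlats), gcnt(nlats)   ! local / global per-latitude counts
       integer  :: lats(pcols)
       integer  :: c, i, ncol, ierr

       psum = 0._r8
       pcnt = 0._r8

       ! Accumulate per-latitude partial sums over the chunks owned by this task.
       do c = begchunk, endchunk
          ncol = get_ncols_p(c)
          call get_lat_all_p(c, pcols, lats)
          do i = 1, ncol
             psum(lats(i)) = psum(lats(i)) + fld(i,c)
             pcnt(lats(i)) = pcnt(lats(i)) + 1._r8
          end do
       end do

       ! One collective combines the partial sums; every task receives the
       ! result, so no scatter back from the master task is needed.
       call mpi_allreduce(psum, gsum, nlats, MPI_REAL8, MPI_SUM, mpicom, ierr)
       call mpi_allreduce(pcnt, gcnt, nlats, MPI_REAL8, MPI_SUM, mpicom, ierr)

       ! Overwrite every column with the mean of its latitude band.
       do c = begchunk, endchunk
          ncol = get_ncols_p(c)
          call get_lat_all_p(c, pcols, lats)
          do i = 1, ncol
             if (gcnt(lats(i)) > 0._r8) fld(i,c) = gsum(lats(i)) / gcnt(lats(i))
          end do
       end do
    end subroutine zonal_homogenize

Note that this gives an unweighted mean per latitude band, which is fine on a regular lat/lon grid where all columns in a band have equal area; a reduced or unstructured grid would need area weights.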
 

hhzhang

Honghai Zhang
New Member
Hi Minmin,

I am trying to do similar things as in your post here, and wonder if you have figured out a way to do the correct gather/scatter between global fields and chunks.

The subroutine 'gather_chunk_to_field()' in phys_grid.F90 is designed to gather the chunks from each MPI process into a global lat/lon field on the master MPI process, and 'scatter_field_to_chunk()' does the reverse. However, it seems that neither these subroutines nor generic MPI_Gatherv/Scatterv calls can be used inside the parallelized tphysbc or tphysac sections, where the chunks on each MPI process are handled in parallel across different OMP threads but sequentially within each thread. If my understanding is correct, then the only way to do a gather/scatter inside the parallelized tphysbc or tphysac sections is to distribute each individual chunk to a different thread (or process). This would require a very large number of PEs, equal to the number of chunks on the physics grid, and I am not sure that is technically doable.
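
If that reading is right, one alternative that avoids needing one PE per chunk is to restructure the driver instead: split the physics into two threaded chunk loops and place the collective between them, where only the master thread of each task is active. A rough sketch follows; tphysbc_part1/tphysbc_part2 are invented names for the two halves of the split physics, flux is a placeholder for the chunked field, and plat is CAM's global latitude count (the real chunk loop lives in physpkg.F90):

    ! Hypothetical restructuring of the physics driver, for illustration only.
    !$OMP PARALLEL DO PRIVATE (c)
    do c = begchunk, endchunk
       call tphysbc_part1(c)   ! physics up to where the zonal mean is needed
    end do
    !$OMP END PARALLEL DO

    ! Outside the OMP region every chunk on this task is up to date, so an
    ! MPI collective (e.g. the allreduce sketch in the post above) is safe.
    call zonal_homogenize(flux, plat)

    !$OMP PARALLEL DO PRIVATE (c)
    do c = begchunk, endchunk
       call tphysbc_part2(c)   ! remaining physics, now seeing the zonal means
    end do
    !$OMP END PARALLEL DO

The cost is one extra synchronization point per timestep, plus whatever state must be carried between the two halves of tphysbc, but the chunk decomposition and thread count stay unchanged.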

Best,
Honghai
 