Welcome to the new DiscussCESM forum!

Zonally homogenizing parameters at each timestep

Hello All,

I am currently trying to zonally average parameters such as surface flux and radiative heating within each timestep in CAM4. I have written some code to attempt this using the gather_chunk_to_field and scatter_field_to_chunk subroutines, but I realized that this does not solve the problem, since I am calculating the averages after tphysbc has finished. I need a way to calculate these fields, replace the interactive values, and then integrate them, all in the middle of the parallelized tphysbc section. Do you have any suggestions about the best way to do this?

I think I would first have to gather all of the chunks to the master processor, calculate the averages, and then scatter the averages back to the other processors, perhaps using something like MPI_Gather. This seems a bit tricky, so if anybody has example code or could point me in the right direction, that would be very helpful. Thank you!

Regards,
Minmin
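To make the gather-average-scatter idea concrete, here is a minimal serial sketch of the pattern in Python. The chunk layout and the function name are hypothetical stand-ins for CAM's physics chunks; in the model, the "gather" and "scatter" steps would be MPI collectives rather than plain loops.

```python
# Serial sketch of the gather -> zonal-average -> scatter pattern.
# Each "chunk" is a list of (latitude_index, value) columns, a
# hypothetical stand-in for a CAM physics chunk.

def zonally_homogenize(chunks):
    """Replace each column's value with the mean over all columns
    that share its latitude index, across every chunk."""
    sums, counts = {}, {}
    # "Gather": accumulate per-latitude sums over all chunks.
    for chunk in chunks:
        for lat, value in chunk:
            sums[lat] = sums.get(lat, 0.0) + value
            counts[lat] = counts.get(lat, 0) + 1
    means = {lat: sums[lat] / counts[lat] for lat in sums}
    # "Scatter": overwrite every column with its zonal mean.
    return [[(lat, means[lat]) for lat, _ in chunk] for chunk in chunks]

# Two chunks, columns tagged by latitude index:
chunks = [[(0, 1.0), (1, 3.0)], [(0, 3.0), (1, 5.0)]]
print(zonally_homogenize(chunks))
# -> [[(0, 2.0), (1, 4.0)], [(0, 2.0), (1, 4.0)]]
```

The averaging itself is trivial; the hard part in the model, as the posts below discuss, is where in the parallelized physics loop the gather and scatter can legally happen.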


Honghai Zhang
New Member
Hi Minmin,

I am trying to do something similar to what you describe in your post, and I wonder whether you ever figured out how to do the gather/scatter between global fields and chunks correctly.

The subroutine 'gather_chunk_to_field()' in phys_grid.F90 is designed to gather the chunks from each MPI process into a global lat/lon field on the master MPI process, and 'scatter_field_to_chunk()' does the reverse. However, it seems that these subroutines, or any generic MPI_Gatherv/MPI_Scatterv calls, cannot be used inside the parallelized tphysbc or tphysac sections, where the chunks on each MPI process are handled in parallel across different OpenMP threads but sequentially on each thread. If my understanding is correct, the only way to do a gather/scatter inside the parallelized tphysbc or tphysac sections would be to assign each individual chunk to its own thread (or process). This would require a very large number of PEs, equal to the number of chunks on the physics grid, and I am not sure whether that is technically doable.