
How do "chunk" and "pcols" work in CAM's physics?

FuchengY (Fucheng Yang)
New Member
What version of the code are you using?
CESM2.2.2 on Derecho (grid: 288*192*32)


Have you made any changes to files in the source tree?
Yes. In cam_init, I manually output pcols, pver, begchunk, and endchunk using write(iulog,*).
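Roughly like the following sketch (in CAM these variables live in the ppgrid module and the log unit in cam_logfile; the exact print added may differ):

    use ppgrid,      only: pcols, pver, begchunk, endchunk
    use cam_logfile, only: iulog

    write(iulog,*) 'pcols=', pcols, ' pver=', pver, &
                   ' begchunk=', begchunk, ' endchunk=', endchunk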


Describe your problem or question:
I got the following result in the log file:
pcols=16, pver=32, begchunk=513, and endchunk=519.
I am quite confused about how variables are stored in these chunks. I think "pver" means the number of vertical levels, but pcols*(begchunk:endchunk) obviously does not match the 288*192 spatial grid. Could anyone help me understand this result?
 

hplin (Haipeng Lin)
Moderator, Staff member
Hi Fucheng,

You are correct that pver is the number of vertical levels.
For the horizontal grid, the whole grid (288*192) is divided across MPI tasks, so each task sees only a subset of the full domain. Within each task there are a number of chunks (begchunk:endchunk). Within each chunk, pcols is the maximum number of columns per chunk, and ncol is the number of columns actually used in the current chunk (lchnk). In your case, this one task holds up to 16 * 7 = 112 columns; summed over all MPI tasks, the chunks cover the full 288*192 grid.

Hope this is helpful!
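For example, the standard loop over chunks in CAM physics code looks roughly like this (a minimal sketch: pcols, pver, begchunk, and endchunk come from ppgrid, get_ncols_p from phys_grid; the field and its shape are only for illustration):

    use shr_kind_mod, only: r8
    use ppgrid,       only: pcols, pver, begchunk, endchunk
    use phys_grid,    only: get_ncols_p

    real(r8) :: field(pcols, pver, begchunk:endchunk)
    integer  :: lchnk, ncol, i, k

    do lchnk = begchunk, endchunk
       ncol = get_ncols_p(lchnk)        ! active columns in this chunk (<= pcols)
       do k = 1, pver
          do i = 1, ncol
             field(i,k,lchnk) = 0._r8   ! operate on column i, level k only;
          end do                        ! columns ncol+1..pcols are padding
       end do
    end do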
 

FuchengY (Fucheng Yang)
New Member
Thank you, Haipeng!

I am not familiar with how MPI works, so is there any point where I can see the whole domain rather than just a subset?
 

hplin

Haipeng Lin
Moderator
Staff member
Hi Fucheng, could you elaborate on what you're trying to accomplish?

Because the domain is decomposed across MPI tasks, it is not possible for a single task to see the whole domain, but you may be able to use MPI to pass data (e.g. neighboring columns) between tasks if needed. Most of the time, though, you can do the computation locally on each task's subset of the domain! So it would be helpful to know your specific use case.
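As an illustration of the usual approach (a sketch, not CAM's own API: some_column_value is a hypothetical per-column quantity, and mpicom is CAM's MPI communicator from spmd_utils; standard mpi_allreduce is used here for brevity):

    use shr_kind_mod, only: r8
    use ppgrid,       only: begchunk, endchunk
    use phys_grid,    only: get_ncols_p
    use spmd_utils,   only: mpicom
    use mpi

    real(r8) :: local_sum, global_sum
    integer  :: lchnk, ncol, i, ierr

    ! sum a quantity over the columns owned by this task...
    local_sum = 0._r8
    do lchnk = begchunk, endchunk
       ncol = get_ncols_p(lchnk)
       do i = 1, ncol
          local_sum = local_sum + some_column_value(i, lchnk)  ! hypothetical field
       end do
    end do

    ! ...then combine across all tasks; every task receives the global result
    call mpi_allreduce(local_sum, global_sum, 1, mpi_real8, mpi_sum, mpicom, ierr)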
 

FuchengY (Fucheng Yang)
New Member
It is not about a specific goal; I am just trying to understand how CAM works with parallel computing.

I also noticed that there is code like: if (masterproc) then ... I guess the master process does all the tasks the other processes do, plus the extra work inside such "if" blocks. Is my understanding correct?
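For reference, the pattern being asked about looks like this (a minimal sketch; in CAM, masterproc comes from spmd_utils and iulog from cam_logfile):

    use spmd_utils,  only: masterproc
    use cam_logfile, only: iulog

    ! every task runs the surrounding code, but only the root task
    ! (where masterproc is .true.) enters this block
    if (masterproc) then
       write(iulog,*) 'printed once, by the root task only'
    end if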
 