Q: Ensuring CESM is using parallel I/O?

bdobbins@...

Hi guys,

I sent an e-mail to cesm-help with a related issue, but began to dive into things a bit more and thought I'd perhaps post here in case others have tackled this issue.

In a nutshell, I'm wondering how I can verify that CESM is doing the 'right' thing with PIO?

I'm running CESM (v1.0.3) with pNetCDF selected as the output type for the OCN component, on 512 cores total with PIO_NUMTASKS (and OCN_PIO_NUMTASKS) set to 32, and my I/O performance is not noticeably improving - any gains seem to be within the noise.

I also downloaded the (full) PIO source and have been using its 'testpio' functionality to run a number of tests on my system with a slightly modified POPD configuration, and there I can see clear differences between the serial NetCDF (snc) and pNetCDF (pnc) tests for a given number of cores and I/O tasks. So I'm questioning whether my CESM build is truly using 'pnetcdf' mode in PIO for output. What things can I check or look for? Does anyone have performance numbers from PIO on, say, Janus, or similar systems?
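For reference, here's roughly how I set those values from the case directory (CESM 1.0.x xmlchange syntax; the snippet defaults to only echoing the commands so it's safe to paste - clear DRYRUN to actually modify env_run.xml):

```shell
# Set the PIO task counts from the case directory. By default this only
# previews the commands (DRYRUN=echo); set DRYRUN= to actually apply them.
DRYRUN=${DRYRUN:-echo}
$DRYRUN ./xmlchange -file env_run.xml -id PIO_NUMTASKS -val 32
$DRYRUN ./xmlchange -file env_run.xml -id OCN_PIO_NUMTASKS -val 32
```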

I'm also intending to perform some tests on the (Lustre) file system and play around with our stripe count and size settings, but any sort of sanity check at this point sure would be nice.
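For anyone curious, the striping changes I have in mind look something like this (the path is a placeholder for wherever your inputdata sits, and the exact flags can vary by Lustre version - check 'lfs help setstripe'; the snippet just echoes the commands by default):

```shell
# Widen striping on the CESM input directory so large files are spread
# across more OSTs. DRYRUN=echo previews the commands; clear it to run.
DRYRUN=${DRYRUN:-echo}
INPUTDIR=${INPUTDIR:-/lustre/scratch/cesm/inputdata}
$DRYRUN lfs setstripe -c 8 "$INPUTDIR"   # stripe count 8 for new files
$DRYRUN lfs getstripe -d "$INPUTDIR"     # confirm the directory default
```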

Thanks very much,
- Brian

jedwards

Hi Brian,

Glad that someone is looking at I/O performance. You should be able to look near the top of the ccsm.log file to see what the PIO settings were. When you downloaded the PIO library, did you get the same version as in the 1.0.3 release or the latest version? If you got the latest, the improved performance in the standalone tests makes sense, because it is a much newer version than the one in the 1.0.3 release. You could try linking the library you built standalone into the CESM case you are running.
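A quick way to pull those settings out of the log (the sample lines below are only illustrative of the kind of output printed near the top of ccsm.log - the exact wording varies by version; in pnetcdf mode you should see iotype 5):

```shell
# Write an illustrative log excerpt, then grep the PIO settings from it.
# Against a real case, just run the grep on your run directory's ccsm.log.
cat > ccsm.log.sample <<'EOF'
 pio_numtasks =  32
 pio_typename = pnetcdf   (iotype =  5)
EOF
grep -i pio ccsm.log.sample
```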

CESM Software Engineer

bdobbins@...

Hi Jim,

Thanks for the ideas - I checked my ccsm.log file and it shows that 'pnetcdf' is being used (both by name and by iotype=5), so that at least seems settled. As for the PIO library I used for the standalone tests, it was the latest (v1.4.0), whereas CESM claims it's using 1.2.0. Interestingly, both CESM 1.0.3 and CESM 1.0.4 report PIO 1.2.0, yet the two directories have quite a few differences, so my guess is CESM 1.0.4 actually has some changes. (Nothing in the release notes indicated PIO changes, but I'll give it a shot anyway.)

As for grafting the newer libpio.a into the CESM 1.0.3 build, I'll play around with it a bit more, but an initial attempt showed that a number of routines are different and/or missing, leading to link errors.

In the meantime, I'm also setting up some tests on <512 cores to simplify and verify a few things. It appears that with standard (non-parallel) NetCDF output, we might actually run faster on 256 cores than on 512: while the standard daily dt is roughly double, the end-of-month dt drops to almost a third, and that end-of-month step accounts for >50% of the total run time. This seems strange to me. With standard NetCDF I/O, all output is done through rank 0, correct? Even so, with a fast QDR IB network I wouldn't expect the extra communication, even if serialized, to take that much longer on 2x the number of cores.

Anyway, setting up some additional tests on 128-512 cores with PIO should help clarify things. In the meantime (I sent a similar note to John Dennis), do you happen to have any cpl.log entries from runs on Janus, for example, on 256-1024 cores with PIO enabled, for B component sets on, ideally, the 0.9x1.25_gx1v6 grid? Having some numbers to compare against would be a healthy sanity check.

Thanks again,
- Brian

(PS. Once I solve whatever it is, I'll post any necessary steps here in case it helps others.)

jedwards

There was a change made to pio 1.4.0 that did not make it into the cesm1_0_4 release. Try replacing the file calcdecomp.F90 in your cesm1_0_4 tree with the one from pio1_4_0.

This will not work for cesm1_0_3, but it will for cesm1_0_4.
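The swap looks something like this - the demo trees below are placeholders so the commands run as pasted; substitute your actual pio1_4_0 download and cesm1_0_4 checkout, and adjust the models/utils/pio path if your tree layout differs:

```shell
# Placeholder trees standing in for the real pio1_4_0 download and the
# cesm1_0_4 checkout (in the release, pio sits under models/utils/pio).
mkdir -p demo/pio1_4_0/pio demo/cesm1_0_4/models/utils/pio
echo '! pio 1.4.0 calcdecomp' > demo/pio1_4_0/pio/calcdecomp.F90
# Copy the newer file over the release version.
cp demo/pio1_4_0/pio/calcdecomp.F90 demo/cesm1_0_4/models/utils/pio/calcdecomp.F90
# Then clean and rebuild the case so the replacement gets compiled in.
```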

Jim

CESM Software Engineer

bdobbins@...

Excellent, thanks. I'm off to various meetings shortly, but will configure a few CESM 1.0.4 cases later today and see whether that helps.

bdobbins@...

Hi Jim,

I'll provide a more thorough update tomorrow or so (once additional resources free up on my cluster), but in the meantime, it looks like the problem is solved. In addition to switching to CESM 1.0.4, plus the updated PIO file you mentioned, I also moved the CESM input directory so it now sits under a Lustre stripe count of 8 instead of 1 - this probably contributes to the change a bit as well, though I'll quantify that later.

In the meantime, running two identical cases, both using 256 cores and a B_1850_CAM5_CN component set, shows end-of-month times dropping from ~300-400s to ~70-80s. This was with 8 PIO tasks for most components. I've got some 512-core tests configured for tomorrow, varying the number of PIO processes and Lustre stripes.

In short, things are looking encouraging, and once I do a few additional tests I'll provide some info here in case it's helpful to anyone else.

Thanks again,
-Brian

