*** glibc detected *** when using initial conditions created by CAM3.1

emonier@mit_edu

New Member
Hi,

I ran into a strange problem. I compiled CAM3.1 with pgf90 (pgi/9.0-4), netcdf/3.6.2, and openmpi/1.3.3a for a Linux x86_64 cluster, both with and without SPMD enabled. I run the default setup with the finite-volume 4x5 dynamical core and add the following to the namelist:

&camexp
MSS_IRT = 0
NELAPSE = -2
INITHIST = 'DAILY'
/
&clmexp
/
Everything works fine. I then use the initial conditions created by that run as new initial conditions adding this to the namelist:

&camexp
MSS_IRT = 0
NELAPSE = -2
INITHIST = 'DAILY'
NCDATA = 'path/to/the/new/initial/conditions'
/
&clmexp
/

I get the following error message:

*** glibc detected *** /home/emonier/cam3.1/test-run/bld-single-proc-4x5/cam: double free or corruption (!prev): 0x000000000d8fd830 ***
[c001:17694] *** Process received signal ***
[c001:17692] *** Process received signal ***
[c001:17692] Signal: Segmentation fault (11)
...
...

right before what should read (if it weren't failing):

nstep, te 0 3338593048.973364 0.000000000000000 0.000000000000000 98463.20727559768

This problem also occurs with FV 2x2.5, whether I compile with or without SPMD and regardless of the number of cores I run on. However, everything works fine with FV 10x15, T21, T31, and T42.
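(For context on the resolutions above: the grid sizes implied by those FV resolution strings can be computed directly. A minimal sketch in Python, assuming CAM's usual pole-inclusive FV latitude convention; this kind of check can be handy for confirming that the lat/lon dimensions in an NCDATA file actually match the model grid:)

```python
def fv_grid_size(res):
    """Return (nlat, nlon) for a CAM finite-volume resolution string
    like '4x5' (latitude spacing x longitude spacing, in degrees).

    Assumption: pole-inclusive latitudes, so nlat = 180/dlat + 1
    and nlon = 360/dlon (longitudes wrap, no duplicate point).
    """
    dlat, dlon = (float(x) for x in res.split("x"))
    return int(180 / dlat) + 1, int(360 / dlon)

# The resolutions mentioned in the post:
print(fv_grid_size("4x5"))    # -> (46, 72)
print(fv_grid_size("2x2.5"))  # -> (91, 144)
print(fv_grid_size("10x15"))  # -> (19, 24)
```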

Can anyone help me with this? I am lost.
Thanks,


PS: I tried setting "export MALLOC_CHECK_=0", as I read on a forum that it might fix the problem, but the run still crashes with a segmentation fault.
 