
Issues during model execution of CAM6_03_80

dharmendraks841

Dharmendra Kumar Singh
Member
I am running into an error with a one-year CAM6_03_80 simulation. In a previous attempt it stopped after 11 months, and now it stops after 8 months with the error below.

Could an expert please respond to the following questions?

Specifically:

What does the following MPI_COMM_WORLD rank message mean?
1:MPT ERROR: MPI_COMM_WORLD rank 55 has terminated without calling MPI_Finalize()

And why does my case stop after 8 months of a successful run (of a one-year simulation), failing to run September, October, November, and December 2019?

The case path is as follows:

/glade/scratch/dksingh/anual_2019_contrail_runtest



The detailed error output is as follows:

55:MPT ERROR: Rank 55(g:55) received signal SIGFPE(8).
280:MPT ERROR: Rank 280(g:280) received signal SIGFPE(8).
25:MPT ERROR: Rank 25(g:25) received signal SIGFPE(8).
55:MPT: header=header@entry=0x7fff5be9cc50 "MPT ERROR: Rank 55(g:55) received signal SIGFPE(8).\n\tProcess ID: 11926, Host: r11i2n5, Program: /glade/scratch/dksingh/anual_2019_contrail_runtest/bld/cesm.exe\n\tMPT Version: HPE MPT 2.25 08/14/21 03:"...) at sig.c:340
280:MPT: header=header@entry=0x7ffd134089d0 "MPT ERROR: Rank 280(g:280) received signal SIGFPE(8).\n\tProcess ID: 20831, Host: r11i6n2, Program: /glade/scratch/dksingh/anual_2019_contrail_runtest/bld/cesm.exe\n\tMPT Version: HPE MPT 2.25 08/14/21 0"...) at sig.c:340
55:MPT: cam_in=<error reading variable: value requires 97248 bytes, which is more than max-value-size>,
55:MPT: cam_out=<error reading variable: value requires 107808 bytes, which is more than max-value-size>)
55:MPT: cam_in=<error reading variable: value requires 97248 bytes, which is more than max-value-size>,
55:MPT: cam_out=<error reading variable: value requires 107808 bytes, which is more than max-value-size>)
55:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: cam_in=<error reading variable: value requires 97248 bytes, which is more than max-value-size>,
280:MPT: cam_out=<error reading variable: value requires 107808 bytes, which is more than max-value-size>)
280:MPT: cam_in=<error reading variable: value requires 97248 bytes, which is more than max-value-size>,
280:MPT: cam_out=<error reading variable: value requires 107808 bytes, which is more than max-value-size>)
280:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: importstate=<error reading variable: Location address is not set.>,
55:MPT: exportstate=<error reading variable: Location address is not set.>,
55:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=<error reading variable: Location address is not set.>,
55:MPT: exportstate=<error reading variable: Location address is not set.>,
55:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
55:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: importstate=<error reading variable: Location address is not set.>,
280:MPT: exportstate=<error reading variable: Location address is not set.>,
280:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=<error reading variable: Location address is not set.>,
280:MPT: exportstate=<error reading variable: Location address is not set.>,
280:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
280:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
-1:MPT ERROR: MPI_COMM_WORLD rank 55 has terminated without calling MPI_Finalize()
 

peverley

Courtney Peverley
Moderator
Staff member
It looks like you're running out of memory.

This could be because you have run out of space in your scratch directory (you could try removing some old cases or directories).
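If it helps, a quick way to check is something like the following (a rough sketch; gladequota is the NCAR utility that prints the per-filesystem usage report, and the du command is just standard Unix):

gladequota
# or see what is taking the most space in scratch
du -sh /glade/scratch/$USER/* | sort -h | tail -20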

Let me know if that doesn't work. There are a couple more things we could try.

Courtney
 

dharmendraks841

Dharmendra Kumar Singh
Member
not at all!

Current GLADE space usage: dksingh

Space                     Used          Quota         % Full    # Files
------------------------  ------------  ------------  --------  ---------
/glade/scratch/dksingh    1396.12 GiB   10.00 TiB     13.63 %   47171
/glade/work/dksingh       931.74 GiB    1024.00 GiB   90.99 %   31532
/glade/u/home/dksingh     0.13 GiB      100.00 GiB    0.13 %    2319
 

peverley

Courtney Peverley
Moderator
Staff member
Ok, yep. That's not your problem!

Have you tried changing the number of tasks that the case is running on?

It looks like you currently have 360 tasks, so you could try upping that to 720 with:

./xmlchange NTASKS=720
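As a rough sketch of the full sequence (assuming the standard CIME workflow, where a change to the PE layout requires resetting the case and rebuilding):

./xmlchange NTASKS=720    # double the MPI task count
./case.setup --reset      # regenerate the PE layout and namelists
./case.build --clean-all  # remove the previous build
./case.build              # rebuild cesm.exe
./case.submit             # resubmit the run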

Let me know if that gets you further.

Courtney
 

dharmendraks841

Dharmendra Kumar Singh
Member
Hi,

Today I created a new case to run CAM6_03_80 and followed your suggestions and those of other experts on the CESM forum, but the model again failed to execute.

For testing, I started a run for only one month (starting 20200101) and found the following error:



2023-01-08 00:52:13: model execution error
ERROR: Command: 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed with error '' from dir '/glade/scratch/dksingh/no_aviation_contrail/run'
---------------------------------------------------
2023-01-08 00:52:13: case.run error
ERROR: RUN FAIL: Command 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed
See log file for details: /glade/scratch/dksingh/no_aviation_contrail/run/cesm.log.8032767.chadmin1.ib0.cheyenne.ucar.edu.230108-005041

43:MPT ERROR: Rank 43(g:43) is aborting with error code 1001.
25:MPT ERROR: Rank 25(g:25) is aborting with error code 1001.
648:MPT ERROR: Rank 648(g:648) is aborting with error code 1001.
-1:MPT ERROR: MPI_COMM_WORLD rank 585 has terminated without calling MPI_Finalize()
-1:MPT ERROR: MPI_COMM_WORLD rank 555 has terminated without calling MPI_Finalize()
-1:MPT ERROR: MPI_COMM_WORLD rank 103 has terminated without calling MPI_Finalize()
(base) dksingh@cheyenne1:/glade/scratch/dksingh/no_aviation_contrail>


Please also check my user_nl_cam namelist:

! namelist_var = new_namelist_value
&metdata_nl
ncdata = '/glade/scratch/dksingh/RF_2020/01/MERRA2_0.9x1.25_L32_20200101.nc'
met_nudge_temp = .true.
met_data_file = '01/MERRA2_0.9x1.25_L32_20200101.nc'
met_data_path = '/glade/scratch/dksingh/RF_2020/01/'
met_filenames_list = '/glade/scratch/dksingh/RF_2020/filelist_2020_met.txt'
met_fix_mass = .true.
met_qflx_factor = 1.0
met_rlx_time = 24.
met_srf_land = .false.
met_srf_land_scale = .true.
met_srf_nudge_flux = .false.
met_srf_rad = .true.
met_srf_refs = .true.
/
 

dharmendraks841

Dharmendra Kumar Singh
Member
I am again getting the same issue, this time after 4 months of a successful run (out of 12 months).
Please take a look; I have been running into this same issue for a long time.

2023-01-11 01:40:39: model execution error


ERROR: Command: 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/feb_dec20_no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed with error '' from dir '/glade/scratch/dksingh/feb_dec20_no_aviation_contrail/run'
---------------------------------------------------
2023-01-11 01:40:39: case.run error
ERROR: RUN FAIL: Command 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/feb_dec20_no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed
See log file for details: /glade/scratch/dksingh/feb_dec20_no_aviation_contrail/run/cesm.log.8070919.chadmin1.ib0.cheyenne.ucar.edu.230111-011905
---------------------------------------------------
(base) dksingh@cheyenne1:/glade/scratch/dksingh/feb_dec20_no_aviation_contrail> grep -i error /glade/scratch/dksingh/feb_dec20_no_aviation_contrail/run/cesm.log.8070919.chadmin1.ib0.cheyenne.ucar.edu.230111-011905
162:MPT ERROR: Rank 162(g:162) received signal SIGFPE(8).
162:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=..., exportstate=..., clock=...,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: index=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: existflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: importstate=<error reading variable: Location address is not set.>,
162:MPT: exportstate=<error reading variable: Location address is not set.>,
162:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: port=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: keywordenforcer=<error reading variable: Cannot access memory at address 0x0>, importstate=<error reading variable: Location address is not set.>,
162:MPT: exportstate=<error reading variable: Location address is not set.>,
162:MPT: clock=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: syncflag=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: phase=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeout=<error reading variable: Cannot access memory at address 0x0>,
162:MPT: timeoutflag=<error reading variable: Cannot access memory at address 0x0>,
-1:MPT ERROR: MPI_COMM_WORLD rank 162 has terminated without calling MPI_Finalize()
 

peverley

Courtney Peverley
Moderator
Staff member
Hi again,

Did you get the case in your previous post (/glade/scratch/dksingh/no_aviation_contrail) to run? It looks like the log files were archived, which indicates a successful run.

As for the continued issue, another thing to try:

./xmlchange DOUT_S_SAVE_INTERIM_RESTART_FILES=true

Then do:
./case.setup --reset
./case.build --clean-all
./case.build
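After that, resubmit and watch the case status (a minimal sketch; xmlquery, case.submit, and the CaseStatus file are all standard CIME):

./xmlquery DOUT_S_SAVE_INTERIM_RESTART_FILES   # sanity check that the setting took effect
./case.submit                                  # resubmit the run
cat CaseStatus                                 # check progress and any RUN FAIL messages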

If that doesn't work, please try the newest version of CAM (cam6_3_089) and post the error if you're still getting one.
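If you go that route, a sketch of checking out a fresh copy of that tag (assuming the usual CAM workflow with manage_externals; adjust the directory names to your setup):

git clone https://github.com/ESCOMP/CAM.git cam6_3_089
cd cam6_3_089
git checkout cam6_3_089
./manage_externals/checkout_externals   # pulls in CIME and the component externals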

Courtney
 

dharmendraks841

Dharmendra Kumar Singh
Member
Did you get the case in your previous post (/glade/scratch/dksingh/no_aviation_contrail) to run? It looks like the log files were archived which indicates a successful run.
Yes, that case only needed to run for one month, and it produced the following output file:
no_aviation_contrail.cam.h0.2020-01.nc
After that, I wanted to run the remaining 11 months (as another startup run) using the following case:
feb_dec20_no_aviation_contrail
In that case, the run was successful from February 2020 to May 2020 (only 4 months); after that I got the same error posted above.
 

dharmendraks841

Dharmendra Kumar Singh
Member
SAME ISSUE/ERROR during model execution on the newest version, cam6_3_089
Hi there,
I am getting the same issue even after updating to the newest version of the model.



2023-01-12 00:59:49: model execution error
ERROR: Command: 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed with error '' from dir '/glade/scratch/dksingh/06_12_2020_no_aviation_contrail/run'
---------------------------------------------------
2023-01-12 00:59:49: case.run error
ERROR: RUN FAIL: Command 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed
See log file for details: /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/run/cesm.log.8084950.chadmin1.ib0.cheyenne.ucar.edu.230112-005817
'
---------------------------------------------------
2023-01-12 00:59:49: case.run error
ERROR: RUN FAIL: Command 'mpiexec_mpt -p "%g:" -np 720 omplace -tm open64 -vv /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/bld/cesm.exe >> cesm.log.$LID 2>&1 ' failed
See log file for details: /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/run/cesm.log.8084950.chadmin1.ib0.cheyenne.ucar.edu.230112-005817
---------------------------------------------------

(base) dksingh@cheyenne4:/glade/scratch/dksingh/06_12_2020_no_aviation_contrail> grep -i Error /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/run/cesm.log.8084950.chadmin1.ib0.cheyenne.ucar.edu.230112-005817

383:MPT ERROR: Rank 383(g:383) is aborting with error code 1001.
157:MPT ERROR: Rank 157(g:157) is aborting with error code 1001.
687:MPT ERROR: Rank 687(g:687) is aborting with error code 1001.
303:MPT ERROR: Rank 303(g:303) is aborting with error code 1001.
-1:MPT ERROR: MPI_COMM_WORLD rank 512 has terminated without calling MPI_Finalize()
 

dharmendraks841

Dharmendra Kumar Singh
Member
Please also check the following error, related to MPT 2.25:
47:MPT ERROR: Rank 47(g:47) received signal SIGFPE(8).
47:MPT: header=header@entry=0x7ffcf3792c10 "MPT ERROR: Rank 47(g:47) received signal SIGFPE(8).\n\tProcess ID: 66501, Host: r9i3n0, Program: /glade/scratch/dksingh/06_12_2020_no_aviation_contrail/bld/cesm.exe\n\tMPT Version: HPE MPT 2.25 08/14/21 "...) at sig.c:340
-1:MPT ERROR: MPI_COMM_WORLD rank 47 has terminated without calling MPI_Finalize()
(base) dksingh@cheyenne4:/glade/scratch/dksingh/06_12_2020_no_aviation_contrail>
 

dharmendraks841

Dharmendra Kumar Singh
Member
I also found this message while signing in to Cheyenne:
Inactive Modules:
1) mpt/2.25 2) netcdf/4.8.1
 

erik

Erik Kluzek
CSEG and Liaisons
Staff member
The above is telling us that the run is dying due to a floating point error. This means things like a divide by zero, or doing math with values that are too large. Looking on Cheyenne, I see the following traceback:

47:MPT: #5 <signal handler called>
47:MPT: #6 0x00002b145399cba9 in __libm_pow_e7 ()
47:MPT: from /glade/p/cesmdata/cseg/PROGS/esmf/8.4.0/mpt/2.25/intel/19.1.1/lib/libg/Linux.intel.64.mpt.default/libesmf.so
47:MPT: #7 0x0000000008e437e1 in photosynthesismod::plc (x=29.47704822647222, p=63,
47:MPT: level=4, plc_method=0)
47:MPT: at /glade/scratch/dksingh/cam6_03_089/components/clm/src/biogeophys/PhotosynthesisMod.F90:5151
47:MPT: #8 0x0000000008e32eb3 in photosynthesismod::spacf (p=63, c=18, x=..., f=...,
47:MPT: qflx_sun=3.6469951983592559e-09, qflx_sha=7.0240982289169682e-11,
47:MPT: atm2lnd_inst=..., canopystate_inst=..., soilstate_inst=...,
47:MPT: temperature_inst=..., waterfluxbulk_inst=...)
47:MPT: at /glade/scratch/dksingh/cam6_03_089/components/clm/src/biogeophys/PhotosynthesisMod.F90:4919
47:MPT: #9 0x0000000008e2017e in photosynthesismod::calcstress (p=63, c=18, x=...
47:MPT: bsun=1, bsha=1, gb_mol=650821.45792241523, gs_mol_sun=210058.70294447767,

Note that it shows the floating point error and then points to line 5151 of PhotosynthesisMod.F90, which is this line...

plc=2._r8**(-(x/params_inst%psi50(ivt(p),level))**params_inst%ck(ivt(p),level))

which tells me either the parameters psi50 or ck are bad (or the subscript indices are off) -- or that x is a bad value (maybe a NaN or too large).

The reasons for that are numerous, so it will take some work to figure out what is going on.
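One way to narrow it down (just a sketch; DEBUG is a standard CIME setting that typically turns on bounds checking and floating-point trapping, at the cost of a much slower run):

./xmlchange DEBUG=TRUE
./case.build --clean-all
./case.build
./case.submit   # the failure should now stop closer to the first bad value, with a clearer traceback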

Good luck on your debugging...
 

erik

Erik Kluzek
CSEG and Liaisons
Staff member
Oh, and another suggestion: try to replicate this problem in a vanilla case, with a code base that has no modifications (including modifications to CAM) and with as standard a case configuration as possible. You can likely start from a simple case that works, then gradually make it more complex and more like your failing case, and see which change causes it to fail.
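For example, a starting point might look like the following (purely illustrative; the case name, compset, and resolution are placeholders to adapt, and create_newcase lives under cime/scripts in the model checkout):

cd cime/scripts
./create_newcase --case vanilla_test --compset FHIST --res f09_f09_mg17 --project <your project>
cd vanilla_test
./case.setup
./case.build
./case.submit

Once that runs cleanly, add your changes back one at a time (nudging namelist, source modifications, PE layout) until the failure reappears.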
 