
CICE startup picard convergence error

richf

New Member
Hello -

I have a CESM2 case (FHIST compset) in which I've been working on some changes to CAM. As of yesterday afternoon it suddenly crashes while initializing CICE with a 'Picard convergence failed' error; the same case ran fine yesterday morning. The changes I've been making in CAM should have no impact on CICE, and the error persists after undoing the changes, recompiling, and running again.
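For reference, the revert/rebuild cycle I used was roughly the following (run from the case directory; the SourceMods path is just where my CAM mods live, and the flags are from the CIME version on Cheyenne, so details may differ on other setups):

# restore the original copies of the modified CAM files under SourceMods/src.cam,
# then force a full clean rebuild so the reverted code is definitely picked up
./case.build --clean-all
./case.build
./case.submit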

The case is running on Cheyenne with default initial conditions. I've tried most of the suggestions posted by Dave Bailey in the CICE Thermodynamic convergence errors thread, with no luck. One apparent difference between the new case (which fails) and an older case (which ran) is that the values of CICE_BLCKX, CICE_BLCKY, and CICE_MXBLCKS changed between the two. Both have CICE_AUTO_DECOMP = TRUE, so this was not a change made on purpose, and I cannot imagine what I changed in the case that would cause these values to differ from yesterday.
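In case it helps, this is roughly how I compared the decomposition settings between the failing and working cases (xmlquery run from each case directory; the variable list is just the settings mentioned above plus the ice PE count, which the auto decomposition depends on):

# compare the output of these between the two cases
./xmlquery CICE_BLCKX,CICE_BLCKY,CICE_MXBLCKS,CICE_AUTO_DECOMP
./xmlquery NTASKS_ICE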

Any suggestions on how to proceed?
Thanks, Rich

Error is posted below:

-------------------------------------
picard convergence failed!
0 -1.94520421392896 NaN
1 0.000000000000000E+000 0.000000000000000E+000
-110121000.000000
2 0.000000000000000E+000 0.000000000000000E+000
-110121000.000000
3 0.000000000000000E+000 0.000000000000000E+000
-110121000.000000
1 -1.96777096722729 NaN
0.221199672492076 NaN 6.304061675620776E-003
-307921112.518665
2 -1.99478514785355 NaN
1.14518662779639 NaN 3.221067427306291E-002
-300157970.491017
3 -2.00194393690185 NaN
2.02653107296603 NaN 5.680367558998695E-002
-292753101.770898
4 -2.00442903231686 NaN
2.60742314029629 NaN 7.299870339955408E-002
-287872570.075585
5 -2.00545397596257 NaN
2.93354474504646 NaN 8.208848954089151E-002
-285132565.661641
6 -2.00590153294081 NaN
3.09856473808458 NaN 8.668753922529904E-002
-283746102.751097
7 -2.00608879298850 NaN
3.17243369914036 NaN 8.874615845809578E-002
-283125471.439346
8 -2.00615100738449 NaN
3.19765400201525 NaN 8.944900027672407E-002
-282913575.814550
-------------------------------------
ice_therm_mushy solver failure: istep1, my_task, i, j: 1 0
6 6
ice_therm_mushy solver failure
ERROR: ice_therm_mushy solver failure
 

dbailey

CSEG and Liaisons
Staff member
You have "NaNs" here. This is usually not a good sign. I expect there is a land/ocean mismatch where the sea ice is not getting valid forcing. What is your resolution combination? Did you try all the steps here?


I really think steps 1 and 2 will help you figure out the problem.
 

richf

New Member
Hi Dave -

The NaNs were suspicious to me as well (I forgot to mention them in the original message), but I haven't been able to track down their origin yet. The resolution given to create_newcase was f19_f19; this resolution worked fine yesterday morning.

Results from the tests suggested in the thread (the changes I made are sketched below):
1) Setting HIST_OPTION=nsteps, HIST_N=1 had no effect (I assume because the run crashes before completing a timestep).
2) From the logs, it wasn't obvious what to set latpnt and lonpnt to (the "Global i and j..." lines do not appear in the log files), but arbitrary points also did not help.
3-5) Changing nit_max to 500, halving the time step, and making the suggested mod in ice_therm_vertical also did not fix the issue.
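For reference, the changes for 1), 2), and 4) were made roughly like this (run from the case directory; the latpnt/lonpnt values are arbitrary placeholders, and 96 assumes the default coupling frequency of 48 for this compset; 3) and 5) were source mods under SourceMods/src.cice and aren't shown):

# 1) write coupler history every model step
./xmlchange HIST_OPTION=nsteps,HIST_N=1

# 2) diagnostic points in user_nl_cice (arbitrary placeholder points)
cat >> user_nl_cice << 'EOF'
latpnt = 75.0, -75.0
lonpnt = 0.0, 0.0
EOF

# 4) halve the time step by doubling the coupling frequency
#    (check ICE_NCPL as well if it does not follow ATM_NCPL)
./xmlchange ATM_NCPL=96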

Thanks,
Rich
 

richf

New Member
It seems to be working now. A change I had made in CAM was causing incorrect fields to be passed to CICE through the coupler (though I'm not sure why it didn't run earlier when I had reversed these changes!).

Thanks again - Rich
 