
Run CCSM4 from the CESM1.0.5 code base

rfeng@umich_edu

New Member
Hi there,

Has anyone run CCSM4 from the CESM1.0.5 code base? I tried to compile the code with the CAM physics set to CAM4, but I am not sure if there are other places I need to change. Any help will be greatly appreciated.

Thank you!
Ran
 

jedwards

CSEG and Liaisons
Staff member
CAM4/CCSM4 is the default configuration of cesm1_0_5; you shouldn't need to do anything special.
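For reference, setting up a case with that default configuration would look roughly like this under the standard CESM1.0.5 scripts workflow (a sketch; the case name, the f19_g16 resolution, and the machine are placeholder assumptions, and B_1850 is the compset used later in this thread):

cd cesm1_0_5/scripts
./create_newcase -case b1850_test -compset B_1850 -res f19_g16 -mach yellowstone
cd b1850_test
./configure -case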
 

rfeng@umich_edu

New Member
Hi,

I used the default B_1850 compset from cesm1_0_5 and hope to branch a CCSM4 run from the restart files in mass storage at /CCSM/csm/b40.1850.track1.2deg.003/rest/1001-01-01-00000. I had this run branched with the CCSM4 code on bluefire and it ran for more than 100 years without any problem; however, with the cesm1_0_5 code base on yellowstone it crashed in the third month. Here is the error message:

236:(shr_sys_abort) ERROR: ice: Vertical thermo error
ERROR: 0031-250  task 236: Segmentation fault
121:ERROR 1 from file /project/sprelbarlx2/build/rbarlx2s009a/src/ppe/lapi/Sam.cpp line 1066
156:ERROR 1 from file /project/sprelbarlx2/build/rbarlx2s009a/src/ppe/lapi/Sam.cpp line 1066
157:ERROR 1 from file /project/sprelbarlx2/build/rbarlx2s009a/src/ppe/lapi/Sam.cpp line 1066
155:ERROR 1 from file /project/sprelbarlx2/build/rbarlx2s009a/src/ppe/lapi/Sam.cpp line 1066

I am guessing there might be some updates to the CICE module that break consistency with CCSM4. Can you help me with this?

Thank you!
Ran
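For reference, a branch run from those restart files would typically be configured via env_conf.xml before configure is run (a sketch, assuming the standard CESM1.0.x xmlchange workflow; the refcase and refdate are the ones named in the post above):

xmlchange -file env_conf.xml -id RUN_TYPE -val branch
xmlchange -file env_conf.xml -id RUN_REFCASE -val b40.1850.track1.2deg.003
xmlchange -file env_conf.xml -id RUN_REFDATE -val 1001-01-01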
 

dbailey

CSEG and Liaisons
Staff member
Sounds like a problem with the ice restart file. I will try a case on yellowstone and let you know.

Dave
 

dbailey

CSEG and Liaisons
Staff member
I just saw your message about running 100 years with no problem. The first thing I would try is to back up the run to the beginning of the year (the last restart) and try again. However, change the restarts to monthly in case it blows up in the same place:

xmlchange -file env_run.xml -id REST_OPTION -val nmonths
xmlchange -file env_run.xml -id REST_N -val 1

If it makes it past this point, there was simply a cosmic ray or something like that, and you can reset the restart frequency back to once a year. Otherwise, if it blows up in exactly the same place, then you should turn on some higher-frequency history to see what is going wrong first. Generally a thermodynamic error in CICE is indicative of something going haywire in the atmospheric or oceanic forcing. For higher-frequency coupler history:

xmlchange -file env_run.xml -id AVGHIST_OPTION -val ndays
xmlchange -file env_run.xml -id AVGHIST_N -val 1

Look at all of the x2i fields in the coupler history and you will likely see the problem.

Dave
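To reset the restart frequency back to once a year afterwards, the same xmlchange pattern applies (a sketch; nyears as the yearly option value is an assumption, not stated in the post above):

xmlchange -file env_run.xml -id REST_OPTION -val nyears
xmlchange -file env_run.xml -id REST_N -val 1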
 

rfeng@umich_edu

New Member
Hi Dave,

I actually already tried to re-branch from my 100-year run, and the run crashed after the first month or so. Then I tried to branch from the standard CCSM4 run in mass storage (/CCSM/csm/b40.1850.track1.2deg.003/rest/1001-01-01-00000) just to check whether the problem is due to my modifications or something else. Even when I branch from the standard run (using the default compile settings and everything), it still crashes. Are you able to branch the run? Let me know whether it works for you or not. I will try to see if I can find anything.

Thank you!
Ran
 

dbailey

CSEG and Liaisons
Staff member
OK. So try turning on the high-frequency coupler output and the higher-frequency restarts from your 100-year restart files. I was able to do a 'hybrid' start from the year 1001 restart files just fine with cesm1_0_5. I don't think you'll be able to do a 'branch' run from these.

Dave
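A hybrid start from those restart files would be configured along these lines before configure is run (a sketch, assuming the standard CESM1.0.x env_conf.xml variables; the start date for the new run is a placeholder):

xmlchange -file env_conf.xml -id RUN_TYPE -val hybrid
xmlchange -file env_conf.xml -id RUN_REFCASE -val b40.1850.track1.2deg.003
xmlchange -file env_conf.xml -id RUN_REFDATE -val 1001-01-01
xmlchange -file env_conf.xml -id RUN_STARTDATE -val 1001-01-01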
 

bates

Member
Was this ever resolved, and if so, what was the problem/solution? We are getting this same error in a single-forcing run. I haven't started debugging this yet, so I don't have details, but if there was a resolution here, I'll try that first.

Thanks,

Susan
 