
Errors in porting CESM2

lisa17

New Member
Hello, I am trying to port CESM2 to the HPC of my university. I get some errors when I run scripts_regression_tests.py --machine login01 --compiler intel. I guess the problem may be caused by the baseline settings in $HOME/.cime/config_machines.xml, config_compilers.xml, or config_batch.xml? I have attached the output of scripts_regression_tests and the .xml files. I have no idea how to solve these problems. Any help is much appreciated. Thanks a lot!
Part of the output is as follows:

FAIL: test_environment_variable_insertion (__main__.H_TestMakeMacros)
Test that <env> elements insert environment variables.
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 2547, in test_environment_variable_insertion
env={"NETCDF": "/path/to/netcdf"})
File "scripts_regression_tests.py", line 2190, in assert_variable_equals
self.parent.assertEqual(self.query_var(var_name, env, var), value)
AssertionError: '-L/share/apps/netcdf-fortran/4.5.2/lib:-L/sh[48 chars]tcdf' != '-L/path/to/netcdf -lnetcdf'
- -L/share/apps/netcdf-fortran/4.5.2/lib:-L/share/apps/netcdf/4.6.2/lib -L/path/to/netcdf -lnetcdf
+ -L/path/to/netcdf -lnetcdf

======================================================================
FAIL: test_cime_case_resubmit_immediate (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1656, in test_cime_case_resubmit_immediate
self.assertTrue(depend_string in cmd[1])
AssertionError: False is not true
======================================================================
FAIL: test_cime_case_test_walltime_mgmt_6 (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1889, in test_cime_case_test_walltime_mgmt_6
self.assertEqual(result, "421:32:11")
AssertionError: '421:32' != '421:32:11'
- 421:32
+ 421:32:11
? +++

======================================================================
FAIL: test_cime_case_test_walltime_mgmt_7 (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1910, in test_cime_case_test_walltime_mgmt_7
self.assertEqual(result, "421:32:11")
AssertionError: '421:32' != '421:32:11'
- 421:32
+ 421:32:11
? +++
======================================================================
FAIL: test_bless_test_results (__main__.Q_TestBlessTestResults)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1358, in test_bless_test_results
self._create_test(compargs)
File "scripts_regression_tests.py", line 954, in _create_test
self._wait_for_tests(test_id, expect_works=(not pre_run_errors and not run_errors))
File "scripts_regression_tests.py", line 963, in _wait_for_tests
from_dir=self._testroot, expected_stat=expected_stat)
File "scripts_regression_tests.py", line 67, in run_cmd_assert_result
test_obj.assertEqual(stat, expected_stat, msg=msg)
AssertionError: 100 != 0 :
COMMAND: /data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/Tools/wait_for_tests *20200321_204940/TestStatus
FROM_DIR: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200321_201235
SHOULD HAVE WORKED, INSTEAD GOT STAT 100
OUTPUT: Test 'TESTRUNDIFF_P1.f19_g16_rx1.A.login01_intel' finished with status 'DIFF'
Path: TESTRUNDIFF_P1.f19_g16_rx1.A.login01_intel.C.20200321_204940/TestStatus
ERRPUT:

======================================================================
FAIL: test_full_system (__main__.Z_FullSystemTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1494, in test_full_system
self._create_test(["--walltime=0:15:00", "cime_developer"], test_id=self._baseline_name)
File "scripts_regression_tests.py", line 954, in _create_test
self._wait_for_tests(test_id, expect_works=(not pre_run_errors and not run_errors))
File "scripts_regression_tests.py", line 963, in _wait_for_tests
from_dir=self._testroot, expected_stat=expected_stat)
File "scripts_regression_tests.py", line 67, in run_cmd_assert_result
test_obj.assertEqual(stat, expected_stat, msg=msg)
AssertionError: 100 != 0 :
COMMAND: /data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/Tools/wait_for_tests *fake_testing_only_20200321_210431/TestStatus
FROM_DIR: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200321_201235
SHOULD HAVE WORKED, INSTEAD GOT STAT 100
OUTPUT: Test 'DAE.f19_f19.A.login01_intel' finished with status 'PASS'
Path: DAE.f19_f19.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'ERI.f09_g16.X.login01_intel' finished with status 'FAIL'
Path: ERI.f09_g16.X.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'ERIO.f09_g16.X.login01_intel' finished with status 'FAIL'
Path: ERIO.f09_g16.X.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'ERP.f45_g37_rx1.A.login01_intel' finished with status 'PASS'
Path: ERP.f45_g37_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'ERR.f45_g37_rx1.A.login01_intel' finished with status 'PASS'
Path: ERR.f45_g37_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'ERS.ne30_g16_rx1.A.login01_intel.drv-y100k' finished with status 'PASS'
Path: ERS.ne30_g16_rx1.A.login01_intel.drv-y100k.fake_testing_only_20200321_210431/TestStatus
Test 'IRT_N2.f19_g16_rx1.A.login01_intel' finished with status 'PASS'
Path: IRT_N2.f19_g16_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'LDSTA.f45_g37_rx1.A.login01_intel' finished with status 'PASS'
Path: LDSTA.f45_g37_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'MCC_P1.f19_g16_rx1.A.login01_intel' finished with status 'PASS'
Path: MCC_P1.f19_g16_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'NCK_Ld3.f45_g37_rx1.A.login01_intel' finished with status 'PASS'
Path: NCK_Ld3.f45_g37_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'PEM_P4.f19_f19.A.login01_intel' finished with status 'PASS'
Path: PEM_P4.f19_f19.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'PET_P4.f19_f19.A.login01_intel' finished with status 'PASS'
Path: PET_P4.f19_f19.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'PRE.f19_f19.ADESP.login01_intel' finished with status 'PASS'
Path: PRE.f19_f19.ADESP.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'PRE.f19_f19.ADESP_TEST.login01_intel' finished with status 'PASS'
Path: PRE.f19_f19.ADESP_TEST.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'SEQ_Ln9.f19_g16_rx1.A.login01_intel' finished with status 'PASS'
Path: SEQ_Ln9.f19_g16_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'SMS.T42_T42.S.login01_intel' finished with status 'PASS'
Path: SMS.T42_T42.S.login01_intel.fake_testing_only_20200321_210431/TestStatus
Test 'SMS_D_Ln9.f19_g16_rx1.A.login01_intel' finished with status 'PASS'
Path: SMS_D_Ln9.f19_g16_rx1.A.login01_intel.fake_testing_only_20200321_210431/TestStatus
ERRPUT:

----------------------------------------------------------------------
Ran 129 tests in 4472.516s
FAILED (failures=6, skipped=11)
pylint version 1.5 or newer not found, pylint tests skipped
Detected failures, leaving directory: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200321_201235
 

Attachments

  • config_batch.txt (2.3 KB)
  • config_machines.txt (6.7 KB)
  • config_compilers.txt (5.6 KB)
  • output.txt (20.1 KB)

jedwards

CSEG and Liaisons
Staff member
I believe that walltime on LSF batch systems is "HH:MM", not "HH:MM:SS"; try changing the format in config_batch.xml.

You will want to change the Machine name and NODENAME_REGEX to something more unique to your system than login01.

It looks like you've tried a lot of things with the netcdf path, and that the C and Fortran libraries are installed separately. Try NetCDF_Fortran_PATH and NetCDF_C_PATH. Also, the module load commands should load these or similar variables into the environment; use module show to figure out what those are, and then set these to match. If you get these correct here, you should not need to set them in config_compilers.xml.
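For reference, a hypothetical sketch of what such entries could look like — the paths, versions, and placement below are placeholders, not values taken from this thread; the real values should come from `module show`:

```xml
<!-- Sketch only (not from this thread): in config_machines.xml, point CIME at
     separately installed netCDF C and Fortran libraries via environment variables. -->
<environment_variables>
  <env name="NETCDF_C_PATH">/path/to/netcdf-c</env>
  <env name="NETCDF_FORTRAN_PATH">/path/to/netcdf-fortran</env>
</environment_variables>

<!-- In config_batch.xml, an LSF walltime format without a seconds field: -->
<walltime_format>%H:%M</walltime_format>
```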
 

lisa17

New Member
Thank you so much for your suggestions!!
About the second suggestion: the cluster name is "hpc". I checked the file /etc/hosts, and there are several machines. Should I change the Machine name to "hpc" or to io01 ~ io04?
"mgt mgt.hpc.xmu
login01 login01.hpc.xmu
login01-bmc login01-bmc.hpc.xmu
io01 io01.hpc.xmu
io02 io02.hpc.xmu
io03 io03.hpc.xmu
io04 io04.hpc.xmu
io01-bmc io01-bmc.hpc.xmu
io02-bmc io02-bmc.hpc.xmu
io03-bmc io03-bmc.hpc.xmu
io04-bmc io04-bmc.hpc.xmu "

I've tried changing the Machine name to "io01", NODENAME_REGEX to "io01.hpc.xmu", and the "HH:MM" to "HH:MM:SS". The output changes, mainly around --walltime; I'll post the output below. I don't know where the problem is. It seems to have something to do with "<directive> -W $JOB_WALLCLOCK_TIME </directive>" in the LSF batch system. Do you have any more suggestions?

Part of the output is as follows:
======================================================================
ERROR: test_cime_case_st_archive_resubmit (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1670, in test_cime_case_st_archive_resubmit
case.case_st_archive(resubmit=True)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/case/case_st_archive.py", line 643, in case_st_archive
self.submit(resubmit=True)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/case/case_submit.py", line 156, in submit
custom_success_msg_functor=verbatim_success_msg)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/utils.py", line 1667, in run_and_log_case_status
rv = func()
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/case/case_submit.py", line 154, in <lambda>
batch_args=batch_args)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/case/case_submit.py", line 99, in _submit
mail_type=mail_type, batch_args=batch_args)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/case/case.py", line 1200, in submit_jobs
batch_args=batch_args, dry_run=dry_run)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/XML/env_batch.py", line 496, in submit_jobs
dry_run=dry_run)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/XML/env_batch.py", line 668, in _submit_single_job
output = run_cmd_no_fail(submitcmd, combine_output=True)
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/utils.py", line 500, in run_cmd_no_fail
expect(False, "Command: '{}' failed with error '{}' from dir '{}'".format(cmd, errput.encode('utf-8'), os.getcwd() if from_dir is None else from_dir))
File "/data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/tests/../lib/CIME/utils.py", line 130, in expect
raise exc_type(msg)
SystemExit: ERROR: Command: 'bsub < .case.run --resubmit' failed with error 'b'0:05:00: Bad RUNLIMIT specification. Job not submitted.'' from dir '/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/st_archive_resubmit_test'
======================================================================
FAIL: test_environment_variable_insertion (__main__.H_TestMakeMacros)
Test that <env> elements insert environment variables.
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 2547, in test_environment_variable_insertion
env={"NETCDF": "/path/to/netcdf"})
File "scripts_regression_tests.py", line 2190, in assert_variable_equals
self.parent.assertEqual(self.query_var(var_name, env, var), value)
AssertionError: '-L/share/apps/netcdf-fortran/4.5.2/lib:-L/sh[48 chars]tcdf' != '-L/path/to/netcdf -lnetcdf'
- -L/share/apps/netcdf-fortran/4.5.2/lib:-L/share/apps/netcdf/4.6.2/lib -L/path/to/netcdf -lnetcdf
+ -L/path/to/netcdf -lnetcdf

======================================================================
FAIL: test_cime_case_resubmit_immediate (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1656, in test_cime_case_resubmit_immediate
self.assertTrue(depend_string in cmd[1])
AssertionError: False is not true
======================================================================
FAIL: test_save_timings (__main__.L_TestSaveTimings)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 2025, in test_save_timings
self.simple_test()
File "scripts_regression_tests.py", line 2002, in simple_test
self._create_test(["SMS_Ln9_P1.f19_g16_rx1.A", timing_flag, "--walltime=0:15:00"], test_id=self._baseline_name)
File "scripts_regression_tests.py", line 951, in _create_test
expected_stat=expected_stat)
File "scripts_regression_tests.py", line 67, in run_cmd_assert_result
test_obj.assertEqual(stat, expected_stat, msg=msg)
AssertionError: 100 != 0 :
COMMAND: /data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/create_test SMS_Ln9_P1.f19_g16_rx1.A --save-timing --walltime=0:15:00 -t fake_testing_only_20200325_195425 --baseline-root /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/baselines --compiler=intel --test-root=/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140 --output-root=/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140
FROM_DIR: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/st_archive_resubmit_test
SHOULD HAVE WORKED, INSTEAD GOT STAT 100
OUTPUT: Testnames: ['SMS_Ln9_P1.f19_g16_rx1.A.io01_intel']
No project info available
Creating test directory /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/SMS_Ln9_P1.f19_g16_rx1.A.io01_intel.fake_testing_only_20200325_195425
RUNNING TESTS:
SMS_Ln9_P1.f19_g16_rx1.A.io01_intel
Starting CREATE_NEWCASE for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 1 procs
Finished CREATE_NEWCASE for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 0.885206 seconds (PASS)
Starting XML for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 1 procs
Finished XML for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 0.209842 seconds (PASS)
Starting SETUP for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 1 procs
Finished SETUP for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 1.025976 seconds (PASS)
Starting SHAREDLIB_BUILD for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 1 procs
Finished SHAREDLIB_BUILD for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 128.386781 seconds (PASS)
Starting MODEL_BUILD for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 4 procs
Finished MODEL_BUILD for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 23.521909 seconds (PASS)
Starting RUN for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel with 1 proc on interactive node and 1 procs on compute nodes
Finished RUN for test SMS_Ln9_P1.f19_g16_rx1.A.io01_intel in 1.764829 seconds (FAIL). [COMPLETED 1 of 1]
Case dir: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/SMS_Ln9_P1.f19_g16_rx1.A.io01_intel.fake_testing_only_20200325_195425
Errors were:
b"submit_jobs case.test\nSubmit job case.test\nERROR: Command: 'bsub < .case.test --skip-preview-namelist' failed with error 'b'0:15:00: Bad RUNLIMIT specification. Job not submitted.'' from dir '/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/SMS_Ln9_P1.f19_g16_rx1.A.io01_intel.fake_testing_only_20200325_195425'"
Due to presence of batch system, create_test will exit before tests are complete.
To force create_test to wait for full completion, use --wait
At test-scheduler close, state is:
FAIL SMS_Ln9_P1.f19_g16_rx1.A.io01_intel (phase RUN)
Case dir: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200325_194140/SMS_Ln9_P1.f19_g16_rx1.A.io01_intel.fake_testing_only_20200325_195425
test-scheduler took 156.80935382843018 seconds
ERRPUT:
 

Attachments

  • output.txt (83.2 KB)

jedwards

CSEG and Liaisons
Staff member
The machine name itself can be anything that you want, as long as it is unique in config_machines.xml.
The regex match for your system looks like it should be <NODENAME_REGEX>.*.hpc.xmu</NODENAME_REGEX>

I think that the issue with LSF is fixed if you change the walltime values in config_batch.xml to have a single : instead of two.
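As a quick sanity check (a hypothetical snippet, not part of CIME), the proposed pattern matches every hostname in the /etc/hosts listing above, since it is tried against the node's hostname to pick the machine entry:

```python
import re

# NODENAME_REGEX proposed above. Note the unescaped dots each match any
# character, so the pattern is deliberately loose.
pattern = re.compile(r".*.hpc.xmu")

# Hostnames taken from the /etc/hosts listing earlier in the thread.
hosts = ["login01.hpc.xmu", "io01.hpc.xmu", "io04-bmc.hpc.xmu"]
for host in hosts:
    assert pattern.match(host) is not None, host
print("all hostnames match")
```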
 

lisa17

New Member
Thank you for your reply! I have changed <NODENAME_REGEX> as you suggested.

I also changed the values in $HOME/.cime/config_batch.xml and my_cesm_sandbox/cime/config/cesm/machines/config_batch.xml to use a single colon:
for example, <queue walltimemax="335:59">normal</queue> instead of <queue walltimemax="335:59:00">normal</queue> in .cime/config_batch.xml, and
<walltime_format>%H:%M</walltime_format> instead of <walltime_format>%H:%M:%S</walltime_format> in cime/config/cesm/machines/config_batch.xml.

But just in case the first failure happens again (AssertionError: '421:32' != '421:32:11'), I also changed def test_cime_case_test_walltime_mgmt_6 and def test_cime_case_test_walltime_mgmt_7 in scripts_regression_tests.py, replacing 421:32:11 (before modification) with 421:32 (after modification). I don't know if this modification is allowed?

def test_cime_case_test_walltime_mgmt_6(self):
###########################################################################
    if not self._hasbatch:
        self.skipTest("Skipping walltime test. Depends on batch system")

    test_name = "ERS_P1.f19_g16_rx1.A"
    self._create_test(["--no-build", test_name], test_id=self._baseline_name,
                      env_changes="unset CIME_GLOBAL_WALLTIME &&")

    casedir = os.path.join(self._testroot,
                           "%s.%s" % (CIME.utils.get_full_test_name(test_name, machine=self._machine, compiler=self._compiler), self._baseline_name))
    self.assertTrue(os.path.isdir(casedir), msg="Missing casedir '%s'" % casedir)

    run_cmd_assert_result(self, "./xmlchange JOB_WALLCLOCK_TIME=421:32 --subgroup=case.test", from_dir=casedir)
    #run_cmd_assert_result(self, "./xmlchange JOB_WALLCLOCK_TIME=421:32:11 --subgroup=case.test", from_dir=casedir)
    run_cmd_assert_result(self, "./case.setup --reset", from_dir=casedir)

    result = run_cmd_assert_result(self, "./xmlquery JOB_WALLCLOCK_TIME --subgroup=case.test --value", from_dir=casedir)
    self.assertEqual(result, "421:32")
    #self.assertEqual(result, "421:32:11")

Then I tested again, but there are still 3 failures; the output is below. I checked TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/TestStatus and TestStatus.log (I've attached the file), and it shows "no hist files found ..."; also, the baselines subfolder of the test output directory, scripts_regression_test.20200327_222628/baselines/, is empty. I'm not sure whether there is a relationship between the two.
What do you think I should do to fix it?

(1) part of output of scripts_regression_tests.py:
======================================================================
FAIL: test_cime_case_resubmit_immediate (__main__.K_TestCimeCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1656, in test_cime_case_resubmit_immediate
self.assertTrue(depend_string in cmd[1])
AssertionError: False is not true
======================================================================
FAIL: test_bless_test_results (__main__.Q_TestBlessTestResults)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1358, in test_bless_test_results
self._create_test(compargs)
File "scripts_regression_tests.py", line 954, in _create_test
self._wait_for_tests(test_id, expect_works=(not pre_run_errors and not run_errors))
File "scripts_regression_tests.py", line 963, in _wait_for_tests
from_dir=self._testroot, expected_stat=expected_stat)
File "scripts_regression_tests.py", line 67, in run_cmd_assert_result
test_obj.assertEqual(stat, expected_stat, msg=msg)
AssertionError: 100 != 0 :
COMMAND: /data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/Tools/wait_for_tests *20200327_224954/TestStatus
FROM_DIR: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628
SHOULD HAVE WORKED, INSTEAD GOT STAT 100
OUTPUT: Test 'TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel' finished with status 'DIFF'
Path: TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/TestStatus
ERRPUT:

======================================================================
FAIL: test_full_system (__main__.Z_FullSystemTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "scripts_regression_tests.py", line 1494, in test_full_system
self._create_test(["--walltime=0:15:00", "cime_developer"], test_id=self._baseline_name)
File "scripts_regression_tests.py", line 954, in _create_test
self._wait_for_tests(test_id, expect_works=(not pre_run_errors and not run_errors))
File "scripts_regression_tests.py", line 963, in _wait_for_tests
from_dir=self._testroot, expected_stat=expected_stat)
File "scripts_regression_tests.py", line 67, in run_cmd_assert_result
test_obj.assertEqual(stat, expected_stat, msg=msg)
AssertionError: 100 != 0 :
COMMAND: /data/gpfs01/shlwang/liss/my_cesm_sandbox/cime/scripts/Tools/wait_for_tests *fake_testing_only_20200327_230039/TestStatus
FROM_DIR: /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628
SHOULD HAVE WORKED, INSTEAD GOT STAT 100
OUTPUT: Test 'DAE.f19_f19.A.io01_intel' finished with status 'PASS'
Path: DAE.f19_f19.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'ERI.f09_g16.X.io01_intel' finished with status 'FAIL'
Path: ERI.f09_g16.X.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'ERIO.f09_g16.X.io01_intel' finished with status 'FAIL'
Path: ERIO.f09_g16.X.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'ERP.f45_g37_rx1.A.io01_intel' finished with status 'PASS'
Path: ERP.f45_g37_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'ERR.f45_g37_rx1.A.io01_intel' finished with status 'PASS'
Path: ERR.f45_g37_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'ERS.ne30_g16_rx1.A.io01_intel.drv-y100k' finished with status 'PASS'
Path: ERS.ne30_g16_rx1.A.io01_intel.drv-y100k.fake_testing_only_20200327_230039/TestStatus
Test 'IRT_N2.f19_g16_rx1.A.io01_intel' finished with status 'PASS'
Path: IRT_N2.f19_g16_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'LDSTA.f45_g37_rx1.A.io01_intel' finished with status 'PASS'
Path: LDSTA.f45_g37_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'MCC_P1.f19_g16_rx1.A.io01_intel' finished with status 'PASS'
Path: MCC_P1.f19_g16_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'NCK_Ld3.f45_g37_rx1.A.io01_intel' finished with status 'PASS'
Path: NCK_Ld3.f45_g37_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'PEM_P4.f19_f19.A.io01_intel' finished with status 'PASS'
Path: PEM_P4.f19_f19.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'PET_P4.f19_f19.A.io01_intel' finished with status 'PASS'
Path: PET_P4.f19_f19.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'PRE.f19_f19.ADESP.io01_intel' finished with status 'PASS'
Path: PRE.f19_f19.ADESP.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'PRE.f19_f19.ADESP_TEST.io01_intel' finished with status 'PASS'
Path: PRE.f19_f19.ADESP_TEST.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'SEQ_Ln9.f19_g16_rx1.A.io01_intel' finished with status 'PASS'
Path: SEQ_Ln9.f19_g16_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'SMS.T42_T42.S.io01_intel' finished with status 'PASS'
Path: SMS.T42_T42.S.io01_intel.fake_testing_only_20200327_230039/TestStatus
Test 'SMS_D_Ln9.f19_g16_rx1.A.io01_intel' finished with status 'PASS'
Path: SMS_D_Ln9.f19_g16_rx1.A.io01_intel.fake_testing_only_20200327_230039/TestStatus
ERRPUT:

(2) part of TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/TestStatus.log:
---------------------------------------------------
2020-03-27 22:40:05: Comparing hists for case 'TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954' dir1='/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/run', suffix1='', dir2='/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/baselines/fake_testing_only_20200327_224927/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel' suffix2=''
comparing model 'datm'
no hist files found for model datm
comparing model 'slnd'
no hist files found for model slnd
comparing model 'dice'
no hist files found for model dice
comparing model 'docn'
no hist files found for model docn
comparing model 'drof'
no hist files found for model drof
comparing model 'sglc'
no hist files found for model sglc
comparing model 'swav'
no hist files found for model swav
comparing model 'cpl'
/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/run/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954.cpl.hi.0.nc did NOT match /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/baselines/fake_testing_only_20200327_224927/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel/cpl.hi.0.nc
cat /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/run/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954.cpl.hi.0.nc.cprnc.out
FAIL
---------------------------------------------------
 

Attachments

  • TestStatus.txt (9.2 KB)

lisa17

New Member
The contents of the file "/data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/run/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954.cpl.hi.0.nc.cprnc.out" are:

file 1 = /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954/run/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel.C.20200327_224954.cpl.hi.0.nc
file 2 = /data/gpfs01/shlwang/testcase/cesm/shlwang/scripts_regression_test.20200327_222628/baselines/fake_testing_only_20200327_224927/TESTRUNDIFF_P1.f19_g16_rx1.A.io01_intel/cpl.hi.0.nc
Failed to open file

============================
I have no idea why the folder baselines/ does not have files.
 