srogstad@geo_umass_edu
Member
Hi all,
The very last thing I have left to do before vacating Cheyenne is redoing a timeseries postprocessing run that failed. About two months ago I ran the timeseries postprocessing on an in-progress simulation to check how it looked 20 years in. That ran fine and the data in the files looked good, so I continued the simulation for another 100 years.

I then re-ran the timeseries postprocessing, but it seems to have hit a glitch on Cheyenne: it worked fine for about 15 minutes and was generating files, but then hung for 2 hours doing nothing, so I killed the script. It looks like it genuinely stalled, because the last thing it did was generate a bunch of temp files in the ocean proc directory, and the log file just repeats 'NetCDF: HDF error' many times at the end.

Before it stalled, it was creating timeseries files starting from the time step where it had left off after my initial 20-year check. So I now have the 2100-2120 files that generated correctly a few months back, plus a broken set of files for 2120-2170 and 2170-end. I assume this means that if the script is run more than once, it creates files in 50-year blocks starting from whatever time step it last left off on.
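For reference, this is roughly how I've been checking which of the generated files are actually readable (the path is a placeholder for my case's archive; with the netcdf module loaded, ncdump -h only reads the header and errors out on the corrupt files without touching anything):

```bash
# Placeholder path -- substitute the actual case archive / proc location.
# ncdump -h fails on the truncated/corrupt files, so this should flag the
# broken 2120-2170 and 2170-end sets while leaving the good 2100-2120 alone.
for f in /glade/scratch/$USER/archive/MYCASE/ocn/proc/tseries/*.nc; do
    ncdump -h "$f" > /dev/null 2>&1 || echo "corrupt: $f"
done
```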
Ideally I would like one clean, functional set of files like 2100-2150, 2150-2200, etc. To get that, do I need to delete everything in the proc folders and then run the timeseries script again? And if I ran it again now, would it clean up the temp files itself?
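In case it helps clarify what I'm asking, this is the sort of cleanup I have in mind before re-running. The archive layout, case name, and component list are just my guesses as placeholders, and I haven't run it yet because I don't know whether the script still needs anything under proc:

```bash
# Placeholder paths and component list -- substitute the real case name and
# whichever components actually have processed output.
PROC_ROOT=/glade/scratch/$USER/archive/MYCASE
for comp in ocn atm lnd ice rof; do
    # Wipe the processed timeseries output (leftover temp files included) so
    # the next run regenerates one clean 2100-2150, 2150-2200, ... set.
    rm -rf "$PROC_ROOT/$comp/proc/tseries"
done
```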
Thanks!!