scapps@uci_edu
Member
I noticed the following messages in my output file while running CAM3 at T85 resolution on Bluevista:
print_memusage: size, rss, share, text, datastack= -1 217700 -1 -1 -1
print_memusage iam 4 End stepon. -1 in the next line means unavailable
print_memusage: size, rss, share, text, datastack= -1 216600 -1 -1 -1
print_memusage iam 19 End stepon. -1 in the next line means unavailable
print_memusage: size, rss, share, text, datastack= -1 212936 -1 -1 -1
Is this something I should be concerned about? Here is the bsub script I am using:
## Setting LSF options for batch queue submission.
## 70 8-way nodes available on Bluevista
#BSUB -J cam_eul_som_T85_1pdf
#BSUB -n 32 # total tasks and threads (processors) needed
#BSUB -R "span[ptile=4]" # max number of tasks (MPI) per node
#BSUB -P 36271015 # Project 36271015
#BSUB -B # sends mail at dispatch and initiation times
#BSUB -o /ptmp/scapps/cam_eul_som_T85_1pdf/out.%J # output filename
#BSUB -e /ptmp/scapps/cam_eul_som_T85_1pdf/out.%J # error filename
#BSUB -q regular # queue
#BSUB -W 3:00 # 3 hour wallclock limit (required)
#BSUB -N
##BSUB -x # exclusive use of node (not_shared)
##BSUB -u scapps@uci.edu # email notifications
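## (Hypothetical debugging aid, not part of the original script: LSF
## exports LSB_MCPU_HOSTS as "host1 ncpus1 host2 ncpus2 ...", so echoing
## it into out.%J lets the ptile=4 placement requested above be verified
## after the run.)
if ($?LSB_MCPU_HOSTS) echo "LSF allocation: $LSB_MCPU_HOSTS"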
limit stacksize unlimited
limit datasize unlimited
setenv OMP_NUM_THREADS 4
setenv MP_SHARED_MEMORY yes
setenv XLSMPOPTS "stack=256000000"
setenv AIXTHREAD_SCOPE S
setenv MALLOCMULTIHEAP true
setenv OMP_DYNAMIC false
setenv MP_STDINMODE 0
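## (Optional sanity check, not part of the original script: report the
## limits and the SMP/POE settings above back into the job log so the
## values can be confirmed in out.%J.)
limit stacksize
limit datasize
env | egrep '^(OMP_|MP_|XLSMP|AIXTHREAD|MALLOC)'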
## POE Environment. Set these for interactive jobs. They're ignored by batch jobs submitted through LSF.
## MP_NODES is the number of nodes. The number chosen should be a power of 2, up to a max of 16 for T42.
setenv MP_NODES 2
setenv MP_TASKS_PER_NODE 2
setenv MP_EUILIB us
setenv MP_RMPOOL 1
# TH: bug fix suggested by Brian Eaton 1/24/03
unsetenv MP_PROCS
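For reference, the rss column (217700 here, roughly 213 MB if the units are KB) is the only field reported on this platform; the -1 entries just mark metrics the platform cannot provide, as the message itself says. A one-liner like the following pulls the largest per-task rss out of the log (a sketch assuming the format shown above, where rss is the eighth whitespace-separated field; the out.* path is the output pattern from the script and may need adjusting):

grep 'print_memusage: size' /ptmp/scapps/cam_eul_som_T85_1pdf/out.* | awk '{print $8}' | sort -n | tail -1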