
cesm1_2_0 Installation

Hello CESM users,

I have been trying to install cesm1_2_0 on a cluster and run it serially. The command I ran is:

./create_newcase -case mycase1 -res f19_g16 -compset B1850CN -mach GNU/Linux
-------------------------------------------------------------------------------
For a list of potential issues in the current tag, please point your web browser to:
https://svn-ccsm-models.cgd.ucar.edu/cesm1/known_problems/
-------------------------------------------------------------------------------
 grid longname is f19_g16
Component set: longname (shortname) (alias)
  1850_CAM4_CLM40%CN_CICE_POP2_RTM_SGLC_SWAV (B_1850_CN) (B1850CN)
Component set Description:
  CAM: CLM: RTM: CICE: POP2: SGLC: SWAV: pre-industrial: cam4 physics: clm4.0 physics: clm4.0 cn: prognostic cice: POP2 default:
Grid:
  a%1.9x2.5_l%1.9x2.5_oi%gx1v6_r%r05_m%gx1v6_g%null_w%null (1.9x2.5_gx1v6)
  ATM_GRID = 1.9x2.5  NX_ATM=144 NY_ATM=96
  LND_GRID = 1.9x2.5  NX_LND=144 NY_LND=96
  ICE_GRID = gx1v6  NX_ICE=320 NY_ICE=384
  OCN_GRID = gx1v6  NX_OCN=320 NY_OCN=384
  ROF_GRID = r05  NX_ROF=720 NY_ROF=360
  GLC_GRID = 1.9x2.5  NX_GLC=144 NY_GLC=96
  WAV_GRID = null  NX_WAV=0 NY_WAV=0
Grid Description:
  null is no grid: 1.9x2.5 is FV 2-deg grid: gx1v6 is Greenland pole v6 1-deg grid: r05 is 1/2 degree river routing grid:
Non-Default Options:
  ATM_NCPL: 48
  BUDGETS: TRUE
  CAM_CONFIG_OPTS: -phys cam4
  CAM_DYCORE: fv
  CAM_NML_USE_CASE: 1850_cam4
  CCSM_BGC: CO2A
  CCSM_CO2_PPMV: 284.7
  CICE_MODE: prognostic
  CLM_CO2_TYPE: diagnostic
  CLM_CONFIG_OPTS: -phys clm4_0 -bgc cn
  CLM_NML_USE_CASE: 1850_control
  COMP_ATM: cam
  COMP_GLC: sglc
  COMP_ICE: cice
  COMP_LND: clm
  COMP_OCN: pop2
  COMP_ROF: rtm
  COMP_WAV: swav
  CPL_ALBAV: false
  CPL_EPBAL: off
  GET_REFCASE: TRUE
  OCN_COUPLING: full
  OCN_NCPL: 1
  OCN_TIGHT_COUPLING: FALSE
  OCN_TRACER_MODULES:  iage
  ROF_NCPL: 8
  RTM_BLDNML_OPTS: -simyr 1850
  RUN_REFCASE: b40.1850.track1.2deg.003
  RUN_REFDATE: 0501-01-01
  RUN_TYPE: hybrid
  SCIENCE_SUPPORT: NO

set_machine: no match for machine GNU/Linux - possible machine values are
 
  MACHINES:  name (description)
    userdefined (User Defined Machine)
    bluewaters (ORNL XE6, os is CNL, 32 pes/node, batch system is PBS)
    brutus (Brutus Linux Cluster ETH (pgi(9.0-1)/intel(10.1.018) with openi(1.4.1)/mvapich2(1.4rc2), 16 pes/node, batch system LSF, added by UB)
    eastwind (PNL IBM Xeon cluster, os is Linux (pgi), batch system is SLURM)
    edison (NERSC XC30, os is CNL, 16 pes/node, batch system is PBS)
    erebus (NCAR IBM , os is Linux, 16 pes/node, batch system is LSF)
    evergreen (UMD cluster)
    frankfurt ("NCAR CGD Linux Cluster 16 pes/node, batch system is PBS")
    gaea (NOAA XE6, os is CNL, 24 pes/node, batch system is PBS)
    hera (LLNL Linux Cluster, Linux (pgi), 16 pes/node, batch system is Moab)
    hopper (NERSC XE6, os is CNL, 24 pes/node, batch system is PBS)
    intrepid (ANL IBM BG/P, os is BGP, 4 pes/node, batch system is cobalt)
    janus (CU Linux Cluster (intel), 2 pes/node, batch system is PBS)
    lynx (NCAR XT5, os is CNL, 12 pes/node, batch system is PBS)
    mira (ANL IBM BG/Q, os is BGP, 16 pes/node, batch system is cobalt)
    olympus (PNL cluster, os is Linux (pgi), batch system is SLURM)
    pleiades-har (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 3.0 GHz Harpertown processors, 8 pes/node and 8 GB of memory, batch system is PBS)
    pleiades-wes (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.93 GHz Westmere processors, 12 pes/node and 24 GB of memory, batch system is PBS)
    pleiades-san (NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.6 GHz Sandy Bridge processors, 16 cores/node and 32 GB of memory, batch system is PBS)
    sierra (LLNL Linux Cluster, Linux (pgi), 12 pes/node, batch system is Moab)
    titan (ORNL XK6, os is CNL, 16 pes/node, batch system is PBS)
    yellowstone (NCAR IBM, os is Linux, 16 pes/node, batch system is LSF)
    stampede (TACC DELL, os is Linux, 16 pes/node, batch system is SLURM)
set_machine: exiting

The required files were not created inside the case folder. Could someone help me fix this? Thank you.
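For context, create_newcase only accepts -mach values from the list printed above (or the generic userdefined entry). A minimal sketch of creating the same case against the placeholder machine, assuming a cesm1_2_x scripts directory on the cluster (the path is an assumption, not from this thread), would be:

    # run from the scripts/ directory of the CESM source tree
    cd ~/cesm1_2_2/scripts
    ./create_newcase -case mycase1 -res f19_g16 -compset B1850CN -mach userdefined

The userdefined machine is only a starting point; the machine details still have to be filled in afterwards, as described in the porting documentation.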
 

jedwards

CSEG and Liaisons
Staff member
First you should use cesm1_2_2. Then you should read the porting section of the user's guide. Then you should read the output of the create_newcase command, which told you:

set_machine: no match for machine GNU/Linux - possible machine values are

  MACHINES:  name (description)
    userdefined (User Defined Machine)
    ... (the same list of supported machines shown in the output above) ...

set_machine: exiting
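As a rough sketch of what the userdefined route involves in CESM 1.2 (the paths, compiler, and MPI settings below are placeholders for a generic Linux cluster, not values confirmed in this thread):

    # create the case against the generic placeholder machine
    ./create_newcase -case mycase1 -res f19_g16 -compset B1850CN -mach userdefined
    cd mycase1

    # fill in the machine description; all values shown here are assumptions
    ./xmlchange -file env_build.xml -id OS       -val Linux
    ./xmlchange -file env_build.xml -id COMPILER -val gnu
    ./xmlchange -file env_build.xml -id MPILIB   -val mpi-serial
    ./xmlchange -file env_build.xml -id EXEROOT  -val /scratch/$USER/mycase1/bld
    ./xmlchange -file env_run.xml   -id RUNDIR       -val /scratch/$USER/mycase1/run
    ./xmlchange -file env_run.xml   -id DIN_LOC_ROOT -val /path/to/inputdata

    # edit Macros and env_mach_specific for the local compilers and libraries,
    # then configure and build the case
    ./cesm_setup
    ./mycase1.build

The Macros and env_mach_specific files created in the case directory are where the compiler, MPI, and NetCDF settings for the cluster go; the xmlchange values above are illustrative placeholders, not tested settings.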
 