SGE_SHARE_MON(1)          General Commands Manual          SGE_SHARE_MON(1)

NAME
       sge_share_mon - show status of Univa Grid Engine share tree

SYNTAX
       sge_share_mon [ -a ] [ -c count ] [ -d delimiter ]
                     [ -f field[,field...] ] [ -h ] [ -i interval ]
                     [ -l delimiter ] [ -m output_mode ] [ -n ]
                     [ -o output_file ] [ -r delimiter ]
                     [ -s string_format ] [ -t ] [ -u ] [ -x ]

DESCRIPTION
       sge_share_mon shows the current status of the Univa Grid Engine share
       tree nodes.  Without any options, sge_share_mon displays a formatted
       view of the share tree every 15 seconds.  The share tree is the
       configuration object for the Univa Grid Engine Share Tree Policy.

OPTIONS
       -a     Display an alternate format containing the usage fields defined
              in the usage_weight_list attribute of the scheduler
              configuration.

       -c count
              The number of times to collect share tree information (default
              is infinite).

       -d delimiter
              The column delimiter to use between fields (default is a tab
              character).

       -f field[,field...]
              The list of fields to print.  This overrides the default list
              of fields described in the OUTPUT FORMATS section.

       -h     Print a header containing the field names.

       -i interval
              The collection interval in seconds (default is 15).

       -l delimiter
              The line delimiter to use between nodes (default is a
              newline).

       -m output_mode
              The output file fopen(3) mode (default is "w").

       -n     Use name=value format for all fields.

       -o output_file
              Write output to the named file instead of standard output.

       -r delimiter
              The delimiter to use between collections of the share tree
              (default is a newline).

       -s string_format
              The format of displayed strings (default is %s).

       -t     Show formatted times.

       -u     Show decayed usage (since timestamp) in nodes.

       -x     Exclude non-leaf nodes.

OUTPUT FORMATS
       For each node in the share tree, one line is printed.  The output
       consists of:

       o  curr_time - the time stamp of the last status collection for this
          node.

       o  usage_time - the time stamp of the last time the usage was
          updated.

       o  node_name - the name of the node.
       o  user_name - the name of the user if this is a user node.

       o  project_name - the name of the project if this is a project node.

       o  shares - the number of shares assigned to this node.

       o  job_count - the number of active jobs associated with this node.

       o  level% - the share percentage of this node amongst its siblings.

       o  total% - the overall share percentage of this node amongst all
          nodes.

       o  long_target_share - the long-term target share percentage that the
          policy is trying to achieve.

       o  short_target_share - the short-term target share percentage that
          the policy is trying to achieve in order to meet the long-term
          target.

       o  actual_share - the actual share percentage that the node is
          receiving, based on usage.

       o  usage - the combined and decayed usage for this node.

       By default, each node status line also contains the following fields:

       o  wallclock - the accumulated and decayed wallclock time for this
          node.

       o  cpu - the accumulated and decayed CPU time for this node.

       o  mem - the accumulated and decayed memory usage for this node.
          This represents the amount of virtual memory used by job processes
          multiplied by the user and system CPU time.  The value is
          expressed in gigabyte seconds.

       o  io - the accumulated and decayed I/O usage for this node.

       o  ltwallclock - the total accumulated wallclock time for this node.

       o  ltcpu - the total accumulated CPU time for this node.

       o  ltmem - the total accumulated memory usage (in gigabyte seconds)
          for this node.

       o  ltio - the total accumulated I/O usage for this node.

       If the -a option is supplied, an alternate format is displayed in
       which the fields listed above, starting with wallclock, are omitted.
       Instead, each node status line contains a field for each usage value
       defined in the usage_weight_list attribute of the scheduler
       configuration.  The usage fields are displayed in the order in which
       they appear in the usage_weight_list.  Some of the supported fields
       are listed below.
       o  memvmm - the accumulated and decayed memory usage for this node.
          This represents the amount of virtual memory used by all processes
          multiplied by the wallclock run-time of the processes.  The value
          is expressed in gigabyte seconds.

       o  memrss - the accumulated and decayed memory usage for this node.
          This represents the resident set size (RSS) used by all processes
          multiplied by the wallclock run-time of the processes.  The value
          is expressed in gigabyte seconds.  The resident set size is the
          amount of physical private memory plus the amount of physical
          shared memory being used by the process.

       o  mempss - the accumulated and decayed memory usage for this node.
          This represents the proportional set size (PSS) used by all
          processes multiplied by the wallclock run-time of the processes.
          The value is expressed in gigabyte seconds.  The proportional set
          size is the amount of physical private memory plus a proportion of
          the shared memory being used by the process.

       o  <consumable> - if a consumable resource is specified in the
          usage_weight_list, the total accumulated and decayed virtual usage
          for jobs associated with this node is displayed.  The amount of
          the consumable resource which has been requested by the job is
          multiplied by the wallclock run-time of the job.  If the
          consumable resource is a slot-based resource, the value is also
          multiplied by the number of slots which are granted to the job.
          The value is expressed in gigabyte seconds.

ENVIRONMENT VARIABLES
       SGE_ROOT       Specifies the location of the Univa Grid Engine
                      standard configuration files.

       SGE_CELL       If set, specifies the default Univa Grid Engine cell.
                      To address a Univa Grid Engine cell, sge_share_mon
                      uses (in the order of precedence):

                      The name of the cell specified in the environment
                      variable SGE_CELL, if it is set.

                      The name of the default cell, i.e. default.

       SGE_DEBUG_LEVEL
                      If set, specifies that debug information should be
                      written to stderr.  In addition, the level of detail
                      in which debug information is generated is defined.
       SGE_QMASTER_PORT
                      If set, specifies the TCP port on which sge_qmaster(8)
                      is expected to listen for communication requests.
                      Most installations will instead use a services map
                      entry for the service "sge_qmaster" to define that
                      port.

FILES
       <sge_root>/<cell>/common/act_qmaster
              Univa Grid Engine master host file

SEE ALSO
       sge_intro(1), qconf(1), qstat(1), qsub(1), queue_conf(5),
       sched_conf(5), share_tree(5), sge_priority(5)

COPYRIGHT
       See sge_intro(1) for a full statement of rights and permissions.

Univa Grid Engine User Commands       UGE 8.5.4            SGE_SHARE_MON(1)
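The -n option, combined with the tab column delimiter, makes node status lines easy to consume from a script.  The sketch below shows one way such a line could be parsed; it is a minimal illustration, not part of the product, and the sample line and its field values are invented for demonstration (they do not come from a real cluster).

```python
# Sketch: split one sge_share_mon "-n" node status line (name=value fields
# separated by the column delimiter) into a dictionary.  Field names follow
# the OUTPUT FORMATS section; the sample values below are illustrative only.

def parse_share_mon_line(line, delimiter="\t"):
    """Return a dict mapping field names to their (string) values."""
    fields = {}
    for token in line.rstrip("\n").split(delimiter):
        # Each token has the form "name=value"; partition keeps any "="
        # characters that appear inside the value itself.
        name, _, value = token.partition("=")
        fields[name] = value
    return fields

# Hypothetical node status line for a user leaf node:
sample = "node_name=/P1/user1\tshares=100\tjob_count=2\tusage=3.5"
record = parse_share_mon_line(sample)
print(record["shares"])     # -> 100
```

Numeric fields such as shares or usage remain strings after parsing and would need an explicit int() or float() conversion before arithmetic.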