Using hardware control software
CSM hardware control software provides remote hardware control functions for cluster nodes and devices from a single point of control. CSM allows you to control cluster nodes remotely through access to the cluster management server. From the management server, an administrator runs cluster management commands using the command line, Web-based System Manager graphical user interface (GUI), System Management Interface Tool (SMIT) panels, or the DCEM graphical user interface.
CSM supports hardware control for non-node devices, and provides power control and where applicable, remote console access for a wide range of devices such as hardware control points, external console servers, and remote supervisor adapters. The predefined dynamic device group AllDevices includes all defined devices.
CSM hardware control functions depend on the specific hardware, software, network, and configuration requirements described in this book. The requirements for remote power are separate and distinct from the requirements for remote console. Clusters without the hardware, software, network, or configuration required to use CSM hardware control can still have CSM installed on some or all cluster nodes; however, in such clusters the hardware control commands may be inoperable or provide only limited function.
CSM for AIX 5L supports remote hardware control for pSeries, xSeries, BladeCenter, SP Nodes, p660 nodes, 325 and 326, and OpenPower 720 from an AIX management server. Hardware control commands can be run on the AIX management server to simultaneously control both AIX and Linux nodes in a mixed cluster. See the CSM for AIX 5L and Linux: Administration Guide for a complete description of CSM mixed clusters.
In CSM documentation “p660 nodes” refers to pSeries 660 nodes (which are not HMC-attached), and the M80, H80, 6H0/6H1, and 6M1 RS/6000 servers.
CSM for Linux supports remote hardware control for xSeries, pSeries, BladeCenter, and eServer 325 and 326 servers from a Linux management server.
The following list describes the CSM hardware control commands; see the man pages or the CSM for AIX 5L and Linux: Command and Technical Reference for detailed command usage information.
- Defines a remote console user name for node BMCs.
- Changes a device definition in the CSM database.
- Removes, adds, or rewrites console entries in the Conserver configuration file.
- Sets the SNMP agent configuration information for xSeries and BladeCenter servers.
- Administers the cspd daemon's log file and debug flags.
- Configures SP frame hardware control points for expansion I/O units.
- Displays the configuration information for expansion I/O units, IBM POWER3 SMP High Nodes (F/C 2054), and IBM 375 MHz POWER3 SMP High Nodes (F/C 2058).
- Controls the state of SP Nodes, SP frames, and p660 nodes in the cluster.
- Monitors the state of SP Nodes, SP frames, p660 nodes, and expansion I/O units in the cluster.
- Defines the devices in a CSM cluster.
- Collects information for LAN adapters.
- Manages device group definitions in the CSM database.
- Returns the remote console user names for node BMCs.
- Lists the device definitions in the CSM database.
- Collects node information from one or more hardware control points.
- Collects environmental and Vital Product Data (VPD) information from xSeries and BladeCenter servers. This command is not supported for CSM for Linux on pSeries.
- Collects SNMP agent configuration information from xSeries and BladeCenter servers.
- Initiates a network boot and install of an AIX node over the CSM cluster network.
- Opens a remote console for a node.
- Refreshes the Conserver daemon.
- Collects service processor log information for xSeries and BladeCenter servers.
- Removes device definitions from the CSM database.
- Boots, resets, powers on and off, and queries nodes, devices and CECs.
- Stores the user ID and password required for internal programs to access remote hardware.
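As an illustration, a few of these commands might be run from the management server as follows. The host names here are hypothetical, and exact flags and syntax should be verified against the CSM for AIX 5L and Linux: Command and Technical Reference:

```shell
# Query the power state of all nodes in the cluster
rpower -a query

# Reboot two specific nodes (hypothetical host names)
rpower -n node01,node02 reboot

# Store the user ID and password used by internal programs
# to access an HMC hardware control point (prompts for the password)
systemid hmc01.cluster.com hscroot
```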
Hardware control attributes
You must define the hardware-related attributes for nodes or non-node devices. In some cases default values are provided. If these defaults are acceptable, you do not need to provide the attribute values when you define the node. Hardware control attributes depend on the kind of hardware you plan to use.
For a list of the hardware control attributes that you define for a node, see Hardware control attributes.
For a list of the hardware control attributes that you define for a device, see Defining non-node devices for the cluster.
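For example, an HMC-attached pSeries node might be defined with hardware control attributes along these lines. The host names and control point names are hypothetical; see the references above for the exact attribute set required by your hardware:

```shell
# Define a node whose power and console are controlled through an HMC
definenode -n node01 \
    PowerMethod=hmc \
    HWControlPoint=hmc01.cluster.com \
    HWControlNodeId=node01 \
    ConsoleMethod=hmc \
    ConsoleServerName=hmc01.cluster.com

# Verify the stored attribute values
lsnode -l node01
```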
Hardware and network requirements
CSM for AIX 5L or Linux hardware control depends on the specific hardware and network requirements described in this book. The management server can be connected to cluster nodes and external networks using various configurations of IBM and non-IBM hardware and software that meet the CSM architecture requirements described in this book. For the specific cluster hardware models required to use CSM 1.4, see Planning for CSM for AIX nodes. See Hardware configuration for model-specific hardware control configuration requirements.
Virtual LANs (VLANs)
A VLAN (Virtual Local Area Network) is a division of a local area network by software rather than by physical arrangement of cables. Dividing a LAN into subgroups can simplify and speed up communications within a workgroup. Switching a user from one VLAN to another using software is also more efficient than rewiring the hardware.
IBM suggests creating one or more VLANs for the CSM management server, managed devices, and hardware control points, and one or more separate VLANs for the CSM management server and cluster nodes. Although cluster hardware control points and nodes can be on the same VLAN, limiting access to the management VLAN reduces the security exposure of IP traffic on the management VLAN and of access to hardware control points.
These are VLANs as defined by IEEE standards; see http://standards.ieee.org/ for details. Figure 1 shows a network partitioned into three virtual LANs (management, cluster, and public), which are defined as follows:
- management VLAN
- Hardware control commands such as rpower and rconsole are run on the management server and communicate to nodes through the management VLAN. The management VLAN connects the management server to the cluster hardware through an Ethernet connection. For optimal security, the management VLAN must be restricted to hardware control points, remote console servers, the management server, and root users. Routing between the management VLAN and cluster or public VLANs could compromise security on the management VLAN. The management VLAN is also referred to as the service VLAN in CSM documentation.
Note: The management VLAN is subject to the RSA restriction of 10/100 Mb/s.
- cluster VLAN
- The cluster VLAN connects nodes to each other and to the management server through an Ethernet connection. Installation and CSM administration tasks such as running dsh are done on the cluster VLAN. Host names and attribute values for nodes on the cluster VLAN are stored in the CSM database.
- public VLAN
- The public VLAN connects the cluster nodes and management server to the site network. Applications are accessed and run on cluster nodes over the public VLAN. The public VLAN can be connected to nodes through a second Ethernet adapter in each node, or by routing to each node through the Ethernet switch.
Hardware control overview and sample configurations
CSM communicates with hardware control points to request node power status, reboot, and power on and off functions. A hardware control point is the specific piece of hardware through which the management server controls node hardware. Hardware control points should be on the management virtual LAN (VLAN) and connected to the hardware that ultimately controls the power functions.
The supported hardware control points are:
- HMC for HMC-managed pSeries nodes
- Frame supervisor for SP Nodes
- CSP serial port for p660 nodes
- RSA for xSeries
- Management module for BladeCenter
- BMC for the eServer 325, 326, xSeries 336, and xSeries 346
- APC MasterSwitch for non-node and other devices that have no other hardware control point (such as an RSA) and therefore require the APC MasterSwitch.
For SP Nodes and p660 nodes, the connection is from a dedicated tty port on the management server to the frame supervisor or CSP serial port through a serial RS-232 line.
For details on defining tty ports for SP Nodes and p660 nodes, see the “Prepare the control workstation” section in Chapter 2 of the PSSP: Installation and Migration Guide, located at http://www.ibm.com/servers/eserver/pseries/library/sp_books/pssp.html.
Remote power software and configuration describes the remote power configurations for your cluster when you are using hardware control.
CSM communicates with console server hardware to open a console window for a node on the CSM management server. Console servers must be on the management VLAN, which connects the management server to the cluster hardware, and connected to node serial ports. (See Virtual LANs (VLANs).) This out-of-band network configuration allows a remote console to be opened from the management server even if the cluster VLAN is inaccessible. For example, if the cluster VLAN is offline, remote console can still access the target node to open a console window.
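For instance, even with the cluster VLAN offline, an administrator could still reach a node's console from the management server over the management VLAN. The node name below is hypothetical, and flag usage should be confirmed against the command reference:

```shell
# Open a remote console window for node01 through its console server
# on the management VLAN; the -t flag opens the console in the
# current terminal instead of a new window
rconsole -t -n node01
```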
For HMC-attached pSeries, the HMC is the remote console server. For SP Nodes and p660 nodes, an independent device does not exist that can serve as a remote console server; console traffic is managed by the frame supervisor or p660 server firmware. xSeries servers can use any of the following console servers:
- MRV IR-8020, IR-8040, LX-4008S, LX-4016S, and LX-4032S
- Avocent CPS1600
- Cyclades AlterPath ACS48
Linux on pSeries clusters use the HMC for remote console; no additional console device is required or supported.
BladeCenter HS20-8678 blade servers that are part of an IBM 1350 Cluster require the 1350 Serial Port Module (SPM) option to support remote console. The SPM must be connected to an MRV IR-8020 or IR-8040 console server. HS20-8678 blade servers that are not part of a 1350 Cluster, or that do not have the SPM option, cannot support remote console. Consoles for these servers may be viewed, one at a time, by accessing the Management Module's Web interface and selecting “Remote Control” from the “Blade Tasks” list.
BladeCenter HS20 (other than HS20-8678), HS40, and JS20 blade servers support remote console through the Ethernet Switch Module, using Serial Over LAN (SOL). Refer to your BladeCenter documentation for information on enabling and configuring SOL. To ensure maximum reliability, verify that the most up-to-date firmware is installed for the following components:
- BladeCenter HS20-8677 chassis: Management Module and Ethernet Switch Module
- HS20 (other than HS20-8678), HS40, and JS20 blade servers: Flash BIOS and Integrated Systems Management Processor (ISMP)
You can view the installed versions of these components in the Management Module's Web interface by selecting “Firmware VPD” from the “Monitors” navigation panel. For information on the latest available versions, see the IBM Servers – Software and device drivers Web page.