Document Outline
- Contents
- HP Technical Support
- Introduction
- Cluster Administration
- Configure Servers
- Add or Modify a Server
- Other Server Configuration Procedures
- Move a Server
- Delete a Server
- Disable a Server
- Enable a Server
- Change the IP Address for a Server
- 1. Stop HP Clustered File System on server S2. (A command sketch for steps 1-3 follows this list.)
- 2. Change the IP address of server S2. We will now identify the server as S2a.
- 3. Start HP Clustered File System on server S2a. The server joins the cluster, which now lists servers S1, S2, S3, and S2a. S2 is down; S1, S2a, and S3 are up.
- 4. Delete server S2 from the cluster. This step will remove references to the server.
- 5. Update virtual hosts and any other cluster entities that used server S2 to now include S2a.
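A rough Command Prompt sketch of steps 1-3, assuming the service display name "HP Clustered File System" (the actual name may differ in your installation) and using a placeholder interface name and addresses:

    rem Step 1: stop the cluster software on S2 (service name is an assumption)
    net stop "HP Clustered File System"
    rem Step 2: assign the new address; interface name, address, mask, and gateway are placeholders
    netsh interface ip set address "Local Area Connection" static 10.1.1.22 255.255.255.0 10.1.1.1 1
    rem Step 3: restart the cluster software; the server rejoins the cluster as S2a
    net start "HP Clustered File System"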
- HP Clustered File System License File
- Migrate Existing Servers to HP Clustered File System
- Configure Servers for DNS Load Balancing
- Configure Network Interfaces
- Configure the SAN
- Configure Dynamic Volumes
- Configure PSFS Filesystems
- Cluster Operations on the Applications Tab
- Configure Virtual Hosts
- Overview
- Add or Modify a Virtual Host
- Configure Applications for Virtual Hosts
- Other Procedures
- Enable or Disable a Virtual Host
- Delete a Virtual Host
- Change the Virtual IP Address for a Virtual Host
- 1. At the Command Prompt, run the following command:
- 2. Edit the mxdump.out file, replacing all occurrences of the old IP address with the new IP address. (A quick check for this step is sketched after the list.)
- 3. At the beginning of the mxdump.out file, add a line such as the following to remove the old IP address from the virtual host configuration:
- 4. Delete any lines in mxdump.out that do not need to change.
- 5. Run the following command to update the cluster configuration with the new IP address:
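The mx commands themselves are elided in this outline; see the body of the manual for them. For step 2, findstr can confirm whether any occurrence of the old address remains after editing (the address shown is a placeholder):

    rem List each line of mxdump.out that still contains the old address
    findstr /n /c:"10.1.1.20" mxdump.out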
- Re-Host Virtual Hosts
- Virtual Hosts and Failover
- Virtual Host Activeness Policy
- 1. If the virtual host is disabled, it is not made active anywhere.
- 2. ClusterPulse considers the list of servers that are both up and enabled and that are configured for the virtual host. The network interface that the virtual host is associated with must also be both up and enabled for hosting. Note the following:
- 3. ClusterPulse narrows the list to those servers without inactive, down, or disabled HP Clustered File System device monitors. If no servers meet these criteria, the virtual host is not made active anywhere.
- 4. If the virtual host is currently active on a server and that virtual host has the NOFAILBACK policy, then this active server is moved to the head of the list of preferred servers to be considered.
- 5. From this list, ClusterPulse tries to find a server with all services up and enabled. If ClusterPulse finds a server meeting these conditions, it will use it, preferring servers earlier in the list of servers configured for the virtual host.
- 6. If there are no servers with completely healthy services, ClusterPulse picks a server that has at least one service up and enabled.
- 7. The selected server will have one interface that was configured for this virtual host. The virtual host will be active on this interface. If ClusterPulse cannot locate a server meeting these conditions, it does not place the virtual host anywhere.
- Customize Service and Device Monitors for Failover
- Specify Failback Behavior of the Virtual Host
- Configure Service Monitors
- Configure Device Monitors
- Overview
- Multi-Active Device Monitors
- DISK Device Monitor
- GATEWAY Device Monitor
- Custom Device Monitor
- Server Assignments
- Failover
- Device Monitors and Failover
- Device Monitor Activeness Policy
- 1. If the device monitor on a specific server is disabled, then the device monitor will not be made active on that server.
- 2. ClusterPulse considers the list of servers that are both up and enabled and that are configured for the device monitor. Note the following:
- 3. If the device monitor is multi-active, it will be active on all servers passing evaluation for steps 1 and 2. If the device monitor is not multi-active, evaluation continues with the following steps.
- 4. From the list of servers that pass evaluation for steps 1 and 2, ClusterPulse tries to find a server with all services up and enabled.
- 5. If there are no servers with completely healthy services, ClusterPulse picks a server that has at least one service up and enabled.
- 6. If ClusterPulse cannot locate a server meeting these conditions, it will make the device monitor active on the first server in the list of servers for this device monitor.
- Add or Modify a Device Monitor
- Advanced Settings for Device Monitors
- Other Configuration Procedures
- Overview
- Configure Notifiers
- Test Your Configuration
- Test SAN Shared Filesystem Operation
- Test SAN Connectivity and Shared Filesystem Operation
- 1. From the HP CFS Management Console, log into one of the cluster servers.
- 2. Import an unused SAN disk into your cluster configuration.
- 3. Create a PSFS filesystem on an unused partition on this disk.
- 4. Log into each of the servers as Administrator and perform some basic I/O tests to the shared filesystem using a tool such as WinZip. Verify that changes to the filesystem made by each server are visible to all other servers in the cluster. (A command-line spot check follows this list.)
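A minimal command-line spot check for step 4, assuming the PSFS filesystem is mounted at the hypothetical drive letter P: on every server:

    rem On one server: write a small test file to the shared filesystem
    echo written by S1 > P:\cfs-io-test.txt
    rem On each other server: confirm the same contents are visible
    type P:\cfs-io-test.txt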
- Test Filesystem Failure Recovery
- 1. Configure the cluster with a shared filesystem as described in the previous procedure.
- 2. Pull the LAN network connection(s) from one of the servers. Verify that this server is unable to access the shared filesystem. Verify that the other servers in the cluster are still able to access the shared filesystem.
- 3. Reconnect the LAN network to the server and then reboot it. Verify that the server, upon rebooting, is able to mount the shared filesystem. Verify that all servers are able to access the shared filesystem.
- 4. Pull the SAN network connection(s) from one of the servers. Verify that this server is unable to access the shared filesystem. Verify that the other servers in the cluster are still able to access the shared filesystem.
- 5. Reconnect the SAN network to the server and then reboot it. Verify that the server, upon rebooting, is able to mount the shared filesystem. Verify that all servers are able to access the shared filesystem.
- 6. Power off one of the servers. Verify that the other servers in the cluster are still able to access the shared filesystem.
- 7. Restore the power to the server and then reboot it. Verify that this server, upon rebooting, is able to mount the shared filesystem. Verify that all servers are able to access the shared filesystem.
- Test Virtual Host Operation and Failover
- Test Failure and Reintegration of Servers
- 1. From the HP CFS Management Console, log into a backup cluster server.
- 2. Configure the cluster with a single virtual host.
- 3. Validate that all servers are up, that the virtual host is active on the primary server, and that the backup servers are inactive.
- 4. Pull the LAN network connection(s) from the primary server.
- 5. Verify that the Management Console shows that the primary server is down and inactive and that the first backup server is active.
- 6. Reconnect the primary server to the LAN.
- 7. Verify that the primary server re-enters the cluster as the active host and that the first backup server becomes inactive.
- Test Failure and Reintegration of Service Monitors
- 1. From the HP CFS Management Console, log into a backup server.
- 2. Add a service monitor to the virtual host already defined in the system.
- 3. Verify that all servers are up, that the service you are testing is up, and that the virtual host is active on the primary server and inactive on the backup servers.
- 4. Stop the service you are testing on the primary server (for example, for HTTP, bring down the HTTP process).
- 5. Verify that HP Clustered File System detects the service failure. The virtual host should be inactive on the primary server and active on the first backup server.
- 6. Start the service that you are testing on the primary server.
- 7. Verify that HP Clustered File System detects that the service has become active.
- 8. Verify that the virtual host is active on the primary server and inactive on the backup servers.
- Test DNS Load-Balancing and Failover
- Validate Correct Load-Balancing Operation
- Validate Load-Balancing When a Server Is Down
- 1. From the HP CFS Management Console, log into one of the servers in the cluster.
- 2. Pull the LAN network connection(s) from a server that you are not logged onto.
- 3. Verify that HP Clustered File System detects that the server is down and that the backup for this virtual host takes over operation from the primary.
- 4. Ping the address www.acmd.com multiple times. (See the nslookup/ping sketch after this list.)
- 5. Verify that all requests are still being returned correctly.
- 6. Restore the LAN network connections.
- 7. Verify that DNS now serves up both IP addresses again and that the ping is returned correctly by both.
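Steps 4 and 7 can be verified from any client with standard tools; www.acmd.com is the example name used in this procedure:

    rem Show which IP addresses DNS currently returns for the name
    nslookup www.acmd.com
    rem Send echo requests; repeat to watch the returned addresses alternate
    ping www.acmd.com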
- Test LAN Failover of Administrative Cluster Traffic
- 1. Connect your cluster servers with at least two physically separate LANs. Configure the network software to enable the interfaces to these networks on each of the cluster servers.
- 2. From the HP CFS Management Console, log into one of the cluster servers.
- 3. Observe on the Management Console which of your LANs is currently being used for cluster administrative traffic. Physically disconnect one of the server's LAN connections to this network.
- 4. Observe on the Management Console that HP Clustered File System has noticed the network disconnection and has failed over the administrative cluster traffic to one of the other LANs.
- 5. Reconnect the disconnected server to the LAN. Observe on the Management Console that HP Clustered File System has noticed that network connectivity has been restored for this LAN.
- Advanced Topics
- SAN Maintenance
- Server Access to the SAN
- Membership Partitions
- Change a Host Bus Adapter or Driver
- 1. Stop HP Clustered File System (a service-control sketch follows this list):
- 2. Disable the HP Clustered File System service:
- 3. Remove the psd driver from the driver stack:
- 4. Reboot the server. The server will come up without HP Clustered File System and the psd driver.
- 5. Make the necessary change to the HBA or driver. Reboot the system if the HBA installation prompts you to do so.
- 6. (Optional.) Verify that the new HBA is supported:
- 7. Update the registry:
- 8. Enable HP Clustered File System and the psd driver:
- 9. Reboot the server to return the psd driver to the driver stack.
- 10. When the system is rebooted, HP Clustered File System will still be disabled in the Windows Services Control Panel. Re-enable it for Automatic startup if desired.
- 11. Start HP Clustered File System (or wait until the next reboot).
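The commands for the service-control steps are elided in this outline. With the standard Windows sc utility they look roughly like the following; the service name MxS is an assumption and may differ in your installation:

    rem Step 1: stop the cluster service (service name is an assumption)
    sc stop MxS
    rem Step 2: prevent the service from starting on the next reboot
    sc config MxS start= disabled
    rem Step 10: restore Automatic startup once the HBA or driver change is complete
    sc config MxS start= auto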
- Server Cannot Be Located
- Online Insertion of New Storage
- Online Replacement of a FibreChannel Switch
- Replace a Brocade FC Switch
- 1. Stop HP Clustered File System on any servers that are connected only to the original switch.
- 2. If possible, save the configuration information on the original switch. Use the configUpload command.
- 3. Record the IP address of the original switch. Use the ipAddrShow command.
- 4. Back up the zone configuration information, either from the original switch or from another switch in the fabric. Use the cfgShow command and record its output.
- 5. Connect the power and either the Ethernet or the serial console cable to the new switch.
- 6. Log on to the new switch.
- 7. Disable the switch with the switchDisable command.
- 8. Disable any stale active configuration on the new switch with the cfgDisable command.
- 9. Verify that the Brocade licenses are installed by using the licenseShow command. The new switch should have the same kind of license as the rest of the fabric.
- 10. Clear any stale zone configuration on the new switch with the cfgClear command.
- 11. Save the clean configuration with the cfgSave command.
- 12. Configure the new switch. If you saved the original configuration with the configUpload command, use the configDownload command to restore it to the new switch.
- 13. Connect the FC connectors to the new switch. Be sure to plug them into the same ports as on the original switch.
- 14. Set the Ethernet IP address on the new switch to the IP address of the original switch. Use the ipAddrSet command.
- 15. Enable the switch using the switchEnable command. (The command sequence for this procedure is collected after the list.)
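Collected into a single console session, the Fabric OS commands named in steps 2 through 15 run in roughly this order (output omitted):

    configUpload     (step 2: save the original switch configuration)
    ipAddrShow       (step 3: record the original IP address)
    cfgShow          (step 4: record the zone configuration)
    switchDisable    (step 7: disable the new switch)
    cfgDisable       (step 8: disable any stale active configuration)
    licenseShow      (step 9: verify the installed licenses)
    cfgClear         (step 10: clear stale zone configuration)
    cfgSave          (step 11: save the cleaned configuration)
    configDownload   (step 12: restore the saved configuration)
    ipAddrSet        (step 14: set the original IP address)
    switchEnable     (step 15: bring the new switch online)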
- Replace a McDATA FC Switch
- 1. Stop HP Clustered File System on any servers that are connected only to the original switch.
- 2. If possible, save configuration information from the original switch. Some items, such as the zone configuration, are not needed.
- 3. After the original switch has been powered down, power up the new switch and set the IP address to the old switch's address.
- 4. Connect to the EWS and take the switch offline using the Operations > Switch > Online State tab.
- 5. Make the switch operating mode and domain ID acceptable to the original fabric. This can be done either by consulting the existing fabric configuration or by checking the settings on another switch in the fabric.
- 6. Add the private community to the Configure > Management > SNMP tab and ensure that it is write enabled.
- 7. Connect the FC connectors to the new switch. Be sure to plug them into the same ports as on the original switch.
- 8. Bring the switch online.
- Other Cluster Maintenance
- Maintain the HP Clustered File System Event Log
- Cluster Alerts
- Check the Server Configuration
- Disable a Server for Maintenance
- 1. Disable the server. (Choose the server from the Servers window on the HP CFS Management Console, right-click, and select Disable.) This step causes the virtual host to fail over to a backup network interface on another server.
- 2. If you want the virtual host to remain on the backup network interface after the original server is returned to operation, set the virtual host's failback policy to NOFAILBACK before reenabling the server.
- 3. Perform the necessary maintenance on the original server and then reenable it.
- Troubleshoot Cluster Problems
- HP Clustered File System Fails to Start
- The Server Status Is ...