# fence_vmware --help
Usage: fence_vmware [options]
Options:
   -o, --action=            Action: status, reboot (default), off or on
   -a, --ip=                IP address or hostname of fencing device
   -l, --username=          Login name
   -p, --password=          Login password or passphrase
   -S, --password-script=
fence_vmware help:
man fence_vmware

FENCE_AGENT(8)                                                    FENCE_AGENT(8)

NAME
       fence_vmware - Fence agent for VMWare

DESCRIPTION
       fence_vmware is an I/O Fencing agent which can be used with the VMware
       ESX, VMware ESXi or VMware Server to fence virtual machines.

       Before you can use this agent, it must be installed VI Perl Toolkit or
       vmrun command on every node you want to make fencing.

       VI Perl Toolkit is preferred for VMware ESX/ESXi and Virtual Center.
       Vmrun command is only solution for VMware Server 1/2 (this command will
       works against ESX/ESXi 3.5 up2 and VC up2 too, but not cluster aware!)
       and is available as part of VMware VIX API SDK package. VI Perl and VIX
       API SDK are both available from VMware web pages (not int RHEL
       repository!).

       You can specify type of VMware you are connecting to with -d switch (or
       vmware_type for stdin). Possible values are esx, server2 and server1.
       Default value is esx, which will use VI Perl. With server1 and server2,
       vmrun command is used.

       After you have successfully installed VI Perl Toolkit or VIX API, you
       should be able to run fence_vmware_helper (part of this agent) or vmrun
       command. This agent supports only vmrun from version 2.0.0 (VIX API
       1.6.0).

       fence_vmware accepts options on the command line as well as from stdin.
       Fenced sends parameters through stdin when it execs the agent.
       fence_vmware can be run by itself with command line options. This is
       useful for testing and for turning outlets on or off from scripts.

       Vendor URL: http://www.vmware.com

PARAMETERS
       -o, --action=            Fencing Action (Default Value: reboot)
       -a, --ip=                IP Address or Hostname. This parameter is
                                always required.
       -l, --username=          Login Name. This parameter is always required.
       -p, --password=          Login password or passphrase
       -S, --password-script=
# less /usr/sbin/fence_vmware

# Default type of vmware
VMWARE_DEFAULT_TYPE="esx"

# Check vmware type, set vmware_internal_type to one of VMWARE_TYPE_ value and
# options["-e"] to path (if not specified)
def vmware_check_vmware_type(options):
    global vmware_internal_type

    options["-d"]=options["-d"].lower()

    if (options["-d"]=="esx"):
        vmware_internal_type=VMWARE_TYPE_ESX
        if (not options.has_key("-e")):
            options["-e"]=VMHELPER_COMMAND
    elif (options["-d"]=="server2"):
        vmware_internal_type=VMWARE_TYPE_SERVER2
        if (not options.has_key("-e")):
            options["-e"]=VMRUN_COMMAND
    elif (options["-d"]=="server1"):
        vmware_internal_type=VMWARE_TYPE_SERVER1
        if (not options.has_key("-e")):
            options["-e"]=VMRUN_COMMAND
    else:
        fail_usage("vmware_type can be esx,server2 or server1!")
Last updated 04-May-2010
We have two agents for fencing VMware virtual machines.

The newer one, fence_vmware, is a union of two older agents, fence_vmware_vix and fence_vmware_vi.
VI (in the following text, VI API means not only the original VI Perl API, whose last version is 1.6, but also the VMware vSphere SDK for Perl) is the VMware API for controlling their main business-class products (ESX/VC). This API is fully cluster aware (VMware cluster), so the agent can fence guest machines that physically run on an ESX host but are managed by VC, and it keeps working without any reconfiguration when a guest is migrated to another ESX host.
VIX is a newer API that works with VMware's "low-end" products (Server 2.x, 1.x), with some support for ESX/ESXi 3.5 update 2 and VC 2.5 update 2. This API is NOT cluster aware and is recommended only for Server 2.x and 1.x. However, if you use only a single ESX/ESXi host, have no VMware cluster and never use migration, you can use this API as well.
If you are using RHEL 5.5/RHEL 6, just install the fence-agents package and you are ready to use fence_vmware. For distributions with an older fence-agents, you can get this agent from the GIT (RHEL 5.5/STABLE3/master) repository and use it (please make sure to use the current library, fencing.py, as well).
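On RHEL 5.5/RHEL 6 that boils down to a single package install; a minimal sketch, using the package name mentioned above:

yum install fence-agents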
The VI Perl API and/or the VIX API must be installed on every node in the cluster. This is a big difference from the older agent, where you did not need to install anything; on the other hand, the new agent's configuration is a little less painful (and it brings many bonuses).
If you run fence_vmware with -h you will see something like this:
Options:
   -o   Action: status, reboot (default), off or on
   -a   IP address or hostname of fencing device
   -l   Login name
   -p   Login password or passphrase
   -S
Now the parameters one by one, in a little more depth (the format is: short option - XML argument name - description). The most important ones for this agent are: -a - ipaddr - IP address or hostname of the VC/ESX host; -l - login - login name; -p - passwd - login password; -n - port - name of the virtual machine to fence.
Example usage of the agent in CLI mode: you have a VC (named vccenter) with a node node1 that you want to fence, and you will use the Administrator account with the password pass.
fence_vmware -a vccenter -l Administrator -p pass -n 'node1'
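Before wiring the agent into cluster.conf, it can be useful to first check that it can reach the guest at all; the same command with the status action from the help output above:

fence_vmware -a vccenter -l Administrator -p pass -n 'node1' -o status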
If everything works, you can modify your cluster.conf as follows (in this example, you have two nodes, guest1 and guest2):
......
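A minimal sketch of what that fragment looks like for one node, using the same names as the full two-node configuration shown later in this article:

<clusternode name="guest1" nodeid="1" votes="1">
    <fence>
        <method name="1">
            <device name="vmware1"/>
        </method>
    </fence>
</clusternode>
<fencedevices>
    <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware1" passwd="pass" port="guest1"/>
</fencedevices>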
You can test the setup with the fence_node command (passing the node's FQDN).
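With the node names used in this example, running the test from guest1 would look like this (the argument is the cluster node name exactly as it appears in cluster.conf):

fence_node guest2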
One of the biggest problems is that ESX 3.5/ESXi 3.5/VC 2.5 behave very badly when many virtual machines are registered, because getting the list of VMs simply takes too long. This makes fencing in a larger datacenter unusable: with 100+ registered VMs, a single fencing operation can take several minutes. This appears to be fixed in ESX 4.0.0/vCenter 4.0.0 (with 200+ registered VMs, fencing one machine takes ~17 s). If you don't want to upgrade, you can use a separate datacenter for each cluster.
This is the older fence agent, which should work on every ESX server that allows ssh connections and has the vmware-cmd command available. The basic idea of this agent is to connect to the ESX server via ssh and run vmware-cmd there, which can start or shut down a virtual machine.
In ESX 4.0, vmware-cmd changed a little, so the agent no longer works as-is. You can solve this by deleting lines 32 and 33 ('if options.has_key("-A"):' and 'cmd_line+=" -v"'), or by downloading the fixed agent, unpacking it, and replacing the original /sbin/fence_vmware with it.
The biggest problem with this solution is the number of parameters that must be entered.
If you run fence_vmware with -h you will see something like this:
   -o   Action: status, reboot (default), off or on
   -a   IP address or hostname of fencing device
   -l   Login name
   -p   Login password or passphrase
   -S
Now the parameters one by one, in a little more depth (the format is: short option - XML argument name - description).
I'm a big fan of pictures, so here is an example situation:
+----------------------------------------------------------------------------------------+
|  +---------+                                                                            |
|  | guest1  |  ssh to VMware ESX - can be, where guest1 run                              |
|  | RHEL 5  |------------------+                                                         |
|  +---------+                  |                                                         |
|                              \/                                                         |
|  +---------+   +--------SSH (22)----------------------------------+                    |
|  | guest2  |   | ------> run vmware-cmd with params off ----------|--> Kill guest1 VM   |
|  | RHEL 5  |   |                                                  |                     |
|  +---------+   | dom0 - VMware management console                 |                     |
|                | (192.168.1.1) - Has user test with password test |                     |
|                |               - Has vmware-cmd                   |                     |
|                +--------------------------------------------------+                     |
|                                                                                         |
|                                 VMware ESX hypervisor                                   |
+----------------------------------------------------------------------------------------+
As you can see, guest1 connects to the VMware management console (using the hostname/login/password given by -a/-l/-p for ssh), and there vmware-cmd is run (using the hostname/login/password given by -A/-L/-P for VMware).
So why do we have two sets of parameters? Because one set (-a/-l/-p) authenticates the ssh connection to the management console, while the other (-A/-L/-P) is passed to vmware-cmd to authenticate against VMware itself.
The recommended way to use this agent is as follows. Once everything is set up, test fencing from the command line (on one of the guests):
fence_vmware -a 192.168.1.1 -l test -p test -L root -P root -o status -n /vmfs/volumes/48bfcbd1-4624461c-8250-0015c5f3ef0f/Rhel/Rhel.vmx
You should get the status of the virtual machine named Rhel.
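Once the status check works, the same command with a different action actually fences the guest; for example, with the off action from the option list above:

fence_vmware -a 192.168.1.1 -l test -p test -L root -P root -o off -n /vmfs/volumes/48bfcbd1-4624461c-8250-0015c5f3ef0f/Rhel/Rhel.vmx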
If everything works, you can modify your cluster.conf as follows:
......
The VMware "client" machines (the guests) should have VMware Tools installed, so I recommend installing VMware Tools on every cluster machine. This improves guest performance.
For the RHEL cluster suite, the most frustrating piece is the fence device. On a Xen virtualization platform there is at least a "Virtual Machine Fence" available, but on a VMware virtualization platform there is no ready-made fence device. I searched through a lot of material and found essentially no solution; a few people claimed that RHCS does have a usable VMware fence, but they were vague about it, almost as if they were holding something back.

Persistence paid off: after combining information from several sources I finally found the solution: https://fedorahosted.org/cluster/wiki/VMware_FencingConfig . The article was originally on the Red Hat website but has since been deleted; fortunately a copy survives, apparently on a Fedora-related site. I strongly recommend reading that article first.

A brief summary of the article:

1. vCenter itself has fencing capability and can act as a fence device; the same goes for a VMware vSphere (ESX) host.

2. On RHEL 5.4 and earlier the tool is called fence_vmware_ng; on RHEL 5.5 and later it has been renamed fence_vmware, with the same syntax as fence_vmware_ng.
Configuration steps:

Installation and basic configuration of RHCS are omitted here.

The most important RHCS configuration file is /etc/cluster/cluster.conf; all cluster-related configuration lives there, including the fence configuration.

Open /etc/cluster/cluster.conf, locate the <clusternodes>...</clusternodes> and <fencedevices>...</fencedevices> sections, and modify them following this format:
<clusternodes>
    <clusternode name="guest1" nodeid="1" votes="1">
        <fence>
            <method name="1">
                <device name="vmware1"/>
            </method>
        </fence>
    </clusternode>
    <clusternode name="guest2" nodeid="2" votes="1">
        <fence>
            <method name="1">
                <device name="vmware2"/>
            </method>
        </fence>
    </clusternode>
</clusternodes>
<fencedevices>
    <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware1" passwd="pass" port="guest1"/>
    <fencedevice agent="fence_vmware" ipaddr="vccenter" login="Administrator" name="vmware2" passwd="pass" port="guest2"/>
</fencedevices>
Notes:

agent is fixed as fence_vmware (or fence_vmware_ng on older releases); ipaddr is the hostname or IP address of the vCenter; login is the vCenter user name; name is the name you give this fence device, i.e. the device name referenced in the <device> element; passwd is the password for the "login" account; port is the name of the virtual machine as registered in vCenter.
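Each fencedevice entry maps directly onto the command line tested earlier; when fenced fences guest1 through the vmware1 device above, it runs roughly the equivalent of:

fence_vmware -a vccenter -l Administrator -p pass -n guest1 -o reboot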
Next, install the vSphere SDK for Perl (http://www.vmware.com/support/developer/viperltoolkit/) in the operating system. Choose the version that matches your virtualization host, unpack it, and run the bundled vmware-install.pl installer.
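A rough sketch of that install on one node, assuming a typical VMware SDK tarball (the archive and directory names below vary with the exact version you download):

tar xzf VMware-vSphere-SDK-for-Perl-*.tar.gz    # archive name depends on the downloaded version
cd vmware-vsphere-cli-distrib                   # extracted directory name also varies by version
./vmware-install.pl                             # follow the prompts and accept the defaults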
Configure permissions: finally, grant the appropriate permissions to the account in vCenter (if you do not mind using the Administrator account, you can skip this step). Select the virtual machine in question and add a permission for that user.

Grant the user the privileges the fence agent actually needs: given what fencing does, that means power on, power off and reset on the virtual machine; if fine-grained permissions feel like too much trouble, simply grant full privileges on that VM.