http://www.itworld.com/virtualization/335244/rhev-upgrade-saga-rhel-kvm-and-open-vswitch
A customer of mine recently switched from VMware Server to KVM, but they wanted better networking, which required installing and operating Open vSwitch. Since they are using RHEL 5 (and I am using RHEL 6), we had to do some magic to install openvswitch; for RHEL 6 it is pretty simple. So here are the steps I took. All of these were accomplished with reference to Scott Lowe's posts (and with Scott's help via Twitter).
The key requirement of this customer was that the KVM node and VMs had to share the same networking, which bridge routing would not do without some major configuration changes. They have a management network that is shared by all hosts whether virtual or physical to help in managing all aspects of their environment.
RHEL5 (installation)
# wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz
# tar -xzf openvswitch-1.7.1.tar.gz
Then follow the instructions in INSTALL.RHEL (inside the unpacked openvswitch-1.7.1 directory) to build your openvswitch RPMs, such as:
# rpmbuild -bb rhel/openvswitch.spec
# cp rhel/kmodtool-openvswitch-el5.sh /usr/src/redhat/SOURCES
# rpmbuild -bb rhel/openvswitch-kmod-rhel5.spec
If there is a problem, refer to INSTALL.RHEL for using the -D option to rebuild the module against your specific kernel version. In my install I did NOT include any --target= options, as I was building for the host I was on.
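For reference, here is a sketch of that rebuild against the running kernel; the kversion macro name comes from INSTALL.RHEL, so treat the exact invocation as an assumption and check that file for your release:
# rpmbuild -D "kversion $(uname -r)" -bb rhel/openvswitch-kmod-rhel5.spec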
# rpm -ivh /usr/src/redhat/RPMS/x86_64/{kmod-openvswitch,openvswitch}-1.7.1*rpm
RHEL6 (installation)
# yum install kmod-openvswitch openvswitch
Now that we have installed openvswitch, we should make sure that libvirtd is running. If it is not, then we cannot use KVM, and therefore cannot use OVS.
# service libvirtd status
If libvirtd is not running, use the following to start it immediately and to ensure it starts at boot:
# service libvirtd start
# chkconfig libvirtd on
Under normal circumstances, KVM starts with its own bridge named default, which is actually virbr0. If we are going to use openvswitch, it is best to remove that bridge, so first we need to see what bridges and networks exist. That said, it is also fine to leave this bridge available, as it becomes an internal-only network using non-openvswitch constructs.
# virsh -c qemu:///system net-list --all
 Name                 State      Autostart     Persistent
--------------------------------------------------
 default              active     yes           yes
# ifconfig -a | grep br0
virbr0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
virbr0-nic  Link encap:Ethernet  HWaddr YY:YY:YY:YY:YY:YY
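If you do decide to remove the default bridge, libvirt can tear it down and keep it from returning at boot. A quick sketch, assuming the stock default network (this step is mine, not from the original write-up):
# virsh net-destroy default              # stops the network and removes virbr0
# virsh net-autostart default --disable  # keeps it from starting at boot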
Now let's talk configuration, since this is the main reason for using openvswitch. We want the configuration to include an uplink from the physical devices to the openvswitch, then a link from the openvswitch to the Dom0 OS, and finally a link to each hosted VM. To complicate matters, we need to do this on two distinctly different networks. So how did we proceed?
First we need to configure openvswitch itself, which goes along with Scott's article: we enable the BRCOMPAT setting, which is commented out by default:
# echo "BRCOMPAT=yes" >> /etc/sysconfig/openvswitch
Then start the openvswitch service(s) and configure them to start on reboot as well:
# /sbin/service openvswitch start
# /sbin/chkconfig openvswitch on
Now check that KVM is running and openvswitch is installed properly: first ensure libvirtd is responding, then verify that the openvswitch components are loaded as kernel modules and that its daemons are running.
# virsh -c qemu:///system version
Compiled against library: libvirt 1.0.1
Using library: libvirt 1.0.1
Using API: QEMU 1.0.1
Running hypervisor: QEMU 0.12.1
# lsmod |grep brcom
brcompat 5905 0
openvswitch 92800 1 brcompat
# service openvswitch status
ovsdb-server is running with pid 2271
ovs-vswitchd is running with pid 2280
Now we need to create some openvswitches with some bonding thrown in for redundancy and bandwidth requirements. We also create a ‘named’ port on the openvswitch for our internal network.
# ovs-vsctl add-br ovsbr0
# ovs-vsctl add-bond ovsbr0 bond0 eth0 eth2 lacp=active # only needed for bonding
# ovs-vsctl add-port ovsbr0 mgmt0 -- set interface mgmt0 type=internal
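As a quick sanity check (my addition, not in the original post), ovs-vsctl can list the ports now attached to the bridge; with the commands above it should report something like:
# ovs-vsctl list-ports ovsbr0
bond0
mgmt0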
Before we go any further, we need to bring down the old interfaces, otherwise our changes to the configuration files will force a reboot. Since we are working with the existing Linux bond0 device and mapping that into the openvswitch, we should disable that bond0 device as follows.
Bonded:   # ifdown bond0
Unbonded: # ifdown eth0
However, this is far from complete; we need to modify the ifcfg configurations within /etc/sysconfig/network-scripts to make all the networks come back on reboot. The config scripts look like the following, depending on whether we are using bonding or not:
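The original post does not reproduce the uplink file itself. As a hedged sketch for the bonded case, openvswitch's RHEL network scripts describe an OVSBond device type with BOND_IFACES and OVS_OPTIONS keys, which for this setup would look roughly like:
ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=ovsbr0
BOND_IFACES="eth0 eth2"
OVS_OPTIONS="lacp=active"
HOTPLUG=no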
Then we have to specify the bridge itself as an OVSBridge type.
ifcfg-ovsbr0
DEVICE=ovsbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
HOTPLUG=no
USERCTL=no
Next we have to specify a new device to bring up to put the KVM node itself onto the openvswitch. In this case, we define it as type OVSIntPort and specify that it is part of the OVS_BRIDGE named ovsbr0. We give it the IP address assigned to the machine and the proper netmask.
ifcfg-mgmt0
DEVICE=mgmt0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=ovsbr0
USERCTL=no
BOOTPROTO=none
HOTPLUG=no
IPADDR=A.B.C.D
NETMASK=255.255.255.192
Finally, we set a static route for A.0.0.0/8 traffic, which may take a different path than traffic going to the outside world.
route-mgmt0
A.0.0.0/8 via W.X.Y.Z
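Once mgmt0 is up, the route can be checked with iproute2 (my addition, not from the post):
# ip route show dev mgmt0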
Now the process is repeated for the next bonded network, which means we created two openvswitches. You can either reboot the server to make sure everything comes up properly (provided you have an alternative way into the machine, perhaps an ILO, DRAC, or IPMI mechanism), or you can shut down and restart the network and the openvswitch constructs. I tested this by restarting openvswitch and bringing up the mgmt0 network using normal means. I ran the following command, got the output below, and my openvswitches were created and all was working as expected.
# ovs-vsctl show
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "mgmt0"
            Interface "mgmt0"
                type: internal
        Port "bond0"
            Interface "eth2"
            Interface "eth0"
        Port "vnet7"
            Interface "vnet7"
    Bridge "ovsbr1"
        Port "bond1"
            Interface "eth3"
            Interface "eth1"
        Port "mgmt1"
            Interface "mgmt1"
                type: internal
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
    ovs_version: "1.7.1"
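As an aside, the vnet7 entry above is a guest interface. With the libvirt 1.0.1 shown earlier, a VM attaches to the openvswitch through its domain XML; a sketch, using the bridge name from this setup (the rest is illustrative):
<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>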
Now enable the bridges and run some pings. If you are at a console you can run the following command:
# service network restart
Otherwise, perhaps you want to test one interface at a time. In this case we did:
# ifup bond0   (or use # ifup eth0)
# ifup ovsbr0
# ifup mgmt0
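For example (the gateway here is the W.X.Y.Z placeholder from route-mgmt0; substitute your own addresses):
# ping -c 3 W.X.Y.Z          # the management network gateway
# ping -c 3 openvswitch.org  # the outside world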
The ultimate test, however, is pinging the outside world, and that worked flawlessly.
I would like to thank Scott Lowe for all his help, from his blog post (originally for Ubuntu) to his help on Twitter and Skype, in solving the problem of not only getting openvswitch running but also bonding my Dom0 to the vSwitch, as well as all the DomUs in use.
Next it is time to create some virtual machines and find a graphical management system that works for RHEV with the Open vSwitch.