Thursday, July 16, 2015

OpenDaylight is One of the Best Controllers for OpenStack — Here’s How to Implement It

http://thenewstack.io/opendaylight-is-one-of-the-best-controllers-for-openstack-heres-how-to-implement-it

This is part two of our posts about implementing controllers with OpenStack. Part one explored SDN’s scale-out effect on OpenStack Neutron.
The integration of OpenStack and OpenDaylight (ODL) is a hot topic, with abundant, detailed information available; however, the majority of these articles focus on explaining usage aspects, rather than how the integration is implemented.
In this article, we’ll focus on the detailed implementation of integrating the different components. There are a number of extremely useful references on this topic, from which the complete setup process can be summarized as follows:
  1. Build and install the appropriate OpenDaylight edition (depending on your implementation choice) on a virtual or physical machine. Ensure that you have the right bundles to implement the Neutron APIs (OVSDB, VTN Manager, LISP, etc.).
  2. Start the OpenDaylight controller with the appropriate configurations.
  3. Deploy OpenStack, preferably with a multi-node configuration – a control node, a network node, and one or more compute nodes.
  4. Perform the necessary OpenStack configurations for interaction with the OpenDaylight controller:
    1. Ensure that the core plugin is set to ML2.
    2. Add OpenDaylight as one of the “mechanism_drivers” in ML2.
    3. Set up the “[ml2_odl]” section in the “ml2_conf.ini” file with the following (a sample configuration is sketched after this list):
      1. username = admin
      2. password = admin
      3. url = http://IP-Address-Of-OpenDayLight:8080/controller/nb/v2/neutron
  5. Start creating and adding VMs from OpenStack, with their corresponding virtual networks.
  6. Verify the resulting topologies from the OpenDaylight GUI.
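Putting steps 4.2 and 4.3 together, the relevant portion of ml2_conf.ini on the control node might look roughly like the sketch below; the exact driver list and the OpenDaylight address will depend on your deployment.
    [ml2]
    mechanism_drivers = openvswitch,opendaylight

    [ml2_odl]
    username = admin
    password = admin
    url = http://IP-Address-Of-OpenDayLight:8080/controller/nb/v2/neutron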
There are also excellent videos which demonstrate the step-by-step process of integrating OpenStack and OpenDaylight.

Integration of OpenStack and OpenDaylight 

The overall process of OpenStack and OpenDaylight integration is summarized in figure one. On the OpenStack front, Neutron uses an ML2 mechanism driver, which acts as a REST proxy and passes all Neutron API calls on to OpenDaylight. OpenDaylight contains a northbound REST service (called the Neutron API service) which caches data from these proxied API calls and makes it available to other services inside OpenDaylight. As the detailed descriptions of the two components below will show, it is these RESTful APIs that bind OpenStack and OpenDaylight together.
Figure One: OpenStack and OpenDaylight Integration

OpenStack

As introduced in SDN Controllers and OpenStack, the modular layer 2 (ML2) plugin for OpenStack Neutron is a framework designed to utilize a variety of layer 2 networking technologies simultaneously. The main idea behind the ML2 plugin is to separate the network type from the mechanism that realizes that network type. Drivers within the ML2 plugin implement extensible sets of network types (local, flat, VLAN, GRE and VXLAN) and the mechanisms to access these networks.
In ML2, the registered mechanism drivers, which are typically vendor-specific, are called twice whenever one of the three core resources — networks, subnets and ports — is created, updated or deleted. The first call, typically referred to as a pre-commit call, is part of the DB transaction, where driver-specific state is maintained. In the case of the OpenDaylight mechanism driver, this pre-commit operation is not necessary. Once the transaction has been committed, the drivers are called again, in what is typically referred to as a post-commit call, at which point they can interact with external devices and controllers.
Figure Two: ML2 Mechanism Driver Architecture
Mechanism drivers are also called as part of the port binding process, to determine whether the associated mechanism can provide connectivity for the network, and if so, the network segment and VIF driver to be used.
Figure two above summarizes OpenStack Neutron’s ML2 OpenDaylight mechanism driver architecture. The OpenDaylight mechanism driver is made up of a single file, “mechanism_odl.py”, and a separate networking OpenDaylight driver. The mechanism driver is divided into two parts (core and extension), based on API handling. The OpenDaylight mechanism driver and OpenDaylight driver classes implement the core APIs, while OpenDaylight’s L3 router plugin class realizes only the extension APIs. Firewall as a service (FWaaS) and load balancing as a service (LBaaS) are currently not supported by the ODL driver.
The OpenDaylight mechanism driver receives the calls to create/update/delete the core resources (network, subnet and port). It forwards these calls to the OpenDaylight driver class by invoking the synchronize function. This function, in turn, invokes the ‘sendjson’ API.
Similarly, the OpenDaylight L3 router plugin class handles the L3 APIs to create/update/delete the router and floating IPs. Hence, the final call for both the core and L3 extension APIs is “sendjson” – which sends a REST request to the OpenDaylight controller and waits for the response.
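To make this concrete, the kind of request that sendjson issues for a network-create call would look roughly as follows; the resource path hangs off the URL configured earlier, and the field values are purely illustrative placeholders.
    POST http://IP-Address-Of-OpenDayLight:8080/controller/nb/v2/neutron/networks
    Content-Type: application/json

    {
        "network": {
            "id": "<network-uuid>",
            "name": "demo-net",
            "admin_state_up": true,
            "shared": false,
            "tenant_id": "<tenant-uuid>"
        }
    }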
In the next section, we’ll see how OpenDaylight handles these REST calls.

OpenDaylight

OpenDaylight exposes the OpenStack Neutron API service – which provides Neutron API handling for multiple implementations. Figure three summarizes the architecture of Neutron API implementation in OpenDaylight. There are mainly three different bundles that constitute the Neutron API service – termed Northbound API, Neutron Southbound provider interface (SPI) and transcriber – and a collection of implementations. In this section, we will take a detailed look at these components.
Figure Three: OpenDaylight Neutron API Implementation Architecture

Northbound API Bundle

This bundle handles the REST requests from the OpenStack plugin and returns the appropriate responses. The contents of the Northbound API bundle can be described as follows:
  1. A single parent class for requests: INeutronRequest.
  2. A collection of JAXB (Java Architecture for XML Binding) annotated request classes for each of the resources: network, subnet, port, firewall, load balancer, etc. These classes are used to represent a specific request and implement the INeutronRequest interface. For example, the network request contains the following attributes:
    class NeutronNetworkRequest implements INeutronRequest {
        @XmlElement(name="network")
        NeutronNetwork singletonNetwork;

        @XmlElement(name="networks")
        List<NeutronNetwork> bulkRequest;

        @XmlElement(name="networks_links")
        List<NeutronPageLink> links;
    }
  3. A collection of Neutron northbound classes* which provide the REST APIs for managing the corresponding resources. For example, the NeutronNetworksNorthbound class includes the following APIs: listNetworks(), showNetwork(), createNetworks(), updateNetwork() and deleteNetwork(). (These map onto REST operations as sketched after the note below.)
The symbol *, unless mentioned otherwise, represents any of the following: network, subnet, port, router, floating IP, security group, security group rules, load balancer, load balancer health, load balancer listener, load balancer pool, etc.
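Roughly speaking, the northbound methods listed above map onto the REST operations exposed under the URL configured earlier (paths shown relative to /controller/nb/v2/neutron):
    listNetworks()     GET     /networks
    showNetwork()      GET     /networks/{netUUID}
    createNetworks()   POST    /networks
    updateNetwork()    PUT     /networks/{netUUID}
    deleteNetwork()    DELETE  /networks/{netUUID}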

Neutron SPI Bundle

This is the most important bundle, since it links the northbound APIs to the appropriate implementations. The Neutron southbound provider interface (SPI) bundle includes the following:
    1. JAXB (Java Architecture for XML Binding) annotated base classes and subclasses, named Neutron*, supporting the APIs documented in the OpenStack Networking API v2.0.
    2. INeutron*CRUD interfaces, which are implemented by the transcriber bundle (a simplified example is sketched after the note below).
    3. INeutron*Aware interfaces, which are implemented by the specific plugins (OpenDove, OVSDB, VTN, etc.).
As in the previous section, the symbol * represents any of the following: network, subnet, port, router, floating IP, security group, security group rules, load balancer, load balancer health, load balancer listener, load balancer pool, etc.
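For a concrete feel of the CRUD side, the following is a simplified sketch of an INeutron*CRUD interface for networks; the method names follow the pattern used in OpenDaylight’s Neutron SPI, but the exact signatures in any given release may differ.
    import java.util.List;

    public interface INeutronNetworkCRUD {
        // True if a network with the given UUID is already present in the cache.
        boolean networkExists(String uuid);

        // Fetch a single cached network, or all of them.
        NeutronNetwork getNetwork(String uuid);
        List<NeutronNetwork> getAllNetworks();

        // Add, update and remove entries in the transcriber's cache.
        boolean addNetwork(NeutronNetwork input);
        boolean updateNetwork(String uuid, NeutronNetwork delta);
        boolean removeNetwork(String uuid);
    }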

Transcriber Bundle

The transcriber module consists of a collection of Neutron*Interface classes, which implement the INeutron*CRUD interfaces and store the Neutron objects in caches. Most of these classes hold a concurrent hash map – for example, private ConcurrentMap<String, NeutronPort> portDB = new ConcurrentHashMap<>(); – and all of the add, remove and get operations work on this map.
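A minimal sketch of this pattern, using ports as the example resource, might look like the following; it mirrors the cache-backed style described above rather than the exact OpenDaylight source, and only a few of the interface’s methods are shown.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class NeutronPortInterface implements INeutronPortCRUD {
        // In-memory cache of Neutron ports, keyed by port UUID.
        private final ConcurrentMap<String, NeutronPort> portDB = new ConcurrentHashMap<>();

        @Override
        public boolean portExists(String uuid) {
            return portDB.containsKey(uuid);
        }

        @Override
        public NeutronPort getPort(String uuid) {
            return portDB.get(uuid);
        }

        @Override
        public List<NeutronPort> getAllPorts() {
            return new ArrayList<>(portDB.values());
        }

        @Override
        public boolean addPort(NeutronPort input) {
            // Reject duplicates; getID() is assumed here to return the port's UUID.
            return portDB.putIfAbsent(input.getID(), input) == null;
        }

        @Override
        public boolean removePort(String uuid) {
            return portDB.remove(uuid) != null;
        }
    }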

Implementation Bundle

An advantage of OpenDaylight is that it includes multiple implementations of Neutron networks, providing several ways to integrate with OpenStack. The majority of the northbound services that aim to provide network virtualization can be used as an implementation of Neutron networks. Hence, OpenDaylight includes the following options for Neutron API implementations:
  1. OVSDB: OpenDaylight has northbound APIs to interact with Neutron, and uses OVSDB for southbound configuration of vSwitches on compute nodes. Thus, OpenDaylight can manage network connectivity and initiate GRE or VXLAN tunnels for compute nodes. OVSDB integration is a bundle for OpenDaylight that implements the Open vSwitch Database management protocol, allowing southbound configuration of vSwitches; it is a critical protocol for network virtualization with Open vSwitch forwarding elements. The OVSDB Neutron bundle in the virtualization edition supports network virtualization using VXLAN and GRE tunnels for OpenStack and CloudStack deployments.
  2. VTN Manager (Virtual Tenant Network): VTN Manager, one of the network virtualization solutions in OpenDaylight, is implemented as an OSGi (Open Services Gateway initiative) bundle in the controller, uses AD-SAL, and manages OpenFlow switches. VTN Manager can also include a separate component that works as a network service provider for OpenStack. VTN Manager’s Neutron component enables OpenStack to work in pure OpenFlow environments, in which all switches in the data plane support OpenFlow. VTN Manager can also make use of OVSDB: the OVSDB-enhanced VTN Neutron bundle can use the OVSDB plugin for operations such as port creation.
  3. Open DOVE: Open DOVE is a “network virtualization” platform with a full control plane implementation for OpenDaylight and a data plane based on Open vSwitch. It aims to provide logically isolated multitenant networks with layer 2 or layer 3 connectivity, and runs on any IP network in a virtualized data center. Open DOVE is based on IBM SDN Virtual Environments and DOVE technology from IBM Research. Open DOVE has not been updated since the Hydrogen release, and its presence in the Lithium release of OpenDaylight is doubtful.
  4. OpenContrail (plugin2oc): This provides the integration/interworking between the OpenDaylight controller and the OpenContrail platform. This combined open source solution will seamlessly enable OpenContrail platform capabilities, such as cloud networking and network functions virtualization (NFV), within the OpenDaylight project.
  5. LISP Flow Mapping: Locator/ID Separation Protocol (LISP) aims to provide a “flexible map-and-encap framework that can be used for overlay network applications, and decouples network control plane from the forwarding plane.” LISP includes two namespaces: endpoint identifiers (EIDs — IP address of the host), and routing locators (RLOCs —IP address of the LISP router to the host). LISP flow mapping provides LISP mapping system services, which store and serve the mapping data (including a variety of routing policies such as traffic engineering and load balancing) to data plane nodes, as well as to OpenDaylight applications.
These implementations typically realize some or all of the following handlers: network, subnet, port, router, floating-IP, firewall, firewall policy, firewall rule, security group, security group rules, load balancer, load balancer health, load balancer listener, load balancer pool and load balancer pool member. These handlers support create, delete and update operations for the corresponding resource. For example, a NeutronNetworkHandler implements the following operations for the network resource:
    // The can* methods return an HTTP status code (200 permits the operation).
    int  canCreateNetwork(NeutronNetwork network)
    void neutronNetworkCreated(NeutronNetwork network)
    int  canUpdateNetwork(NeutronNetwork delta, NeutronNetwork original)
    void neutronNetworkUpdated(NeutronNetwork network)
    int  canDeleteNetwork(NeutronNetwork network)
    void neutronNetworkDeleted(NeutronNetwork network)
The exact mechanism involved in these handlers depends on the southbound plugin they use: OpenFlow (1.0 or 1.3), OVSDB, LISP, REST (OpenContrail), etc. Let us use the example of the neutronNetworkCreated handler in VTN Manager. The steps involved in this handler can be summarized as follows:
  1. Check (again) that the network can be created, by calling canCreateNetwork.
  2. Convert Neutron network’s tenant ID and network ID to tenant ID and bridge ID, respectively.
  3. Check if a tenant already exists, and if not, create a tenant.
  4. Create a bridge and perform VLAN mapping.
For the actual operations, the Neutron component of VTN Manager invokes VTN Manager’s core functions, which in turn use the OpenFlow (1.0) plugin to make the necessary configurations on the OpenFlow switches.
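Putting those steps together, a heavily simplified version of such a handler might look like the sketch below. This is not VTN Manager’s actual code: the abstract helpers stand in for the real conversion routines and core functions, and the NeutronNetwork accessor names are assumed for illustration.
    import java.net.HttpURLConnection;

    public abstract class NeutronNetworkHandler {

        // Hypothetical helpers standing in for VTN Manager's conversion
        // routines and core functions.
        protected abstract int canCreateNetwork(NeutronNetwork network);
        protected abstract String convertNeutronIDToVTNKey(String neutronID);
        protected abstract boolean tenantExists(String tenantID);
        protected abstract void createTenant(String tenantID);
        protected abstract void createBridge(String tenantID, String bridgeID, String name);
        protected abstract void mapVlan(String tenantID, String bridgeID, String segmentationID);

        public void neutronNetworkCreated(NeutronNetwork network) {
            // 1. Re-check that the network can be created (e.g., it does not already exist).
            if (canCreateNetwork(network) != HttpURLConnection.HTTP_OK) {
                return;
            }

            // 2. Convert the Neutron tenant ID and network ID into a VTN tenant ID
            //    and bridge ID, respectively.
            String tenantID = convertNeutronIDToVTNKey(network.getTenantID());
            String bridgeID = convertNeutronIDToVTNKey(network.getID());

            // 3. Create the VTN tenant if it does not already exist.
            if (!tenantExists(tenantID)) {
                createTenant(tenantID);
            }

            // 4. Create the vBridge for this network and set up its VLAN mapping.
            createBridge(tenantID, bridgeID, network.getNetworkName());
            mapVlan(tenantID, bridgeID, network.getProviderSegmentationID());
        }
    }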

Using All Bundles for Network Creation

3
Figure Four: Process for Network Creation in OpenDaylight
Figure four above briefly summarizes the process involved in network creation, and the corresponding calls in each of the above-described bundles of the OpenDaylight Neutron implementation. This figure should help the reader understand the control flow across all the bundles.
In summary, OpenDaylight is one of the best open source controllers for OpenStack integration. Though support for load balancer and firewall services is still missing, the freedom of multiple implementations and the complete support of the core APIs already provide immense advantage and flexibility to the administrator. In the near future, we can expect OpenDaylight to support all of the OpenStack extensions and achieve the perfect integration.
