CORD Fundamentals with OpenStack


Project Description

Introduction – a philosophical change

Cloud computing and cloud data centers dominate the data industry's biggest architectures. Corporations such as Google, Facebook, Apple and Amazon have pioneered and honed these technologies to minimize costs whilst maximizing user performance and network agility. However, the providers of the networks that institutions, businesses and homes use to access these clouds (and all forms of internet access) have traditionally been constrained by the use of proprietary equipment from multiple vendors to deliver these services. Vendor gear used at both the customer premises and the provider's edge data office generally has all of the control, feature and service software residing directly on the devices. This type of gear has always been limited by high capital and operational costs combined with limited scalability and upgradeability. For example, a provider who uses access gear from vendor A may want to introduce a new feature heavily desired by customers that is only available from vendor B. The provider must either wait for vendor A to provide an upgrade, if that is even possible, or replace the access gear entirely. Both options are time consuming, costly and sometimes risky, as network churn usually brings technical and financial consequences.

CORD (Central Office Re-architected as a Data center) is an open source project that takes the advantages of cloud technologies and applies them so that the fundamental access network architecture is virtualized in an agile and economical cloud environment. Traditional windowless telco edge offices are being transformed into the cloud access data centers defined by CORD. Using a cloud environment at the edge office provides the elastic advantages of scalability, modularity and programmability. This elasticity enables providers to expeditiously introduce new revenue streams while at the same time delighting their customers with new technologies that can be added with little risk and considerably less expenditure.

So what is CORD, and how does it replace the existing access network? The rest of this brief endeavors to answer that question and to give the reader a basic working knowledge of CORD.

CORD Hardware

One of CORD's principles is the use of white-box appliance hardware that leverages merchant silicon. Recently, manufacturers have been producing white-boxes, often referred to as "bare-metal" devices, in categories including servers, switches and now edge devices such as OLTs. These types of devices have been labelled hardware commodities on which the cloud can reside, or which the cloud can utilize to perform the work of moving data. Having hardware as a commodity results in a market where a network provider can source the hardware from any white-box vendor. No longer being constrained to a single vendor's gear yields remarkable advantages of economy and efficiency in procurement.

This type of equipment is delivered without any software except for a bootloader. The bootloader allows CORD to install the applicable OS, the hardware abstraction software for the merchant silicon (usually Broadcom ASICs) and the services needed to fulfill the node's role in CORD.

Physical Hardware – CORD POD

A POD in CORD terms is the physical collection of the network appliances which underpins a virtualized delivery platform. The term POD derives from “Point of Delivery” which is exactly what CORD aims for as an edge office architecture. A basic or full POD is defined by CORD to consist of a Top-of-Rack management switch, four fabric switches and three x86 servers.

In addition, a CORD POD will also have white-box access equipment to serve customer premises and/or aggregation needs, providing a physical and logical data path to the CORD cloud and a customer's virtual services. Depending on capacity or service expansion requirements, an operator need only procure additional white-boxes as switching or compute demand dictates.

Data Plane

In CORD, the white-box switch fabric nodes are interconnected in a spine-and-leaf configuration, and the access nodes, x86 servers and metro network (internet) are provisioned with links to the POD leaf switches.
The x86 servers are divided up as follows.

  • Head Node 1 – providing the orchestration, control and provisioning services
  • Compute Node 1 – assignable compute services for virtual services
  • Compute Node 2 – assignable compute services for virtual services
  • Additional commodity servers for both Head and Compute nodes can be added as customers, traffic, and services expand.

The CORD user plane, from a customer device to an upstream network, generally traverses the POD as follows:

  1. Customer to Access Device – bare metal CPE equipment connects over a fiber, RF or Ethernet medium towards a white-box access/aggregation device, e.g. the R-CORD vOLT.
  2. Aggregation Device to Compute Node – from the access or aggregation node, the customer's data flows via the fabric network to the compute nodes, which host the instantiated virtual services that manage the customer's data stream.
  3. Compute Node to Upstream Network – by way of a virtual router instantiation, also existing within a POD compute node, utilizing the leaf-and-spine fabric to provide the data path between compute nodes (if necessary) and the upstream network.

Control and Management

In the control and management plane, the head node hosts the orchestration, control and provisioning software for the POD (XOS, OpenStack, ONOS and MAAS, each described in the sections that follow), while the Top-of-Rack management switch connects the fabric switches, access nodes and servers so that they can be provisioned and managed.

The white-box Switch Facilitator

Commodity gear from multiple vendors requires a facilitator so that a white-box from vendor A can be integrated just as easily as one from vendor B. Besides compatible commodity hardware, an important development from the open source community is a project known as ONIE, the Open Network Install Environment (a subgroup project under the Open Compute Project). ONIE is a scaled-down Linux environment installed in a white-box's firmware. Its purpose is to ease the secure installation of the operating system that the appliance will run. Switch operating systems sourced from vendors, or from open source projects such as ONL (Open Network Linux), can be produced to be compatible with ONIE. This opens the way for large-scale automated OS provisioning. An operator can purchase bare metal commodity switch hardware from their supplier of choice (a CAPEX advantage) and integrate it into a CORD network, then realize OPEX gains from automating a single installation environment platform (MAAS, as will be discussed later).

Understanding cloud functionality through OpenStack

Understanding how OpenStack operates is worthwhile when dealing with the functions of computing clouds, because OpenStack is what enables them here. OpenStack provides the functionality to deliver cloud services, including compute, storage and network services. A tenant (a user or service whose data is isolated from others) needs to access these cloud services, and OpenStack implements this by acting as the orchestrator and framework for the cloud. OpenStack is made up of a collection of open source software projects, some of which CORD uses to "cloudify" the edge office. Most OpenStack projects act "as a Service", and the list below describes some of them.

  • Nova – Compute as a Service: provides a method to create compute services such as virtual machines, bare metal servers and some container services.
  • Swift – Storage as a Service: provides object storage, generally for large static storage needs that do not need to be updated, e.g. web content, back-up images and the like.
  • Neutron – Networking as a Service: provides networking, for example by delivering virtual NICs to compute services created by Nova. Can provide more than flat network topologies; rich networking topologies can be instantiated for each tenant.
  • Glance – Imaging as a Service: tenant image services (VM images etc.). Glance image services include discovering, registering and retrieving virtual machine (VM) images, and Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.
  • Heat – Orchestration as a Service: provisioning and configuration of instances, including software configuration, network configuration, database configuration, block storage, object storage etc., driven by text-format (YAML) models.
  • Sahara – Data Processing as a Service: provisions frameworks for cluster processing applications including Apache Hadoop, Spark and Storm.
  • Trove – Database as a Service: user database instances, user-provisionable and manageable, produced as a single-tenant database within Nova.
  • Keystone – Authentication and Identity as a Service: authentication and identity services.
  • Gnocchi – Time Series Database as a Service: time series database used by the Ceilometer project.
  • Cinder – Block Storage as a Service: guest instance block storage, the standard storage a hard drive would normally provide; better suited to dynamic data.
  • Horizon – Dashboard: web UI to the OpenStack services.
  • Ceilometer – Telemetry as a Service: telemetry collection.
  • Manila – Shared Filesystems as a Service: shared file system service.
  • RabbitMQ – Messaging as a Service: message queue used for clustering.
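
To make the "as a Service" idea concrete, the short sketch below uses the openstacksdk Python client to ask a few of these projects what they currently offer. It is a minimal illustration, assuming a clouds.yaml entry named "cord-pod" with valid credentials; that name is a placeholder, not anything defined by CORD.

    # Minimal sketch using the openstacksdk Python client.
    # "cord-pod" is an assumed clouds.yaml entry, not a CORD-defined name.
    import openstack

    conn = openstack.connect(cloud="cord-pod")   # Keystone authentication happens here

    # Nova: compute flavors available for new instances
    for flavor in conn.compute.flavors():
        print("flavor:", flavor.name)

    # Glance: images that instances can be booted from
    for image in conn.image.images():
        print("image:", image.name)

    # Neutron: virtual networks visible to this tenant
    for network in conn.network.networks():
        print("network:", network.name)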

OpenStack – basic operation

Referring to the figure "Open Stack – Basic Flow Diagram", when a user needs to access a cloud function such as a compute service, the user (or the entity that requires a virtual CPU machine) sends a request to OpenStack's Nova project. Nova, which is able to create a virtual machine, then authenticates the request with OpenStack's Keystone project. If authentication passes, any networking required, such as a simple virtual NIC IP address, is requested from Neutron, OpenStack's networking project. After networking is assigned by Neutron and returned to Nova, a virtual machine can be set up as a tenant through an OpenStack hypervisor residing on a compute node hardware resource. The hypervisor requests an image from a service called Glance (OpenStack's imaging service), which can upload an OS distribution from a stored user image into the new virtual environment. Incidentally, a hypervisor is the software that works with OpenStack to allow the physical hardware, which in this case could be a white-box server, to be shared; it creates and manages the tenant virtual machines on a compute resource.
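
The same flow can be sketched in code. The example below, again using the openstacksdk Python client, follows the sequence just described: authenticate via Keystone, request a network from Neutron, fetch an image reference from Glance and ask Nova to boot the VM. The cloud name, network CIDR, image name and flavor name are all illustrative assumptions.

    # Sketch of the request flow described above (assumed names throughout).
    import openstack

    conn = openstack.connect(cloud="cord-pod")            # Keystone: authenticate

    net = conn.network.create_network(name="demo-net")    # Neutron: tenant network
    conn.network.create_subnet(network_id=net.id,
                               name="demo-subnet",
                               ip_version=4,
                               cidr="192.168.10.0/24")

    image = conn.image.find_image("ubuntu-16.04")          # Glance: stored image
    flavor = conn.compute.find_flavor("m1.small")          # Nova: instance size

    server = conn.compute.create_server(                   # Nova hands off to the
        name="demo-vm",                                    # hypervisor on a compute
        image_id=image.id,                                 # node, which builds the
        flavor_id=flavor.id,                               # tenant VM
        networks=[{"uuid": net.id}])
    conn.compute.wait_for_server(server)
    print("VM ready:", server.name)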

If, in the future, new networking services or functionality need to be added, the network settings can be updated in Neutron and an image can be updated with the newly prescribed services using Glance. The next time a cloud resource request is sent from a user, all of the new functionality will be instantiated and available in the virtual machine tenant without the user needing to add or update software on their own device.

CORD Software – Basic Building Blocks

Similar to the cloud model of OpenStack, CORD uses a cloud architecture to provide the applications that the white-box and bare metal nodes require to appropriately forward customer traffic. Applications can be instantiated and configured in the "CORD cloud" using the OpenStack framework, such that virtualized instances are created in a service tenancy arrangement. The network nodes become disaggregated from their control software: the forwarding services normally computed within the device are now transacted within the compute and head nodes.

In the diagram "Basic R-CORD POD", a Residential CORD block diagram illustrates how this disaggregation takes place through abstraction of the hardware nodes, entirely orchestrated and created from the head node cluster. From the diagram, the CPE's traditional functions are disaggregated to the subscriber vSG (virtual Subscriber Gateway) controllers, each created in an individual container within a virtual machine (VM) hosted by a compute node. Further, the control of data flows within the vOLTs and the leaf fabric switches is disaggregated to the VOLTHA and vRouter control applications respectively, which execute on top of ONOS (CORD's SDN controller). These two applications abstract the PON and the upstream routable networks, and they work in conjunction with ONOS so that it can, through the OpenFlow protocol, set up the traffic flows for the subscriber.

XOS

A CORD service is created through data models input into CORD's XOS (Everything as a Service). XOS is the orchestrator in the CORD architecture, and all of the templates for the virtual systems are found within its interface. Not only are the virtual instantiations abstracted from the bare metal hardware defined there, but so is other configuration for the cloud software projects, including OpenStack, ONOS and MAAS. As an example, below is a screenshot from the XOS service executing in a "CORD in a Box" simulator (otherwise known as CiaB), which creates a CORD POD within a single server. The screenshot shows XOS's configuration of the images it needs to create the OpenStack installation for the POD's head node deployment. It is a simple example, but everything within the basic CORD POD can be found within XOS's various templates, as it is the orchestrator for CORD.

Referring back to the diagram "Basic R-CORD POD", once a service is requested through XOS, OpenStack works to instantiate that service. This includes, for example, the virtual service tenants known as vSG, vCDN and vRouter found on the compute nodes. Each of these tenants is created in a VM, and in CORD's case in "containers" within VMs, on an available compute or head node. For example, when a customer requires a new CPE installed at their location, only a bare metal box is needed. Any service such as DHCP or firewall, which in a traditional system resided directly on the CPE, is now instantiated within the vSG created by the XOS templates and OpenStack on a compute node. The advantage is speed and flexibility: speed in that the services can be swiftly instantiated in the CORD POD at the edge office, and flexibility in being able to add or remove customer services within the edge office virtual tenants as needed. And it is not just the vSG; there are other service applications such as the vCDN (virtualized content delivery network) and the vRouter application on top of ONOS, which provides the necessary gateway services to the internet, core network or metro network upstream of the CORD edge office.

ONOS

With XOS and OpenStack, the CORD head node is able to spawn the virtual machines and containers representing the network nodes. Once these are created, ONOS, the Open Network Operating System (an open source platform), operates as an SDN controller for CORD, using its northbound abstractions and APIs to interact with XOS and OpenStack. Correspondingly, it provides the southbound abstractions and OpenFlow interface to the CORD POD nodes and acts as the overall authority on the control plane for dynamic configuration of data flows. ONOS works in conjunction with OpenStack's Neutron service to create the virtual networks necessary to interconnect the CORD POD, which ultimately creates a service chain for a customer. Finally, ONOS also works with control applications, as mentioned earlier. For example, the vRouter application (a service created by XOS) runs on top of ONOS and learns the routes necessary to forward traffic to upstream networks. As a software router it creates routing rules that are transformed into OpenFlow rules which, through ONOS, configure the leaf switches to correctly forward packets to upstream networks.
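
ONOS's northbound side includes a REST API, which makes it straightforward to inspect what the controller sees. The rough sketch below queries the devices (e.g. the leaf and spine switches) and the flow rules ONOS has programmed into them. The host name, the default port 8181 and the onos/rocks credentials are assumptions that depend on how the ONOS instance in a given POD is deployed.

    # Rough sketch: reading ONOS's northbound REST API with Python requests.
    # Host, port and credentials below are deployment-specific assumptions.
    import requests

    ONOS = "http://onos-cord:8181/onos/v1"
    AUTH = ("onos", "rocks")

    # Devices under ONOS control, e.g. the leaf and spine fabric switches
    for dev in requests.get(f"{ONOS}/devices", auth=AUTH).json()["devices"]:
        print(dev["id"], dev["type"], "available:", dev["available"])

    # Flow rules ONOS has pushed to those devices via OpenFlow
    for flow in requests.get(f"{ONOS}/flows", auth=AUTH).json()["flows"]:
        print(flow["deviceId"], flow["state"], flow["priority"])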

MAAS

Metal as a Service, or MAAS, is an integral part of the CORD head node operations. Its function is to deliver the working software images that the bare metal nodes require to boot up and interact with the CORD OpenStack and ONOS services. In CORD, as discussed earlier, the nodes are all generic commodity boxes, some of which utilize merchant silicon. MAAS works much like a PXE server, serving images to these commodity boxes, which normally arrive with only a small bootloader installed. Below is a screenshot of the MAAS interface once it is deployed on a head node. Depending on the node architecture, MAAS can select from multiple images suitable for the device, and once an image is selected, MAAS can automatically deliver the boot process, operating system and software packages the device needs.
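
MAAS can also be driven programmatically. The rough sketch below assumes the python-libmaas client library (the MAAS web UI and CLI expose the same operations); the URL, port and credentials are placeholders.

    # Rough sketch assuming the python-libmaas client library.
    # URL, credentials and the commented deploy() call are placeholders.
    from maas.client import login

    client = login("http://head-node:5240/MAAS/",
                   username="cord", password="cord")

    # Bare metal nodes that have PXE-booted and enlisted with MAAS
    for machine in client.machines.list():
        print(machine.hostname, machine.architecture, machine.status)

    # Deploying would push an OS image matched to the node's architecture:
    # machine.deploy()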

Containerization – Docker

Earlier, in the OpenStack description, it was noted that virtual machines are used to provide the tenancy services. In CORD, however, this is not always the case. Within the CORD service chain, instantiated service tenants such as the vSG or vOLT (a.k.a. VOLTHA or OSAM-HA) require a method of isolating one tenant from another. Instead of using virtual machines to provide this isolation, a concept known as Linux containers is used; CORD adopts containerization by means of the Docker container technology. Compared with virtual machines, containers use far fewer system resources, and vSGs representing individual customers will need hundreds or thousands of instances within a CORD POD. Containers provide a just-big-enough slice of an operating system, its programs and libraries in which the vSG services can execute while still remaining isolated.
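
As an illustration of how lightweight this per-subscriber tenancy can be, the sketch below uses the Docker SDK for Python to start several isolated containers from one shared base image. The image name, container names and the sleep command are placeholders standing in for real vSG images and processes.

    # Illustrative sketch with the Docker SDK for Python.
    # Image, names and command are placeholders for real vSG workloads.
    import docker

    client = docker.from_env()

    for subscriber_id in range(3):
        client.containers.run(
            "ubuntu:16.04",                      # shared base image
            command="sleep infinity",            # stand-in for vSG processes
            name=f"vsg-subscriber-{subscriber_id}",
            detach=True)                         # each container is isolated

    for container in client.containers.list():
        print(container.name, container.status)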

Other CORD Architectures

This brief has used Residential CORD as an example. At the moment, OpenCORD defines three different CORD architectures, of which R-CORD is one. The other two are Mobile CORD (M-CORD) and Enterprise CORD (E-CORD). Each of these architectures follows the same principle as R-CORD: network services are virtualized (disaggregated) and the same generic hardware platforms are used to house the service instantiations for the CORD POD services. M-CORD is designed to provide disaggregation of both RAN and core components for 4G and upcoming 5G networks; 4G nodes such as the MME, SGW, PGW and PCRF can all be virtualized using M-CORD. E-CORD, on the other hand, uses a minimum of two of the same basic CORD PODs to support enterprise customers, and the two E-CORD PODs can then be connected by either a Layer 2 or Layer 3 VPN. Similar to R-CORD, E-CORD instantiations such as the vCPE create QinQ headers for customer data and add or remove VLAN tags for upstream and downstream data respectively. Some other E-CORD services are the vEE (Ethernet Edge) to aggregate traffic, the vEG (Enterprise Gateway) to provide virtual network functions such as bandwidth management, firewall and diagnostics, and the vRouter to provide upstream connectivity towards the internet. Each of the CORD technologies is a topic unto itself, but all operate using the same principles outlined here, so it is best left to the reader to investigate further.

Summary

Open CORD is part of the Open Networking Foundation, and its stated goal is "to create an open virtualized service delivery platform that provides cloud economies and agility". Customer equipment, aggregation and core office equipment are simplified into commodity bare metal equipment and white-box servers, where the customer's data is serviced by virtualized instantiations abstracted from the actual hardware of the nodes. The virtualized services and the underlying hardware are all configured, controlled and provisioned from the CORD head node using the OpenStack framework, XOS, ONOS and MAAS. The head node cluster ultimately creates the virtual services, arranged as tenants in Docker containers, and those virtualized services then form a chain that services a customer's data as it traverses the CORD network.

By applying cloud technology, CORD can radically change the characteristics of residential, mobile and enterprise edge offices, creating a virtualized edge office on top of an OpenStack cloud framework. Further, the CORD POD, using the principles of SDN, harnesses ONOS to control data flows across the virtualized network, where metering, DHCP, firewalling or any number of other services can be applied.

To investigate the details of CORD further, refer to the Open Networking Foundation umbrella organization and its sub-organizations such as OpenCORD, which also cover software projects including ONOS and XOS. Further information on other software such as MAAS, ONIE and Docker can be found on their respective organizations' webpages.
