ONAP vFW Blueprint Across Two Regions

By | Blog

This post originally appeared on Aarna Networks. Republished with permission.

In the last blog we talked about how to use a public OpenStack cloud such as VEXXHOST as the NFVI/VIM layer for the ONAP vFW blueprint along with a containerized version of ONAP orchestrated by Kubernetes.

As we discussed, in reality, the traffic source and the vFW VNF are unlikely to be in the same cloud.  In this blog, we will briefly discuss how the vFW blueprint can span two different VEXXHOST tenants. This is not quite the same as two different cloud regions, but it is a pretty close simulation.

The two VNFs will be placed as follows:

  • vFW_PG (packet generator) on VEXXHOST Tenant1

  • vFW_SINC (compound VNF that consists of the vFW VNF and a sink VNF to receive packets) on VEXXHOST Tenant2

With the ONAP infrastructure already taken care of, the next task is to connect ONAP to VEXXHOST. Follow the steps from the “Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP” blog to register both tenants as two regions in ONAP. Next:

  1. Create an account on VEXXHOST with 2 different tenants.

  2. If registering VEXXHOST into A&AI using the ESR UI, keep the password shorter than 20 characters.

  3. On Tenant1, manually create the OAM and unprotected_private networks, with different subnets than on Tenant2.

  4. On Tenant2, create an OAM network using the VEXXHOST cloud Horizon dashboard.

  5. Add security rules to allow ingress ICMP, SSH, and all other required ports, along with IPv6, on both tenants.

  6. Edit the base_vfw.env and base_vpkg.env VNF descriptor files to configure them correctly based on the respective Tenants.

  7. Copy the above parameters into a text editor to use for subsequent A&AI registration, SDN-C preload, and APP-C⇔vFW_PG VNF netconf connection.
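For step 5, the ingress rules can be added with the standard OpenStack CLI instead of Horizon. A minimal sketch, with a helper name of our own; the exact port list is an assumption, so extend it to whatever your VNFs and the APP-C netconf connection actually require:

```shell
# Hypothetical helper: add the ingress rules the vFW demo needs to a
# security group, using the standard OpenStack CLI. Run once per tenant.
open_vfw_ingress() {
  sg="$1"   # security group name or ID, e.g. "default"
  # ICMP (ping between the packet generator and the sink)
  openstack security group rule create --ingress --protocol icmp "$sg"
  # SSH
  openstack security group rule create --ingress --protocol tcp --dst-port 22 "$sg"
  # IPv6 variant of the ICMP rule; add equivalent rules for any other
  # service ports your VNFs expose
  openstack security group rule create --ingress --ethertype IPv6 --protocol ipv6-icmp "$sg"
}
```

Invoke it as `open_vfw_ingress default` with the respective tenant's credentials sourced.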

Now follow the steps from the vFW wiki that involve:

  1. SDC designer role: Create vendor license model

  2. SDC designer/tester role: Onboard and test VNFs (or vendor software product i.e. VSP)

  3. SDC designer role: Design network service

  4. SDC tester role: Test network service

  5. SDC governor role: Approve network service

  6. SDC ops role: Distribute network service

  7. VID: Instantiate network service

  8. VID: Add VNFs to network service

  9. SDN-C preload: Configure runtime parameters (for us design-time & run-time parameters are the same); preload vFW SINC on Tenant2 and vFW PG on Tenant1

  10. VID: Add VFs to network service — this step orchestrates networks and VNFs onto OpenStack

Upon completion of these steps, you should be able to go to the VEXXHOST Horizon GUI and see the VNFs coming up. Give them ~15 minutes to boot and another ~15 minutes to be fully configured. See below screenshots:
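While waiting, you can poll the VNF status from the CLI instead of refreshing Horizon. A small generic helper (the function name is our own), shown with the real command commented out since the server name `vfw_pg` is hypothetical:

```shell
# Generic poller: runs a status command repeatedly until it reports the
# desired value, or gives up after max-tries attempts (1s apart).
# Usage: wait_for_status <status-command> <desired-status> <max-tries>
wait_for_status() {
  tries=0
  while [ "$tries" -lt "$3" ]; do
    [ "$($1)" = "$2" ] && return 0
    tries=$((tries + 1))
    sleep 1
  done
  return 1
}

# Example against OpenStack (server name is a placeholder):
# wait_for_status "openstack server show vfw_pg -f value -c status" ACTIVE 900
```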

vFW Network Topology on Tenant2

vFW Network Topology on Tenant1

VNF SINC Stack Orchestrated on OpenStack Tenant2

VNF PG Stack Orchestrated on OpenStack Tenant1

Did you try this out? How did it go? We look forward to your feedback. In the meantime, if you are looking for ONAP training, professional services, or development distros (basically an easy way to try out ONAP in under 1 hour), please contact us.

Useful links: ONAP Wiki, vFWCL Wiki, Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP

ONAP Beijing: Member Supporting Quotes

By | Announcement

“As one of the founding creators of and leading code contributors to ONAP, we’re thrilled to see this strong and growing developer ecosystem continue to advance the platform,” said Chris Rice, LF Networking Board Chairman and Senior Vice President of Domain 2.0 Architecture and Design at AT&T. “All service providers and network operators will benefit from the second software release, Beijing, which is focused closely on enhancing the platform to ensure scalability, security, stability and performance in support of real world deployments. As the 5G era approaches, software-centric network automation will be key to meeting customer expectations and driving new capabilities into the network.”

 

“China Mobile is committed to implementing network transformation technology innovation based on the ONAP open source community. We are very pleased to see that after the basic functional verification of the Amsterdam version of the core network virtualization business scenario, the community version of Beijing has been enhanced with respect to stability, reliability, and security,” said Yachen Wang, Deputy General Manager, AI and Intelligent Operation R&D Center, Network and IT Technology Department, China Mobile. “We will select core modules based on the Beijing version to conduct customized product R&D for key CMCC application scenarios. At the same time, we will continue to invest resources in supporting community work. In particular, the jointly-developed SDN and NFV-enabled cloud network collaborative orchestration business scenarios to enhance functionality and verification, contributing to the widespread deployment of ONAP.”

 

“As a platinum member, China Telecom has witnessed and participated in the successful deployment of ONAP’s Amsterdam release. Thanks to member collaboration, the Beijing release is now available and includes progress in Security, Stability, Scalability and Performance,” said Dr. Sun Qiong, SDN Technology R&D Center Director of China Telecom Beijing Research Institute, and LFN Board member, China Telecom. “Based on the ONAP Architectural Principles, Beijing will accelerate the policy-driven orchestration and automation of physical and virtual network functions, and expand the platform’s maturity. China Telecom has endeavored and will continuously work hard together with other LFN members to develop the top global automation platform in a software-defined, virtualized era.”

 

“The ONAP Beijing release brings the maturity of the platform to a new stage, providing a very good reference for carrier network transformation and service  automation,” said Dr. Xiongyan Tang, chief scientist, China Unicom Network Technology Research Institute. “As an innovative leader in China’s telecommunications industry, China Unicom has always been committed to uniting industry partners, accelerating network innovation, enabling business development and prospering the whole industry ecosystem. We will continue to participate in developer activities within the LFN community and within our network, to help enable growth across the industry.”

 

Mats Karlsson, Head of Solution Area OSS at Ericsson, says: “Ericsson is one of the leading promoters of the open source ecosystem, accelerating adoption and industry alignment in key technology areas, including ONAP and ETSI NFV alignment, to benefit customers and partners. As part of Ericsson’s collaboration and deep involvement in many open source projects, we see automation with orchestration playing a vital role in the evolution of 5G networks. ONAP is a key enabler of 5G evolution, bringing automation based on analytics, policy and orchestration across legacy and hybrid cloud environments. The Beijing release takes a substantial step forward in platform security, scalability, stability and enhanced exposure capabilities, and is deployable on both virtual machine and container/Kubernetes infrastructures. This release also demonstrated the Virtual CPE and Virtual VoLTE use cases.”

“The ONAP Beijing Release focuses on increasing the stability, reliability, security and performance of the platform; it is a key milestone for ONAP’s commercial deployment,” said Bill Ren, vice president, Network & Industry Ecosystem Development, Huawei. “With 5G and network cloudification, automation and intelligence are more important to the telecommunications industry than ever before. Because of its advanced architecture and concepts, Huawei believes that ONAP is an industry platform well suited for global operator network automation, but ONAP’s maturity requires more consensus and collaboration upstream and downstream in the industry. Huawei will work with its carrier partners to conduct a joint POC of the 2B service scenario based on the Beijing Release, and promote ONAP to commercial deployment as soon as possible.”

“Inocybe is pleased to have been actively involved in contributing to the OpenDaylight (SDN-C) component of the ONAP Beijing Release,” said John Zannos, CRO of Inocybe and LFN Board Member. “ONAP is bringing the world’s operators together to collaboratively innovate and solve some of the most common challenges they face on their journey to automated and intelligent networks. In collaboration with partners, we’re looking at how we can best industrialize the software and help operators build, test and manage use-case specific distributions using the production-ready components of ONAP like SDN-C.”

“Netsia is fully committed to ONAP and actively participating in the OSAM project for the Casablanca release,” said Bora Eliacik, VP of Engineering, Netsia. “We intend to take this into production at a leading service provider in Turkey.”

 

Marc Rouanne, President of Mobile Networks at Nokia, says: “As top 10 contributor to the Beijing release, Nokia is an active member in the ONAP community and we continue to collaborate with community members to advance standard interfaces and system modularity in support of our customers’ varying needs. Integration with external controllers represents a significant step forward for an open and expandable automation platform, and is a key result of this collaboration. Nokia also supports recent ONAP directions towards virtual and physical network and service operations automation. Comprehensive automation strategies are critical for fast moving, hybrid network, digital services environments.”

 

“Designed using best-in-class micro-service architecture and following the best practice criteria for open source software, the ONAP Beijing Release provides a reliable and operable platform. It includes a set of powerful VNF packaging and validation tools that provides a common framework, easing VNF on-boarding and reducing the integration load,” said Emmanuel Lugagne Delpon, Senior Vice President of Orange Labs Networks. “Aligned with the internal network transformation program towards network softwaritisation, Orange is very active in the community with 20+ contributors for the Beijing Release. Orange developed 3 APIs aligned with TMF to facilitate integration within existing IT and BSS applications: External API/NorthBound Interface for service order, catalogue and inventory. To promote ONAP usage and to provide more testing capabilities, Orange proposes an Openlab platform used by 70+ users (from operators, vendors and academic) to demonstrate the full ONAP framework capabilities and to share results with the community.”

 

“We are excited to see the growing ONAP developer community and the strong interest from leading Communication Service Providers (CSPs) across the globe,” said Arunmozhi Balasubramanian, Senior VP, Network Services – Solutions & Strategy, Tech Mahindra. “Tech Mahindra is among the top five contributors of ONAP. Tech Mahindra is executing a number of ONAP PoCs (Proofs of Concept) with leading CSPs across the Americas, Europe and ANZ (Australia and New Zealand). The Beijing release provides support for PNFs (Physical Network Functions), which paves the way for easier migration to next-gen service management. ONAP enables CSPs to realize the much needed Service Agility and Hyper Automation for their networks.”

ONAP Announces Availability of Beijing Release, Enabling a Deployment-Ready Platform for Network Automation and Orchestration

By | Announcement

ONAP, as part of LF Networking, now accounts for more than 65% of global subscriber participation through carriers creating a harmonized, de-facto open source platform

San Francisco, June 12, 2018 – The Open Network Automation Platform (ONAP) Project, which delivers a unified platform for end-to-end, closed-loop network automation, today announced the availability of ONAP Beijing, its second software release. The Beijing release accelerates ease of ONAP deployment for modern network operators and comes as more leading global service providers commit to enhancing open source networking. LF Networking, a Linux Foundation entity that brings together six top networking projects (including ONAP) to increase harmonization across platforms, communities and ecosystems, now enables more than 65 percent of the world’s mobile subscribers, as well as major global enterprises and cloud providers serving hundreds of millions of customers.

“We are delighted to announce the availability of ONAP’s second release, Beijing, which advances the architecture, seven dimensions of deployability, and new automation functionality,” said Arpit Joshipura, General Manager of Networking, The Linux Foundation. “As a community, we celebrate the progress the Beijing release brings to the ecosystem and look forward to additional deployments of the platform.”

With ONAP’s Beijing release, the developer community has focused closely on new platform and process enhancements to ensure scalability, security, stability and performance in support of real-world deployments. The release also evolves the platform toward container-based implementations, and provides robust documentation and training for Virtual Network Functions (VNF) developers, service designers, and operations managers. Leading developers from solution providers, vendors and system integrators globally have laid the foundations of a robust commercial ecosystem.

“The Beijing release ushers in the next phase of ONAP,” said Mazin Gilbert, ONAP Technical Steering Committee (TSC) Chair, and Vice President, Advanced Technology, AT&T Labs. “The technical enhancements in this release focus on enhancing the stability and deployability of the platform. In addition, the community has focused on supporting users in their adoption journey with the delivery of several new Getting Started guides as well as online and in-person introductory training options. Together with the community, we are further establishing ONAP as the defacto standard for automation.”

Specific platform and feature enhancements of the Beijing release include:

Architecture:

  • ONAP Operations Manager supports the migration to microservices-based deployments on Kubernetes
  • ONAP has collaborated with MEF and TMForum on external APIs, ensuring those frameworks and APIs can communicate seamlessly with the ONAP platform.
  • MSB has helped ONAP modules evolve in the microservices direction in Beijing by providing service registration/discovery and an API gateway.
  • ONAP has achieved the unified resource VNF Informational Model/Data Model for both design and runtime.

Deployability:

  • Starting with the Beijing release, the ONAP development process measures improvements in seven key operational parameters (Usability, Security, Manageability, Stability, Scalability, Performance and Resiliency) for each platform module.
  • The Beijing release brings advanced platform stability and resiliency based on deployment of the ONAP Operations Manager (OOM) and the Multi-Site State Coordination Service (MUSIC) projects.
    • ONAP OOM enables ONAP modules to be run on Kubernetes, contributing to availability, resilience, scalability and more for ONAP deployments and sets the stage for full implementation of a microservices architecture, expected with the third release, Casablanca.
    • MUSIC is an optional new solution for state management of ONAP components across geographically distributed sites, ensuring federated active-active operation without degrading performance, reliability and availability.
  • As security is a key element of the CI framework, the Project has adopted CII (Core Infrastructure Initiative) badging as part of its release requirements. CII is a project managed by The Linux Foundation that collaboratively works to improve the security and resilience of critical open source projects.
  • The event federation provided by Multi-VIM/Cloud improves the elapsed time of the service-resiliency closed loop by over 70%.

Functional Enhancements – Blueprint Enrichment

  • The residential vCPE blueprint has been enriched with change management and policy-driven workload placement features that include hardware platform awareness (HPA).
    • Network service scaling to meet traffic needs is a fundamental NFV value proposition. Manual scale-out of VNFs is also supported on the vLB via APP-C and Policy, using LCM-based manual scale-out.
  • VF-C aligns with the R2 VNF data model and supports the VoLTE and CPE use cases, integrating with open source VNFs via GVNFM.

Ecosystem Expansion:

  • The open source community is rapidly organizing to ensure the technology, tools and services are in place to support rapid adoption.
  • VNF integration: With the Beijing release, the ONAP community worked closely with the OPNFV Verified Program (OVP), which simplifies adoption in commercial NFV products and establishes an industry threshold based on OPNFV capabilities and test cases, to coordinate integrations via the ONAP VNFSDK and ONAP VNF Validation Program (VVP) components.
  • Documentation and training:
    • New startup and operations guides for users
    • Design guides and API and SDK documentation for service designers and VNF developers
    • Online training: Free introductory courses on Open Source Networking Technologies and ONAP as well as more in-depth, paid ONAP Fundamentals training
    • Community-led best-practices webinars
  • Real-World Use
    • Organizations spanning every aspect of the ecosystem (vendors, telecommunication providers, cable and cloud operators, NFV vendors and solution providers) continue to leverage ONAP for commercial products and services. The Beijing release code is being integrated into new and existing proofs of concept and production deployment plans for large global carriers like AT&T, Bell Canada, China Mobile, China Telecom, Orange, Reliance Jio, Verizon, Vodafone, Turk Telecom, among others. And major leading vendors are building products and solutions on the ONAP platform.

For more details on the ONAP Beijing release, please visit https://onap.readthedocs.io/en/latest/release/index.html.

ONAP Developer Forum

The ONAP project is hosting a developer forum June 19-22, 2018 in Beijing, China, in preparation for the third release, Casablanca (coming this summer). Additional details and registration information can be found here.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

# # #

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Additional Resources

Media Contact
Jill Lovato
The Linux Foundation
jlovato@linuxfoundation.org

Debugging ONAP OOM Failures

By | Blog

Originally published on Aarna Networks, republished with permission.

On May 21, Amar Kapadia and I conducted a webinar on the topic of “Debugging OOM Failures”.

We started off by giving some context. Our objective was to develop a lightweight, repeatable lab environment for ONAP training on Google Cloud. We also plan to offer this image to developers that need a sandbox environment. To accomplish this, we used ONAP Amsterdam along with OPNFV Euphrates. ONAP was installed using OOM that uses Kubernetes and Helm. All of this software was installed on one VM on the Google cloud.

For most users, issues that pop up once in a while are OK. However, for us, the deployment process needed to be consistent and repeatable. For this reason, we had to debug every intermittent failure and develop a single-click workaround script.

The webinar then walked through the 7 issues we faced, how we debugged them, and what the workarounds were. The issues were as follows; all except failure #7 were intermittent:

  1. AAI containers failed to transition to Running state

  2. SDC UI is not getting loaded

  3. SDC Service Distribution Error

  4. VID Service Deployment Error

  5. VID ADD VNF Error

  6. SDNC User creation failed

  7. Robot init_robot failed with missing attributes
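Most of these failures show up first as pods stuck outside the Running state. A quick triage sketch with kubectl; the helper names are ours, and the `onap` namespace is an assumption that depends on how OOM was deployed:

```shell
# Print the names of pods that are not Running/Completed, given
# `kubectl get pods --no-headers` output on stdin ($3 is the STATUS column).
unhealthy_pods() {
  awk '$3 != "Running" && $3 != "Completed" { print $1 }'
}

# Triage: describe each unhealthy pod in the given namespace
# (default "onap"; adjust to your OOM deployment).
onap_pod_triage() {
  ns="${1:-onap}"
  kubectl get pods -n "$ns" --no-headers | unhealthy_pods | \
    while read -r pod; do
      echo "== $pod =="
      kubectl describe pod "$pod" -n "$ns" | tail -n 20
    done
}
```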

If you are curious to learn more, check out the slide deck or video links above. Additionally, if you have ONAP training or PoC needs, or simply feel like trying out the VM image on GCP, feel free to contact us. We have a whole portfolio of training, services and product offerings.

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 2/2)

By | Blog

Originally published on Aarna Networks, republished with permission.

In the previous installment of this two-part blog series, we looked at why NFV clouds are likely to be highly distributed and why the management and orchestration software stack needs to support these numerous clouds. ONAP is one such network automation software stack. We saw the first three steps of what it takes to register multiple OpenStack cloud regions in ONAP for the vFW use-case (other use cases might need slight tweaking).

Let’s pick up where we left off and look at the remaining steps 4-7:

Step 4: Associate Cloud Region object(s) with a subscriber’s service subscription
With this association, this cloud region will be populated into the dropdown list of available regions for deploying VNF/VF-Modules from VID.

Example script to associate the cloud region “CloudOwner/Region1x” with subscriber “Demonstration2”, which subscribes to the service “vFWCL”:

curl -X PUT \
  'https://<AAI_VM1_IP>:8443/aai/v11/business/customers/customer/Demonstration2/service-subscriptions/service-subscription/vFWCL/relationship-list/relationship' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "related-to": "tenant",
    "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/tenants/tenant/<Project ID>",
    "relationship-data": [
        {
            "relationship-key": "cloud-region.cloud-owner",
            "relationship-value": "CloudOwner"
        },
        {
            "relationship-key": "cloud-region.cloud-region-id",
            "relationship-value": "<Cloud Region - should match with physical infra>"
        },
        {
            "relationship-key": "tenant.tenant-id",
            "relationship-value": "<Project ID>"
        }
    ],
    "related-to-property": [
        {
            "property-key": "tenant.tenant-name",
            "property-value": "<OpenStack User Name>"
        }
    ]
}'
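A note on the `authorization: Basic QUFJOkFBSQ==` header used throughout these calls: it is just the default A&AI credentials (AAI:AAI) base64-encoded, so you can regenerate it if your deployment uses different credentials:

```shell
# Basic auth token = base64("user:password"); A&AI defaults to AAI:AAI.
token=$(printf '%s' 'AAI:AAI' | base64)
echo "authorization: Basic $token"   # -> authorization: Basic QUFJOkFBSQ==
```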

Step 5: Add Availability Zones to AAI
Now we need to add an availability zone to the region we created in step 3.

Example script to add an OpenStack availability zone name, e.g. ‘nova’, to Region1x:

curl -X PUT \
  'https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/availability-zones/availability-zone/<OpenStack ZoneName>' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: AAI' \
  -H 'x-transactionid: 9999' \
  -d '{
    "availability-zone-name": "<OpenStack ZoneName>",
    "hypervisor-type": "<Hypervisor>",
    "operational-status": "Active"
}'

Step 6: Register VIM/Cloud instance with SO
SO does not use the cloud region representation from A&AI. It stores information about the VIM/Cloud instances inside the container (in the case of OOM) as a configuration file. To add a VIM/Cloud instance to SO, log into the SO service container and then update the configuration file “/etc/mso/config.d/cloud_config.json” as needed.

Example steps to add VIM/cloud instance info to SO:

# Procedure for mso_pass (encrypted)
# Go to the below directory on the Kubernetes server
cd /<shared nfs folder>/onap/mso/mso

# Then run:
$ MSO_ENCRYPTION_KEY=$(cat encryption.key)
$ echo -n "your password in cleartext" | openssl aes-128-ecb -e -K $MSO_ENCRYPTION_KEY -nosalt | xxd -c 256 -p

# Take the output and put it against the mso_pass value in the JSON file
# below. Template for adding a new cloud site and the associated
# identity service:
$ sudo docker exec -it <mso container id> bash
root@mso:/# vi /etc/mso/config.d/mso_config.json

"mso-po-adapter-config":
{
    "identity_services":
    [
        {
            "dcp_clli1x": "DEFAULT_KEYSTONE_Region1x",
            "identity_url": "<keystone auth URL, e.g. https://<IP or Name>/v2.0>",
            "mso_id": "<OpenStack User Name>",
            "mso_pass": "<created above>",
            "admin_tenant": "service",
            "member_role": "admin",
            "tenant_metadata": "true",
            "identity_server_type": "KEYSTONE",
            "identity_authentication_type": "USERNAME_PASSWORD"
        }
    ],
    "cloud_sites":
    [
        {
            "id": "Region1x",
            "aic_version": "2.5",
            "lcp_clli": "Region1x",
            "region_id": "<OpenStack Region>",
            "identity_service_id": "DEFAULT_KEYSTONE_Region1x"
        }
    ]
}

# Save the changes and restart the MSO container

# Check the new config; the output should match the parameters used in
# the curl commands:
http://<mso-vm-ip>:8080/networks/rest/cloud/showConfig

# Sample output:
Cloud Sites:
CloudSite: id=Region11, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region11, aic_version=2.5, clli=Region11
CloudSite: id=Region12, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region12, aic_version=2.5, clli=Region12

Cloud Identity Services:
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region11, identityUrl=<URL>/v2.0, msoId=<OpenStackUserName1>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
Cloud Identity Service: id=DEFAULT_KEYSTONE_Region12, identityUrl=https://auth.vexxhost.net/v2.0, msoId=<OpenStackUserName2>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD
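To double-check an encrypted mso_pass value before restarting, you can reverse the encryption step. A round-trip sketch; the key and password below are hypothetical examples, not values from a real deployment:

```shell
# Round-trip check for the mso_pass encryption used above.
# Key (128-bit hex) and password here are hypothetical examples.
MSO_ENCRYPTION_KEY=aa3871669d893c7fb8abbcda31b88b4f

# Encrypt (as in the procedure above), then decrypt the hex string back.
enc=$(echo -n "my-cleartext-password" | openssl aes-128-ecb -e -K $MSO_ENCRYPTION_KEY -nosalt | xxd -c 256 -p)
dec=$(echo -n "$enc" | xxd -r -p | openssl aes-128-ecb -d -K $MSO_ENCRYPTION_KEY -nosalt)
echo "$dec"   # -> my-cleartext-password
```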

Step 7: Change Robot service to operate with the VIM/Cloud instance
When using OOM, by default the Robot service supports the pre-populated cloud region where the cloud-owner is “CloudOwner” and the cloud-region-id is specified via the “openstack_region” parameter during deployment of the ONAP instance through OOM configuration files. This cloud region information can be updated in the file “/share/config/vm_properties.py” inside the Robot container. Appropriate relationships between cloud regions and services need to be set up in the same file for the Robot service tests to pass.

Note:  Robot framework does not rely on Multi-VIM/ESR.

If you have done all 7 steps correctly, Robot tests should pass and both regions should appear in the VID GUI.

If you liked (or disliked) this blog, we’d love to hear from you. Please let us know. Also, if you are looking for ONAP training, professional services, or development distros (basically an easy way to try out ONAP on Google Cloud in under 1 hour), please contact us. Professional services include ONAP deployment, network service design/deployment, VNF onboarding, custom training, etc.

References:

ONAP Wiki

vFWCL Wiki

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 1/2)

By | Blog

Originally published on Aarna Networks, republished with permission.

NFV clouds are going to be distributed by their very nature. VNFs and applications will be distributed as per the figure below: horizontally across edge (access), regional datacenter (core) and hyperscale datacenters (which could be public clouds), or vertically across multiple regional or hyperscale datacenters.

Distributed NFV Clouds

The Linux Foundation Open Network Automation Platform (ONAP) project is a management and orchestration software stack that automates network/SDN service deployment, lifecycle management and service assurance. For the above-mentioned reasons, ONAP is designed to support multiple cloud regions from the ground up.

In this two-part blog, we will walk you through the exact steps to register multiple cloud regions with ONAP for the virtual firewall (vFW) use-case that primarily utilizes SDC, SO, A&AI, VID and APP-C projects (other use cases will be similar but might require slightly different instructions). Try it out and let us know how it goes.

Prerequisites
  1. ONAP Installation (Amsterdam release)

  2. OpenStack regions spread across different physical locations

  3. Valid Subscriber already created under ONAP (e.g Demonstration2)

If you do not have the above, and still want to try this out, here are some alternatives:

ONAP Region Registration Steps

There are 3 locations where VIM/cloud instance information is stored: A&AI, SO and Robot. The following 7 steps outline the process and provide sample API calls.

Step 1: Create Complex object(s) in AAI

A complex object in A&AI represents the physical location of a VIM/Cloud instance. Create a complex object for each OpenStack region that needs to be configured under ONAP.

Example script to create a complex object named clli1x:

# Main items to be changed are highlighted, but most of the below
# information should be customized for your region
curl -X PUT \
  'https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/complexes/complex/clli1x' \
  -H 'X-TransactionId: 9999' \
  -H 'X-FromAppId: jimmy-postman' \
  -H 'Real-Time: true' \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Cache-Control: no-cache' \
  -H 'Postman-Token: 734b5a2e-2a89-1cd3-596d-d69904bcda0a' \
  -d '{
    "physical-location-id": "clli1x",
    "data-center-code": "example-data-center-code-val-6667",
    "complex-name": "clli1x",
    "identity-url": "example-identity-url-val-28399",
    "physical-location-type": "example-physical-location-type-val-28399",
    "street1": "example-street1-val-28399",
    "street2": "example-street2-val-28399",
    "city": "example-city-val-28399",
    "state": "example-state-val-28399",
    "postal-code": "example-postal-code-val-28399",
    "country": "example-country-val-28399",
    "region": "example-region-val-28399",
    "latitude": "example-latitude-val-28399",
    "longitude": "example-longitude-val-28399",
    "elevation": "example-elevation-val-28399",
    "lata": "example-lata-val-28399"
}'

Step 2: Create Cloud Region object(s) in A&AI

The VIM/Cloud instance is represented as a cloud region object in A&AI and ESR. This representation will be used by VID, APP-C, VFC, DCAE, MultiVIM, etc. Create a cloud region object for each OpenStack Region.

Example script to create cloud region object for the same cloud region:

curl -X PUT \
  'https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: f7c57ec5-ac01-7672-2014-d8dfad883cea' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "cloud-owner": "CloudOwner",
    "cloud-region-id": "Region1x",
    "cloud-type": "openstack",
    "owner-defined-type": "t1",
    "cloud-region-version": "<OpenStack Version>",
    "cloud-zone": "<OpenStack Cloud Zone>",
    "complex-name": "clli1x",
    "identity-url": "<keystone auth URL https://<IP or Name>/v3>",
    "sriov-automation": false,
    "cloud-extra-info": "",
    "tenants": {
        "tenant": [
            {
                "tenant-id": "<OpenStack Project ID>",
                "tenant-name": "<OpenStack Project Name>"
            }
        ]
    },
    "esr-system-info-list": {
        "esr-system-info": [
            {
                "esr-system-info-id": "<Unique uuid, e.g. 432ac032-e996-41f2-84ed-9c7a1766eb29>",
                "service-url": "<keystone auth URL https://<IP or Name>/v3>",
                "user-name": "<OpenStack User Name>",
                "password": "<OpenStack Password>",
                "system-type": "VIM",
                "ssl-cacert": "",
                "ssl-insecure": true,
                "cloud-domain": "Default",
                "default-tenant": "<Project Name>"
            }
        ]
    }
}'

Step 3: Associate each Cloud Region object with its corresponding Complex Object
This association needs to be set up for each cloud region and its corresponding complex object.

Example script to create the association:

curl -X PUT \
  'https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/relationship-list/relationship' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: e68fd260-5cac-0570-9b48-c69c512b034f' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
        "related-to": "complex",
        "related-link": "/aai/v11/cloud-infrastructure/complexes/complex/clli1x",
        "relationship-data": [{
            "relationship-key": "complex.physical-location-id",
            "relationship-value": "clli1x"
        }]
    }'
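Once the PUT succeeds, the stored association can be checked with a GET on the cloud-region object. This is a hedged sketch, not part of the original steps: the `depth=all` query parameter (supported by recent A&AI versions) returns nested relationships, and the `AAI:AAI` basic-auth credentials are the defaults implied by the Authorization header used above.

```shell
# AAI_VM1_IP must point at your A&AI host; the default below is a placeholder.
AAI_VM1_IP="${AAI_VM1_IP:-aai.example.invalid}"
URL="https://${AAI_VM1_IP}:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x?depth=all"
echo "GET ${URL}"

# Uncomment to query a live A&AI instance; the response should include a
# relationship-list entry pointing at complex clli1x:
# curl -sk -u AAI:AAI -H 'X-FromAppId: jimmy-postman' -H 'X-TransactionId: 9999' \
#      -H 'Accept: application/json' "${URL}"
```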

We will cover the remaining 4 steps in the next and final installment of this blog series.

In the meantime, if you are looking for ONAP training, professional services, or development distros (basically an easy way to try out ONAP in under 1 hour), please contact us.

How service providers can use Kubernetes to scale NFV transformation

By | Blog

This post originally appeared on LinkedIn. Republished with permission of Jason Hunt, Distinguished Engineer at IBM.

After attending two major industry events—IBM’s Think and the Linux Foundation’s Open Networking Summit (ONS)—I’ve been thinking about how software and networking are evolving and merging in a way that can really benefit service providers.

It’s been interesting to watch how NFV has changed over the past few years. At first, NFV dealt simply with virtualization of physical network elements. Then as network services grew from simple VNFs to more complex combinations of VNFs, ONAP came along to provide lifecycle management of those network functions. Now, with 5G on the doorstep, service providers will need to shift the way they approach NFV deployments yet again.

Why? As Verizon’s CEO Lowell McAdam told IBM’s CEO Ginni Rometty at IBM Think, 5G will deliver 1 Gbps throughput to devices with 1 ms of latency, while allowing service providers to connect 1,000 times more devices to every cell site. In order to support that, service providers need to deploy network functions at the edge, close to where those devices are located.

But accomplishing that kind of scale can’t be done manually. It has to be done through automation at every level. And for that, service providers can leverage the kind of enterprise-level container management that’s possible with Kubernetes. Kubernetes allows service providers to provision, manage, and scale applications across a cluster. It also allows them to abstract away the infrastructure resources needed by applications. In ONAP’s experience, running on top of Kubernetes, rather than virtual machines, can reduce installation time from hours or weeks to just 20 minutes.

At the same time, service providers are utilizing a hybrid mixture of public and private clouds to run their network workloads. However, many providers at ONS expressed frustration at the incompatibility across clouds’ infrastructure provisioning APIs. This lack of harmonization is hampering their ability to deploy and scale NFV when and where needed.

Again, Kubernetes can help service providers meet this challenge. Since Kubernetes is supported across nearly all clouds, it can expose a common way to deploy workloads. Arpit Joshipura, GM Networking at the Linux Foundation, demonstrated this harmonization on the ONS keynote stage. With help from the Cloud-CI project in the Cloud Native Computing Foundation (CNCF), Arpit showed ONAP being deployed across public and private clouds (including IBM Cloud) and bare metal. Talk about multi-cloud!

Last October, IBM announced IBM Cloud Private, an integrated environment that enables you to design, develop, deploy and manage on-premises, containerized cloud applications behind your firewall. IBM Cloud Private includes Kubernetes, a private image repository, a management console and monitoring frameworks. We’ve documented how ONAP can be deployed on IBM Cloud Private, giving service providers a supported option for Kubernetes in an on-premises cloud.

At ONS, AT&T’s CTO Andre Fuetsch stated, “Software is the future of our network.” With 5G getting closer to the mainstream every day, the best-prepared service providers will look at how to combine the best of the software and network worlds together. Exploring the benefits of a Kubernetes-based environment might just be the best answer for their NFV deployment plans.