vCPE Blueprint in ONAP

This post originally appeared on Aarna Networks. Republished with permission.

This blog explains deployment details (using TOSCA/HEAT templates) of some of the important services of the vCPE blueprint in ONAP. It assumes that the reader is familiar with the vCPE use case (for which there are several blogs/videos available, including a free book from Aarna Networks — ONAP Demystified, or the ONAP Confluence page).

The following block diagram provides an overview of the end-to-end vCPE service and how the various constituent services are linked together.

The vCPE end-to-end use case comprises several services (some of which are optional and can be replaced by equivalent services already existing in the CSP’s environment), each of which contains one or more VNFs and/or VLs:

  1. vCPE General Infra Service

  2. vG MUX Infra Service

  3. vBNG Service

  4. vBNG MUX Service

  5. vBRG Emulation

  6. vCPE Customer Service

This blog shows details of some of these services, and their associated model templates.

vCPE General Infra

This service consists of the vDHCP, vAAA and vDNS VNFs connected by two virtual links (VLs), cpe_signal and cpe_public, both of which are OpenStack Neutron networks. The cpe_public link is also connected to a web server.

Now, let us examine the Infra Service in SDC Catalog for its constituent components and their details.

The composition of this service is as follows, showing the virtual links (VLs) and the VF that contains all the VNFs:

The CSAR file for this service contains the following details:

The service is modeled (in TOSCA and HEAT templates) as follows:

Notice that the two networks (CPE_PUBLIC and CPE_SIGNAL) are modeled in HEAT, and so is the VF module that contains the VMs for all the VNFs (vAAA, vDHCP and vDNS+DHCP). The TOSCA template includes node_templates for all the HEAT templates. The TOSCA model definition file for this service can be found here.
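Because a CSAR is a standard zip archive, you can also inspect the packaged TOSCA definitions and HEAT artifacts directly from the command line. A minimal sketch, assuming a hypothetical CSAR file name exported from the SDC catalog:

# The CSAR file name below is hypothetical; use the one downloaded from SDC.
unzip -l service-VcpeInfra-csar.csar            # list TOSCA definitions and HEAT artifacts
unzip service-VcpeInfra-csar.csar -d csar/      # extract for closer inspection
less csar/Definitions/service-VcpeInfra-template.yml

The Definitions directory typically holds the TOSCA service template, while the HEAT templates and environment files appear under Artifacts.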

Let us take a closer look at the Environment file (base_vcpe_infra.env) of this service.

parameters:
  cloud_env: "openstack"
  cpe_public_net_cidr: "10.2.0.0/24"
  cpe_public_net_id: "zdfw1cpe01_public"
  cpe_public_subnet_id: "zdfw1cpe01_sub_public"
  cpe_signal_net_cidr: "10.4.0.0/24"
  cpe_signal_net_id: "zdfw1cpe01_private"
  cpe_signal_subnet_id: "zdfw1cpe01_sub_private"
  dcae_collector_ip: "10.0.4.1"
  dcae_collector_port: "8081"
  demo_artifacts_version: "1.2.0"
  install_script_version: "1.2.0-SNAPSHOT"
  key_name: "vaaa_key"
  mr_ip_addr: "10.0.11.1"
  onap_private_net_cidr: "10.0.0.0/16"
  onap_private_net_id: "ext-net"
  onap_private_subnet_id: "ext-net"
  pub_key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQXYJYYi3/OUZXUiCYWdtc7K0m5C0dJKVxPG0eI8EWZrEHYdfYe6WoTSDJCww+1qlBSpA5ac/Ba4Wn9vh+lR1vtUKkyIC/nrYb90ReUd385Glkgzrfh5HdR5y5S2cL/Frh86lAn9r6b3iWTJD8wBwXFyoe1S2nMTOIuG4RPNvfmyCTYVh8XTCCE8HPvh3xv2r4egawG1P4Q4UDwk+hDBXThY2KS8M5/8EMyxHV0ImpLbpYCTBA6KYDIRtqmgS6iKyy8v2D1aSY5mc9J0T5t9S2Gv+VZQNWQDDKNFnxqYaAo1uEoq/i1q63XC5AD3ckXb2VT6dp23BQMdDfbHyUWfJN"
  public_net_id: "2da53890-5b54-4d29-81f7-3185110636ed"
  repo_url_artifacts: "https://nexus.onap.org/content/groups/staging"
  repo_url_blob: "https://nexus.onap.org/content/sites/raw"
  vaaa_name_0: "zdcpe1cpe01aaa01"
  vaaa_private_ip_0: "10.4.0.4"
  vaaa_private_ip_1: "10.0.101.2"
  vcpe_flavor_name: "onap.medium"
  vcpe_image_name: "ubuntu-16.04-daily"
  vdhcp_name_0: "zdcpe1cpe01dhcp01"
  vdhcp_private_ip_0: "10.4.0.1"
  vdhcp_private_ip_1: "10.0.101.1"
  vdns_name_0: "zdcpe1cpe01dns01"
  vdns_private_ip_0: "10.2.0.1"
  vdns_private_ip_1: "10.0.101.3"
  vf_module_id: "vCPE_Intrastructure"
  vnf_id: "vCPE_Infrastructure_demo_app"
  vweb_name_0: "zdcpe1cpe01web01"
  vweb_private_ip_0: "10.2.0.10"
  vweb_private_ip_1: "10.0.101.40"

Note the details about the constituent VNFs (vAAA, vDHCP, vDNS and the vWeb server), including their IP addresses, and the network addresses of the VLs that these VNFs are connected to (cpe_signal and cpe_public). For example, vDHCP and vAAA are connected to the cpe_signal network (10.4.x.x), while vDNS and the vWeb server are connected to the cpe_public network (10.2.x.x). Also note that the DCAE collector service is reachable at a 10.0.4.x address.
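Once the service is instantiated, it is worth cross-checking these parameters against what actually landed on the VIM. A minimal sketch using the standard OpenStack CLI, with the resource names taken from the env file above:

# Verify the networks and VMs created for the Infra service.
openstack network list | grep zdfw1cpe01        # cpe_public and cpe_signal networks
openstack subnet show zdfw1cpe01_sub_public     # confirm the 10.2.0.0/24 CIDR
openstack server list | grep zdcpe1cpe01        # vAAA, vDHCP, vDNS and vWeb VMs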

Now, let us look at some of the interesting fields of the HEAT template (base_vcpe_infra.yaml) for this service. It contains details about all the VNFs that are part of the service and how they will be instantiated using HEAT. A complete copy of the HEAT template can be found here.

heat_template_version: 2013-05-23

description: Heat template to deploy vCPE Infrastructure elements (vAAA, vDHCP, vDNS_DHCP, webServer)

##############

#            #

# PARAMETERS #

#            #

##############

parameters:

  vcpe_image_name:

    type: string

    label: Image name or ID

    description: Image to be used for compute instance

    …

  cpe_signal_net_id:

    type: string

    label: vAAA private network name or ID

    description: Private network that connects vAAA with vDNSs

  …

  cpe_public_net_id:

    type: string

    label: vCPE Public network (emulates internet) name or ID

    description: Private network that connects vGW to emulated internet

  …

  vaaa_private_ip_0:

    type: string

    label: vAAA private IP address towards the CPE_SIGNAL private network

    description: Private IP address that is assigned to the vAAA to communicate with the vCPE components

  …

  vdns_private_ip_0:

    type: string

    label: vDNS private IP address towards the CPE_PUBLIC private network

  …

  vdhcp_private_ip_0:

    type: string

    label: vDHCP private IP address towards the CPE_SIGNAL private network

    description: Private IP address that is assigned to the vDHCP to communicate with the vCPE components

  …

  vweb_private_ip_0:

    type: string

    label: vWEB private IP address towards the CPE_PUBLIC private network

    description: Private IP address that is assigned to the vWEB to communicate with the vGWs

  …

    …

  dcae_collector_ip:

    type: string

    label: DCAE collector IP address

    description: IP address of the DCAE collector

 …

#############

#           #

# RESOURCES #

#           #

#############

resources:

….

  # Virtual AAA server Instantiation

  vaaa_private_0_port:

    type: OS::Neutron::Port

    properties:

      network: { get_param: cpe_signal_net_id }

      fixed_ips: [{"subnet": { get_param: cpe_signal_subnet_id }, "ip_address": { get_param: vaaa_private_ip_0 }}]

  …

  vaaa_0:

    type: OS::Nova::Server

    properties:

     …

          template: |

            #!/bin/bash

            # Create configuration files

            mkdir /opt/config

            echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt

            …

            # Download and run install script

            curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_aaa_install.sh -o /opt/v_aaa_install.sh

            cd /opt

            chmod +x v_aaa_install.sh

            ./v_aaa_install.sh

Note the details about the various VNFs (vAAA, vDHCP, vDNS and the vWeb server) and the VLs that are part of the Infrastructure service (the Neutron networks: cpe_signal, which connects the vAAA and vDNS VNFs, and cpe_public, which connects the vGW service to the emulated Internet). Also note the vAAA instantiation, the DCAE collector IP address, and the installation script (v_aaa_install.sh) in the vAAA VNF. The other VNFs (vDNS, vDHCP and the vWeb server) have been omitted here; refer to the link above for their details in the HEAT template file.
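If you want to sanity-check the HEAT template and environment file outside of ONAP before onboarding, you can instantiate them directly against OpenStack. A minimal sketch, assuming your project has the image, flavor and key pair the env file references, and using a hypothetical stack name:

# Validate the template, create a test stack, and watch it come up.
openstack orchestration template validate -t base_vcpe_infra.yaml
openstack stack create -t base_vcpe_infra.yaml -e base_vcpe_infra.env vcpe_infra_test
openstack stack event list vcpe_infra_test      # follow resource creation progress

This catches template syntax errors and missing parameters early, before SDC and SO are involved.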

In the next blog, we will examine other Services and their details.

In the meantime, check out our latest webinar on “What’s new in ONAP Beijing” or request ONAP training if you or your team needs to learn more.

ONAP vFW Blueprint Across Two Regions

This post originally appeared on Aarna Networks. Republished with permission.

In the last blog we talked about how to use a public OpenStack cloud such as VEXXHOST as the NFVI/VIM layer for the ONAP vFW blueprint along with a containerized version of ONAP orchestrated by Kubernetes.

As we discussed, in reality, the traffic source and the vFW VNF are unlikely to be in the same cloud.  In this blog, we will briefly discuss how the vFW blueprint can span two different VEXXHOST tenants. This is not quite the same as two different cloud regions, but it is a pretty close simulation.

The two VNFs will be placed as follows:

  • vFW_PG (packet generator) on VEXXHOST Tenant1

  • vFW_SINC (compound VNF that consists of the vFW VNF and a sink VNF to receive packets) on VEXXHOST Tenant2

With the ONAP infrastructure itself already taken care of, here are the steps to connect ONAP to VEXXHOST. First, follow the steps from the “Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP” blog to register both tenants as two regions in ONAP. Next:

  1. Create an account on VEXXHOST with 2 different tenants.

  2. If registering VEXXHOST into A&AI using the ESR UI, keep the password length under 20 characters.

  3. On Tenant1, manually create the OAM and unprotected_private networks, with different subnets than on Tenant2.

  4. On Tenant2, create an OAM network using the VEXXHOST cloud Horizon dashboard.

  5. Add security rules to allow ingress ICMP, SSH and all the other required ports, along with IPv6, on both tenants.

  6. Edit the base_vfw.env and base_vpkg.env VNF descriptor files to configure them correctly based on the respective Tenants.

  7. Copy the above parameters into a text editor for use in the subsequent A&AI registration, SDN-C preload, and APP-C⇔vFW_PG VNF netconf connection.

Now follow the steps from the vFW wiki that involve:

  1. SDC designer role: Create vendor license model

  2. SDC designer/tester role: Onboard and test VNFs (or vendor software product i.e. VSP)

  3. SDC designer role: Design network service

  4. SDC tester role: Test network service

  5. SDC governor role: Approve network service

  6. SDC ops role: Distribute network service

  7. VID: Instantiate network service

  8. VID: Add VNFs to network service

  9. SDN-C preload: Configure runtime parameters (for us, design-time and run-time parameters are the same); preload vFW_SINC on Tenant2 and vFW_PG on Tenant1

  10. VID: Add VFs to network service — this step orchestrates networks and VNFs onto OpenStack

Upon completion of these steps, you should be able to go to the VEXXHOST Horizon GUI and see the VNFs coming up. Give them ~15 minutes to boot and another ~15 minutes to be fully configured. See the screenshots below:

vFW Network Topology on Tenant2

vFW Network Topology on Tenant1

VNF SINC Stack Orchestrated on OpenStack Tenant2

VNF PG Stack Orchestrated on OpenStack Tenant1
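If you prefer the command line to Horizon, the same check can be done per tenant with the OpenStack CLI. A minimal sketch, assuming your clouds.yaml defines entries for the two tenants (the names tenant1 and tenant2 are hypothetical):

# Check each tenant separately; the stack names come from the SDN-C preload.
openstack --os-cloud tenant1 stack list         # expect the vFW_PG stack here
openstack --os-cloud tenant2 stack list         # expect the vFW_SINC stack here
openstack --os-cloud tenant2 server list        # vFW and sink VMs booting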

Did you try this out? How did it go? We look forward to your feedback. In the meantime, if you are looking for ONAP training, professional services or development distros (basically an easy way to try out ONAP in under an hour), please contact us.

Useful links: ONAP Wiki, vFWCL Wiki, Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP

Debugging ONAP OOM Failures


Originally published on Aarna Networks, republished with permission.

On May 21, Amar Kapadia and I conducted a webinar on the topic of “Debugging OOM Failures”.

We started off by giving some context. Our objective was to develop a lightweight, repeatable lab environment for ONAP training on Google Cloud. We also plan to offer this image to developers that need a sandbox environment. To accomplish this, we used ONAP Amsterdam along with OPNFV Euphrates. ONAP was installed using OOM, which uses Kubernetes and Helm. All of this software was installed on a single VM on Google Cloud.

For most users, issues that pop up once in a while are OK. However, for us, the deployment process needed to be consistent and repeatable. For this reason, we had to debug every intermittent failure and develop a single-click workaround script.

The webinar next talked about the 7 issues we faced, how we debugged them, and what the workarounds were. Other than failure #7, all of the failures were intermittent (a generic debugging sketch follows the list):

  1. AAI containers failed to transition to Running state

  2. SDC UI is not getting loaded

  3. SDC Service Distribution Error

  4. VID Service Deployment Error

  5. VID ADD VNF Error

  6. SDNC User creation failed

  7. Robot init_robot failed with missing attributes
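For issues like #1, where containers never reach the Running state, the first pass in any Kubernetes-based ONAP deployment looks roughly the same. A minimal sketch, assuming the OOM convention of per-component namespaces such as onap-aai (the pod name is hypothetical):

# Find pods stuck outside the Running state and inspect why.
kubectl get pods --all-namespaces | grep -v Running
kubectl -n onap-aai describe pod aai-service-1234    # events: image pulls, probes, mounts
kubectl -n onap-aai logs aai-service-1234            # container logs, if it started at all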

If you are curious to learn more, check out the slide deck or video links above. Additionally, if you have ONAP training or PoC needs, or simply feel like trying out the VM image on GCP, feel free to contact us. We have a whole portfolio of training, services and product offerings.

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 2/2)


Originally published on Aarna Networks, republished with permission.

In the previous installment of this two-part blog series, we looked at why NFV clouds are likely to be highly distributed and why the management and orchestration software stack needs to support these numerous clouds. ONAP is one such network automation software stack. We saw the first three steps of what it takes to register multiple OpenStack cloud regions in ONAP for the vFW use-case (other use cases might need slight tweaking).

Let’s pick up where we left off and look at the remaining steps 4-7:

Step 4: Associate Cloud Region object(s) with a subscriber’s service subscription
With this association, this cloud region will be populated into the dropdown list of available regions for deploying VNF/VF-Modules from VID.

Example script to associate the cloud region “CloudOwner/Region1x” with subscriber “Demonstration2”, which subscribes to the service “vFWCL”:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/business/customers/customer/Demonstration2/service-subscriptions/service-subscription/vFWCL/relationship-list/relationship \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "related-to": "tenant",
    "related-link": "/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/tenants/tenant/<Project ID>",
    "relationship-data": [
        {
            "relationship-key": "cloud-region.cloud-owner",
            "relationship-value": "CloudOwner"
        },
        {
            "relationship-key": "cloud-region.cloud-region-id",
            "relationship-value": "<Cloud Region - should match the physical infra>"
        },
        {
            "relationship-key": "tenant.tenant-id",
            "relationship-value": "<Project ID>"
        }
    ],
    "related-to-property": [
        {
            "property-key": "tenant.tenant-name",
            "property-value": "<OpenStack User Name>"
        }
    ]
}'
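To confirm the association took effect, you can read the subscription back from A&AI. A minimal sketch, reusing the credentials and headers above (-k skips certificate verification, and the depth parameter asks A&AI to expand nested objects):

curl -k -X GET \
  'https://<AAI_VM1_IP>:8443/aai/v11/business/customers/customer/Demonstration2/service-subscriptions/service-subscription/vFWCL?depth=all' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999'
# The response should contain a relationship-list entry pointing at
# CloudOwner/Region1x and the tenant you associated.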

Step 5: Add Availability Zones to AAI
Now we need to add an availability zone to the region we created in step 2.

Example script to add an OpenStack availability zone name, e.g. ‘nova’, to Region1x:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/availability-zones/availability-zone/<OpenStack ZoneName> \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: 4e6e55b9-4d84-6429-67c4-8c96144d1c52' \
  -H 'real-time: true' \
  -H 'x-fromappid: AAI' \
  -H 'x-transactionid: 9999' \
  -d '{
    "availability-zone-name": "<OpenStack ZoneName>",
    "hypervisor-type": "<Hypervisor>",
    "operational-status": "Active"
}'

Step 6: Register VIM/Cloud instance with SO
SO does not utilize the cloud region representation from A&AI. It stores information about the VIM/Cloud instances inside the container (in the case of OOM) as a configuration file. To add a VIM/Cloud instance to SO, log into the SO service container and then update the configuration file “/etc/mso/config.d/cloud_config.json” as needed.

Example steps to add VIM/cloud instance info to SO:

# Procedure for mso_pass (encrypted)
# Go to the below directory on the Kubernetes server
cd /<shared nfs folder>/onap/mso/mso

# Then run:
$ MSO_ENCRYPTION_KEY=$(cat encryption.key)
$ echo -n "your password in cleartext" | openssl aes-128-ecb -e -K $MSO_ENCRYPTION_KEY -nosalt | xxd -c 256 -p

# Take the output and put it against the mso_pass value in the JSON
# file below. Template for adding a new cloud site and the associated
# identity service:
$ sudo docker exec -it <mso container id> bash
root@mso:/# vi /etc/mso/config.d/mso_config.json

"mso-po-adapter-config":
   {
     "identity_services":
     [
       {
         "dcp_clli1x": "DEFAULT_KEYSTONE_Region1x",
         "identity_url": "<keystone auth URL https://<IP or Name>>/v2.0",
         "mso_id": "<OpenStack User Name>",
         "mso_pass": "<created above>",
         "admin_tenant": "service",
         "member_role": "admin",
         "tenant_metadata": "true",
         "identity_server_type": "KEYSTONE",
         "identity_authentication_type": "USERNAME_PASSWORD"
       },
     …
     "cloud_sites":
     [
       {
         "id": "Region1x",
         "aic_version": "2.5",
         "lcp_clli": "Region1x",
         "region_id": "<OpenStack Region>",
         "identity_service_id": "DEFAULT_KEYSTONE_Region1x"
       },
     …

# Save the changes and restart the MSO container

# Check the new config; the output below should match the parameters
# used in the curl commands:
http://<mso-vm-ip>:8080/networks/rest/cloud/showConfig

# Sample output:

Cloud Sites:

CloudSite: id=Region11, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region11, aic_version=2.5, clli=Region11

CloudSite: id=Region12, regionId=RegionOne, identityServiceId=DEFAULT_KEYSTONE_Region12, aic_version=2.5, clli=Region12

Cloud Identity Services:

Cloud Identity Service: id=DEFAULT_KEYSTONE_Region11, identityUrl=<URL>/v2.0, msoId=<OpenStackUserName1>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD

Cloud Identity Service: id=DEFAULT_KEYSTONE_Region12, identityUrl=https://auth.vexxhost.net/v2.0, msoId=<OpenStackUserName2>, adminTenant=service, memberRole=admin, tenantMetadata=true, identityServerType=KEYSTONE, identityAuthenticationType=USERNAME_PASSWORD

Step 7: Change Robot service to operate with the VIM/Cloud instance
When using OOM, the Robot service by default supports the pre-populated cloud region whose cloud-owner is “CloudOwner” and whose cloud-region-id is specified via the “openstack_region” parameter during deployment of the ONAP instance through the OOM configuration files. This cloud region information can be updated in the file “/share/config/vm_properties.py” inside the Robot container. Appropriate relationships between cloud regions and services need to be set up in the same file for the Robot service tests to pass.
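A minimal sketch of getting at that file in an OOM deployment (the container lookup is illustrative; your container name or ID will differ):

# Find the Robot container and edit its cloud region properties.
sudo docker ps | grep robot                     # note the container ID
sudo docker exec -it <robot container id> bash
root@robot:/# vi /share/config/vm_properties.py # update the cloud region entries
# Re-run the Robot health and instantiation tests after saving the change.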

Note:  Robot framework does not rely on Multi-VIM/ESR.

If you have done all 7 steps correctly, Robot tests should pass and both regions should appear in the VID GUI.

If you liked (or disliked) this blog, we’d love to hear from you. Please let us know. Also, if you are looking for ONAP training, professional services or development distros (basically an easy way to try out ONAP on Google Cloud in under an hour), please contact us. Professional services include ONAP deployment, network service design/deployment, VNF onboarding, custom training, etc.

References:

ONAP Wiki

vFWCL Wiki

Orchestrating Network Services Across Multiple OpenStack Regions Using ONAP (Part 1/2)


Originally published on Aarna Networks, republished with permission.

NFV clouds are going to be distributed by their very nature. VNFs and applications will be distributed as per the figure below: horizontally across edge (access), regional datacenter (core) and hyperscale datacenters (which could be public clouds), or vertically across multiple regional or hyperscale datacenters.

Distributed NFV Clouds

The Linux Foundation Open Network Automation Platform (ONAP) project is a management and orchestration software stack that automates network/SDN service deployment, lifecycle management and service assurance. For the above-mentioned reasons, ONAP is designed to support multiple cloud regions from the ground up.

In this two-part blog, we will walk you through the exact steps to register multiple cloud regions with ONAP for the virtual firewall (vFW) use-case that primarily utilizes SDC, SO, A&AI, VID and APP-C projects (other use cases will be similar but might require slightly different instructions). Try it out and let us know how it goes.

Prerequisites
  1. ONAP Installation (Amsterdam release)

  2. OpenStack regions spread across different physical locations

  3. Valid Subscriber already created under ONAP (e.g Demonstration2)

If you do not have the above, and still want to try this out, here are some alternatives:

ONAP Region Registration Steps

There are 3 locations where VIM/cloud instance information is stored: A&AI, SO and Robot. The following 7 steps outline the process and provide sample API calls.

Step 1: Create Complex object(s) in AAI

A complex object in A&AI represents the physical location of a VIM/Cloud instance. Create a complex object for each OpenStack region that needs to be configured under ONAP.

Example script to create a complex object named clli1x:

# Main items to be changed are highlighted, but most of the below
# information should be customized for your region
curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/complexes/complex/clli1x \
  -H 'X-TransactionId: 9999' \
  -H 'X-FromAppId: jimmy-postman' \
  -H 'Real-Time: true' \
  -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Cache-Control: no-cache' \
  -H 'Postman-Token: 734b5a2e-2a89-1cd3-596d-d69904bcda0a' \
  -d '{
        "physical-location-id": "clli1x",
        "data-center-code": "example-data-center-code-val-6667",
        "complex-name": "clli1x",
        "identity-url": "example-identity-url-val-28399",
        "physical-location-type": "example-physical-location-type-val-28399",
        "street1": "example-street1-val-28399",
        "street2": "example-street2-val-28399",
        "city": "example-city-val-28399",
        "state": "example-state-val-28399",
        "postal-code": "example-postal-code-val-28399",
        "country": "example-country-val-28399",
        "region": "example-region-val-28399",
        "latitude": "example-latitude-val-28399",
        "longitude": "example-longitude-val-28399",
        "elevation": "example-elevation-val-28399",
        "lata": "example-lata-val-28399"
    }'

Step 2: Create Cloud Region object(s) in A&AI

The VIM/Cloud instance is represented as a cloud region object in A&AI and ESR. This representation will be used by VID, APP-C, VFC, DCAE, MultiVIM, etc. Create a cloud region object for each OpenStack Region.

Example script to create cloud region object for the same cloud region:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: f7c57ec5-ac01-7672-2014-d8dfad883cea' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "cloud-owner": "CloudOwner",
    "cloud-region-id": "Region1x",
    "cloud-type": "openstack",
    "owner-defined-type": "t1",
    "cloud-region-version": "<OpenStack Version>",
    "cloud-zone": "<OpenStack Cloud Zone>",
    "complex-name": "clli1x",
    "identity-url": "<keystone auth URL https://<IP or Name>/v3>",
    "sriov-automation": false,
    "cloud-extra-info": "",
    "tenants": {
        "tenant": [
            {
                "tenant-id": "<OpenStack Project ID>",
                "tenant-name": "<OpenStack Project Name>"
            }
        ]
    },
    "esr-system-info-list":
    {
        "esr-system-info":
        [
            {
                "esr-system-info-id": "<Unique uuid, e.g. 432ac032-e996-41f2-84ed-9c7a1766eb29>",
                "service-url": "<keystone auth URL https://<IP or Name>/v3>",
                "user-name": "<OpenStack User Name>",
                "password": "<OpenStack Password>",
                "system-type": "VIM",
                "ssl-cacert": "",
                "ssl-insecure": true,
                "cloud-domain": "Default",
                "default-tenant": "<Project Name>"
            }
        ]
    }
}'

Step 3: Associate each Cloud Region object with corresponding Complex Object
This needs to be set up for each cloud region with the corresponding complex object.

Example script to create the association:

curl -X PUT \
  https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x/relationship-list/relationship \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -H 'postman-token: e68fd260-5cac-0570-9b48-c69c512b034f' \
  -H 'real-time: true' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999' \
  -d '{
    "related-to": "complex",
    "related-link": "/aai/v11/cloud-infrastructure/complexes/complex/clli1x",
    "relationship-data": [{
            "relationship-key": "complex.physical-location-id",
            "relationship-value": "clli1x"
    }]
}'
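Before moving on, it is worth reading the cloud region back to confirm both the object and its new relationship to the complex. A minimal sketch, reusing the headers above (-k skips certificate verification):

curl -k -X GET \
  'https://<AAI_VM1_IP>:8443/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/Region1x?depth=all' \
  -H 'accept: application/json' \
  -H 'authorization: Basic QUFJOkFBSQ==' \
  -H 'x-fromappid: jimmy-postman' \
  -H 'x-transactionid: 9999'
# The response should show the tenant, the esr-system-info entry, and a
# relationship-list entry pointing at the clli1x complex.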

We will cover the remaining 4 steps in the next and final installment of this blog series.

In the meantime, if you are looking for ONAP training, professional services or development distros (basically an easy way to try out ONAP in under an hour), please contact us.

How service providers can use Kubernetes to scale NFV transformation


This post originally appeared on LinkedIn. Republished with permission by Jason Hunt, Distinguished Engineer of IBM.

After attending two major industry events—IBM’s Think and the Linux Foundation’s Open Networking Summit (ONS)—I’ve been thinking about how software and networking are evolving and merging in a way that can really benefit service providers.

It’s been interesting to watch how NFV has changed over the past few years. At first, NFV dealt simply with virtualization of physical network elements. Then as network services grew from simple VNFs to more complex combinations of VNFs, ONAP came along to provide lifecycle management of those network functions. Now, with 5G on the doorstep, service providers will need to shift the way they approach NFV deployments yet again.

Why? As Verizon’s CEO Lowell McAdam told IBM’s CEO Ginni Rometty at IBM Think, 5G will deliver 1 Gbps throughput to devices with 1 ms of latency, while allowing service providers to connect 1,000 times more devices to every cell site. In order to support that, service providers need to deploy network functions at the edge, close to where those devices are located.

But accomplishing that kind of scale can’t be done manually. It has to be done through automation at every level. And for that, service providers can leverage the kind of enterprise-level container management that’s possible with Kubernetes. Kubernetes allows service providers to provision, manage, and scale applications across a cluster. It also allows them to abstract away the infrastructure resources needed by applications. In ONAP’s experience, running on top of Kubernetes, rather than virtual machines, can reduce installation time from hours or weeks to just 20 minutes.
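The scaling point is easy to make concrete. A minimal, generic sketch of the kind of one-line operations Kubernetes gives an operator (the deployment name is hypothetical, not a specific ONAP component):

# Scale a containerized network function to five replicas; Kubernetes
# schedules them across the cluster and replaces them if a node fails.
kubectl scale deployment vnf-packet-processor --replicas=5
kubectl get pods -l app=vnf-packet-processor -o wide    # see placement across nodes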

At the same time, service providers are utilizing a hybrid mixture of public and private clouds to run their network workloads. However, many providers at ONS expressed frustration at the incompatibility across clouds’ infrastructure provisioning APIs. This lack of harmonization is hampering their ability to deploy and scale NFV when and where needed.

Again, Kubernetes can help service providers meet this challenge. Since Kubernetes is supported across nearly all clouds, it can expose a common way to deploy workloads. Arpit Joshipura, GM Networking at the Linux Foundation, demonstrated this harmonization on the ONS keynote stage. With help from the Cloud-CI project in the Cloud Native Computing Foundation (CNCF), Arpit showed ONAP being deployed across public and private clouds (including IBM Cloud) and bare metal. Talk about multi-cloud!

Last October, IBM announced IBM Cloud Private, an integrated environment that enables you to design, develop, deploy and manage on-premises, containerized cloud applications behind your firewall. IBM Cloud Private includes Kubernetes, a private image repository, a management console and monitoring frameworks. We’ve documented how ONAP can be deployed on IBM Cloud Private, giving service providers a supported option for Kubernetes in an on-premises cloud.

At ONS, AT&T’s CTO Andre Fuetsch stated, “Software is the future of our network.” With 5G getting closer to the mainstream every day, the best-prepared service providers will look at how to combine the best of the software and network worlds together. Exploring the benefits of a Kubernetes-based environment might just be the best answer for their NFV deployment plans.

From Portal to SDC: Inside the ONAP Architecture


See below for a quick overview of ONAP informational videos from Architecture sub-committee members Manoop Talasila and Michael Lando.

The ONAP platform is made up of several software subsystems and two major architectural frameworks: a design-time environment to design, define and program the platform, and an execution-time environment to execute the logic programmed in the design phase. Whether new to the platform or well-versed, understanding the ONAP architecture is critical to deployment, and our latest video series is here to help.

To kick off the series, we’ll focus on the ONAP Portal and Service Design and Creation (SDC). The videos feature two key members of the ONAP architecture team: Manoop Talasila, Portal Technical Lead at AT&T Research Labs, and Michael Lando, Service Design Technical Lead at AT&T. In our video series, Manoop covers the ONAP Portal, and Michael covers Service Design and Creation (SDC).

Video 1: ONAP Portal

Manoop takes a beginner’s look at the ONAP Portal, focusing on the platform and its ability to integrate different applications into a centralized portal core. Additional capabilities of the Portal include application onboarding and management, decentralized access management, and hosted application features, as detailed in the video.

Want to learn more about the ONAP portal and network operations? Dive in: watch Manoop’s full video now. 


Video 2: Service Design and Creation (SDC)

SDC, an Integrated Development Environment (IDE), is a subsystem of the design-time framework, accessible through the ONAP Portal. In the video, Michael explains that, as an IDE, SDC provides the tools for designing services as well as creating the artifacts necessary for service orchestration. With its graphical interface and visual tools, users can drag and drop components onto the SDC canvas to model a service, see what is connected where, what the capabilities are, and what requirements each VNF brings to the service.

As the design time component, SDC handles all design time activities. Check out the full video below to hear Michael’s explanation of SDC in ONAP.  

Interested in learning more about the ONAP Architecture? Take a look at the full video series here and read the Architecture Whitepaper.

2017 ONAP Community Awards Shine Spotlight on Collaboration


As we reflect upon 2017 and the successful launch of Amsterdam, we are proud to announce the winners of the inaugural ONAP Community Awards acknowledging individual and community contributions to the success of the project. We were gratified to see strong participation, with 87 nominations representing 53 individuals and projects, and 571 total votes cast.

The community recognized the winners on December 11 at the ONAP Developer Forum in Santa Clara, CA. Details about each award category and its winner appear below. Please join us in congratulating all of our nominees and winners!

Top Achievement Award: Catherine Lefevre, AT&T

The community recognized Catherine Lefevre for her dedication to the project and her pivotal role in the successful merger of multiple code bases and timely delivery of the Amsterdam release. Catherine worked tirelessly across many groups and companies to evangelize ONAP globally, while working closely with the Technical Steering Committee (TSC) to work toward Amsterdam’s release date.

Citizenship Award: Chris Donley, Huawei

Chris Donley provided the most assistance to others outside of his own ONAP project through code reviews, debugging, bug fixes and more, furthering collaboration across the large, distributed ONAP community. His work on the Architecture Committee and TSC and time spent educating and guiding others set the standard for communication across the team.

Marketing Award: Lingli Deng, China Mobile

Lingli Deng provided significant support to the ecosystem teams and championed ONAP across a variety of mediums. She also spoke on behalf of ONAP at events around the world and led the review team for the project’s VoLTE whitepaper. Additionally, Lingli contributed two technical videos in English, three in Chinese, and is a frequent coordinator of Chinese contributions to ecosystem development activities.

Code Contribution Award: Seshu Kumar, Huawei

PTLs, Committers and Contributors selected Seshu Kumar to receive the Code Contribution Award based on the quantity and quality of his code. He played an important role in helping the Service Orchestrator (SO) project reach critical milestones and in resolving blocking issues. Seshu is one of the top code contributors to ONAP overall.

Project Achievement Award: The Integration Team

The Integration Team worked together for the first time on Amsterdam, yet they met the tight release deadline.

Innovation Award: The ONAP Operations Manager (OOM) Project Team

The OOM Project Team deployed ONAP on containers to support the Amsterdam release.

Top Predictions for Networking in 2018


Arpit Joshipura, GM of Networking and Orchestration at the Linux Foundation, shares his 2018 predictions for the networking industry.

1. 2015’s buzzwords are 2018’s course curriculum.
SDN, NFV, VNF, Containers, Microservices — the hype crested in 2016 and receded in 2017. But don’t mistake quiet for inactivity; solution providers and users alike have been hard at work re-architecting and maturing solutions for key networking challenges. And now that these projects are nearing production, these topics are our most requested areas for training.

2. Open Source networking is crossing the chasm – from POCs to Production.
The ability for users and developers to work side by side in open source has helped projects mature quickly — and vendors to rapidly deliver highly relevant solutions to their customers. For example:

3. Top networking vendors are embracing a shift in their business models…

  • Hardware-centric to software-centric: value-add from rapid customization
  • Proprietary development to open-source, shared development
  • Co-development with end users, reducing time to deployment from 2 years to 6 months

4. Industry-wide adoption of 1-2 Network Automation platforms will enable unprecedented mass customization.
The need to integrate multiple platforms, taking into account each of their unique feature sets and limitations, has traditionally been a massive barrier to rapid service delivery.

In 2018, mature abstractions and standardized processes will enable user organizations to rapidly onboard and orchestrate a diverse set of best-of-breed VNFs and PNFs as needed.

5. Advances in cloud and carrier networking are driving skills and purchasing shifts in the enterprise.
The ease and ubiquity of public cloud for simple workloads has reset end user expectations for Enterprise IT. The carrier space has driven maturity of open networking solutions and processes. Enterprise IT departments are now at a crossroads:

  • How many and which of their workloads and processes do they want to outsource?
  • How can they effectively support those workloads remaining in-house with the same ease and speed users expect?
  • What skills will IT staff need, and how will they get them?

Which brings us to…

6. Prediction #1 will also lead off our Predictions list for 2019.

Announcing Amsterdam, ONAP’s First Code Release


We are pleased to announce the launch of Amsterdam, ONAP’s first unified platform release. We are incredibly proud of the community for all of the hard work that has gone into this release, which comes just eight months after the project launched. A Herculean effort from a diverse and growing community, Amsterdam not only merges two separate, existing projects (OpenECOMP and Open-O), but re-architects and optimizes the code base into a single, flexible and modular platform. Amsterdam delivers a unified architecture for end-to-end, closed-loop network automation, which is becoming a mandatory requirement ahead of 5G and IoT deployments, and by doing so celebrates a new milestone for open source networking.

None of this would have been possible without vast support and contributions of time, resources, and passion from across the growing, global ONAP community. Since coming together, ONAP now comprises:

  • 58 members (Turk Telekom just joined as the newest Platinum member), including major global carriers, collectively representing more than 55% of the world’s mobile subscribers
  • All top 10 networking vendors, plus leading global services firms
  • 538 contributors from 46 different organizations (including 7 carriers)

With Amsterdam, ONAP is already delivering on its promise to accelerate the development of a vibrant ecosystem around a globally shared architecture and implementation for network automation. Additionally, ONAP is the first open source project to unite the majority of operators (end users) with the majority of vendors (integrators) in building a real service automation and orchestration platform.

Key Features of Amsterdam

So let’s get to the heart of the release: the key features. At a high level, Amsterdam is the first vendor-agnostic, policy-driven network orchestration and automation platform with closed-loop automation.

For ONAP’s first platform release, it was crucial to lay out a framework that would benefit not only the technology but also the ecosystem. As a result, the Technical Steering Committee (TSC) developed a high-level framework of initial requirements for the release, which have come to fruition with Amsterdam. The requirements include things like modularity; enhancement of model-driven design; the addition of new features/functionalities; open code; upstream collaboration; harmonization with standards bodies; and CI/CD, to name a few.

Architectural principles, and how they funnel into the stack, are also a critical element of the release. These include:

  • Policy- and Model-Driven: the ability to automate according to intent, without hard-coding
  • Cloud-Native: built for the cloud, with the ability to manage cloud-native VNFs
  • DevOps CI/CD: built using CI/CD; manage VNFs using CI/CD; move from Break/Fix to Plan/Build

Amsterdam Architecture Diagram:

Another key capability of Amsterdam is VNF automation. In current deployments, every VNF works in a different way and can start/stop at various points, so configuring a variety of VNFs is a very time-consuming, manual process. Amsterdam standardizes VNF automation with a streamlined deployment process that happens through the abstraction layer (via the virtual controller) and has an SDK plug-in. More details on automation and CLAMP can be found on the ONAP Developer Wiki, here.

Additionally, ONAP’s focus on automation helps move the industry forward in accelerating the development and deployment of next-gen use cases. Automation is mandatory for 5G deployments due to the scale, access, bandwidth, and latency it demands. Amsterdam provides the platform to deploy new services to enable 5G in an automated manner.


Real-World Support for Amsterdam

Despite ONAP’s infancy, we are proud to see early deployments, testing, and POCs using various modules of pre-release code from member carriers and vendors including Amdocs, AT&T, Bell Canada, China Mobile, China Telecom, Fujitsu, Huawei, Orange, Vodafone and many others. Members have also been testing blueprints of early use cases like VoLTE and residential vCPE. Stay tuned for more information on user stories in the near future!  (To see what members are saying in support of the Amsterdam release, please visit our Member Quotes page here. Additional videos with commentary from members can be found here.)

Interested in learning more about ONAP or getting involved with the community? Join us for the ONAP Developer Design Forum, Dec. 11-13 in Santa Clara, California! More details on the event, including how to register, are available here.

Please join me in congratulating the ONAP community on its first release!