Objective:
This blog gives a comprehensive overview of the process for integrating containerized network services in a multi-vendor setting, specifically using Nokia Service Routers (SR) and Juniper Aggregation Routers (AG). It emphasizes the importance of NNI (Network-to-Network Interface) policies for smooth communication between network segments and shows how to configure management interfaces and routing to optimize connectivity and traffic flow. The goal is to help network engineers implement scalable, interoperable solutions while making complex network operations easier to manage.
Scenario: At the Vodafone ISP Pune location, two non-production SR devices need to be launched into production. Router R1 acts as the primary and Router R2 as the secondary (backup) router. Traffic from both routers is aggregated at the uplink Nasik Aggregation router, which in turn connects to the IXR router at the Mumbai location. To automate the process flow, AWS containers are created for both SR routers at the Pune location and, in the same way, for the IXR router at the Mumbai location.
Fig. 1: Interconnectivity of Overlay Network and Underlay Network
Where R1, R2 = Nokia 7750 SR routers
AG = Aggregate router = Juniper
IXR = Nokia Interconnect router
Table 1: Physical connectivity of the network topology
What is an underlay network?
An underlay network is the fundamental physical structure of a network: the routers, switches, and cables (fiber optics, Ethernet) that link them together. It runs standard IP routing protocols to move data through the network. The underlay provides the links and services that overlay networks, i.e. virtual networks superimposed on the physical layer, require for communication, and a solid underlay is necessary for the performance and reliability of both conventional and SDN architectures.
What is an overlay network?
An overlay network is a virtual network that spans one or more physical networks and provides logical communication between devices, applications, or containers that may not share a physical location. In cloud and container environments, overlay networks connect workloads deployed as virtual machines or containers; technologies such as VXLAN, or Kubernetes networking solutions like Calico and Flannel, provide this communication. This layer of abstraction simplifies network management, enhances scalability, enables segmentation, and lets distributed workloads communicate as if they were on the same local area network (LAN), regardless of the geographical distance between data centers or clouds.
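As a small illustration of the encapsulation an overlay such as VXLAN performs, the sketch below packs the 8-byte VXLAN header (I flag set, 24-bit VNI) in Python. This is a minimal sketch of the header format only, not vendor tooling; the VNI value 1001 is an assumption chosen to echo the MGMT VLAN used later in this post.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 carries the flags (0x08 = valid-VNI bit), bytes 1-3 are
    reserved, bytes 4-6 hold the 24-bit VNI, byte 7 is reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Two 32-bit words: flags byte in the top of the first word,
    # VNI shifted left past the final reserved byte in the second.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame.
    (The outer UDP/IP headers are added by the transport stack.)"""
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(1001)  # hypothetical VNI matching MGMT VLAN 1001
print(hdr.hex())  # -> 080000000003e900
```

The underlay only ever sees the outer headers; the inner frame travels opaquely inside this encapsulation, which is exactly why overlay endpoints can behave as if they shared one LAN.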
Correlation between underlay and overlay networks
The underlay network and the overlay network are two sides of the same coin. The underlay consists of the physical infrastructure and interconnections, where data packets are routed over standard protocols, equipment, ports, and wires. The overlay is built on top of these primitives: it virtualizes logical links to mask the physical structure and adds value in the form of segmentation, service enhancement, and multi-cloud connectivity. The overlay relies on the underlay to carry its encapsulated packets from one endpoint to another, while the underlay benefits from the flexible, easy management the overlay offers. Together, they deliver the fast, elastic networks that today's cloud, container, and software-defined networking workloads require.
Fig. 2: Flow chart of the correlation between overlay and underlay networks via the NNI interface
1. Start:
The process begins with the intention to configure a containerized network service within a multi-vendor infrastructure (Nokia SR and Juniper AG devices).
2. Create Container:
A container is created on the infrastructure. This could involve provisioning a virtualized container in a network management system or a virtual network function (VNF) environment. The container will later house routing services or configurations as part of the network topology.
Code:
version: '3.8'
services:
  pune_r1:
    image: pune_r1
    privileged: true
    networks:
      my-net:
        ipv4_address: 96.205.113.18
  pune_r2:
    image: pune_r2
    privileged: true
    networks:
      my-net:
        ipv4_address: 96.205.113.19
networks:
  my-net:
    driver: bridge
    ipam:
      config:
        - subnet: 96.205.113.16/28
3. Assign IP Block to Container:
An IP block or subnet is allocated to the container. This step ensures that the container has its own IP range for internal and external communication. This is essential for proper routing and reachability across the network.
96.205.113.17/28 acts as the gateway (GW) for the containers
MGMT VLAN: 1001
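The /28 addressing plan above can be sanity-checked with Python's standard ipaddress module. This is a verification sketch of the values used in this post (gateway 96.205.113.17, container addresses .18 and .19), not vendor tooling:

```python
import ipaddress

# The IP block allocated to the containers (from the compose file above).
subnet = ipaddress.ip_network("96.205.113.16/28")

gateway = ipaddress.ip_address("96.205.113.17")  # container GW
pune_r1 = ipaddress.ip_address("96.205.113.18")
pune_r2 = ipaddress.ip_address("96.205.113.19")

hosts = list(subnet.hosts())   # usable addresses .17 through .30
print(subnet.num_addresses)    # -> 16 addresses in a /28
print(hosts[0])                # -> 96.205.113.17, first usable = the GW

# Every planned address must fall inside the allocated block.
for addr in (gateway, pune_r1, pune_r2):
    assert addr in subnet
```

A /28 leaves 14 usable host addresses, so this block has room for additional containers beyond pune_r1 and pune_r2 if the site grows.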
4. MGMT Configuration on Nokia SR Device:
The management configuration is applied on the Nokia Service Router (SR).
This step includes:
- Setting up the MGMT interface to enable communication with the container.
- Configuring routes, protocols (such as BGP or OSPF if required), and management access for container-related operations.
- Verifying reachability between the Nokia SR device and the container.
Code:
Pune_r1:
/bof router "management" { }
/bof router "management" { interface "management" }
/bof router "management" { interface "management" cpm active }
/bof router "management" { interface "management" cpm active ipv4 }
/bof router "management" { interface "management" cpm active ipv4 ip-address 96.205.113.18 }
/bof router "management" { interface "management" cpm active ipv4 prefix-length 28 }
/bof router "management" { static-routes }
/bof router "management" { static-routes route 0.0.0.0/0 }
/bof router "management" { static-routes route 0.0.0.0/0 next-hop 96.205.113.17 }
Pune_r2:
/bof router "management" { }
/bof router "management" { interface "management" }
/bof router "management" { interface "management" cpm active }
/bof router "management" { interface "management" cpm active ipv4 }
/bof router "management" { interface "management" cpm active ipv4 ip-address 96.205.113.19 }
/bof router "management" { interface "management" cpm active ipv4 prefix-length 28 }
/bof router "management" { static-routes }
/bof router "management" { static-routes route 0.0.0.0/0 }
/bof router "management" { static-routes route 0.0.0.0/0 next-hop 96.205.113.17 }
5. NNI Policy Creation on Juniper AG Device:
On the Juniper Aggregation Router (AG), an NNI (Network-to-Network Interface) policy is configured. This policy allows seamless communication between the networks handled by the Nokia SR and Juniper AG devices.
Key configurations may include:
- Defining VLANs or sub-interfaces for NNI.
- Configuring routing protocols (like iBGP/eBGP or L2/L3 VPNs) to establish a policy for traffic forwarding between the SR and AG devices.
- Ensuring appropriate Quality of Service (QoS) or traffic engineering parameters.
Code:
Nasik_AG:
set interfaces ge-1/1/17 apply-groups NNI_Nasik_ag_to_Mumbai_IXR
set interfaces ge-1/1/17 description "Nasik_ag"
set interfaces ge-1/1/17 apply-groups-except DEFAULT_DISABLE
set interfaces ge-1/1/17 ether-options auto-negotiation
set interfaces ge-1/1/17 ether-options speed 100m
set interfaces vlan unit 0 family inet address 96.205.113.27/28
set vlans vlan-1001 l3-interface vlan.0
delete interfaces ge-1/1/17 disable
Mumbai_IXR:
/configure port 1/1/1 { }
/configure port 1/1/1 { admin-state enable }
/configure port 1/1/1 { description "MGMT_NNI_Nasik_ag_to_Mumbai_IXR" }
/configure port 1/1/1 { ethernet }
/configure port 1/1/1 { ethernet collect-stats true }
/configure port 1/1/1 { ethernet mode access }
/configure port 1/1/1 { ethernet encap-type dot1q }
/configure port 1/1/1 { ethernet mtu 9096 }
/configure port 1/1/1 { ethernet autonegotiate true }
/configure port 1/1/1 { ethernet speed 100 }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 admin-state enable }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 split-horizon-group "IRB1001-SHG" }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 ingress }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 ingress qos }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 ingress qos sap-ingress }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 ingress qos sap-ingress policy-name "MGMT-TO-IRB-IN" }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 egress }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 egress qos }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 egress qos egress-remark-policy }
/configure service { vpls "irb1001-vpls" sap 1/1/1:0 egress qos egress-remark-policy policy-name "NNI_MGMT" }
6. MGMT Interface (SR) - Next Hop IP Container Gateway
The MGMT interface on the Nokia SR is set to point to the next hop IP of the container gateway, ensuring that management traffic can flow correctly between the SR and the container.
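The effect of the default route configured in step 4 can be illustrated with a minimal longest-prefix-match lookup in Python. The route table below mirrors this post's plan (0.0.0.0/0 via the container gateway 96.205.113.17) plus an assumed connected route for the management /28; it is a conceptual sketch of router forwarding, not SR OS behavior:

```python
import ipaddress

# (prefix, next_hop) pairs; None marks a directly connected route.
# The connected /28 entry is an assumption mirroring the MGMT subnet.
routes = [
    (ipaddress.ip_network("96.205.113.16/28"), None),
    (ipaddress.ip_network("0.0.0.0/0"),
     ipaddress.ip_address("96.205.113.17")),
]

def lookup(dest: str):
    """Return the next hop for dest using longest-prefix match."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, nh) for net, nh in routes if addr in net]
    # The most specific prefix (largest prefixlen) wins, as on a router.
    net, nh = max(matches, key=lambda m: m[0].prefixlen)
    return nh

print(lookup("96.205.113.19"))  # -> None: on-link within the MGMT /28
print(lookup("8.8.8.8"))        # -> 96.205.113.17, via the default route
```

Any management traffic not destined for the local /28 is therefore handed to the container gateway, which is what makes the SR reachable from (and able to reach) the rest of the overlay.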
Conclusion:
The flowchart presents a clear method for integrating a containerized network service in a multi-vendor setting, utilizing Nokia Service Routers (SR) and Juniper Aggregation Routers (AG). It starts with the creation of the container and the allocation of IP addresses, which gives the container a specific network identity. Management configurations on the Nokia SR device set up control and communication pathways, while NNI policies on the Juniper AG facilitate smooth inter-network traffic flow between devices. The final step is to configure the management interface of the SR device to direct traffic to the container's gateway, ensuring complete connectivity. This process emphasizes the need to align configurations across various vendor devices to support containerized services, improve network scalability, and optimize traffic management.