CloudEngine 12800 Data Center Switches

The CloudEngine 12800 (CE12800) series switches are next-generation, high-performance core switches designed for data center networks and high-end campus networks. Using Huawei's next-generation VRP8 software platform, CE12800 series switches provide stable, reliable, secure, high-performance L2/L3 switching capabilities to help build an elastic, virtualized, and high-quality network.
CE12800 series switches use an advanced hardware architecture and offer the highest performance of any currently available core switch. The CE12800 provides up to 64 Tbit/s of switching capacity and high-density line-speed ports. Each switch has up to 192*100GE, 384*40GE, or 1,536*10GE ports.
The CE12800 series switches use the Clos architecture and provide comprehensive virtualization capabilities along with data center service features. The switches use innovative energy-saving technologies to greatly reduce power consumption. In addition, a front-to-back airflow design provides industrial-grade reliability.
The CE12800 is available in four models: CE12804, CE12808, CE12812, and CE12816. The CE12800 series uses interchangeable components to reduce costs on spare parts. This design ensures device scalability and protects customers' investment.


Next-Generation Core Engine Provides the Industry's Highest Performance

64 Tbit/s Switching Capacity

  • The CE12800 provides 4 Tbit/s per-slot bidirectional bandwidth (scalable to 10 Tbit/s) and a maximum of 64 Tbit/s switching capacity (scalable to more than 160 Tbit/s). This capacity can support sustainable development of cloud-computing data centers for the next 10 years. (A quick arithmetic check of these figures follows this list.)
  • The CE12800, together with the CE6800/5800 series of Top-of-Rack (ToR) switches, can implement the largest non-blocking switching network in the industry. This network can provide access for up to 18,000*10GE servers or 70,000*GE servers and support data center server evolution across four generations: GE, 10GE, 40GE, and ultimately 100GE.
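
As a rough cross-check of these figures, the quoted switching capacities follow directly from multiplying the per-slot bandwidth by each chassis's service slot count. The short sketch below uses only numbers from this datasheet; it is an illustrative calculation, not a description of the switch's internals.

```python
# Sanity check: per-slot bandwidth x service slots = system switching capacity.
# All figures come from this datasheet; the calculation itself is illustrative.

PER_SLOT_TBITS = 4        # current per-slot bidirectional bandwidth (Tbit/s)
PER_SLOT_TBITS_MAX = 10   # scalable per-slot bandwidth (Tbit/s)

SERVICE_SLOTS = {"CE12804": 4, "CE12808": 8, "CE12812": 12, "CE12816": 16}

for model, slots in SERVICE_SLOTS.items():
    print(f"{model}: {slots * PER_SLOT_TBITS}/{slots * PER_SLOT_TBITS_MAX} Tbit/s")

# Output matches the specification table:
# CE12804: 16/40 Tbit/s ... CE12816: 64/160 Tbit/s
```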


Terabit High-Density Line Cards

  • The forwarding capacity of a line card can reach up to 1,200 Gbit/s.
  • The CE12800 line cards provide the industry's highest port densities, from 24*40GE/96*10GE to 12*100GE.
  • The CE12800 provides as many as 192*100GE, 384*40GE, or 1,536*10GE line-speed ports.


Super-Large Buffer of 18 GB

  • All service ports (100GE/40GE/10GE/GE) support a super-large buffer.
  • The distributed buffer mechanism on inbound interfaces can effectively handle incast traffic loads in data centers.
  • The line card provides up to 18 GB buffer, which is dynamically shared by interfaces to improve usage efficiency.
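
The advantage of a dynamically shared buffer over a static per-port split can be seen in a toy model: a port hit by an incast burst can borrow headroom that idle ports are not using. The sketch below is purely conceptual; the port count, cell counts, and burst size are invented for illustration and are not CE12800 parameters.

```python
# Toy model of buffer sharing (illustrative only; not the CE12800's actual algorithm).
# A burst arrives on one port while the other ports are idle. With a static split
# the burst overflows that port's slice; with a shared pool it fits.

TOTAL_BUFFER_CELLS = 1000   # hypothetical buffer size, in cells
PORTS = 10                  # hypothetical port count
BURST_CELLS = 400           # hypothetical incast burst on a single port

# Static partitioning: each port owns an equal slice of the buffer.
static_slice = TOTAL_BUFFER_CELLS // PORTS
static_drops = max(0, BURST_CELLS - static_slice)

# Dynamic sharing: a port may also use free cells from the common pool.
shared_available = TOTAL_BUFFER_CELLS       # all other ports are idle in this toy case
shared_drops = max(0, BURST_CELLS - shared_available)

print(f"static partition: {static_drops} cells dropped")   # 300 cells dropped
print(f"shared pool:      {shared_drops} cells dropped")   # 0 cells dropped
```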


128 Tbit/s Non-Blocking System

  • The CE12816 is the industry's first data center core switch to support a non-blocking system. Two CE12816s can be combined into a CE12832 through a back-to-back Cluster Switch System (CSS) of Switch Fabric Units (SFUs). The resulting device provides 32 service slots and a capacity of 128 Tbit/s.
  • The CE12832 builds a strictly non-blocking system using the Clos architecture. All traffic between the two CE12816 chassis is forwarded without occupying any service interface.
  • A CE12816 can be upgraded to a CE12832 in service, with no impact on running services. This ensures continuous evolution and expansion of the customer service system.


Comprehensive Virtualization Capabilities Implement Simple, Efficient Networking

VS Implements On-demand Resource Sharing

  • Highest device virtualization capability: The CE12800 uses Virtual System (VS) technology to provide an industry-leading virtualization capability that enables one switch to be virtualized into as many as sixteen logical switches. This 1:16 ratio enables one core switch to manage services for an enterprise's multiple service areas such as production, office, and DMZ, or for multiple tenants.
  • Higher security and reliability: VS technology divides a network into separate logical areas for service isolation. The failure of one virtual switch does not affect other virtual switches, enhancing network security.
  • Lower CAPEX: VS technology improves the use efficiency of physical devices by implementing on-demand resource allocation. This ensures network scalability and reduces investment in devices.
  • Lower OPEX: Using one physical device to implement multiple logical devices saves space in a data center equipment room and reduces the cost of device maintenance.


CSS Simplifies Network Management

  • The CE12800 uses industry-leading CSS technology, which can virtualize multiple physical switches into one logical switch to facilitate network management and improve reliability.
  • The CE12800 provides a dedicated system interconnect port and separates the control channel from the service channel, improving reliability.
  • The CE12800 provides a cluster bandwidth of 1.6 Tbit/s. This super-high bandwidth prevents traffic bottlenecks on data center networks.
  • The CE12800 switches establish a cluster using service ports with distances of up to 80 km between cluster member switches.
  • The CE12800 uses CSS+VS synergy technology to turn a network into a resource pool so that network resources can be allocated on demand. On-demand resource allocation is ideal for the cloud-computing service model.


Large-Scale Routing Bridge Supports Flexible Service Deployment

  • The CE12800 series switches support Transparent Interconnection of Lots of Links (TRILL), a standard IETF protocol. The TRILL protocol helps build a large Layer 2 network with more than 500 nodes, which permits flexible service deployments and Virtual Machine (VM) migrations. A TRILL network can use 10GE/GE servers.
  • The TRILL protocol uses a routing mechanism similar to IS-IS and sets a limited Time-to-Live (TTL) value in packets to prevent Layer 2 loops. This significantly improves network stability and speeds up network convergence.
  • On a TRILL network, all data flows are forwarded quickly using Shortest Path First (SPF) and Equal-cost Multi-path (ECMP) routing. SPF and ECMP avoid the problem of suboptimal path selection in the Spanning Tree Protocol (STP) and increase link bandwidth efficiency to 100 percent.
  • The CE12800 supports up to 32 TRILL-based Layer 2 equal-cost paths, greatly improving links' load-balancing capabilities. The network's fat-tree architecture supports easy expansion.
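
Equal-cost multi-path forwarding of this kind is typically flow-based: a hash over a packet's flow identifiers selects one of the equal-cost next hops, so packets of a flow stay in order while different flows spread across all links. The sketch below illustrates that general idea; the hash function and field choice are generic examples, not Huawei's hashing algorithm, and only the 32-path count is taken from the text above.

```python
import hashlib

# Generic flow-based ECMP path selection: hash the flow's 5-tuple and pick one of
# N equal-cost next hops. Packets of the same flow always take the same path
# (preserving order); many flows spread across all paths.

NUM_PATHS = 32  # up to 32 TRILL-based Layer 2 equal-cost paths on the CE12800

def select_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths=NUM_PATHS):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

print(select_path("10.0.0.1", "10.0.1.1", 12345, 80, "tcp"))  # some path 0..31
print(select_path("10.0.0.1", "10.0.1.1", 12345, 80, "tcp"))  # same flow, same path
print(select_path("10.0.0.2", "10.0.1.1", 33333, 80, "tcp"))  # different flow, likely different path
```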


EVN Supports Resource Sharing Across Data Centers

  • The Ethernet Virtual Network (EVN) implements inter-data center Layer 2 interconnection across the IP WAN and integrates multiple data centers into a large IT resource pool, allowing VMs to migrate between data centers. EVN supports Layer 2 interconnection of up to 32 data centers, five times the industry average. EVN combines the advantages of the Border Gateway Protocol (BGP) and Virtual Extensible LAN (VXLAN) to provide high scalability and highly efficient use of bandwidth.
  • High scalability: Based on BGP at the control plane, the CE12800 supports millions of MAC addresses and routes, 32K tenants, and 256K VMs.
  • Highly efficient bandwidth usage: The forwarding plane uses VXLAN encapsulation. Flow-based load balancing is implemented on the entire network, which optimizes bandwidth usage.
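
VXLAN encapsulation itself is a published IETF format (RFC 7348): the original Ethernet frame is carried in UDP (destination port 4789) behind an 8-byte VXLAN header whose 24-bit VXLAN Network Identifier (VNI) identifies the tenant segment. The sketch below simply constructs that header to show where the tenant ID lives and why the address space is so large; it is not tied to the CE12800's forwarding pipeline.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte (I bit set), 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)

print(vxlan_header(vni=5000).hex())     # 0800000000138800
print(f"{2 ** 24:,} possible VNIs")     # 16,777,216 tenant segments
```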


nCenter Implements Fast VM Migration

  • The CE12800 works with Huawei's nCenter automated network management platform to permit network policies to be dynamically deployed on the CE12800. nCenter also supports online VM migration.
  • nCenter delivers network policies through high-speed RADIUS interfaces and performs online VM migration 10 to 20 times faster than other industry platforms, enabling large-scale VM migrations.
  • nCenter is based on open APIs and is compatible with all major virtualization platforms including VMware.


Fully Programmable Switch Permits Agile Service Provisioning

ENP Implements Programmability at the Forwarding Plane

  • The CE12800 is based on Huawei's innovative, programmable Ethernet Network Processor (ENP). The high-performance 480 Gbit/s ENP card brings openness and programmability to the forwarding plane of data center networks for the first time.
  • The ENP card defines and extends network functions through software, so provisioning new services does not require investment in replacement hardware.
  • The ENP card reduces service provisioning time from two years to six months, a four-fold improvement over current industry capabilities that helps customers quickly implement service innovations.


OPS Implements Programmability at the Control Plane

  • The CE12800 uses the Open Programmability System (OPS) embedded in the VRP8 software platform to provide programmability at the control plane.
  • The OPS provides open APIs. APIs can be integrated with mainstream cloud platforms (including commercial and open cloud platforms) and third-party controllers. The OPS enables services to be flexibly customized and provides automatic management.
  • Users or third-party developers can use the open APIs to develop and deploy specialized network management policies, quickly extending service functions, automating deployment, and enabling intelligent management. The OPS also automates operation and maintenance and reduces management costs.
  • The OPS seamlessly integrates data center services with the network and supports a service-oriented, Software-Defined Network (SDN).
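
This datasheet does not specify the OPS interface itself, so the sketch below only illustrates the general pattern of driving a switch from an external tool through an open RESTful API: build an authenticated HTTP request and post a configuration object. The management address, URL path, token, and payload are placeholders, not documented OPS endpoints.

```python
import json
import urllib.request

SWITCH = "https://192.0.2.10"    # placeholder management address
TOKEN = "example-token"          # placeholder API credential

def apply_policy(policy):
    """Post a configuration object to a (hypothetical) RESTful management API."""
    req = urllib.request.Request(
        url=f"{SWITCH}/api/example/policies",   # placeholder path, not an OPS endpoint
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    print("HTTP status:", apply_policy({"vlan": 100, "description": "tenant-A"}))
```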


Virtualized Gateway Achieves Fast Service Deployment

  • The CE12800 can work with mainstream virtualization platforms. As the high-performance hardware gateway of an overlay network (NVO3/NVGRE/VXLAN), a CE series switch can support more than 16M tenants.
  • The CE12800 can connect to a cloud platform through an open API to provide unified management of software and hardware networks.
  • This function implements fast service deployment without changing the customer network. It also protects customer investments.


ZTP Implements Zero-Configuration Deployment

  • The CE12800 supports Zero Touch Provisioning (ZTP). ZTP enables the CE12800 to automatically obtain and load version files from a USB flash drive or file server, freeing network engineers from onsite configuration or deployment. ZTP reduces labor costs and improves device deployment efficiency.
  • ZTP provides built-in scripts for users through open APIs. Data center personnel can use the programming language they are familiar with, such as Python, to provide unified configuration of network devices.
  • ZTP decouples configuration time of new devices from device quantity and area distribution, which improves service provisioning efficiency.
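
The general shape of such a provisioning script is straightforward: fetch the version and configuration files from a file server and stage them on the device. The Python sketch below shows that flow in a deliberately generic way; the server address, file names, and staging path are hypothetical, and a real ZTP script would install the files through the device's own management interface rather than stopping at a local copy.

```python
import urllib.request
from pathlib import Path

# Generic zero-touch provisioning flow: download a software image and a startup
# configuration from a file server, then stage them locally. The URLs and paths
# below are hypothetical examples, not CE12800 defaults.

FILE_SERVER = "http://192.0.2.50/ztp"              # hypothetical provisioning server
FILES = ["switch-image.cc", "startup-config.cfg"]  # hypothetical file names
STAGING_DIR = Path("/tmp/ztp-staging")             # hypothetical staging location

def fetch(name):
    STAGING_DIR.mkdir(parents=True, exist_ok=True)
    target = STAGING_DIR / name
    urllib.request.urlretrieve(f"{FILE_SERVER}/{name}", target)
    return target

if __name__ == "__main__":
    for name in FILES:
        print("staged:", fetch(name))
    # A real ZTP script would now trigger installation of the staged files.
```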


Advanced Architecture Ensures Industry-Leading Network Quality

High-Performance, Non-blocking Switching Architecture

  • The CE12800's non-blocking switching architecture includes an orthogonal switch fabric design, Clos architecture, cell switching, Virtual Output Queuing (VOQ), and a super-large buffer.
  • Orthogonal switch fabric design: CE12800 service line cards and switch fabric units (SFUs) use an orthogonal design in which service traffic between line cards is directly sent to the SFUs through orthogonal connectors. This approach reduces backplane cabling and minimizes signal attenuation. The orthogonal design can support signal rates as high as 25 Gbit/s per Serdes, which is 2.5 times the industry average. This design greatly improves system bandwidth and evolution capabilities, enabling the system switching capacity to scale to more than 100 Tbit/s.
  • Clos architecture: The CE12800's three-level Clos architecture permits flexible expansion of switch fabric capacity. The architecture uses Variable Size Cell (VSC) and provides dynamic routing. Load balancing among multiple switch fabrics prevents the switching matrix from being blocked and easily copes with complex, volatile traffic in data centers.
  • VOQ: The CE12800 supports 96,000 VOQ queues that implement fine-grained Quality of Service (QoS) based on the switch fabrics. With the VOQ mechanism and super-large buffer on inbound interfaces, the CE12800 creates independent VOQ queues on inbound interfaces to perform end-to-end flow control on traffic destined for different outbound interfaces. This method ensures unified service scheduling and sequenced forwarding and implements non-blocking switching.
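
Virtual Output Queuing is a general switching technique: each inbound interface keeps a separate queue per outbound interface, so a congested output only delays traffic destined for it instead of blocking unrelated traffic queued behind it (head-of-line blocking). The sketch below is a minimal, generic illustration of that data structure, not the CE12800's scheduler.

```python
from collections import deque

# Minimal, generic VOQ illustration: one queue per (input port, output port) pair.
NUM_INPUTS, NUM_OUTPUTS = 4, 4
voq = {(i, o): deque() for i in range(NUM_INPUTS) for o in range(NUM_OUTPUTS)}

def enqueue(in_port, out_port, packet):
    voq[(in_port, out_port)].append(packet)

def schedule(out_port_ready):
    """Serve one packet per ready output port, scanning inputs in order.
    (A real scheduler would arbitrate more fairly, e.g. round-robin.)"""
    sent = []
    for o, ready in enumerate(out_port_ready):
        if not ready:
            continue                      # congested output: only its own VOQs wait
        for i in range(NUM_INPUTS):
            if voq[(i, o)]:
                sent.append(voq[(i, o)].popleft())
                break
    return sent

enqueue(0, 1, "pkt-A")   # destined for output 1, which is congested
enqueue(0, 2, "pkt-B")   # destined for output 2, which is idle
print(schedule([True, False, True, True]))   # ['pkt-B'] - pkt-A waits without blocking pkt-B
```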


Highly Reliable Industry-grade Hardware Architecture

  • Industry-grade reliability: The CE12800 has a Mean Time Between Failures (MTBF) of more than 30 years. Long-term, stable operation of a core switch ensures service continuity.
  • Hot backup of five key components: Main Processing Units (MPUs) and Centralized Monitoring Units (CMUs) work in 1+1 hot backup mode. SFUs work in N+M hot backup mode. Power supplies support dual inputs and N+N backup and have their own fans. Both fan trays work in 1+1 backup mode; each fan tray has two counter-rotating fans working in 1+1 backup mode, ensuring efficient heat dissipation.
  • Redundancy of three types of major buses: Monitoring, management, and data buses all work in 1+1 backup mode. Bus redundancy ensures reliable signal transmission.
  • Independent triple-plane design: The independent control, data, and monitoring planes of the CE12800 improve system reliability and ensure service continuity.


High-Performance VRP8 Software Architecture

  • The CE12800 takes advantage of Huawei's next-generation VRP8, a high-performance, highly reliable modular software platform that provides continuous services.
  • Fine-grained distributed architecture: VRP8, the industry's high-end software platform, uses a fine-grained, fully distributed architecture that can process network protocols and services concurrently using multiple instances. This architecture takes full advantage of multi-core/multi-CPU processors to maximize performance and reliability.
  • Highly reliable In-Service Software Upgrade (ISSU): VRP8 supports ISSU.


Pioneering Energy-saving Technologies

Strict Front-to-Back Airflow Design

  • The CE12800 uses a patented front-to-back airflow design that isolates cold air channels from hot air channels. This design meets heat dissipation requirements in data center equipment rooms.
  • Line cards and the switching network use independent airflow channels, which solve the problems of mixing hot and cold air and cascade heating, and effectively reduce energy consumption in equipment rooms.
  • Each fan tray has two counter-rotating fans, ensuring efficient heat dissipation.
  • The fan speed in each area can be dynamically adjusted based on the workload of line cards in the area. This on-demand cooling design lowers power consumption and reduces noise.


Low Power Consumption (5 W/10GE)

  • The CE12800 uses innovative energy-saving technologies. Each 10GE port consumes only 5 W of power, half the industry average, greatly reducing power consumption in the data center equipment room.
  • Miercom has performed a series of rigorous tests on the CE12800, verifying its low power consumption.
  • Miercom test report: http://enterprise.huawei.com/ilink/cnenterprise/download/HW_200123.


Efficient, Intelligent Power Supply System

  • The CE12800 incorporates the industry's most efficient digital power modules, which provide power efficiency of 96 percent.
  • The power supply system measures power consumption in real time and puts one or more power modules into sleep mode when system power demands are low.
  • The CE12800 can save energy dynamically by adjusting the power consumption of components to adapt to changes in service traffic volume.

Product Specifications

Switching capacity: 16/40 Tbit/s (CE12804), 32/80 Tbit/s (CE12808), 48/120 Tbit/s (CE12812), 64/160 Tbit/s (CE12816)
Forwarding performance: 7,200 Mpps (CE12804), 14,400 Mpps (CE12808), 21,600 Mpps (CE12812), 28,800 Mpps (CE12816)
Service slots: 4 (CE12804), 8 (CE12808), 12 (CE12812), 16 (CE12816)
Switching fabric module slots: 6 (all models)
Fabric architecture: Orthogonal, Clos architecture
Airflow design: Strict front-to-back
Device virtualization: Virtual System (VS); Cluster Switch System (CSS)
Network virtualization: TRILL
VM awareness: nCenter
Network convergence: FCoE; DCBX, PFC, ETS
Data center interconnection: EVN
SDN: OPS; virtualized hardware gateway
Traffic analysis: NetStream; sFlow
VLAN: Adding access, trunk, and hybrid interfaces to VLANs; default VLAN; QinQ; MUX VLAN
MAC address: Dynamic learning and aging of MAC addresses; static, dynamic, and blackhole MAC address entries; packet filtering based on source MAC addresses; MAC address limiting based on ports and VLANs
IP routing: IPv4 dynamic routing protocols, such as RIP, OSPF, IS-IS, and BGP; IPv6 dynamic routing protocols, such as RIPng, OSPFv3, IS-ISv6, and BGP4+
IPv6: IPv6 Neighbor Discovery (ND); Path MTU Discovery (PMTU); TCP6, ping IPv6, tracert IPv6, socket IPv6, UDP6, and Raw IP6
Multicast: IGMP, PIM-SM, MSDP, and MBGP; IGMP snooping; IGMP proxy; fast leave of multicast member interfaces; multicast traffic suppression; multicast VLAN
MPLS: Basic MPLS functions; MPLS VPN/VPLS
Reliability: LACP; STP, RSTP, and MSTP; BPDU protection, root protection, and loop protection; Smart Link and multi-instance; DLDP; VRRP, VRRP load balancing, and BFD for VRRP; BFD for BGP/IS-IS/OSPF/static routes; In-Service Software Upgrade (ISSU)
QoS: Traffic classification based on Layer 2, Layer 3, Layer 4, and priority information; actions including ACL, CAR, and re-marking; queue scheduling modes such as PQ, WFQ, and PQ+WRR; congestion avoidance mechanisms, including WRED and tail drop; traffic shaping
Configuration and maintenance: Console, Telnet, and SSH terminals; network management protocols such as SNMPv1/v2c/v3; file upload and download through FTP and TFTP; BootROM upgrade and remote upgrade; hot patches; user operation logs; ZTP
Security and management: RADIUS and HWTACACS authentication for login users; command line authority control based on user levels, preventing unauthorized users from using commands; defense against MAC address attacks, broadcast storms, and heavy-traffic attacks; ping and traceroute; Remote Network Monitoring (RMON)
Dimensions (W x D x H): 442 mm x 970 mm x 486.15 mm, 11 U (CE12804); 442 mm x 970 mm x 752.85 mm, 17 U (CE12808); 442 mm x 970 mm x 975.1 mm, 22 U (CE12812); 442 mm x 1065 mm x 1597.4 mm, 36 U (CE12816)
Chassis weight (empty): < 110 kg (CE12804), < 150 kg (CE12808), < 190 kg (CE12812), < 290 kg (CE12816)
Operating voltage: AC: 90 V to 290 V; DC: -38.4 V to -72 V
Maximum power supply: 5400 W (CE12804), 10800 W (CE12808), 16200 W (CE12812), 27000 W (CE12816)

Ordering Information

Mainframe
Basic Configuration
CE-RACK-A01 FR42812 AC Assembly Rack(800x1200x2000mm)
CE12804-AC CE12804 AC Assembly Chassis(with CMUs and Fans)
CE12808-AC CE12808 AC Assembly Chassis(with CMUs and Fans)
CE12812-AC CE12812 AC Assembly Chassis(with CMUs and Fans)
CE12816-AC CE12816 AC Assembly Chassis(with CMUs and Fans)
CE12804-DC CE12804 DC Assembly Chassis(with CMUs and Fans)
CE12808-DC CE12808 DC Assembly Chassis(with CMUs and Fans)
CE12812-DC CE12812 DC Assembly Chassis(with CMUs and Fans)
CE12816-DC CE12816 DC Assembly Chassis(with CMUs and Fans)
Main Processing Unit
CE-MPU Main Processing Unit
Switch Fabric Unit
CE-SFU04 CE12804 Switch Fabric
CE-SFU08 CE12808 Switch Fabric
CE-SFU12 CE12812 Switch Fabric
CE-SFU16 CE12816 Switch Fabric
GE BASE-T Interface Card
CE-L48GT 48-Port 10/100/1000BASE-T Interface Card(RJ45)
GE BASE-X Interface Card
CE-L48GS 48-Port 100/1000BASE-X Interface Card(SFP)
10GBASE-X Interface Card
CE-L12XS 12-Port 10GBASE-X Interface Card(SFP/SFP+)
CE-L24XS 24-Port 10GBASE-X Interface Card(SFP/SFP+)
CE-L48XS 48-Port 10GBASE-X Interface Card(SFP/SFP+)
40GE Interface Card
CE-L06LQ 6-Port 40G Interface Card(QSFP+)
CE-L12LQ 12-Port 40G Interface Card(QSFP+)
CE-L24LQ 24-Port 40G Interface Card(QSFP+)
100GE Interface Card
CE-L04CF 4-Port 100G Interface Card(CFP)
CE-L12CF 12-Port 100G Interface Card(CFP2)
Power
PAC-2700WA 2700W AC Power Supply
PDC-2200WA 2200W DC Power Supply
Software
CE128-LIC-B CE12800 Basic SW
CE128-LIC-TRILL TRILL Function License
CE128-LIC-MPLS MPLS Function License
CE128-LIC-VS Virtual System Function License
CE128-LIC-IPV6 IPV6 Function License
Document
CE128-DOC CloudEngine 12800 Series Switches Product Documentation

Data Center Applications

On a typical data center network, CE12800/7800 switches function as core switches, and CE6800/CE5800 switches function as ToR switches. CE6800/CE5800 switches connect to CE12800/7800 switches through 40GE/10GE ports. The CE12800/7800 and CE6800/CE5800 switches use the TRILL protocol to build a non-blocking Layer 2 network, which allows large-scale VM migrations and flexible service deployments.


Note: The TRILL protocol can also be used on campus networks to support flexible service deployments in different service areas.



Campus Network Applications

On a typical campus network, two CE12800/7800 switches are virtualized into a logical core switch using CSS or iStack technology. Multiple CE6800 switches at the aggregation layer form a logical switch using iStack technology. CSS and iStack improve network reliability and simplify network management. At the access layer, CE5800 switches are virtualized using Super Virtual Fabric (SVF) technology to provide high-density line-speed ports.


Expert opinion

  • Huawei Cloud Fabric: Creating a Future for Cloud Networks
  • Rapidly developing cloud computing applications have brought great changes to servers and storage devices in data centers. As a result, data centers must change their network architecture to adapt to these new cloud computing applications. Huawei developed the Cloud Fabric architecture to meet the challenges in the cloud computing era.


  • The Cloud Era: The Time to Take Action Has Finally Come - Huawei Cloud Fabric Solution
  • As cloud computing services develop rapidly, data center network architecture must evolve and adapt in turn. Based on its experience, Huawei offers the Cloud Fabric solution to adapt to the cloud era.


  • Future Directions in Cloud Computing and the Influence on Networks
  • In 2005, Jeffrey Dean and Sanjay Ghemawat presented a paper on MapReduce, a programming model for processing large data sets. In 2006, Amazon launched its Elastic Compute Cloud (EC2) service, and Eric Schmidt of Google first put forward the cloud computing concept. In 2009, some consulting companies projected that adopting cloud computing would be the best IT strategy. Today, it is generally recognized that cloud computing will be a service provision model of the future. Many technologies and service models are emerging to support the use of cloud computing and to propel its development. A key issue is how to transform cloud computing from concept to practice on top of the underlying network infrastructure.


  • Structural Design Considerations for Data Center Networks
  • Modern technology is an extension of human faculties. For example, the phone is an extension of the human voice, the TV is an extension of human sight, and the data center is an extension of the human brain. A network can be compared to the nervous system, which connects all of these, conveying instructions between them and linking them together. Just as the nervous system is essential to human health, a healthy network with high bandwidth, low latency, and high reliability is critical for those looking to enter the cloud era.


  • Next-generation Data Center Core Switches
  • Since the introduction of cloud computing technology in 2006, it has developed so rapidly that almost all enterprise IT services have migrated, or are in the process of migrating, to cloud-computing platforms.


  • Architecture Evolution and Development of Core Switches in Data Centers
  • As the Internet continues to develop, the quantity of data has grown explosively, and data management and transmission have become increasingly important. With such vast quantities of data, high data security, centralized data management, reliable data transmission, and fast data processing are required. To meet these requirements, data centers have come into being. During data center construction, core switches play an important role in meeting construction requirements.


  • The Internet Data Center Network Challenges and Solutions
  • The ways people consume entertainment, communicate, and even shop have all been greatly changed by the Internet, which has become an essential part of everyday life. As a result, people have increasingly high expectations of Internet services. Mature mobile Internet and service-oriented cloud computing technologies bring great opportunities for Internet enterprises. To provide diverse services for large numbers of users, Internet enterprises face many challenges when constructing a reliable Internet data center network. This document describes those challenges and a data center network solution that is compatible with future cloud computing architectures.



Technology Forum

  • Non-blocking Switching in the Cloud Computing Era
  • Data centers are at the core of the cloud computing era that is now beginning. How data centers can better support the fast-growing demand for cloud computing services is of great concern to data center owners. Going forward, customers will construct larger data centers, purchase more servers with higher performance, and develop more applications. If data center networks cannot adapt to these changes, they will become bottlenecks to data center growth.


  • CSS: A Simple and Efficient Network Solution
  • After decades of development, Ethernet technology, which is flexible and simple to deploy, has become the primary local area network technology. All-in-IP is now the norm in the communications industry.


  • Virtualizing VMs in Data Center Switches
  • With IT demands rising rapidly, and resource-strained organizations challenged to meet them, a transformation within IT is taking place. Virtualization and cloud computing are significantly changing how computing resources and services are managed and delivered. Virtual machines have increased the efficiency of physical computing resources, while reducing IT system operation and maintenance (O&M) costs. In addition, they enable the dynamic migration of computing resources, enhancing system reliability, flexibility, and scalability.


  • TRILL Large Layer 2 Network Solution
  • Distributed architectures lead to a huge amount of collaboration between servers, as well as a high volume of east-west traffic in the data center. In addition, virtualization technologies are widely used in data centers, greatly increasing the computational workload of each physical server.


  • Control Tower for Virtualized Data Center Networks
  • As technologies mature and new applications emerge, many enterprise IT systems have begun using virtual machines, signifying the first step toward cloud computing. By virtualizing multiple servers on a single physical server, IT systems gain many benefits and enterprises do not need to purchase large numbers of servers. Virtual machines add high availability (HA) to data centers, reducing service interruptions and associated complaints. Virtualization technology effectively utilizes powerful hardware and can reduce wasted hardware capacity by more than 10%.


  • From North-South Split to a Unification
  • "North" and "south" on a data center network: The north is the data network running Ethernet/IP protocols, whereas the south is the storage area network (SAN) using the Fiber Channel (FC) protocol. The north and south are separated by servers. Based on service characteristics, the data network is called the front-end network, and the SAN is called the back-end network. The two networks are physically isolated from each other and use different protocols, standards, and switching devices.


  • Huawei Next-Generation Network Operating System VRP V8
  • The radical changes in network traffic are forcing the advancement of new technologies and innovations. Burgeoning use of smart devices has resulted in an explosive increase in mobile Internet traffic. The cloud computing model originated by Google and Amazon is having a huge impact on the traditional usage of computing, storage, and network resources. And perhaps most notably, the growth of new businesses via the mobile Internet, and new business models such as cloud computing, are driving the underlying physical device architecture to change to meet service requirements. The Network Operating System (NOS), which sits at the core of the network devices that deliver these services and shapes end users' experience of today's networks, needs to change to support these new trends and demands.


