CCNA 200-301 replaces all previous CCNA exams with a single exam. The new exam is 120 minutes with 100+ questions, and the exam fee is $300 USD. There are significant changes to the new CCNA curriculum.
CCNA 200-301 Knowledge Domains
- 20% Network Fundamentals
- 20% Network Access
- 25% IP Connectivity
- 10% IP Services
- 15% Security Fundamentals
- 10% Automation and Programmability
New CCNA Topics!
CCNA 200-301 includes a significant amount of wireless and network programmability content. That is attributed to the popularity of mobile devices, cloud computing and SDN architecture. Cisco is aligning the new CCNA certification exam with the shift to an internet-based connectivity model, and OSPF is now the only IP routing protocol covered. EIGRP was created for multiprotocol routing, and RIP does not scale for mobile and cloud connections.
The management and troubleshooting of network infrastructure is being radically changed by open SDN architecture. Cisco has enabled programmable features on their devices and virtualized physical equipment into software services. They offer virtual appliances, and CCNA engineers now support private and cloud data center connections.
CCNA Training Strategy
Preparing for CCNA certification requires students to invest time to pass the exam. The best strategy is a streamlined study plan with the right information instead of more information.
The purpose of any network is to enable data communication between host endpoints via network protocols. The network operational model can be described using the concept of planes, where each physical device within traditional network architecture has data, control and management planes. It is a functional model describing the dynamics of data communications and networking services. There are differences between traditional and newer controller-based architectures, most evident in the location of the operational planes.
Data Plane
The data plane is responsible only for forwarding endpoint data traffic between network interfaces. All data plane traffic is in transit between neighbors and is not associated with communication protocols; as a result, it is not handled by the processor. For example, routing tables created by the control plane are used by the data plane to select a route. The packet is then forwarded to the next hop neighbor address.
- MAC learning and aging
- MAC address table lookup
- Routing table lookup
- ARP table lookup
- MAC frame rewrite

Figure 1 Traditional Network Architecture
Similarly, MAC address and ARP tables created by the control plane are used by the data plane to forward traffic. While all three planes exist on all network devices, the services provided are based on the device class. For example, only routers and L3 switches support routing tables, ARP tables and frame rewrite. Conversely, all switches create MAC address tables while routers do not.
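The routing table lookup the data plane performs can be sketched as a longest-prefix match. The following is a minimal Python simulation using only the standard `ipaddress` module; the prefixes, next hop addresses and interface names are illustrative, not from any real device.

```python
import ipaddress

# Simplified routing table: prefix -> (next hop, exit interface).
# All entries are illustrative examples.
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): ("192.168.12.2", "Gi0/1"),
    ipaddress.ip_network("10.1.1.0/24"): ("192.168.13.3", "Gi0/2"),
    ipaddress.ip_network("0.0.0.0/0"):   ("192.168.14.4", "Gi0/3"),
}

def lookup(destination: str):
    """Return (next_hop, interface) for the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("10.1.1.25"))   # matches the more specific /24 route
print(lookup("10.1.2.25"))   # falls back to the /16 route
print(lookup("172.16.0.1"))  # only the default route matches
```

Real hardware performs this lookup in ASICs rather than software, which is exactly why data plane traffic never reaches the processor.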
Control Plane
The control plane is responsible for building the network tables used by the data plane to make forwarding decisions. Control plane protocols only communicate with directly connected neighbors. All inbound and outbound control plane traffic is handled by the processor. Routing protocols build routing tables from neighbor-advertised routes for Layer 3 connectivity. Some common examples of Layer 3 control plane protocols include OSPF, EIGRP, BGP, and ICMP.
- Network tables
- Path selection
- Frame switching
- Link negotiation
- Error messages
Control plane protocols also enable the interconnection of switches within Layer 2 domains. For example, STP enables a loop-free topology between multiple switches, while DTP and LACP provide dynamic negotiation of trunks and EtherChannels between neighbor switches. Examples of Layer 2 control plane protocols include STP, DTP, LACP, and CDP. Network switches create MAC address tables for frame switching within Layer 2 domains.
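The MAC learning and aging behavior mentioned above can be illustrated with a toy simulation. This is a sketch only; the class and method names are invented for illustration, and the 300-second default mirrors a common switch aging timer rather than any mandated value.

```python
import time

class MacTable:
    """Toy MAC address table: learns source MACs per port
    and ages out entries that have not been seen recently."""

    def __init__(self, aging_seconds=300):
        self.aging = aging_seconds
        self.entries = {}  # mac -> (port, last_seen timestamp)

    def learn(self, src_mac, port, now=None):
        # Learning: record the source MAC of an arriving frame.
        self.entries[src_mac] = (port, now if now is not None else time.time())

    def forward_port(self, dst_mac, now=None):
        """Return the port for a known MAC, or None (flood) if unknown/aged out."""
        now = now if now is not None else time.time()
        entry = self.entries.get(dst_mac)
        if entry is None or now - entry[1] > self.aging:
            self.entries.pop(dst_mac, None)  # aging: drop the stale entry
            return None
        return entry[0]

table = MacTable()
table.learn("aa:aa:aa:aa:aa:aa", "Gi0/1", now=0)
print(table.forward_port("aa:aa:aa:aa:aa:aa", now=10))    # known -> Gi0/1
print(table.forward_port("bb:bb:bb:bb:bb:bb", now=10))    # unknown -> None (flood)
print(table.forward_port("aa:aa:aa:aa:aa:aa", now=1000))  # aged out -> None
```

A frame to an unknown or aged-out MAC is flooded out all ports in the VLAN, which is why the lookup returns None rather than a port.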
Management Plane
The management plane is responsible for the configuration and monitoring of network devices. Various application protocols are used to manage the network. For example, SSH is initiated to the management plane of a router to configure network interfaces. SNMP sends traps to a network management station to alert on the operational status of interfaces.
- Configuration
- Monitoring
- Automation
- Programmability
Newer protocols such as NETCONF enable automation of management functions. As with the control plane, all management plane protocols must be handled by the processor. Some other examples of management plane protocols include TFTP, Telnet, RESTCONF, Syslog, NTP, DNS, and DHCP.
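To make the NETCONF idea concrete, the following sketch builds a minimal `<edit-config>` RPC that sets an interface description. It uses only the standard library; the `ietf-interfaces` YANG namespace is real, but the payload shape is simplified and the interface name and description are invented examples. A real session would send this XML over SSH with a library such as ncclient.

```python
import xml.etree.ElementTree as ET

# NETCONF base and ietf-interfaces YANG namespaces (both standard).
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def build_edit_config(interface: str, description: str) -> str:
    """Build a minimal NETCONF <edit-config> RPC targeting the running config."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    interfaces = ET.SubElement(config, f"{{{IF}}}interfaces")
    intf = ET.SubElement(interfaces, f"{{{IF}}}interface")
    ET.SubElement(intf, f"{{{IF}}}name").text = interface
    ET.SubElement(intf, f"{{{IF}}}description").text = description
    return ET.tostring(rpc, encoding="unicode")

xml_rpc = build_edit_config("GigabitEthernet0/1", "Uplink to core")
print(xml_rpc)
```

The key point is that NETCONF exchanges structured XML modeled by YANG, which is what makes the management plane automatable compared with screen-scraping CLI output.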
The management plane initiates a session with the local router to configure OSPF and enable network interfaces.
The control plane has a routing table with a route that includes a next hop address and local exit interface.
The data plane does a routing table lookup for the next hop address associated with a destination subnet, then forwards packets to the next hop neighbor.
Software-Defined Networking (SDN)
Software Defined Networking (SDN) is an architecture that separates the control plane from the data plane. Cisco IOS software is moved to an SDN controller. That decouples the control plane from hardware and enables programmability of all network devices. The controller communicates via agents installed on devices. The same functions are provided as with traditional networking architecture for each operational plane. Figure 2 illustrates how the management plane is also moved to the controller.
Figure 2 SDN Operational Planes
SDN is similar to a hypervisor layer that abstracts (separates) server hardware from application software. There is a centralized, software-based control plane with an underlying physical data plane transport. SDN enables overlays and programmable devices managed through a centralized policy engine with a global view of the network.
- SDN decouples the control and data plane.
- Control plane is software-based and not a hardware module.
- SDN controller is a centralized control plane with a policy engine.
- Network infrastructure is an underlay for programmable fabric.
Figure 3 SDN Architecture Layers
SDN Components
The SDN controller provides centralized management where the network appears as a single logical switch. Network services become dynamically configurable when the control plane is moved from physical infrastructure to a software-based SDN controller with API modules. The northbound and southbound APIs enable communication between applications and network devices.
Table 1 SDN Components
SDN controllers communicate
with physical and virtual network devices via southbound APIs. Conversely,
communication from controller to SDN applications is via northbound APIs. There
is a policy engine configured on a controller for orchestration and automation
of network services.
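The northbound/southbound split described above can be sketched as a toy controller. This is purely illustrative: the class, method names and the policy format are invented for this example and do not correspond to any vendor's controller API.

```python
class Controller:
    """Toy SDN controller: a northbound call applies a network-wide policy,
    and the controller pushes the resulting rule to every registered device
    over its (simulated) southbound channel."""

    def __init__(self):
        self.devices = {}  # device name -> list of installed rules

    def register(self, name: str):
        # Southbound: a device agent registers with the controller.
        self.devices[name] = []

    def apply_policy(self, match: str, action: str) -> dict:
        # Northbound: an application requests a policy once;
        # the controller translates it into a rule on every device.
        rule = {"match": match, "action": action}
        for rules in self.devices.values():
            rules.append(rule)
        return rule

ctrl = Controller()
ctrl.register("leaf1")
ctrl.register("leaf2")
ctrl.apply_policy(match="tcp/23", action="drop")  # block Telnet network-wide
print(ctrl.devices["leaf1"])  # the rule appears on every device
```

The design point: the application states intent once through the northbound API, and the controller's global view lets it fan that intent out southbound, which is why the network appears as a single logical switch.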
- Programmability – the network is directly programmable because control is decoupled from infrastructure and data plane forwarding.
- Agility – abstracting the control plane from the data plane enables dynamic configuration to modify traffic flows as network conditions change.
- Centralized Management – network intelligence is centralized in software-based SDN controllers. The global network appears to applications and policy engines as a single logical switch.
- Automation – dynamic configuration (provisioning) of network devices and software upgrades is based on APIs.
Network Functions Virtualization (NFV) increases agility by decoupling network services from proprietary hardware and moving them to software modules on SDN controllers. That makes it easier to provision, automate, and orchestrate network services such as DNS, firewall inspection and network address translation.
Advantages of Programmability
The advantages of programmability include automation and rapid deployment of new services and applications. Turn-up of a new branch office or an application is now accomplished in minutes. Newer Cisco devices support programmable ASICs. Open APIs translate between application and hardware to initialize, manage and change network behavior dynamically.
New requirements now include on-demand bandwidth, dynamic security, elastic capacity, and rapid, cost-effective deployment of applications and services. The provisioning of wired and wireless services requires automated turn-up of network services, push configuration, automatic monitoring and real-time analysis.
Fabric Underlay
Cisco has recently developed the SD-Access fabric architecture for data center and enterprise connectivity. The purpose is to enable automation, programmability and mobility for physical and virtual platforms. It is comprised of an underlay, fabric overlays and Cisco DNA Center.
The fabric is built on a physical underlay designed for high-speed transport of traffic. It is characterized by the network devices, topology and protocols used for communication. A common underlay provides transport for overlay traffic, including control plane protocols such as STP, DTP, OSPF, EIGRP and ARP.
- Network infrastructure used for transport of all data traffic
- Comprised of network devices, protocols and configuration
- Network devices must support programmability with agents
- Physical underlay operation is independent of overlays
Fabric Overlay
Fabric overlays, built on top of (or over) the underlay, also enable path virtualization. Overlays create a virtual topology across the physical underlay infrastructure using encapsulation techniques that create tunnels. That enables route and address isolation, independent of the underlay and other overlays. Encapsulation is nothing more than adding an outer header (or headers) to the original payload; the inner payload is not visible to network devices while in transit.
- Network address overlap and route isolation enabled
- Overlays are operationally independent of underlays
Consider that overlays logically create single point-to-point connections, even though the same topology has multiple physical connections between switches. The purpose of overlays is to solve limitations inherent in physical switching domains such as STP and routing loops, broadcasts and address overlap. They also enable multi-tenant service, enhanced mobility, seamless connectivity and automation.
Table 2 Underlay vs Overlay
Layer 2 Overlay
Within the fabric architecture there is support for Layer 2 and Layer 3 overlays. Layer 2 overlays are designed to emulate a physical topology for the purpose of extending Layer 2 domains, for example connecting two servers on different switches that are assigned to the same VLAN. The solution is a VXLAN overlay that enables a virtual connection between the servers. It is common for web-based applications to have multiple servers that are often in different locations.
- Emulates a physical switching topology with virtual overlay
- Extends Layer 2 domains between switches and locations
- Enables address isolation and overlapping between domains
- Tunnels terminate at leaf switches for campus deployment
Figure 4 VXLAN Fabric Overlay
VXLAN is a data plane overlay that encapsulates host packets for communication across the fabric. As an overlay, it requires the transport services of a physical underlay infrastructure. In our example, the tunnels are terminated at fabric edge switches. There is a common underlay for data plane forwarding; however, the underlay topology is independent of the overlay topologies. As a result, underlay and overlay maintain separate data and control planes.
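The VXLAN encapsulation itself is small and can be shown directly. The sketch below builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to a dummy inner Ethernet frame; in a real packet the result would additionally be wrapped in outer Ethernet, IP and UDP headers (UDP destination port 4789), which are omitted here for brevity.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    flags byte 0x08 (valid-VNI bit set), 3 reserved bytes,
    24-bit VXLAN Network Identifier (VNI), 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """VXLAN payload = VXLAN header + original Ethernet frame."""
    return vxlan_header(vni) + inner_frame

# Dummy 14-byte inner Ethernet header stands in for a real frame.
packet = encapsulate(5000, b"\x00" * 14)
print(packet[:8].hex())  # 0800000000138800 -> flags 0x08, VNI 5000 (0x001388)
```

Because the inner frame is carried intact inside the tunnel, transit underlay devices only ever see the outer headers, which is exactly the address isolation property described above.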
Layer 3 Overlay
Layer 3 overlays enable data plane forwarding across a fabric between different subnets. They also provide isolation from underlay limitations such as MAC flooding and Spanning Tree Protocol loops. Tunnels are created by encapsulating host packets. Some examples include LISP, MPLS, GRE, CAPWAP and VRF.
- Routing-based overlay for IP connectivity across fabric
- Isolates broadcast domains to each network device
- IP tunnel terminates at host endpoint or network device
- Logical point-to-point topology between tunnel endpoints
Figure 5 GRE and CAPWAP Overlays
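As with VXLAN, the GRE encapsulation used by Layer 3 overlays is simple enough to show directly. This sketch builds the minimal 4-byte GRE header from RFC 2784 and prepends it to a dummy inner IPv4 packet; the outer delivery IP header (IP protocol number 47) that a real tunnel would add is omitted.

```python
import struct

def gre_header(protocol_type: int = 0x0800) -> bytes:
    """Minimal 4-byte GRE header (RFC 2784): a flags/version word of 0,
    then the EtherType of the encapsulated payload (0x0800 = IPv4)."""
    return struct.pack("!HH", 0, protocol_type)

# A GRE tunnel packet is: outer IP header (protocol 47) + GRE header + inner packet.
inner_ip_packet = b"\x45" + b"\x00" * 19  # dummy 20-byte IPv4 header
gre_payload = gre_header() + inner_ip_packet
print(gre_payload[:4].hex())  # 00000800
```

The logical point-to-point topology in the bullet list above comes from exactly this construction: the two tunnel endpoints see each other as directly connected, regardless of how many underlay hops sit between them.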
Automation Fundamentals
The advent of programmability and automation tools is radically changing how network infrastructure is managed. Compared with traditional networking, automation has significant advantages for physical and virtualized network services. Network automation lowers operational costs, enables deployment agility, and unifies policies. The following are some advantages of network automation:
- Minimize network outages
- Enable deployment agility
- Lower operational costs
- Unified security policies
- Software compliance
The most common cause of network downtime is user error. Automation results in significantly fewer errors when making configuration changes and deploying network infrastructure. The globally centralized view of the network is fundamental to SDN architecture.
Network administrators can push standard configurations out to new network devices and update the configuration on existing infrastructure. Auditing device configurations or software versions for compliance before an update is much faster with automation…
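A compliance audit of the kind just described can be sketched in a few lines. The inventory data below is invented for illustration; in practice the versions would be collected from devices via an API such as RESTCONF or pulled from a controller's inventory.

```python
# Baseline software version the fleet should run (illustrative value).
BASELINE = "17.9.4"

# Illustrative inventory: device name -> reported OS version.
inventory = {
    "branch-rtr1": "17.9.4",
    "branch-rtr2": "17.6.1",
    "branch-sw1": "17.9.4",
}

def audit(inventory: dict, baseline: str) -> list:
    """Return a sorted list of devices whose software does not match the baseline."""
    return sorted(name for name, ver in inventory.items() if ver != baseline)

out_of_compliance = audit(inventory, BASELINE)
print(out_of_compliance)  # ['branch-rtr2']
```

Scripting the comparison means the audit runs in seconds across hundreds of devices, and only the non-compliant devices need an upgrade pushed, which is the cost and agility benefit the section describes.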