Episode 24: Routing Concepts — Path Selection and Next Hops

Wide Area Network technologies are the connective tissue that allows businesses to span continents, link remote sites, and maintain access to central services across vast distances. These technologies provide the stability, efficiency, and scalability required to support enterprise-grade performance over public and private infrastructures. Whether connecting a branch office to headquarters or routing voice and video traffic with minimal delay, modern WAN technologies like MPLS and mGRE form the foundation of today’s distributed network architectures.
This episode focuses on the structure and function of two important WAN technologies: Multiprotocol Label Switching (MPLS) and Multipoint Generic Routing Encapsulation (mGRE). These concepts are part of the Network Plus certification objectives, particularly in the areas of infrastructure services and WAN design. This overview will explain how they work, what they’re used for, and how they compare to more traditional IP routing methods. Vendor-specific implementation details are excluded to keep the focus on exam-relevant knowledge and conceptual understanding.
Multiprotocol Label Switching, commonly known as MPLS, is a high-performance method of forwarding packets through a network. Instead of relying on traditional destination-based IP routing, MPLS assigns labels to packets. These labels act as routing instructions, guiding each packet through the network based on a pre-established path. This method allows for faster and more predictable forwarding, as decisions are based on labels rather than repetitive route lookups in the IP routing table.
MPLS forwarding is handled by specialized devices called Label Switch Routers, or LSRs. These routers use the label on each packet to determine the next hop in its journey. The label itself is not tied to the destination IP address but rather to a specific path through the network. This means that LSRs do not need to inspect the full IP header, making the forwarding process more efficient. MPLS paths, also known as Label Switched Paths (LSPs), are precomputed and allow for traffic engineering and predictable performance.
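To make the label-switching idea concrete, here is a minimal Python sketch of the kind of label forwarding table an LSR consults at each hop. The table contents, label values, and next-hop addresses are invented for illustration; real routers build this state with label distribution protocols rather than by hand.

```python
# Minimal sketch of label-switched forwarding at a single LSR.
# The entries below are made up for illustration; real LSRs build this
# "label forwarding information base" from protocols such as LDP.

# incoming label -> (operation, outgoing label or None, next hop)
LFIB = {
    100: ("swap", 200, "10.0.12.2"),   # mid-path hop: swap the top label
    200: ("swap", 300, "10.0.23.2"),
    300: ("pop",  None, "10.0.34.2"),  # final label-switched hop: remove the label
}

def forward(label_stack, lfib):
    """Return (new_stack, next_hop) for a packet carrying label_stack."""
    top = label_stack[0]
    op, out_label, next_hop = lfib[top]
    if op == "swap":
        new_stack = [out_label] + label_stack[1:]
    elif op == "pop":
        new_stack = label_stack[1:]          # expose the label (or IP header) beneath
    else:  # "push"
        new_stack = [out_label] + label_stack
    return new_stack, next_hop

# A packet arriving with label 100 is forwarded by label alone --
# no IP routing-table lookup is needed at this hop.
print(forward([100], LFIB))   # ([200], '10.0.12.2')
```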
The benefits of MPLS extend well beyond speed. MPLS enables traffic engineering, allowing network administrators to define routes based on traffic type, priority, or required latency. It also supports the transmission of multiple service types, such as IP, voice, and video, across the same backbone. MPLS allows for reliable, low-latency communication between offices, supports real-time applications, and improves network predictability by maintaining consistent end-to-end paths for key traffic flows.
MPLS packets use a label stack that contains one or more labels. These labels are inserted between the Layer 2 and Layer 3 headers, and each 32-bit label entry contains a 20-bit label value, a 3-bit traffic class field, a bottom-of-stack bit, and an 8-bit time-to-live field. The top label in the stack is used for forwarding at each hop, and it may be swapped, pushed, or popped depending on the role of the router. The stack structure allows MPLS to support nested traffic scenarios, such as carrier networks or VPNs.
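The label stack entry itself is small enough to sketch directly. The following Python snippet packs and unpacks the 32-bit entry just described; the field values used at the bottom are arbitrary examples.

```python
import struct

# Sketch of the 32-bit MPLS label stack entry: 20-bit label value,
# 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL.

def pack_label(label, traffic_class, bottom_of_stack, ttl):
    """Pack one MPLS label stack entry into 4 bytes."""
    word = (label << 12) | (traffic_class << 9) | (bottom_of_stack << 8) | ttl
    return struct.pack("!I", word)

def unpack_label(data):
    """Unpack 4 bytes into (label, traffic_class, bottom_of_stack, ttl)."""
    (word,) = struct.unpack("!I", data)
    return (word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF)

# Example values chosen arbitrarily for illustration.
entry = pack_label(label=18, traffic_class=5, bottom_of_stack=1, ttl=64)
print(unpack_label(entry))   # (18, 5, 1, 64)
```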
Understanding the roles of the Provider Edge (PE) and Customer Edge (CE) routers is key to MPLS deployment. The PE router is managed by the service provider and connects to the core MPLS backbone. The CE router is owned by the customer and connects their internal network to the provider’s MPLS service. This separation defines a clear handoff point, with the provider responsible for label switching within the core and the customer managing routing on their side of the edge.
One of the most powerful features of MPLS is its ability to support Quality of Service, or QoS. Because labels can be used to identify traffic classes, routers can prioritize packets based on their type. Voice and video traffic, for example, can be sent over high-priority paths with low jitter and delay. Lower-priority traffic, such as backups or bulk data transfers, can be routed through less expensive or less congested paths. This ensures that critical services receive the performance levels they require.
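As a rough illustration of class-based queuing, the sketch below maps a packet’s traffic class to one of three queues and always serves the higher-priority queue first. The class-to-queue mapping and queue names are invented for this example, not taken from any standard or vendor configuration.

```python
from collections import deque

# Toy priority-queuing sketch: the 3-bit traffic class carried in the MPLS
# label entry selects a queue, and higher-priority queues are drained first.
# The class-to-queue mapping here is invented for illustration.

QUEUES = {"voice": deque(), "video": deque(), "best_effort": deque()}
TC_TO_QUEUE = {5: "voice", 4: "video"}          # everything else -> best effort

def enqueue(packet, traffic_class):
    QUEUES[TC_TO_QUEUE.get(traffic_class, "best_effort")].append(packet)

def dequeue():
    """Serve voice before video, and video before bulk/best-effort traffic."""
    for name in ("voice", "video", "best_effort"):
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None

enqueue("backup-chunk", traffic_class=0)
enqueue("voip-frame", traffic_class=5)
print(dequeue())   # 'voip-frame' is sent first despite arriving second
```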
Multipoint Generic Routing Encapsulation, or mGRE, is a WAN tunneling technology that allows multiple dynamic endpoints to communicate over a single interface. Unlike traditional GRE, which requires a point-to-point tunnel between each pair of routers, mGRE supports hub-and-spoke and full mesh topologies without needing separate tunnel interfaces for every connection. This simplifies configuration and supports more scalable and flexible WAN deployments.
GRE itself is a tunneling protocol that encapsulates packets within a new IP header. This encapsulation allows traffic to pass transparently through intermediate networks without needing changes to the original packet. GRE tunnels are used for building VPNs, carrying multicast traffic, and enabling routing protocols to function across networks that don’t natively support them. GRE operates at Layer 3, meaning it can encapsulate almost any Layer 3 protocol inside another IP packet.
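The encapsulation step is easy to visualize. Here is a conceptual Python sketch of basic GRE framing, in which a four-byte header (flags and version plus the payload’s EtherType) is prepended to the original packet before it is carried inside a new outer IP packet. It is a teaching aid under simplified assumptions, not a full protocol implementation.

```python
import struct

# Conceptual sketch of basic GRE encapsulation: a minimal GRE header is
# 2 bytes of flags/version plus a 2-byte protocol type (the EtherType of
# the payload), and the result rides inside a new outer IP packet.

GRE_PROTO_IPV4 = 0x0800   # EtherType for an IPv4 payload

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Prepend a minimal GRE header (no checksum, key, or sequence fields)."""
    gre_header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)
    return gre_header + inner_ip_packet

def gre_decapsulate(gre_payload: bytes) -> bytes:
    """Strip the 4-byte basic GRE header and return the original packet."""
    flags_version, proto = struct.unpack("!HH", gre_payload[:4])
    assert proto == GRE_PROTO_IPV4
    return gre_payload[4:]

original = b"\x45\x00..."   # stand-in bytes for a real IPv4 packet
assert gre_decapsulate(gre_encapsulate(original)) == original
```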
The key distinction between mGRE and traditional GRE is that mGRE uses a single tunnel interface to support multiple destinations. This allows a hub router to maintain connections to many spokes without defining a separate tunnel for each. When combined with dynamic routing and other WAN overlay technologies, mGRE significantly reduces configuration complexity and administrative overhead, making it a preferred choice in flexible and dynamic WAN designs.
For more cyber-related content and books, please check out cyber author dot me. Also, there are other podcasts on Cybersecurity and more at Bare Metal Cyber dot com.
MPLS is widely used in enterprise environments to support inter-office data transfer. Its predictable routing and built-in quality of service make it ideal for real-time applications such as Voice over IP (VoIP), video conferencing, and transactional systems that require consistency and low latency. Because MPLS paths are pre-established and label-switched, they avoid the unpredictability of traditional IP routing. Organizations with multiple branch locations often use MPLS to maintain private, secure, and high-performance connections between sites.
mGRE, on the other hand, excels in scenarios where flexibility and scalability are essential. One of the most common uses of mGRE is in dynamic VPN configurations. When combined with protocols like NHRP and IPsec, mGRE forms the backbone of solutions such as Dynamic Multipoint VPN (DMVPN). This enables secure, on-demand connectivity between branch offices, remote users, and cloud-connected sites without requiring pre-configured tunnels between every endpoint. mGRE supports both hub-and-spoke and full mesh topologies, allowing peer discovery and dynamic tunnel formation.
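A rough sketch of the address-resolution idea behind DMVPN helps make this concrete: each spoke registers its public address with the hub, and other spokes query that cache to build direct tunnels on demand. The Python below uses invented addresses and stands in for NHRP only conceptually.

```python
# Toy sketch of the NHRP-style mapping that makes mGRE hub-and-spoke work:
# each spoke registers its private tunnel address and its public (NBMA)
# address with the hub; another spoke can then resolve a peer's public
# address and build a tunnel to it directly. All addresses are invented.

nhrp_cache = {}   # tunnel IP -> public underlay IP, held by the hub

def register(tunnel_ip, nbma_ip):
    """A spoke registers its mapping with the hub over the mGRE tunnel."""
    nhrp_cache[tunnel_ip] = nbma_ip

def resolve(tunnel_ip):
    """Another spoke resolves a tunnel address to a public address on demand."""
    return nhrp_cache.get(tunnel_ip)

register("10.0.0.2", "203.0.113.10")   # spoke A registers with the hub
register("10.0.0.3", "198.51.100.20")  # spoke B registers with the hub

# Spoke A wants to reach spoke B directly, not via the hub:
print(resolve("10.0.0.3"))   # 198.51.100.20 -> build a dynamic spoke-to-spoke tunnel
```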
When comparing MPLS and traditional IP routing, the primary difference lies in how forwarding decisions are made. IP routing relies on destination-based lookups in the routing table. Each router inspects the IP header, searches its table, and forwards the packet accordingly. MPLS streamlines this process by using labels to dictate the packet’s path through the network. Label-based switching allows for predefined paths and faster packet processing, offering more control and flexibility for traffic management.
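The difference is easy to see side by side. In the illustrative Python sketch below, the IP path performs a longest-prefix match against several candidate routes, while the MPLS path is a single exact-match lookup on the label; the routes, labels, and next-hop names are invented.

```python
import ipaddress

# Sketch contrasting the two lookup styles described above.

ROUTING_TABLE = {                                   # destination-based IP routing
    ipaddress.ip_network("10.0.0.0/8"):    "next-hop A",
    ipaddress.ip_network("10.20.0.0/16"):  "next-hop B",
    ipaddress.ip_network("10.20.30.0/24"): "next-hop C",
}

def ip_lookup(dst):
    """Longest-prefix match: the most specific route containing dst wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTING_TABLE[best]

LABEL_TABLE = {100: "next-hop C"}                   # MPLS: one exact-match label lookup

print(ip_lookup("10.20.30.40"))   # 'next-hop C' after comparing three prefixes
print(LABEL_TABLE[100])           # 'next-hop C' from a single exact match
```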
mGRE integrates seamlessly with VPN technologies. When paired with IPsec, mGRE tunnels can be encrypted, allowing secure traffic to travel over public networks while still benefiting from the scalability of a multipoint design. This model simplifies routing because mGRE uses a single tunnel interface, while IPsec overlays provide encryption and integrity. Together, they create secure, scalable WAN architectures without the need for dedicated leased lines or MPLS circuits.
Both MPLS and mGRE offer performance and reliability advantages, but they do so in different ways. MPLS minimizes jitter, packet loss, and delay by enforcing traffic policies and path engineering. It ensures that time-sensitive traffic follows low-latency routes and avoids congestion points. mGRE contributes to reliability by supporting dynamic backup routes. If a primary tunnel fails, mGRE can quickly establish a new connection to an available peer, maintaining connectivity and minimizing disruption.
Deployment models differ as well. MPLS requires support from the service provider, meaning the organization must contract with a carrier to provision MPLS circuits, manage label-switched paths, and handle backbone infrastructure. This makes MPLS ideal for large-scale enterprises with strict performance requirements. mGRE, on the other hand, is implemented entirely by the organization. It is configured on routers and managed internally, offering a cost-effective and flexible alternative that works over existing broadband, LTE, or fiber connections.
WAN topology plays a central role in how these technologies are deployed. MPLS can be used in hub-and-spoke models, where branch offices route traffic through a central site, or in full mesh designs where each site connects directly to every other site. However, configuring a full mesh in MPLS is more complex and expensive. mGRE is particularly well-suited for dynamic mesh topologies. It allows routers to discover peers and establish connections as needed, reducing the administrative burden and enabling more agile WAN deployments.
On the Network Plus exam, questions related to MPLS and mGRE often appear in scenario format. You may be presented with a WAN design and asked to identify the most appropriate technology based on performance requirements, cost, or scalability. Key terms like “label switching,” “tunnel interface,” or “dynamic endpoint” will often signal whether MPLS or mGRE is being discussed. A solid grasp of each technology’s characteristics and use cases will help you answer confidently and choose the right solution for a given environment.
MPLS and mGRE are two of the most important WAN technologies shaping modern enterprise networking. MPLS focuses on label switching and traffic engineering to deliver high-performance, carrier-managed connectivity. mGRE focuses on flexibility, scalability, and ease of configuration, particularly in dynamic and decentralized environments. Both technologies offer significant advantages, and both are frequently tested on the Network Plus exam. Understanding how they work, when to use them, and how they fit into broader network architectures is essential for mastering the exam and designing efficient WAN solutions.
