Episode 40: Ethernet Standards Over Fiber

Ethernet has evolved dramatically over the past few decades, and fiber optic technology has played a crucial role in its progression. Ethernet over fiber enables high-speed, long-distance data transmission using light rather than electrical signals. This approach minimizes attenuation and electromagnetic interference, supports much higher bandwidth than copper cabling, and is ideal for the core and distribution layers of enterprise and service provider networks. Whether linking buildings across a campus or connecting data centers across a metropolitan region, fiber-based Ethernet is often the medium of choice when performance and reliability are essential.
On the Network Plus exam, Ethernet standards over fiber are covered in the physical media and infrastructure objectives. You may encounter questions that require you to identify the proper fiber standard for a given distance, recognize connector and transceiver types, or match a specific application to its ideal fiber media. Topics involving fiber are also often combined with transceiver and wavelength technologies, making it essential to understand how signal type, cable type, and connector format interact. These standards are central to backbone design, high-capacity aggregation, and cross-campus interconnectivity.
One of the earliest standardized implementations of Ethernet over fiber is 100BASE-FX. This version of Fast Ethernet operates at 100 megabits per second and uses multimode fiber. The maximum supported distance is typically up to 2 kilometers in full-duplex mode, though this can vary depending on the fiber type and transceiver quality. 100BASE-FX was introduced as a fiber alternative to 100BASE-TX in environments where longer distances or higher resistance to interference were required. Though largely surpassed by gigabit and 10-gigabit technologies, 100BASE-FX remains in use in some legacy systems and cost-sensitive industrial environments.
The 1000BASE-SX standard brought gigabit speeds to short-range fiber networks. Operating over multimode fiber, 1000BASE-SX supports distances up to 550 meters depending on the fiber grade and transceiver quality. This standard is common in LAN backbones and data centers where high speed is needed, but long distances are not. LC connectors are typically used, and the standard works well in structured cabling systems within large buildings or campus networks. It is cost-effective and widely supported, making it a go-to option for short fiber runs.
1000BASE-LX, in contrast, is designed for longer-range communication and operates using either multimode or single-mode fiber. On multimode fiber, it typically reaches around 550 meters, similar to 1000BASE-SX, but its real strength lies in single-mode deployments, where it can span up to 10 kilometers. LX stands for “long wavelength,” and the standard is ideal for extending gigabit connectivity across buildings or between floors in skyscrapers. It provides a flexible option for environments transitioning from multimode to single-mode fiber infrastructure.
As networks began demanding higher performance, the industry moved to 10 Gigabit Ethernet. Two common standards are 10GBASE-SR and 10GBASE-LR. The “SR” in 10GBASE-SR stands for “short range,” and it uses multimode fiber to support distances of up to 300 meters. This is ideal for data center environments and horizontal backbones. 10GBASE-LR, or “long range,” operates on single-mode fiber and extends up to 10 kilometers. Both standards use LC connectors and are available in SFP+ transceiver modules. Choosing the right standard depends on the distance and type of fiber infrastructure already in place.
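Keeping these speed and distance pairings straight is easier with the figures side by side. Here is a minimal Python lookup table built from the numbers quoted above; it is a study aid, not an exhaustive spec sheet, and vendor optics can exceed these figures.

```python
# Quick-reference table for the fiber Ethernet standards covered above.
# Distances are the typical maximums quoted in this episode; real reach
# depends on fiber grade (e.g., OM3 vs. OM4) and transceiver quality.
FIBER_STANDARDS = {
    "100BASE-FX":  {"speed_mbps": 100,    "fiber": "multimode",   "max_m": 2_000},
    "1000BASE-SX": {"speed_mbps": 1_000,  "fiber": "multimode",   "max_m": 550},
    "1000BASE-LX": {"speed_mbps": 1_000,  "fiber": "single-mode", "max_m": 10_000},
    "10GBASE-SR":  {"speed_mbps": 10_000, "fiber": "multimode",   "max_m": 300},
    "10GBASE-LR":  {"speed_mbps": 10_000, "fiber": "single-mode", "max_m": 10_000},
}

for name, spec in FIBER_STANDARDS.items():
    print(f"{name}: {spec['speed_mbps']} Mbps over {spec['fiber']}, up to {spec['max_m']} m")
```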
For even higher performance, especially in data centers and cloud-scale environments, standards such as 40GBASE-SR4 and 100GBASE-SR10 are used. These do not rely on a single fiber strand in each direction but instead use parallel optics. 40GBASE-SR4 uses four fiber strands for transmission and four for reception, typically carried via MPO connectors. 100GBASE-SR10 uses ten fibers in each direction. These parallel optics systems allow for massive throughput over short distances—up to 100 meters—and are ideal for connecting top-of-rack switches to aggregation layers within the same facility.
One of the innovations that improves fiber efficiency is the use of single-strand, or bidirectional, fiber connections. These systems transmit and receive data over a single strand of fiber by assigning different wavelengths for upstream and downstream traffic. This effectively doubles the utility of a single fiber and reduces infrastructure costs, particularly in environments where installing new fiber is difficult or expensive. Bidirectional optics are supported by specialized transceivers and must be carefully matched to ensure proper wavelength alignment.
To further increase the efficiency of fiber links, the industry relies on multiplexing, the process of combining multiple signals onto a single medium. In fiber optic networking, multiplexing allows many data streams to be transmitted over a single fiber pair, multiplying the capacity without physically installing more cables. This is especially important in carrier networks and data centers where demand for bandwidth far exceeds the number of available fibers. Multiplexing increases return on investment, optimizes space, and reduces operational complexity.
Coarse Wavelength Division Multiplexing, or CWDM, is a type of multiplexing that uses a small number of optical wavelengths spread out across the spectrum to transmit multiple channels simultaneously. CWDM typically supports up to 18 channels, each on a different wavelength, spaced widely enough to avoid interference. This system is cost-effective, simpler to deploy, and well suited for medium-range networks like metro area connections or campus-wide deployments. CWDM transceivers are less expensive than their DWDM counterparts, and the equipment does not require high-end cooling or precision tuning.
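The CWDM grid itself is easy to reproduce: ITU-T G.694.2 defines 18 channels spaced 20 nanometers apart, starting at 1271 nanometers. A one-line sketch:

```python
# CWDM grid per ITU-T G.694.2: 18 channels, 20 nm apart, 1271-1611 nm.
cwdm_grid_nm = [1271 + 20 * n for n in range(18)]
print(len(cwdm_grid_nm), cwdm_grid_nm[0], cwdm_grid_nm[-1])  # 18 1271 1611
```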
Dense Wavelength Division Multiplexing, or DWDM, takes the concept further by packing many more wavelengths into a smaller spectral range. DWDM can support 40, 80, or even more channels, each operating at different wavelengths only a fraction of a nanometer apart. This high channel density allows a single fiber pair to carry terabits of data across hundreds of kilometers. Because of the tight spacing and precision required, DWDM systems are more complex and expensive, typically used by telecom providers and internet backbones for long-distance, high-volume data transport.
In all cases, the selection of a fiber standard must take into account the required speed, distance, fiber type, transceiver availability, and whether multiplexing is in use. Installing the wrong transceiver for the fiber type or distance can result in poor signal strength or failed links. Similarly, trying to connect two devices across a fiber run longer than the standard’s rated distance can lead to attenuation and packet loss. Matching the correct Ethernet standard to your application ensures stable, high-performance communication between network segments.
For more cyber-related content and books, please check out cyber author dot me. Also, there are other podcasts on Cybersecurity and more at Bare Metal Cyber dot com.
Understanding the differences between CWDM and DWDM is essential for choosing the right multiplexing technology. CWDM is often deployed in medium-distance scenarios, such as campus or metropolitan area networks, where maximizing fiber utility is important but where ultra-high channel density is not required. The wider channel spacing in CWDM—typically 20 nanometers apart—means that the lasers and optics do not need to be as finely tuned, which lowers cost and simplifies equipment design. However, this also means CWDM cannot pack as many channels into a given fiber as DWDM can. CWDM is limited to around 18 channels in most practical deployments, making it suitable for moderate-capacity needs.
DWDM, on the other hand, is used when the absolute maximum bandwidth is needed over long distances. It utilizes extremely narrow spacing between channels, often as close as 0.8 or 0.4 nanometers, which allows up to 80 or more channels to operate simultaneously on a single fiber pair. Because of this density, DWDM systems require more precise and expensive optics, including temperature-controlled lasers and finely tuned filters. DWDM is primarily used by telecommunications providers and large data center operators who need to transport massive amounts of data across hundreds of kilometers. Equipment used for DWDM typically includes optical amplifiers and multiplexers capable of managing dozens of wavelengths.
Multiplexing, in both CWDM and DWDM, occurs below the data link layer in the OSI model. This means that the process is invisible to switches and routers, which continue to forward traffic as if they were working with standard fiber connections. Because it operates at the physical level, multiplexing enables multiple logical connections to share a single fiber, dramatically improving the efficiency of fiber installations. For example, instead of installing separate fibers for internet traffic, voice communication, and data replication, all three can be carried over one fiber using different wavelengths and then demultiplexed at the destination.
Transceivers used in fiber Ethernet must be carefully selected to match both the cable type and the Ethernet standard being implemented. Each transceiver is typically labeled with a suffix indicating the range and wavelength—such as SR for short range, LR for long range, or ER for extended range. SR modules are designed for multimode fiber and work well for short distances, usually within a building or between racks in a data center. LR modules are optimized for single-mode fiber and are suitable for inter-building links or longer campus connections. ER modules can handle even greater distances and are typically used in metro or carrier-grade networks.
Speed and distance limitations are defined not just by the Ethernet standard but also by the type of fiber being used. Multimode fiber, with its larger core diameter, supports lower-cost transceivers and is easier to install, but it suffers from modal dispersion and cannot support long distances at higher speeds. At 10 Gbps, for example, multimode fiber is typically limited to 300 or 400 meters depending on the grade. Single-mode fiber, with its smaller core, avoids modal dispersion and supports longer distances—often up to 10, 40, or even 80 kilometers depending on the optical module and fiber quality.
In enterprise networks, fiber Ethernet is used primarily for connecting major infrastructure components. This includes linking wiring closets to core switches, connecting distribution switches to data center aggregation layers, and providing high-speed uplinks between floors or buildings. These connections carry the bulk of a network’s traffic, making fiber’s speed, reliability, and low latency ideal. For example, in a three-tier network architecture, single-mode fiber may be used to link core switches across a campus, while multimode fiber connects aggregation switches within the same building.
Comparing fiber and copper Ethernet helps clarify where each technology fits in modern network design. Fiber offers lower latency, is immune to electromagnetic interference, and supports higher speeds over longer distances. These advantages make it ideal for backbone links, critical data paths, and high-performance computing environments. Copper, while more flexible and cost-effective for short runs, cannot match fiber’s distance or speed capabilities. However, copper is often used for access layer devices such as desktop computers, VoIP phones, and printers because it is easier to install and requires no specialized transceivers.
Designing a network using fiber Ethernet standards requires careful planning to align cable types, connector styles, transceiver models, and performance goals. For short-range connections inside a building, standards like 10GBASE-SR and 1000BASE-SX on multimode fiber may be appropriate. For long-range links between sites or buildings, 10GBASE-LR or 1000BASE-LX using single-mode fiber is a better fit. In environments that demand even more throughput, high-density options like 40GBASE-SR4 or DWDM solutions may be considered. Every fiber link should be selected based on distance, speed, and future scalability.
The Network Plus exam includes questions that challenge your understanding of fiber Ethernet and multiplexing. You may be asked to match a fiber standard to its maximum supported distance or determine the correct transceiver type for a given connection. Some questions may show a diagram of a fiber link and ask you to identify whether it is using CWDM, DWDM, or single-strand fiber. Others may describe an application—such as connecting two data centers across a city—and ask which fiber Ethernet standard and multiplexing method is appropriate. These questions test both conceptual knowledge and practical application.
In conclusion, Ethernet over fiber supports the fastest and most scalable connections in modern networking. It provides the performance needed for core, distribution, and backbone links, with standards tailored to both short-range and long-haul deployments. Multiplexing technologies like CWDM and DWDM further extend fiber's utility, allowing multiple data streams to traverse a single fiber simultaneously. By understanding the nuances of fiber types, transceiver specifications, distance limitations, and wavelength management, network professionals can design high-capacity networks that meet the demands of today's bandwidth-intensive environments. Whether you're deploying fiber in an enterprise network or preparing for the Network Plus exam, these concepts are foundational to success.
