Controller Placement (Cisco Wireless LAN Controllers)

The Cisco Unified Wireless Network (CUWN) solution provides significant flexibility for network design and redundancy. Although best practices for the physical placement of controllers and access points (APs) depend on the actual network design, for the most part you can install a controller anywhere on your network, have the APs register to it, and start serving wireless clients.

Before you decide where to locate a controller, you need to decide what model of controller you are going to use. Each controller model has its place within the network. The smaller 2100 series and Network Module-based controllers are usually implemented in small networks or remote locations where you are placing only a few APs and expect few clients. For a large campus deployment where you need hundreds of APs and expect thousands of clients, however, the Wireless Integrated Service Module (WiSM) is a much better solution than installing several smaller-capacity controllers.
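To illustrate the sizing decision, here is a rough Python sketch that picks the smallest controller class covering an expected AP count. The capacity figures are approximate per-model AP limits and the `suggest_controller` helper is invented for this example; always verify capacities against the current Cisco data sheets.

```python
# Illustrative sketch: pick the smallest controller class that covers the
# expected AP count. Capacities are approximate per-model AP limits.
CONTROLLER_CAPACITY = {
    "2106": 6,          # small office / remote site
    "4402-25": 25,      # medium deployment
    "4404-100": 100,    # large deployment
    "WiSM": 300,        # two 150-AP controllers per service module
}

def suggest_controller(ap_count: int) -> str:
    """Return the smallest model whose AP capacity covers ap_count."""
    for model, capacity in sorted(CONTROLLER_CAPACITY.items(),
                                  key=lambda item: item[1]):
        if ap_count <= capacity:
            return model
    return "WiSM"  # beyond 300 APs, scale out with multiple WiSM blades

print(suggest_controller(4))    # small remote site
print(suggest_controller(180))  # large campus
```

In practice the decision also weighs client counts, licensing, and redundancy plans, but the AP count is usually the first-order input.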

To provide wireless redundancy, you can configure multiple controllers to back each other up. Unlike other Cisco devices that exchange heartbeats with a redundant peer to determine whether the primary has failed, the APs are configured with primary, secondary, and tertiary controllers. Each AP exchanges heartbeats with the controller it is registered to; should the heartbeats fail, it concludes that the controller is down and tries to move to an alternative controller that it has been configured with or has discovered by other means. Starting with code Release 5.0, you can configure global primary and secondary backup controllers that apply to all the APs registered to a controller. If no secondary or tertiary controllers are defined for an AP and its current controller fails, the AP tries to join the global primary backup controller. Cisco refers to this as high availability in later code releases.
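The AP's controller selection order can be modeled as a simple preference list. The sketch below is a simplified illustration of that order, not the actual LWAPP/CAPWAP discovery state machine; the function, dictionary keys, and controller names are invented for the example.

```python
# Simplified model of AP failover order: the per-AP primary, secondary, and
# tertiary controllers are tried first; if none respond, the AP falls back
# to the global backup controllers (Release 5.0 and later).
def choose_controller(ap_config, global_backup, reachable):
    """Return the first reachable controller in preference order, or None."""
    candidates = [
        ap_config.get("primary"),
        ap_config.get("secondary"),
        ap_config.get("tertiary"),
        global_backup.get("primary"),
        global_backup.get("secondary"),
    ]
    for controller in candidates:
        if controller and controller in reachable:
            return controller
    return None  # no configured or global backup controller is reachable

ap = {"primary": "wlc-a", "secondary": "wlc-b"}
backup = {"primary": "wlc-core"}
# wlc-a has failed, so the AP joins its configured secondary controller.
print(choose_controller(ap, backup, reachable={"wlc-b", "wlc-core"}))
```

Note how an AP with no per-AP backups configured still finds the global backup controller, which is exactly the gap the Release 5.0 feature closes.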


Using high availability provides network resiliency in the case of hardware or network failure; you should take this into account when determining where to locate your controllers in the network. You can use three backup configurations:

■ With an N+1 scenario, you have two controllers, and all the APs are joined to one of them. Should that controller fail, the APs are configured to move to the backup controller.

■ When you have APs joined to both controllers and each AP has the other controller as its backup, this is known as N+N. Should one controller fail, the APs on that controller move to the second controller. Usually the redundant controllers are on the same physical network, if not on the same management VLAN.

■ An N+N+1 configuration has three controllers. Two of the controllers back up each other, and the third controller backs up the two primary controllers. The +1 controller might be at the network core while the N+N controllers are installed at the access or distribution layer.

Many other combinations exist for configuring the controllers and APs for redundancy.

Note Regardless of the failover scenario you decide to deploy, make sure that the backup controllers have the capacity to accept any and all APs from a failed controller. If you have two 4402-12s with ten APs on each and one of them fails, eight APs will be stranded with no controller to join.
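The arithmetic in the note can be checked with a short capacity calculation. This is a sketch; the 12-AP figure is the 4402-12's AP capacity, and the function name is invented for illustration.

```python
def stranded_aps(surviving_capacity, surviving_load, failed_load):
    """APs left without a controller after a failover, given the surviving
    controller's AP capacity and the APs already joined to it."""
    free_slots = max(surviving_capacity - surviving_load, 0)
    return max(failed_load - free_slots, 0)

# Two 4402-12s (12 APs each) with 10 APs joined to each: a failure leaves
# only 2 free slots on the survivor, stranding 8 APs.
print(stranded_aps(surviving_capacity=12, surviving_load=10, failed_load=10))  # 8
```

Running the same check during the design phase, for every failover pair, is a quick way to validate that your N+1 or N+N plan actually has the headroom it needs.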

Although no restrictions limit the model of controller you can use, the type of redundancy you want (if any), or where in the network you install the controllers, the sections that follow cover some design considerations and suggestions that might aid in your decision.

Access Layer Deployments

The access layer of your network is where client devices access the network. It is also where the majority of the network resources that clients need to access reside. In wireless terms, the WLAN is the access layer.

You can install your controller(s) at the access layer, or wiring closet, of your network. In most cases, you would use the integrated 3750G model in this scenario, but you could use the 4400 series just as easily.

Deploying your controllers at the access layer of the network keeps the access traffic of the wireless clients at the access layer. If you choose the 3750G model, you can take advantage of a lower cost controller, with Layer 3 uplink redundancy at the network edge, as well as the Power over Ethernet (PoE) features of the switch to power your APs. You can also set up N+N redundancy within the switch stack.

Having your controllers at the access layer might introduce some inter-controller roaming challenges. If wireless clients can roam between APs joined to controllers in different access layer blocks, you might introduce Layer 3 roaming events and suboptimal client traffic paths that you need to consider.

Distribution Layer Deployments

The distribution layer of your network is where access layer traffic is aggregated and network policies are enforced. This makes the distribution layer a great place to install your controllers. You can easily implement controller redundancy and AP load-balancing with controllers deployed at the distribution layer. Although WiSMs are usually the controller model deployed in this scenario, the 4400 and 3750G series work just as well.

When you place controllers at the distribution layer, you effectively collapse the access layer into the distribution layer of the network. You should consider what access layer switching features you will need at the distribution layer switches so they can be applied to the incoming and outgoing WLAN traffic from the controller.

Service Block Deployments

Service blocks are groups of service modules in a Catalyst 6500 series switch. A single 6500 switch could hold a Firewall Services Module (FWSM), several WiSM blades, an Intrusion Detection System (IDS) module, and a Network Analysis Module (NAM), as well as high-speed switching modules such as the WS-X6548 line card.

Like distribution layer deployments, service block deployments usually incorporate one or more WiSMs, but you can also use 4400 series controllers here. These deployments usually lead to highly efficient inter-controller mobility and simplified network management. For large campuses, there is also an incremental economy of scale as the size of the network grows.

You would normally install a service block in a data center, where you have redundant power, routing, and switching to prevent a network-down situation in the event of a device failure. Also, data centers are usually staffed by skilled IT professionals.

Data centers are usually reached across the network core, so bandwidth and latency are important factors to consider.

In large networks, you can install service blocks in a redundant arm off the distribution layer switches. This design is valuable when core bandwidth is at a premium, such as when there are several large distributed campuses connected via a metropolitan-area network (MAN).

WAN Considerations

Depending on your network topology, you might have offices or remote networks that are connected to the core network across a WAN. The WAN link could be a dedicated T1 line or Multiprotocol Label Switching (MPLS) network connection.

The type of WAN connection you have plays a part in your decision of where to place your controllers. If you have only a low-bandwidth link such as a T1 line, you might consider placing a 2106 or Network Module Controller (NMC) at the site, or perhaps using the APs in Hybrid Remote Edge Access Point (H-REAP) mode. An AP operating in H-REAP mode sends only LWAPP/CAPWAP control traffic across the WAN link and bridges client data traffic directly to the local switch. Otherwise, a client that is trying to access a local resource, such as a printer, eats up the WAN bandwidth with data packets going across the WAN to the controller and then returning to reach the local resource. If you have a high-speed connection such as MPLS or a dedicated OC3, you could probably get away with leaving the APs at the remote location in Local mode and having them register to your controller at the network core. Placing a controller locally or using H-REAP helps you conserve bandwidth and provides wireless access in the event of a WAN failure.

Another WAN consideration is network delay. If your WAN link experiences significant delay and your APs are not in H-REAP mode, wireless client access to network resources could be sluggish or time out altogether. This would not be the case if the AP directly bridged the client data traffic to the local network switch. If the round-trip ping times from the remote network to where the controller will be installed are greater than 300 ms, you are better off using standalone APs or installing a small controller. The controller and H-REAP APs can handle only a 300 ms delay.
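The WAN guidance above can be condensed into a small decision helper. This is a sketch only: the 300 ms limit comes from the text, while the function name, parameters, and return strings are invented for illustration.

```python
def remote_site_design(rtt_ms, has_local_resources):
    """Rough remote-site design rule of thumb based on WAN round-trip time."""
    if rtt_ms > 300:
        # Beyond the supported LWAPP/CAPWAP delay: keep everything local.
        return "standalone APs or local controller"
    if has_local_resources:
        # Bridge client data locally; only control traffic crosses the WAN.
        return "H-REAP"
    return "local mode to central controller"

print(remote_site_design(rtt_ms=350, has_local_resources=True))
print(remote_site_design(rtt_ms=80, has_local_resources=True))
```

Measure the round-trip time from the remote site to the proposed controller location before committing to a design, since a link that meets the bandwidth requirement can still fail the delay requirement.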
