Tuesday, 23 September 2014

The Cisco Three-Layer Hierarchical Model

Most of us were exposed to hierarchy early in life. Anyone with older siblings learned what it was like to be at the bottom of the hierarchy. Regardless of where you first discovered hierarchy, today most of us experience it in many aspects of our lives. It is hierarchy that helps us understand where things belong, how things fit together, and what functions go where. It brings order and understandability to otherwise complex models. If you want a pay raise, for instance, hierarchy dictates that you ask your boss, not your subordinate. That is the person whose role it is to grant (or deny) your request. So basically, understanding hierarchy helps us discern where we should go to get what we need.

Hierarchy has many of the same benefits in network design that it does in other areas of life. When used properly, it makes networks more predictable. It helps us define which areas should perform certain functions. Likewise, you can use tools such as access lists at certain levels in hierarchical networks and avoid them at others.

Let’s face it: Large networks can be extremely complicated, with multiple protocols, detailed configurations, and diverse technologies. Hierarchy helps us summarize a complex collection of details into an understandable model. Then, as specific configurations are needed, the model dictates the appropriate manner in which to apply them.

The Cisco hierarchical model can help you design, implement, and maintain a scalable, reliable, cost-effective hierarchical internetwork. Cisco defines three layers of hierarchy, as shown in Figure 1, each with specific functions.

FIGURE 1 The Cisco hierarchical model
The following are the three layers and their typical functions:
  • The core layer: backbone
  • The distribution layer: routing
  • The access layer: switching
Each layer has specific responsibilities. Remember, however, that the three layers are logical and are not necessarily physical devices. Consider the OSI model, another logical hierarchy. The seven layers describe functions but not necessarily protocols, right? Sometimes a protocol maps to more than one layer of the OSI model, and sometimes multiple protocols communicate within a single layer. In the same way, when we build physical implementations of hierarchical networks, we may have many devices in a single layer, or we might have a single device performing functions at two layers. The definition of the layers is logical, not physical.

Now, let’s take a closer look at each of the layers.

The Core Layer

The core layer is literally the core of the network. At the top of the hierarchy, the core layer is responsible for transporting large amounts of traffic both reliably and quickly. The only purpose of the network’s core layer is to switch traffic as fast as possible. The traffic transported across the core is common to a majority of users. However, remember that user data is processed at the distribution layer, which forwards the requests to the core if needed.

If there is a failure in the core, every single user can be affected. Therefore, fault tolerance at this layer is an issue. The core is likely to see large volumes of traffic, so speed and latency are driving concerns here. Given the function of the core, we can now consider some design specifics. Let’s start with some things we don’t want to do:
 
  • Don’t do anything to slow down traffic. This includes using access lists, routing between virtual local area networks (VLANs), and implementing packet filtering.
  • Don’t support workgroup access here.
  • Avoid expanding the core (i.e., adding routers) when the internetwork grows. If performance becomes an issue in the core, give preference to upgrades over expansion.
Now, there are a few things that we want to do as we design the core:

  • Design the core for high reliability. Consider data-link technologies that facilitate both speed and redundancy, such as FDDI, Fast Ethernet (with redundant links), or even ATM.
  • Design with speed in mind. The core should have very little latency.
  • Select routing protocols with lower convergence times. Fast and redundant data-link connectivity is no help if your routing tables are shot!

The Distribution Layer

The distribution layer is sometimes referred to as the workgroup layer and is the communication point between the access layer and the core. The primary functions of the distribution layer are to provide routing, filtering, and WAN access and to determine how packets can access the core, if needed. The distribution layer must determine the fastest way that network service requests are handled—for example, how a file request is forwarded to a server. After the distribution layer determines the best path, it forwards the request to the core layer if necessary. The core layer then quickly transports the request to the correct service.

The distribution layer is the place to implement policies for the network. Here you can exercise considerable flexibility in defining network operation. There are several actions that generally should be done at the distribution layer:
  • Routing
  • Implementing tools (such as access lists), packet filtering, and queuing
  • Implementing security and network policies, including address translation and firewalls
  • Redistributing between routing protocols, including static routing
  • Routing between VLANs and other workgroup support functions
  • Defining broadcast and multicast domains
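To make the access-list idea above a bit more concrete, here is a minimal, hypothetical sketch (in Python) of how a first-match-wins packet filter evaluates traffic at the distribution layer. The rule format and field names are invented purely for illustration; this is not Cisco IOS access-list syntax.

import ipaddress

# Hypothetical rules: (action, source network, destination TCP port or None for "any")
RULES = [
    ("permit", "10.1.0.0/16", 80),     # let the workgroup reach web servers
    ("deny",   "10.1.0.0/16", 23),     # block Telnet from that same workgroup
    ("permit", "0.0.0.0/0",   None),   # everything else is allowed in this sketch
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first matching rule, the way an access list does."""
    for action, network, port in RULES:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(network) and port in (None, dst_port):
            return action
    return "deny"   # real access lists end with an implicit deny

print(filter_packet("10.1.2.3", 23))   # deny
print(filter_packet("10.1.2.3", 80))   # permit

The point isn't the code itself; it's that this kind of first-match policy logic belongs at the distribution layer, not in the core.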

Things to avoid at the distribution layer are limited to those functions that exclusively belong to one of the other layers.

The Access Layer

The access layer controls user and workgroup access to internetwork resources. The access layer is sometimes referred to as the desktop layer. The network resources most users need will be available locally. The distribution layer handles any traffic for remote services. The following are some of the functions to be included at the access layer:
  • Continued (from distribution layer) use of access control and policies
  • Creation of separate collision domains (segmentation)
  • Workgroup connectivity into the distribution layer

Technologies such as dial-on-demand routing (DDR) and Ethernet switching are frequently seen in the access layer. Static routing (instead of dynamic routing protocols) is seen here as well.

As already noted, three separate levels does not imply three separate routers. There could be fewer, or there could be more. Remember, this is a layered approach.

Saturday, 20 September 2014

Data Encapsulation

When a host transmits data across a network to another device, the data goes through encapsulation: It is wrapped with protocol information at each layer of the OSI model. Each layer communicates only with its peer layer on the receiving device.

To communicate and exchange information, each layer uses Protocol Data Units (PDUs). These hold the control information attached to the data at each layer of the model. The control information is usually attached as a header in front of the data field, but it can also ride in the trailer, or end, of it.

Each PDU attaches to the data by encapsulating it at each layer of the OSI model, and each has a specific name depending on the information provided in each header. This PDU information is read only by the peer layer on the receiving device. After it’s read, it’s stripped off and the data is then handed to the next layer up.

Figure 1 shows the PDUs and how they attach control information to each layer. This figure demonstrates how the upper-layer user data is converted for transmission on the network. The data stream is then handed down to the Transport layer, which sets up a virtual circuit to the receiving device by sending over a synch packet. Next, the data stream is broken up into smaller pieces, and a Transport layer header (a PDU) is created and attached to the header of the data field; now the piece of data is called a segment. Each segment is sequenced so the data stream can be put back together on the receiving side exactly as it was transmitted.

Each segment is then handed to the Network layer for network addressing and routing through the internetwork. Logical addressing (for example, IP) is used to get each segment to the correct network. The Network layer protocol adds a control header to the segment handed down from the Transport layer, and what we have now is called a packet or datagram. Remember that the Transport and Network layers work together to rebuild a data stream on a receiving host, but it’s not part of their work to place their PDUs on a local network segment—which is the only way to get the information to a router or host.
It’s the Data Link layer that’s responsible for taking packets from the Network layer and placing them on the network medium (cable or wireless). The Data Link layer encapsulates each packet in a frame, and the frame’s header carries the hardware address of the source and destination hosts. If the destination device is on a remote network, then the frame is sent to a router to be routed through an internetwork. Once it gets to the destination network, a new frame is used to get the packet to the destination host.

To put this frame on the network, it must first be put into a digital signal. Since a frame is really a logical group of 1s and 0s, the Physical layer is responsible for encoding these digits into a digital signal, which is read by devices on the same local network. The receiving devices will synchronize on the digital signal and extract (decode) the 1s and 0s from the digital signal. At this point, the devices build the frames, run a CRC, and then check their answer against the answer in the frame’s FCS field. If it matches, the packet is pulled from the frame and what’s left of the frame is discarded. This process is called de-encapsulation.

The packet is handed to the Network layer, where the address is checked. If the address matches, the segment is pulled from the packet and what’s left of the packet is discarded. The segment is processed at the Transport layer, which rebuilds the data stream and acknowledges to the transmitting station that it received each piece. It then happily hands the data stream to the upper-layer application.

At a transmitting device, the data encapsulation method works like this:

1. User information is converted to data for transmission on the network.
2. Data is converted to segments, and a reliable connection is set up between the transmitting and receiving hosts.
3. Segments are converted to packets or datagrams, and a logical address is placed in the header so each packet can be routed through an internetwork.
4. Packets or datagrams are converted to frames for transmission on the local network. Hardware (Ethernet) addresses are used to uniquely identify hosts on a local network segment.
5. Frames are converted to bits, and a digital encoding and clocking scheme is used.

To explain this in more detail using the layer addressing, I'll use Figure 2.

FIGURE 2 PDU and layer addressing
Remember that a data stream is handed down from the upper layer to the Transport layer. As technicians, we really don’t care who the data stream comes from because that’s really a programmer’s problem. Our job is to rebuild the data stream reliably and hand it to the upper layers on the receiving device.

Before we go further in our discussion of Figure 2, let’s discuss port numbers and make sure we understand them. The Transport layer uses port numbers to define both the virtual circuit and the upper-layer process, as you can see from Figure 3.

FIGURE 3 Port numbers at the Transport layer
The Transport layer takes the data stream, makes segments out of it, and establishes a reliable session by creating a virtual circuit. It then sequences (numbers) each segment and uses acknowledgments and flow control. If you’re using TCP, the virtual circuit is defined by the source port number. Remember, the host just makes this up starting at port number 1024 (0 through 1023 are reserved for well-known port numbers). The destination port number defines the upper-layer process (application) that the data stream is handed to when the data stream is reliably rebuilt on the receiving host.
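If you want to see the source-port side of this on your own machine, here's a quick sketch using nothing but Python's standard library. (One hedge: the text above uses 1024 as the starting point for ephemeral ports, but modern operating systems usually hand out source ports from a much higher range.)

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))      # port 0 means "pick an ephemeral source port for me"
print(s.getsockname())        # e.g. ('127.0.0.1', 49731) -- the made-up source port
s.close()

# The destination port, by contrast, is the well-known one for the application,
# such as 80 for HTTP or 23 for Telnet, so the receiver knows which process gets the data.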

Now that you understand port numbers and how they are used at the Transport layer, let's go back to Figure 2. Once the Transport layer header information is added to the piece of data, it becomes a segment and is handed down to the Network layer along with the destination IP address. (The destination IP address was handed down from the upper layers to the Transport layer with the data stream, and it was discovered through a name resolution method at the upper layers—probably DNS.)

The Network layer adds a header with the logical addressing (IP addresses) to the front of each segment. Once the header is added to the segment, the PDU is called a packet. The packet has a protocol field that describes where the segment came from (either UDP or TCP) so the segment can be handed to the correct protocol at the Transport layer when it reaches the receiving host.

The Network layer is responsible for finding the destination hardware address that dictates where the packet should be sent on the local network. It does this by using the Address Resolution Protocol (ARP). IP at the Network layer looks at the destination IP address and compares that address to its own source IP address and subnet mask. If it turns out to be a local network request, the hardware address of the local host is requested via an ARP request. If the packet is destined for a remote host, IP will look for the IP address of the default gateway (router) instead.
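Here's that local-versus-remote decision sketched with Python's standard ipaddress module; the addresses are just example values, not anything from the figures:

import ipaddress

my_interface = ipaddress.ip_interface("192.168.0.2/24")   # source IP address plus subnet mask
destination  = ipaddress.ip_address("192.168.0.7")

if destination in my_interface.network:
    print("Local: ARP for the destination host's hardware address")
else:
    print("Remote: ARP for the default gateway's hardware address instead")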

The packet, along with the destination hardware address of either the local host or default gateway, is then handed down to the Data Link layer. The Data Link layer will add a header to the front of the packet and the piece of data then becomes a frame. (We call it a frame because both a header and a trailer are added to the packet, which makes the data resemble bookends or a frame, if you will.) This is shown in Figure 2. The frame uses an Ether-Type field to describe which protocol the packet came from at the Network layer. Now a cyclic redundancy check (CRC) is run on the frame, and the answer to the CRC is placed in the Frame Check Sequence field found in the trailer of the frame.

The frame is now ready to be handed down, one bit at a time, to the Physical layer, which will use bit timing rules to encode the data in a digital signal. Every device on the network segment will synchronize with the clock and extract the 1s and 0s from the digital signal and build a frame. After the frame is rebuilt, a CRC is run to make sure the frame is okay. If everything turns out to be all good, the hosts will check the destination address to see if the frame is for them. If all this is making your eyes cross and your brain freeze, don’t freak.
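If seeing the whole thing as a few lines of code helps, here's a minimal transmit-side sketch using Python's standard struct and zlib modules. The Transport and Network layer headers here are heavily simplified stand-ins, not the real TCP and IP formats; only the Ethernet II fields and the CRC-32-style FCS mirror what was described above, and the MAC addresses are borrowed from the OmniPeek capture in the Ethernet at the Data Link Layer post below.

import struct, zlib

upper_layer_data = b"GET /index.html"                 # data stream from the upper layers

# Transport layer: add a (simplified) header with source and destination ports -> segment
segment = struct.pack("!HH", 49731, 80) + upper_layer_data

# Network layer: add a (simplified) header with source and destination IP addresses -> packet
packet = bytes([192, 168, 0, 2]) + bytes([192, 168, 0, 3]) + segment

# Data Link layer: destination MAC, source MAC, Ether-Type 0x0800 (IPv4) -> frame
dst_mac = bytes.fromhex("0060f5001f27")
src_mac = bytes.fromhex("0060f5001f2c")
frame = dst_mac + src_mac + struct.pack("!H", 0x0800) + packet

# Run a CRC on the frame and place the answer in the FCS field in the trailer
frame += struct.pack("!I", zlib.crc32(frame))

print(len(frame), "bytes ready for the Physical layer, one bit at a time")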

Wednesday, 17 September 2014

Ethernet Cabling

Ethernet cabling is an important discussion, especially if you are planning on taking the Cisco exams. Three types of Ethernet cables are available:
  • Straight-through cable
  • Crossover cable
  • Rolled cable
We will look at each in the following sections.

Straight-Through Cable

The straight-through cable is used to connect
  • Host to switch or hub
  • Router to switch or hub

Four wires are used in straight-through cable to connect Ethernet devices. It is relatively simple to create this type; Figure 1 shows the four wires used in a straight-through Ethernet cable.

Notice that only pins 1, 2, 3, and 6 are used. Just connect 1 to 1, 2 to 2, 3 to 3, and 6 to 6 and you’ll be up and networking in no time. However, remember that this would be an Ethernet-only cable and wouldn’t work with voice, Token Ring, ISDN, and so on.

FIGURE 1 Straight-through Ethernet cable

Crossover Cable

The crossover cable can be used to connect
  • Switch to switch
  • Hub to hub
  • Host to host
  • Hub to switch
  • Router direct to host
The same four wires are used in this cable as in the straight-through cable; we just connect different pins together. Figure 2 shows how the four wires are used in a crossover Ethernet cable.

Notice that instead of connecting 1 to 1, 2 to 2, and so on, here we connect pins 1 to 3 and 2 to 6 on each side of the cable.

FIGURE 2 Crossover Ethernet cable
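Here's a tiny sketch that just encodes the two pinouts described above so you can compare them side by side (Python, standard library only):

# Pin-to-pin mappings for the two cable types (only pins 1, 2, 3, and 6 are used)
STRAIGHT_THROUGH = {1: 1, 2: 2, 3: 3, 6: 6}
CROSSOVER        = {1: 3, 2: 6, 3: 1, 6: 2}

for pin in (1, 2, 3, 6):
    print(f"pin {pin}: straight-through -> {STRAIGHT_THROUGH[pin]}, crossover -> {CROSSOVER[pin]}")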

Rolled Cable

Although rolled cable isn’t used to connect any Ethernet connections together, you can use a rolled Ethernet cable to connect a host to a router console serial communication (com) port.

If you have a Cisco router or switch, you would use this cable to connect your PC running HyperTerminal to the Cisco hardware. Eight wires are used in this cable to connect serial devices, although not all eight are used to send information, just as in Ethernet networking.

Figure 3 shows the eight wires used in a rolled cable.

FIGURE 3 Rolled Ethernet cable
These are probably the easiest cables to make because you just cut the end off on one side of a straight-through cable, turn it over, and put it back on (with a new connector, of course).

Once you have the correct cable connected from your PC to the Cisco router or switch, you can start HyperTerminal to create a console connection and configure the device. Set the configuration as follows:

1. Open HyperTerminal and enter a name for the connection. It is irrelevant what you name it, but I always just use Cisco. Then click OK.
2. Choose the communications port—either COM1 or COM2, whichever is open on your PC.
3. Now set the port settings. The default values (2400bps and hardware flow control) will not work; you must set the port settings as shown in Figure 4.

Notice that the bit rate is now set to 9600 and the flow control is set to None. At this point, you can click OK and press the Enter key and you should be connected to your Cisco device console port.
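If you'd rather script the console session than click through HyperTerminal, here's a hedged sketch using the third-party pyserial package (pip install pyserial). The COM1 port name is an assumption; your PC might expose the console cable as COM2 or /dev/ttyUSB0 instead.

import serial   # third-party package: pyserial

console = serial.Serial(
    port="COM1",                     # whichever serial port is open on your PC
    baudrate=9600,                   # 9600 bits per second, as in Figure 4
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=False,                    # flow control set to None
    xonxoff=False,
    timeout=1,
)
console.write(b"\r\n")               # same as pressing Enter to wake up the console
print(console.read(100).decode(errors="ignore"))
console.close()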

We’ve taken a look at the various RJ45 unshielded twisted pair (UTP) cables. Keeping this in mind, what cable is used between the switches in Figure 5? In order for host A to ping host B, you need a crossover cable to connect the two switches together. But what types of cables are used in the network shown in Figure 6? In Figure 6, there are a variety of cables in use. For the connection between the switches, we’d obviously use a crossover cable like the one we saw in Figure 2. The trouble is, we have a console connection that uses a rolled cable. Plus, the connection from the router to the switch is a straight-through cable, as is true for the hosts to the switches. Keep in mind that if we had a serial connection (which we don’t), it would be a V.35 that we’d use to connect us to a WAN.

FIGURE 4 Port settings for a rolled cable connection
FIGURE 5 RJ45 UTP cable question #1
FIGURE 6 RJ45 UTP cable question #2

Sunday, 14 September 2014

Ethernet at the Physical Layer

Ethernet was first implemented by a group called DIX (Digital, Intel, and Xerox). They created and implemented the first Ethernet LAN specification, which the IEEE used to create the IEEE 802.3 Committee. This was a 10Mbps network that ran on coax and then eventually twisted pair and fiber physical media.

The IEEE extended the 802.3 Committee to two new committees known as 802.3u (Fast Ethernet) and 802.3ab (Gigabit Ethernet on category 5) and then finally 802.3ae (10Gbps over fiber and coax).

Figure 1 shows the IEEE 802.3 and original Ethernet Physical layer specifications.

When designing your LAN, it’s really important to understand the different types of Ethernet media available to you. Sure, it would be great to run Gigabit Ethernet to each desktop and 10Gbps between switches, and although this might happen one day, justifying the cost of that network today would be pretty difficult. But if you mix and match the different types of Ethernet media methods currently available, you can come up with a cost-effective network solution that works great.

FIGURE 1 Ethernet Physical layer specifications
The EIA/TIA (the Electronic Industries Alliance and the newer Telecommunications Industry Association) is the standards body that creates the Physical layer specifications for Ethernet. The EIA/TIA specifies that Ethernet use a registered jack (RJ) connector with a 45 wiring sequence on unshielded twisted-pair (UTP) cabling (RJ45). However, the industry is moving toward calling this just an 8-pin modular connector.

Each Ethernet cable type that is specified by the EIA/TIA has inherent attenuation, which is defined as the loss of signal strength as it travels the length of a cable and is measured in decibels (dB). The cabling used in corporate and home markets is measured in categories. A higher-quality cable will have a higher-rated category and lower attenuation. For example, category 5 is better than category 3 because category 5 cables have more wire twists per foot and therefore less crosstalk. Crosstalk is the unwanted signal interference from adjacent pairs in the cable.
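Since attenuation is quoted in decibels, here's the standard power-ratio formula in a couple of lines of Python (the power values are made-up examples):

import math

def attenuation_db(power_in_mw, power_out_mw):
    """Signal loss in decibels: 10 * log10(power in / power out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

print(round(attenuation_db(1.0, 0.5), 1))   # 3.0 dB  -- half the power lost along the run
print(round(attenuation_db(1.0, 0.1), 1))   # 10.0 dB -- only a tenth of the power left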

Here are the original IEEE 802.3 standards:

10Base2 10Mbps, baseband technology, up to 185 meters in length. Known as thinnet and can support up to 30 workstations on a single segment. Uses a physical and logical bus with AUI connectors. The 10 means 10Mbps, Base means baseband technology (which is a signaling method for communication on the network), and the 2 means almost 200 meters. 10Base2 Ethernet cards use BNC (British Naval Connector, Bayonet Neill Concelman, or Bayonet Nut Connector) and T-connectors to connect to a network.

10Base5 10Mbps, baseband technology, up to 500 meters in length. Known as thicknet. Uses a physical and logical bus with AUI connectors. Up to 2,500 meters with repeaters and 1,024 users for all segments.

10BaseT 10Mbps using category 3 UTP wiring. Unlike with the 10Base2 and 10Base5 networks, each device must connect into a hub or switch, and you can have only one host per segment or wire. Uses an RJ45 connector (8-pin modular connector) with a physical star topology and a logical bus.

Each of the 802.3 standards defines an Attachment Unit Interface (AUI), which allows a one-bit-at-a-time transfer to the Physical layer from the Data Link media access method. This allows the MAC to remain constant but means the Physical layer can support any existing and new technologies. The original AUI interface was a 15-pin connector, which allowed a transceiver (transmitter/receiver) that provided a 15-pin-to-twisted-pair conversion.

The thing is, the AUI interface cannot support 100Mbps Ethernet because of the high frequencies involved. So 100BaseT needed a new interface, and the 802.3u specifications created one called the Media Independent Interface (MII), which provides 100Mbps throughput. The MII uses a nibble, defined as 4 bits. Gigabit Ethernet uses a Gigabit Media Independent Interface (GMII) and transmits 8 bits at a time.

802.3u (Fast Ethernet) is compatible with 802.3 Ethernet because they share the same physical characteristics. Fast Ethernet and Ethernet use the same maximum transmission unit (MTU), use the same MAC mechanisms, and preserve the frame format that is used by 10BaseT Ethernet.

Basically, Fast Ethernet is just based on an extension to the IEEE 802.3 specification, except that it offers a speed increase of 10 times that of 10BaseT.

Here are the expanded IEEE Ethernet 802.3 standards:

100BaseTX (IEEE 802.3u) EIA/TIA category 5, 6, or 7 UTP two-pair wiring. One user per segment; up to 100 meters long. It uses an RJ45 connector with a physical star topology and a logical bus.

100BaseFX (IEEE 802.3u) Uses 62.5/125-micron multimode fiber cabling. Point-to-point topology; up to 412 meters long. It uses ST or SC connectors, which are media interface connectors.

1000BaseCX (IEEE 802.3z) Copper twisted-pair called twinax (a balanced coaxial pair) that can only run up to 25 meters.

1000BaseT (IEEE 802.3ab) Category 5, four-pair UTP wiring up to 100 meters long.

1000BaseSX (IEEE 802.3z) MMF using 62.5- and 50-micron core; uses an 850-nanometer laser and can go up to 220 meters with 62.5-micron, 550 meters with 50-micron.

1000BaseLX (IEEE 802.3z) Single-mode fiber that uses a 9-micron core and 1300 nanometer laser and can go from 3 kilometers up to 10 kilometers.

Note: If you want to implement a network medium that is not susceptible to electromagnetic interference (EMI), fiber-optic cable is the way to go; it provides a more secure, long-distance option that also supports high speeds.

Friday, 12 September 2014

Ethernet at the Data Link Layer

Ethernet at the Data Link layer is responsible for Ethernet addressing, commonly referred to as hardware addressing or MAC addressing. Ethernet is also responsible for framing packets received from the Network layer and preparing them for transmission on the local network through the Ethernet contention media access method.

Ethernet Addressing

Here’s where we get into how Ethernet addressing works. It uses the Media Access Control (MAC) address burned into each and every Ethernet network interface card (NIC). The MAC, or hardware, address is a 48-bit (6-byte) address written in a hexadecimal format.

Figure 1 shows the 48-bit MAC addresses and how the bits are divided.

FIGURE 1 Ethernet addressing using MAC addresses
The organizationally unique identifier (OUI) is assigned by the IEEE to an organization.

It’s composed of 24 bits, or 3 bytes. The organization, in turn, assigns a globally administered address (24 bits, or 3 bytes) that is unique (supposedly, again—no guarantees) to each and every adapter it manufactures. Look closely at the figure. The high-order bit is the Individual/Group (I/G) bit. When it has a value of 0, we can assume that the address is the MAC address of a device and may well appear in the source portion of the MAC header. When it is a 1, we can assume that the address represents either a broadcast or multicast address in Ethernet or a broadcast or functional address in Token Ring and FDDI (who really knows about FDDI?).

The next bit is the global/local bit, or just G/L bit (also known as U/L, where U means universal). When set to 0, this bit represents a globally administered address (as by the IEEE). When the bit is a 1, it represents a locally governed and administered address (as in what DECnet used to do).

The low-order 24 bits of an Ethernet address represent a locally administered or manufacturer assigned code. This portion commonly starts with 24 0s for the first card made and continues in order until there are 24 1s for the last (16,777,216th) card made. You’ll find that many manufacturers use these same six hex digits as the last six characters of their serial number on the same card.
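Here's a short Python sketch that pulls those pieces out of a MAC address string. One wrinkle worth noting: in the written hexadecimal form, the I/G and G/L bits are the two lowest-order bits of the first byte; they lead the address on the wire only because Ethernet transmits each byte least significant bit first.

def mac_fields(mac):
    octets = [int(part, 16) for part in mac.split(":")]
    return {
        "oui": mac[:8],                       # first 24 bits: the IEEE-assigned OUI
        "vendor_assigned": mac[9:],           # last 24 bits: assigned by the manufacturer
        "i_g": octets[0] & 0b01,              # 1 = group (broadcast/multicast), 0 = individual
        "g_l": (octets[0] & 0b10) >> 1,       # 1 = locally administered, 0 = globally (IEEE)
    }

print(mac_fields("00:60:f5:00:1f:27"))        # an ordinary, globally administered unicast address
print(mac_fields("ff:ff:ff:ff:ff:ff"))        # I/G bit is 1: a broadcast address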

Ethernet Frames

The Data Link layer is responsible for combining bits into bytes and bytes into frames. Frames are used at the Data Link layer to encapsulate packets handed down from the Network layer for transmission on a type of media access.

The function of Ethernet stations is to pass data frames between each other using a group of bits known as a MAC frame format. This provides error detection from a cyclic redundancy check (CRC). But remember—this is error detection, not error correction. The 802.3 frames and Ethernet frame are shown in Figure 2.

Note: Encapsulating a frame within a different type of frame is called tunneling.

FIGURE 2  802.3 and Ethernet frame formats
Following are the details of the different fields in the 802.3 and Ethernet frame types:

Preamble An alternating 1,0 pattern provides a 5MHz clock at the start of each packet, which allows the receiving devices to lock onto the incoming bit stream.

Start Frame Delimiter (SFD)/Synch The preamble is seven octets and the SFD is one octet (synch). The SFD is 10101011, where the last pair of 1s allows the receiver to come into the alternating 1,0 pattern somewhere in the middle and still sync up and detect the beginning of the data.

Destination Address (DA) This transmits a 48-bit value using the least significant bit (LSB) first. The DA is used by receiving stations to determine whether an incoming packet is addressed to a particular node. The destination address can be an individual address or a broadcast or multicast MAC address. Remember that a broadcast is all 1s (or Fs in hex) and is sent to all devices, but a multicast is sent only to a similar subset of nodes on a network.

Source Address (SA) The SA is a 48-bit MAC address used to identify the transmitting device, and it uses the LSB first. Broadcast and multicast address formats are illegal within the SA field.

Length or Type 802.3 uses a Length field, but the Ethernet frame uses a Type field to identify the Network layer protocol. 802.3 cannot identify the upper-layer protocol and must be used with a proprietary LAN—IPX, for example.

Data This is a packet sent down to the Data Link layer from the Network layer. The size can vary from 64 to 1,500 bytes.

Frame Check Sequence (FCS) FCS is a field at the end of the frame that’s used to store the CRC.

Let’s pause here for a minute and take a look at some frames caught on our trusty OmniPeek network analyzer. You can see that the frame below has only three fields: Destination, Source, and Type (shown as Protocol Type on this analyzer):

Destination: 00:60:f5:00:1f:27
Source: 00:60:f5:00:1f:2c
Protocol Type: 08-00 IP

This is an Ethernet_II frame. Notice that the type field is IP, or 08-00 (mostly just referred to as 0x800) in hexadecimal.

The next frame has the same fields, so it must be an Ethernet_II frame too:

Destination: ff:ff:ff:ff:ff:ff Ethernet Broadcast
Source: 02:07:01:22:de:a4
Protocol Type: 08-00 IP

Did you notice that this frame was a broadcast? You can tell because the destination hardware address is all 1s in binary, or all Fs in hexadecimal.

Let’s take a look at one more Ethernet_II frame. I’ll talk about this next example again when we get to IPv6 in this blog, but you can see that it’s the same Ethernet_II frame we use with the IPv4 routed protocol; the Type field carries 0x86dd when the frame is carrying IPv6 data and 0x0800 when it’s carrying IPv4 data:

Destination: IPv6-Neighbor-Discovery_00:01:00:03 (33:33:00:01:00:03)
Source: Aopen_3e:7f:dd (00:01:80:3e:7f:dd)
Type: IPv6 (0x86dd)

This is the beauty of the Ethernet_II frame. Because of the protocol field, we can run any Network layer routed protocol and it will carry the data because it can identify the Network layer protocol.
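Here's a small sketch that reads those three fields out of raw frame bytes with Python's standard struct module, using the destination, source, and Type values from the IPv6 capture above:

import struct

raw = bytes.fromhex("333300010003" "0001803e7fdd" "86dd") + b"(payload would follow)"

dst, src, eth_type = struct.unpack("!6s6sH", raw[:14])

def pretty(mac_bytes):
    return ":".join(f"{b:02x}" for b in mac_bytes)

print("Destination:", pretty(dst))                      # 33:33:00:01:00:03
print("Source:     ", pretty(src))                      # 00:01:80:3e:7f:dd
print("Type:       ", hex(eth_type),
      {0x0800: "IPv4", 0x86dd: "IPv6"}.get(eth_type))   # 0x86dd -> IPv6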

Tuesday, 9 September 2014

Half- and Full-Duplex Ethernet

Half-duplex Ethernet is defined in the original 802.3 Ethernet; Cisco says it uses only one wire pair with a digital signal running in both directions on the wire. Certainly, the IEEE specifications discuss the process of half duplex somewhat differently, but what Cisco is talking about is a general sense of what is happening here with Ethernet.

It also uses the CSMA/CD protocol to help prevent collisions and to permit retransmitting if a collision does occur. If a hub is attached to a switch, it must operate in half-duplex mode because the end stations must be able to detect collisions. Half-duplex Ethernet—typically 10BaseT—is only about 30 to 40 percent efficient as Cisco sees it because a large 10BaseT network will usually only give you 3 to 4Mbps, at most.

But full-duplex Ethernet uses two pairs of wires instead of one wire pair like half duplex. And full duplex uses a point-to-point connection between the transmitter of the transmitting device and the receiver of the receiving device. This means that with full-duplex data transfer, you get a faster data transfer compared to half duplex. And because the transmitted data is sent on a different set of wires than the received data, no collisions will occur.

The reason you don’t need to worry about collisions is because now it’s like a freeway with multiple lanes instead of the single-lane road provided by half duplex. Full-duplex Ethernet is supposed to offer 100 percent efficiency in both directions—for example, you can get 20Mbps with a 10Mbps Ethernet running full duplex or 200Mbps for Fast Ethernet. But this rate is something known as an aggregate rate, which translates as “you’re supposed to get” 100 percent efficiency. No guarantees, in networking as in life.

Full-duplex Ethernet can be used in three situations:

  • With a connection from a switch to a host
  • With a connection from a switch to a switch
  • With a connection from a host to a host using a crossover cable

Note: Full-duplex Ethernet requires a point-to-point connection when only two nodes are present. You can run full-duplex with just about any device except a hub.

Now, if it’s capable of all that speed, why wouldn’t it deliver? Well, when a full-duplex Ethernet port is powered on, it first connects to the remote end and then negotiates with the other end of the Fast Ethernet link. This is called an auto-detect mechanism. This mechanism first decides on the exchange capability, which means it checks to see if it can run at 10 or 100Mbps. It then checks to see if it can run full duplex, and if it can’t, it will run half duplex.
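Here's a hedged sketch of that decision order. The capability names are invented for illustration; real autonegotiation exchanges link code words defined by 802.3, but the fall-back logic is the same idea.

def auto_negotiate(local, remote):
    """Pick the best shared speed first, then fall back from full to half duplex."""
    shared = local & remote
    speed  = 100 if "100Mbps" in shared else 10
    duplex = "full" if "full-duplex" in shared else "half"
    return speed, duplex

print(auto_negotiate({"10Mbps", "100Mbps", "full-duplex"},
                     {"10Mbps", "100Mbps"}))            # (100, 'half')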

Note: Remember that half-duplex Ethernet shares a collision domain and provides a lower effective throughput than full-duplex Ethernet, which typically has a private collision domain and a higher effective throughput.

Lastly, remember these important points:
  • There are no collisions in full-duplex mode.
  • A dedicated switch port is required for each full-duplex node.
  • The host network card and the switch port must be capable of operating in full-duplex mode.

Now let's take a look at how Ethernet works at the Data Link layer.

Saturday, 6 September 2014

Ethernet Networking Uses the CSMA/CD Protocol

Ethernet is a contention media access method that allows all hosts on a network to share the same bandwidth of a link. Ethernet is popular because it’s readily scalable, meaning that it’s comparatively easy to integrate new technologies, such as Fast Ethernet and Gigabit Ethernet, into an existing network infrastructure. It’s also relatively simple to implement in the first place, and with it, troubleshooting is reasonably straightforward. Ethernet uses both Data Link and Physical layer specifications, and this section of the chapter will give you both the Data Link layer and Physical layer information you need to effectively implement, troubleshoot, and maintain an Ethernet network.

Ethernet networking uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD), a protocol that helps devices share the bandwidth evenly without having two devices transmit at the same time on the network medium. CSMA/CD was created to overcome the problem of those collisions that occur when packets are transmitted simultaneously from different nodes. And trust me—good collision management is crucial, because when a node transmits in a CSMA/CD network, all the other nodes on the network receive and examine that transmission. Only bridges and routers can effectively prevent a transmission from propagating throughout the entire network!

So, how does the CSMA/CD protocol work? Let's start by taking a look at Figure 1.

FIGURE 1 CSMA/CD
When a host wants to transmit over the network, it first checks for the presence of a digital signal on the wire. If all is clear (no other host is transmitting), the host will then proceed with its transmission. But it doesn’t stop there. The transmitting host constantly monitors the wire to make sure no other hosts begin transmitting. If the host detects another signal on the wire, it sends out an extended jam signal that causes all nodes on the segment to stop sending data (think busy signal). The nodes respond to that jam signal by waiting a while before attempting to transmit again. Backoff algorithms determine when the colliding stations can retransmit. If collisions keep occurring after 15 tries, the nodes attempting to transmit will then timeout. Pretty clean!

When a collision occurs on an Ethernet LAN, the following happens:

  • A jam signal informs all devices that a collision occurred.
  • The collision invokes a random backoff algorithm.
  • Each device on the Ethernet segment stops transmitting for a short time until the timers expire.
  • All hosts have equal priority to transmit after the timers have expired.
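Here's a hedged Python sketch of the backoff algorithm mentioned above. The slot time and the give-up threshold follow the classic truncated binary exponential backoff, which matches the behavior described here in spirit, even if the real silicon handles the details.

import random

SLOT_TIME_US = 51.2    # one slot time on 10Mbps Ethernet, in microseconds

def backoff_delay(collision_count):
    """After the nth collision, wait a random number of slot times in [0, 2^min(n, 10) - 1]."""
    if collision_count > 15:
        raise RuntimeError("too many collisions -- give up and report an error")
    slots = random.randrange(0, 2 ** min(collision_count, 10))
    return slots * SLOT_TIME_US

for attempt in range(1, 5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.1f} microseconds")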

The following are the effects of having a CSMA/CD network sustaining heavy collisions:
  • Delay
  • Low throughput
  • Congestion

Thursday, 21 August 2014

The OSI Reference Model

One of the greatest functions of the OSI specifications is to assist in data transfer between disparate hosts—meaning, for example, that they enable us to transfer data between a Unix host and a PC or a Mac.

The OSI isn’t a physical model, though. Rather, it’s a set of guidelines that application developers can use to create and implement applications that run on a network. It also provides a framework for creating and implementing networking standards, devices, and internetworking schemes.

The OSI has seven different layers, divided into two groups. The top three layers define how the applications within the end stations will communicate with each other and with users. The bottom four layers define how data is transmitted end to end. Figure 1.6 shows the three upper layers and their functions, and Figure 1.7 shows the four lower layers and their functions.
When you study Figure 1.6, understand that the user interfaces with the computer at the Application layer and also that the upper layers are responsible for applications communicating between hosts.
Remember that none of the upper layers knows anything about networking or network addresses. That’s the responsibility of the four bottom layers.

In Figure 1.7, you can see that it’s the four bottom layers that define how data is transferred through a physical wire or through switches and routers. These bottom layers also determine how to rebuild a data stream from a transmitting host to a destination host’s application.
The following network devices operate at all seven layers of the OSI model:
  • Network management stations (NMSs)
  • Web and application servers
  • Gateways (not default gateways)
  • Network hosts

Basically, the ISO is pretty much the Emily Post of the network protocol world. Just as Ms. Post wrote the book setting the standards—or protocols—for human social interaction, the ISO developed the OSI reference model as the precedent and guide for an open network protocol set. Defining the etiquette of communication models, it remains today the most popular means of comparison for protocol suites.

The OSI reference model has seven layers:

Application layer (layer 7)
Presentation layer (layer 6)
Session layer (layer 5)
Transport layer (layer 4)
Network layer (layer 3)
Data Link layer (layer 2)
Physical layer (layer 1)

Figure 1.8 shows a summary of the functions defined at each layer of the OSI model. With this in hand, you’re now ready to explore each layer’s function in detail.

Advantages of Reference Models

The OSI model is hierarchical, and the same benefits and advantages can apply to any layered model. The primary purpose of all such models, especially the OSI model, is to allow different vendors’ networks to interoperate.

Advantages of using the OSI layered model include, but are not limited to, the following:

1. It divides the network communication process into smaller and simpler components, thus aiding component development, design, and troubleshooting.
2. It allows multiple-vendor development through standardization of network components.
3. It encourages industry standardization by defining what functions occur at each layer of the model.
4. It allows various types of network hardware and software to communicate.
5. It prevents changes in one layer from affecting other layers, so it does not hamper development.

The Layered Approach

A reference model is a conceptual blueprint of how communications should take place. It addresses all the processes required for effective communication and divides these processes into logical groupings called layers. When a communication system is designed in this manner, it's known as layered architecture.

Think of it like this: You and some friends want to start a company. One of the first things you'll do is sit down and think through what tasks must be done, who will do them, the order in which they will be done, and how they relate to each other. Ultimately, you might group these tasks into departments. Let's say you decide to have an order-taking department, an inventory department, and a shipping department. Each of your departments has its own unique tasks, keeping its staff members busy and requiring them to focus on only their own duties.

In this scenario, I’m using departments as a metaphor for the layers in a communication system. For things to run smoothly, the staff of each department will have to trust and rely heavily upon the others to do their jobs and competently handle their unique responsibilities.

In your planning sessions, you would probably take notes, recording the entire process to facilitate later discussions about standards of operation that will serve as your business blueprint, or reference model.

Once your business is launched, your department heads, each armed with the part of the blueprint relating to their own department, will need to develop practical methods to implement their assigned tasks. These practical methods, or protocols, will need to be compiled into a standard operating procedures manual and followed closely. Each of the various procedures in your manual will have been included for different reasons and have varying degrees of importance and implementation. If you form a partnership or acquire another company, it will be imperative that its business protocols—its business blueprint—match yours (or at least be compatible with it).

Similarly, software developers can use a reference model to understand computer communication processes and see what types of functions need to be accomplished on any one layer. If they are developing a protocol for a certain layer, all they need to concern themselves with is that specific layer’s functions, not those of any other layer. Another layer and protocol will handle the other functions. The technical term for this idea is binding. The communication processes that are related to each other are bound, or grouped together, at a particular layer.

Internetworking Models

When networks first came into being, computers could typically communicate only with computers from the same manufacturer. For example, companies ran either a complete DECnet solution or an IBM solution—not both together. In the late 1970s, the Open Systems Interconnection (OSI) reference model was created by the International Organization for Standardization (ISO) to break this barrier.

The OSI model was meant to help vendors create interoperable network devices and software in the form of protocols so that different vendor networks could work with each other. Like world peace, it’ll probably never happen completely, but it’s still a great goal.

The OSI model is the primary architectural model for networks. It describes how data and network information are communicated from an application on one computer through the network media to an application on another computer. The OSI reference model breaks this approach into layers.

In the following section, I am going to explain the layered approach and how we can use this approach to help us troubleshoot our internetworks.

Internetworking Basics

Before we explore internetworking models and the specifications of the OSI reference model, you've got to understand the big picture and learn the answer to the key question, Why is it so important to learn Cisco internetworking?

Networks and networking have grown exponentially over the last 15 years—understandably so. They've had to evolve at light speed just to keep up with huge increases in basic mission-critical user needs such as sharing data and printers as well as more advanced demands such as videoconferencing. Unless everyone who needs to share network resources is located in the same office area (an increasingly uncommon situation), the challenge is to connect the sometimes many relevant networks together so all users can share the networks' wealth.

Starting with a look at Figure 1.1, you get a picture of a basic LAN network that’s connected together using a hub. This network is actually one collision domain and one broadcast domain—but no worries if you have no idea what this means because I’m going to talk about both collision and broadcast domains so much throughout this whole chapter, you’ll probably even dream about them!
Okay, about Figure 1.1… How would you say the PC named Bob communicates with the PC named Sally? Well, they’re both on the same LAN connected with a multiport repeater (a hub). So does Bob just send out a data message, “Hey Sally, you there?” or does Bob use Sally’s IP address and put things more like, “Hey 192.168.0.3, are you there?” Hopefully, you picked the IP address option, but even if you did, the news is still bad—both answers are wrong!

Why? Because Bob is actually going to use Sally’s MAC address (known as a hardware address), which is burned right into the network card of Sally’s PC, to get ahold of her.

Great, but how does Bob get Sally’s MAC address since Bob knows only Sally’s name and doesn’t even have her IP address yet? Bob is going to start with name resolution (hostname to IP address resolution), something that’s usually accomplished using Domain Name Service (DNS). And of note, if these two are on the same LAN, Bob can just broadcast to Sally asking her for the information (no DNS needed)—welcome to Microsoft Windows (Vista included)!

Here’s an output from a network analyzer depicting a simple name resolution process from Bob to Sally:
As I already mentioned, since the two hosts are on a local LAN, Windows (Bob) will just broadcast to resolve the name Sally (the destination 192.168.0.255 is a broadcast address). Let's take a look at the rest of the information:

Ethernet II, Src: 192.168.0.2 (00:14:22:be:18:3b), Dst: Broadcast (ff:ff:ff:ff:ff:ff)

What this output shows is that Bob knows his own MAC address and source IP address but not Sally's IP address or MAC address, so Bob sends a broadcast address of all Fs for the MAC address (a Data Link layer broadcast) and an IP LAN broadcast of 192.168.0.255. Again, don't freak—you're going to learn all about broadcasts in Chapter 3, "Subnetting, Variable Length Subnet Masks (VLSMs), and Troubleshooting TCP/IP."

Before the name is resolved, the first thing Bob has to do is broadcast on the LAN to get Sally's MAC address so he can communicate to her PC and resolve her name to an IP address:

Next, check out Sally’s response:
Okay, sweet—Bob now has both Sally's IP address and her MAC address! These are both listed as the source address at this point because this information was sent from Sally back to Bob. So, finally, Bob has all the goods he needs to communicate with Sally. And just so you know, I'm going to tell you all about ARP and show you exactly how Sally's IP address was resolved to a MAC address a little later in Chapter 6, "IP Routing."

By the way, I want you to understand that Sally still had to go through the same resolution processes to communicate back to Bob—sounds crazy, huh? Consider this a welcome to IPv4 and basic networking with Windows (and we haven’t even added a router yet!).
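Incidentally, that 192.168.0.255 destination isn't magic; it's just the broadcast address of Bob and Sally's subnet, and you can check it with Python's standard ipaddress module:

import ipaddress

lan = ipaddress.ip_network("192.168.0.0/24")
print(lan.broadcast_address)    # 192.168.0.255 -- the IP-level LAN broadcast Bob used
# The matching Data Link layer broadcast is ff:ff:ff:ff:ff:ff (all 1s in binary).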

To complicate things further, it's also likely that at some point you'll have to break up one large network into a bunch of smaller ones because user response will have dwindled to a slow crawl as the network grew and grew. And with all that growth, your LAN's traffic congestion has reached epic proportions. The answer to this is breaking up a really big network into a number of smaller ones, something called network segmentation. You do this by using devices like routers, switches, and bridges. Figure 1.2 displays a network that's been segmented with a switch so each network segment connected to the switch is now a separate collision domain. But make note of the fact that this network is still one broadcast domain.

Keep in mind that the hub used in Figure 1.2 just extended the one collision domain from the switch port. Here's a list of some of the things that commonly cause LAN traffic congestion:


  • Too many hosts in a broadcast domain
  • Broadcast storms
  • Multicasting
  • Low bandwidth
  • Adding hubs for connectivity to the network
  • A bunch of ARP or IPX traffic (IPX is a Novell protocol that is like IP, but really, really chatty. Typically not used in today's networks.)
Take another look at Figure 1.2—did you notice that I replaced the main hub from Figure 1.1 with a switch? Whether you did or didn't, the reason I did that is because hubs don't segment a network; they just connect network segments together. So basically, it's an inexpensive way to connect a couple of PCs together, which is great for home use and troubleshooting, but that's about it!

Now routers are used to connect networks together and route packets of data from one network to another. Cisco became the de facto standard of routers because of its high-quality router products, great selection, and fantastic service. Routers, by default, break up a broadcast domain —the set of all devices on a network segment that hear all the broadcasts sent on that segment. Figure 1.3 shows a router in our little network that creates an internetwork and breaks up broadcast domains.


Conversely, switches aren’t used to create internetworks (they do not break up broadcast domains by default); they’re employed to add functionality to a network LAN. The main purpose of a switch is to make a LAN work better—to optimize its performance—providing more bandwidth for the LAN’s users. And switches don’t forward packets to other networks as routers do. Instead, they only “switch” frames from one port to another within the switched network. Okay, you may be thinking, “Wait a minute, what are frames and packets?” I’ll tell you all about them later in this chapter, I promise!

By default, switches break up collision domains. This is an Ethernet term used to describe a network scenario wherein one particular device sends a packet on a network segment, forcing every other device on that same segment to pay attention to it. At the same time, a different device tries to transmit, leading to a collision, after which both devices must retransmit, one at a time. Not very efficient! This situation is typically found in a hub environment where each host segment connects to a hub that represents only one collision domain and only one broadcast domain. By contrast, each and every port on a switch represents its own collision domain.

The term bridging was introduced before routers and switches were implemented, so it's pretty common to hear people referring to bridges as switches. That's because bridges and switches basically do the same thing—break up collision domains on a LAN (in reality, you cannot buy a physical bridge these days, only LAN switches, but they use bridging technologies, so Cisco still calls them multiport bridges).

So what this means is that a switch is basically just a multiple-port bridge with more brainpower, right? Well, pretty much, but there are differences. Switches do provide this function, but they do so with greatly enhanced management ability and features. Plus, most of the time, bridges only had 2 or 4 ports. Yes, you could get your hands on a bridge with up to 16 ports, but that’s nothing compared to the hundreds available on some switches!

Figure 1.4 shows how a network would look with all these internetwork devices in place. Remember that the router will not only break up broadcast domains for every LAN interface, it will break up collision domains as well.
When you looked at Figure 1.4, did you notice that the router is found at center stage and that it connects each physical network together? We have to use this layout because of the older technologies involved: bridges and hubs.

On the top internetwork in Figure 1.4, you’ll notice that a bridge was used to connect the hubs to a router. The bridge breaks up collision domains, but all the hosts connected to both hubs are still crammed into the same broadcast domain. Also, the bridge only created two collision domains, so each device connected to a hub is in the same collision domain as every other device connected to that same hub. This is actually pretty lame, but it’s still better than having one collision domain for all hosts.

Notice something else: The three hubs at the bottom that are connected also connect to the router, creating one collision domain and one broadcast domain. This makes the bridged network look much better indeed!

The network in Figure 1.3 is a pretty cool network. Each host is connected to its own collision domain, and the router has created two broadcast domains. And don’t forget that the router provides connections to WAN services as well! The router uses something called a serial interface for WAN connections, specifically, a V.35 physical interface on a Cisco router.

Breaking up a broadcast domain is important because when a host or server sends a network broadcast, every device on the network must read and process that broadcast—unless you’ve got a router. When the router’s interface receives this broadcast, it can respond by basically saying, “Thanks, but no thanks,” and discard the broadcast without forwarding it on to other networks. Even though routers are known for breaking up broadcast domains by default, it’s important to remember that they break up collision domains as well.

There are two advantages of using routers in your network:

  • They don't forward broadcasts by default.
  • They can filter the network based on layer 3 (Network layer) information (e.g., IP address).

Four router functions in your network can be listed as follows:
  • Packet switching
  • Packet filtering
  • Internetwork communication
  • Path selection
Remember that routers are really switches; they're actually what we call layer 3 switches (we'll talk about layers later in this chapter). Unlike layer 2 switches, which forward or filter frames, routers (layer 3 switches) use logical addressing and provide what is called packet switching. Routers can also provide packet filtering by using access lists, and when routers connect two or more networks together and use logical addressing (IP or IPv6), this is called an internetwork. Last, routers use a routing table (map of the internetwork) to make path selections and to forward packets to remote networks.
The best network connected to the router is the LAN switch network on the left. Why? Because each port on that switch breaks up collision domains. But it’s not all good—all devices are still in the same broadcast domain. Do you remember why this can be a really bad thing? Because all devices must listen to all broadcasts transmitted, that’s why. And if your broadcast domains are too large, the users have less bandwidth and are required to process more broadcasts, and network response time will slow to a level that could cause office riots.

Once we have only switches in our network, things change a lot! Figure 1.5 shows the network that is typically found today.

Okay, here I’ve placed the LAN switches at the center of the network world so the routers are connecting only logical networks together. If I implemented this kind of setup, I’ve created virtual LANs (VLANs), something I’m going to tell you about in Chapter 9, “Virtual LANs (VLANs).” So don’t stress. But it is really important to understand that even though you have a switched network, you still need a router to provide your inter-VLAN communication, or internetworking. Don’t forget that!

Obviously, the best network is one that’s correctly configured to meet the business requirements of the company it serves. LAN switches with routers, correctly placed in the network, are the best network design. This book will help you understand the basics of routers and switches so you can make tight, informed decisions on a case-by-case basis.

Let’s go back to Figure 1.4 again. Looking at the figure, how many collision domains and broadcast domains are in this internetwork? Hopefully, you answered nine collision domains and three broadcast domains! The broadcast domains are definitely the easiest to see because only routers break up broadcast domains by default. And since there are three connections, that gives you three broadcast domains. But do you see the nine collision domains? Just in case that’s a no, I’ll explain. The all-hub network is one collision domain; the bridge network equals three collision domains. Add in the switch network of five collision domains—one for each switch port—and you’ve got a total of nine.

Now, in Figure 1.5, each port on the switch is a separate collision domain and each VLAN is a separate broadcast domain. But you still need a router for routing between VLANs. How many collision domains do you see here? I’m counting 10—remember that connections between the switches are considered a collision domain!







Sunday, 27 April 2014

Configuring Our Wireless Internetwork

Configuring through the SDM is definitely the easiest way to go for wireless configurations, that is, if you’re using any type of security. And of course you should be! Basically, all you need to do to bring up an access point is to just turn it on. But if you do have a wireless card in your router, you can configure it through the SDM as well.

Here’s a screen shot of my R2 router showing that I can configure the wireless card I have installed in slot 3.
There really isn’t too much you can do from within SDM itself, but if I were to click on the Edit Interface/Connection tab and then click Summary, I could enable and disable the interface, as well as click the Edit button, which would allow me to add NAT, access lists, and so on to the interface.

From either the Create Connection screen shown in the first screen of this section, or from the screen that appears when you click the Edit button of the second screen, you can click Launch Wireless Application. This opens a new HTTP screen, called Express Set-up, from which your wireless device is configured.
This is the same screen you would see if you just browsed via HTTP to an access point—one like our 1242AP. On a router that has wireless interfaces, the SDM is used for monitoring, for providing statistics, and for gaining access to the wireless configuration mode, so we don’t have to use the CLI for the harder configurations.

Again, you can only configure some basic information from here. But from the next screen, Wireless Express Security, we can configure the wireless AP in either bridging mode or routing mode—a really cool feature!
 The next screen shows the wireless interfaces and the basic settings.
The following screen shot is the second part of the Wireless Interfaces screen.
Under the Wireless Security heading is really where HTTP management shines! You can configure encryption, add SSIDs, and configure your RADIUS server settings.
Now, if we just HTTP into the 1242AG AP, we’ll see this screen.

This looks amazingly like the APs we’ll find in our ISR routers, and we can configure the same devices and security too.

Cisco Unified Wireless Network Security

The Cisco Unified Wireless Network delivers many innovative Cisco enhancements and supports Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access 2 (WPA2), which provide per-user, per-session access control via mutual authentication, plus data privacy through strong, dynamic encryption. Quality of service (QoS) and mobility are integrated into this solution to enable a rich set of enterprise applications.

The Cisco Unified Wireless Network provides the following:

Secure Connectivity for WLANs Strong dynamic encryption keys that automatically change on a configurable basis to protect the privacy of transmitted data.

1. WPA-TKIP includes encryption enhancements like MIC, per-packet keys via initialization vector hashing, and broadcast key rotation.

2. WPA2-AES is the “gold standard” for data encryption.

Trust and Identity for WLANs A robust WLAN access control that helps ensure that legitimate clients associate only with trusted access points rather than rogue, or unauthorized, access points. It’s provided per user, per session via mutual authentication using IEEE 802.1X, a variety of Extensible Authentication Protocol (EAP) types, Remote Authentication Dial-In User Service (RADIUS), and an Authentication, Authorization, and Accounting (AAA) server. It supports the following:

1. The broadest range of 802.1X authentication types, client devices, and client operating systems on the market

2. RADIUS accounting records for all authentication attempts

Threat Defense for WLANs Detection of unauthorized access, network attacks, and rogue access points via an Intrusion Prevention System (IPS), WLAN NAC, and advanced location services. Cisco’s IPS allows IT managers to continually scan the RF environment, detect rogue access points and unauthorized events, simultaneously track thousands of devices, and mitigate network attacks. NAC has been specifically designed to help ensure that all wired and wireless endpoint devices like PCs, laptops, servers, and PDAs that are trying to access network resources are adequately protected from security threats. NAC allows organizations to analyze and control all devices coming into the network. Okay—let’s configure some wireless devices now!

WPA or WPA2 Pre-Shared Key

Okay, now we’re getting somewhere. Although this is another form of basic security that’s really just an add-on to the specifications, WPA or WPA2 Pre-Shared Key (PSK) is a better form of wireless security than any other basic wireless security method mentioned so far. I did say basic.

The PSK verifies users via a password or identifying code (also called a passphrase) on both the client machine and the access point. A client only gains access to the network if its password matches the access point’s password. The PSK also provides keying material that TKIP or AES uses to generate an encryption key for each packet of transmitted data. While more secure than static WEP, PSK still has a lot in common with static WEP in that the PSK is stored on the client station and can be compromised if the client station is lost or stolen, even though finding this key isn’t all that easy to do. It’s definitely recommended to use a strong PSK passphrase that includes a mixture of letters, numbers, and nonalphanumeric characters.
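In case you're wondering what "keying material" actually means here, WPA/WPA2-Personal derives a 256-bit Pairwise Master Key from the passphrase and the SSID using PBKDF2-HMAC-SHA1 with 4,096 iterations (per IEEE 802.11i). Here's a short Python sketch of that derivation; the passphrase and SSID are made up for the example.

```python
# Derive the WPA/WPA2-Personal Pairwise Master Key (PMK) from a passphrase and
# an SSID: PBKDF2-HMAC-SHA1, 4096 iterations, 32 bytes (256 bits).
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

pmk = wpa_psk("C0rrectH0rse!Battery", "CorpWLAN")  # made-up passphrase and SSID
print(pmk.hex())  # the PMK from which TKIP or AES-CCMP per-session keys are later derived
```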

Wi-Fi Protected Access (WPA) is a standard developed in 2003 by the Wi-Fi Alliance, formerly known as WECA. WPA provides a standard for authentication and encryption of WLANs that’s intended to solve known security problems existing up to and including the year 2003. This takes into account the well-publicized AirSnort and man-in-the-middle WLAN attacks.

WPA is a step toward the IEEE 802.11i standard and uses many of the same components, with the exception of encryption—802.11i uses AES encryption. WPA’s mechanisms are designed to be implementable by current hardware vendors, meaning that users should be able to implement WPA on their systems with only a firmware/software modification.

Note: The IEEE 802.11i standard has since been ratified, and the Wi-Fi Alliance certification based on it is termed WPA version 2 (WPA2).

SSIDs, WEP, and MAC Address Authentication

What the original designers of 802.11 did to create basic security was include the use of Service Set Identifiers (SSIDs), open or shared-key authentication, static Wired Equivalent Privacy (WEP), and optional Media Access Control (MAC) authentication. Sounds like a lot, but none of these really offers any type of serious security solution; at best they’re close to adequate for a common home network. But we’ll go over them anyway…

SSID is a common network name for the devices in a WLAN system that create the wireless LAN. An SSID prevents access by any client device that doesn’t have the SSID. The thing is, by default, an access point broadcasts its SSID in its beacon many times a second. And even if SSID broadcasting is turned off, a bad guy can discover the SSID by monitoring the network and just waiting for a client response to the access point. Why? Because, believe it or not, that information, as regulated in the original 802.11 specifications, must be sent in the clear—how secure!

Two types of authentication were specified by the IEEE 802.11 committee: open and shared-key authentication. Open authentication involves little more than supplying the correct SSID—but it’s the most common method in use today. With shared-key authentication, the access point sends the client device a challenge-text packet that the client must then encrypt with the correct Wired Equivalent Privacy (WEP) key and return to the access point. Without the correct key, authentication will fail and the client won’t be allowed to associate with the access point. But shared-key authentication is still not considered secure because all an intruder has to do to get around it is capture both the clear-text challenge and the same challenge encrypted with the WEP key, then recover the keystream used to encrypt it. Surprise—shared key isn’t used in today’s WLANs because of this clear-text challenge.
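Here's a little Python sketch of why that's so easy: WEP just XORs an RC4 keystream with the plaintext, so XORing the captured challenge with the captured response hands the attacker the keystream for that IV. The bytes below are random stand-ins, not real RC4 output.

```python
# Why shared-key authentication leaks: ciphertext = plaintext XOR keystream,
# so (challenge XOR response) = keystream. Values are invented for illustration.
import os

challenge = os.urandom(128)   # clear-text challenge the AP sends
keystream = os.urandom(128)   # stands in for RC4(IV || WEP key)
response  = bytes(c ^ k for c, k in zip(challenge, keystream))  # what the client returns

# The eavesdropper captures both frames and simply XORs them:
recovered = bytes(c ^ r for c, r in zip(challenge, response))
assert recovered == keystream  # keystream recovered without ever knowing the WEP key
```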

With open authentication, even if a client can complete authentication and associate with an access point, the use of WEP prevents the client from sending and receiving data from the access point unless the client has the correct WEP key. A WEP key is composed of either 40 or 128 bits and, in its basic form, is usually statically defined by the network administrator on the access point and all clients that communicate with that access point. When static WEP keys are used, a network administrator must perform the time-consuming task of entering the same keys on every device in the WLAN. Obviously, we now have fixes for this because this would be administratively impossible in today’s huge corporate wireless networks!

Last, client MAC addresses can be statically typed into each access point, and any client that shows up without its MAC address in the filter table will be denied access. Sounds good, but of course all MAC layer information must be sent in the clear—anyone equipped with a free wireless sniffer can just read the client packets sent to the access point and spoof their MAC address.
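Just to drive the point home, here's a toy Python sketch of a MAC filter; the addresses are invented. Notice that the filter has no way to tell a sniffed-and-spoofed address from the real thing.

```python
# A MAC filter is just a lookup table, and the client's MAC address travels in
# the clear, so a sniffer can copy an allowed address and reuse it.
allowed_macs = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # made-up addresses

def ap_admits(client_mac: str) -> bool:
    return client_mac.lower() in allowed_macs

print(ap_admits("aa:bb:cc:dd:ee:ff"))  # False: attacker's real NIC is rejected
sniffed = "00:1a:2b:3c:4d:5e"          # read straight off a legitimate client's frames
print(ap_admits(sniffed))              # True: the spoofed address sails right in
```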

WEP can actually work if administered correctly. But basic static WEP keys are no longer a viable option in today’s corporate networks without some of the proprietary fixes that run on top of it. So let’s talk about some of these now.

Open Access

All Wi-Fi Certified wireless LAN products are shipped in “open-access” mode, with their security features turned off. While open access or no security may be appropriate and acceptable for public hot spots such as coffee shops, college campuses, and maybe airports, it’s definitely not an option for an enterprise organization, and likely not even adequate for your private home network.

Security needs to be enabled on wireless devices during their installation in enterprise environments. It may come as quite a shock, but some companies actually don’t enable any WLAN security features. Obviously, the companies that do this are exposing their networks to tremendous risk!

The reason that the products are shipped with open access is so that any person who knows absolutely nothing about computers can just buy an access point, plug it into their cable or DSL modem, and voilĂ —they’re up and running. It’s marketing, plain and simple, and simplicity sells.

Wireless Security

By default, wireless security is nonexistent on access points and clients. The original 802.11 committee just didn’t imagine that wireless hosts would one day outnumber bounded media hosts, but that’s truly where we’re headed. Also, and unfortunately, just like with the IPv4 routed protocol, engineers and scientists didn’t add security standards that are robust enough to work in a corporate environment.

So we’re left with proprietary solution add-ons to aid us in our quest to create a secure wireless network. And no—I’m not just sitting here bashing the standards committees because the security problems we’re experiencing were also created by the U.S. government because of export issues with its own security standards. Our world is a complicated place, so it follows that our security solutions are going to be as well.

A good place to start is by discussing the standard basic security that was added into the original 802.11 standards and why those standards are way too flimsy and incomplete to enable us to create a secure wireless network relevant to today’s challenges.

MESH and LWAPP

As more vendors migrate to a mesh hierarchical design, and as larger networks are built using lightweight access points, we really need a standardized protocol that governs how lightweight access points communicate with WLAN systems. This is exactly the role filled by one of the Internet Engineering Task Force’s (IETF’s) latest draft specification, Lightweight Access Point Protocol (LWAPP).

With LWAPP, large multi-vendor wireless networks can be deployed with maximum capabilities and increased flexibility. Well…okay, this is mostly true. No one, and I do mean no one, has actually deployed a Cisco and Motorola network within the same company and is sitting back smugly saying, “Dude, this is really cool!” They’re saying something loud for sure, but it isn’t that! Cisco is Cisco and Motorola is well, not Cisco, and even though they supposedly run the same IETF protocols, they just don’t seem to see the standards exactly the same way. Basically, they don’t play well with each other.

So, let’s say we’re using only Cisco. (Hey, we already have an unlimited budget here, so why not put in all Cisco too, I mean, this is a “Cisco” book, right?)

Okay—so Cisco’s mesh networking infrastructure is decentralized and comparatively inexpensive for all the nice things it provides because each node only needs to transmit as far as the next node. Nodes act as repeaters to transmit data from nearby nodes to peers that are too far away for a manageable cabled connection, resulting in a network that can span a really large distance, especially over rough or difficult terrain. Figure 1 shows a large meshed environment using Cisco 1520 APs to “umbrella” an area with wireless connectivity:

Plus, mesh networks also happen to be extremely reliable—since each node can potentially be connected to several other nodes, if one of them drops out of the network because of hardware failure or something, its neighbors simply find another route. So you get extra capacity and fault tolerance by simply adding more nodes.

FIGURE 1 Typical large meshed outdoor environment

Mesh is a network topology in which devices are connected with many redundant connections between nodes.

Wireless mesh connections between AP nodes are formed with a radio, providing many possible paths from a single node to other nodes. Paths through the mesh network can change in response to traffic loads, radio conditions, or traffic prioritization.

Cisco LWAPP-enabled mesh access points are configured, monitored, and operated from and through any Cisco Wireless LAN Controller deployed in the Cisco Mesh Networking Solution—and they must go through a controller, which is why having redundant controllers is an absolute necessity.
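Cisco lets you give a lightweight AP a primary, secondary, and tertiary controller to try in order, and here's a toy Python sketch of that fallback idea. The controller names and reachability flags are invented for the example; real APs do this discovery and join process in firmware.

```python
# A toy illustration of why redundant controllers matter for lightweight APs:
# the AP joins the first configured controller it can actually reach.
controllers = [
    {"name": "WLC-primary",   "reachable": False},  # say this one just went down
    {"name": "WLC-secondary", "reachable": True},
    {"name": "WLC-tertiary",  "reachable": True},
]

def join_controller(candidates):
    for wlc in candidates:
        if wlc["reachable"]:
            return wlc["name"]
    return None  # no reachable controller means no operational lightweight AP

print(join_controller(controllers))  # WLC-secondary
```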

Let’s define a couple of terms used in mesh networks:

Root Access Points (RAPs) These access points are connected to the wired network and serve as the “root” or “gateway” to the wired network. RAPs have a wired connection back to a Cisco Wireless LAN Controller. They use the backhaul wireless interface to communicate with neighboring Mesh APs.

Mesh Access Points (MAPs) The Mesh APs are remote APs that are typically located on rooftops or towers and can connect up to 32 MAPs over a 5GHz backhaul. During bootup, an access point will try to become a RAP if it’s connected to the wired network. Conversely, if a RAP loses its wired network connection, it will attempt to become a MAP and will search for a RAP.
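If it helps, here's a toy Python sketch of that bootup decision. The class, fields, and messages are my own invention for illustration; real mesh APs handle this in firmware.

```python
# Sketch of the role election described above: a wired connection makes the AP
# a RAP; otherwise it becomes a MAP and scans the backhaul for a path to a RAP.
from dataclasses import dataclass

@dataclass
class MeshAP:
    name: str
    wired_link_up: bool
    role: str = "unknown"

    def boot(self) -> None:
        if self.wired_link_up:
            self.role = "RAP"   # root/gateway to the wired network
        else:
            self.role = "MAP"   # remote AP; must find a backhaul path to a RAP
            self.search_for_rap()

    def search_for_rap(self) -> None:
        print(f"{self.name}: scanning the 5 GHz backhaul for a neighbor with a route to a RAP")

for ap in (MeshAP("AP-roof-1", wired_link_up=True), MeshAP("AP-pole-7", wired_link_up=False)):
    ap.boot()
    print(ap.name, "->", ap.role)
```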

A typical mesh network would include the devices shown in Figure 2.

In Figure 2, you can see that there’s one RAP connected to the infrastructure, and the MAPs connect to each other as well as to the controller through the RAP.

But we’re not quite done with this yet—I want to explain one more mesh term before we get into wireless security: AWPP.

FIGURE 2 Typical devices found in a Cisco mesh network


AWPP

Each AP runs the Adaptive Wireless Path Protocol (AWPP)—a new protocol designed from the ground up by Cisco specifically for the wireless environment. This protocol allows the APs to communicate with each other to determine the best path back to the wired network via the RAP. Once the optimal path is established, AWPP continues to run in the background to establish alternative routes back to the RAP just in case the topology changes or conditions cause the link strength to weaken.

This protocol takes into consideration things like interference and characteristics of the specific radio so that the mesh can be self-configuring and self-healing. AWPP actually has the ability to consider all relevant elements of the wireless environment so that the mesh network’s functionality isn’t disrupted and can provide consistent coverage.

This is pretty powerful considering how truly dynamic a wireless environment is. When there’s interference, or when APs are added or removed, the Adaptive Wireless Path Protocol reconfigures the path back to the rooftop AP (RAP). And in response to the highly dynamic wireless environment, AWPP applies a “stickiness” factor when evaluating routes so that a transient event, such as a large truck passing through the mesh and causing a temporary disruption, doesn’t cause the mesh to change unnecessarily.
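AWPP's internals are Cisco-proprietary, so I can't show you the real thing, but here's a rough Python sketch of the "stickiness" idea with invented metric numbers: only switch parents when a candidate path back to the RAP is clearly better than the one you're already using.

```python
# Hysteresis-based parent selection, loosely illustrating AWPP's "stickiness":
# a momentary dip (that passing truck) shouldn't churn the mesh.
STICKINESS_MARGIN = 10  # a candidate must beat the current path by at least this much

def choose_parent(current_parent: str, candidates: dict[str, int]) -> str:
    """candidates maps each neighbor to its path metric back to the RAP (lower is better)."""
    best = min(candidates, key=candidates.get)
    # Only abandon the current parent if the best candidate is clearly better.
    if candidates[best] + STICKINESS_MARGIN < candidates[current_parent]:
        return best
    return current_parent

print(choose_parent("MAP-2", {"MAP-2": 48, "MAP-5": 44}))  # "MAP-2": brief blip, stay put
print(choose_parent("MAP-2", {"MAP-2": 90, "MAP-5": 25}))  # "MAP-5": real degradation, reconverge
```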