Friday, April 6, 2012

Cisco WLAN design

      With most WLAN designs, security is the first thing folks worry about. Fortunately, WLAN technology contains robust security features with viable authentication and encryption mechanisms. A security solution can be designed in a variety of ways, however. This tip provides some best practices for designing effective security architectures.


       We will cover specific design aspects of the Cisco WLAN solution utilizing controller-based architectures. These design best practices have been developed over the course of multiple design initiatives with the Cisco solution, primarily from lessons learned during real deployments. Most of the information is specific to the Cisco solution, but some of the lessons learned and best practices relate to the process behind deploying the designs.

User considerations
       In most organizations, the user community dictates the security architecture. It is not a one-size-fits-all approach. The recommended approach is to identify the user communities that will utilize the WLAN system and design the security accordingly.

As a foundation, the following user communities are a good place to start (a controller configuration sketch follows the list):
  • Employees/visiting employees -- require access to corporate applications and need those applications to be secure
  • Contractors -- on site temporarily, but for an extended period of time; require access to some corporate applications (other than just Internet)
  • Guests -- need access to Internet only
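
As a rough illustration of mapping these communities onto a controller-based design, the AireOS-style CLI sketch below builds three WLANs: 802.1X/WPA2 for employees, 802.1X mapped to a restricted dynamic interface for contractors, and an open SSID with web authentication for guests. The WLAN IDs, profile names, SSIDs, interface names, RADIUS server address, and shared secret are all placeholders, and exact syntax varies by controller release, so treat this as a sketch rather than a verified configuration:

config radius auth add 1 10.10.10.10 1812 ascii <shared-secret>

config wlan create 1 Corp-Employee Corp-Employee
config wlan security wpa akm 802.1x enable 1
config wlan radius_server auth add 1 1
config wlan enable 1

config wlan create 2 Contractor Contractor
config wlan interface 2 contractor-vlan
config wlan security wpa akm 802.1x enable 2
config wlan radius_server auth add 2 1
config wlan enable 2

config wlan create 3 Guest Guest
config wlan interface 3 guest-vlan
config wlan security wpa disable 3
config wlan security web-auth enable 3
config wlan enable 3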


Tuesday, April 3, 2012

Multi-Protocol Label Switching (MPLS)

   This article identifies Multi-Protocol Label Switching (MPLS) technology components, describes their functionality, and illustrates the value they provide in Service Provider environments.

       MPLS was initially targeted at Service Provider customers; however, Enterprises have begun to show interest in deploying this technology. This document can also apply to large Enterprise customers whose networks resemble Service Provider networks in the following areas:
  • The size of the network
  • The offering of "internal services" to different departments within the Enterprise
   MPLS complements IP technology. It is designed to leverage the intelligence associated with IP routing and the switching paradigm associated with Asynchronous Transfer Mode (ATM). MPLS consists of a Control Plane and a Forwarding Plane. The Control Plane builds what is called a "Forwarding Table," while the Forwarding Plane forwards packets to the appropriate interface based on that table.
   MPLS uses labels to encapsulate IP packets. The forwarding table maps each label value to the outgoing interface for the corresponding network prefix. Cisco IOS Software supports two signaling mechanisms to distribute labels: Label Distribution Protocol (LDP) and Resource Reservation Protocol with Traffic Engineering extensions (RSVP-TE).
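
As a minimal illustration (the interface name and addressing are assumptions), enabling MPLS forwarding and LDP label distribution on an IOS P or PE router looks roughly like this:

mpls label protocol ldp
mpls ldp router-id Loopback0
!
interface GigabitEthernet0/0
 description Core-facing link
 ip address 10.0.12.1 255.255.255.252
 !Enable label switching and LDP on this interface
 mpls ip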

MPLS comprises the following major components:
  1. MPLS Virtual Private Networks (VPNs)—provide Layer 3 and Layer 2 connectivity over MPLS-enabled IP networks. VPNs include two major components:
       1. Layer 3 VPNs—based on the Border Gateway Protocol (BGP)
       2. Layer 2 VPNs—Any Transport over MPLS (AToM)
  2. MPLS Traffic Engineering (TE)—provides increased utilization of network bandwidth inventory and supports protection services
  3. MPLS Quality of Service (QoS)—builds upon existing IP QoS mechanisms and provides preferential treatment to certain types of traffic, based on a QoS attribute (the MPLS EXP bits); a marking sketch follows this list.
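As a sketch of the QoS component (the class name, DSCP match, and EXP value are illustrative assumptions, and the exact "set mpls experimental" keyword varies by IOS release), an ingress PE can copy an IP marking into the MPLS EXP bits at label imposition using the standard MQC constructs:

class-map match-all VOICE
 match ip dscp ef
!
policy-map PE-INGRESS-MARKING
 class VOICE
  set mpls experimental imposition 5
!
interface GigabitEthernet0/1
 description Customer-facing link
 service-policy input PE-INGRESS-MARKING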
MPLS VPNs (Layer 3 VPNs)
   Layer 3 VPNs, or BGP VPNs, have been the most widely deployed MPLS technology. They use Virtual Routing and Forwarding (VRF) instances to create a separate routing table for each subscriber, and use BGP to establish peering relationships and signal the VPN-associated labels between the corresponding Provider Edge (PE) routers. This results in a highly scalable implementation, because core (P) routers carry no information about the VPNs.

   BGP VPNs are useful when subscribers want Layer 3 connectivity and would prefer to offload their routing overhead to a Service Provider. Because the service operates at Layer 3, a variety of Layer 2 interfaces can be used on either side of a VPN. For example, Site A can use an Ethernet interface while Site B uses an ATM interface, yet Sites A and B are part of a single VPN.
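
A minimal PE-side sketch (the VRF name, route distinguisher, route targets, AS number, and neighbor addresses are all assumptions) shows the basic building blocks: a per-subscriber VRF, an attachment interface placed into it, and an MP-BGP VPNv4 session to the other PE:

ip vrf CUSTOMER-A
 rd 65000:10
 route-target export 65000:10
 route-target import 65000:10
!
interface GigabitEthernet0/1
 description CE-facing link for CUSTOMER-A
 ip vrf forwarding CUSTOMER-A
 ip address 192.168.1.1 255.255.255.252
!
router bgp 65000
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.2 update-source Loopback0
 !
 address-family vpnv4
  neighbor 10.255.0.2 activate
  neighbor 10.255.0.2 send-community extended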

It is relatively simple to implement multiple topologies with route filtering, including Hub and Spoke or Full Mesh (a route-target sketch follows the list):
  • Hub and Spoke—the central site is configured to "learn" all the routes from the remote sites, while the remote sites are restricted to "learn" routes only from the central site.
  • Full Mesh—every site is able to "learn" or import routes from every other site.
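
The topology is controlled purely by which route targets each VRF imports and exports. A hedged sketch of the hub-and-spoke case (the RT values and VRF names are examples):

! Hub PE: exports routes tagged as "hub" and imports everything tagged "spoke"
ip vrf CUST-HUB
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:200
!
! Spoke PE: imports only the hub's routes and tags its own routes as "spoke"
ip vrf CUST-SPOKE
 rd 65000:101
 route-target export 65000:200
 route-target import 65000:100

For a full mesh, every VRF would simply import and export the same route target.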
    Layer 3 VPNs have been deployed in networks that have as many as seven hundred PE routers. Service Providers are currently providing up to five hundred VPNs, with each VPN containing as many as one thousand sites. A wide variety of routing protocols are available to deploy on the subscriber access link (i.e., the CE-to-PE link). These include Static Routes, BGP, RIP, and Open Shortest Path First (OSPF). Most VPNs have been deployed with Static Routes, followed by BGP routing.
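
PE-CE routing is configured per VRF. A brief sketch of the two most common options (the addresses, AS numbers, and VRF name carry over from the earlier hypothetical example):

! Option 1: a static route in the VRF, redistributed into MP-BGP
ip route vrf CUSTOMER-A 172.16.0.0 255.255.0.0 192.168.1.2
router bgp 65000
 address-family ipv4 vrf CUSTOMER-A
  redistribute static
!
! Option 2: eBGP with the CE inside the VRF address-family
router bgp 65000
 address-family ipv4 vrf CUSTOMER-A
  neighbor 192.168.1.2 remote-as 65010
  neighbor 192.168.1.2 activate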

   Layer 3 VPNs offer advanced capabilities, including Inter-AS and Carrier Supporting Carrier (CSC). These provide hierarchical VPNs, allowing a Service Provider to offer connectivity across multiple administrative networks. Initial deployments of such functionality are becoming more widespread.

Sunday, April 1, 2012

Cisco Catalyst 6500 Series Supervisor Engine 720

The Cisco® Catalyst® 6500 Series Supervisor Engine 720 is a family of supervisor engines designed to deliver scalable performance and a rich set of IP features in hardware. Its hardware-based feature set enables applications such as traditional IP forwarding, Layer 2 and Layer 3 Multiprotocol Label Switching (MPLS) VPNs, and Ethernet over MPLS (EoMPLS) with quality-of-service (QoS) and security features. The Supervisor Engine 720 integrates a high-performance 720-Gbps crossbar switch fabric with a forwarding engine in a single module, delivering 40 Gbps of switching capacity per slot (enabling 4-port 10 Gigabit Ethernet and 48-port 10/100/1000 line cards). With hardware-enabled forwarding for IPv4, IPv6, and MPLS, the system is capable of 400 Mpps for IPv4 and 200 Mpps for IPv6 traffic with features enabled, and for MPLS supports 1024 VRFs, each populated with up to 700 routes.



NIC Teaming and Cisco Switch Config

Server Configuration
       Server Access port configuration 
Server access ports typically fall into three categories:
  1. Normal servers, which require simple gigabit connectivity with fail-on-fault NICs (what HP calls Network Fault Tolerance – NFT)
  2. High-bandwidth servers, which require two-gigabit throughput using link aggregation
  3. VMware servers, which require special configuration 
Some initial thoughts
       Nowadays auto-negotiation of speed and duplex works well with server gigabit interfaces, so do not try to set the speed or duplex manually. One reason is that auto-negotiation enables the cable tester (TDR) built into some Gigabit Ethernet modules to function.
For example:
switch#test cable-diagnostics tdr interface gi1/2/1
TDR test started on interface Gi1/2/1
A TDR test can take a few seconds to run on an interface
Use 'show cable-diagnostics tdr' to read the TDR results.
switch#show cable-diagnostics tdr interface gi1/2/1
TDR test last run on: August 06 13:58:00
Interface Speed Pair Cable length        Distance to fault   Channel Pair status
--------- ----- ---- ------------------- ------------------- ------- -----------
Gi1/2/1   1000  1-2  0    +/- 6  m       N/A                 Pair B  Terminated
                3-6  0    +/- 6  m       N/A                 Pair A  Terminated
                4-5  0    +/- 6  m       N/A                 Pair D  Terminated
                7-8  0    +/- 6  m       N/A                 Pair C  Terminated


       If a server comes up at 100 Mbps when both the server and switch are set to auto/auto, it is likely that there is a cable fault (Gigabit Ethernet requires all four pairs to be terminated, whereas 100BASE-TX does not).
       Access ports should also be set to spanning-tree portfast as per established practice.
       Port-security is also worth mentioning, as it is NOT compatible with dual-homed servers using HP’s network teaming software. Any cable fault on NIC 1 results in the MAC address shifting over to NIC 2’s port; the switch sees this as a security violation, blocks traffic, and generates a port-security violation syslog message.

Normal Servers

Switch Configuration

interface <interface name>
 switchport
 !Set an access VLAN
 switchport access vlan <###>
 !Force access mode
 switchport mode access
 !Set an acceptable broadcast storm level
 storm-control broadcast level 0.10
 !Port-security is not compatible with dual-homed servers
 no switchport port-security
 no switchport port-security maximum
 no switchport port-security violation restrict
 spanning-tree portfast
end

Server configuration
       The default configuration on HP servers for a teaming interface is Type: Automatic and Transmit: Automatic. This configuration will, on non-etherchannel switch ports, default to Transmit Load Balancing with Fault Tolerance (TLB). One NIC will transmit and receive traffic whilst the other will only transmit.
       From a network point of view this makes troubleshooting difficult, as transmit traffic is spread over two NICs with two MAC addresses, while receive traffic is directed to just one NIC, depending on which NIC responds to ARP requests. 
Our PREFERRED configuration is to use either:
  • NFT Teaming
  • NFT Teaming with preference
       Two servers that are known to exchange a lot of traffic with each other but do not use Etherchannel should use NFT with preference and ensure that the active NICs on both servers go to the same switch.

High Bandwidth Servers

Switch Configuration

Note: most settings MUST match between all ports in the same Etherchannel group (e.g., storm control, access mode, and VLAN).
interface <interface name>
 switchport
 !Set an access VLAN
 switchport access vlan <###>
 !Force access mode
 switchport mode access
 !Set an acceptable broadcast storm level
 storm-control broadcast level 0.10
 !Port-security is not compatible with channelling
 no switchport port-security
 no switchport port-security maximum
 no switchport port-security violation restrict
 !Force LACP & enable as passive mode
 channel-protocol lacp
 channel-group <#> mode passive
 spanning-tree portfast
 !Force flowcontrol off to stop any channelling issues
 !Intel cards default to no flow control; HP on-board default to on
 flowcontrol receive off
 flowcontrol send off
end
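
The channel-group command creates a corresponding logical Port-channel interface. As a minimal sketch (mirroring the access settings onto the logical interface is an assumption on our part, not a quoted vendor requirement):

interface Port-channel<#>
 switchport
 !Match the member ports' access VLAN and mode
 switchport access vlan <###>
 switchport mode access
end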


Sample output showing two links being aggregated:
switch#show int po100 etherchannel
Port-channel100   (Primary aggregator)

Age of the Port-channel   = 1d:01h:38m:34s
Logical slot/port   = 14/4          Number of ports = 2
HotStandBy port = null
Port state          = Port-channel Ag-Inuse
Protocol            =   LACP
Fast-switchover     = disabled

Ports in the Port-channel:

Index Load Port    EC state    No of bits
------+------+------+------------------+-----------
  1     FF   Gi1/2/1  Passive    8
  0     FF   Gi2/2/1  Passive    8

Time since last port bundled:    0d:00h:00m:05s    Gi1/2/1
Time since last port Un-bundled: 0d:00h:00m:33s    Gi1/2/1
Server configuration
       The default configuration on HP servers for a teaming interface is Type: Automatic and Transmit: Automatic. This configuration will attempt to negotiate an Etherchannel using LACP and, if that fails, will fall back to Transmit Load Balancing (TLB). As long as the port-channel and its corresponding physical interfaces are configured correctly, the default configuration seems to work well. Although TLB is not our preferred fallback connection type, there does not appear to be a way to enable channelling with NFT fallback.

(Screenshots: default teaming configuration; successful LACP negotiation; unsuccessful LACP negotiation.)
Other features such as duplex/speed and flowcontrol are best left at defaults.

Jumbo Frames
       Jumbo frames may improve the performance of some applications, but at the time of writing no testing has been done to verify whether they introduce problems, either locally or for remote users behind a 1500-byte-MTU WAN connection, or whether they really improve performance as much as some would believe. http://www.nanog.org/mtg-0802/scholl.html may be useful reading.
       Jumbo frames are also incompatible with HP’s TCP Offload Engine (TOE) NICs, so enabling them on those NICs may actually reduce throughput. More testing and investigation will be required before coming to any firm conclusions or recommendations, so at the current time our recommendation for host access ports is to use a standard 1500-byte MTU / 1518-byte frame size.
       However, since every trunk link on a LAN has to support the highest MTU in use, it is worth building the LAN’s trunk links to support a high MTU even if the access ports stay at the standard size. This leaves the option open for later adoption at the host layer and allows easy adoption of devices that require a high MTU, such as Fibre Channel over IP (FCIP).
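
A brief sketch of what that might look like on a Catalyst 6500 (the interface name and the 9216-byte value are assumptions; other platforms use per-interface mtu or system mtu commands instead):

!Global jumbo MTU setting on the Catalyst 6500
system jumbomtu 9216
!
!Per-interface MTU on a trunk uplink
interface TenGigabitEthernet1/1
 description Trunk uplink to distribution
 mtu 9216
end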