Category Archives: Cisco

Cisco – Event Management to enable backup interface

I work with a lot of sites that are multi-homed or have a backup connection that, even though it is a backup, still needs to be used at the same time for load balancing. Recently I had a site with a very slow MPLS link that was being demoted to backup only, with a new SD-WAN link installed as primary. The MPLS link will go away once we are sure the SD-WAN option works well.

The only time we want this link to become active is if SD-WAN fails. My first thought was no problem: just use a routing protocol and a static route. The routing protocol would be our preferred path, and we could raise the administrative distance of the default route to make it a backup. One problem in this scenario: the MPLS uses dynamic routing (EIGRP), and many other locations still use the MPLS as their primary or secondary connection, so we cannot even have this link administratively up or the MPLS will advertise the network.

So, to recap, what I need is a preferred route to our SD-WAN; if SD-WAN fails, bring up the backup connection (admin state) and move traffic to it; then, of course, automatically put everything back if SD-WAN comes back online. No problem! Cisco's Embedded Event Manager to the rescue. As Cisco describes it, Embedded Event Manager (EEM) is a distributed and customized approach to event detection and recovery offered directly in a Cisco IOS device. EEM offers the ability to monitor events and take informational, corrective, or any desired EEM action when the monitored events occur or when a threshold is reached. An EEM policy is an entity that defines an event and the actions to be taken when that event occurs. When creating EEM scripts you have two options, Tcl or CLI; in this case I am using just CLI.

Using EEM on the 3850 core I was able to detect if the default route learned via OSPF was removed from the routing table. If that happened, EEM would run "no shut" on my MPLS uplink and EIGRP would take over. If the default route was later learned through OSPF again once things were corrected, EEM would run "shut" on the MPLS uplink. This worked extremely well in combination with link detection on the Fortigate and OSPF default route redistribution.

The Fortigate is my SD-WAN device and the default gateway for the network. I am using link detection to test HTTP access to Google. If my WAN interface cannot get a response from google.com within my set window (5 attempts, with 5 seconds between each attempt), the Fortigate removes the default route from its routing table. When this happens the route is withdrawn from OSPF redistribution and disappears from the Cisco core. EEM sees this event and runs my list of commands. Below is the layout and the code to get this going. Interface Gi1/0/24 is my MPLS uplink.
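On the Fortigate side this is a link health check (link-monitor). Roughly what that looks like in FortiOS is sketched below; the monitor name and WAN interface name are assumptions, some versions only accept an IP rather than a hostname for the server, and the exact options vary by FortiOS version:

config system link-monitor
    edit "SDWAN-CHECK"
        set srcintf "wan1"
        set server "google.com"
        set protocol http
        set interval 5
        set failtime 5
        set update-static-route enable
    next
end

With update-static-route enabled, a failed check pulls the static default route, which in turn drops it from the OSPF redistribution that the core is listening for.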

[Image: network layout with SD-WAN primary and MPLS backup]

First I made sure the MPLS interface was shut down, OSPF was up, and I was receiving the redistributed default route from the Fortigate.

config t

event manager applet MPLS-UP
event routing network 0.0.0.0/0 type remove protocol ospf
action 1 cli command "enable"
action 2 cli command "config t"
action 3 cli command "int gig 1/0/24"
action 4 cli command "no shut"
action 5 cli command "exit"

event manager applet MPLS-DOWN
event routing network 0.0.0.0/0 type add protocol ospf
action 1 cli command "enable"
action 2 cli command "config t"
action 3 cli command "int gig 1/0/24"
action 4 cli command "shut"
action 5 cli command "exit"

You can check status and history of events by using the show event manager commands.
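For example, these list the registered applets and the recent event history:

show event manager policy registered
show event manager history events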

[Image: show event manager output]

During a failure of my ISP, everything worked great. The default route was removed from OSPF, which caused an event that EEM matched; EEM then enabled the MPLS interface, and all routes, including the default, were learned via EIGRP over the MPLS. When internet was restored, the MPLS interface was shut down and all traffic started flowing over SD-WAN again.

 

Cisco – Flex links

Cisco Flex Links give you a Layer 2 redundant connection, or a pair of connections configured as an EtherChannel, for a primary link. This is an active/passive setup: if the primary connection's link status goes down, the Flex Link becomes active, and when the primary comes back it goes into standby and does not take back the primary role unless told to do so with preemption commands. STP is disabled automatically on Flex Links, so there is no need to bother with PortFast.

Flex Links have been in the code for a long while, and I am not sure how I missed them. I have needed this functionality before when connecting to a backup firewall or some other device that Spanning Tree would have issues with. This option gives you a simple way to configure a backup link that only becomes active if something happens to the primary link. In the scenario below I have a Fortigate firewall and a Cisco 3560 switch. Port Fa0/1 is my primary and goes to port 1 of the FGT, and port Fa0/12 is the backup for this link.

Layout

[Image: Fortigate and 3560 Flex Link layout]

Switch config:

interface FastEthernet0/1
description "Connected to Fortigate"
switchport trunk encapsulation dot1q
switchport mode trunk
switchport backup interface Fa0/12
spanning-tree portfast

interface FastEthernet0/12
description "Connected to Fortigate - Backup"
switchport trunk encapsulation dot1q
switchport mode trunk
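For what it's worth, if you do want the primary to take back over automatically when it comes back up, Flex Links preemption is configured on the primary interface; a quick sketch (the delay value is just an example):

interface FastEthernet0/1
switchport backup interface Fa0/12 preemption mode forced
switchport backup interface Fa0/12 preemption delay 35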

Below shows the status of the Flex Link.
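On IOS switches the Flex Link state is checked with:

show interfaces switchport backup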

[Image: Flex Link status, primary active]

Notice that the primary state is active and up. Now I will cause a physical port state change by unplugging the interface and see how many pings, and how much time, the failover takes.

After unplugging the primary connection, the link light of port 12 instantly came on and went green; I didn't even lose a ping to my switch. All MAC addresses and uplinks moved over to the backup port with no loss. Below shows the status afterwards. Notice that the backup state is up, not the primary. After plugging the primary back in, its port LED is amber like a blocked port, and both interfaces show a port status of up, even though the primary is not forwarding traffic at this point.

[Image: Flex Link status after failover, backup active]

 

 

 

Redundant network design using a Ruckus P300 as a backup link

This is a design I needed a few weeks ago to help with a redundancy issue. Currently we have a client that occupies two buildings separated by about 500 feet. Soon they will start construction to add a structure right in the middle, connecting the two buildings. But guess what runs right through the middle of this area? The fiber connecting the two buildings. We think the construction will almost certainly cut the fiber and cause an outage, whether planned or not.

We decided to add a backup wireless bridge link to help with redundancy. Ruckus's P300 802.11ac bridges work great, and that is what we went with.

Currently the link between the buildings is a Layer 2 trunk, and we are routing over VLAN 254, which traverses the trunk. OSPF is used to advertise each building's local subnets and to redistribute the default route.
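For context, the routed piece is plain OSPF over an SVI on VLAN 254. A rough IOS-style sketch is below; the addressing, process ID, and area are assumptions, the Nexus 9500 side uses NX-OS syntax instead, and default-information originate would only live on whichever box actually sources the default route:

interface Vlan254
 description Routed link between buildings
 ip address 10.254.254.2 255.255.255.0
!
router ospf 1
 network 10.254.254.0 0.0.0.255 area 0
 ! plus network statements for each building's local subnets
 default-information originate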

The goal is that routing and Layer 2 forwarding only become active on the wireless bridge in case of a failure of the fiber connection. So Spanning Tree will block all VLANs other than the native VLAN 200 from going through the bridge. If there is a failure, those VLANs will come online over the bridge, routing will come up, and all should work great.

The switching/routing that is used is a Nexus 9500 and 3850 stack.

 

[Image: two-building topology with fiber link and P300 bridge backup]

 

To accomplish the above, we enable OSPF on VLAN 254 and make sure all routing is correct, including redistribution. VLAN 254, our routing VLAN, is allowed across the link along with a few other VLANs. At some point this will be fixed so we only route over this link, but for now we have to stretch Layer 2 (I know, not best practice). Building 1 is currently the STP root for all VLANs stretched over the Layer 2 link. The Spanning Tree path cost is increased on the links connecting to the bridge, and a special command is needed on the wireless bridge to disable the Ruckus loop detection mechanism. This is very important because by default it stops STP from flowing. Follow this link for help with that:

https://usermanual.wiki/Ruckus/P30010010957ReleaseNotesRevA20170721.1820034031/help.

After the bridge was set up, path cost modified, and the Cisco port configs set correctly, it was time to test. First we needed to make sure STP was indeed blocking the VLANs it should, and it is: STP is blocking the redundant path.
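The check itself is just the standard STP show commands, for example:

show spanning-tree blockedports
show spanning-tree vlan 254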

[Image: spanning-tree output showing the bridge path blocked]

Testing/Results

We tested failover in two ways: 1) just shutting down the fiber links in the CLI, and 2) physically unplugging the links. During failover we saw 2 pings lost and then everything was back up. I actually expected OSPF to drop and then re-converge, but that did not happen. Since the hello/dead timers were never reached, OSPF never dropped, and failover time was much better than I expected. The only way I could really tell we had failed over was a small increase in latency, and of course we were limited to around 300 Mbit/s.

Some notes on this: make 100% sure that the Ruckus loop detection is disabled before even starting the actual bridge configuration. Also create some kind of alert via PRTG/SolarWinds/Cacti to notify you if links go down or there is a big increase in bandwidth on the wireless bridges.

 

 

Ruckus P300 Bridge- Spanning-tree issue

I wanted to create a backup link for a network using a P300 bridge. The current network has two 10 Gig links going between two buildings, but construction is set to start soon that could cut the fiber stretching between them. One option was to use the P300 bridges to create a backup link between the two buildings, which would become active in case of a failure of the fiber links.

We are currently stretching maybe 12 VLANs between the buildings. The goal was to have all data go over the 2×10 Gig links and have Spanning Tree block the other VLANs from using the bridge. I increased the STP port cost on each side and brought up the bridges, and both the fiber links and the bridge were forwarding, causing a loop. It took a bit to understand, but according to Ruckus, their gateway bridge detection mechanism basically stops STP and LACP from being forwarded across the bridge. I found the help doc below from Ruckus, which gives the command to disable this feature.

https://usermanual.wiki/Ruckus/P30010010957ReleaseNotesRevA20170721.1820034031/help

After disabling it, bam! VLANs are blocked via STP.

The command to disable this feature in CLI:

rkscli: set meshcfg loop_detect disable

Below is the topology

[Image: topology with 2×10G fiber links and P300 bridge]

I modified the path cost to something crazy high, 5000, with this command:

interface TenGigabitEthernet1/0/15
description "Connected to Ruckus Bridge"
switchport trunk native vlan 200
switchport trunk allowed vlan 5,10,12,14,15,20,22,40,150,200,254,1337
switchport mode trunk
auto qos trust
spanning-tree cost 5000

Then, as soon as I disabled the loop detect, STP shut down the secondary link for each VLAN that the switch was not root for.

For example, Building 1 is root for all VLANs except 10, 254, and 12. So Building 1 blocked those VLANs from traversing port 15, which is the bridge.
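Which switch is root for which VLAN, and what is blocked on the bridge port, can be confirmed with:

show spanning-tree root
show spanning-tree interface TenGigabitEthernet1/0/15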

[Image: spanning-tree output showing VLANs blocked on the bridge port]

Failover took just seconds to work.

 

 

Cisco ASA – E-SMTP

I recently had an issue with an Office 365 deployment. This was a hybrid deployment, and as we were trying to start syncing to Office 365 we were getting an error in our logs:

(Retry : Must issue a STARTTLS command first)

This was causing mail not to flow. This site happened to be behind a Cisco ASA. After some research, the cause of this problem is that the ASA is inspecting ESMTP in the policy map, which strips the STARTTLS exchange.

To get around this, you have to create a new inspection policy map whose parameters allow encrypted connections. The default is to not allow this, which it enforces by stripping STARTTLS.

First, let's create our policy map:

policy-map type inspect esmtp esmtp_map
parameters
allow-tls action log

Then we modify the global policy to use this policy map for ESMTP inspection. In the following example the service policy is all default, the global_policy:

policy-map global_policy
class inspection_default
inspect esmtp esmtp_map

That's it. Now it will allow TLS. I did hit one other issue when applying this; I kept getting the error "ERROR: Inspect configuration of this type exists, first remove that configuration and then add the new configuration."

I was getting this error because the plain "inspect esmtp" option is enabled by default in the global_policy map. So first remove that inspection, then reapply it with the new map. After modifying the ESMTP parameters, mail started flowing. The following shows the whole process.

show run

policy-map global_policy
class inspection_default
inspect dns preset_dns_map
inspect ftp
inspect h323 h225
inspect h323 ras
inspect rsh
inspect rtsp
inspect esmtp
inspect sqlnet
inspect skinny
inspect sunrpc
inspect xdmcp
inspect sip
inspect netbios
inspect tftp
!

config t

NHS-ASA(config-pmap)# policy-map global_policy
NHS-ASA(config-pmap)# class inspection_default
NHS-ASA(config-pmap-c)# inspect esmtp esmtp_map
ERROR: Inspect configuration of this type exists, first remove
that configuration and then add the new configuration

NHS-ASA(config-pmap-c)# no inspect esmtp
NHS-ASA(config-pmap-c)# inspect esmtp esmtp_map

show run:

policy-map type inspect esmtp esmtp_map
parameters
allow-tls action log
policy-map global_policy
class inspection_default
inspect dns preset_dns_map
inspect ftp
inspect h323 h225
inspect h323 ras
inspect rsh
inspect rtsp
inspect sqlnet
inspect skinny
inspect sunrpc
inspect xdmcp
inspect sip
inspect netbios
inspect tftp
inspect esmtp esmtp_map
!
service-policy global_policy global
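To confirm the new map is the one doing the inspecting, the ESMTP inspection counters can be checked:

show service-policy inspect esmtp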

 

 

Cisco ISR 4000 Bridge group with Vlans

The 4000 series does things a little differently with bridge groups than older ISRs. The below is on a Cisco ISR 4331. In this case I needed a bridge group going to two separate switches, with one port blocked by Spanning Tree to keep loops out.

A bridge domain groups the physical interfaces into one logical group, and the Bridge Domain Interface (BDI), the ISR 4000's equivalent of the older Bridge Virtual Interface (BVI), is the Layer 3 routing interface associated with that bridge domain.

In this scenario I have two VLANs, 4006 and 4007. I will create a bridge domain for each, so the two ports of the bridge group basically act like a switch. Spanning Tree will pass through the bridge group and one of my ports will be blocked. The reason for the bridge group is that I have two distribution switches and I want switch redundancy (yes, I know the router is still a single point of failure). If one of my core switches dies, it should only take a few seconds and I will be back up and going at Layer 2 to my core. In this scenario I have a reason not to use ECMP or routing on the router interconnects; I need to keep them at Layer 2.

[Image: ISR 4331 connected to two distribution switches via bridged interfaces]

Config:

bridge-domain 4006
bridge-domain 4007

interface GigabitEthernet0/0/0

description **Connected to Primary Core**
no ip address
negotiation auto
service instance 1 ethernet
encapsulation untagged
bridge-domain 1
!
service instance 4006 ethernet
encapsulation dot1q 4006
rewrite ingress tag pop 1 symmetric
bridge-domain 4006
!
service instance 4007 ethernet
encapsulation dot1q 4007
rewrite ingress tag pop 1 symmetric
bridge-domain 4007
!

interface GigabitEthernet0/0/2

description **Connected to Primary Core2**
no ip address
negotiation auto
service instance 1 ethernet
encapsulation untagged
bridge-domain 1
!
service instance 4006 ethernet
encapsulation dot1q 4006
rewrite ingress tag pop 1 symmetric
bridge-domain 4006
!
service instance 4007 ethernet
encapsulation dot1q 4007
rewrite ingress tag pop 1 symmetric
bridge-domain 4007
!
!

interface BDI1
no ip address
shutdown
!
interface BDI4006
ip address 1.1.1.1 255.255.255.0
!
interface BDI4007
ip address 2.2.2.1 255.255.255.0
no ip redirects

!
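To recap what the config above is doing: each service instance is an Ethernet Flow Point, where "encapsulation dot1q 4006" matches frames tagged with VLAN 4006, "rewrite ingress tag pop 1 symmetric" strips the tag on ingress (and pushes it back on egress), and "bridge-domain 4006" drops that traffic into the bridge domain, with BDI4006 as its Layer 3 interface. A couple of quick checks, one on the router and one on a distribution switch:

show bridge-domain 4006     – on the ISR
show spanning-tree vlan 4006     – on the switch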

That's it. The bridge domains and BDI interfaces are up and working. When I check Spanning Tree on the switch I see the correct port blocked, which could of course be load balanced or modified per VLAN.

 

 

Redundant Cisco ASA VPN scenario

Pre-X-series Cisco ASAs are still extremely common.

This entry describes a redundant VPN setup with two ISPs on the branch firewall (Cisco ASA 5505) and one ISP on the datacenter/hub side (Cisco ASA 5510).

The branch office has a cable connection as its primary ISP and a backup 4G CradlePoint. We will be using an SLA to track the internet status of the cable connection, and a floating static route to control backup route priority.

The idea behind the branch office config is that two different crypto maps exist, one applied to each interface. If the SLA fails and brings down the primary default route, traffic starts going out the backup connection, which has the backup crypto map applied. When the primary interface comes back up, traffic goes back over the crypto map applied to it. This way we do not have flip-flopping VPNs, and it avoids the issue of trying to use one crypto map across two different interfaces.

 

[Image: branch ASA with dual ISPs and VPN to the datacenter ASA]

CONFIG

Branch ASA:

interface Vlan2
nameif PRIMARY
security-level 0
ip address 1.1.1.10 255.255.255.0
!
interface Vlan12
nameif BACKUP
security-level 0
ip address 2.2.2.10 255.255.255.0

object-group network CORE-SUBNETS        — Object group for Core subnets
network-object 10.0.0.0 255.255.255.0
network-object 192.168.0.0 255.255.255.0
object-group network BRANCH-SUBNETS    — Object group for Branch subnets
network-object 192.168.18.0 255.255.255.0

object network Any-Cable                                     — NAT For Primary
nat (inside,PRIMARY) dynamic interface
object network Any-Backup                                  — NAT For Backup Internet
nat (inside,BACKUP) dynamic interface

NO-NAT

nat (inside,any) source static BRANCH-SUBNETS BRANCH-SUBNETS destination static CORE-SUBNETS CORE-SUBNETS

SLA config:

sla monitor 123
type echo protocol ipIcmpEcho 8.8.8.8 interface PRIMARY
sla monitor schedule 123 life forever start-time now
track 2 rtr 123 reachability     – Track object 2 watches SLA 123 (required for the tracked route below)

route PRIMARY 0.0.0.0 0.0.0.0 1.1.1.1 1 track 2     – The track statement ties the SLA to the route

route BACKUP 0.0.0.0 0.0.0.0 2.2.2.1 250     – Floating static with distance 250 makes this the backup route (2.2.2.1 assumed as the backup ISP gateway)
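To watch the tracking during a failover, the SLA and track state can be checked with the usual commands, for example:

show sla monitor operational-state 123
show track 2
show route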

VPN CONFIG:

access-list VPN-to-CORE permit ip object-group BRANCH-SUBNETS object-group CORE-SUBNETS

crypto ipsec ikev1 transform-set AES256SHA esp-aes-256 esp-sha-hmac

Primary Crypto

crypto map BRANCH-MAP 100 match address VPN-to-CORE
crypto map BRANCH-MAP 100 set peer 3.3.3.1
crypto map BRANCH-MAP 100 set ikev1 transform-set AES256SHA
crypto map BRANCH-MAP 100 set security-association lifetime seconds 28800

crypto ikev1 enable PRIMARY

crypto map BRANCH-MAP interface PRIMARY

BACKUP Crypto MAP

crypto map BRANCH-MAP-BK 100 match address VPN-to-CORE
crypto map BRANCH-MAP-BK 100 set peer 3.3.3.1
crypto map BRANCH-MAP-BK 100 set ikev1 transform-set AES256SHA
crypto map BRANCH-MAP-BK 100 set security-association lifetime seconds 28800
crypto map BRANCH-MAP-BK interface BACKUP

crypto ikev1 enable BACKUP

crypto ikev1 policy 10
authentication pre-share
encryption aes-192
hash sha
group 2
lifetime 86400

Tunnel Group

tunnel-group 3.3.3.1 type ipsec-l2l
tunnel-group 3.3.3.1 ipsec-attributes
ikev1 pre-shared-key password

 

Core Config:

object-group network CORE-SUBNETS        — Object group for Core subnets
network-object 10.0.0.0 255.255.255.0
network-object 192.168.0.0 255.255.255.0
object-group network BRANCH-SUBNETS    — Object group for Branch subnets
network-object 192.168.18.0 255.255.255.0

NO-NAT

nat (inside,any) source static BRANCH-SUBNETS BRANCH-SUBNETS destination static CORE-SUBNETS CORE-SUBNETS

VPN CONFIG:

access-list VPN-to-BRANCH permit ip object-group CORE-SUBNETS object-group BRANCH-SUBNETS

crypto ipsec ikev1 transform-set ESP-AES-256-SHA-TRANS esp-aes-256 esp-sha-hmac

crypto map outside_map 100 match address VPN-to-BRANCH
crypto map outside_map 100 set peer 1.1.1.10 2.2.2.10     – Notice both peer IPs
crypto map outside_map 100 set ikev1 transform-set ESP-AES-256-SHA-TRANS
crypto map outside_map 100 set reverse-route

crypto ikev1 enable outside

crypto map outside_map interface outside

crypto ikev1 policy 10
authentication pre-share
encryption aes-192
hash sha
group 2
lifetime 86400

tunnel-group 1.1.1.10 type ipsec-l2l
tunnel-group 1.1.1.10 ipsec-attributes
ikev1 pre-shared-key password

tunnel-group 2.2.2.10 type ipsec-l2l
tunnel-group 2.2.2.10 ipsec-attributes
ikev1 pre-shared-key password

Cisco USB console setup for a 3750/3850/2960 – USB Mini

The other day I needed to use the blue mini-USB console cable that Cisco now ships. It's been around a long time, but I always have my normal console cable laying around and just use that. When I attempted to use it, I first installed the USB driver provided by Cisco; everything seemed to work, but I could not open the COM port. Today I did some research and got it working. I was just missing a small step, but thought I would write it up to try to help someone else. My OS is Windows 10.

So first we have to install the USB driver. This can be downloaded from Cisco.com using your CCO account. Install the version that matches your computer, then reboot. The problem comes in after the reboot: Windows uses its own USB serial driver, not the Cisco one, so you have to change it manually.

To walk through it: after the install/reboot I connected the cable and went into Device Manager to see which COM port it was associated with. COM3. Great. Then I tried to console to that port, and it would not work.

[Screenshot: Device Manager showing the COM port]

After a lot of troubleshooting I found that you need to update the driver to a locally installed one; when you do that, Cisco's driver shows up in the list. Those steps are below.

So, let's first change the driver.

[Screenshot: Update Driver Software option]

Select "Update Driver Software".

[Screenshot: driver update options]

Then select "Let me pick from a list of device drivers on my computer".

[Screenshot: list of available drivers]

Bam! Now, select the Cisco driver.

[Screenshot: Cisco serial driver selected]

Now we see that the Cisco serial driver is in use.

Now we should be able to launch PuTTY, set it to COM3, and it should work.

[Screenshot: PuTTY COM port selection]

 

That's it!


Cisco BGP UnSuppress Maps

Unsuppress maps in Cisco can be a very helpful tool in situations where you are summarizing a bunch of /24s into, say, a /20, but you need to leak out one of the /24s without summarization while still advertising the larger summary route.

By default, once you summarize with summary-only, the more-specific networks that fall under your summary route are no longer advertised. In my situation I was testing ECMP and needed to advertise one /24 to each of my MPLS neighbors so my hub router could get back on either path. I couldn't test this with a heavily used /24 due to outage concerns, so we did it with a /24 that was not used very often. I am not going to show the layout of the dual MPLS, just one side.
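For context, the suppression comes from the aggregate-address statement with the summary-only keyword; something like the following (the /20 shown here is an assumption, picked so that the /24 used later falls under it):

router bgp 64551
 aggregate-address 10.32.32.0 255.255.240.0 summary-only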

Below shows the topology

layout

Great, now for config on the Cisco Routers.

Steps:

  • Create a prefix list of the subnets I need unsuppressed.
  • Create a route-map to match those subnets.
  • Add the BGP statement referencing my neighbor with the "unsuppress-map" keyword.
  • Clear BGP soft to force a route refresh.

My prefix list will be named UMAP and my route-map will be named UMAP-MAP.

So let's take a look at the routes advertised to my neighbor 10.0.5.22 before making the changes.
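That is done with the advertised-routes option:

show ip bgp neighbors 10.0.5.22 advertised-routes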

[Image: advertised routes before the change, only the /20 summary]

Notice that just the /20 is being advertised. Now check out the config below and let's apply it.

config t

ip prefix-list UMAP seq 5 permit 10.32.39.0/24

route-map UMAP-MAP permit 10
 match ip address prefix-list UMAP

router bgp 64551
neighbor 10.0.5.22 unsuppress-map UMAP-MAP

Then do a soft clear to refresh BGP:

clear ip bgp * soft

So that's it for the config. Let's look at the advertised routes now.

[Image: advertised routes after the change, /20 plus the unsuppressed /24]

Great! We are advertising our /24 and everything is now working perfectly. Unsuppress maps to the rescue!

Accessing the ASA’s inside interface across an IPSEC VPN tunnel

Recently I created a tunnel for a client between two Cisco ASAs; they monitor via PRTG and make automated backups via SolarWinds. After the tunnel creation, all traffic worked great except traffic (SSH, SNMP, ping) directed to the device's inside interface. There are a few simple commands that fix this. In this entry I will point out those commands and explain why they actually fix the issue.

[Image: site-to-site VPN layout between the two ASAs]

The above image shows a simple layout of what I have going on. The VPN is up and functioning and everything is great except access to the ASA itself from the remote subnet. The ASA in question is 192.168.19.1/24.

There are really two commands here. First:

management-access inside

As Cisco States it:

“If your VPN tunnel terminates on one interface, but you want to manage the ASA by accessing a different interface, you can identify that interface as a management-access interface. For example, if you enter the ASA from the outside interface, this feature lets you connect to the inside interface using ASDM, SSH, Telnet, or SNMP; or you can ping the inside interface when entering from the outside interface”.

Awesome, so that allows us to use the inside interface for management when connecting through a different interface (the VPN). For me this alone did not work; I still could not access the device from the remote subnet.

The next command, which resolved the issue for me, had to do with my NAT statement. The firmware version of this device is 9.x, so we use object-based NAT for our NO-NAT (identity NAT) statements.

nat (inside,outside) source static LOCAL-NETWORKS LOCAL-NETWORKS destination static 10.0.0.0/24 10.0.0.0/24 route-lookup

To note, LOCAL-NETWORKS is an object group that contains my 192.168.19.0/24 subnet.
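For completeness, that object group looks something like this (only the 192.168.19.0/24 entry is confirmed; add whatever other local subnets apply):

object-group network LOCAL-NETWORKS
network-object 192.168.19.0 255.255.255.0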

The route-lookup keyword on the NO-NAT statement is what resolved the issue. The reason is that when packets are sent to a destination, the ASA normally uses the egress interface specified in the matching NAT rule, in this case "outside". That makes sense, but we don't want to send this traffic out the WAN interface as regular traffic; we want it to go down the tunnel. Adding the route-lookup keyword tells the firewall to use the routing table to determine the egress interface instead, basically overriding the interface listed in the identity NAT (NO-NAT) statement. According to Cisco, the ASA consulted the routing table by default in older firmware, but to make NAT more flexible you now have to specify the keyword.

That's it. All communication to your ASA should now work.