Starting in FortiOS 5.4.1 you could "Quarantine" an IP address, meaning the quarantined host cannot communicate through the firewall.
Many different parts of the firewall can quarantine an IP address. For example, the AV and IPS engines can both automatically quarantine an IP if it triggers a defined violation.
In 6.0 you can view the IPs that have been quarantined by going to Monitor > Quarantine. From here you can see which IPs are blocked, and for what reason. As you can see in the image below, 22.214.171.124 has been blocked for 26 days by an admin. If an admin blocks an IP address (as we will see) it shows up with "Administrative" as the source. The other IPs have been blocked by the IPS engine. The image below shows the monitor section.
So, let's say you look in FortiView and see that a remote IP is sending/receiving a ton of bandwidth, and you want to make sure that stops. In this example let's quarantine the IP 126.96.36.199.
For this example, say I was looking through FortiView and found an issue that makes me want to block the above IP. Just right-click the IP you would like to block and select "Quarantine". When you do this, a pop-up will ask for the length of time you would like to block it for.
The above shows that it will ban the IP from communicating for the given period of time.
So, let's say we want to remove an IP address that has been quarantined – no problem, just go to Monitor > Quarantine and either click on the IP and delete that individual entry, or click to delete all entries.
You can modify how long, and for what reason, the IPS/AV quarantines an address within the policy. For example, the config below modifies the reason/time of quarantine. The AV settings are within the CLI of the AV profile under "nac-quar". Something to note: sources are not quarantined by default.
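As a rough sketch of those settings (based on FortiOS 6.0-style syntax; the "default" profile/sensor names here are placeholders, so verify the exact options against your firmware):

config antivirus profile
    edit "default"
        config nac-quar
            set infected quar-src-ip — quarantine the source IP when an infection is found
            set expiry 5m — how long the quarantine lasts
        end
    next
end

config ips sensor
    edit "default"
        config entries
            edit 1
                set quarantine attacker — quarantine the attacking source on a signature match
                set quarantine-expiry 5m — quarantine duration
            next
        end
    next
end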
This entry details the config for setting up and deploying VRFs on a Ruckus ICX 7250. Recently I had an issue where a client had a new ISP, and that ISP gave them the customer WAN /30 subnet, then routed their customer LAN subnet (publicly usable addresses) to their side of the /30. The customer did not want any extra equipment installed, like a router, to handle the WAN routing, so the next best thing was to split the Ruckus 7250 switch into a WAN/LAN router – one switch to rule them all! In this scenario the 7250 is the local gateway for all vlans – so it handles the local LAN routing – and is also the Internet router.
Of course there are problems with the following design, like a single point of failure, but it's a small site with one 48-port switch, a Fortigate firewall, and a cloud VoIP SD-WAN router. The purpose of this design is to allow the VoIP SD-WAN solution to sit outside the firewall, so the 7250 handles both LAN and WAN routing – and it worked well. If the ISP had not required a customer routing device we would have just set up an Internet vlan, set the Fortigate/INSpeed to public IPs, and placed them in that vlan. But the ISP required a routing device in this instance.
Here is the design.
I think the ICX series supported VRFs when it was running Brocade firmware, but I would recommend upgrading to Ruckus's ICX firmware – version SPR08080 or greater. Of course, the device has to be running the routing firmware, not the switching code. The VRF feature is in Ruckus's Layer 3 Premium feature set, so a license will be needed.
First let's enable VRF support and increase the number of routes.
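The knobs look something like this (a sketch – I believe these are the FastIron system-max commands for VRFs, but verify the exact names and limits for your release in the Layer 3 guide):

system-max ip-vrf 4 — maximum number of VRF instances
system-max ip-route-vrf 1024 — route table size per user VRF
write memory
reload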
These commands enable the VRF functionality, and the switch will need a reboot for them to take effect.
Next we can start configuring our VRF. In this case the ISP /30 will be 188.8.131.52/30 – .1 will be the ISP, .2 will be us. I will set up the routes for the VRF, then the vlan interfaces, and apply the /30. There is a keyword in the VE config that associates it with a given VRF. Within the VRF config you need to specify the Route Distinguisher (RD) – it only matters locally.
vrf INTERNET-VRF
 rd 11:11
 ip router-id 184.108.40.206
 address-family ipv4
  ip route 0.0.0.0/0 220.127.116.11
 exit-address-family
exit-vrf
vlan 300 name INTERNET-VRF by port — WAN vlan for the Fortigate WAN and SD-WAN router WAN interfaces; the customer LAN subnet goes here
 untagged ethe 1/1/19 ethe 2/1/23
 router-interface ve 300
 spanning-tree 802-1w
 spanning-tree 802-1w priority 4094
!
vlan 400 name ISP-VRF by port — the /30 ISP network
 untagged ethe 1/1/24
 router-interface ve 400
!
interface ve 400
 vrf forwarding INTERNET-VRF — this is the command that associates the VE with the VRF
 ip address 18.104.22.168/30
interface ve 300
 vrf forwarding INTERNET-VRF — same VRF association for the LAN-side VE
 ip address 22.214.171.124/29
Here is a subset of my user config – vlan 40 is where most of the desktops go, and its gateway, in this case 10.6.40.1/22, lives on the switch in the default VRF.
vlan 40 name Computers by port
 untagged ethe 1/1/1 to 1/1/18 ethe 1/1/21 ethe 2/1/1 to 2/1/18 ethe 2/1/22
 router-interface ve 40
 spanning-tree 802-1w
 spanning-tree 802-1w priority 4094
!
#show run int ve 40
interface ve 40
 ip address 10.6.40.1 255.255.252.0
 ip helper-address 1 10.6.10.10
That's it! A show ip route of the default VRF shows:
#show ip route
Total number of IP routes: 9
Type Codes - B:BGP D:Connected O:OSPF R:RIP S:Static; Cost - Dist/Metric
BGP Codes - i:iBGP e:eBGP
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2
        Destination        Gateway      Port     Cost  Type  Uptime
1       0.0.0.0/0          10.6.254.2   ve 254   1/1   S     1d17h — this is the Fortigate
2       10.6.0.0/22        DIRECT       ve 1     0/0   D     1d17h
3       10.6.10.0/24       DIRECT       ve 10    0/0   D     21h4m
4       10.6.40.0/22       DIRECT       ve 40    0/0   D     1d17h
5       10.6.100.0/24      DIRECT       ve 100   0/0   D     1d18h
6       10.6.254.0/24      DIRECT       ve 254   0/0   D     1d17h
7       172.16.6.0/29      DIRECT       ve 650   0/0   D     1m5s
8       192.168.6.0/24     DIRECT       ve 1     0/0   D     1d17h
9       192.168.100.0/24   172.16.6.1   ve 650   1/1   S     1m4s
But if we specifically show the INTERNET-VRF routes:
#show ip route vrf INTERNET-VRF
Total number of IP routes: 3
Type Codes - B:BGP D:Connected O:OSPF R:RIP S:Static; Cost - Dist/Metric
BGP Codes - i:iBGP e:eBGP
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2
        Destination         Gateway         Port     Cost  Type  Uptime
1       0.0.0.0/0           126.96.36.199   ve 400   1/1   S     21h4m
2       188.8.131.52/30     DIRECT          ve 400   0/0   D     20h57m
3       184.108.40.206/29   DIRECT          ve 300   0/0   D     21h5m
And there we have it: the device is now in two VRFs, the default and INTERNET-VRF, with specific interfaces assigned to each. If you want to test pinging from that VRF specifically you can use the following commands:
#ping vrf INTERNET-VRF 220.127.116.11
Sending 1, 16-byte ICMP Echo to 18.104.22.168, timeout 5000 msec, TTL 64
Type Control-c to abort
Reply from 22.214.171.124 : bytes=16 time=1ms TTL=122
Success rate is 100 percent (1/1), round-trip min/avg/max=1/1/1 ms.
Currently I am working with a client who has lots of Ruckus ICX 7250 PoE+ switches. These have been great switches with lots of features: a large PoE budget, 10G uplinks, and VRF/routing capability. Recently the client rolled out Mitel headsets that charge from their larger handset phone stations.
A strange issue has been happening, though: when they put a headset in to charge, the phone reboots, the switch throws an error (you will see it below), and it basically kills power to the port, so everything reboots. After some quick analysis it seems like the phone station requests 802.3af power (15.4 watts max), and when the headset is placed on charge the draw spikes above 15.4 watts for a bit, making the switch rightly throw the error. The phone pulls somewhere around 1-3 watts, and the headset seems to add an additional 3 (according to its documentation) – still well within range of 802.3af.
This is an assumption – I might go through and do some debugging to see if that's the exact issue – but adding some commands to the switch did fix the problem. Before we go through the commands and analysis: the commands used to resolve the issue basically set each port to 802.3at, which allocates 30 watts per port. One issue with this: simple math says that if we have a 48-port switch with a 740 watt PoE budget, we can only really give each port 15 watts if every port is powered up. That's true, but luckily we aren't going to run into that problem here, since only a few ports have headsets or other power-hungry devices.
When the headsets were plugged in the switch started throwing these errors:
Dec 20 19:41:27:C:System: PoE: Power disabled on port 1/1/19 because of PD overload.
Dec 20 19:41:27:C:System: PoE: Power disabled on port 1/1/19 because of PD overload.
This would then disable PoE on the port for a few seconds and make the phone reboot.
To check how much power the phone was pulling, the following was done prior to the fix commands – please just look at port 19:
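For reference, the check itself is just the switch's PoE status command (the per-port output referenced here came from it):

show inline power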
The phone was showing up asking for just 3.6 watts, and the switch was allocating only 15.4.
There are lots of ways to tackle this problem. The approach I used was to modify the allocated power by class – so instead of letting the switch decide how much power to allocate by letting the device tell it, I am forcing the switch to change the power class for the phones (in this case 3) to 4, which allocates a default of 30 watts. Below is Ruckus's outline of the power classes.
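(These are the standard 802.3 power class values, reconstructed here from the standard rather than copied from the Ruckus doc – double-check your release's guide:)

Class   Usage       Allocated power
0       default     15.4 watts
1       optional    4.0 watts
2       optional    7.0 watts
3       optional    15.4 watts
4       optional    30.0 watts (802.3at)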
The commands to modify this:
interface ethernet 1/1/19
 inline power power-by-class 4
After applying these commands check Port 19 out:
All devices are still requesting pretty much the same amount of power they were before, except now we see the headsets requesting power as well. Not only that, but each port now has 30 watts allocated to it. So the thought that we could run out of allocatable power if we had a lot of phones/PoE devices plugged in is a real concern: right now, even though we are only drawing 47.7 watts, the switch has provisioned 390 watts.
There are better commands to use than the power-by-class one I used. For example, since we know the phone charging the headset only needs a little over 4 watts, we could use the command "inline power power-limit 25000" to allocate 25 watts instead of the full 30. This number could keep being lowered to find the exact point where the port drops. Or you could modify only the ports with headsets – but, as mentioned above, we have no real need to do that, so the power-by-class blanket command works fine in this case.
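Applied per port, that would look like this (same phone port as above; the value is in milliwatts, so 25000 = 25 watts):

interface ethernet 1/1/19
 inline power power-limit 25000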
After applying the above command check out port 19’s PoE allocation:
There are lots of ways to fix this issue, but they all modify the amount of power allocated to the port.
Recently I have been working with the Dell S4128 switches. These have been great, and the price point is amazing.
The device comes with 2 ports that can run as 10/40/100 Gig interfaces (given the right media). I needed to connect one of these ports via a 40 gig DAC to a Dell server. When I plugged this in, a "show interface eth 1/1/26" would show the interface up, show the DAC model number, and then say "Protocol down". At first I thought this could be mismatched vlans, etc., but after a few minutes found it had to be a negotiation issue.
A client recently had an issue where a security audit found insecure protocol versions supported within HTTPS – TLS 1.0 and TLS 1.1. The audit found these issues on the web interface of the SmartZone; nothing to do with EAP or WiFi authentication. After trying quite a few things I decided to open a ticket with Ruckus support. They instructed me to run the following command on the SmartZone to disable them:
vszh-50(debug)# no tlsv1
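Note the prompt – the command is run from the SmartZone CLI's debug context, so the full sequence looks roughly like this (a sketch; the prompts will reflect your own hostname):

vszh-50> enable
vszh-50# debug
vszh-50(debug)# no tlsv1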
This seemed to fix the issue. The web service (Tomcat) restarts, and it takes about 5 minutes before you can log back into the SMZ again.
This is a design I needed a few weeks ago to help with a redundancy issue. Currently we have a client that occupies two buildings separated by about 500 feet. Soon they will start construction to add a structure right in the middle, connecting the two buildings. But guess what runs right through the middle of this area? The fiber connecting the two buildings. We are thinking that the construction will almost certainly cut the fiber, causing an outage, whether planned or not.
We decided to add a backup wireless bridge link to help with redundancy. Ruckus's P300 AC bridge works great, and that is what we went with.
Currently the link between the buildings is a layer 2 trunk, and we are routing over vlan 254, which traverses the trunk. OSPF is used to advertise each building's local subnets and to redistribute the default route.
The goal is that routing/layer 2 will only become active on the wireless bridge if there is a failure of the fiber connections. Spanning tree will block all vlans going through the bridge other than the native vlan 200. If there is a failure, those vlans will come online over the bridge, routing will come up, and all should work great.
The switching/routing in use is a Nexus 9500 and a Catalyst 3850 stack.
To accomplish the above, we enable OSPF on vlan 254 and make sure all routing is correct, including redistribution. Vlan 254, our routing vlan, is allowed across the link along with a few other vlans – at some point this will be fixed so we only route over this link, but for now we have to stretch layer 2 (I know, not best practice). Building 1 is currently the STP root for all vlans stretching over the layer 2 link. Special commands are enabled on the wireless bridge to disable Ruckus's loop detection mechanism – this is very important because by default it stops STP from flowing; see the P300 entry below for the command to do that. Finally, the spanning-tree path cost is increased on the links connecting to the bridge, as sketched next.
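On the Cisco side, raising the path cost is a per-interface knob – a sketch with a hypothetical uplink port, using a cost high enough that the bridge path always loses to the fiber path:

interface TenGigabitEthernet1/0/47 — uplink toward the P300 bridge (hypothetical port)
 spanning-tree cost 2000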
After the bridge was set up, the path cost modified, and the Cisco port configs set correctly, it was time to test. First we needed to make sure STP was indeed blocking the vlans it needed to – and yes! STP was blocking the redundant path.
We tested failover in two ways: 1 – just shutting down the fiber links in the CLI, and 2 – physically unplugging the links. During failover we saw that 2 pings were lost and then everything was back up. I actually thought OSPF would drop and then re-converge, but that did not happen. Instead, since the hello/dead timers were never reached, OSPF never dropped – failover time was much better than I expected. The only way I could really tell we had failed over was a small increase in latency, and of course we were limited to around 300 Mbit/s.
Some notes on this: make 100% sure that the Ruckus loop detection is disabled before even starting the actual bridge configuration. Also create alerts via PRTG/SolarWinds/Cacti to notify you if links go down or there is a big increase in bandwidth on the wireless bridges.
I wanted to create a backup link for a network using a P300 bridge. The current network has two 10 gig links going between two buildings, but construction is set to start soon that could cut the fiber stretching between them. One option was to use the P300 bridges to create a backup link between the two buildings, which would become active in case of a failure of the fiber links.
We are currently stretching maybe 12 vlans between the buildings. The goal was to have all data go over the 2x10 gig links and have spanning tree block the other vlans from using the bridge. I increased the STP port cost on each side and brought up the bridges – and both the fiber links and the bridge were forwarding, causing a loop. It took a bit to understand, but according to Ruckus their gateway bridge detection mechanism basically stops STP and LACP frames from forwarding. I found the help doc below from Ruckus which gave the command to disable this feature.
I have been working with Brocade ICX, and now Ruckus ICX, switches for a few years. They are awesome switches.
I was asked a couple of times about something that was happening when someone would try to set the untagged or access vlan on a port. They would get this error:
error – port ethe x/x/x are not member of default vlan
We were getting this error because other vlans were attached to the port, either untagged or tagged. To put a port into a vlan other than the default as 'untagged', we need to make sure no other vlans are bound to that port. To do this we can check which vlans are attached to the port. In this scenario my default vlan is 999 – it would be 1 on a switch where it was not manually changed.
switch#show vlan br eth 1/1/3
Port 1/1/3 is a member of 2 VLANs
VLANs 32 48
Untagged VLAN : 999
Tagged VLANs : 32 48
Great – so now we know the port is untagged in 999 (the default) but tagged in those two other vlans. We need to remove the 32 and 48 tags on this port before we can add it untagged into vlan 16, which is the goal.
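The removal and re-add look like this (standard FastIron vlan syntax; the final command prints the confirmation shown below):

vlan 32
 no tagged ethernet 1/1/3
vlan 48
 no tagged ethernet 1/1/3
vlan 16
 untagged ethernet 1/1/3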
Added untagged port(s) ethe 1/1/3 to port-vlan 16.
switch#show vlan br eth 1/1/3
Port 1/1/3 is a member of 1 VLANs
Untagged VLAN : 16
Tagged VLANs :
That's it! Now we are untagged (access) in vlan 16. But wait – what if we wanted it to be a trunk port allowing vlans 32/48 and be native in 16? Then we would use the 'dual-mode' command with the untagged vlan specified, like this:
dual-mode 16 — means untagged in 16, but allow whatever vlans are tagged on the port to pass. Of course vlans 16, 32, and 48 would need to be tagged on the port first. I will write another entry about that.
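Put together, the whole trunk-with-native-16 setup would look roughly like this (a sketch of the full sequence):

vlan 16
 tagged ethernet 1/1/3
vlan 32
 tagged ethernet 1/1/3
vlan 48
 tagged ethernet 1/1/3
!
interface ethernet 1/1/3
 dual-mode 16 — untagged/native in 16; 32 and 48 pass tagged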