
Time-based Policy-map for Traffic Policing

Recently I was tasked with resolving a problem where a video stream would get very choppy on Saturday and Sunday. The stream is for a church, so as you would expect it needs to be perfect. There are multiple remote sites, each connected over a 100Mb MPLS link back to the core site, which is where the video stream originates.

We wanted to make sure that on Saturday and Sunday normal traffic (web, file shares) would be capped at around half the usable bandwidth, leaving the other half for the stream. That is much more bandwidth than the stream needs, but we wanted to be sure it had plenty of headroom.

So what we did was implement time-based ACLs: one for the traffic we want to prioritize, and another for all the default traffic we want to police. We used an ‘any any’ ACL instead of class-default because the match has to be time-based.

One of the things to note – the burst rate really mattered here. I followed Cisco’s formula: burst (bytes) = rate (bps) / 8 × 1.5. Using this formula solved my issues and traffic flowed exactly the way I wanted it to.
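
For the 50Mb police rate used below, that works out to:

50,000,000 / 8 = 6,250,000 bytes per second
6,250,000 × 1.5 = 9,375,000 bytes of burst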

To test, I changed the time on the router to make it look like Saturday, and then used Iperf to push traffic.
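
Setting the clock manually looks something like this (the date here is just an example Saturday that falls inside the time-range):

clock set 13:30:00 12 July 2014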

The system clock on the router has to be correct, or everything can get very screwed up. You might be asking: why use a policer instead of a shaper? Good question. I would have used a shaper if my 3750s supported it; a shaper would have been a better solution here.
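
For reference, on a platform that does support MQC shaping, the Default-Traffic class further below could use something like this instead of the policer (a sketch only, since the 3750 won’t take it):

policy-map Stream
class Default-Traffic
shape average 50000000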

Steps

– Create the time-range

– Create ACLs using the time-range

– Create class-maps

– Create the policy-map

– Apply the policy-map to the interface

 

First, to create the time-range matching the window we wanted, we used the following commands:

config t

time-range Weekend-Service

periodic Saturday 13:00 to Sunday 15:00

exit

Next I created the ACLs to match the priority traffic and the default traffic:

ip access-list extended Priority-Traffic

10 permit ip host 10.0.0.1 any time-range Weekend-Service

exit

ip access-list extended Default-Traffic

10 permit ip any any time-range Weekend-Service

exit

Then I created the class-maps and the policy-map, and attached the policy to the interface.

class-map match-any Priority-Traffic
match access-group name Priority-Traffic
exit

class-map match-any Default-Traffic
match access-group name Default-Traffic
exit

policy-map Stream
class Priority-Traffic
set ip dscp ef
exit

class Default-Traffic
police 50000000 9375000
set ip dscp default
exit

int gig 1/0
service-policy input Stream
exit

There are several commands to verify the policy is working; one is “show policy-map interface”.
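
A few quick checks I find useful (the interface name matches the example above; output varies by platform):

show policy-map interface gig 1/0
show time-range
show access-lists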

Iperf for Bandwidth Testing

Iperf is a great tool for testing bandwidth over both UDP (connectionless) and TCP. It does a great job of showing how much traffic it can push through the link between server and client, as well as the delay and jitter of a UDP session. You can download it here: http://iperf.fr/

Defaults:

By default Iperf runs for 10 seconds, on TCP port 5001, with a 64KB window size. All of these settings can be changed.
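
For example, both sides take the -p switch to use a different port (5002 here is arbitrary):

iperf -s -p 5002
iperf -c x.x.x.x -p 5002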

Using Iperf

Using Iperf is simple: run one instance on a server (the receiver) with the -s option, and another instance on the testing client (the sender) with the -c option.

On the server run:

iperf -s – this starts the server, listening on TCP port 5001 by default. You can change it to any port you like.

On the client run:

iperf -c x.x.x.x, where x.x.x.x is the IP address of the listening server.

That’s it. Iperf will try to push as much traffic as it can through TCP with a 64KB window size.

Images are below. Note that this was done on my local machine, so just replace 127.0.0.1 with your test address.

Client: [screenshot of client output]

Server: [screenshot of server output]

 

Running a UDP test will usually result in higher bandwidth numbers, since UDP has no flow-control mechanism throttling the sender.

To use UDP instead of TCP, add the -u switch.

Server: iperf -u -s

Client: iperf -u -c x.x.x.x
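
One caveat: in UDP mode Iperf targets a fairly low rate by default (about 1 Mbit/s), so to really load a link you set a target bandwidth with the -b switch; the 50M here is just an example value:

iperf -u -c x.x.x.x -b 50M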

Images:

Client: [screenshot of client output]

Server: [screenshot of server output]

Notice that the server output includes both jitter and lost packets. This can be very useful when troubleshooting link quality for VoIP.

More Bandwidth!

What if you want to completely saturate the link for a full stress test? You can use a combination of the TCP window size (the -w switch) and parallel streams (the -P switch). I would recommend using a max window size of 1024KB and, let’s say, 8 parallel streams (running at the same time).

Also, we can change how often Iperf reports back to us; we will change it to 2 seconds (the -i switch). For laughs, let’s also bump the run time from the default of 10 seconds to 30 seconds (the -t switch). Here are the commands on both server and client:

Client:

iperf -w 1024k -P 8 -i 2 -t 30 -c 127.0.0.1

Server:

iperf -w 1024k -s

Images:

Client: [screenshot of client output]

Server: [screenshot of server output]

Other Switches

Iperf has a lot of important switches, but here are a few I use a lot:

– -B – Bind to a specific host/interface. Great if the machine has multiple IPs and you only want to test with one.

– -P – Runs multiple streams in parallel; can flood the network with as much traffic as possible. Great for stress testing.

– -d – Runs a dual test, sending and receiving at the same time.

– -i – How often Iperf reports back to you about the transfer.

– -t – Amount of time Iperf runs and sends data.

– -w – Window size; can be specified in KB or MB.
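
Putting a few of these together, a client-side stress test might look like this (the addresses are placeholders, and -B assumes 10.0.0.5 is one of the client’s own IPs):

iperf -c x.x.x.x -B 10.0.0.5 -d -P 4 -i 5 -t 60 -w 512k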