FlowScan - a system to analyze and report on Cflowd flow files
This document is the FlowScan User Manual $Revision: 1.8 $, $Date: 2000/03/21 19:02:32 $. It describes the installation and setup of FlowScan.
FlowScan is a system which scans the flow files written by cflowd and reports on what it finds. Two reporting modules are included. The first, CampusIO.pm, produces the graphs at:
http://wwwstats.net.wisc.edu
which show traffic in and out at a peering point. The second, SubNetIO.pm, updates RRD files for each of the subnets that you specify (so that you can produce graphs of CampusIO by subnet).
The idea behind the distinct report modules is that other users will be able to write new reports, either as classes derived from CampusIO or as altogether new ones. For instance, one may wish to write a report module called Abuse.pm which would send email when it detected potentially abusive things going on, like Denial-of-Service attacks and various scans.
FlowScan is freely-available under the GPL, the GNU General Public License.
If you don't have a Cisco router at your border, you're probably barking up the wrong tree with this package.
If you have a trivial amount of traffic being exported to cflowd, such as a T1's worth, perhaps any old machine will do. However, if you want to process a fair amount of traffic (e.g. at ~OC-3 rates) you'll want a fast machine. I've used a SPARC Ultra-30 w/256MB running Solaris and a Dell Precision 610 (dual Pentium III, 2x450MHz) w/128MB running Debian Linux 2.1. The latter machine is definitely preferable: flowscan processes flows in about 40% of the time that it took the SPARC. (flowscan itself is currently single-threaded.)
In an early performance test, using 24 hours of flows from our peering router here at UW-Madison, here's a comparison of the two machines' average time to process 5 minutes of flows:
Sun - 284 sec
Dell - 111 sec
Note that it is important that flowscan not take longer to process the flows than it takes your network's activity and your exporting Cisco routers to produce them. So, you want to keep the average time to process 5 minutes of flows under 300 seconds.
I recommend devoting a file-system to Cflowd and FlowScan. Both require disk space, and the amount depends upon a number of factors, such as your flow volume and how long you keep saved flow files.
To find the characteristics of your environment, you'll just have to run the patched cflowd for a little while to see what you get.
Early in this project, we were usually collecting about 150,000-300,000 flows from our peering router every 5 minutes. Recently, our 5-minute flow files average ~15 to 20MB in size.
During a recent inbound Denial-of-Service attack consisting of 40-byte TCP SYN packets with random source addresses and port numbers, I've seen a single ``5-minute'' flow file greater than 500MB! Even on our fast machine, that single file took hours to process.
Surely YMMV, but currently a 2.5GB file-system allows me to preserve the gzip(1)-compressed flow files for about 24 hours.
The packages and perl modules required by FlowScan are numerous. Their presence or absence will be detected by FlowScan's configure script, but you'll save yourself some frustration by getting ahead of the game and collecting and installing them first.
FlowScan itself is available at:
http://net.doit.wisc.edu/~plonka/FlowScan/
Cflowd itself is available at:
http://www.caida.org/Tools/Cflowd/
My patches are available at:
http://net.doit.wisc.edu/~plonka/cflowd/
RRDTOOL is available at:
http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/
I've used versions 0.99.20 and 1.0.7.
If you don't have perl already, you're probably way over your head, but anyway, check out the Comprehensive Perl Archive Network (CPAN):
http://www.cpan.org/
ksh is used as the SHELL in the Makefile for the graphs. pdksh works fine too. If for some reason you don't already have ksh, check out:
http://www.kornshell.com/
or:
http://www.math.mun.ca/~michael/pdksh/
This is strictly optional. If you're a current MRTG user and want FlowScan to create and update MRTG .log files in addition to RRD files, be sure rateup is available on the machine in question. Personally, I no longer create MRTG .log files, just .rrd files. If you're already using MRTG, you probably already know it's available at:
http://ee-staff.ethz.ch/~oetiker/webtools/mrtg/mrtg.html
RRDs is the shared-library perl module supplied with RRDTOOL. (See above.)
This module is part of the BoulderIO package. This package is available at:
http://www.genome.wi.mit.edu/genome_software/other/boulder.html
and/or:
http://www.genome.wi.mit.edu/genome_software/other/boulder.tar.gz
I'm using Boulder-1.06.
The ConfigReader package is available on CPAN. I'm using ConfigReader-0.5.
The NetTree package is available at:
http://net.doit.wisc.edu/~plonka/NetTree/
This package is available on CPAN and is required by NetTrie. Note the spelling: NetTree and NetTrie are two different modules! The NetTrie package is available at:
http://net.doit.wisc.edu/~plonka/NetTrie/
The Cflow perl module is available at:
http://net.doit.wisc.edu/~plonka/Cflow/
You'll want Cflow-1.018 or greater.
I recommend that you create a user just for the purpose of running these utilities so that all directory permissions and created file permissions are consistent. You may find this useful especially if you have multiple network engineers accessing the flows.
I suggest that the FlowScan --prefix
directory be owned by an appropriate user and group, and that the
permissions allow write by other members of the group. Also, turn on the
set-group-id bit on the directory so that newly created files (such as the
flow files and log file) will be owned by that group as well, e.g.:
user$ chmod g+ws $PREFIX
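For example, here's a minimal sketch of the initial account and directory setup (the user name flowscan and group name flows are hypothetical, as is the directory; adapt these commands to your OS's account-management tools):
root# groupadd flows
root# useradd -g flows -d /var/local/flows flowscan
root# chown flowscan:flows /var/local/flows
root# chmod g+ws /var/local/flows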
For FlowScan, I suggest that you export from your Cisco like this:
ip flow-export version 5 peer-as
ip flow-export destination 10.0.0.1 2055
Of course the IP address and port are determined by your cflowd.conf. I suggest you also do this if your IOS version supports it:
ip flow-cache timeout active 1
This should help to ensure that flows are exported in a timely fashion.
This document does not attempt to explain Cflowd. There is good documentation provided with that package.
As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully an example is worth a thousand words.
My cflowd.conf
file looks like this:
OPTIONS {
  LOGFACILITY:    local6
  TCPCOLLECTPORT: 2056
  TABLESOCKFILE:  /home/whomever/cflowd/etc/cflowdtable.socket
  FLOWDIR:        /var/local/flows
  FLOWFILELEN:    1000000
  NUMFLOWFILES:   10
  MINLOGMISSED:   300
}
CISCOEXPORTER {
  HOST:       10.0.0.10
  ADDRESSES:  { 10.42.42.10, }
  CFDATAPORT: 2055
  # COLLECT:  { flows }
}
COLLECTOR {
  HOST: 127.0.0.1
  AUTH: none
}
And I invoke the patched cflowd like this:
user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf
Those options cause a flow file to be ``dropped'' every 5 minutes, skipping flows with an output interface of zero unless they are multicast flows. Once you have this working, you're ready to continue.
Do not specify the --prefix
as you might for other packages!
I.e. don't use /usr/local
or a similar directory in which other things are installed. This prefix
should be the directory where the patched cflowd has been configured to
write flow files.
A good way to avoid doing something dumb here is to not configure, make and install FlowScan as root.
user$ ./configure --help # note --with-... options
e.g.:
user$ ./configure --prefix=/var/local/flows
user$ make
user$ make -n install
user$ make install
Subsequently in this document, the ``prefix'' directory will be referred to as the ``--prefix directory'' or using the environment variable $PREFIX. FlowScan does not require or use this environment variable; it's just a documentation convention so you know to use the directory which you passed with --prefix.
The OutputDir
is where the .rrd
files and graphs will reside. As the chosen FlowScan user do:
$ PREFIX=/var/local/flows
$ mkdir -p $PREFIX/graphs
Then, when you edit the .cf
files below, be sure to specify this using the OutputDir
directive.
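For example, the corresponding line in the report configuration file might look like this (the path follows the example above; the authoritative syntax is documented in the sample .cf files themselves):
OutputDir /var/local/flows/graphs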
The FlowScan Package ships with sample configuration files in the cf
sub-directory of the distribution. During initial configuration you will
copy and sometimes modify these sample files to match your network
environment and your purposes.
FlowScan looks for its configuration files in its bin directory - i.e. the directory in which the flowscan perl script and FlowScan report modules are installed. I don't really like this, but that's the way it is for now. Forgive me.
FlowScan currently uses two kinds of configuration files:
This format should be relatively self-explanatory based on the sample files referenced below. The directives are documented in comments within those sample configuration files.
A number of the directives have paths to directory entries as their values. One has a choice of configuring these as either relative or absolute paths. The sample configuration files ship with relative path specifications to minimize the changes a new user must make. However, it is imperative that flowscan be run in the --prefix directory if these relative paths are used.
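For example (assuming the --prefix directory used earlier in this document):
$ cd /var/local/flows
$ bin/flowscan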
I've chosen Boulder IO's ``semantic free data interchange format'' for use in related projects, and since this is the format in which our subnet definitions were available, I continued to use it here.
If you're new to ``Boulder IO'', the examples referenced below should be sufficient. Remember that lines containing just = are record separators.
For complete information on this format, do:
$ perldoc boulder
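As a minimal illustration, here's a sketch of a record-reading loop, assuming the Boulder::Stream interface from the Boulder package mentioned earlier (the SUBNET tag is from the sample files discussed below):

#!/usr/bin/perl
# sketch: read Boulder IO records on stdin and print each record's SUBNET value
use strict;
use Boulder::Stream;

my $stream = Boulder::Stream->new;
while (my $record = $stream->get) {
    print $record->get('SUBNET'), "\n";
}

You might run it as, e.g., $ ./subnets.pl < local_nets.boulder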
Here's a step-by-step guide to installing, reviewing, and editing the FlowScan configuration files:
$ cp cf/flowscan.cf $PREFIX/bin $ chmod u+w $PREFIX/bin/flowscan.cf $ # edit $PREFIX/bin/flowscan.cf
FlowScan ships with, at least, the CampusIO
and SubNetIO
reports. These two reports are mutually exclusive - SubNetIO
does everything that CampusIO
does, and more.
Initially, in flowscan.cf
I strongly suggest you configure:
ReportClasses CampusIO
rather than:
ReportClasses SubNetIO
The CampusIO report class is simpler than SubNetIO, requires less configuration, and is less CPU/processing-intensive. Once you have the CampusIO stuff working, you can always go back and configure flowscan to use SubNetIO instead.
Copy the template to the bin directory. Adjust the values using the required and optional configuration directives documented therein. The most important step is to configure the list of NextHops. For most purposes, the default values for the rest of the directives should suffice.
For advanced users who export from multiple Ciscos to the same Cflowd/FlowScan machine, it is also very important to configure LocalNextHops.
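As a hedged illustration, the NextHops entry in CampusIO.cf might look something like this (the addresses are made up, and the exact syntax - e.g. how multiple values are separated - is documented in the template's comments):
NextHops 10.0.1.1, 10.0.1.2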
Copy the template to the bin
directory. This file should be referenced in CampusIO.cf
by the LocalSubnetFiles
directive.
The local_nets.boulder
file must contain a list of the networks or subnets within your
organization. It is imperative that this file is maintained accurately
since flowscan will use this to determine whether a given flow represents
inbound traffic.
You should probably specify the networks/subnets in as terse a way as possible. That is, if you have two adjacent subnets that can be coalesced into one specification, do so. (This is different from the similarly formatted our_subnets.boulder file mentioned below.)
The format of an entry is:
SUBNET=10.0.0.0/8 [TAG=value] [...]
Technically, SUBNET is the only tag required in each record. You may find it useful to add other tags such as DESCRIPTION for documentation purposes. Entries are separated by a line containing a single =.
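For example, rather than listing the hypothetical adjacent subnets 10.0.0.0/24 and 10.0.1.0/24 as two records, a single coalesced record would be:
SUBNET=10.0.0.0/23
DESCRIPTION=campus networks (10.0.0.0/24 and 10.0.1.0/24 coalesced)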
FlowScan identifies outbound flows based on the list of nexthop addresses that you'll set up below.
Copy the template to the bin
directory from which you will be running flowscan
. The supplied content seems to work well as of this writing (Mar 10,
2000). No warranties. Please let me know if you have updates regarding
Napster IP address usage, protocol, and/or port usage.
The file Napster_subnets.boulder should contain a list of the networks/subnets in use by Napster, i.e. napster.com.
As of this writing, more info on Napster can be found at:
http://napster.cjb.net/
http://opennap.sourceforge.net/napster.txt
http://david.weekly.org/code/napster-proxy.php3
Copy the template to the bin directory from which you will be running flowscan. Adjust the values using the required and optional configuration directives documented therein. For most purposes, the default values should suffice.
Copy the template to the bin
directory.
This file is used by the SubNetIO report class, and is therefore only necessary if you have defined ReportClasses SubNetIO rather than ReportClasses CampusIO.
The file our_subnets.boulder
should contain a list of the subnets on which you'd like to gather I/O
statistics.
You should format this file like the aforementioned local_nets.boulder file. However, the SUBNET tags and values in this file should be listed exactly as you use them in your network: one record for each subnet. So, if you have two subnets with different purposes, they should have separate entries even if they are numerically adjacent. This will enable you to report on each of those user populations independently. For instance:
SUBNET=10.0.1.0/24
DESCRIPTION=power user subnet
=
SUBNET=10.0.2.0/24
DESCRIPTION=luser subnet
If you'd like to have FlowScan save your flow files, make a sub-directory named saved in the directory where flowscan has been configured to look for flow files. This is specified with the FlowFileGlob directive in flowscan.cf and is usually the same directory that is specified using the FLOWDIR directive in your cflowd.conf.
If you do this, flowscan will move each flow file to that saved
sub-directory after processing it. (Otherwise it would simply remove them.)
e.g.:
$ mkdir $PREFIX/saved
$ touch $PREFIX/saved/.gzip_lock
The .gzip_lock file created by this command is used as a lock file to ensure that only one cron job at a time compresses the saved flow files.
Be sure to set up a crontab entry as mentioned in Final Setup below. I.e. don't complain to the author if you're saving flows and your file-system fills up.
Once you have the patched cflowd running with the -s 300 option, and it has written at least one time-stamped flow file (i.e. other than flows.current), try this:
$ flowscan
The output should appear as something like this:
Loading "bin/Napster_subnets.boulder" ... Loading "bin/local_nets.boulder" ... 2000/03/20 17:01:04 working on file flows.20000320_16:57:22... 2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281 2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took 3 wallclock secs ( 0.44 usr + 0.04 sys = 0.48 CPU) sleep 300...
At this point, the RRD files should have been created and updated as the flow files are processed. If not, use the diagnostic warning and error messages or the perl debugger (perl -d flowscan) to determine what is wrong.
Look at the above output carefully. It is imperative that the number of seconds Cflow::find took not regularly approach or exceed 300. If, as in the example above, your log messages indicate that it took more than 300 seconds, FlowScan will not be able to keep up with the flows being collected on this machine (if the given flow file is representative). If the usr + sys CPU seconds total more than 300, then this machine is not even capable of running FlowScan fast enough, and you'll need to run it on a faster machine (or tweak the code, rewrite it in C, or mess with process priorities using nice(1), etc.)
Once you feel that flowscan
is working correctly, you can set it (and cflowd
) to start up at system boot time. Sample rc
scripts for Solaris and Linux are supplied in the rc
sub-directory of this distribution. You may have to edit these scripts
depending on your ps(1)
flavor and where various commands have
been installed on your system.
Also, if you're saving your flow files, you should set up crontab entries
to handle the ``old'' flows. I use one crontab entry to
gzip(1)
recently processed files, and another to delete the files older than a
given number of hours. The ``right'' number of hours is a function of your
file-system size and the rate of flows being exported/collected. See the example/crontab
file.
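By way of illustration only, here is a hedged sketch of what such crontab entries might look like (the paths, schedule, and find(1) predicates are assumptions; the shipped example/crontab, which also makes use of the .gzip_lock file, is authoritative):

# compress recently processed flow files (hypothetical path and schedule)
15,45 * * * * find /var/local/flows/saved -name 'flows.*' ! -name '*.gz' -exec gzip {} \;
# delete saved flow files more than a day old
5 0 * * * find /var/local/flows/saved -name 'flows.*' -mtime +1 -exec rm -f {} \;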
To generate graphs, try the graphs.mf
Makefile:
$ cp graphs.mf $PREFIX/graphs/Makefile
$ cd $PREFIX/graphs
$ make
This should produce the ``Campus I/O by IP Protocol'' and ``Well Known Services'' graphs in GIF files.
Creation of other graphs will require knowledge of RRDTool.
If you use the supplied graphs.mf Makefile, you can use the examples therein to see how to build ``Campus I/O by Network'' and ``AS to AS'' graphs. The examples use UW-Madison network numbers, the names of networks with which we peer, and such, so it will be non-trivial for you to customize them, but at least there's an example.
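To give a taste of what's involved, here's a minimal rrdtool sketch (the file name campus.rrd and the data-source names in and out are hypothetical; inspect the .rrd files in your OutputDir for the actual names):

$ rrdtool graph campus.gif --start -86400 \
    DEF:in=campus.rrd:in:AVERAGE \
    DEF:out=campus.rrd:out:AVERAGE \
    AREA:in#00FF00:'campus in' \
    LINE1:out#0000FF:'campus out'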
Currently, RRD files for the configured ASPairs contain a : in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you create graphs using them because of how the API uses : to separate arguments.
For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir, e.g.:
$ cd graphs
$ ln -s 0:42.rrd Us2Them.rrd
$ ln -s 42:0.rrd Them2Us.rrd
A reminder for me to fix this is in the TODO
list.
The current Makefile-based graphing, while coherent, is cumbersome at best. I find that the verbosity and complexity of adding new graph targets to the Makefile makes my brain hurt.
Other RRDTOOL front-ends that produce graphs should be able to work with
FlowScan-generated .rrd
files, so there's hope.
Note that this document is provided `as is'. The information in it is not warranted to be correct. Use it at your own risk.
Copyright (c) 2000 Dave Plonka <plonka@doit.wisc.edu>. All rights reserved.
This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.