FlowScan is a system that scans the flow files written by cflowd and
reports on what it finds. Two reporting modules are included. The first,
CampusIO.pm, produces the graphs at:
http://wwwstats.net.wisc.edu
which show traffic in and out at a peering point. The second, SubNetIO.pm,
updates RRD files for each of the subnets that you specify (so that you can
produce CampusIO-style graphs per subnet).
The idea behind the distinct report modules is that other users will be
able to write new reports, either as classes derived from CampusIO or as
altogether new ones. For instance, one might wish to write a report module
called Abuse.pm that sends email when it detects potentially abusive
activity, such as Denial-of-Service attacks and various scans.
FlowScan is freely available under the GPL, the GNU General Public License.
Please read and/or participate in the FlowScan mailing lists, described at:
http://net.doit.wisc.edu/~plonka/FlowScan/#Mailing_Lists
By reading and/or participating in the list, you will be helping me to use
my time effectively, so that others will benefit from questions answered
and issues raised.
The mailing lists' archives are available at:
http://net.doit.wisc.edu/~plonka/list/flowscan
and:
http://net.doit.wisc.edu/~plonka/list/flowscan-announce
If you are upgrading from FlowScan-1.002 or FlowScan-1.003, you need only
perform a subset of the steps described below for an initial installation.
First-time FlowScan users should skip to Initial Install Requirements,
below.
As in the past, my cflowd patches are available here:
http://net.doit.wisc.edu/~plonka/cflowd/?M=D
If you are running a release older than cflowd-2-1-a6, I recommend
upgrading cflowd, most importantly so that you can apply the aforementioned
patches. FlowScan requires that cflowd-2-1-a6 have these patches applied:
cflowd-2-1-a6-djp.patch and cflowd-2-1-a6-sysUpTime-djp.patch.
As of this writing, the most recent cflowd version that I have used is
cflowd-2-1-a9. If you are using cflowd-2-1-a9, you need only patch it with
cflowd-2-1-a9-djp.patch to prepare it for use with FlowScan.
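For instance, applying a patch typically looks something like this (a
sketch only; the exact -p strip count depends on how the patch was
generated, so try -p0 if -p1 fails):
$ cd cflowd-2-1-a9
$ patch -p1 < ../cflowd-2-1-a9-djp.patch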
FlowScan requires the Net::Patricia perl module. I have uploaded this
module to PAUSE, but it may not have entered CPAN yet. You can try to
install it using the CPAN shell like this:
# perl -MCPAN -e shell
cpan> install Net::Patricia
If Net::Patricia
is not found on CPAN, you can obtain it here:
http://net.doit.wisc.edu/~plonka/Net-Patricia/
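Incidentally, Net::Patricia provides fast longest-prefix matching, which
is how FlowScan classifies addresses against the subnet lists you'll
configure later. A minimal sketch of its use (the prefix and label here
are made up for illustration):
  use Net::Patricia;
  my $pt = new Net::Patricia;
  $pt->add_string('10.0.0.0/8', 'campus');    # associate a prefix with a value
  my $label = $pt->match_string('10.1.2.3');  # longest-prefix match
  print defined($label) ? "matched: $label\n" : "no match\n";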
With FlowScan-1.002, you were required to run a very old version of the
Boulder modules, which I made available to you as boulder.tar.gz.
FlowScan is now compatible with the current Boulder
distribution. I highly recommend upgrading Boulder
as described below in Software Requirements, Boulder
.
FlowScan now requires Cflow-1.024 or greater. Test the version you
currently have installed as follows:
$ perl -MCflow -le 'print $Cflow::VERSION'
If the version is less than 1.024, obtain and install the current version
as described below in Software Requirements, Cflow
.
When you run FlowScan's configure script, you should specify the same
--prefix value that you used when installing your existing FlowScan, e.g.
/var/local/flows, or wherever your time-stamped raw flow files are
currently being written by cflowd.
Firstly, note the new UDPServices example in the CampusIO.cf file in the
cf sub-directory of the FlowScan distribution. Cut-and-paste from it as
necessary into your $PREFIX/bin/CampusIO.cf file.
Secondly, the Napster_subnets.boulder file has changed significantly since
the one provided with FlowScan-1.002. If you have FlowScan configured to
measure Napster traffic, replace your old Napster_subnets.boulder with the
one from the newer distribution:
$ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder
Thirdly, there is an updated graphs.mf template Makefile. Be sure to try
the one supplied in the current distribution, as described below in
Supplied Graphs. You may wish to copy it to your graphs sub-directory.
While it is not required, I highly recommend installing RRGrapher
if you want to produce other graphs. It is referenced below in Custom Graphs.
You'll need one or more Ciscos exporting NetFlow data; you can check which
flow-export versions your IOS supports with:
ip flow-export version ?
You'll also need a host machine on which to collect and process the flows;
nearly any machine will do. However, if you want to process a fair amount
of traffic (e.g. at ~OC-3 rates) you'll want a fast machine.
I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell
Precision 610 (dual Pentium III, 2x450MHz) w/128MB running Debian Linux
2.1, and most recently a dual-PIII Dell server, 2x600MHz, w/256MB running
Debian Linux 2.2. The Intel machines are definitely preferable, in the
sense that flowscan processes flows in about 40% of the time that it took
the SPARC. (The main flowscan script itself is currently single-threaded.)
In an early performance test of mine, using 24 hours of flows from our
peering router here at UW-Madison, here's the comparison of their average
time to process 5 minutes of flows:
SPARC - 284 sec
Intel - 111 sec
Note that it is important that flowscan not take longer to process the
flows than it takes your network's activity and exporting Cisco routers to
produce them. So, you want to keep the average time to process 5 minutes
of flows under 300 seconds.
Early in this project, we were usually collecting about 150,000-300,000
flows from our peering router every 5 minutes. Recently, our 5-minute flow
files average ~15 to 20 MB in size.
During a recent inbound Denial-of-Service attack consisting of 40-byte TCP
SYN packets with random source addresses and port numbers, I saw a single
``5-minute'' flow file greater than 500MB! Even on our fast machine, that
single file took hours to process.
Surely YMMV, but currently a 2.5GB file-system allows me to preserve
gzip(1)ped flow files for about 24 hours. (At ~15-20 MB of raw flows every
5 minutes, a day's worth is roughly 4-6 GB uncompressed, so it's the
compression that makes 24 hours fit.)
Also consider the network path from your exporting Cisco(s) to your
collector machine if you have enabled ip route-cache flow on very many
fast interfaces. With lots of exported flow data (e.g. 15-20 MB of raw
flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC, I found
that the host was dropping some of the incoming UDP packets. This was
evidenced by a constantly-increasing number of udpInOverflows
was evidenced by a constantly-increasing number of udpInOverflows
in the
netstat -s
output under Solaris. I addressed this by reconfiguring my hosts with a 100
Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not
seen that problem since. Of course, one should ensure that the requisite
bandwidth is available along the full path between the exporting Cisco(s)
and the collecting host.
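For instance, on Solaris you can check that counter like this (the counter
name may differ on other systems):
$ netstat -s | grep udpInOverflows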
Most of these packages are checked for by FlowScan's configure script, but
you'll save yourself some frustration by getting ahead of the game and
collecting and installing them first. Below, I've attempted to present
them in a reasonable order in which to obtain, build, and install them.
ftp://ftp.caida.org/pub/arts++/
As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79,
because its Makefiles use glob patterns for header dependencies, e.g.
``*.hh''. From my cursory look at the GNU make ChangeLog, perhaps any
version >= 3.78.90 will suffice. Also, there may be trouble if you don't
have the flex headers installed in your ``system'' include directory, such
as ``/usr/include'', even though ``configure.in'' appears to try to handle
this situation. Since mine were in the ``local'' include directory, I
hand-tweaked the classes/src/Makefile's ``.cc.o'' default rule to include
that directory as well.
http://net.doit.wisc.edu/~plonka/cflowd/?M=D
Obtain the patch or patches that apply to the version of cflowd you intend
to run, and apply them before building cflowd below.
http://www.caida.org/tools/measurement/cflowd/
and:
ftp://ftp.caida.org/pub/cflowd/
In my experience building cflowd, you're most likely to have success in a
GNU development environment, such as that provided with GNU/Linux or
FreeBSD.
I have not had problems building the patched cflowd-2-1-a9
or
cflowd-2-1-a6
under Debian Linux 2.2.
I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and flex-2.5.4.
As of cflowd-2-1-a6, beware that the build may pause for minutes while
as(1) uses lots of CPU and memory to build ``CflowdCisco.o''. This is
apparently `normal'. Also, the build appears to be subtly reliant on GNU
ld(1), which is available in the GNU ``binutils'' package. (I was unable
to build cflowd-2-1-a6 with the sparc-sun-solaris2.6 ``/usr/ccs/bin/ld'',
although earlier cflowd releases built fine with it.)
http://www.cpan.org/
and:
http://www.perl.com/
I've tested with perl 5.004, 5.005, and 5.6.0.
ksh
is used as the SHELL in the Makefile for the graphs.
pdksh
works fine too. If for some reason you don't already have ksh
, check out:
http://www.kornshell.com/
or:
http://www.math.mun.ca/~michael/pdksh/
http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/
FlowScan expects that you've built and installed RRDTOOL something like this:
$ ./configure --enable-shared
$ make install site-perl-install
If you want FlowScan to write MRTG-style .log files in addition to RRD
files, be sure rateup is available on the machine in question.
Personally, I no longer create MRTG .log files, just .rrd files. If you're
already using MRTG, you probably already knew it's available at:
http://ee-staff.ethz.ch/~oetiker/webtools/mrtg/mrtg.html
You can install the Boulder modules using the CPAN shell like this:
# perl -MCPAN -e shell
cpan> install Boulder::Stream
If you want to fetch it manually you can probably find it at:
http://search.cpan.org/search?dist=Boulder
I've tested with the modules supplied in the Boulder-1.18 distribution and also those in the old ``boulder.tar.gz'' distribution.
# perl -MCPAN -e shell
cpan> install ConfigReader::DirectiveStyle
If you want to fetch it manually you can probably find it at:
http://search.cpan.org/search?dist=ConfigReader
I'm using ConfigReader-0.5.
You can try to install it using the CPAN shell like this:
# perl -MCPAN -e shell
cpan> install Net::Patricia
If Net::Patricia
is not found on CPAN, you can obtain it here:
http://net.doit.wisc.edu/~plonka/Net-Patricia/
http://net.doit.wisc.edu/~plonka/Cflow/
You'll need Cflow-1.024 or greater.
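If you're curious what the Cflow module provides, here is a minimal sketch
of a flow-printing script using Cflow::find and the exported flow
variables (illustrative only; see the Cflow documentation for the
authoritative list of variables):
  use Cflow qw(:flowvars find);
  # print a line for each TCP flow in the given raw flow file(s):
  find(\&wanted, @ARGV);
  sub wanted {
     return unless (6 == $protocol); # TCP only
     printf("%s.%d -> %s.%d %d pkts, %d bytes\n",
            $srcip, $srcport, $dstip, $dstport, $pkts, $bytes);
  }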
http://net.doit.wisc.edu/~plonka/FlowScan/
I suggest that the FlowScan --prefix
directory be owned by an appropriate user and group, and that the
permissions allow write by other members of the group. Also, turn on the
set-group-id bit on the directory so that newly created files (such as the
flow files and log file) will be owned by that group as well, e.g.:
user$ chmod g+ws $PREFIX
FlowScan expects your host's 80/tcp service to be called http. Try running
this command:
$ perl -le "print scalar(getservbyport(80, 'tcp'))"
You can continue with the next step if this command prints http
. However, if it prints some other value, such as www
, then I suggest you modify your /etc/services
file so that the line containing
80/tcp
looks something like this:
http 80/tcp www www-http #World Wide Web HTTP
Be sure to leave the old name, such as www, as an ``alias'', as I've shown
here. This will reduce the risk of breaking existing applications that may
refer to the service by that name.
If you decide not to modify the service name in this way, FlowScan should
still work, but you'll be on your own when it comes to producing graphs.
On your Cisco, enable flow-switching on the appropriate interfaces with ip
route-cache flow. Also, I suggest that you export from your Cisco like
this:
ip flow-export version 5 peer-as
ip flow-export destination 10.0.0.1 2055
Of course the IP address and port are determined by your
cflowd.conf
. To help ensure that flows are exported in a timely fashion, I suggest you
also do this if your IOS version supports it:
ip flow-cache timeout active 1
Newer IOS versions, e.g. 12.0(9), use this syntax:
ip flow-cache active-timeout 1
unless you've specified something such as downward-compatible-config 11.2
.
Lastly, in complicated environments, choosing which particular interfaces
should have ip route-cache flow enabled is somewhat difficult. For
FlowScan, one usually wants it enabled on any interface that is an ingress
point for traffic from inside to outside, or vice-versa. You probably
don't want flow-switching enabled on interfaces that carry policy-routed
traffic, such as traffic being redirected transparently to a web cache.
Otherwise, FlowScan could count the same traffic twice, because multiple
flows are reported for what is essentially the same traffic making
multiple passes through a border router, e.g. user-to-webcache, then
webcache-to-outside-world (on behalf of that user).
As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully, an example is worth a thousand words.
My cflowd.conf
file looks like this:
OPTIONS {
  LOGFACILITY:    local6
  TCPCOLLECTPORT: 2056
  TABLESOCKFILE:  /home/whomever/cflowd/etc/cflowdtable.socket
  FLOWDIR:        /var/local/flows
  FLOWFILELEN:    1000000
  NUMFLOWFILES:   10
  MINLOGMISSED:   300
}
CISCOEXPORTER {
  HOST:       10.0.0.10
  ADDRESSES:  { 10.42.42.10, }
  CFDATAPORT: 2055
  # COLLECT:  { flows }
}
COLLECTOR {
  HOST: 127.0.0.1
  AUTH: none
}
And I invoke the patched cflowd like this:
user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf
Those options cause a flow file to be ``dropped'' every 5 minutes,
skipping flows with an output interface of zero unless they are multicast
flows. Once you have this working, you're ready to continue.
Don't choose FlowScan's --prefix value as you might for other packages!
I.e. don't use /usr/local or a similar directory in which other things are
installed. This prefix should be the directory where the patched cflowd
has been configured to write flow files.
A good way to avoid doing something dumb here is not to run FlowScan's
configure or make as root.
user$ ./configure --help # note --with-... options
e.g.:
user$ ./configure --prefix=/var/local/flows
user$ make
user$ make -n install
user$ make install
By the way, in the above commands, it's OK if make says ``Nothing to be
done for `target'''. As long as make completes without an error, all is
OK.
Subsequently in this document, the ``prefix'' directory will be referred
to as the ``--prefix directory'' or using the environment variable
$PREFIX. FlowScan does not require or use this environment variable; it's
just a documentation convention, so that you know to use the directory
which you passed with --prefix.
OutputDir is where the .rrd files and graphs will reside. As the chosen
FlowScan user, do:
$ PREFIX=/var/local/flows
$ mkdir -p $PREFIX/graphs
Then, when you edit the .cf
files below, be sure to specify this using the OutputDir
directive.
Sample configuration files are supplied in the cf sub-directory of the
distribution. During initial configuration, you will copy and sometimes
modify these sample files to match your network environment and your
purposes.
FlowScan looks for its configuration files in its bin directory, i.e. the
directory in which the flowscan perl script and the FlowScan report
modules are installed. I don't really like this, but that's the way it is
for now. Forgive me.
FlowScan currently uses two kinds of configuration files: directive-style
.cf files and Boulder IO .boulder files.
A number of the directives have paths to directory entries as their
values. One has a choice of configuring these as either relative or
absolute paths. The sample configuration files ship with relative path
specifications, to minimize the changes a new user must make. However, in
this configuration, it is imperative that flowscan be run in the --prefix
directory if these relative paths are used.
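For example, with the relative paths left as shipped, you would start
flowscan something like this:
$ cd $PREFIX # i.e. the directory in which cflowd writes flow files
$ flowscan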
If you're new to ``Boulder IO'', the examples referenced below should be
sufficient. Remember that lines containing just = are record separators.
For complete information on this format, do:
$ perldoc Boulder # or "perldoc boulder" if that fails
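As a quick illustration of the record/tag model, this sketch uses
Boulder::Stream to print the SUBNET tag of each record read on stdin (a
made-up example, not part of FlowScan):
  use Boulder::Stream;
  my $stream = new Boulder::Stream;       # reads stdin by default
  while (my $record = $stream->get) {
     print $record->get('SUBNET'), "\n";  # one value per record
  }
You'd run it like ``perl subnets.pl < local_nets.boulder''.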
$ cp cf/flowscan.cf $PREFIX/bin
$ chmod u+w $PREFIX/bin/flowscan.cf
$ # edit $PREFIX/bin/flowscan.cf
FlowScan is supplied with the CampusIO and SubNetIO reports. These two
reports are mutually exclusive - SubNetIO does everything that CampusIO
does, and more.
Initially, in flowscan.cf
I strongly suggest you configure:
ReportClasses CampusIO
rather than:
ReportClasses SubNetIO
The CampusIO
report class is simpler than SubNetIO
, requires less configuration, and is less CPU/processing intensive. Once
you have the
CampusIO
stuff working, you can always go back and configure
flowscan
to use SubNetIO
instead.
Copy the sample CampusIO.cf into the bin directory. Adjust the values
using the required and optional configuration directives documented
therein. The most important thing is to configure the list of NextHops.
For most purposes, the default values for the rest of the directives
should suffice.
For advanced users who export from multiple Ciscos to the same
cflowd/FlowScan machine, it is also very important to configure
LocalNextHops.
Create local_nets.boulder in the bin directory. This file should be
referenced in CampusIO.cf by the LocalSubnetFiles directive.
The local_nets.boulder file must contain a list of the networks or subnets
within your organization. It is imperative that this file be maintained
accurately, since flowscan will use it to determine whether a given flow
represents inbound traffic.
You should probably specify the networks/subnets in as terse a way as
possible. That is, if you have two adjacent subnets that can be coalesced
into one specification, do so. (This is different from the similarly
formatted our_subnets.boulder file mentioned below.)
The format of an entry is:
SUBNET=10.0.0.0/8 [TAG=value] [...]
Technically, SUBNET is the only tag required in each record. You may find
it useful to add other tags, such as DESCRIPTION, for documentation
purposes. Entries are separated by a line containing a single =.
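For instance, a minimal local_nets.boulder might look like this (made-up
networks; note the two adjacent /24s coalesced into a /23):
SUBNET=10.0.0.0/23
DESCRIPTION=two adjacent /24s, coalesced
=
SUBNET=192.168.0.0/16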
FlowScan identifies outbound flows based on the list of nexthop addresses that you'll set up below.
If you do not want CampusIO to attempt to identify Napster traffic, be
sure to comment out all Napster-related options in CampusIO.cf.
Otherwise, copy the Napster_subnets.boulder template to the bin directory
from which you will be running flowscan. The supplied content seems to
work well as of this writing (Mar 10, 2000). No warranties. Please let me
know if you have updates regarding Napster IP address usage, protocol,
and/or port usage.
The file Napster_subnets.boulder
should contain a list of the networks/subnets in use by Napster, i.e. napster.com
.
As of this writing, more info on Napster can be found at:
http://napster.cjb.net/
http://opennap.sourceforge.net/napster.txt
http://david.weekly.org/code/napster-proxy.php3
Copy the sample SubNetIO.cf into the bin directory from which you will be
running flowscan. Adjust the values using the required and optional
configuration directives documented therein. For most purposes, the
default values should suffice.
Create our_subnets.boulder in the bin directory.
This file is used by the SubNetIO report class, and is therefore only
necessary if you have defined ReportClasses SubNetIO rather than
ReportClasses CampusIO.
The file our_subnets.boulder
should contain a list of the subnets on which you'd like to gather I/O
statistics.
You should format this file like the aforementioned local_nets.boulder
file. However, the SUBNET tags and values in this file should be listed
exactly as you use them in your network: one record for each subnet. So,
if you have two subnets with different purposes, they should have separate
entries even if they are numerically adjacent. This will enable you to
report on each of those user populations independently. For instance:
SUBNET=10.0.1.0/24
DESCRIPTION=power user subnet
=
SUBNET=10.0.2.0/24
DESCRIPTION=luser subnet
Optionally, create a sub-directory named saved in the directory where
flowscan has been configured to look for flow files. That directory has
been specified with the FlowFileGlob directive in flowscan.cf and is
usually the same directory that is specified using the FLOWDIR directive
in your cflowd.conf.
If you do this, flowscan will move each flow file to that saved
sub-directory after processing it. (Otherwise it would simply remove each
file.) E.g.:
$ mkdir $PREFIX/saved
$ touch $PREFIX/saved/.gzip_lock
The .gzip_lock file created by this command is used as a lock file, to
ensure that only one cron job at a time gzips old flow files.
Be sure to set up a crontab entry as is mentioned below in Final Setup. I.e. don't complain to the author if you're saving flows and your file-system fills up ;^).
Once cflowd is running with the -s 300 option, and it has written at least
one time-stamped flow file (i.e. other than flows.current), try this:
$ cd /dir/containing/your/time-stamped/raw/flow/files
$ flowscan
The output should appear as something like this:
Loading "bin/Napster_subnets.boulder" ... Loading "bin/local_nets.boulder" ... 2000/03/20 17:01:04 working on file flows.20000320_16:57:22... 2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281 2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took 3 wallclock secs ( 0.44 usr + 0.04 sys = 0.48 CPU) sleep 300...
At this point, the RRD files should have been created and updated as the
flow files were processed. If not, you should use the diagnostic warning
and error messages or the perl debugger (perl -d flowscan) to determine
what is wrong.
Look at the above output carefully. It is imperative that the number of
seconds that Cflow::find took not usually approach or exceed 300. If, as
in the example above, your log messages indicate that it took more than
300 seconds, FlowScan will not be able to keep up with the flows being
collected on this machine (if the given flow file is representative). If
the total of usr + sys CPU seconds is more than 300, then this machine is
not even capable of running FlowScan fast enough, and you'll need to run
it on a faster machine (or tweak the code, rewrite it in C, or mess with
process priorities using nice(1), etc.)
Once flowscan is working correctly, you can set it (and cflowd) to start
up at system boot time. Sample rc scripts for Solaris and Linux are
supplied in the rc sub-directory of this distribution. You may have to
edit these scripts depending on your ps(1) flavor and where various
commands have been installed on your system.
Also, if you're saving your flow files, you should set up crontab entries
to handle the ``old'' flows. I use one crontab entry to gzip(1) recently
processed files, and another to delete files older than a given number of
hours. The ``right'' number of hours is a function of your file-system
size and the rate of flows being exported/collected. See the
example/crontab file.
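The example/crontab file is the authoritative reference, but entries along
these lines give the flavor (a sketch only: it assumes GNU find for -mmin,
saved flows under /var/local/flows/saved, and it ignores the .gzip_lock
locking mentioned above):
# gzip saved flow files untouched for 10+ minutes:
5,15,25,35,45,55 * * * * cd /var/local/flows/saved && find . -name 'flows.*' \! -name '*.gz' -mmin +10 -exec gzip {} \;
# delete gzipped flow files older than a day:
15 0 * * * cd /var/local/flows/saved && find . -name '*.gz' -mtime +1 -exec rm -f {} \;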
To produce the supplied graphs, install the graphs.mf template as the
Makefile in your graphs directory and run make:
$ cp graphs.mf $PREFIX/graphs/Makefile
$ cd $PREFIX/graphs
$ make
This should produce the ``Campus I/O by IP Protocol'' and ``Well Known
Services'' graphs in PNG files. GIF files may be produced using the
filetype
option mentioned below.
If this command fails to produce those graphs, it is likely that some of
the requisite .rrd
files are missing, i.e. they have not yet been created by FlowScan, such as http_dst.rrd
. If this is the case, it is probably because you skipped the configuration
of /etc/services
in Configuring Your Host. Stop flowscan
, rename your
www_*.rrd
files to http_*.rrd
, modify /etc/services
, and restart flowscan.
Alternatively, you may copy and customize the graphs.mf
Makefile to remove references to the missing or misnamed .rrd
files for those targets. Also, you could produce your graphs using a
graphing tool such as RRGrapher mentioned below in Custom Graphs.
Note that the graphs.mf
template Makefile has options to specify such things as the range of time,
graph height and width, and output file type. Usage:
make -f graphs.mf [filetype=<png|gif>] [width=x] [height=y] [ioheight=y+n] [hours=h]
as in:
$ make -f graphs.mf filetype=gif height=400 hours=24 io_services_bits.gif
RRGrapher is available at:
http://net.doit.wisc.edu/~plonka/RRGrapher/
For other custom graphs, if you use the supplied graphs.mf Makefile, you
can use the examples therein to see how to build ``Campus I/O by Network''
and ``AS to AS'' graphs. The examples use UW-Madison network numbers,
names of networks with which we peer, and such, so it will be non-trivial
for you to customize them, but at least there's an example.
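If you'd rather invoke RRDTOOL directly, a custom graph boils down to an
rrdtool graph command. A sketch along these lines plots one data source
from an .rrd file in bits per second over the last day - the file and
data-source names here are assumptions, so discover the real ones first
with ``rrdtool info http_dst.rrd'':
$ rrdtool graph http.png --start -86400 \
    DEF:bytes=http_dst.rrd:in:AVERAGE \
    CDEF:bits=bytes,8,* \
    LINE2:bits#0000ff:"HTTP bits/sec"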
Currently, RRD files for the configured ASPairs contain a : in the file
name. This is apparently a no-no with RRDTOOL: although it allows you to
create files with these names, it doesn't let you produce graphs using
them, because of how the API uses : to separate arguments.
For the time being, if you want to graph AS information, you must manually
create symbolic links in your graphs sub-dir, i.e.:
$ cd graphs
$ ln -s 0:42.rrd Us2Them.rrd
$ ln -s 42:0.rrd Them2Us.rrd
A reminder for me to fix this is in the TODO
list.
Other RRDTOOL front-ends that produce graphs should be able to work with
FlowScan-generated .rrd
files, so there's hope.
Copyright (c) 2000 Dave Plonka <plonka@doit.wisc.edu>. All rights reserved.
This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.