FlowScan-1.006
FlowScan is a system which scans cflowd-format raw flow files and reports on what it finds. Two report modules are included. The CampusIO report module produced the graphs at:
http://wwwstats.net.wisc.edu
which show traffic in and out through a peering point or network border.
The SubNetIO
report updates RRD files for each of the subnets that you specify (so that
you can produce graphs of CampusIO
by subnet).
The idea behind the distinct report modules is that users will be able to
write new reports that are either derived-classes from CampusIO
or altogether new ones. For instance, one may wish to write a report module
called Abuse
which would send email when it detected potentially abusive things going
on, like Denial-of-Service attacks and various scans.
FlowScan is freely-available under the GPL, the GNU General Public License.
Please join the FlowScan mailing list, described at:
http://net.doit.wisc.edu/~plonka/FlowScan/#Mailing_Lists
By reading and participating in the list, you will be helping me to use my time effectively so that others will benefit from questions answered and issues raised.
The mailing lists' archives are available at:
http://net.doit.wisc.edu/~plonka/list/flowscan
and:
http://net.doit.wisc.edu/~plonka/list/flowscan-announce
If you have previously installed and properly configured FlowScan-1.005, you need only perform a subset of the steps that one would normally have to perform for an initial installation.
This release of FlowScan uses more memory than previous releases. That is, the flowscan process will grow to a larger size than it did in FlowScan-1.005. In my recent experience while testing this release, the flowscan process grows to approximately 128MB when I use the new experimental BGPDumpFile option to produce ``Top'' reports by ASN. This is hopefully understandable since flowscan is carrying a full internet routing table when configured in this way. The memory requirements are significantly lessened if you do not use the BGPDumpFile option. The flowscan process' size is also a function of the number of active hosts in your network.
Upgrade your Cflow perl module to Cflow-1.030 or later for improved performance. Install HTML::Table in case you want to produce the new ``Top Talkers'' reports. Details on how to obtain and install these modules can be found in Software Requirements, below.
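For example, both modules can usually be installed from the CPAN shell (a sketch; see Software Requirements below for the authoritative instructions):
# perl -MCPAN -e shell
cpan> install Cflow
cpan> install HTML::Table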
When running configure, use the same --prefix value that you did when installing your existing FlowScan, e.g. /var/local/flows, or wherever your time-stamped raw flow files are currently being written by cflowd.
To review the report modules' documentation, do:
$ cd bin
$ perldoc CampusIO
Here are a few things that changed regarding the FlowScan configuration:
First, there are new TopN and ReportPrefixFormat directives for CampusIO and SubNetIO. These directives enable the production of ``Top Talker'' reports.
Furthermore, there are new experimental BGPDumpFile and ASNFile options for CampusIO which are used to produce ``Top'' reports by Autonomous System. You will need access to a Cisco carrying a full BGP routing table to produce such reports. See the CampusIO configuration documentation for more info about configuring this feature. If you have trouble with it, remember that it is experimental, so please join the discussion in the mailing list.
Secondly, the Napster_subnets.boulder file has changed significantly from that provided with FlowScan-1.005. If you have FlowScan configured to measure Napster traffic, replace your old Napster_subnets.boulder with the one from the newer distribution:
$ cp cf/Napster_subnets.boulder $PREFIX/bin/Napster_subnets.boulder
Thirdly, if you want to build the total.rrd file from your existing per-host RRD files, first back up your RRD files:
$ cd $prefix/graphs
$ tar cf saved_rrd_files.tar *.rrd
then do this:
$ cd $prefix/graphs
$ ../bin/add_txrx total.rrd [1-9]*.*.*.*_*.rrd
While it is not required, I highly recommend installing RRGrapher
if you want to produce other graphs. It is referenced below in Custom Graphs.
You can see which flow-export versions your Cisco's IOS supports using its on-line help:
ip flow-export version ?
FlowScan does not require exotic hardware. However, if you want to process a fair amount of traffic (e.g. at ~OC-3 rates) you'll want a fast machine.
I've run FlowScan on a SPARC Ultra-30 w/256MB running Solaris 2.6, a Dell Precision 610 (dual Pentium III, 2x450MHz) w/128MB running Debian Linux 2.1, and most recently a dual PIII Dell server, 2x600MHz, w/256MB running Debian Linux 2.2r2. The Intel machines are definitely preferable in the sense that flowscan processes flows in about 40% of the time that it took the SPARC. (The main flowscan script itself is currently single-threaded.)
In an early performance test of mine, using 24 hours of flows from our peering router here at UW-Madison, here's the comparison of their average time to process 5 minutes of flows:
SPARC - 284 sec
Intel - 111 sec
Note that it is important that flowscan not take longer to process the flows than it takes your network's activity and exporting Cisco routers to produce them. So, you want to keep the time to process 5 minutes of flows under 300 seconds on average.
My recent testing has indicated that 600-850MHz PIII machines can usually
process 3000-4000 flows per second, if flowscan
doesn't have to compete with too many other processes.
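For a rough sanity check: at 3,000 flows per second, the 429,281-flow sample file shown in the flowscan output below would take about 143 seconds to process, comfortably under the 300-second budget.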
Early in this project (c. 1999), we were usually collecting about 150,000-300,000 flows from our peering router every 5 minutes. Recently, our 5-minute flow files average ~15 to 20 MB in size.
During a recent inbound Denial-of-Service attack consisting of 40-byte TCP SYN packets with random source addresses and port numbers, I've seen a single ``5-minute'' flow file greater than 500MB! Even on our fast machine, that single file took hours to process.
Surely YMMV; currently a 35GB file-system allows us to preserve gzip(1)ped flow files for about 2 weeks.
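As a back-of-envelope check: ~20 MB per 5 minutes is roughly 5.8 GB of raw flow data per day, or about 80 GB per two weeks, so fitting two weeks into 35 GB implies gzip is compressing these files somewhat better than 2:1.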
Beware of the network bandwidth consumed by flow export from your Cisco(s) to your collector machine if you have enabled ip route-cache flow
on very many fast interfaces. With lots of exported flow data (e.g. 15-20
MB of raw flow file data every 5 minutes) and only a 10 Mb/s ethernet NIC,
I found that the host was dropping some of the incoming UDP packets, even
though the rate of incoming flows was less than 2 Mb/s. This was evidenced
by a constantly-increasing number of udpInOverflows
in the
netstat -s
output under Solaris. I addressed this by reconfiguring my hosts with a 100
Mb/s fast ethernet NIC or 155 Mb/s OC-3 ATM LANE interface and have not
seen that problem since. Of course, one should assure that the requisite
bandwidth is available along the full path between the exporting
Cisco(s)
and the collecting host.
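For example, under Solaris you can check that counter like this (the counter name and output format differ on other operating systems):
$ netstat -s | grep udpInOverflows
Run it a few minutes apart; a growing value indicates that flow export packets are being dropped.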
The arts++ package is available at:
ftp://ftp.caida.org/pub/arts++/
As of arts++-1-1-a5, the arts++ build appears to require GNU make 3.79 because its Makefiles use glob for header dependencies, e.g. ``*.hh''. From my cursory look at the GNU make ChangeLog, perhaps any version >= 3.78.90 will suffice. Also there may be trouble if you don't have flex headers installed in your ``system'' include directory, such as ``/usr/include'', even though ``configure.in'' appears to be trying to handle this situation. Since mine were in the ``local'' include directory, I hand-tweaked the classes/src/Makefile's ``.cc.o'' default rule to include that directory as well.
FlowScan requires a patched cflowd. The patches are available at:
http://net.doit.wisc.edu/~plonka/cflowd/?M=D
Obtain the patch or patches which apply to the version of cflowd that you intend to run, and apply them before building cflowd, below.
cflowd itself is available at:
http://www.caida.org/tools/measurement/cflowd/
ftp://ftp.caida.org/pub/cflowd/
In my experience with building cflowd, you're the most likely to have success in a GNU development environment such as that provided with GNU/Linux or FreeBSD.
I have not had problems building the patched cflowd-2-1-a9 or cflowd-2-1-a6 under Debian Linux 2.2.
I've also managed to build the patched cflowd-2-1-a6 with gcc-2.95.2 and binutils-2.9.1 on a sparc-sun-solaris2.6 machine with GNU make 3.79 and flex-2.5.4.
As of cflowd-2-1-a6, beware that the build may pause for minutes while as(1) uses lots of CPU and memory to build ``CflowdCisco.o''. This is apparently `normal'. Also, the build appears to be subtly reliant on GNU ld(1), which is available in the GNU ``binutils'' package. (I was unable to build cflowd-2-1-a6 with the sparc-sun-solaris2.6 ``/usr/ccs/bin/ld'' although earlier cflowd releases built fine with it.)
Perl is available at:
http://www.cpan.org/
and:
http://www.perl.com/
I've tested with perl 5.004, 5.005, and 5.6.0. If you'd like to upgrade to perl 5.6.0 you can install it thusly:
# perl -MCPAN -e shell
cpan> install G/GS/GSAR/perl-5.6.0.tar.gz
However, I suggest you don't install it in the same place as your existing perl.
ksh is used as the SHELL in the Makefile for the graphs. pdksh works fine too. If for some reason you don't already have ksh, check out:
http://www.kornshell.com/
or:
http://www.math.mun.ca/~michael/pdksh/
If you're using GNU/Linux, pdksh is available as an optional binary package for various distributions.
RRDtool is available at:
http://ee-staff.ethz.ch/~oetiker/webtools/rrdtool/
I recommend that you install rrdtool from source, even if it is available as an optional binary package for your operating system distribution. This is because FlowScan expects that you've built and installed RRDTOOL something like this:
$ ./configure --enable-shared
$ make install site-perl-install
That last bit is important, since it makes the rrdtool
perl modules available to all perl scripts.
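To confirm that the perl modules are visible to the perl interpreter FlowScan will use, a quick sanity check:
$ perl -MRRDs -e 'print "RRDs loaded\n"'
If this prints an error, the site-perl-install step did not put RRDs where your perl can find it.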
The rrdtool perl modules are supplied with rrdtool itself. (See above.)
The Boulder modules are also required. You can install them using the CPAN shell like this:
# perl -MCPAN -e shell
cpan> install Boulder::Stream
If you want to fetch it manually you can probably find it at:
http://search.cpan.org/search?dist=Boulder
I've tested with the modules supplied in the Boulder-1.18 distribution and also those in the old ``boulder.tar.gz'' distribution.
The ConfigReader module is also required; install it like this:
# perl -MCPAN -e shell
cpan> install ConfigReader::DirectiveStyle
If you want to fetch it manually you can probably find it at:
http://search.cpan.org/search?dist=ConfigReader
I'm using ConfigReader-0.5.
HTML::Table is required if you want the ``Top Talkers'' reports; install it like this:
# perl -MCPAN -e shell
cpan> install HTML::Table
If you want to fetch it manually you can probably find it at:
http://search.cpan.org/search?dist=HTML-Table
Net::Patricia is also required. You can try to install it using the CPAN shell like this:
# perl -MCPAN -e shell
cpan> install Net::Patricia
If Net::Patricia
is not found on CPAN, you can obtain it here:
http://net.doit.wisc.edu/~plonka/Net-Patricia/
The Cflow perl module is available at:
http://net.doit.wisc.edu/~plonka/Cflow/
You'll need Cflow-1.024 or greater.
FlowScan itself is available at:
http://net.doit.wisc.edu/~plonka/FlowScan/
I suggest that the FlowScan --prefix
directory be owned by an appropriate user and group, and that the
permissions allow write by other members of the group. Also, turn on the
set-group-id bit on the directory so that newly created files (such as the
flow files and log file) will be owned by that group as well, e.g.:
user$ chmod g+ws $PREFIX
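For example, assuming a hypothetical group named flows (command names and syntax vary by OS):
root# groupadd flows
root# chown username:flows /var/local/flows
root# chmod g+ws /var/local/flows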
FlowScan expects your system's 80/tcp service to be called http. Try running this command:
$ perl -le "print scalar(getservbyport(80, 'tcp'))"
You can continue with the next step if this command prints http. However, if it prints some other value, such as www, then I suggest you modify your /etc/services file so that the line containing 80/tcp looks something like this:
http 80/tcp www www-http #World Wide Web HTTP
Be sure to leave the old name such as www
as an ``alias'', like I've shown here. This will reduce the risk of
breaking existing applications which may refer to the service by that name.
If you decide not to modify the service name in this way, FlowScan should
still work, but you'll be on your own when it comes to producing graphs.
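If you did modify /etc/services, re-run the earlier one-liner to confirm the change took effect; it should now print http:
$ perl -le "print scalar(getservbyport(80, 'tcp'))"
http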
On your Cisco, enable flow switching on the appropriate interfaces (discussed below) with:
ip route-cache flow
Also, I suggest that you export from your Cisco like this:
ip flow-export version 5 peer-as
ip flow-export destination 10.0.0.1 2055
Of course the IP address and port are determined by your cflowd.conf. To help ensure that flows are exported in a timely fashion, I suggest you also do this if your IOS version supports it:
ip flow-cache timeout active 1
Some IOS versions, e.g. 12.0(9), use this syntax instead:
ip flow-cache active-timeout 1
unless you've specified something such as downward-compatible-config 11.2.
Lastly, in complicated environments, choosing which particular interfaces
should have ip route-cache flow
enabled is somewhat difficult. For FlowScan, one usually wants it enabled
for any interface that is an ingress point for traffic that is from inside
to outside or vice-versa. You probably don't want flow-switching enabled
for interfaces that carry policy-routed traffic, such as that being
redirected transparently to a web cache. Otherwise, FlowScan could count
the same traffic twice because of multiple flows being reported for what
was essentially the same traffic making multiple passes through a border
router. E.g. user-to-webcache, webcache-to-outside world (on behalf of that
user).
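For example, on a border router one might enable flow switching on both the outside link and the inside interface, since each is an ingress point for one direction of traffic (a hypothetical sketch; interface names and descriptions are made up):
interface Serial0/0
 description to our ISP (ingress for inbound traffic)
 ip route-cache flow
interface FastEthernet1/0
 description to campus core (ingress for outbound traffic)
 ip route-cache flow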
As for the tweaks necessary to get cflowd to play well with FlowScan, hopefully, an example is worth a thousand words.
My cflowd.conf file looks like this:
OPTIONS {
  LOGFACILITY: local6
  TCPCOLLECTPORT: 2056
  TABLESOCKFILE: /home/whomever/cflowd/etc/cflowdtable.socket
  FLOWDIR: /var/local/flows
  FLOWFILELEN: 1000000
  NUMFLOWFILES: 10
  MINLOGMISSED: 300
}
CISCOEXPORTER {
  HOST: 10.0.0.10
  ADDRESSES: { 10.42.42.10, }
  CFDATAPORT: 2055
  # COLLECT: { flows }
}
COLLECTOR {
  HOST: 127.0.0.1
  AUTH: none
}
And I invoke the patched cflowd like this:
user$ cflowd -s 300 -O 0 -m /path/to/cflowd.conf
Those options cause a flow file to be ``dropped'' every 5 minutes, skipping flows with an output interface of zero unless they are multicast flows. Once you have this working, you're ready to continue.
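One way to verify the 5-minute cycle is to watch the flow directory (a sketch, assuming the FLOWDIR from the example cflowd.conf above):
$ cd /var/local/flows
$ ls -l flows.*
A new time-stamped flows.* file should appear about every 300 seconds.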
When configuring FlowScan, don't use the same --prefix value as you might for other packages! I.e. don't use /usr/local or a similar directory in which other things are installed. This prefix should be the directory where the patched cflowd has been configured to write flow files.
A good way to avoid doing something dumb here is to not run FlowScan's
configure
nor make
as root.
user$ ./configure --help # note --with-... options
e.g.:
user$ ./configure --prefix=/var/local/flows
user$ make
user$ make -n install
user$ make install
By the way, in the above commands, all is OK if make says ``Nothing to be done for `target'''. As long as make completes without an error, all is OK.
Subsequently in this document the ``prefix'' directory will be referred to as the ``--prefix directory'' or using the environment variable $PREFIX. FlowScan does not require or use this environment variable; it's just a documentation convention so you know to use the directory which you passed with --prefix.
Next, create the OutputDir, which is where the .rrd files and graphs will reside. As the chosen FlowScan user do:
$ PREFIX=/var/local/flows
$ mkdir -p $PREFIX/graphs
Then, when you edit the .cf
files below, be sure to specify this using the OutputDir
directive.
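For instance, with the $PREFIX above, you might use the absolute form (the sample files ship with a relative path):
OutputDir /var/local/flows/graphs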
Sample configuration files are provided in the cf sub-directory of the distribution. During initial configuration you will copy and sometimes modify these sample files to match your network environment and your purposes.
FlowScan looks for its configuration files in its bin
directory - i.e. the directory in which the flowscan
perl script and FlowScan report modules are installed. I don't really like this, but that's
the way it is for now. Forgive me.
FlowScan currently uses two kinds of configuration files: directive-style .cf files and Boulder IO .boulder files.
A number of the directives have paths to directory entries as their values. One has a choice of configuring these as either relative or absolute paths. The sample configuration files ship with relative path specifications to minimize the changes a new user must make. However, in this configuration, it is imperative that flowscan be run in the --prefix directory if these relative paths are used.
If you're new to ``Boulder IO'', the examples referenced below should be sufficient. Remember that lines containing just = are record separators.
For complete information on this format, do:
$ perldoc Boulder # or "perldoc boulder" if that fails
$ cp cf/flowscan.cf $PREFIX/bin
$ chmod u+w $PREFIX/bin/flowscan.cf
$ # edit $PREFIX/bin/flowscan.cf
The ReportClasses directive in flowscan.cf selects between the CampusIO and SubNetIO reports. These two reports are mutually exclusive - SubNetIO does everything that CampusIO does, and more.
Initially, in flowscan.cf I strongly suggest you configure:
ReportClasses CampusIO
rather than:
ReportClasses SubNetIO
The CampusIO
report class is simpler than SubNetIO
, requires less configuration, and is less CPU/processing intensive. Once
you have the
CampusIO
stuff working, you can always go back and configure
flowscan
to use SubNetIO
instead.
There is POD documentation provided with the CampusIO
and
SubNetIO
reports. Please use that as the definitive reference on configuration
options for those reports, e.g.:
$ cd bin
$ perldoc CampusIO
The most important thing to consider configuring in CampusIO.cf is the method by which CampusIO should identify outbound flows. In order of preference, you should define NextHops, or OutputIfIndexes, or neither. Beware that if you define neither, CampusIO will resort to using the flow destination address to determine whether or not the flow is outbound. This can be troublesome if you do not accurately define your local networks (below), since flows forwarded to any non-local addresses will be considered outbound. If possible, it's best to define the list of NextHops to which you know your outbound traffic is forwarded.
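For instance, a hypothetical CampusIO.cf entry (the addresses are made up; see perldoc CampusIO and the sample cf/CampusIO.cf for the authoritative syntax):
NextHops 10.0.0.1, 10.0.0.2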
For most purposes, the default values for the rest of the CampusIO directives should suffice. For advanced users that export from multiple Ciscos to the same cflowd/FlowScan machine, it is also very important to configure LocalNextHops.
Next, create the local_nets.boulder file, which is referenced by the LocalSubnetFiles directive.
The local_nets.boulder file must contain a list of the networks or subnets within your organization. It is imperative that this file is maintained accurately since flowscan will use this to determine whether a given flow represents inbound traffic.
You should probably specify the networks/subnets in as terse a way as possible. That is, if you have two adjacent subnets that can be coalesced into one specification, do so. (This is different than the similarly formatted our_subnets.boulder file mentioned below.)
The format of an entry is:
SUBNET=10.0.0.0/8 [TAG=value] [...]
Technically, SUBNET
is the only tag required in each record. You may find it useful to add
other tags such as DESCRIPTION
for documentation purposes. Entries are separated by a line containing a single =.
FlowScan identifies outbound flows based on the list of nexthop addresses that you'll set up below.
Unless you want CampusIO to attempt to identify Napster traffic, be sure to comment out all Napster-related options in CampusIO.cf.
Copy the template to the bin directory from which you will be running flowscan
. The supplied content seems to work well as of this writing (Mar 10,
2000). No warranties. Please let me know if you have updates regarding
Napster IP address usage, protocol, and/or port usage.
The file Napster_subnets.boulder should contain a list of the networks/subnets in use by Napster, i.e. napster.com.
As of this writing, more info on Napster can be found at:
http://napster.cjb.net/
http://opennap.sourceforge.net/napster.txt
http://david.weekly.org/code/napster-proxy.php3
This file is used by the SubNetIO report class, and therefore is only necessary if you have defined ReportClasses SubNetIO rather than ReportClasses CampusIO.
The file our_subnets.boulder should contain a list of the subnets on which you'd like to gather I/O statistics.
You should format this file like the aforementioned
local_nets.boulder file. However, the SUBNET
tags and values in this file should be listed exactly as you use them in
your network: one record for each subnet. So, if you have two subnets, with
different purposes, they should have separate entries even if they are
numerically adjacent. This will enable you to report on each of those user
populations independently. For instance:
SUBNET=10.0.1.0/24
DESCRIPTION=power user subnet
=
SUBNET=10.0.2.0/24
DESCRIPTION=luser subnet
Optionally, create a saved sub-directory in the directory where your raw flow files are written. That directory is matched by the FlowFileGlob directive in flowscan.cf and is usually the same directory that is specified using the FLOWDIR directive in your cflowd.conf.
If you do this, flowscan will move each flow file to that saved sub-directory after processing it. (Otherwise it would simply remove them.) e.g.:
$ mkdir $PREFIX/saved
$ touch $PREFIX/saved/.gzip_lock
The .gzip_lock file created by this command is used as a lock file to ensure that only one cron job at a time gzips the saved flow files.
Be sure to set up a crontab entry as is mentioned below in Final Setup. I.e. don't complain to the author if you're saving flows and your file-system fills up ;^).
Once cflowd is running with the -s 300 option, and it has written at least one time-stamped flow file (i.e. other than flows.current), try this:
$ cd /dir/containing/your/time-stamped/raw/flow/files
$ flowscan
The output should appear as something like this:
Loading "bin/Napster_subnets.boulder" ... Loading "bin/local_nets.boulder" ... 2000/03/20 17:01:04 working on file flows.20000320_16:57:22... 2000/03/20 17:07:38 flowscan-1.013 CampusIO: Cflow::find took 394 wallclock secs (350.03 usr + 0.52 sys = 350.55 CPU) for 23610455 flow file bytes, flow hit ratio: 254413/429281 2000/03/20 17:07:41 flowscan-1.013 CampusIO: report took 3 wallclock secs ( 0.44 usr + 0.04 sys = 0.48 CPU) sleep 300...
At this point, the RRD files have been created and updated as the flow files are processed. If not, you should use the diagnostic warning and error messages or the perl debugger (perl -d flowscan) to determine what is wrong.
Look at the above output carefully. It is imperative that the number of seconds that Cflow::find took does not usually approach or exceed 300. If, as in the example above, your log messages indicate that it took more than 300 seconds, FlowScan will not be able to keep up with the flows being collected on this machine (if the given flow file is representative). If the total of usr + sys CPU seconds is more than 300, then this machine is not even capable of running FlowScan fast enough, and you'll need to run it on a faster machine (or tweak the code, rewrite in C, or mess with process priorities using nice(1), etc.)
If the wallclock time far exceeds the CPU time, it may be that the flowscan process is not being scheduled to run often enough because of context switching or because of its competing for CPU with too many other processes.
On a 2 processor Intel PIII, to keep flowscan from having to compete with other processes for CPU, I have recently had good luck with setting the flowscan process' nice(1) value to -20.
Furthermore, I applied this experimental patch to the Linux 2.2.18pre21 kernel:
http://isunix.it.ilstu.edu/~thockin/pset/
This patch enables users to determine which processor or set of processors a process may run on. Once applied, you can reserve the 2nd processor solely for use by flowscan:
root# mpadmin -r 1
Then launch flowscan
on processor number 1:
root# /usr/bin/nice --20 /usr/bin/runon 1 /usr/bin/su - username -c '/usr/bin/nohup /var/local/flows/bin/flowscan -v >> /var/local/flows/flowscan.log 2>&1 </dev/null &'
This configuration has yielded the best ratio of CPU to real seconds that I have seen - nearly 1 to 1.
Once flowscan is working correctly, you can set it (and cflowd) to start up at system boot time. Sample rc scripts for Solaris and Linux are supplied in the rc sub-directory of this distribution. You may have to edit these scripts
depending on your ps(1)
flavor and where various commands have
been installed on your system.
Also, if you're saving your flow files, you should set up crontab entries
to handle the ``old'' flows. I use one crontab entry to
gzip(1)
recently processed files, and another to delete the files older than a
given number of hours. The ``right'' number of hours is a function of your
file-system size and the rate of flows being exported/collected. See the example/crontab file.
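Here is a minimal sketch of such entries, assuming GNU find(1) and a prefix of /var/local/flows; the distributed example/crontab is the authoritative reference:
# hourly: gzip saved flow files that are not yet compressed
0 * * * * find /var/local/flows/saved -name 'flows.*' ! -name '*.gz' -mmin +60 -exec gzip {} \;
# daily: remove compressed flow files older than 14 days
15 0 * * * find /var/local/flows/saved -name '*.gz' -mtime +14 -exec rm {} \;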
$ cp graphs.mf $PREFIX/graphs/Makefile
$ cd $PREFIX/graphs
$ make
This should produce the ``Campus I/O by IP Protocol'' and ``Well Known
Services'' graphs in PNG files. GIF files may be produced using the
filetype
option mentioned below.
If this command fails to produce those graphs, it is likely that some of the requisite .rrd files are missing, i.e. they have not yet been created by FlowScan, such as http_dst.rrd. If this is the case, it is probably because you skipped the configuration of /etc/services in Configuring Your Host. Stop flowscan, rename your www_*.rrd files to http_*.rrd, modify /etc/services, and restart flowscan.
Alternatively, you may copy and customize the graphs.mf Makefile to remove references to the missing or misnamed .rrd
files for those targets. Also, you could produce your graphs using a
graphing tool such as RRGrapher mentioned below in Custom Graphs.
Note that the graphs.mf template Makefile has options to specify such things as the range of time, graph height and width, and output file type. Usage:
make -f graphs.mf [filetype=<png|gif>] [width=x] [height=y] [ioheight=y+n] [hours=h] [tag=_tagval] [events=public_events.txt] [organization='Foobar U - Springfield Campus']
as in:
$ make -f graphs.mf filetype=gif height=400 hours=24 io_services_bits.gif
For instance, one could create a plain text file in the graphs directory called events.txt containing these lines:
2001/02/10 1538 added support for events to FlowScan graphs
2001/02/12 1601 allowed the events file to be named on make command line
Then to generate the graphs with those events included one might run:
$ make -f graphs.mf events=events.txt
This feature was implemented using a new script called event2vrule
that is supplied with FlowScan. This script is meant to be used as a
``wrapper'' for running rrdtool(1),
similarly to how one might
run nohup(1).
E.g.:
$ event2vrule -h 48 events.txt rrdtool graph -s -48h ...
That command will cause these VRULE
arguments to be passed to rrdtool, at the end of the argument list:
COMMENT:\n
VRULE:981841080#ff0000:2001/02/10 1538 added support for events to FlowScan graphs
COMMENT:\n
VRULE:982015260#ff0000:2001/02/12 1601 allowed the events file to be named on make command line
COMMENT:\n
RRGrapher is available at:
http://net.doit.wisc.edu/~plonka/RRGrapher/
For other custom graphs, if you use the supplied graphs.mf Makefile, you can use the examples therein to see how to build ``Campus I/O by Network'' and ``AS to AS'' graphs. The examples use UW-Madison network numbers, names of networks with which we peer, and such, so it will be non-trivial for you to customize them, but at least there's an example.
Currently, RRD files for the configured ASPairs contain a : in the file name. This is apparently a no-no with RRDTOOL since, although it allows you to create files with these names, it doesn't let you produce graphs using them because of how the API uses : to separate arguments.
For the time being, if you want to graph AS information, you must manually create symbolic links in your graphs sub-dir. i.e.
$ cd graphs
$ ln -s 0:42.rrd Us2Them.rrd
$ ln -s 42:0.rrd Them2Us.rrd
A reminder for me to fix this is in the TODO list.
Other RRDTOOL front-ends that produce graphs should be able to work with
FlowScan-generated .rrd
files, so there's hope.
Copyright (c) 2000-2001 Dave Plonka <plonka@doit.wisc.edu>. All rights reserved.
This document may be reproduced and distributed in its entirety (including this authorship, copyright, and permission notice), provided that no charge is made for the document itself.