Scalable Simulation Framework
Internet Measurement Infrastructure in SSFNet

  • Distributed monitoring requirements
  • SSFNet 1.2 network monitoring infrastructure
  • A simple tutorial example: queue monitoring
  • Flow monitoring with the SSF.OS.NetFlow package
  • tcpdump and TCP connection state monitoring

SSFNet network models are open source software, distributed under the GNU General Public License.


Distributed network monitoring in SSFNet

The objectives of the SSFNet Measurement Infrastructure mirror and extend the objectives of the next generation Internet Measurement Infrastructures for the live global Internet:

  • Automate configuration of the measurement Monitors at many hosts and routers.
  • Collect streaming and sampled network data.
  • Correlate spatio-temporal network data.
  • Visualize global network activity.
However, unlike in the live Internet, in an SSFNet Internet model everything is accessible to measurement:
  • end-to-end application data
  • internal state of protocol sessions
  • IP packet dumps on network interfaces and links
  • flows in routers
  • routes and route updates
  • queue lengths in router interfaces
  • and more...
The traditional approach based on "one monitoring probe -- one (or more) files" does not scale to support distributed measurements involving hundreds or thousands of hosts and routers.

SSFNet release 1.2, therefore, introduced a novel, scalable measurement infrastructure, supporting streaming data export from all network monitors, and a uniform pattern for the DML-configurable placement and configuration of monitoring probes.

SSFNet 1.2: Scalable network monitoring infrastructure

SSFNet 1.2 supports an efficient multi-point network monitoring infrastructure for the collection of streaming and sampled data from many Monitors.

The requirements are:

  • Flexibility: instantiate and configure network Monitors from the DML network configuration database,
  • Fast output: use bytestreams of standardized records, use source multiplexing,
  • Fast selective record retrieval: demultiplex, support configurable record filters.

The package SSF.Util.Streams, together with the SSFNet class SSF.OS.ProbeSession, provides such facilities and in addition makes it easier to manage record streams in a multi-timeline context (such as with parallel execution).

Simple Streaming Data Protocol

  • A record stream is a sequence of bytes, consisting of a preamble followed by a body.

  • The preamble consists of two Strings in UTF format and Java byte order.

    • If the first String is "record" then the second String names the stream, and the body consists of zero or more records.

    • If the first String is anything other than "record" the subsequent String and stream body are undefined.

  • Each record consists of the following fields, written in Java byte order and encoded according to Java's standard primitive data type serialization rules:

    Bytes    Java type  Interpretation
    0..3     int        Service code (record data type)
    4..7     int        Source code (writer identification)
    8..15    double     Timestamp (seconds since epoch)
    16..19   int        Record length (bytes to follow)
    20..end  bytes      User-defined data

  • The integer codes used to identify record types and origins are generated uniquely for each stream, and cannot be relied upon to remain the same, even across subsequent runs of the same code. Their mapping to strings is performed inline within the stream, using special dynamic dictionary-building record types embedded in the stream. These codes are portably obtained using the getRecordTypeCode(String) and getRecordSourceCode(String) methods, and resolved using the getRecordTypeString(int) and getRecordSourceString(int) methods, all specified in interface StreamInterface.
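The record layout above can be decoded with the standard java.io stream classes, since the preamble and header fields follow Java's primitive serialization rules. The following minimal reader is a hypothetical sketch (class RecordDump is not part of SSFNet; it only illustrates the wire format, and it skips each record's user-defined payload):

```java
import java.io.*;
import java.util.*;

class RecordDump {
    /** Parse a record stream held in memory, returning one
     *  "type source timestamp length" line per record, or null if the
     *  preamble does not mark a record stream (body undefined then). */
    public static List<String> dump(byte[] data) {
        try {
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(data));
            if (!"record".equals(in.readUTF())) return null; // preamble, String 1
            String streamName = in.readUTF();                // preamble, String 2
            List<String> records = new ArrayList<String>();
            while (true) {
                int type;
                try { type = in.readInt(); }        // bytes 0..3: service code
                catch (EOFException eof) { break; } // clean end of stream
                int    source = in.readInt();       // bytes 4..7: source code
                double time   = in.readDouble();    // bytes 8..15: timestamp
                int    length = in.readInt();       // bytes 16..19: record length
                in.skipBytes(length);               // bytes 20..end: user data
                records.add(type + " " + source + " " + time + " " + length);
            }
            return records;
        } catch (IOException e) {
            return null;  // truncated or malformed stream
        }
    }
}
```

A real player would of course decode the payload bytes and resolve the integer codes through the in-stream dictionary records, as BasicPlayer does.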


    The Streams package contains three classes and one interface for very basic record-oriented streaming data export from SSFNet simulations. Together they implement the simple streaming data protocol described above.

    • class streamException extends Exception

      Simple exception class that can be thrown by stream setup methods.

    • class BasicRecorder implements StreamInterface

      BasicRecorder demonstrates how to build a simple implementation of a StreamInterface for portably emitting a stream of records.

    • class BasicPlayer implements StreamInterface

      BasicPlayer demonstrates how to build a simple implementation of a StreamInterface for portably processing a stream of records.

    • interface StreamInterface

      Interface for sending and/or receiving a stream of records, each indexed by a small standard header. This header identifies the type of each record, the writer of the record, the time at which the record was generated, and the number of bytes to follow in a user-defined format. The type and writer are given as integer codes, which correspond to arbitrary-length strings sent in-stream to construct a pair of queryable dynamic data dictionaries. See the description of the simple streaming data protocol above for more details. The interface specifies the following operations:

      Connect the stream to a data sink or source at the given URL, throwing a streamException if there are any problems:

          public void connectWrite(String url) throws streamException;
          public void connectRead(String url) throws streamException;

      Return true if this stream has been successfully connected to a data source or sink, and not disconnected:

          public boolean isConnected();

      Signal that no more records are to be received (if reading) or sent (if writing):

          public void disconnect();

      Process a single incoming record in a data stream connected for reading:

          public int receive(int type_code,
                     int source_code,
                     double timestamp,
                     byte[] bytes,
                     int offset,
                     int length);

      Emit a single record on a data stream connected for writing, returning zero if the record is successfully emitted, or a nonzero value if there is an error or if a filter has suppressed the record from being written. The short form (without payload) may be used to test whether a record will be emitted or suppressed, to save the overhead of actually preparing it for transmission:

          public int send(int type_code,
                  int source_code,
                  double timestamp);
          public int send(int type_code,
                  int source_code,
                  double timestamp,
                  byte[] bytes,
                  int offset,
                  int length);  // long form

      Map a user-defined record type string to an integer code, or vice-versa:

          public String getRecordTypeString(int code);
          public int getRecordTypeCode(String name);

      Map a user-defined sender ID string to an integer code, or vice-versa:

          public String getRecordSourceString(int code);
          public int getRecordSourceCode(String name);

    To write a stream of records, the user typically constructs a BasicRecorder, connects it to a data sink, and calls send() repeatedly to emit records before calling disconnect(). Note that the sender uses the short form of send() with no payload to test stream status before committing to the overhead of preparing the payload bytes: there is no sense in preparing bytes that will be dropped because the stream is suppressing output for some reason.

          StreamInterface myRecorder = new BasicRecorder("this names my stream");
          int tid = myRecorder.getRecordTypeCode("my record type");
          int sid = myRecorder.getRecordSourceCode("my writer id");
          double now = .. ; // get timestamp from simulator or clock
          if (0 == myRecorder.send(tid,sid,now)) {  // test for suppression
            byte[] mybuffer = .. ; // prepare the bytes to be emitted
            myRecorder.send(tid,sid,now, mybuffer,0,mybuffer.length);
          }
          // .. do more sends until finished ..

    To read the records later, the user typically constructs a BasicPlayer and connects it to a data source; the BasicPlayer calls back receive() each time a record arrives:

          /** For this example only: use an anonymous inner subclass of
            * BasicPlayer to demonstrate specialized record processing.  We
            * override the default behavior for one type of record, and defer
            * to the base class default for all other types of records. */
          StreamInterface myPlayer = new BasicPlayer("this names my stream") {
            public int receive(int tid, int sid, double time,
                               byte[] buf, int offset, int length) {
              if (tid == getRecordTypeCode("my record type")) {
                // .. process this record content appropriately
                return 0;
              }
              return super.receive(tid,sid,time,buf,offset,length);
            }
          };
          myPlayer.connectRead("file:/tmp/stream.dat"); // calls back receive()..


    Finally, one new SSFNet class makes it easier to manage record streams in a multi-timeline context. Configure an instance of the ProbeSession protocol under the standard name "probe" in each host or router where probing is to be enabled:

        ProtocolGraph [
          # .. traditional protocols here
          ProtocolSession [
            name probe use SSF.OS.ProbeSession
            file "/tmp/mystream.dat"
            stream "My Stream"
          ]
        ]

    Then, from any protocol or protocol-related code, access the "probe" protocol and call getRecorder to get a handle on an implementation of StreamInterface suitable for sending records:

     ProbeSession theProbe = (ProbeSession)inGraph("probe");
     StreamInterface theStream = theProbe.getRecorder(); // preconnected
     int myHostCode = theProbe.getHostCode();  // uses the NHI address
     int myDatatypeCode = theStream.getRecordTypeCode("my record type");
     double now = .. ;     // current simulation time
     byte[] myBytes = .. ; // prepare the record payload
     theStream.send(myDatatypeCode, myHostCode, now,
                    myBytes, 0, myBytes.length);
    The streams accessed in this way are managed by the system-wide collection of ProbeSessions; they are automatically disconnected at the end of simulation time. The stream IDs and file names are not used directly; the ProbeSessions attach a dot (".") and the integer ID of the local timeline. In the following example, in a four-timeline model, four streams are actually created behind the scenes:

      ProtocolSession [
        name probe use SSF.OS.ProbeSession
        file "/tmp/mystream.dat"
        stream "My Stream"
        # Streams created: "My Stream.0" "My Stream.1" "My Stream.2" "My Stream.3"
        # Files created: /tmp/mystream.dat.[0-3]
      ]

SSFNet 1.2 monitoring: A simple queue monitor example

The directory ssfnet/examples/queueMonitorDemo in the SSFNet 1.2 distribution contains an example of the use of the new package SSF.Util.Streams. The demo shows how to configure the queue monitoring probes in the DML network configuration; it should be examined together with two prototype classes:

class SSF.Net.droptailQueueMonitor_2 illustrates the programming idioms used for writing network measurement classes that employ the services of SSF.Util.Streams,

class SSF.Net.droptailRecordPlayer_2 illustrates the programming idioms used for writing a decoder of binary record streams created by SSF.Net.droptailQueueMonitor_2.

These two classes are provided as a simple example; for a more complex application of SSF.Util.Streams that includes filtering of multiplexed record streams and other utilities, see the package SSF.OS.NetFlow.

Consider the network configuration file ssfnet/examples/queueMonitorDemo/broom2.dml, where we place two queue monitors on the output queues of the router in the center of the diagram:

  client                       router
         1Mbs 1 ms     Qmon_1_________                   server
  1(0)--------------------(1)|        |
                             |  R 20  |(0)---------------30(0)
  2(0)--------------------(2)|________|     10 Mbs 1 ms
        1Mbs 10 ms      Qmon_2
and a snippet of the DML configuration for the router:
  router [
    id 20
    interface [id 0 bitrate 10000000 latency 0.0]
    interface [id 1 bitrate 1000000  latency 0.0
      queue [
        use SSF.Net.droptailQueue
      ]
      buffer 10000
      monitor [
        use SSF.Net.droptailQueueMonitor_1
        probe_interval 0.01
        debug false
      ]
    ]
    interface [id 2 bitrate 1000000 latency 0.0
      queue [
        use SSF.Net.droptailQueue
      ]
      buffer 4000
      monitor [
        use SSF.Net.droptailQueueMonitor_1
        probe_interval 1.0
        debug true
      ]
    ]
    graph [
      ProtocolSession [name ip use SSF.OS.IP]
      ProtocolSession [name probe use SSF.OS.ProbeSession
        file "rtr_queuedata"     # output file prefix
        stream rtrstream         # stream name
      ]
    ]
  ]
Consider the key DML attributes:
  • Every host or router that supports measurement probes must contain the pseudo-protocol SSF.OS.ProbeSession in its protocol graph. The ProbeSession transparently provides access to a record output stream that is shared by all Monitors that request it. The record stream is written to the file named in the file attribute (for parallel execution there is one file per timeline, with a numerical suffix identifying the timeline). The record stream also has a name, given in the stream attribute; this lets a Player written to decode the binary records identify the stream.
  • A monitor is configured within the monitor attribute; its mandatory attribute use names the monitoring class that is instantiated during the model configuration phase of the simulation. The example above uses class SSF.Net.droptailQueueMonitor_1; for tutorial purposes SSFNet 1.2 also provides class SSF.Net.droptailQueueMonitor_2, which could be named here instead.
Next, let us see how all of this works together; it is recommended to examine the source code for details.
  1. Once the router specified above has been instantiated in the network configuration phase of simulation, it begins to configure its network interfaces, each specified by the interface attribute.
  2. An interface (class SSF.Net.NIC) finds the attribute queue that names the class implementing Java interface SSF.Net.packetQueue, and then finds the optional attribute monitor that names the class implementing Java interface SSF.Net.PacketQueueMonitor, and may have additional attributes specific for the named Monitor. These classes are instantiated and configured.
  3. Once the whole network has been configured, the simulation enters the initialization phase. Every instance of a monitoring class such as SSF.Net.droptailQueueMonitor_1 installed in the router then tries to locate an instance of ProbeSession in the router's protocol graph, and to obtain from it a handle to the shared record output stream. If successful, it starts the record-writing Timer that periodically writes queue records to the output stream.
  4. When the simulation finishes, one can run the class SSF.Net.droptailRecordPlayer_2 to convert binary queue records to ASCII for further analysis. This class is provided as an example only; you should write a more elaborate demultiplexing Player if you want to analyze data from hundreds of queues on many routers.
Note the pattern: configurable queue monitoring is supported by two cooperating Java interfaces in the package SSF.Net:
  1. packetQueue
  2. PacketQueueMonitor
and any user-provided classes implementing a queue or a monitor that follow this pattern can be interchangeably configured in DML.
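The cooperation between a queue and its monitor can be sketched as follows. These simplified stand-in interfaces are illustrative only; the real SSF.Net.packetQueue and SSF.Net.PacketQueueMonitor declare SSFNet-specific methods, and the class names below are hypothetical:

```java
// Illustrative stand-ins for the two cooperating interfaces; any queue
// implementation reports to any configured monitor implementation.
interface PacketQueue {
    void enqueue(int bytes);                    // offer a packet of the given size
}

interface QueueMonitor {
    void sample(double time, int queuedBytes);  // callback from the queue
}

// A drop-tail queue that reports its occupancy to an optional monitor,
// mirroring how a DML-configured monitor attaches to a queue instance.
class DropTailQueue implements PacketQueue {
    private final int capacity;                 // cf. the DML "buffer" attribute
    private final QueueMonitor monitor;
    private int used = 0;

    DropTailQueue(int capacity, QueueMonitor monitor) {
        this.capacity = capacity;
        this.monitor = monitor;                 // null if no monitor is configured
    }

    public void enqueue(int bytes) {
        if (used + bytes <= capacity) {
            used += bytes;                      // accept the packet
        }                                       // else: drop-tail, packet discarded
        if (monitor != null) monitor.sample(now(), used);
    }

    private double now() { return 0.0; }        // stand-in for the simulation clock
}
```

Because the queue only sees the QueueMonitor interface, a user-written monitor can be swapped in from DML without touching the queue class, which is exactly the point of the pattern.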

A similar pattern, formalized by the Java interface SSF.OS.IpMonitor, exists for monitoring IP packets in the IP protocol; it is used in SSF.OS.NetFlow.

SSFNet 1.2 monitoring: NetFlow

Network traffic can be monitored by observing the IP packets passing through selected locations in the network, such as through routers, network interfaces, or point-to-point links, using packet capture software or network sniffers. However, in high speed networks packet-level monitoring is often impractical, and a coarser unit of traffic measurement - a flow - has been introduced.

An IP flow is a sequence of contiguous IP packets with the same source and destination, where the time delay between packets belonging to the same flow stays below some small threshold.

The idea is that a flow summarily represents a single "transaction" between the end hosts, at a resolution coarser than the packet level but finer than the session level. Flows can be further refined by additional attributes (such as protocol number), and can be aggregated in a variety of ways (for instance, by source and/or destination network prefix) for analysis.
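This definition can be illustrated with a small sketch (all names here are hypothetical, not SSFNet classes): given a time-ordered packet trace, a new flow starts whenever the inter-packet gap for a (source, destination) pair exceeds a threshold.

```java
import java.util.HashMap;
import java.util.Map;

class FlowCounter {
    /** Count flows in a time-ordered trace. Each packet is encoded as
     *  {srcId, dstId, timestamp}; packets of the same (src, dst) pair
     *  belong to one flow while consecutive gaps stay within maxGap. */
    static int countFlows(double[][] packets, double maxGap) {
        Map<String, Double> lastSeen = new HashMap<String, Double>();
        int flows = 0;
        for (double[] p : packets) {
            String key = (int) p[0] + "->" + (int) p[1];
            Double last = lastSeen.get(key);
            if (last == null || p[2] - last > maxGap) {
                flows++;                 // first packet, or gap too large: new flow
            }
            lastSeen.put(key, p[2]);     // remember last packet time for the pair
        }
        return flows;
    }
}
```

A real NetFlow-style monitor additionally keys flows on ports and protocol number and exports per-flow byte and packet counts, but the timeout logic is the same idea.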

Cisco routers' capability of exporting the flow data in NetFlow format has been modeled in the SSFNet package SSF.OS.NetFlow. The accompanying package SSF.OS.NetFlow.Filter provides the facilities for flow filtering and analysis.

The SSF NetFlow monitors can be placed and configured on selected routers from the network DML configuration file.

See the documentation and demonstrations of use included in the SSFNet 1.2 distribution.

tcpdump and TCP monitoring support

SSFNet 1.2 includes two network monitoring facilities carried over from the earlier releases:
  • tcpdump: class SSF.OS.binaryTcpDump collects packet data on a network interface and writes them to a file; the standalone class SSF.OS.DumpPro - with many command line options - converts a dumpfile to ASCII for plotting and additional analyses. The SSFNet-generated tcpdump files have the same format as those generated by the well-known tcpdump program.
    SSFNet tcpdump can be configured among the DML attributes characterizing a network interface in a host or router, as in this configuration snippet:
            host [
              interface [
                tcpdump "filename"
                ... other interface attributes
              ]
              ... other host attributes
            ]
  • TCP instrumentation: In a host's or router's TCP protocol configuration, a user can specify multiple files for recording all details of the internal state variables of active TCP connections:
    ProtocolSession [name tcp use SSF.OS.TCP.tcpSessionMaster
       ... TCP parameter settings
      debug    %S            # if true, dump verbose TCP diagnostics to files
                                      # for session & host (see below), true/false
      # dump filename prefixes - actual filenames end with "_hostID_flowID.out"
      # for session info, and with "_hostID.out" for host info
      rttdump   %S              # rtt dumpfile prefix  (session)
      cwnddump  %S              # cwnd dumpfile prefix (session)
      rexdump   %S              # rexmit timer dumpfile prefix (session)
      eventdump %S              # dumpfile prefix for all events (session)
      con_count %S              # dumpfile prefix for number of connections (host)
      rto_count %S              # dumpfile prefix for timeout info (host)
    ]
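For instance, a concrete instrumentation of the template above might look as follows (the filename prefixes here are illustrative; each %S placeholder is replaced by an actual string):

```dml
    ProtocolSession [name tcp use SSF.OS.TCP.tcpSessionMaster
      debug true
      rttdump   "tcp_rtt"      # produces files such as tcp_rtt_hostID_flowID.out
      cwnddump  "tcp_cwnd"     # produces files such as tcp_cwnd_hostID_flowID.out
      con_count "tcp_con"      # produces files such as tcp_con_hostID.out
    ]
```

Only the dump attributes that are actually present are recorded, so a model can enable just the per-session or per-host dumps it needs.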
