A suite of simulation experiments was performed to evaluate the performance of the proposed establishment protocol. This allowed us to analyze the behavior of the protocol for different network topologies, varying network loads, and varying bandwidth requirements of the connections. We considered two network topologies: a 4x4 Mesh with 16 four-port switches and 12 hosts, and a 6x6 Torus with 36 five-port switches and 36 hosts.
For the simulations we assumed exponential distributions for both connection interarrival times (with $\lambda$ being the arrival rate) and connection durations (with $\mu$ being the service rate).
For each experiment, we fixed a network topology and increased the load on the network by repeatedly generating random connections, which arrive in the network at rate $\lambda$, attempting to establish them, and, if they are admitted into the network, tearing them down at rate $\mu$. (For simplicity, we restricted ourselves to connections with identical traffic parameters.) As our criterion for evaluating the performance, we measured the average number of currently admitted connections in the network during steady state. A related measure is the average link utilization, which shows how much the accepted connections utilize the existing links.
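The experimental loop can be summarized by the following sketch, which is only our own minimal illustration rather than the simulator actually used: try_establish and tear_down are hypothetical callbacks standing in for the establishment protocol under test, and lam and mu correspond to the arrival and service rates above.

import heapq
import random

def simulate(lam, mu, warmup, horizon, try_establish, tear_down):
    """Poisson connection arrivals (rate lam), exponential durations (rate mu);
    returns the time-averaged number of admitted connections after warmup."""
    events = [(random.expovariate(lam), "arrival", -1)]
    admitted = set()
    area, last, next_id = 0.0, warmup, 0
    while events:
        t, kind, conn = heapq.heappop(events)
        if t > horizon:
            break
        if t > warmup:                        # accumulate only during steady state
            area += len(admitted) * (t - last)
            last = t
        if kind == "arrival":
            # schedule the next arrival (exponential interarrival times)
            heapq.heappush(events, (t + random.expovariate(lam), "arrival", -1))
            conn, next_id = next_id, next_id + 1
            if try_establish(conn):           # admission test of the protocol under test
                admitted.add(conn)
                # exponential connection duration before teardown
                heapq.heappush(events, (t + random.expovariate(mu), "departure", conn))
        else:
            admitted.discard(conn)
            tear_down(conn)
    area += len(admitted) * (horizon - last)  # account for the final interval
    return area / (horizon - warmup)          # average admitted connections

With try_establish implementing either the distributed or the centralized admission test, the same loop produces both sets of curves compared below.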
As a reference for comparison, we use the centralized admission control described in Section 2. We recall that the centralized algorithm is an optimal admission control test, in that, given a set of connections, the test fails if and only if Recurrence Relation (1) is violated for at least one of the connections. Both the distributed and centralized algorithms operate on-line, that is, without any knowledge of future connection arrivals. Since we are not addressing routing in this work, we use the simplest possible routing algorithm, in our case X-Y routing. With a confidence level of 95%, the confidence intervals for all depicted results are below 2% of their absolute value. The interval at which we sample the number of connections admitted in the network is chosen sufficiently large to eliminate correlation effects.
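As a reminder, X-Y (dimension-order) routing forwards a message along the X dimension until the destination column is reached, and only then along the Y dimension. The following sketch, which assumes switches are identified by (x, y) grid coordinates, is an illustration only:

def xy_route(src, dst):
    """Return the list of switch coordinates visited under X-Y routing:
    first correct the X offset, then the Y offset (deadlock-free on a mesh)."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                 # route along the X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then along the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# e.g. xy_route((0, 0), (3, 2)) visits all switches in row 0 up to column 3,
# then moves down column 3 to row 2.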
Figure 10: Experimental Results for Varying Residual-Time Allocations and Bandwidth Requirements
Conditions 1 and 2 used to determine the residual time leave some freedom in how the blocking guards are recalculated. Formula (8) attempts to
distribute the residual time equally over the route of the
connection. This basic allocation formula can be modified to vary the
allocation of residual time, such as allocating more residual time
toward the sender (causing lower values for the blocking guards) or
toward the receiver (resulting in higher blocking guards). Two effects
are traded off when updating the value for the blocking guard.
On one hand, a large value for the blocking guard gives more room to
accept connections downstream from the switch, whereas smaller
blocking guards tend to make it more difficult to accept connections downstream. On the other hand, future connections upstream are penalized by a larger blocking guard, since the latter leads to higher values for the worst-case blocking times calculated from the guards.
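To make the allocation options concrete, the following sketch splits a connection's residual time over the hops of its route. A weight of 1.0 reproduces the equal per-hop share of Formula (8); the geometric skew used for other weights is purely our own assumption, intended only to illustrate shifting residual time toward the sender or the receiver.

def allocate_residual(residual, hops, weight=1.0):
    """Split `residual` time units over `hops` switches along the route.

    weight == 1.0: equal distribution, as in Formula (8).
    weight <  1.0: more residual time toward the sender (lower blocking guards).
    weight >  1.0: more residual time toward the receiver (higher blocking guards).
    The geometric weighting itself is illustrative, not the paper's formula.
    """
    shares = [weight ** i for i in range(hops)]
    total = sum(shares)
    return [residual * s / total for s in shares]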
Figure 10 illustrates the effect of varying the allocation of residual time on the number of accepted connections for a fixed load. In this series of experiments, we allocated the residual time according to a parameterized variant of Formula (8) and plotted the performance for varying values of the allocation parameter. The results are collected for a fixed interarrival rate and different bandwidth requirements of the connections. They indicate that, overall, the best results in terms of the number of connections accepted are obtained for a parameter value close to 1.0, which corresponds to the equal distribution approach defined in Formula (8). We will thus focus our attention on data generated using Formula (8) for the remaining analyses.
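To illustrate how the per-hop shares shift as this parameter varies, the snippet below reuses the illustrative allocate_residual sketch from above (the weight values are arbitrary); in the experiments, each such allocation would be fed to the establishment protocol and evaluated with the simulation loop sketched earlier.

# Illustrative sweep: weight 1.0 matches the equal split of Formula (8);
# smaller or larger weights skew the residual time toward the sender or
# the receiver and shift the blocking guards accordingly.
for weight in (0.25, 0.5, 1.0, 2.0, 4.0):
    shares = allocate_residual(10.0, 4, weight)
    print(f"weight={weight:4.2f}:", [round(s, 2) for s in shares])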
Figure 11: Experimental Results on a 4x4 Mesh and 6x6 Torus
Figure 11 (a) shows the experimental results for the 4x4 Mesh. The results are for worst-case message lengths of one time unit and varying interarrival times of 20, 100, 1000 and 2000 units. For example, one of these parameter combinations corresponds to the requirements of uncompressed CD audio over 640 Mb/s links. As is to be expected, the number of accepted connections increases with increasing arrival rate. It is also apparent that high-bandwidth connections show lower acceptance probabilities, owing to the fact that they generate higher blocking delays. We see that
the centralized admission control scheme performs better. Having
access to a global state of the network at all times enables the
centralized scheme to optimally allocate resources and admit a larger
number of connections.
To compare the performance of the algorithm for different network topologies, we performed a similar set of experiments on a 6x6 Torus topology, shown in Figure 11 (b). This topology has a higher host-per-link ratio and thus less contention for links among connections. This is reflected in the higher numbers for accepted connections.
From the above graphs, we see that although the number of accepted connections increases with $\lambda$, the rate of acceptance decreases. Also evident is that, for the same value of $\lambda$, lower bandwidth requirements yield a higher number of accepted connections. This is verified in the Mesh topology for a $\lambda$ value of 300 and a bandwidth requirement of 1/2000,
where the percentage of accepted connections is 66/300 =
22%. Maintaining the same arrival rate, we observe that if the
bandwidth requirement is doubled to 2/2000, the percentage of accepted
connections drops to 61/300 = 20.33%, which is not a significant
drop. If instead of doubling the bandwidth requirement, we double the
arrival rate to 600, the percentage of accepted connections drops
significantly to 86.45/600 = 14.4%. This illustrates the well-known fact that in wormhole networks, the blocking delays are substantially more sensitive to an increase in the number of connections than to an increase in the per-connection bandwidth requirements.