Keywords: traffic engineering, large deviations, many sources asymptotic, effective bandwidths, time scales, broadband networks
The rapid progress and successful penetration of broadband communications in recent years has led to important new problems in traffic modeling and engineering. Among these, call acceptance control and network dimensioning with guaranteed QoS have attracted the attention of researchers. Successful approaches are closely tied to the ability to quantify the usage of resources on the basis of traffic modeling and measurements.
For example, statistical analysis of traffic measurements [15,18,10] has shown self-similar or fractal behavior; such traffic exhibits long-range dependence, or slowly decaying autocorrelation. Although the implications of such long-range dependence are still an open issue (e.g., see [9,11] and the references therein), recent work [20,11] has shown that these implications can be of secondary importance for the buffer overflow probability when the buffer size is small, which applies to the case where real-time communication is supported. This example motivates the need for a methodology to understand the impact of the various time scales of the burstiness of real broadband traffic on the performance of the network and on its resource sharing capabilities. In particular, some basic questions for which the network engineer must provide answers are the following: How much does the cell loss probability decrease when the link capacity or buffer size increases? How does traffic shaping^{2} affect the multiplexing capability of a link and the amount of resources used by a bursty source? What time granularity of traffic measurements is sufficient to capture the information that is important for performance analysis and network dimensioning? What are the effects of the composition of the traffic mix on the multiplexing capability of a link? Traditional queueing theory, which requires elaborate traffic models, cannot answer such questions in the context of large multiservice networks; for such cases asymptotic methods are more appropriate. In this paper we answer such questions by applying and evaluating the many sources asymptotic, and the effective bandwidth definition based on this asymptotic, for real broadband traffic. This traffic consists of MPEG1 compressed video, Internet Wide Area Network (WAN) traffic, and traffic resulting from modeled voice.
Problems related to resource sharing have often been analyzed using the notion of effective bandwidth, which is a scalar that summarizes resource usage and which depends on the statistical properties and Quality of Service (QoS) requirements of a source. Effective bandwidths are usually derived by means of asymptotic analysis, which is concerned with how the buffer overflow probability decays as some quantity increases. If this quantity is the size of the buffer, we have the large buffer asymptotic [8,14]. If the buffer per source and capacity per source are kept constant, and we are interested in how the overflow probability decays as the size of the system (the broadband link and the multiplexed sources) increases, then we have the many sources asymptotic; this asymptotic regime has been investigated in [7,2,21].
Effective bandwidth definitions based on the large buffer asymptotic were found, in some cases, to be overly conservative or too optimistic [4]. This occurs because the large buffer asymptotic does not take into account the gain achieved when many independent sources are statistically multiplexed together. Hence, in general the amount of resource usage depends not only on the statistical properties and QoS requirements of a source, but also on the statistical properties of the other traffic it is multiplexed with and on the resources (capacity and buffer) of the multiplexing link. Only recently [13,6] has it been understood how to incorporate such information into the definition of the effective bandwidth. These works have shown that the effective bandwidth of a source depends on the link's operating point through two parameters, the space and time parameters, which in turn depend on the link resources and the statistical properties of the multiplexed traffic. The space and time parameters can be computed using the many sources asymptotic and, as we will demonstrate with real broadband traffic, have important applications to traffic engineering. Furthermore, since the effective bandwidth gives the amount of resources that must be reserved for a source in order to satisfy its QoS requirements, it helps simplify problems in resource management and network dimensioning.
The rest of this paper is structured as follows. In Section 2 we review basic results from the theory of effective bandwidths, as developed in [13], and the many sources asymptotic [7,2,21], and we discuss the application of this framework to traffic engineering, with emphasis on the interpretation of the space and time parameters. In Section 3 we present a detailed series of experiments that aim to evaluate the accuracy of the above framework for link capacities and buffer sizes that will appear in broadband networks, and for real broadband traffic consisting of MPEG1 compressed video and Internet WAN traces, as well as modeled voice traffic. Finally, in Section 4 we summarize the results of the paper and identify areas for future research.
In this section we summarize the key results of the many sources asymptotic and the related effective bandwidth definition, and discuss their implications for traffic engineering.
Suppose the arrival process at a broadband link is the superposition of independent sources of J types. Let N_{j} = N n_{j} be the number of sources of type j, and let n = (n_{1}, …, n_{J}) (the n_{j}s are not necessarily integers). The broadband link has a shared buffer of size B = Nb and link capacity C = Nc. The parameter N is the scaling parameter (the size of the system), and the parameters b,c are the buffer and capacity per source, respectively. We suppose that, after taking into account all economic factors (such as demand and competition), the proportions of traffic of the J types remain close to those given by the vector n, and we seek to understand the relative usage of network resources that should be attributed to each traffic type.
Let X_{j}[0,t] be the total load produced by a source of type j, feeding the above link, in the time interval [0,t]. We assume that X_{j}[0,t] has stationary increments. Then, the effective bandwidth of a source of type j is defined as [13]

 α_{j}(s,t) = (1/(st)) log E[ e^{s X_{j}[0,t]} ] .  (1)
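To make the definition concrete, here is a minimal sketch (our own code, not from the paper) that estimates (1) from a measured trace, treating non-overlapping blocks of the trace as samples of X_{j}[0,t]:

```python
import numpy as np

def effective_bandwidth(trace, s, t):
    """Empirical estimate of (1): a(s,t) = (1/(s*t)) * log E[exp(s * X[0,t])].

    `trace` holds the workload measured in fixed epochs; `t` is a number of
    epochs. Non-overlapping blocks of t epochs serve as samples of X[0,t],
    which relies on the stationary-increments assumption."""
    n = len(trace) // t
    x = trace[: n * t].reshape(n, t).sum(axis=1)      # samples of X[0,t]
    log_mgf = np.logaddexp.reduce(s * x) - np.log(n)  # stable log E[e^{sX}]
    return log_mgf / (s * t)

# Toy on-off source: 0 or 10 cells per epoch.
rng = np.random.default_rng(0)
trace = rng.choice([0.0, 10.0], size=10_000, p=[0.8, 0.2])

a = effective_bandwidth(trace, s=0.1, t=5)
# The estimate lies between the mean and the peak rate,
# and it increases with the space parameter s.
assert trace.mean() <= a <= trace.max()
```

As s → 0 the estimate approaches the mean rate, and as s grows it moves toward the peak rate, reflecting the role of s as a measure of how conservatively resource usage is counted.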
Let Q(Nc, Nb, Nn) = P(overflow) be the probability that, in an infinite buffer which multiplexes Nn = (N n_{1}, …, N n_{J}) sources and is served at rate C = Nc, the queue length is above the threshold B = Nb. The following result, established in [7], holds for Q(Nc, Nb, Nn):
 lim_{N → ∞} (1/N) log Q(Nc, Nb, Nn) = sup_{t ≥ 0} inf_{s ≥ 0} [ st Σ_{j = 1}^{J} n_{j} α_{j}(s,t) - s(b + ct) ] = -I(c,b,n) ,  (2)

which for large N yields the approximation

 P(overflow) ≈ e^{-N I(c,b,n)} .  (3)
Consider the QoS constraint that the overflow probability satisfy P(overflow) ≤ e^{-γ}, and assume γ = N g_{0}. Let A(N g_{0}, Nc, Nb) be the subset of Z_{+}^{J} such that (N n_{1}, …, N n_{J}) ∈ A(N g_{0}, Nc, Nb) implies log P(overflow) ≤ -N g_{0} (and vice versa), i.e., the QoS constraint on the overflow probability is met. Due to this property, A(N g_{0}, Nc, Nb) is called the acceptance region. The region A(N g_{0}, Nc, Nb) is hard to compute. However, for the scaled acceptance region the following holds [13]:

 lim_{N → ∞} (1/N) A(N g_{0}, Nc, Nb) = A = { (n_{1}, …, n_{J}) : I(c,b,n) ≥ g_{0} } ,  (4)

where I(c,b,n) = -lim_{N → ∞} (1/N) log Q(Nc, Nb, Nn) is the asymptotic decay rate in (2).
If (n_{1}, …, n_{J}) is on the boundary of the region A and the boundary is differentiable at that point, then the tangent plane at that point determines the half-space [13]

 { (n_{1}, …, n_{J}) : Σ_{j = 1}^{J} n_{j} α_{j}(s,t) ≤ c + b/t - g_{0}/(st) } ,  (5)

where s,t are the extremizing parameters in (2) at that boundary point.
To the extent that A(N g_{0}, Nc, Nb) can be approximated by NA, it follows from (5) that a point (N_{1}, …, N_{J}) = (N n_{1}, …, N n_{J}) ∈ A(N g_{0}, Nc, Nb) can be taken to satisfy

 Σ_{j = 1}^{J} N_{j} α_{j}(s,t) ≤ C + B/t - γ/(st) .  (6)
According to (6), the effective bandwidth α_{j}(s,t) provides a relative measure of resource usage at a particular operating point of the link, expressed through the parameters s,t. For example, if a source of type j_{1} has twice the effective bandwidth of a source of type j_{2}, then, at this particular operating point of the link, one source of the first type can be substituted for two sources of the second type while still satisfying the QoS constraint. This measure of resource usage differs from the measure that is usually reported (i.e., the mean rate), which corresponds to s = 0. Note that the QoS guarantees are encoded in the effective bandwidth definition through the value of γ that appears on the right-hand side of (6) and influences the form of the acceptance region.
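The substitution argument can be phrased as a simple admission test based on (6). The sketch below is illustrative only: the operating point s,t, the effective bandwidth values, and the link parameters are assumed for the example, not taken from the paper.

```python
import math

def admissible(N, alpha, C, B, s, t, gamma):
    """Check constraint (6): sum_j N_j * a_j(s,t) <= C + B/t - gamma/(s*t),
    where exp(-gamma) is the target overflow probability and (s, t) is
    the link's operating point."""
    load = sum(n_j * a_j for n_j, a_j in zip(N, alpha))
    return load <= C + B / t - gamma / (s * t)

# Hypothetical two-type mix on a 155 Mbps link with a 10 msec buffer
# (1.55 Mbit); assumed operating point s = 10 per Mbit, t = 0.04 s.
C, B, s, t = 155.0, 1.55, 10.0, 0.04
gamma = -math.log(1e-6)          # target overflow probability 1e-6

print(admissible([60, 200], [1.2, 0.3], C, B, s, t, gamma))   # True
print(admissible([200, 600], [1.2, 0.3], C, B, s, t, gamma))  # False
```

Since a source of the first type here has four times the effective bandwidth of the second (1.2 vs. 0.3), one source of the first type consumes the same left-hand-side budget as four sources of the second, which is exactly the substitution measure discussed above.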
Unlike the effective bandwidth definition (1), which takes into account the effects of statistical multiplexing, the effective bandwidth based on the large buffer asymptotic depends solely on the characteristics of the source and the QoS constraint. Specifically, [8,14] consider the QoS constraint P(overflow) ≤ e^{-δB}, where B is the total buffer. In this case the effective bandwidth based on the large buffer asymptotic of a source of type j, and the corresponding constraint, are

 α_{j}^{LB}(δ) = lim_{t → ∞} (1/(δt)) log E[ e^{δ X_{j}[0,t]} ]  and  Σ_{j = 1}^{J} N_{j} α_{j}^{LB}(δ) ≤ C .  (7)
In this section we discuss an improvement of (3), due to [16], that is based on the Bahadur-Rao theorem. Similar ideas were introduced as heuristics in [12,17]. We then derive an effective bandwidth constraint, similar to (6), that takes this improvement into account. An important result is that both the effective bandwidth formula (1) and the computation of the parameters s,t remain the same; the latter uses the sup-inf formula (2).
Recently, the authors of [16] extended the proof of the many sources asymptotic in [7] to show that, as N → ∞,

 log Q(Nc, Nb, Nn) = -NI - (1/2) log(4π NI) + o(1) ,  (8)

which yields the improved (Bahadur-Rao) approximation

 P(overflow) ≈ e^{-NI} / √(4π NI) .  (9)
Next, we derive the effective bandwidth constraint, similar to (6), applicable with the Bahadur-Rao improvement (9). If the number of sources of each type N n = (N_{1}, …, N_{J}) is such that the overflow probability given by (9) is equal to the target overflow probability e^{-γ}, then we have

 e^{-NI} / √(4π NI) = e^{-γ} ,

or, taking logarithms,

 NI + (1/2) log(4π NI) = γ .

Let γ′ denote the solution of γ′ + (1/2) log(4π γ′) = γ, so that NI = γ′. Since, at the extremizing parameters s,t in (2), NI = s(B + Ct) - st Σ_{j = 1}^{J} N_{j} α_{j}(s,t), the QoS constraint becomes

 Σ_{j = 1}^{J} N_{j} α_{j}(s,t) ≤ C + B/t - γ′/(st) .  (10)

Note that γ′ < γ; hence the right-hand side of (10) is larger than that of (6), and more sources can be accepted for the same QoS target.
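Under our reading of the Bahadur-Rao approximation (9), the corrected exponent γ′ solving γ′ + (1/2) log(4π γ′) = γ can be computed by a simple fixed-point iteration; the code below is a sketch with our own names and tolerances.

```python
import math

def corrected_exponent(gamma, tol=1e-10, max_iter=100):
    """Solve g' + 0.5*log(4*pi*g') = gamma for g' by fixed-point iteration.
    g' is the exponent N*I at which the Bahadur-Rao estimate
    exp(-N*I)/sqrt(4*pi*N*I) equals the target exp(-gamma)."""
    g = gamma                        # initial guess: no correction
    for _ in range(max_iter):
        g_next = gamma - 0.5 * math.log(4 * math.pi * g)
        if abs(g_next - g) < tol:
            break
        g = g_next
    return g

gamma = -math.log(1e-6)              # target overflow probability 1e-6
g = corrected_exponent(gamma)
print(g)                             # strictly smaller than gamma
```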
Next we discuss the interpretation of the space and time parameters s,t, and how they can be used for traffic engineering.
For any traffic stream, the effective bandwidth α_{j}(s,t) in (1) is a template that must be filled with the system operating point parameters s,t in order to provide the correct measure of effective usage by a source at that particular operating point. Although the value of this operating point also depends on the individual source, for a large system this dependence can be ignored due to the heavy multiplexing. Such an engineering approach simplifies the analysis considerably, because there is no circularity in the definitions of the effective bandwidth and the operating point. Indeed, as we will see in Section 3.4, the values of s,t are, to a large extent, insensitive to small variations of the traffic mix. Furthermore, it has been observed that in networks serving a large number of sources the traffic mix exhibits strong cyclic behavior. Hence, we can assign particular pairs (s,t) to periods of the day during which the traffic mix remains relatively constant. The values of s,t for a particular period of the day can be computed offline from traffic measurements taken during that period, using the sup-inf formula (2) and the effective bandwidth formula (1); this procedure is discussed in detail in Section 3.1. Alternatively, the parameters s,t can be estimated using their interpretation, which we discuss next (related experimental results are presented in Section 3.3.1).
Recall that the time parameter t corresponds to the most probable duration of the buffer busy period prior to overflow. We now argue that this parameter also identifies the time scales that are important for buffer overflow. Assume that a link is operating at a particular operating point, expressed through the parameters s,t. In the effective bandwidth formula (1) the statistical properties of the source appear through X_{j}[0,t], the amount of workload produced by the source in an interval of length t. If two sources have the same distribution of X_{j}[0,t] for this value of t, then they have the same effective bandwidth. A case of practical interest where this result applies is traffic smoothing: to have an effect on the amount of resources used by a source, smoothing must be performed on a time scale larger than t, since only then does it affect the distribution of X_{j}[0,t]. We investigate this with real broadband traffic in Section 3.4.3. Based on the above discussion, the time parameter t also indicates the time granularity that traffic measurements must have: given a value of t, it suffices to measure traffic in intervals a few times smaller than this value. Traditionally, the time granularity of traffic measurements has been chosen in a rather ad hoc manner.
Next we discuss the interpretation of the parameter s and of the product st. Let γ = -log P(overflow). Combining this with (2) and (3), at the extremizing parameters s,t we have γ = s(B + Ct) - st Σ_{j = 1}^{J} N_{j} α_{j}(s,t). Taking derivatives of the last equation (the dependence of the extremizing s,t on B and C contributes no first-order terms; see also [6]) we obtain

 s = ∂γ/∂B  and  st = ∂γ/∂C .  (11)
In this section we apply and evaluate for real broadband traffic the performance analysis framework discussed in Section 2. The specific issues we address are the following:
Next we give some details of how the sup-inf formula (2) can be numerically solved in an efficient manner.^{5} We assume that the source statistics are available from measurements of the aggregate load (e.g., number of cells) in fixed intervals (epochs) of duration τ. From these measurements, the value of X_{j}[0,t] can be computed for values of t that are integer multiples of τ.
The sup-inf formula (2) can be written as

 NI = - max_{t} J^{*}(t) ,  where  J^{*}(t) = min_{s} J(s,t) ,  (12)

and

 J(s,t) = st Σ_{j = 1}^{J} N_{j} α_{j}(s,t) - s(B + Ct) = Σ_{j = 1}^{J} N_{j} log E[ e^{s X_{j}[0,t]} ] - s(B + Ct) .  (13)
The minimization J^{*}(t) = min_{s} J(s,t) can be numerically solved in an efficient manner by taking into account the fact that the logarithmic moment generating function st α_{j}(s,t) = log E[ e^{s X_{j}[0,t]} ] is convex in s. Due to this, J(s,t) is a unimodal function of s and the minimizer is unique. Hence, to find J^{*}(t) = min_{s} J(s,t) one can start from an initial ``uncertainty'' interval [s_{a}, s_{b}] that contains the minimum (this interval can be found heuristically), and shrink the uncertainty interval using a golden section search. The procedure stops when the uncertainty interval has length less than some small value ε.
Unlike the previous minimization procedure, there is no general property of J^{*}(t) that we can take advantage of in order to perform the maximization max_{t} J^{*}(t) in (12) in an efficient manner.^{6} For this reason, the maximization is solved by linearly searching the values of t in the interval [0, kτ] with granularity equal to one epoch τ. The value of k is determined empirically and depends on the buffer size: the extremizing value of t is larger for larger buffer sizes. Indeed, the experimental results in Section 3.3.1 show that the values of the time parameter t found using this procedure are in good agreement with the interpretation given by the theory, thus validating the correctness of the procedure.
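The two searches can be combined into a compact numerical sketch of this procedure. The code is ours, a single trace stands in for N homogeneous sources, t is measured in epochs, and C is the capacity per epoch:

```python
import numpy as np

def log_mgf(trace, s, t):
    """log E[exp(s * X[0,t])] from non-overlapping blocks of t epochs."""
    n = len(trace) // t
    x = trace[: n * t].reshape(n, t).sum(axis=1)
    return np.logaddexp.reduce(s * x) - np.log(n)

def J(trace, N, s, t, C, B):
    """J(s,t) of (13) for N homogeneous sources described by `trace`."""
    return N * log_mgf(trace, s, t) - s * (B + C * t)

def golden_min(f, a, b, eps=1e-6):
    """Golden section search for the unique minimizer of a unimodal f."""
    phi = (np.sqrt(5) - 1) / 2
    while b - a > eps:
        x1, x2 = b - phi * (b - a), a + phi * (b - a)
        if f(x1) < f(x2):
            b = x2               # the minimum lies in [a, x2]
        else:
            a = x1               # the minimum lies in [x1, b]
    return f((a + b) / 2)

def overflow_exponent(trace, N, C, B, k=50, s_max=5.0):
    """NI = -max over t in 1..k epochs of min over s of J(s,t);
    the overflow probability then follows from (3) as exp(-NI)."""
    return -max(golden_min(lambda s: J(trace, N, s, t, C, B), 1e-9, s_max)
                for t in range(1, k + 1))
```

The inner golden section search exploits the convexity of the logarithmic moment generating function, while the outer loop is the linear search over t described above.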
The runtime required for numerically solving the sup-inf formula (12) depends primarily on the size (number of epochs) of the trace file and on the range of values of t that are linearly searched; it does not depend on the number of multiplexed streams. For example, when the trace files contain 40,000 epochs and 50 values of t are searched^{7}, the solution of the sup-inf formula requires approximately 2-3 seconds on an Ultra 1 workstation with one UltraSPARC processor at 170 MHz.
In this section we compare the overflow probability and link utilization obtained using the many sources asymptotic and its Bahadur-Rao improvement with the actual cell loss probability and maximum utilization estimated using simulation. We also derive a simple heuristic for computing the actual cell loss probability from the overflow probability.
Figure 2 compares, for a fixed number of streams, the overflow probability estimated using the many sources asymptotic and its BahadurRao improvement with the cell loss probability and frame overflow probability estimated using simulation; the latter is the probability that at least one cell of a frame is lost. Both the cell loss probability and the frame overflow probability are measured using a discrete time simulation model with an epoch equal to one frame time ( = 40 msec ). In these and subsequent simulations we report the average from a total of 80 independent simulation runs, each with a random selection of the starting frame for every stream. Each simulation run had duration five times the size of the trace. We assume that frame boundaries are aligned and for each stream the trace ``wraps around'' when the last frame is reached. Both the number of runs and the duration of each run were chosen empirically.
For each method, the decimal logarithm of the overflow probability is plotted against the buffer size (measured in milliseconds), while the link utilization remains constant.
In Figure 2, first observe that for small buffer sizes there is a relatively fast decrease of the overflow probability as the buffer size increases. However, this occurs only for buffer sizes up to some value, e.g., 5-8 msec for a 155 Mbps link; increasing the buffer above this value has a small effect on the overflow probability. Furthermore, the logarithm of the overflow probability in each of these two regimes is almost linear in the buffer size.
Second, observe that although the many sources asymptotic overestimates the Cell Loss Probability (CLP) by approximately 2-3 orders of magnitude, it tracks its shape qualitatively very well. The Bahadur-Rao improvement overestimates the CLP by 1-2 orders of magnitude. On the other hand, the large buffer asymptotic, in addition to overestimating the CLP by many orders of magnitude, also fails to track its shape.
The actual cell loss probability differs from the overflow probability estimated using the many sources asymptotic and its Bahadur-Rao improvement because the latter is the probability that, in an infinite buffer, the queue length exceeds B, rather than a measure of the CLP. The definition of the buffer overflow probability is closer in spirit to that of the frame overflow probability (the probability that at least one cell of a frame is lost). Indeed, as Figure 2 shows, the overflow probability estimated using the many sources asymptotic with the Bahadur-Rao improvement is very close to the frame overflow probability. This is the case because the simulation epoch is equal to the frame time.
To further explain the above, we derive a simple expression for the cell loss probability in terms of the frame overflow probability L_{f}. If one observes a large number of frames, say M, the average number of frames in which at least one cell is lost is M L_{f}. Let x be the average number of cells that are lost when a frame overflow occurs. The average number of cells lost in M frames is then M L_{f} x, out of a total of M F cells, where F is the average number of cells in a frame. We can therefore approximate the cell loss probability by the percentage of lost cells, i.e.,

 CLP ≈ (M L_{f} x) / (M F) = L_{f} x / F .  (14)
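A toy numeric check of (14), with illustrative values for x and F (not measurements):

```python
# Assume 3 cells lost per overflowing frame (x = 3), 500 cells per
# frame on average (F = 500), and frame overflow probability 1e-4.
L_f, x, F = 1e-4, 3.0, 500.0

clp = L_f * x / F            # equation (14)
print(clp)                   # roughly 6e-7, well below L_f
```

The gap between the cell loss probability and the frame overflow probability is the factor x/F, which is consistent with the CLP lying a couple of orders of magnitude below the overflow probability, as observed above.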
Let ρ = N m/C be the link utilization, where N is the number of streams, m is the mean rate, and C is the link capacity. Figure 4 compares, for a range of buffer sizes and for overflow probability 10^{-6}, the link utilization using the many sources asymptotic and its Bahadur-Rao improvement with the maximum achievable utilization (estimated using simulation). The utilization is computed by finding the largest number of multiplexed streams such that the overflow probability (3), computed using (12) and (13), is less than the target overflow probability 10^{-6}. This is done using a binary search for values of N in the interval [N_{min}, N_{max}], with N_{min} = C/h and N_{max} = C/m, where h is the peak rate of the streams. For the many sources asymptotic with the Bahadur-Rao improvement, (9) is used instead of (3).
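The binary search described above can be sketched as follows; `toy_exponent` is a stand-in for a numerical solver of (12)-(13), and all numbers are illustrative rather than taken from the traces.

```python
import math

def max_streams(target, C, B, mean, peak, exponent):
    """Binary search for the largest N in [C/peak, C/mean] such that
    the estimated overflow probability exp(-N*I) meets the target,
    i.e. exponent(N, C, B) >= -log(target). Assumes the QoS is met
    at N = C/peak (peak-rate allocation)."""
    gamma = -math.log(target)
    lo, hi = int(C / peak), int(C / mean)          # N_min, N_max
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if exponent(mid, C, B) >= gamma:           # QoS met: try more
            lo = mid
        else:
            hi = mid - 1
    return lo

# Stand-in for a numerical solver of (12)-(13): the exponent N*I shrinks
# as the load N*mean/C approaches 1 and grows with the buffer size B.
def toy_exponent(N, C, B, mean=2.0, peak=10.0):
    load = N * mean / C
    return float("inf") if load >= 1 else (1 - load) * (B + 50.0)

N = max_streams(1e-6, C=100.0, B=20.0, mean=2.0, peak=10.0,
                exponent=toy_exponent)
print(N, N * 2.0 / 100.0)       # prints 40 0.8
```

The search is valid because the exponent N I is decreasing in N: adding streams can only increase the overflow probability.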
Similar to our observations regarding the overflow probability, there are significant gains in increasing the size of the buffer up to a certain value. Increasing the buffer size above this value has a very small effect on link utilization.
Recall that the many sources asymptotic overestimated the CLP by 2-3 orders of magnitude. However, as Table 1 shows, it is more accurate in estimating the maximum utilization. In particular, for C = 34 Mbps and B = 1 msec the many sources asymptotic achieves a utilization that is approximately 79 % of the maximum utilization. The Bahadur-Rao improvement increases this percentage to 88 %. Furthermore, this percentage increases for larger link capacities; e.g., for C = 155 Mbps and B = 1 msec the many sources asymptotic achieves a utilization that is 90 % of the maximum utilization (Table 1(b)). Of course, as Figure 5 shows, using the heuristic based on (14) we achieve a utilization that almost coincides with the maximum utilization.
 Buffer                          Utilization ρ
 msec (cells)    Simulation    Many sources    Many sources + BR
 1 (80)          0.57          0.46 (79 %)     0.52 (88 %)
 8 (641)         0.70          0.59 (84 %)     0.64 (91 %)
 16 (1282)       0.81          0.71 (88 %)     0.77 (96 %)

(a) C = 34 Mbps
 Buffer                          Utilization ρ
 msec (cells)    Simulation    Many sources    Many sources + BR
 1 (365)         0.82          0.74 (90 %)     0.76 (94 %)
 8 (2924)        0.92          0.88 (96 %)     0.89 (97 %)
 16 (5849)       0.92          0.89 (97 %)     0.90 (98 %)

(b) C = 155 Mbps
Finally, Figure 6 shows the link utilization in the case of Internet WAN traffic. Observe that while for Star Wars traffic the gains of increasing the buffer decrease abruptly, for Internet WAN traffic they decrease more smoothly as the buffer size increases. This indicates that Internet traffic contains more time scales which, at different buffer sizes, become important for buffer overflow if not smoothed.
The space and time parameters s,t characterize a link's operating point and depend on the characteristics of the multiplexed traffic and on the link resources. In this section we compare the values of these parameters computed using the sup-inf formula (12) to the corresponding values estimated using simulation. Furthermore, we compute and interpret typical values for these parameters, demonstrating how they can be used for traffic engineering.
Recall from Section 2.3 that the space parameter s is equal to the rate at which the logarithm of the overflow probability decreases with the buffer size, equation (11). Motivated by this, we can estimate s using the ratio

 ŝ = - [ log P(overflow; B_{2}) - log P(overflow; B_{1}) ] / (B_{2} - B_{1}) ,  (15)

where B_{1} < B_{2} are two nearby buffer sizes and the overflow probabilities are estimated using simulation. Figure 7(a) compares the value of s computed using the sup-inf formula to the above estimate.
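A sketch of the estimator (15); in practice the overflow probabilities come from simulation, while here they follow an assumed exact exponential decay so that the recovered slope is known.

```python
def estimate_s(logP1, B1, logP2, B2):
    """Finite-difference estimate (15) of the space parameter s:
    minus the slope of log P(overflow) with respect to buffer size."""
    return -(logP2 - logP1) / (B2 - B1)

# Synthetic check: if log P(overflow) = -5 - 0.3 * B exactly,
# the estimator must recover s = 0.3.
logP = lambda B: -5.0 - 0.3 * B
s_hat = estimate_s(logP(100.0), 100.0, logP(120.0), 120.0)
assert abs(s_hat - 0.3) < 1e-12
```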
As discussed in Section 2.1, the time parameter t can be interpreted as the most probable duration of the buffer busy period prior to overflow. Figure 7(b) compares the value of parameter t computed using the sup-inf formula to the average value of the buffer busy period prior to overflow. As was the case for parameter s, the agreement between the two curves is good.
Note that the ``steps'' in the curves of s,t computed using the sup-inf formula are expected, since the many sources asymptotic (and large deviations theory in general) captures only the most likely way in which overflow can occur. On the other hand, the curves labeled ``simulation'' in Figures 7(a) and 7(b) represent an average over all events that contribute to overflow.
Additional experimentation with other traffic types (Internet and videoconference traffic) has confirmed the above results.
Next we investigate how the parameters s,t and the product st depend on the link capacity and buffer size. The values of s,t are computed using the sup-inf formula for a target overflow probability of 10^{-7}.
Figure 8 shows the parameter s as a function of the buffer size, for various link capacities (Figure 8(a)) and video contents (Figure 8(b)). Observe that, initially, s decreases slowly with the buffer size. According to equation (11), s is equal to the rate at which the logarithm of the cell loss probability decreases as the buffer size increases. Hence, for larger buffers, where statistical multiplexing is more efficient, increasing the buffer has a smaller effect on the cell loss probability.
The explanation of the steep decrease of s in Figure 8(a) is similar to the explanation of the ``knee'' of the curves in Figures 2 and 4. Up to some value, the buffer's effect is to smooth the fast time scales of the multiplexed traffic; in this regime, increasing the buffer has a large effect on the overflow probability, and the value of s is high. Once the fast time scales have been smoothed, the slow time scales govern buffer overflow; increasing the buffer then has a very small effect on the overflow probability, and the value of s is small. Also observe in Figure 8(a) that the steep decrease of s occurs at smaller buffer sizes (measured in milliseconds) as the link capacity increases. Finally, as Figure 8(b) shows, similar behavior is observed for MPEG1 traffic with various contents, indicating that the above behavior of s is due to the MPEG1 frame structure rather than the particular content.
The dependence of parameter t on the buffer size is shown in Figure 9(a). Observe that the steep increases of t occur for the same buffer sizes for which s decreases (Figure 8(a)). Small values of t correspond to the regime where fast time scales are important for buffer overflow, whereas large values of t correspond to the regime where slow time scales are important for buffer overflow.
The product st as a function of the buffer size is shown in Figure 9(b). The initial slow decrease of st as the buffer increases occurs while t remains constant, and is due to the slow decrease of s (see Figure 8(a)). Furthermore, there is a steep increase of st, which occurs for the same buffer sizes for which the changes of s,t occur. The explanation for this steep increase of st is more subtle than the explanation for the behavior of s,t. Recall from Section 2.3 that st is equal to the rate at which the logarithm of the overflow probability decreases with the link capacity, for fixed buffer size, equation (11). Comparing Figures 9(a) and 9(b), we observe that the larger values of st correspond to larger values of t. Larger values of t result in an averaging effect in the amount of workload X_{j}[0,t] that appears in the effective bandwidth formula (1). Hence, for the overflow phenomenon the traffic appears smoother. But for a link that multiplexes smooth traffic and is operating with a cell loss probability greater than zero, a change of the capacity has a greater effect on the overflow probability compared to a link multiplexing more bursty traffic. This gives the intuition of why st increases sharply for some buffer sizes.
Figure 10 compares the values of s,t for Star Wars and voice traffic. Figure 10(a) shows that as the buffer size increases, the value of s for voice traffic decreases smoothly. Furthermore, the rate of decrease is smaller for larger buffer sizes. Comparing the value of s for MPEG1 and voice traffic, we conclude that, for buffer sizes up to 2 msec and above 10 msec , increasing the buffer has a larger effect for a network carrying voice traffic compared to a network carrying MPEG1 traffic. This is an example of how the values of the space parameter can be used in buffer dimensioning.
Figure 10(b) shows that the time parameter t for voice traffic increases almost linearly with the buffer size. This can be explained by the fact that, for a high degree of multiplexing, voice sources (which are modeled as on-off Markov fluids) behave as Gaussian sources. For such sources, it has been shown in [7] that the time parameter t increases linearly with the buffer size.
Figure 11(a) compares parameter s for Star Wars and Internet WAN traffic. For MPEG1 traffic, the values of s form two distinct regimes corresponding to the two distinct time scales that are important for buffer overflow. On the other hand, for Internet traffic the values of s form more than two regimes, indicating that for such traffic there are more time scales which, for different buffer sizes, become important for buffer overflow. Recall that this is also the reason for the smoother dependence of the link utilization on the buffer size for Internet traffic compared to Star Wars traffic (Figure 6). Finally, Figure 11(b) shows that s can have different values for different Internet traffic segments from the same source, illustrating that different such segments have different statistical properties.
As discussed in Section 2.1, periods of the day during which the traffic mix remains relatively constant can be characterized by particular pairs (s,t). In this section we investigate the dependence of these parameters, hence of the effective bandwidth, on the traffic mix. The traffic mix we consider consists either of traffic of different type (MPEG1 video and voice), or of traffic with the same structure but different content (MPEG1 video with different content), or of smoothed and unsmoothed traffic of the same type and content.
We first consider the traffic mix containing Star Wars and voice traffic. The horizontal axis in Figures 12(a) and 12(b) depicts the percentage of voice connections, each containing 30 individual voice channels. The vertical axis depicts the effective bandwidth of the Star Wars traffic stream. Observe that (1) the effective bandwidth, to a large extent, changes slowly with the traffic mix, (2) the dependence of the effective bandwidth on the traffic mix is smaller for larger capacities and buffer sizes, and (3) there are cases where increasing the percentage of voice connections leads to a sharp decrease of the effective bandwidth.
The first observation supports the argument that particular pairs (s,t) can be assigned to periods of the day during which the traffic mix remains relatively constant. However, the third observation states that there are certain percentages of the traffic mix where the effective bandwidth exhibits sharp changes. If the link's operating point is close to such a percentage, then we can estimate the average amount of resources used by a stream as a weighted sum of the effective bandwidth to the left and to the right of the sharp change. The weights would be determined by the percentage of the time the network was operating on the left and on the right of the change.
The sharp decrease of the effective bandwidth identified above is due to a change of the time scales that are important for buffer overflow. In particular, as indicated in Figure 12(a) above the curve for C = 155 Mbps and buffer 4 msec, the time parameter t increases (1, 4, and 7 frames) at the same percentages of voice connections at which the sharp decreases of the effective bandwidth occur. The increase of t produces an averaging effect (also discussed in Section 3.3.2) in the amount of workload X_{j}[0,t] that appears in the effective bandwidth formula (1); this averaging results in a smaller effective bandwidth.
Our previous investigations addressed the case where the traffic mix consists of traffic with different structure. Now we investigate the case where the traffic mix consists of MPEG1 video traffic with the same encoding parameters but with different content. Figures 13(a) and 13(b) show the effective bandwidth of the Star Wars stream as a function of the percentage of news and talk show streams, respectively. These figures show that the content has a very small effect on the effective bandwidth; this implies that the effects of the content on parameters s,t are also very small.
Our final investigation deals with another important question in traffic engineering: How does traffic smoothing affect the multiplexing capability of a link and the amount of resources used by a traffic stream? We will see that parameter t indicates the minimum time scale at which smoothing must be performed in order for it to affect resource usage.
Figure 14 shows the effective bandwidth of the Star Wars stream for different percentages of a traffic mix of unsmoothed and smoothed Star Wars traffic; the latter is created by evenly spacing the cells belonging to two consecutive frames. Observe that (1) the effects of the traffic mix on the effective bandwidth decrease when the link capacity or buffer size increases, (2) there are cases where increasing the buffer size has a very small effect on the effective bandwidth, e.g., at C = 622 Mbps the curves for B = 8 msec and B = 16 msec practically coincide, and (3) for some buffer sizes smoothing affects neither the effective bandwidth, nor the link's operating point, e.g., in Figure 14(a) the curve for C = 155 Mbps and B = 8 msec , and the curves for C = 622 Mbps and B = 4, 8, and 16 msec are flat. Next we discuss the third observation in more detail.
Figure 15 shows the effective bandwidth for both the smoothed and unsmoothed Star Wars stream. When the percentage of smoothed traffic is small, the time parameter t (= 40 msec) is smaller than the time interval over which smoothing was performed (80 msec). For this reason, the amount of workload X_{j}[0,t] that appears in the effective bandwidth formula (1) is smaller for the smoothed stream than for the unsmoothed stream; hence the effective bandwidth of the smoothed stream is smaller. Beyond a certain percentage of smoothed traffic (approximately 60%), the time parameter t (= 80 msec) is no longer smaller than the smoothing interval (80 msec). In this case the amount of workload X_{j}[0,t] is the same for both the smoothed and the unsmoothed stream; hence the effective bandwidths of the two streams coincide.
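A small numeric sketch of this argument (the frame sizes are hypothetical, and t is measured in 40 msec frames): smoothing spreads the cells of each pair of consecutive frames evenly over the pair, which lowers the workload seen by windows shorter than the 80 msec smoothing interval but leaves the workload of windows aligned with whole pairs unchanged.

```python
# Hypothetical per-frame cell counts for an unsmoothed stream.
frames = [12.0, 4.0, 20.0, 8.0, 16.0, 2.0]

# Smoothing over two consecutive frames (80 msec): the cells of each
# pair are spread evenly, so both frames carry the pair's average.
smoothed = []
for i in range(0, len(frames), 2):
    avg = (frames[i] + frames[i + 1]) / 2.0
    smoothed += [avg, avg]

def workloads(trace, t, step=1):
    """Workloads X[0, t] over windows of t frames, taken every `step` frames."""
    return [sum(trace[i:i + t]) for i in range(0, len(trace) - t + 1, step)]

# t = 1 frame (40 msec) < smoothing interval: the smoothed stream's
# peak one-frame workload is lower, hence so is its effective bandwidth.
print(max(workloads(frames, 1)), max(workloads(smoothed, 1)))

# t = 2 frames (80 msec) = smoothing interval: windows aligned with
# the smoothing pairs carry identical workloads in both streams, so
# smoothing no longer affects X[0, t] or the effective bandwidth.
print(workloads(frames, 2, step=2) == workloads(smoothed, 2, step=2))
```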
In this paper we employ the recently developed theory of effective bandwidths based on the many sources asymptotic, whereby the effective bandwidth depends not only on the statistical characteristics of the traffic stream but also on the link's operating point. The latter is summarized in two parameters: the space parameter s and the time parameter t.
We have investigated the accuracy of the above framework and shown how it can provide important insight into the complex phenomena that occur at a broadband link with a high degree of multiplexing. In particular, we have estimated and interpreted the values of the space and time parameters for various mixes of real traffic, demonstrating how these parameters can clarify the effects on link performance of the time scales of traffic burstiness, of the link resources (capacity and buffer), and of traffic control mechanisms such as traffic smoothing.
Our approach is based on the offline analysis of traffic measurements, whose granularity can be determined by the time parameter of the link. For the traffic mixes considered, the space and time parameters are, to a large extent, insensitive to small variations of the traffic mix, and this insensitivity increases for larger link capacities and buffer sizes. This indicates that particular pairs of these parameters can characterize periods of the day during which the traffic mix remains relatively constant. This result has important implications for charging, since simple pricing schemes that are linear in time and volume and have important incentive properties can be created from tangents to bounds of the effective bandwidth [6]. The result also opens up new possibilities for traffic modeling: rather than developing general models that try to emulate real traffic in any operating environment, one can develop models, parameterized by s,t, that emulate real traffic at the particular operating point s,t. Such an approach is taken in [5]. If simple and efficient to implement, such models can form the basis for fast and flexible traffic generators.
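As a hedged illustration of the pricing idea cited from [6], the sketch below uses a standard convexity bound on the effective bandwidth of a source with mean rate m whose workload in any window of length t is at most M, eb(m) = (1/(st)) log(1 + (tm/M)(e^{sM} - 1)), and takes a tangent to this concave bound at a declared mean rate; the resulting charge a·T + b·V is linear in time and volume. All parameter values are our own assumptions, not values from the paper.

```python
import math

def eb_bound(m, M, s, t):
    """Upper bound on the effective bandwidth of a source with mean rate m
    whose workload in any window of length t is at most M (a standard
    convexity bound; cf. [6]). Requires t*m <= M."""
    return math.log(1.0 + (t * m / M) * (math.exp(s * M) - 1.0)) / (s * t)

def tangent_tariff(m0, M, s, t, dm=1e-6):
    """Tangent a + b*m to the bound at the declared mean rate m0:
    a charge a per unit time plus b per unit volume."""
    b = (eb_bound(m0 + dm, M, s, t) - eb_bound(m0 - dm, M, s, t)) / (2 * dm)
    a = eb_bound(m0, M, s, t) - b * m0
    return a, b

# Hypothetical operating point (s, t) and source parameters.
s, t, M = 0.1, 5.0, 40.0
a, b = tangent_tariff(m0=2.0, M=M, s=s, t=t)

# Because the bound is concave in m, the tangent lies above it everywhere
# and touches it at m0: a user running at the declared mean pays exactly
# the bound, and deviating from m0 never pays less.
print(round(a, 3), round(b, 3))
```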
The application of our approach to traffic engineering and the management of traffic contracts in a real multiservice network involving a large number of different source types is an important area for further research. Specific issues are whether the number of discontinuities of the operating point parameters s,t increases with the number of source types, and how the parameters s,t vary over different periods of the day. Another important research topic is extending our investigations to multiplexers supporting multiple priorities [13,1].

[Figure: (a) C = 34 Mbps, r = 0.83; (b) C = 155 Mbps, r = 0.93]
[Figure: (a) C = 34 Mbps; (b) C = 155 Mbps]
[Figure: (a) Parameter s; (b) Parameter t]
[Figure: (a) Star Wars traffic; (b) MPEG1 traffic with various contents]
[Figure: (a) Parameter t; (b) Product st]
[Figure: (a) Parameter s; (b) Parameter t]
[Figure: (a) Internet WAN & Star Wars traffic; (b) Different Internet WAN segments]
[Figure: (a) C = 155 Mbps; (b) C = 622 Mbps]
[Figure: (a) Star Wars + news; (b) Star Wars + talk show]
[Figure: (a) C = 155 Mbps; (b) C = 622 Mbps]
^{1} This work was supported in part by the European Commission under ACTS Project CASHMAN (AC039). A subset of this paper has appeared in Proceedings of ACM SIGMETRICS'98/PERFORMANCE'98, June 1998. The software used for the experiments and other related material are available at URL: http://www.ics.forth.gr/netgroup/msa/
^{2} Related work on how traffic smoothing affects the multiplexing capability of a link employing a guaranteed and renegotiated constant bit rate service model can be found in [22].
^{3} Available at URL: ftp://ftpinfo3.informatik.uniwuerzburg.de/pub/MPEG/
^{4} Available from The Internet Traffic Archive at URL: http://www.acm.org/sigcomm/ITA/
^{5} Software is available at URL: http://www.ics.forth.gr/netgroup/msa/
^{6} Furthermore, experimentation has shown that J^{*}(t) can have more than one local minimum.
^{7} This range of t is typical for MPEG1 traffic with frame time 40 msec, when C = 155 Mbps and the maximum queueing delay in the buffer is less than 15 msec.