added x and y label to latency measurements

master
Frederik Maaßen 2 years ago
parent dd6d0613e8
commit 2c971f8ad5
  1. implementation/mininet_controller.py (2)
  2. thesis/content/evaluation/minimal_network.tex (238)
  3. thesis/content/implementation/test_network.tex (1)
  4. thesis/content/testing/testing.tex (11)
  5. thesis/settings/packages.tex (2)

@@ -382,7 +382,7 @@ def measure_latency(net, sender, dest_ip, length, interval, unique_test_name, y_
net[sender].cmd(extended_command)
sleep(1)
net[sender].cmd(create_plot_command(graph_title, tmp_file_name, y_range))
net[sender].cmd(create_plot_command(graph_title, tmp_file_name, y_range, "Time in seconds", "Latency in milliseconds"))
def measure_bandwidth(net, components, iperf_server_ip, length, interval, unique_test_name, graph_title="", flag="tcp",
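The diff above adds x- and y-axis labels to the plot command. A minimal sketch of what such a `create_plot_command` helper might look like — the gnuplot invocation and parameter handling here are assumptions, not the framework's actual implementation:

```python
def create_plot_command(graph_title, tmp_file_name, y_range,
                        x_label="", y_label=""):
    """Build a gnuplot command line that renders the measurement data
    in tmp_file_name as a PNG; the axis labels are optional so older
    call sites without labels keep working."""
    parts = [
        f"set title '{graph_title}'",
        f"set yrange [{y_range}]",
        "set terminal png",
        f"set output '{tmp_file_name}.png'",
    ]
    if x_label:
        parts.append(f"set xlabel '{x_label}'")
    if y_label:
        parts.append(f"set ylabel '{y_label}'")
    parts.append(f"plot '{tmp_file_name}' with lines")
    return 'gnuplot -e "' + "; ".join(parts) + '"'
```

With this shape, the one-line change in the diff simply passes "Time in seconds" and "Latency in milliseconds" as the two new arguments.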

@@ -10,26 +10,38 @@
We performed multiple tests of influences to the bandwidth with occurring failures. These were run using \textit{iperf} and a logging interval of 0.5 seconds. All data was collected from the output of the \textit{iperf} server.
\subsubsection{With FRR}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth/bandwidth_before_failure_wo_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth/bandwidth_after_failure_wo_sc}}
\caption{Bandwidth measured with iperf on a network with a throughput limit per link at 100 Mbit(s) (a) before a failure and (b) after a failure}
\label{fig:evaluation_minimal_network_bandwidth_wo_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_before_failure_wo_sc}
\caption{Bandwidth before a failure}
\label{fig:evaluation_minimal_bandwidth_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_after_failure_wo_sc}
\caption{Bandwidth after a failure}
\label{fig:evaluation_minimal_bandwidth_wo_sc_b}
\end{subfigure}
\caption{Bandwidth measured with \textit{iperf} from H1 to H4}
\label{fig:evaluation_minimal_bandwidth_wo_sc}
\end{figure}
We performed a bandwidth test on the minimal network, a topology with 4 routers and 3 hosts. The failure occurred after the first run and before the second run of the test in \ref{fig:evaluation_minimal_network_bandwidth_wo_sc}. As can be seen the bandwidth does not change between runs. This is to expected as additional hops on the path of the packet do not influence the total throughput that can be achieved, and while the looped path is passed by the traffic in both directions, the duplex nature of ethernet connections does not impose any limitations in this regard.
We performed a bandwidth test on the minimal network, a topology with 4 routers and 3 hosts. The failure occurred after the first run and before the second run of the test in \cref{fig:evaluation_minimal_bandwidth_wo_sc}. As can be seen, the bandwidth does not change between runs. This is to be expected, as additional hops on the path of a packet do not influence the total achievable throughput, and while the traffic passes the looped path in both directions, the full-duplex nature of Ethernet connections imposes no limitation in this regard.
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_failure_wo_sc}}
\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_failure_wo_sc}
\caption{Bandwidth measured for 30 seconds, introducing a failure after 15 seconds}
\label{fig:evaluation_minimal_network_bandwidth_concurrent_wo_sc}
\label{fig:evaluation_minimal_bandwidth_concurrent_wo_sc}
\end{figure}
In \ref{fig:evaluation_minimal_network_bandwidth_concurrent_wo_sc} however we introduced the failure while the bandwidth test was running. The test was run for 30 seconds and the failure was introduced at around 15 seconds, which caused a drop in performance. The log output of the sending client reported the need to resend 22 packets in this time period; in all transfers before no packet loss occurred.
In \cref{fig:evaluation_minimal_bandwidth_concurrent_wo_sc}, however, we introduced the failure while the bandwidth test was running. The test ran for 30 seconds and the failure was introduced at around 15 seconds, which caused a drop in performance. The log output of the sending client reported 22 packet retransmissions in this time period; in all transfers before the failure, no packet loss occurred.
In addition to the already deployed transfer limit on the links between routers and hosts, we also added the bandwidth parameter -b to the execution of the \textit{iperf} client and limited the throughput to 100 Mbit(s). This was done because we experienced bursts in the bandwidth test after we introduced a failure concurrent to the bandwidth test as can be seen in figure \ref{fig:evaluation_minimal_network_bandwidth_concurrent_wo_sc}, exceeding the limit of the network by more than 50\%. Unfortunately the additional limit did not change the behaviour. Upon further investigation we found one possible reason for this burst.
In addition to the transfer limit already deployed on the links between routers and hosts, we also passed the bandwidth parameter \textit{-b} to the \textit{iperf} client, limiting the throughput to \SI{100}{Mbps}. This was done because we experienced bursts in the bandwidth test after introducing a failure concurrent to the measurement, as can be seen in \cref{fig:evaluation_minimal_bandwidth_concurrent_wo_sc}, exceeding the limit of the network by more than 50\%. Unfortunately, the additional limit did not change the behaviour. Upon further investigation we found one possible reason for this burst.
When the connection between routers is cut, our test framework uses the Python API to deactivate the corresponding interfaces on both affected routers. This is done in sequence; in this example, the interface on router R2 was deactivated first and the interface on router R4 second. We implemented this behaviour after observing the default behaviour of the Mininet network: if the connection between R2 and R4 were cut only by deactivating the interface on R4, R2 would not recognize the failure and would lose all packets sent to the link. Because we deactivate the interfaces in sequence and the Mininet Python API adds delay to the operation, the interface on R2 is already deactivated while the interface on R4 still receives packets in flight and continues sending packets to the deactivated interface on R2 for a short period of time. All packets sent to R2 in this period are lost. But because the \textit{iperf} server itself does not send any actual data, only acknowledgements (ACKs) for already received data, only ACKs are lost this way.
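The sequential deactivation described above can be sketched as follows. The helper name and interface names are hypothetical; the real framework issues these commands through the Mininet Python API rather than returning them as a list:

```python
def link_failure_commands(router_a, router_b, intf_a, intf_b):
    """Simulate a full link failure by taking down the interfaces on
    BOTH endpoints, in sequence. If only one side were deactivated,
    the peer router would not notice the failure and would keep
    sending into the dead link, losing every packet."""
    # The first interface goes down immediately; until the second
    # command takes effect, the peer keeps transmitting (e.g. iperf
    # ACKs) toward the already-deactivated interface.
    return [
        (router_a, f"ifconfig {intf_a} down"),
        (router_b, f"ifconfig {intf_b} down"),
    ]
```

The ordering of the returned list mirrors the observed behaviour: R2's interface is cut first, so traffic from R4 toward R2 is lost in the short window before R4's interface goes down as well.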
@@ -43,19 +55,30 @@ In our further tests we observed that the bandwidth alone does not change heavil
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth/bandwidth_before_failure_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth/bandwidth_after_failure_sc}}
\caption{Bandwidth measured with iperf on a network (a) before a failure and (b) after a failure using ShortCut}
\label{fig:evaluation_minimal_network_bandwidth_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_before_failure_sc}
\caption{Bandwidth before a failure}
\label{fig:evaluation_minimal_bandwidth_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_after_failure_sc}
\caption{Bandwidth after a failure}
\label{fig:evaluation_minimal_bandwidth_sc_b}
\end{subfigure}
\caption{Bandwidth measured with \textit{iperf} from H1 to H4 using ShortCut}
\label{fig:evaluation_minimal_bandwidth_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_failure_sc}}
\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_failure_sc}
\caption{Bandwidth measured for 30 seconds, introducing a failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_network_bandwidth_concurrent_sc}
\label{fig:evaluation_minimal_bandwidth_concurrent_sc}
\end{figure}
Using ShortCut had no further influence on the achieved throughput. This is to be expected, as longer or shorter paths only influence throughput if, for example, the additional path contains a link with a lower bandwidth.
@@ -65,41 +88,64 @@ Using ShortCut had no further influence on the achieved throughput. This is to b
\subsubsection{With FRR}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_wo_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_wo_sc}}
\caption{Bandwidth H1 to H4 with concurrent data transfer on H2 to H1 (a) before a failure and (b) after a failure}
\label{fig:evaluation_minimal_network_bandwidth_link_usage_wo_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_wo_sc}
\caption{Bandwidth before a failure}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_wo_sc}
\caption{Bandwidth after a failure}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_b}
\end{subfigure}
\caption{Bandwidth with concurrent data transfer on H2 to H1}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_wo_sc}}
\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_wo_sc}
\caption{Bandwidth H1 to H4 with concurrent data transfer on H2 to H1 - failure occurring after 15 seconds}
\label{fig:evaluation_minimal_network_bandwidth_link_usage_concurrent_wo_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_concurrent_wo_sc}
\end{figure}
In this test we evaluated the bandwidth between H1 and H4 with a concurrent data transfer on H2 to H1. Both transfers were run with a limitation of 100 Mbit(s), which constitutes the maximum allowed bandwidth in this test.
In this test we evaluated the bandwidth between H1 and H4 with a concurrent data transfer on H2 to H1. Both transfers were run with a limitation of \SI{100}{Mbps}, which constitutes the maximum allowed bandwidth in this test.
Before a failure, as can be seen in figure \ref{fig:evaluation_minimal_network_bandwidth_link_usage_wo_sc} (a), the throughput is at around 100 Mbit(s) which is our current maximum. While the additional transfer between H2 and H1 does in fact use some of the links that are also used in our iperf test, namely the link between R1 to R2 and H1 to R1, it does so in a different direction. While the data itself is sent from H1 to H4 over H2, only the tcp acknowledgements are sent on the route back. Data from H2 to H1 is sent from R2 to R1 and therefore only the returning acknowledgements use the link in the same direction, not impacting the achieved throughput.
Before a failure, as can be seen in \cref{fig:evaluation_minimal_bandwidth_link_usage_wo_sc} (a), the throughput is at around \SI{100}{Mbps}, which is our current maximum. While the additional transfer between H2 and H1 does in fact use some of the links also used in our \textit{iperf} test, namely the links from R1 to R2 and from H1 to R1, it does so in the opposite direction. While the data itself is sent from H1 to H4 over R2, only the TCP acknowledgements are sent on the route back. Data from H2 to H1 is sent from R2 to R1, and therefore only the returning acknowledgements use the link in the same direction, not impacting the achieved throughput.
If a failure is introduced however, traffic from H1 does not only loop over R2, using up bandwidth from R2 to R1, it is also using the same path from R1 to R3 for its traffic. Therefore we experience a huge performance drop to around 20-30 Mbit(s). While in theory this will last only for up 5 seconds until the global convergence protocol rewrites the route, the lost data throughput in our network in this timeframe would be around 5s * 75 Mbit/s = 46.875 MByte.
If a failure is introduced, however, traffic from H1 not only loops over R2, using up bandwidth from R2 to R1, but also uses the same path from R1 to R3 for its traffic. We therefore experience a significant performance drop to around \SIrange{20}{30}{Mbps}. While in theory this lasts only until the global convergence protocol rewrites the route, the lost data throughput in our network amounts to around \SI{75}{Mbps} for each second the global convergence protocol takes to rewrite the routing.
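The cost of slow convergence can be sanity-checked with a small calculation, assuming the observed drop of roughly 75 Mbps below the link limit:

```python
def lost_megabytes(throughput_loss_mbps, convergence_seconds):
    """Payload not delivered while routing has not yet converged:
    megabits lost over the whole window, divided by 8 for megabytes."""
    return throughput_loss_mbps * convergence_seconds / 8

# At a loss of ~75 Mbps, a typical convergence time of 5 seconds
# costs 75 * 5 / 8 = 46.875 MB of payload.
```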
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_sc}}
\caption{Bandwidth with concurrent data transfer on h2 to h1 (a) before a failure and (b) after a failure using ShortCut}
\label{fig:evaluation_minimal_network_bandwidth_link_usage_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_sc}
\caption{Bandwidth before a failure}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_sc}
\caption{Bandwidth after a failure}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc_b}
\end{subfigure}
\caption{Bandwidth with concurrent data transfer on H2 to H1 using ShortCut}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_sc}}
\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_sc}
\caption{Bandwidth H1 to H4 with concurrent data transfer on H2 to H1 - failure occurring after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_network_bandwidth_link_usage_concurrent_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_concurrent_sc}
\end{figure}
@@ -108,77 +154,145 @@ If a failure is introduced however, traffic from H1 does not only loop over R2,
\subsubsection{With FRR}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_latency/latency_before_failure_wo_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_latency/latency_after_failure_wo_sc}}
\caption{Latency measured with ping (a) before a failure and (b) after a failure}
\label{fig:evaluation_minimal_network_latency_wo_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_before_failure_wo_sc}
\caption{Latency before a failure}
\label{fig:evaluation_minimal_latency_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_after_failure_wo_sc}
\caption{Latency after a failure}
\label{fig:evaluation_minimal_latency_wo_sc_b}
\end{subfigure}
\caption{Latency measured with ping}
\label{fig:evaluation_minimal_latency_wo_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_wo_sc}}
\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_wo_sc}
\caption{Latency with a concurrent failure after 15 seconds}
\label{fig:evaluation_minimal_network_latency_concurrent_wo_sc}
\label{fig:evaluation_minimal_latency_concurrent_wo_sc}
\end{figure}
As each link adds 5 milliseconds of delay and \textit{ping} logs the difference in time between sending a packet and receiving an answer, the approximate delay would be the amount of links passed \textit{N} multiplied with the delay per link. In our test network there are 6 links between H1 and H4. Because these links are passed twice, one time to H4 and one time back to H1, this results in an approximate delay of 60 milliseconds.
As each link adds \SI{5}{\milli\second} of delay and \textit{ping} logs the time difference between sending a packet and receiving an answer, the approximate delay is the number of links passed, \textit{N}, multiplied by the delay per link. In our test network there are 6 links between H1 and H4. Because these links are passed twice, once towards H4 and once back to H1, this results in an approximate delay of \SI{60}{\milli\second}.
The test run confirmed these assumptions. As can be seen in figure \ref{fig:evaluation_minimal_network_latency_wo_sc} (a) a ping on the network without failure took an average of around 65 milliseconds with slight variations. The additional 5 milliseconds are most likely caused in the routing process on the router.
The test run confirmed these assumptions. As can be seen in \cref{fig:evaluation_minimal_latency_wo_sc} (a), a ping on the network without failure took an average of around \SI{65}{\milli\second} with slight variations. The additional \SI{5}{\milli\second} are most likely caused by the routing process on the routers.
When introducing a failure however, additional links are passed on the way from H1 to H4. Instead of 6 links passed per direction, the network now sends the packets on a sub-optimal path which adds 2 passed links from R1 to R2 and back. These are only passed when sending packets to H4, packets returning from H3 will not take the sub-optimal path. This would, in theory, add around 10 milliseconds of delay to our original results.
When introducing a failure, however, additional links are passed on the way from H1 to H4. Instead of 6 links per direction, the network now sends the packets on a sub-optimal path, which adds 2 passed links, from R1 to R2 and back. These are only passed when sending packets to H4; packets returning from H4 will not take the sub-optimal path. This would, in theory, add around \SI{10}{\milli\second} of delay to our original results.
As can be seen in \ref{fig:evaluation_minimal_network_latency_wo_sc} (b) this is also the case. With an average of around 76 milliseconds of latency the results show an additional delay of around 11 milliseconds when taking the sub-optimal path. The discrepancy between our assumption of 10 milliseconds and the actual added 11 milliseconds might be caused by the additional router that is passed in the direction to H4.
As can be seen in \cref{fig:evaluation_minimal_latency_wo_sc} (b), this is indeed the case. With an average of around \SI{76}{\milli\second} of latency, the results show an additional delay of around \SI{11}{\milli\second} when taking the sub-optimal path. The discrepancy between our assumption of \SI{10}{\milli\second} and the measured \SI{11}{\milli\second} might be caused by the additional router passed in the direction of H4.
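The back-of-the-envelope latency model used above can be written out as follows; the ~\SI{5}{\milli\second} per-RTT routing overhead observed in the measurements is deliberately left out of the model:

```python
LINK_DELAY_MS = 5  # artificial delay configured on every link

def expected_rtt_ms(links_to_dest, extra_links_one_way=0):
    """Round-trip estimate: every link on the forward path is also
    crossed on the way back; a failure adds detour links in the
    forward direction only (the reply path stays optimal)."""
    forward = (links_to_dest + extra_links_one_way) * LINK_DELAY_MS
    backward = links_to_dest * LINK_DELAY_MS
    return forward + backward

# 6 links H1<->H4: model predicts 60 ms; with the 2-link detour over
# R2 after the failure it predicts 70 ms, i.e. +10 ms, matching the
# measured jump from ~65 ms to ~76 ms up to routing overhead.
```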
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_latency/latency_before_failure_sc}}
\fbox{\includegraphics[width=7cm]{tests/minimal_latency/latency_after_failure_sc}}
\caption{Latency measured with ping (a) before a failure and (b) after a failure using ShortCut}
\label{fig:evaluation_minimal_network_latency_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_before_failure_sc}
\caption{Latency before a failure}
\label{fig:evaluation_minimal_latency_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_after_failure_sc}
\caption{Latency after a failure}
\label{fig:evaluation_minimal_latency_sc_b}
\end{subfigure}
\caption{Latency measured with ping using ShortCut}
\label{fig:evaluation_minimal_latency_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_sc}}
\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_sc}
\caption{Latency with a concurrent failure after 15 seconds with ShortCut}
\label{fig:evaluation_minimal_network_latency_concurrent_sc}
\label{fig:evaluation_minimal_latency_concurrent_sc}
\end{figure}
\subsection{Packet flow}
\subsection{Packet flow - TCP}
\subsubsection{With FRR}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_packet_flow/before_failure_wo_sc_graph}}
\fbox{\includegraphics[width=7cm]{tests/minimal_packet_flow/after_failure_wo_sc_graph}}
\caption{Number of packets on all routers (a) before a failure and (b) after a failure}
\label{fig:evaluation_minimal_network_packet_flow_wo_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/before_failure_wo_sc_graph}
\caption{TCP Packets on routers before a failure}
\label{fig:evaluation_minimal_packet_flow_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/after_failure_wo_sc_graph}
\caption{TCP Packets on routers after a failure}
\label{fig:evaluation_minimal_packet_flow_wo_sc_b}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_minimal_packet_flow_wo_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_wo_sc_graph}}
\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_wo_sc_graph}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_minimal_network_packet_flow_concurrent_wo_sc}
\label{fig:evaluation_minimal_packet_flow_concurrent_wo_sc}
\end{figure}
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\fbox{\includegraphics[width=7cm]{tests/minimal_packet_flow/before_failure_sc_graph}}
\fbox{\includegraphics[width=7cm]{tests/minimal_packet_flow/after_failure_sc_graph}}
\caption{Number of packets on all routers using ShortCut (a) before a failure and (b) after a failure}
\label{fig:evaluation_minimal_network_packet_flow_sc}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/before_failure_sc_graph}
\caption{TCP Packets on routers before a failure}
\label{fig:evaluation_minimal_packet_flow_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/after_failure_sc_graph}
\caption{TCP Packets on routers after a failure}
\label{fig:evaluation_minimal_packet_flow_sc_b}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters using ShortCut}
\label{fig:evaluation_minimal_packet_flow_sc}
\end{figure}
\begin{figure}
\centering
\fbox{\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_sc_graph}}
\caption{Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_network_packet_flow_concurrent_sc}
\label{fig:evaluation_minimal_packet_flow_concurrent_sc}
\end{figure}
To show the number of packets being forwarded on each router, we measured the packet flow on all routers of this topology. This is done by counting TCP packets while a concurrent data transfer runs from H1 to H4.
\subsection{Packet flow - UDP}
\subsubsection{With FRR}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_before_wo_sc}
\caption{Packets on routers before a failure}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_after_wo_sc}
\caption{Packets on routers after a failure}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc_b}
\end{subfigure}
\caption{UDP packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow_udp/packet_flow_udp_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_minimal_packet_flow_udp_concurrent_wo_sc}
\end{figure}

@@ -38,6 +38,7 @@ The execution phase contains the actual testing. A command can be executed and i
Additionally, failures can be introduced to the network. The key \textit{failures} contains a list of failures, each defined by a type and a list of commands. The implemented failure types are ``intermediate'' and ``timer''. Intermediate failures are executed after a first run of the execute command, which is repeated after the failure function was called. A failure of type ``timer'' starts a timer at the beginning of the measurement, which executes the defined command after a delay provided through an attribute named ``timing''.
\subsection{Implemented commands}
\label{implementation_commands}
A command can either be a lambda that depends only on the net, which is passed into the lambda by default, or a function definition, i.e. a tuple of the function name and a nested tuple containing all arguments that need to be passed to the function.
The functions are defined in the \textit{mininet\_controller} and cannot be called directly, because the topologies are loaded dynamically and do not know existing function definitions until they are loaded. Because of this, the \textit{mininet\_controller} contains a dictionary called ``functions'', which has the function name as key and an attribute called ``callable'' containing a lambda that calls the function with the provided arguments. In the following we list the existing functions and explain their functionality, which files are created and which output can be used for further testing and evaluation.
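Based on this description, the dispatch mechanism might look roughly like the following sketch. The dictionary keys "functions" and "callable" come from the text, while the example function and its signature are made up for illustration:

```python
# Dispatch table: function name -> entry whose "callable" lambda
# unpacks the nested argument tuple onto the real function.
functions = {
    "measure_latency": {
        "callable": lambda net, args: measure_latency(net, *args),
    },
}

def measure_latency(net, sender, dest_ip, length):
    # Placeholder standing in for the real mininet_controller function.
    return f"{sender} pings {dest_ip} for {length}s"

def run_command(net, command):
    """A command is either a bare lambda taking only `net`, or a
    (name, args-tuple) pair resolved through the dispatch table."""
    if callable(command):
        return command(net)
    name, args = command
    return functions[name]["callable"](net, args)
```

The indirection lets dynamically loaded topology files reference functions by name without importing them.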

@@ -7,7 +7,16 @@ In this chapter we define and perform the tests for our evaluation. For this we
\section{Measurements}
\label{testing_measurements}
To evaluate the performance of a network we established a list of criteria in section \ref{basics_measuring_performance}
To evaluate the performance of a network we established a list of criteria in section \ref{basics_measuring_performance}. Our measurements should reflect these criteria, which is why we implemented corresponding measurement functions in our test framework as described in section \ref{implementation_commands}.
In the following sections we describe the implemented performance tests in detail. An evaluation of the results of said tests will be given in chapter \ref{evaluation}.
\subsection{Bandwidth}
The tests measuring bandwidth are among the most basic tests in our testing framework. They use the ``measure\_bandwidth'' function described in section \ref{measure_bandwidth}.
As each virtual network created in Mininet starts with empty caches on all devices, we run a short \textit{iperf} test on the Mininet network prior to each bandwidth measurement, so that most network handling, such as packets sent by the \textit{address resolution protocol} (ARP), is finished before the actual test starts. This reduces the impact these protocols and mechanisms would otherwise have on the first seconds of the bandwidth measurement.
The bandwidth tests are run using \textit{iperf}. Each bandwidth test is run for each topology in two versions, one with an intermediate failure and one with a concurrent failure. Tests using an intermediate failure define two commands for measurement, one before a failure and one after a failure. The test will execute the first measurement, introduce the failure and then run the second measurement automatically. Each bandwidth measurement in tests with an intermediate failure is run for \SI{15}{\second}. In case of the bandwidth measurement using a concurrent failure the \textit{iperf} test is run for \SI{30}{\second} and a failure is introduced after \SI{15}{\second}.
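The two test variants described above could be expressed as configurations along these lines. The key names here are illustrative, except for "failures", "intermediate", "timer" and "timing", which appear in the framework description:

```python
# Hypothetical test definitions mirroring the two bandwidth variants.
# Intermediate: measure 15 s, fail the link, measure 15 s again.
INTERMEDIATE_TEST = {
    "measurements": [("measure_bandwidth", 15), ("measure_bandwidth", 15)],
    "failures": [{"type": "intermediate", "commands": ["link_down r2 r4"]}],
}

# Concurrent: one 30 s measurement with a timed failure at 15 s.
CONCURRENT_TEST = {
    "measurements": [("measure_bandwidth", 30)],
    "failures": [{"type": "timer", "timing": 15,
                  "commands": ["link_down r2 r4"]}],
}
```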
\section{Failures, FRR and FRMs}
\label{testing_failures}
For each topology that was created there are several configurations which require testing. First and foremost we use two different types of failures to test each topology

@@ -152,6 +152,7 @@
\usepackage{epstopdf}
\usepackage{float}
\usepackage{subcaption}
%% Colors for text, color definitions in color.tex
\usepackage{color}
\usepackage{colortbl}
@@ -310,6 +311,7 @@
\else
\usepackage[anythingbreaks]{breakurl} % only for ps/dvi
\fi
\usepackage{cleveref}
% Make sure the whole text black!
\color{black}