Commit 8014409c39 (parent ec3ebf62ff) on branch master
Author: Frederik Maaßen
Commit message: some changes

Changed files:
  1. implementation/topologies/8r4h_topo.py (6 changes)
  2. thesis/content/begin/abstract.tex (2 changes)
  3. thesis/content/begin/titlepage_english.tex (2 changes)
  4. thesis/content/evaluation/evaluation.tex (2 changes)
  5. thesis/content/evaluation/failure_path_networks.tex (109 changes)
  6. thesis/content/testing/testing.tex (29 changes)
  7. thesis/images/tests/failure_path_1_packet_flow/packet_flow_after_wo_sc.eps (792 changes)
  8. thesis/images/tests/failure_path_1_packet_flow/packet_flow_before_wo_sc.eps (781 changes)
  9. thesis/images/tests/failure_path_1_packet_flow_udp/packet_flow_udp_after_wo_sc.eps (1033 changes)
  10. thesis/images/tests/failure_path_1_packet_flow_udp/packet_flow_udp_before_wo_sc.eps (1030 changes)

@@ -552,9 +552,9 @@ class EightRoutersFourHosts(CustomTopo):
"use_pre_defined_function": True,
"separate_definitions": True,
"command_pre": ("measure_packet_flow", (
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_before_failure", [0, 15000], "UDP Packet flow on routers before failure", "udp", 100)),
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_before_failure", [0, 20000], "UDP Packet flow on routers before failure", "udp", 100)),
"command_post": ("measure_packet_flow", (
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_after_failure", [0, 15000], "UDP Packet flow on routers after failure", "udp", 100)),
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_after_failure", [0, 20000], "UDP Packet flow on routers after failure", "udp", 100)),
},
"failures": [
@@ -579,7 +579,7 @@ class EightRoutersFourHosts(CustomTopo):
"execute": {
"use_pre_defined_function": True,
"command": ("measure_packet_flow", (
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_concurrent_failure", [0, 15000], "UDP Packet flow on routers before failure", "udp", 100)),
'h1', 'h8', '10.8.0.101', ["r1", "r2", "r4", "r7"], 30, 1, "udp_concurrent_failure", [0, 20000], "UDP Packet flow on routers before failure", "udp", 100)),
},
"failures": [

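For orientation, the tuple passed to "measure_packet_flow" in the entries above can be read roughly as sketched below. This is a commented sketch only; the parameter names are assumptions inferred from the values in the diff, not the framework's actual signature.

    # Hypothetical annotation of the positional arguments used above.
    # Parameter names are guesses; only the values appear in the topology file.
    packet_flow_test_command = ("measure_packet_flow", (
        'h1',                         # source host of the measured flow
        'h8',                         # destination host
        '10.8.0.101',                 # destination IP address used by the flow
        ["r1", "r2", "r4", "r7"],     # routers on which packet counters are read
        30,                           # measurement duration in seconds
        1,                            # log interval in seconds
        "udp_before_failure",         # name used for the result files
        [0, 20000],                   # presumably the y-axis range of the plot (raised from 15000)
        "UDP Packet flow on routers before failure",  # plot title
        "udp",                        # protocol to measure
        100))                         # rate of the generated UDP flow (unit assumed)
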
@@ -1,3 +1,3 @@
% !TeX encoding = UTF-8
\chapter*{\iftoggle{lang_eng}{Abstract}{Kurzfassung}}
This is the abstract. It should become clear what the thesis deals with, and the most important statements should be summarized. The guideline is half a page.
In our modern society, the internet and the services it provides play an increasingly important role in our daily lives.

@@ -23,7 +23,7 @@
\begin{center}
\par{}{\Large Comparison of Fast Recovery Methods in Networks}
\vspace*{1cm}
\par{}\textbf{Implementation and Measurement of Fast Recovery Methods in Mininet}
\par{}\textbf{Implementation and Evaluation of Fast Recovery Methods in Mininet}
\vspace*{1cm}
\par{}Frederik Maaßen
% \vspace*{1cm}

@@ -14,7 +14,7 @@ Lastly we discuss our results in \cref{discussion}.
\section{Discussion of results}
\label{discussion}
In this section we discuss our results in the previous measurements. We proceed by comparing the results of different measurement types using the three topologies. For each measurement type we collect the implications of a failure for the network and whether ShortCut is able to enhance results. We start with the bandwidth in \cref{discussion_bandwidth}, continuing to the bandwidth with a second data flow in \cref{discussion_bandwidth_link_usage}. After that we talk about our latency measurements in \cref{discussion_latency} followed by our packet flow measurements using TCP and UDP in \cref{discussion_packet_flow_tcp} and \cref{discussion_packet_flow_udp} respectively.
In this section we discuss the results of our previous measurements. We proceed by comparing the results of different measurement types across the three topologies. For each measurement type we collect the implications of a failure for the network and whether ShortCut is able to improve the results. We start with the bandwidth in \cref{discussion_bandwidth}, continuing with the bandwidth with a second data flow in \cref{discussion_bandwidth_link_usage}. After that we discuss our latency measurements in \cref{discussion_latency}, followed by our packet flow measurements using TCP and UDP in \cref{discussion_packet_flow}.
\subsection{Bandwidth}
\label{discussion_bandwidth}

@@ -69,7 +69,7 @@ When introducing a failure however the two data flows use all links from router
Introducing the failure concurrently with the data transfer causes both bandwidths to drop abruptly, which can be seen in \cref{fig:evaluation_failure_path_1_bandwidth_link_usage_concurrent_wo_sc}. Although the two data flows distribute the bandwidth differently, together they achieve an overall throughput of \SI{100}{Mbps}. We assume that the incoherent distribution of bandwidth is caused by the timing of the data transfers.
The data transfer in \cref{fig:evaluation_failure_path_1_bandwidth_link_usage_wo_sc_b} already starts with the failure in place. Because the \textit{iperf} instance producing the additional data flow is started slightly before our main data flow, Mininet seems to allocate more bandwidth to this transfer. The graph also suggests that both bandwidths approximate each other, suggesting that Mininet tries to, over time, allocate both transfers the same bandwidth.
The data transfer in \cref{fig:evaluation_failure_path_1_bandwidth_link_usage_wo_sc_b} already starts with the failure in place. Because the \textit{iperf} instance producing the additional data flow is started slightly before our main data flow, Mininet seems to allocate more bandwidth to this transfer. The graph in \cref{fig:evaluation_failure_path_1_bandwidth_link_usage_wo_sc_b} also shows both bandwidths approaching each other, suggesting that Mininet tries, over time, to allocate the same bandwidth to both transfers.
Our measurement with a failure occurring concurrently with our data transfers, however, evens the playing field. Both \textit{iperf} instances are already sending data over the network. This could explain the overall more evenly distributed bandwidth, as well as the main data flow even overtaking the additional data flow.
@@ -113,26 +113,26 @@ We measured the latency between host H1 and H6 for our first failure path network
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_before_wo_sc}
\label{fig:evaluation_failure_path_1_latency_wo_sc_a}
\caption{Latency before a failure on 1st failure path network}
\caption{Latency before a failure - 1st network}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_after_wo_sc}
\label{fig:evaluation_failure_path_1_latency_wo_sc_b}
\caption{Latency after a failure on 1st failure path network}
\caption{Latency after a failure - 1st network}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_2_latency/latency_before_wo_sc}
\label{fig:evaluation_failure_path_2_latency_wo_sc_a}
\caption{Latency before a failure on 2nd failure path network}
\caption{Latency before a failure - 2nd network}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_2_latency/latency_after_wo_sc}
\label{fig:evaluation_failure_path_2_latency_wo_sc_b}
\caption{Latency after a failure on 2nd failure path network}
\caption{Latency after a failure - 2nd network}
\end{subfigure}
\caption{Latency measured with \textit{ping} on both failure path networks}
\label{fig:evaluation_failure_path_1_latency_wo_sc}
@@ -180,68 +180,33 @@ Similar to our results when measuring the minimal topology in \cref{evaluation_m
\subsection{Packet flow - TCP}
\label{failure_path_tcp_packet_flow}
We measure the number of TCP packets forwarded in our failure path networks. For this we attach \textit{nftables} counters to four routers in each of these topologies.
\subsubsection{With FRR}
\label{failure_path_1_packet_flow_with_frr}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_before_wo_sc}
\label{fig:evaluation_failure_path_1_packet_flow_wo_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_after_wo_sc}
\label{fig:evaluation_failure_path_1_packet_flow_wo_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_failure_path_1_packet_flow_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/packet_flow_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_failure_path_1_packet_flow_concurrent_wo_sc}
\end{figure}
\subsubsection{With FRR and ShortCut}
\label{failure_path_1_packet_flow_with_frr_and_shortcut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_before_sc}
\label{fig:evaluation_failure_path_1_packet_flow_sc_a}
\caption{TCP Packets on routers before a failure}
\caption{TCP Packets on routers after a failure - 1st network}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_before_sc}
\label{fig:evaluation_failure_path_1_packet_flow_sc_b}
\caption{TCP Packets on routers after a failure}
\includegraphics[width=\textwidth]{tests/failure_path_2_packet_flow/packet_flow_after_wo_sc}
\label{fig:evaluation_failure_path_2_packet_flow_wo_sc_b}
\caption{TCP Packets on routers after a failure - 2nd network}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters using Shortcut}
\label{fig:evaluation_failure_path_1_packet_flow_sc}
\caption{TCP Packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_failure_path_packet_flow_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/packet_flow_concurrent_sc}
\caption{TCP Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_failure_path_1_packet_flow_concurrent_sc}
\end{figure}
\subsubsection{With FRR and ShortCut}
ShortCut was able to cut off the loop and therefore reduce the number of forwarded packets back to the original amount. Because this is similar to the behaviour in \cref{evaluation_minimal_tcp_packet_flow}, we omitted additional graphs from our results.
\subsection{Packet flow - UDP}
\subsubsection{With FRR}
@@ -249,50 +214,20 @@ Similar to our results when measuring the minimal topology in \cref{evaluation_m
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_before_wo_sc}
\caption{Packets on routers before a failure}
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_after_wo_sc}
\caption{UDP packet flow after a failure - 1st network}
\label{fig:evaluation_failure_path_1_packet_flow_udp_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_after_wo_sc}
\caption{Packets on routers after a failure}
\label{fig:evaluation_failure_path_1_packet_flow_udp_wo_sc_b}
\includegraphics[width=\textwidth]{tests/failure_path_2_packet_flow_udp/packet_flow_udp_after_wo_sc}
\caption{UDP packet flow after a failure - 2nd network}
\label{fig:evaluation_failure_path_2_packet_flow_udp_wo_sc_b}
\end{subfigure}
\label{fig:evaluation_failure_path_1_packet_flow_udp_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_failure_path_1_packet_flow_udp_concurrent_wo_sc}
\caption{UDP packet flow on four routers for both failure path networks after a failure}
\label{fig:evaluation_failure_path_packet_flow_udp_wo_sc}
\end{figure}
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_before_sc}
\caption{UDP Packets on routers before a failure}
\label{fig:evaluation_failure_path_1_packet_flow_udp_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_after_sc}
\caption{UDP Packets on routers after a failure}
\label{fig:evaluation_failure_path_1_packet_flow_udp_sc_b}
\end{subfigure}
\label{fig:evaluation_failure_path_1k_packet_flow_udp_sc}
\caption{UDP packets on all routers using ShortCut}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow_udp/packet_flow_udp_concurrent_sc}
\caption{Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_failure_path_1_packet_flow_udp_concurrent_sc}
\end{figure}
Similar to the behaviour of ShortCut in our TCP packet flow measurements, it was able to restore the routers to their original number of forwarded packets. Because this behaviour does not leave any room for interpretation, we omitted graphs for these measurements from our results.

@@ -7,7 +7,7 @@ In this chapter we define and perform the tests for our evaluation. For this we
\section{Measurements}
\label{testing_measurements}
To evaluate the performance of a network we established a list of criteria in \cref{basics_measuring_performance}. Our measurements should reflect these criteria, which is why we implemented corresponding measurement functions in our test framework as described in \cref{implementation_commands}.
In the following sections we describe the implemented performance tests in detail. An evaluation of the results of the tests described here will be given in \cref{evaluation}.
@@ -36,22 +36,37 @@ Before each test run a separate \textit{iperf} measurement is run to fill caches
The test is run for \SI{30}{\second} with a log interval of \SI{1}{\second}.
For the first running measurement H1 is used as \textit{iperf} client in all topologies, while the server changes depending on the topology, similar to the configuration in \cref{testing_bandwidth}
For the first running measurement, host H1 is used as the \textit{iperf} client in all topologies, while the server changes depending on the topology, similar to the configuration in \cref{testing_bandwidth}.
The second measurement however always uses H1 as an \textit{iperf} server, shifting the client depending on the topology. The client is always the host attached to the router on the top path nearest to the failure point, which is H2 in the minimal topology described in \cref{testing_minimum_network}, H3 in the first failure path network described in \cref{testing_failure_path} and H4 in the second failure path network, which is also described in \cref{testing_failure_path}.
The second measurement however always uses host H1 as an \textit{iperf} server, shifting the client depending on the topology. The client is always the host attached to the router on the top path nearest to the failure point, which is host H2 in the minimal topology described in \cref{testing_minimum_network}, host H3 in the first failure path network described in \cref{testing_failure_path} and host H4 in the second failure path network, which is also described in \cref{testing_failure_path}.
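As an illustration of the setup described above, such a measurement could also be run by hand in Mininet roughly as sketched below. This is a minimal sketch assuming a started Mininet network object net with hosts named 'h1' and 'h8' (as in the first failure path network); the test framework automates these steps and its internals may differ.

    # Minimal sketch: warm-up run followed by a 30-second iperf measurement
    # between two Mininet hosts, logged every second.
    h1, h8 = net.get('h1'), net.get('h8')

    h8.cmd('iperf -s &')                    # start an iperf server on the receiving host
    h1.cmd('iperf -c %s -t 5' % h8.IP())    # short warm-up transfer to fill caches

    result = h1.cmd('iperf -c %s -t 30 -i 1' % h8.IP())  # the actual measurement
    print(result)
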
\subsection{TCP packet flow}
\subsection{Packet flow}
The test uses the "measure\_packet\_flow" function described in \textit{measure\_packet\_flow} in \cref{implementation_commands}.
For each topology we chose four routers for which the packet counters should be implemented. For our minimal topology this choice was easy - we just implemented packet counters on all routers. In our failure path networks however we
For each topology we choose four routers on which packet counters are installed. This covers all routes packets could take, as some routers are connected in series and forward each packet they receive.
There are only four possible distinct results for the number of packets forwarded in our networks. The first is router R1 in each topology, as this router is the entry point to our loop after introducing a failure. Router R1 is in the unique position that it receives all packets sent over the loop twice, but also receives all packets sent back to host H1, e.g. acknowledgements sent by TCP.
The end point of the created loops is our second point of interest. This is router R2 for our minimal topology, router R3 for the first failure path network and router R4 for our second failure path network. These routers receive all packets passing the loop only once, as they return them to the sending router.
All routers between the start and end point of the loop forward each packet passed into the loop twice, once "upwards" and once "downwards" back to the start point. As such, each router in this position forwards the exact same number of packets. In case of our minimal topology there is no router that fits this case. Router R2 is the only router in this position in our first failure path network. For our second failure path network we get to choose between routers R2 and R3. We choose router R2.
The fourth and final point of interest is a router on the alternative path, which receives all packets in case of a failure. For our minimal topology this is either router R3 or R4, as both receive the same number of packets. Because we only have four routers in our minimal topology, we simply start counters on all of them. In the first failure path network we can choose between routers R4, R5 and R6, and in our second failure path network we can choose between routers R5, R6, R7 and R8. We choose router R5 for the first and router R7 for the second failure path network.
We execute a basic \textit{iperf} bandwidth measurement prior to our packet flow measurement to fill caches.
All packet flow measurements are run for \SI{30}{\second} with a log interval of \SI{1}{\second}.
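The nftables counters mentioned above could in principle be attached to a router from within Mininet as sketched below. This is a minimal sketch assuming a running Mininet network object net with a router node 'r1'; the framework's actual counter rules (e.g. matching only the TCP or UDP traffic of the measured flow) may be more specific.

    # Minimal sketch: install an nftables counter on the forward hook of a
    # Mininet router node and read it back later for plotting.
    r1 = net.get('r1')

    r1.cmd('nft add table inet counting')
    r1.cmd("nft 'add chain inet counting forward "
           "{ type filter hook forward priority 0; policy accept; }'")
    r1.cmd('nft add rule inet counting forward counter')

    # Reading the packet/byte counts, e.g. once per log interval:
    print(r1.cmd('nft list chain inet counting forward'))
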
\subsection{UDP packet flow}
\section{Failures, FRR and FRMs}
\label{testing_failures}
Each test is run for each topology in two versions, one with an intermediate failure and one with a concurrent failure. Tests using an intermediate failure define two commands for measurement, one before a failure and one after a failure. The test will execute the first measurement, introduce the failure by running the "connection\_shutdown" command described in \textit{connection\_shutdown} in \cref{implementation_commands} and then run the second measurement automatically.
In case of a concurrent failure the measurement is started and the failure is introduced during the run-time of the measurement.
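The two variants map onto the two test-definition shapes visible in the topology file at the top of this commit. A condensed sketch follows (field names taken from the diff above, concrete measurement arguments omitted):

    # Sketch of both failure variants; "..." stands for the measurement
    # arguments shown in the topology file above.
    intermediate_failure_test = {
        "execute": {
            "use_pre_defined_function": True,
            "separate_definitions": True,
            "command_pre": ("measure_packet_flow", (...)),   # run before the failure
            "command_post": ("measure_packet_flow", (...)),  # run after the failure
        },
        "failures": [
            # failure introduced between command_pre and command_post,
            # e.g. via the framework's "connection_shutdown" command
        ],
    }

    concurrent_failure_test = {
        "execute": {
            "use_pre_defined_function": True,
            "command": ("measure_packet_flow", (...)),  # single measurement; the
                                                        # failure occurs while it runs
        },
        "failures": [
            # failure introduced during the run-time of the measurement
        ],
    }
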
Each of these tests is also run once only using FRR and once using FRR and our implementation of ShortCut.
\section{Performing tests}
\label{testing_performing}
Tests are performed using the command line interface (CLI) of the test framework
Tests are performed using the command line interface (CLI) of the test framework described in \cref{command_line_interface}. Each test will plot its results automatically.