current state

master
Frederik Maaßen 2 years ago
parent ae9a1c4980
commit a1d782f711
  1. thesis/content/evaluation/failure_path_networks.tex (23)
  2. thesis/content/evaluation/minimal_network.tex (16)
  3. thesis/content/implementation/implementation.tex (4)
  4. thesis/content/introduction.tex (14)
  5. thesis/images/tests/minimal_packet_flow/after_failure_sc_graph.eps (1057)
  6. thesis/images/tests/minimal_packet_flow/after_failure_wo_sc_graph.eps (1063)
  7. thesis/images/tests/minimal_packet_flow/before_failure_sc_graph.eps (1056)
  8. thesis/images/tests/minimal_packet_flow/before_failure_wo_sc_graph.eps (1056)
  9. thesis/images/tests/minimal_packet_flow/concurrent_failure_sc_graph.eps (1057)
  10. thesis/images/tests/minimal_packet_flow/concurrent_failure_wo_sc_graph.eps (1060)

@@ -94,19 +94,20 @@ The longer failure path however implicates that the impact in a realistic enviro
\subsection{Latency}
\subsubsection{With FRR}
\label{failure_path_1_latency_with_frr}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_before_failure_wo_sc}
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_before_wo_sc}
\label{fig:evaluation_failure_path_1_latency_wo_sc_a}
\caption{Latency before a failure}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_after_failure_wo_sc}
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_after_wo_sc}
\label{fig:evaluation_failure_path_1_latency_wo_sc_b}
\caption{Latency after a failure}
\end{subfigure}
@@ -129,14 +130,14 @@ The longer failure path however implicates that the impact in a realistic enviro
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_before_failure_sc}
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_before_sc}
\label{fig:evaluation_failure_path_1_latency_sc_a}
\caption{Latency before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_after_failure_sc}
\includegraphics[width=\textwidth]{tests/failure_path_1_latency/latency_after_sc}
\label{fig:evaluation_failure_path_1_latency_sc_b}
\caption{Latency after a failure}
\end{subfigure}
@@ -160,14 +161,14 @@ The longer failure path however implicates that the impact in a realistic enviro
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/before_failure_wo_sc_graph}
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_before_wo_sc}
\label{fig:evaluation_failure_path_1_packet_flow_wo_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/after_failure_wo_sc_graph}
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_after_wo_sc}
\label{fig:evaluation_failure_path_1_packet_flow_wo_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
@@ -176,7 +177,7 @@ The longer failure path however implicates that the impact in a realistic enviro
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/concurrent_failure_wo_sc_graph}
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/packet_flow_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_failure_path_1_packet_flow_concurrent_wo_sc}
\end{figure}
@@ -191,14 +192,14 @@ The longer failure path however implicates that the impact in a realistic enviro
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/before_failure_sc_graph}
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_before_sc}
\label{fig:evaluation_failure_path_1_packet_flow_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/before_failure_sc_graph}
\includegraphics[width=\textwidth]{tests/failure_path_1_packet_flow/packet_flow_after_sc}
\label{fig:evaluation_failure_path_1_packet_flow_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
@@ -208,7 +209,7 @@ The longer failure path however implicates that the impact in a realistic enviro
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/concurrent_failure_sc_graph}
\includegraphics[width=10cm]{tests/failure_path_1_packet_flow/packet_flow_concurrent_sc}
\caption{TCP Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_failure_path_1_packet_flow_concurrent_sc}
\end{figure}
@@ -233,7 +234,7 @@ The longer failure path however implicates that the impact in a realistic enviro
\caption{Packets on routers after a failure}
\label{fig:evaluation_failure_path_1_packet_flow_udp_wo_sc_b}
\end{subfigure}
\label{fig:evaluation_failure_path_1k_packet_flow_udp_wo_sc}
\label{fig:evaluation_failure_path_1_packet_flow_udp_wo_sc}
\end{figure}
\begin{figure}

@@ -118,7 +118,9 @@ In this test we evaluated the bandwidth between H1 and H4 with a concurrent data
Before a failure, as can be seen in \cref{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_a}, the throughput is at around \SI{100}{Mbps}, which is our current maximum. While the additional transfer between H2 and H1 does in fact use some of the links that are also used in our \textit{iperf} test, namely the links between R1 and R2 and between H1 and R1, it does so in a different direction. While the data itself is sent from H1 to H4 over R2, only the TCP acknowledgements are sent on the route back. Data from H2 to H1 is sent from R2 to R1 and therefore only the returning acknowledgements use the link in the same direction, not impacting the achieved throughput.
If a failure is introduced however, traffic from H1 does not only loop over R2, using up bandwidth from R2 to R1, it is also using the same path from R1 to R3 for its traffic. Therefore we experience a huge performance drop to around \SIrange{20}{30}{Mbps}. While in theory this will last until the global convergence protocol rewrites the route, the lost data throughput in our network in this time frame on this route would be around \SI{75}{Mbps}.
If a failure is introduced however, traffic from H1 loops over R2, using up bandwidth on a link that is also used by the additional data flow. Therefore we experience a huge performance drop to around \SIrange{20}{30}{Mbps}, while the additional data flow drops in performance to around \SI{80}{Mbps}, as can be seen in \cref{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_b}. From a network perspective, this results in a combined loss of about 50\% of throughput: while the amount of traffic sent through the network before the failure amounted to \SI{200}{Mbps}, it drops to a combined \SI{100}{Mbps} after the failure.
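For illustration, taking the midpoint of the measured \SIrange{20}{30}{Mbps} as an approximation, the combined throughput before and after the failure is roughly
\[
\SI{100}{Mbps} + \SI{100}{Mbps} = \SI{200}{Mbps} \quad \text{(before failure)}
\qquad \text{vs.} \qquad
\SI{25}{Mbps} + \SI{80}{Mbps} \approx \SI{105}{Mbps} \quad \text{(after failure)}.
\]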
During the bandwidth measurement in \cref{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_b} there are small drops in performance.
\subsubsection{With FRR and ShortCut}
@@ -147,7 +149,7 @@ If a failure is introduced however, traffic from H1 does not only loop over R2,
\caption{Bandwidth H1 to H4 with concurrent data transfer from H2 to H1 - failure occurring after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_bandwidth_link_usage_concurrent_sc}
\end{figure}
When activating our implementation of ShortCut, no significant change in the measured values can be observed. This is due to the removal of the looped path, effectively allowing both data transfers to run at full bandwidth. This completely restores the original combined throughput of \SI{200}{Mbps} achieved by both data transfers.
\subsection{Latency}
\label{evaluation_minimal_latency}
@@ -235,14 +237,14 @@ To show the amount of TCP packets being forwarded on each router, we measured th
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/before_failure_wo_sc_graph}
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/packet_flow_before_wo_sc}
\label{fig:evaluation_minimal_packet_flow_wo_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/after_failure_wo_sc_graph}
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/packet_flow_after_wo_sc}
\label{fig:evaluation_minimal_packet_flow_wo_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
@@ -251,7 +253,7 @@ To show the amount of TCP packets being forwarded on each router, we measured th
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_wo_sc_graph}
\includegraphics[width=10cm]{tests/minimal_packet_flow/packet_flow_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_minimal_packet_flow_concurrent_wo_sc}
\end{figure}
@@ -296,7 +298,7 @@ Reconfiguration of routers in Mininet does not reset the \textit{nftables} count
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_sc_graph}
\includegraphics[width=10cm]{tests/minimal_packet_flow/packet_flow_concurrent_sc}
\caption{TCP Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_packet_flow_concurrent_sc}
\end{figure}
@@ -333,7 +335,7 @@ We repeated the packet flow test in \cref{tcp_packet_flow} using UDP to inspect
\label{fig:evaluation_minimal_packet_flow_udp_concurrent_wo_sc}
\end{figure}
When running the packet flow test measuring UDP packets the amount of packets changed drastically when compared to TCP packets. \textit{iperf} uses different packet sizes for each protocol, sending TCP packet with a size of \SI{128}{\kilo\byte} and UDP packets with only a size of \SI{8}{\kilo\byte} (\cite{Dugan.2016}). The same amount of data transmitted should therefore produce a packet count roughly 16 times higher when using UDP compared to TCP. TCP however, as can be seen in \cref{fig:evaluation_minimal_packet_flow_wo_sc_a}, sends around 1000 packets per second when running a bandwidth measurement limited by the overall bandwidth limit on the network of \SI{100}{\mega\bit\per\second}. A naive assumption would be that UDP should sent 16000 packets per second over the network, but that does match with our test results seen in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}, where only around 7800 packets per second are logged on the routers.
When running the packet flow test measuring UDP packets, the number of packets was much higher compared to TCP. \textit{iperf} uses different packet sizes for each protocol, sending TCP packets with a size of \SI{128}{\kilo\byte} and UDP packets with a size of only \SI{8}{\kilo\byte} (\cite{Dugan.2016}). The same amount of data transmitted should therefore produce a packet count roughly 16 times higher when using UDP compared to TCP. TCP however, as can be seen in \cref{fig:evaluation_minimal_packet_flow_wo_sc_a}, causes the routers to log around 1000 packets per second when running a bandwidth measurement limited by the overall bandwidth limit on the network of \SI{100}{\mega\bit\per\second}. A naive assumption would be that UDP should send 16000 packets per second over the network, but that does not match our test results seen in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}, where only around 7800 packets per second are logged on the routers.
The reason for this is also the key difference between TCP and UDP: TCP uses acknowledgements (ACKs) to confirm the transmission of packets. These are packets returning from the \textit{iperf} server to the client. For each received data packet, the server sends back an ACK. If no ACK is received for a packet, the client resends the missing packet. This causes the network to transmit twice the number of packets, one half containing the actual data and one half only containing ACKs.
UDP, however, simply sends packets on their way and does not evaluate whether they actually reach their destination. Therefore all UDP packets contain data and no additional packets for confirmation, congestion control etc. are sent over the network.
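As a rough sanity check of this explanation, assuming that about half of the roughly 1000 packets per second logged for TCP are ACKs and only the other half carries data, the expected UDP packet rate would be
\[
\frac{1000}{2} \cdot 16 = 8000 \ \text{packets per second},
\]
which is close to the approximately 7800 packets per second observed in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}.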

@@ -1,7 +1,9 @@
\chapter{Implementation}
In the following chapter we implement an exemplary network in Mininet, including routing between hosts and routers.
We also implement fast re-routing as well as ShortCut. In section \ref{sec:test_network} we explain the test framework that we built for performing tests. In section \ref{implementation_rrt} we then explain how we implemented FRR in the test framework.
Lastly we talk about our implementation of ShortCut in \ref{implementation_shortcut}
Lastly we talk about our implementation of ShortCut in \ref{implementation_shortcut}.
All implementations, the thesis itself and the measurements can be accessed via the Git repository for this thesis (\cite{Maaen.052022}).
\input{content/implementation/test_network}

@@ -27,7 +27,7 @@ Resilient Routing Layers pre-computes alternative routing tables, switching betw
ShortCut uses information about an incoming packet to determine whether the packet has returned to the router, building on already existing FRR implementations. In case a packet returns, it removes the route with the highest priority from the routing table, assuming that this path is no longer available.
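To illustrate this behaviour, the following is a minimal, purely hypothetical sketch of such a reaction to a returning packet; the data structures and names used here are illustrative only and do not correspond to the actual prototype described in \ref{implementation_shortcut}:
\begin{verbatim}
import ipaddress

# Hypothetical, simplified sketch of ShortCut's reaction to a returning packet.
# 'routes' is a list of dicts with keys "prefix", "out_interface" and "priority";
# the packet is a dict with keys "dst" and "in_interface".
def handle_returning_packet(packet, routes):
    matching = [r for r in routes
                if ipaddress.ip_address(packet["dst"]) in ipaddress.ip_network(r["prefix"])]
    best = min(matching, key=lambda r: r["priority"])     # route with highest priority
    if packet["in_interface"] == best["out_interface"]:   # packet came back to this router
        routes.remove(best)                               # assume the primary path has failed
        # subsequent packets follow the next-best (backup) route installed by FRR
\end{verbatim}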
Revive installs backup routes prior
Revive installs backup routes prior WRITE THIS
@@ -39,6 +39,14 @@ Older FRMs have already been evaluated thoroughly and even though they do work i
\section{Contribution}
- in this context we use mininet, a tool to create virtual networks to implement and test these recovery methods
- by creating multiple network structures and failure scenarios we try to evaluate those mechanisms and compare them based on their performance
In this context we use Mininet (\cite{LantzBobandtheMininetContributors.}), a tool to create virtual networks, to implement multiple topologies including their routing. We then implement a simple FRR mechanism that re-routes returning packets onto an alternative path.
To test ShortCut we provide a prototype implementation using \textit{nftables} (\cite{AyusoPabloNeiraandKadlecsikJozsefandLeblondEricandWestphalFlorianandGonzalezArtur.}) and Python 3.8 (\cite{vanRossum.2009}) that can be installed on any Linux-based router.
We developed a testing framework that can be used to automatically create Mininet topologies, formulate tests as Python dictionaries using existing measurement functions, set network-wide bandwidth limits or delays, and run automated sets of tests.
The framework can be called using an argument-based \textit{command line interface} (CLI).
Using this framework we test several topologies with FRR, both with and without ShortCut, and discuss the results, showing the usefulness and resource efficiency of the FRM ShortCut.
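To give an impression of how such a test definition might look, the following is a purely illustrative example; the keys, values and measurement names shown here are hypothetical and the actual schema used by the framework may differ:
\begin{verbatim}
# Purely illustrative example of a test formulated as a Python dictionary;
# key names and measurement identifiers are hypothetical.
example_test = {
    "topology": "minimal",                    # which Mininet topology to build
    "bandwidth_limit_mbps": 100,              # network-wide bandwidth limit
    "failure": {"link": ("r2", "r3"),         # link to fail during the test
                "after_seconds": 15},
    "measurements": ["bandwidth", "latency", "packet_flow"],
    "use_shortcut": True,                     # enable the ShortCut prototype
}
\end{verbatim}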