\section{Minimal network}
\label{eva_minimal_network}
\begin{figure}
\centering
\fbox{\includegraphics[width=12cm]{testing_4r3h}}
\caption{Minimal network}
\label{fig:evaluation_minimal_network}
\end{figure}
\subsection{Bandwidth}
\label{evaluation_minimal_bandwidth}
We performed multiple tests of how failures influence the bandwidth. These tests were run using \textit{iperf} with a logging interval of \SI{0.5}{\second}. All data was collected from the output of the \textit{iperf} server.
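For illustration, a minimal sketch of how such a measurement could be started from within our Mininet-based test framework is shown below; the \textit{net} object, the host names and the test duration are placeholders, and the actual invocation in our framework may differ.
\begin{verbatim}
# Sketch of a bandwidth measurement between two Mininet hosts.
# The net object, host names and duration are placeholders.
h1, h4 = net.get('h1'), net.get('h4')

# Start the iperf server on H4, logging every 0.5 seconds.
server = h4.popen('iperf -s -i 0.5')

# Run the iperf client on H1 against the server for 30 seconds.
print(h1.cmd('iperf -c %s -i 0.5 -t 30' % h4.IP()))

server.terminate()
\end{verbatim}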
\subsubsection{With FRR}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_before_wo_sc}
\label{fig:evaluation_minimal_bandwidth_wo_sc_a}
\caption{Bandwidth before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_after_wo_sc}
\label{fig:evaluation_minimal_bandwidth_wo_sc_b}
\caption{Bandwidth after a failure}
\end{subfigure}
\caption{Bandwidth measured with \textit{iperf} from H1 to H4}
\label{fig:evaluation_minimal_bandwidth_wo_sc}
\end{figure}
We performed a TCP bandwidth test on the minimal network, a topology with 4 routers and 3 hosts. In \cref{fig:evaluation_minimal_bandwidth_wo_sc} the failure occurred after the first run and before the second run of the test. The bandwidth does not change between the runs. This is to be expected, as additional hops on the path of a packet do not influence the total throughput that can be achieved, and while the traffic passes the looped path in both directions, the full-duplex nature of Ethernet connections does not impose any limitation in this regard.
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_wo_sc}
\caption{Bandwidth measured for 30 seconds, introducing a failure after 15 seconds}
\label{fig:evaluation_minimal_bandwidth_concurrent_wo_sc}
\end{figure}
In \cref{fig:evaluation_minimal_bandwidth_concurrent_wo_sc}, however, we introduced the failure while the bandwidth test was running. The test ran for \SI{30}{\second} and the failure was introduced at around \SI{15}{\second}, which caused no visible performance drop. In some executions of this test, however, the performance did drop when the failure was introduced, and the log output of the sending client reported the need to resend up to 100 packets. Because this behaviour occurs only sporadically, we assume it to be a timing issue.
When the connection between two routers is cut, our test framework uses the Mininet Python API to deactivate the corresponding interfaces on both affected routers. This is done in sequence, as sketched below: in this example the interface on router R2 was deactivated first and the interface on router R4 second. We implemented this behaviour after observing the default behaviour of the Mininet network. If the connection between e.g. router R2 and router R4 were cut only by deactivating the interface on router R4, router R2 would not recognize the failure and would lose all packets sent to the link. Because we deactivate the interfaces in sequence and the Mininet Python API introduces a delay to the operation, the interface on R2 is already deactivated while the interface on R4 still receives packets that are on the link and, for a short period of time, continues to send packets to the deactivated interface on R2. All packets sent to R2 in this time period are lost. However, because the \textit{iperf} server itself does not send any actual data, but only acknowledgements (ACKs) for already received data, only ACKs are lost this way.
TCP (\cite{InformationSciencesInstituteUniversityofSouthernCalifornia.1981}), however, does not necessarily resend lost ACKs, and the client does not necessarily resend all packets for which it did not receive an ACK. Data for which the ACKs were lost can still be implicitly acknowledged if, for example, it belongs to the same window as subsequent packets whose ACKs do reach the client. This can lead to a situation in which the server has already received the data, but the client is only notified of the successful transfer with a delay.
This could cause some test runs to produce more packet loss than others, with most transfers experiencing no packet loss at all.
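A minimal sketch of this failure injection, assuming the link between R2 and R4 fails and using placeholder interface names, could look as follows:
\begin{verbatim}
# Sketch of the sequential interface deactivation used to imitate
# a link failure between R2 and R4. Interface names are placeholders.
r2, r4 = net.get('r2'), net.get('r4')

# Deactivate the interface on R2 first ...
r2.cmd('ifconfig r2-eth2 down')
# ... and the corresponding interface on R4 afterwards.
r4.cmd('ifconfig r4-eth1 down')
\end{verbatim}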
In our further tests we observed that the bandwidth alone does not change significantly between different types of topologies. We therefore omit the evaluation of the bandwidth for the remaining topologies.
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_before_sc}
\label{fig:evaluation_minimal_bandwidth_sc_a}
\caption{Bandwidth before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth/bandwidth_after_sc}
\label{fig:evaluation_minimal_bandwidth_sc_b}
\caption{Bandwidth after a failure}
\end{subfigure}
\caption{Bandwidth measured with \textit{iperf} from H1 to H4 using ShortCut}
\label{fig:evaluation_minimal_bandwidth_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_bandwidth/bandwidth_concurrent_sc}
\caption{Bandwidth measured for 30 seconds, introducing a failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_bandwidth_concurrent_sc}
\end{figure}
As can be seen in \cref{fig:evaluation_minimal_bandwidth_sc} and \cref{fig:evaluation_minimal_bandwidth_concurrent_sc}, using ShortCut had no further influence on the achieved throughput. This is to be expected, as a longer or shorter path only influences throughput if, for example, a link with a lower bandwidth is part of the additional path.
\subsection{Two concurrent data transfers}
\label{evaluation_minimal_bandwidth_link_usage}
In this test we evaluated the bandwidth between H1 and H4 with a concurrent data transfer from H2 to H1. Both transfers were run with a limit of \SI{100}{\mega\bit\per\second}, which constitutes the maximum allowed bandwidth in this test.
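The bandwidth limit, like the per-link delay used in the latency tests in \cref{evaluation_minimal_latency}, can be configured per link in Mininet. A minimal sketch, assuming the topology is built with \textit{TCLink} and using placeholder node names:
\begin{verbatim}
# Sketch of a rate-limited and delayed link in Mininet.
# Node names are placeholders; bw is given in Mbit/s.
from mininet.link import TCLink

net.addLink(r1, r2, cls=TCLink, bw=100, delay='5ms')
\end{verbatim}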
\subsubsection{With FRR}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_wo_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_a}
\caption{Bandwidth before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_wo_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_b}
\caption{Bandwidth after a failure}
\end{subfigure}
\caption{Bandwidth with concurrent data transfer from H2 to H1}
\label{fig:evaluation_minimal_bandwidth_link_usage_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_wo_sc}
\caption{Bandwidth H1 to H4 with concurrent data transfer from H2 to H1 - failure occurring after 15 seconds}
\label{fig:evaluation_minimal_bandwidth_link_usage_concurrent_wo_sc}
\end{figure}
Before a failure, as can be seen in \cref{fig:evaluation_minimal_bandwidth_link_usage_wo_sc_a}, the throughput is at around \SI{100}{\mega\bit\per\second}, which is our current maximum. While the additional transfer from H2 to H1 does in fact use some of the links that are also used in our \textit{iperf} test, namely the links from R1 to R2 and from H1 to R1, it does so in the opposite direction. While the data itself is sent from H1 to H4 over R2, only the TCP acknowledgements are sent on the route back. Data from H2 to H1 is sent from R2 to R1, and therefore only the returning acknowledgements use these links in the same direction, which does not impact the achieved throughput.
If a failure is introduced, however, traffic from H1 not only loops over R2, using up bandwidth from R2 to R1, it also uses the same path from R1 to R3 for its traffic. We therefore experience a significant performance drop to around \SIrange{20}{30}{\mega\bit\per\second}. While in theory this only lasts until the global convergence protocol rewrites the route, the lost throughput on this route during this time frame amounts to around \SI{75}{\mega\bit\per\second}.
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_before_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc_a}
\caption{Bandwidth before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_after_sc}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc_b}
\caption{Bandwidth after a failure}
\end{subfigure}
\caption{Bandwidth with concurrent data transfer from H2 to H1 using ShortCut}
\label{fig:evaluation_minimal_bandwidth_link_usage_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_bandwidth_link_usage/bandwidth_link_usage_concurrent_sc}
\caption{Bandwidth H1 to H4 with concurrent data transfer from H2 to H1 - failure occurring after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_bandwidth_link_usage_concurrent_sc}
\end{figure}
\subsection{Latency}
\label{evaluation_minimal_latency}
In the following sections we evaluate the latency measurements run on the minimal topology with 4 routers and 3 hosts, first with only FRR in \textit{With FRR} and then with our implementation of ShortCut in \textit{With FRR and ShortCut}.
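A minimal sketch of how such a \textit{ping} measurement could be started from within our Mininet-based test framework, with placeholder host names and packet count:
\begin{verbatim}
# Sketch of a latency measurement from H1 to H4 using ping.
# Host names and the packet count are placeholders.
h1, h4 = net.get('h1'), net.get('h4')

# Send one ICMP echo request every 0.5 seconds, 60 times in total.
print(h1.cmd('ping -i 0.5 -c 60 %s' % h4.IP()))
\end{verbatim}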
\subsubsection{With FRR}
\label{minimal_latency_with_frr}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_before_failure_wo_sc}
\label{fig:evaluation_minimal_latency_wo_sc_a}
\caption{Latency before a failure}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_after_failure_wo_sc}
\label{fig:evaluation_minimal_latency_wo_sc_b}
\caption{Latency after a failure}
\end{subfigure}
\caption{Latency measured with ping}
\label{fig:evaluation_minimal_latency_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_wo_sc}
\caption{Latency with a concurrent failure after 15 seconds}
\label{fig:evaluation_minimal_latency_concurrent_wo_sc}
\end{figure}
As each link adds \SI{5}{\milli\second} of delay and \textit{ping} logs the time difference between sending a packet and receiving an answer, the approximate delay is the number of links passed, \textit{N}, multiplied by the delay per link. In our test network there are 6 links between H1 and H4. Because these links are passed twice, once towards H4 and once back to H1, this results in an approximate delay of \SI{60}{\milli\second}.
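Expressed as a formula, with $N = 6$ links per direction and a per-link delay of $d = \SI{5}{\milli\second}$, the expected round-trip time is
\[
t_{\mathrm{RTT}} \approx 2 \cdot N \cdot d = 2 \cdot 6 \cdot \SI{5}{\milli\second} = \SI{60}{\milli\second}.
\]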
The test run confirmed these assumptions. As can be seen in \cref{fig:evaluation_minimal_latency_wo_sc_a}, a ping on the network without a failure took an average of around \SI{65}{\milli\second} with slight variations. The additional \SI{5}{\milli\second} are most likely caused by the routing process on the routers.
When a failure is introduced, however, additional links are passed on the way from H1 to H4. Instead of 6 links per direction, the network now sends the packets on a sub-optimal path, which adds 2 passed links, from R1 to R2 and back. These are only passed when sending packets towards H4; packets returning from H4 do not take the sub-optimal path. In theory, this adds around \SI{10}{\milli\second} of delay to our original results.
As can be seen in \cref{fig:evaluation_minimal_latency_wo_sc_b}, this is indeed the case. With an average latency of around \SI{76}{\milli\second}, the results show an additional delay of around \SI{11}{\milli\second} when taking the sub-optimal path. The discrepancy between the assumed \SI{10}{\milli\second} and the measured \SI{11}{\milli\second} might be caused by the additional router that is passed on the way towards H4.
When the failure is introduced concurrently with a running test, the latency spikes to around \SI{94}{\milli\second} for one packet, as can be seen in \cref{fig:evaluation_minimal_latency_concurrent_wo_sc}. This might be caused by the deactivation of interfaces using \textit{ifconfig} and a packet arriving just at the moment of reconfiguration, as packets are sent every \SI{0.5}{\second} and the failure is introduced exactly \SI{15}{\second} after starting the measurement. Depending on the time \textit{ifconfig} takes to reconfigure, such a packet remains in the queue until the reconfiguration is finished, adding to the latency measured in this one instance.
\subsubsection{With FRR and ShortCut}
\label{minimal_latency_with_frr_and_shortcut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_before_failure_sc}
\label{fig:evaluation_minimal_latency_sc_a}
\caption{Latency before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_latency/latency_after_failure_sc}
\label{fig:evaluation_minimal_latency_sc_b}
\caption{Latency after a failure}
\end{subfigure}
\caption{Latency measured with ping using ShortCut}
\label{fig:evaluation_minimal_latency_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_latency/latency_concurrent_sc}
\caption{Latency with a concurrent failure after 15 seconds with ShortCut}
\label{fig:evaluation_minimal_latency_concurrent_sc}
\end{figure}
Our implementation of ShortCut using \textit{nftables} does not seem to add any additional delay to packet transfers, as is evident when comparing the average delay before a failure in \cref{fig:evaluation_minimal_latency_wo_sc_a} and \cref{fig:evaluation_minimal_latency_sc_a}.
The latency also does not change when a failure is introduced to the network, as can be seen in \cref{fig:evaluation_minimal_latency_sc}. This is caused by the removal of the looped path and therefore of the additional delay each packet would otherwise be subjected to.
The spike in latency visible in \cref{fig:evaluation_minimal_latency_concurrent_sc}, occurring when the failure is introduced, can be attributed to the same scenario as explained in \cref{minimal_latency_with_frr}: most likely a timing issue between the introduction of the failure on the routers R2 and R4 and simultaneously sent ICMP packets.
\subsection{Packet flow - TCP}
\label{evaluation_minimal_tcp_packet_flow}
To show the number of TCP packets being forwarded on each router, we measured the packet flow on all routers of this topology. This is done by counting TCP packets with \textit{nftables} while a concurrent data transfer runs from H1 to H4. The results contain the number of packets forwarded by each router per second. This was done with an intermediate and a concurrent failure, for a network with FRR in \cref{minimal_packet_flow_with_frr} as well as for a network with an additional implementation of ShortCut in \cref{minimal_packet_flow_with_frr_and_shortcut}.
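A minimal sketch of such a counting rule, installed from the test framework on one of the routers; the table and chain names are placeholders, and the actual rule set used by our framework may differ:
\begin{verbatim}
# Sketch of a TCP packet counter on a router using nftables.
# Table and chain names are placeholders.
r1 = net.get('r1')
r1.cmd('nft add table ip filter')
r1.cmd("nft add chain ip filter forward"
       " '{ type filter hook forward priority 0; }'")
r1.cmd('nft add rule ip filter forward ip protocol tcp counter')

# The counter value can later be read from the rule listing.
print(r1.cmd('nft list chain ip filter forward'))
\end{verbatim}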
\subsubsection{With FRR}
\label{minimal_packet_flow_with_frr}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/before_failure_wo_sc_graph}
\label{fig:evaluation_minimal_packet_flow_wo_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/after_failure_wo_sc_graph}
\label{fig:evaluation_minimal_packet_flow_wo_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_minimal_packet_flow_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_wo_sc_graph}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_minimal_packet_flow_concurrent_wo_sc}
\end{figure}
The results in the network before a failure are as expected and can be seen in \cref{fig:evaluation_minimal_packet_flow_wo_sc_a}. Each router on the route from H1 to H4, which includes R1, R2 and R4, reports the same number of packets at each point of measurement. While the packet count fluctuates during the measurement, no packet loss was reported and the bandwidth averaged \SI{95}{\mega\bit\per\second} over the whole test run. We therefore assume that the fluctuations can be attributed to the mechanisms used by \textit{iperf}.
After a failure, all four routers receive packets, as can be seen in \cref{fig:evaluation_minimal_packet_flow_wo_sc_b}, but router R1 now receives the most packets with an average of around 1500 packets per second, while routers R3 and R4 receive roughly the same number of packets as before the failure, at an average of around 1000 packets per second. Router R2 receives the fewest packets, with an average of around 500 packets per second.
This is most likely caused by the looped path and its implications for packet travel. Router R1 receives every packet sent from H1 to H4 twice, once when sending it to R2 and a second time when receiving it back from R2 to forward it to R3. But while all packets sent by H1 pass R1 twice, acknowledgements sent back by the \textit{iperf} server on H4 only pass R1 once, as R1 does not send packets destined for H1 to R2. Router R2, on the other hand, only receives packets sent towards H4, but none of the ACKs sent back. This is why, compared to the average packet count of all routers in \cref{fig:evaluation_minimal_packet_flow_wo_sc_a}, R2 receives roughly half of the packets a router would normally receive, as TCP answers each received packet with an ACK. This also explains why router R1 forwards an average of around 1500 packets per second: it forwards the data packets, around 500 packets per second, twice and the acknowledgement packets, also around 500 packets per second, once, producing an additional 50\% load on the router.
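In numbers, with roughly 500 data packets and 500 acknowledgements per second on the original route, the load on R1 after the failure amounts to
\[
2 \cdot 500 + 1 \cdot 500 = 1500
\]
packets per second, while R2 only sees the roughly 500 data packets per second.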
Aside from the changed path and therefore the inclusion of router R3 in this path, routers R3 and R4 are unaffected by the failure, forwarding each packet once.
When a failure is caused while the bandwidth measurement is running, the failure itself causes a sudden drop to 0 forwarded packets for a short amount of time. This can be attributed to the time the routers take to change their configuration. The \textit{nftables} counter uses the "forward" netfilter hook, which is called in a pre-defined phase of the Linux network stack. Packets logged in the forwarding state have already received a routing decision, but because Mininet needs some time to reconfigure the interface that is shut down to imitate a failure, the packets have to wait until the router is ready again.
This behaviour was also observed when measuring the latency and introducing a failure concurrently in \cref{fig:evaluation_minimal_latency_concurrent_wo_sc}, where it added delay to packets delivered in the moment of failure.
Reconfiguring routers in Mininet does not reset the \textit{nftables} counters either. We confirmed this in a quick test by counting the packets of an \textit{iperf} transfer and shutting down an interface on the same router: the packet count did not change after the interface was shut down.
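A sketch of this quick check, with placeholder router and interface names:
\begin{verbatim}
# Sketch of the check that nftables counters survive an interface
# shutdown. Router and interface names are placeholders.
r2 = net.get('r2')
before = r2.cmd('nft list chain ip filter forward')
r2.cmd('ifconfig r2-eth2 down')
after = r2.cmd('nft list chain ip filter forward')
# Comparing 'before' and 'after' shows whether the counters were reset.
\end{verbatim}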
\subsubsection{With FRR and ShortCut}
\label{minimal_packet_flow_with_frr_and_shortcut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/packet_flow_before_sc}
\label{fig:evaluation_minimal_packet_flow_sc_a}
\caption{TCP Packets on routers before a failure}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow/packet_flow_after_sc}
\label{fig:evaluation_minimal_packet_flow_sc_b}
\caption{TCP Packets on routers after a failure}
\end{subfigure}
\caption{TCP Packets on all routers measured with \textit{nftables} counters using ShortCut}
\label{fig:evaluation_minimal_packet_flow_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow/concurrent_failure_sc_graph}
\caption{TCP Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_packet_flow_concurrent_sc}
\end{figure}
When running the TCP packet flow measurements with an implementation of ShortCut on the network, however, the results change drastically. As expected, before the failure all packets sent by the \textit{iperf} transfer are forwarded by router R2 on the original route. After the failure is introduced, router R2 does not forward any packets anymore. ShortCut has effectively cut router R2 out of the route, forwarding packets from R1 directly to R3. The remaining routers R1, R3 and R4 now receive all packets, and no router forwards any packet twice.
\subsection{Packet flow - UDP}
\label{evaluation_minimal_udp_packet_flow}
We repeated the packet flow test from \cref{evaluation_minimal_tcp_packet_flow} using UDP to inspect the differences caused by the two protocols.
\subsubsection{With FRR}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_before_wo_sc}
\caption{Packets on routers before a failure}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_after_wo_sc}
\caption{Packets on routers after a failure}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc_b}
\end{subfigure}
\caption{UDP packets on all routers measured with \textit{nftables} counters}
\label{fig:evaluation_minimal_packet_flow_udp_wo_sc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow_udp/packet_flow_udp_concurrent_wo_sc}
\caption{Packet flow on all routers with failure after 15 seconds}
\label{fig:evaluation_minimal_packet_flow_udp_concurrent_wo_sc}
\end{figure}
When running the packet flow test measuring UDP packets, the number of packets changed drastically compared to TCP. \textit{iperf} uses different packet sizes for each protocol, sending TCP packets with a size of \SI{128}{\kilo\byte} and UDP packets with a size of only \SI{8}{\kilo\byte} (\cite{Dugan.2016}). The same amount of transmitted data should therefore produce a packet count roughly 16 times higher when using UDP compared to TCP. TCP, however, as can be seen in \cref{fig:evaluation_minimal_packet_flow_wo_sc_a}, sends around 1000 packets per second when running a bandwidth measurement limited by the overall bandwidth limit of \SI{100}{\mega\bit\per\second} on the network. A naive assumption would be that UDP should send 16000 packets per second over the network, but that does not match our test results seen in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}, where only around 7800 packets per second are logged on the routers.
The reason for this is also the key difference between TCP and UDP: TCP uses acknowledgements (ACKs) to confirm the transmission of packets. These are packets returning from the \textit{iperf} server to the client. For each received data packet, the server sends back an ACK. If no ACK arrives for a packet, the client resends the missing packet. This causes the network to transmit twice the number of packets, one half containing the actual data and the other half only containing ACKs.
UDP, however, simply sends packets on their way and does not evaluate whether they actually reached their destination. Therefore all UDP packets contain data, and no additional packets for confirmation or congestion control are sent over the network.
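Taking the acknowledgements into account, only the data-carrying half of the roughly 1000 TCP packets per second should be scaled by the factor of 16, giving an estimate of
\[
\frac{1000}{2} \cdot 16 = 8000
\]
packets per second, which is much closer to the roughly 7800 packets per second observed in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_a}.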
Because UDP does not send ACKs, the results observed after a failure in \cref{fig:evaluation_minimal_packet_flow_udp_wo_sc_b} are very clear: routers R2, R3 and R4 all forward the same number of packets, while router R1 forwards exactly twice that amount.
\subsubsection{With FRR and ShortCut}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_before_sc}
\caption{UDP Packets on routers before a failure}
\label{fig:evaluation_minimal_packet_flow_udp_sc_a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{tests/minimal_packet_flow_udp/packet_flow_udp_after_sc}
\caption{UDP Packets on routers after a failure}
\label{fig:evaluation_minimal_packet_flow_udp_sc_b}
\end{subfigure}
\caption{UDP packets on all routers using ShortCut}
\label{fig:evaluation_minimal_packet_flow_udp_sc}
\end{figure}
When using ShortCut in a UDP packet flow measurement, the negative consequences of the failure disappear. While in \cref{fig:evaluation_minimal_packet_flow_udp_sc_a} routers R1, R2 and R4 receive all packets on the original route, after a failure the load switches from R2 to R3. As expected, the ShortCut implementation has cut out the looped path and restored the original behaviour on an alternative route.
\begin{figure}
\centering
\includegraphics[width=10cm]{tests/minimal_packet_flow_udp/packet_flow_udp_concurrent_sc}
\caption{Packet flow on all routers with failure after 15 seconds using ShortCut}
\label{fig:evaluation_minimal_packet_flow_udp_concurrent_sc}
\end{figure}
WRITE THIS