arXiv. Pre-print. (April 2020)
Attacks on Internet routing are typically viewed through the lens of availability and confidentiality, assuming an adversary that either discards traffic or performs eavesdropping. Yet, a strategic adversary can use routing attacks to compromise the security of critical Internet applications like Tor, certificate authorities, and the bitcoin network. In this paper, we survey such application-specific routing attacks and argue that both application-layer and network-layer defenses are essential and urgently needed. The good news is that, while deployment challenges have hindered the adoption of network-layer defenses (i.e. secure routing protocols) thus far, application-layer defenses are much easier to deploy in the short term.
ACM SIGCOMM CCR 2020. Volume 50 Issue 2 (April 2020).
Each year at ETH Zurich, around 100 students collectively build and operate their very own Internet infrastructure composed of hundreds of routers and dozens of Autonomous Systems (ASes). Their goal? Enabling Internet-wide connectivity.
We find this class-wide project to be invaluable in teaching our students how the Internet infrastructure works in practice. Among other things, our students develop a much deeper understanding of Internet operations and their pitfalls. Besides, students tend to love the project: clearly, the fact that all of them need to cooperate for the entire Internet to work is empowering.
In this paper, we describe the overall design of our teaching platform, how we use it, and interesting lessons we have learnt over the years. We also make our platform openly available.
USENIX NSDI 2020. Santa Clara, California, USA (February 2020).
Network verification and configuration synthesis are promising approaches to make networks more reliable and secure by enforcing a set of policies. However, these approaches require a formal and precise description of the intended network behavior, imposing a major barrier to their adoption: network operators are not only reluctant to write formal specifications, but often do not even know what these specifications are.
We present Config2Spec, a system that automatically synthesizes a formal specification (a set of policies) of a network given its configuration and a failure model (e.g., up to two link failures). A key technical challenge is to design a synthesis algorithm which can efficiently explore the large space of possible policies. To address this challenge, Config2Spec relies on a careful combination of two well-known methods: data plane analysis and control plane verification.
Experimental results show that Config2Spec scales to mining specifications of large networks (>150 routers).
USENIX NSDI 2020. Santa Clara, California, USA (February 2020).
Push-In First-Out (PIFO) queues are hardware primitives which enable programmable packet scheduling by perfectly reordering packets at line rate. While promising, implementing PIFO queues in hardware and at scale is not easy: only hardware designs (not implementations) exist, and they can only support about 1000 flows.
In this paper, we introduce SP-PIFO, a programmable packet scheduler which closely approximates the behavior of PIFO queues using strict-priority queues—at line rate, at scale, and on existing devices. The key insight behind SP-PIFO is to dynamically adapt the mapping between packet ranks and available queues to minimize the scheduling errors. We present a mathematical formulation of the problem and derive an adaptation technique which closely approximates the optimal queue mapping without any traffic knowledge.
We fully implement SP-PIFO in P4 and evaluate it on real workloads. We show that SP-PIFO: (i) closely matches ideal PIFO performance, with as few as 8 priority queues; (ii) scales to arbitrarily large numbers of flows and ranks; and (iii) quickly adapts to traffic variations. We also show that SP-PIFO runs at line rate on existing programmable data planes.
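The rank-to-queue adaptation can be sketched as follows. This is our own simplified Python illustration of the "push-up/push-down" idea, not the paper's P4 implementation; the class and method names are ours.

```python
class SPPIFO:
    """Toy model of SP-PIFO's adaptive mapping of packet ranks to
    strict-priority queues (index 0 = highest priority)."""

    def __init__(self, n_queues):
        self.bounds = [0] * n_queues  # per-queue rank bound

    def enqueue(self, rank):
        # Scan from lowest to highest priority; map the packet to the
        # first queue whose bound it satisfies, and "push up" that bound.
        for q in range(len(self.bounds) - 1, -1, -1):
            if rank >= self.bounds[q]:
                self.bounds[q] = rank
                return q
        # Inversion at the highest-priority queue: "push down" all
        # bounds by the magnitude of the error and enqueue there.
        cost = self.bounds[0] - rank
        self.bounds = [b - cost for b in self.bounds]
        return 0
```

Intuitively, push-up packs similar ranks into the same queue, while push-down reacts to scheduling inversions by shifting the whole mapping toward lower ranks.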
When designing their performance evaluations, networking researchers often encounter questions such as: How long should a run be? How many runs to perform? How to account for the variability across multiple runs? Despite their best intentions, researchers often answer these questions differently, thus impairing the reproducibility of their evaluations and decreasing the confidence in their results. To support networking researchers, we propose a systematic methodology that streamlines the design and analysis of performance evaluations. Our methodology first identifies the temporal characteristics of variability sources in networking experiments, and then applies rigorous statistical methods to derive performance results with quantifiable confidence, in spite of the inherent variability. We implement this methodology in a software framework called TriScale.
For each performance metric, TriScale computes a variability score that estimates, with a desired confidence, how similar the results would be if the evaluation were repeated; in other words, TriScale quantifies the reproducibility of the performance evaluation. We apply TriScale to four diverse use cases (congestion control, wireless embedded systems, failure detection, video streaming), demonstrating that TriScale helps generalize and strengthen previously published results. Improving the standards of reproducibility in networking is a crucial and complex challenge; with TriScale, we make an important contribution to this endeavor by providing a rationale and statistically sound experimental methodology.
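The kind of distribution-free confidence interval that underpins such quantifiable-confidence claims can be illustrated with order statistics: for n sorted samples, a binomial argument yields an interval for any quantile without assuming a distribution. This is our own illustrative sketch (function name and interface are ours, not TriScale's API).

```python
from math import comb

def quantile_ci(sorted_x, p=0.5, confidence=0.95):
    """Distribution-free CI for the p-quantile of sorted samples.

    Returns (lo, hi) values such that the interval between the chosen
    order statistics covers the true p-quantile with at least the
    requested confidence: P(x_(i) <= q_p <= x_(j)) = sum_{k=i}^{j-1}
    C(n,k) p^k (1-p)^(n-k).
    """
    n = len(sorted_x)
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    best = None
    for i in range(1, n + 1):           # 1-based lower order statistic
        for j in range(i + 1, n + 1):   # 1-based upper order statistic
            if sum(pmf[i:j]) >= confidence:
                if best is None or j - i < best[1] - best[0]:
                    best = (i, j)
                break
    if best is None:
        return None  # too few samples for this confidence level
    i, j = best
    return sorted_x[i - 1], sorted_x[j - 1]
```

A useful side effect of this construction is that it also tells an experimenter how many runs are needed: if `None` comes back, n is too small for the requested confidence.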
ACM Workshop on Buffer Sizing. Stanford, CA, USA (December 2019).
Conventional buffer sizing techniques consider an output port with multiple queues in isolation and provide guidelines for the size of the queue. In practice, however, switches consist of several ports that share a buffering chip. Hence, chip manufacturers, such as Broadcom, are left to devise a set of proprietary resource sharing algorithms to allocate buffers across ports. These algorithms dynamically adjust the buffer size of output queues and directly impact the packet loss and latency of individual queues. We show that the problem of allocating buffers across ports, although less known, is indeed responsible for fundamental inefficiencies in today's devices. In particular, the per-port buffer allocation is an ad-hoc decision that (at best) depends on the remaining buffer cells on the chip instead of the type of traffic. In this work, we advocate for a flow-aware and device-wide buffer sharing scheme (FAB), which is practical today in programmable devices. We tested FAB on two specific workloads and showed that it can improve the tail flow completion time by an order of magnitude compared to conventional buffer management techniques.
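For context, the classic dynamic-threshold admission check (the kind of per-port heuristic, driven only by remaining free buffer, that the abstract critiques) can be written in one line. This is a generic textbook sketch, not FAB itself:

```python
def admits(queue_len, free_buffer, alpha=1.0):
    """Dynamic-threshold buffer sharing: a queue may keep growing only
    while it is below alpha times the buffer space still free on the
    shared chip. Note that traffic type plays no role in the decision."""
    return queue_len < alpha * free_buffer
```

Because the threshold depends only on chip-wide free space, two queues carrying very different traffic (e.g., short latency-sensitive flows vs. bulk transfers) are treated identically, which is precisely the inefficiency a flow-aware scheme targets.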
ACM HotNets 2019. Princeton, NJ, USA (November 2019).
Traditional network control planes can be slow and require manual tinkering from operators to change their behavior. There is thus great interest in a faster, data-driven approach that uses signals from real-time traffic instead. However, the promise of fast and automatic reaction to data comes with new risks: malicious inputs designed towards negative outcomes for the network, service providers, users, and operators.
Adversarial inputs are a well-recognized problem in other areas; we show that networking applications are susceptible to them too. We characterize the attack surface of data-driven networks and examine how attackers with different privileges—from infected hosts to operator-level access—may target network infrastructure, applications, and protocols. To illustrate the problem, we present case studies with concrete attacks on recently proposed data-driven systems.
Our analysis urgently calls for a careful study of attacks and defenses in data-driven networking, with a view towards ensuring that their promise is not marred by oversights in robust design.
Pierre Dumont, Roland Meier, David Gugelmann, Vincent Lenders
NATO CCD COE CyCon 2019. Tallinn, Estonia (May 2019).
Remote shell sessions via protocols such as SSH are essential for managing systems, deploying applications, and running experiments. However, combined with weak passwords or flaws in the authentication process, remote shell access becomes a major security risk, as it allows an attacker to run arbitrary commands in the name of an impersonated user or even a system administrator. For example, remote shells of weakly protected systems are often exploited in order to build large botnets, to send spam emails, or to launch distributed denial of service attacks. Also, malicious insiders in organizations often use shell sessions to access and transfer restricted data. In this work, we tackle the problem of detecting malicious shell sessions based on session logs, i.e., recorded sequences of commands that were executed over time. Our approach is to classify sessions as benign or malicious by analyzing the sequence of commands that the shell users executed. We model such sequences of commands as n-grams and use them as features to train a supervised machine learning classifier. Our evaluation, based on freely available data and data from our own honeypot infrastructure, shows that the classifier reaches a true positive rate of 99.4% and a true negative rate of 99.7% after observing only four shell commands.
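The feature extraction step described above can be sketched in a few lines of Python. This is our own minimal illustration of command n-grams (the helper name is ours); in the paper, such counts feed a supervised classifier trained on labeled sessions.

```python
from collections import Counter

def command_ngrams(session, n=2):
    """Extract command-name n-grams from an ordered shell session.

    session: list of command lines as typed, e.g. ["wget http://x", ...].
    Only the command name (first token) is kept, so argument noise does
    not fragment the feature space.
    """
    names = [cmd.split()[0] for cmd in session if cmd.strip()]
    return Counter(tuple(names[i:i + n]) for i in range(len(names) - n + 1))
```

For instance, the short attack-like sequence download, make-executable, run produces two bigrams that rarely occur in benign administrative sessions.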
NATO CCD COE CyCon 2019. Tallinn, Estonia (May 2019).
The diversity of applications and devices in enterprise networks combined with large traffic volumes make it inherently challenging to quickly identify malicious traffic. When incidents occur, emergency response teams often lose precious time in reverse-engineering the network topology and configuration before they can focus on malicious activities and digital forensics. In this paper, we present a system that quickly and reliably identifies Command and Control (C&C) channels without prior network knowledge. The key idea is to train a classifier using network traffic from attacks that happened in the past and use it to identify C&C connections in the current traffic of other networks. Specifically, we leverage the fact that – while benign traffic differs – malicious traffic bears similarities across networks (e.g., devices participating in a botnet act in a similar manner irrespective of their location). To ensure performance and scalability, we use a random forest classifier based on a set of computationally-efficient features tailored to the detection of C&C traffic. In order to prevent attackers from outwitting our classifier, we tune the model parameters to maximize robustness. We measure high resilience against possible attacks – e.g., attempts to camouflage C&C flows as benign traffic – and against packet loss during inference. We have implemented our approach and we show its practicality on a real use case: Locked Shields, the world’s largest cyber defense exercise. In Locked Shields, defenders have limited resources to protect a large, heterogeneous network against unknown attacks. Using recorded datasets (from 2017 and 2018) from a participating team, we show that our classifier is able to identify C&C channels with 99% precision and over 90% recall in near real time and with realistic resource requirements. If the team had used our system in 2018, it would have discovered 10 out of 12 C&C servers in the first hours of the exercise.
USENIX NSDI 2019. Boston, Massachusetts, USA (February 2019).
We present Blink, a data-driven system that leverages TCP-induced signals to detect failures directly in the data plane. The key intuition behind Blink is that a TCP flow exhibits a predictable behavior upon disruption: retransmitting the same packet over and over, at epochs exponentially spaced in time. When compounded over multiple flows, this behavior creates a strong and characteristic failure signal. Blink efficiently analyzes TCP flows to: (i) select which ones to track; (ii) reliably and quickly detect major traffic disruptions; and (iii) recover connectivity---all this, completely in the data plane. We present an implementation of Blink in P4 together with an extensive evaluation on real and synthetic traffic traces. Our results indicate that Blink: (i) achieves sub-second rerouting for large fractions of Internet traffic; and (ii) prevents unnecessary traffic shifts even in the presence of noise. We further show the feasibility of Blink by running it on an actual Tofino switch.
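The failure signal Blink exploits can be illustrated with a toy model: flag a failure once enough monitored flows are simultaneously retransmitting the same packet. This is our own control-logic sketch in Python (class and threshold names are ours), not the paper's P4 data-plane implementation, which must do the same with per-flow state in registers.

```python
class BlinkSketch:
    """Toy failure detector: a flow is 'retransmitting' while it resends
    the same sequence number; a failure is signaled when the number of
    retransmitting flows reaches a threshold."""

    def __init__(self, n_flows_threshold=3):
        self.last_seq = {}
        self.retransmitting = set()
        self.threshold = n_flows_threshold

    def on_packet(self, flow, seq):
        if self.last_seq.get(flow) == seq:
            self.retransmitting.add(flow)   # same packet again
        else:
            self.retransmitting.discard(flow)
            self.last_seq[flow] = seq       # flow made progress
        return len(self.retransmitting) >= self.threshold
```

Requiring many flows to agree is what makes the signal robust: a single flow retransmitting is noise, but dozens doing so at once strongly suggests a failure on a shared path.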
NDSS Symposium 2019. San Diego, CA, USA (February 2019).
Nowadays, Internet routing attacks remain practically effective as existing countermeasures either fail to provide protection guarantees or are not easily deployable. Blockchain systems are particularly vulnerable to such attacks as they rely on Internet-wide communications to reach consensus. In particular, Bitcoin---the most widely-used cryptocurrency---can be split in half by any AS-level adversary using BGP hijacking.
In this paper, we present SABRE, a secure and scalable Bitcoin relay network which relays blocks worldwide through a set of connections that are resilient to routing attacks. SABRE runs alongside the existing peer-to-peer network and is easily deployable. As a critical system, SABRE is designed to be highly resilient and to efficiently handle high bandwidth loads, including Denial of Service attacks.
We built SABRE around two key technical insights. First, we leverage fundamental properties of inter-domain routing (BGP) policies to host relay nodes: (i) in networks that are inherently protected against routing attacks; and (ii) on paths that are economically-preferred by the majority of Bitcoin clients. These properties are generic and can be used to protect other Blockchain-based systems. Second, we leverage the fact that relaying blocks is communication-heavy, not computation-heavy. This enables us to offload most of the relay operations to programmable network hardware (using the P4 programming language). Thanks to this hardware/software co-design, SABRE nodes operate seamlessly under high load while mitigating the effects of malicious clients.
We present a complete implementation of SABRE together with an extensive evaluation. Our results demonstrate that SABRE is effective at securing Bitcoin against routing attacks, even with deployments of as few as 6 nodes.
ACM HotNets 2018. Redmond, WA, USA (November 2018).
One design principle of modern network architecture seems to be set in stone: a software-based control plane drives a hardware- or software-based data plane. We argue that it is time to revisit this principle after the advent of programmable switch ASICs which can run complex logic at line rate.
We explore the possibility and benefits of accelerating the control plane by offloading some of its tasks directly to the network hardware. We show that programmable data planes are indeed powerful enough to run key control plane tasks including: failure detection and notification, connectivity retrieval, and even policy-based routing protocols. We implement in P4 a prototype of such a “hardware-accelerated” control plane, and illustrate its benefits in a case study.
Despite such benefits, we acknowledge that offloading tasks to hardware is not a silver bullet. We discuss its tradeoffs and limitations, and outline future research directions towards hardware-software codesign of network control planes.
ACM IMC 2018. Boston, Massachusetts, USA (October 2018).
Passive measurement is a commonly used approach for measuring round trip time (RTT), as it reduces bandwidth overhead compared to large-scale active measurements. However, passive RTT measurement is limited to transport-specific approaches, such as those that utilize Transmission Control Protocol (TCP) timestamps. Furthermore, the continuing deployment of encrypted transport protocols such as QUIC hides the information used for passive RTT measurement from the network.
In this work, we introduce the latency spin signal as a light-weight, transport-independent and explicit replacement for TCP timestamps for passive latency measurement. This signal supports per-flow, single-point and single direction passive measurement of end-to-end RTT using just three bits in the transport protocol header, leveraging the existing dynamics of the vast majority of Internet-deployed transports. We show how the signal applies to the measurement of both TCP and QUIC through implementations of the signal in endpoint transport stacks. We also provide a high-performance measurement implementation for the signal using the Vector Packet Processing (VPP) framework. Evaluations on emulated networks and in an Internet testbed demonstrate the viability of the signal and show that it is resistant to even large amounts of loss or reordering on the measured path.
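The core of a spin-signal observer can be sketched simply: the spin bit toggles once per RTT, so an on-path observer estimates RTT as the time between successive bit transitions ("edges") seen in one direction. This is our own simplified sketch (function name is ours), ignoring the loss/reordering protections the extra bits provide.

```python
def rtt_from_spin_edges(observations):
    """observations: list of (timestamp, spin_bit) pairs captured at one
    observation point, one direction. Each pair of consecutive spin-bit
    edges yields one end-to-end RTT sample."""
    samples, last_edge, last_bit = [], None, None
    for t, bit in observations:
        if last_bit is not None and bit != last_bit:
            if last_edge is not None:
                samples.append(t - last_edge)  # edge-to-edge interval
            last_edge = t
        last_bit = bit
    return samples
```

Because the observer needs only the bit's transitions, the measurement works even when everything else in the transport header is encrypted.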
USENIX Security 2018. Baltimore, MD, USA (August 2018).
Simple path tracing tools such as traceroute allow malicious users to infer network topologies remotely and use that knowledge to craft advanced denial-of-service (DoS) attacks such as Link-Flooding Attacks (LFAs). Yet, despite the risk, most network operators still allow path tracing as it is an essential network debugging tool.
In this paper, we present NetHide, a network topology obfuscation framework that mitigates LFAs while preserving the practicality of path tracing tools. The key idea behind NetHide is to formulate network obfuscation as a multi-objective optimization problem that allows for a flexible tradeoff between security (encoded as hard constraints) and usability (encoded as soft constraints). While solving this problem exactly is hard, we show that NetHide can obfuscate topologies at scale by only considering a subset of the candidate solutions and without reducing obfuscation quality. In practice, NetHide obfuscates the topology by intercepting and modifying path tracing probes directly in the data plane. We show that this process can be done at line-rate, in a stateless fashion, by leveraging the latest generation of programmable network devices.
We fully implemented NetHide and evaluated it on realistic topologies. Our results show that NetHide is able to obfuscate large topologies (> 150 nodes) while preserving near-perfect debugging capabilities. In particular, we show that operators can still precisely trace back > 90% of link failures despite obfuscation.
Timon Gehr, Sasa Misailovic, Petar Tsankov, Laurent Vanbever, Pascal Wiesman, Martin Vechev
PLDI 2018. Philadelphia, Pennsylvania, USA (June 2018).
Network operators often need to ensure that important probabilistic properties are met, such as that the probability of network congestion is below a certain threshold. Ensuring such properties is challenging and requires both a suitable language for probabilistic networks and an automated procedure for answering probabilistic inference queries. We present Bayonet, a novel approach that consists of: (i) a probabilistic network programming language and (ii) a system that performs probabilistic inference on Bayonet programs. The key insight behind Bayonet is to phrase the problem of probabilistic network reasoning as inference in existing probabilistic languages. As a result, Bayonet directly leverages existing probabilistic inference systems and offers a flexible and expressive interface to operators. We present a detailed evaluation of Bayonet on common network scenarios, such as network congestion, reliability of packet delivery, and others. Our results indicate that Bayonet can express such practical scenarios and answer queries for realistic topology sizes (with up to 30 nodes).
David Gugelmann, David Sommer, Vincent Lenders, Markus Happe, Laurent Vanbever
NATO CCD COE CyCon 2018. Tallinn, Estonia (May 2018).
Organizations not only need to defend their IT systems against external cyber attackers, but also against malicious insiders, that is, agents who have infiltrated an organization, or malicious members stealing information for their own profit. In particular, malicious insiders can leak a document by simply opening it and taking pictures of the document displayed on the computer screen with a digital camera. Using a digital camera allows a perpetrator to easily avoid a log trail that results from using traditional communication channels, such as sending the document via email. This makes it difficult to identify and prove the identity of the perpetrator. Even a policy prohibiting the use of any device containing a camera cannot eliminate this threat since tiny cameras can be hidden almost everywhere.
To address this leakage vector, we propose a novel screen watermarking technique that embeds hidden information on computer screens displaying text documents. The watermark is imperceptible during regular use, but can be extracted from pictures of documents shown on the screen, which allows an organization to reconstruct the place and time of the data leak from recovered leaked pictures. Our approach takes advantage of the fact that the human eye is less sensitive to small luminance changes than digital cameras. We devise a symbol shape that is invisible to the human eye, but still robust to the image artifacts introduced when taking pictures. We complement this symbol shape with an error correction coding scheme that can handle very high bit error rates and retrieve watermarks from cropped and compressed pictures. We show in an experimental user study that our screen watermarks are not perceivable by humans and analyze the robustness of our watermarks against image modifications.
NATO CCD COE CyCon 2018. Tallinn, Estonia (May 2018).
Organizations increasingly rely on cyber threat intelligence feeds to protect their infrastructure from attacks. These feeds typically list IP addresses or domains associated with malicious activities such as spreading malware or participating in a botnet. Today, there is a rich ecosystem of commercial and free cyber threat intelligence feeds, making it difficult, yet essential, for network defenders to quantify the quality and to select the optimal set of feeds to follow. Selecting too many or low-quality feeds results in many false alerts, while considering too few feeds increases the risk of missing relevant threats. Naïve individual metrics like size and update rate give a rough overview of a feed, but they do not allow conclusions about its quality, and they can easily be manipulated by feed providers.
In this paper, we present FeedRank, a novel ranking approach for cyber threat intelligence feeds. In contrast to individual metrics, FeedRank is robust against tampering attempts by feed providers. FeedRank’s key insight is to rank feeds according to the originality of their content and the reuse of entries by other feeds. Such correlations between feeds are modelled in a graph, which allows FeedRank to find temporal and spatial correlations without requiring any ground truth or an operator’s feedback.
We illustrate FeedRank’s usefulness with two characteristic examples: (i) selecting the best feeds that together contain as many distinct entries as possible; and (ii) selecting the best feeds that list new entries before they appear on other feeds. We evaluate FeedRank based on a large set of real feeds. The evaluation shows that FeedRank identifies dishonest feeds as outliers and that dishonest feeds do not achieve a better FeedRank score than the top-rated real feeds.
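The originality-and-reuse intuition can be sketched as follows: a feed earns credit whenever another feed later lists an entry it reported first. This is our own flat scoring sketch (function name and data layout are ours); FeedRank proper models these correlations as a graph and ranks over it.

```python
def feed_scores(feeds):
    """feeds: {feed_name: {entry: first_seen_timestamp}}.

    For each entry, find the feed that listed it earliest (its
    'originator'); every later listing by a different feed adds one
    point of credit to the originator."""
    origin = {}
    for name, entries in feeds.items():
        for e, t in entries.items():
            if e not in origin or t < origin[e][1]:
                origin[e] = (name, t)
    scores = {name: 0 for name in feeds}
    for name, entries in feeds.items():
        for e, t in entries.items():
            src, _ = origin[e]
            if src != name:
                scores[src] += 1  # entry was reused by another feed
    return scores
```

Note why this resists tampering: a dishonest feed can inflate its size at will, but it cannot force independent feeds to reuse its entries.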
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
For an Internet Service Provider (ISP), getting an accurate picture of how its network behaves is challenging. Indeed, given the carried traffic volume and the impossibility to control end-hosts, ISPs often have no other choice but to rely on heavily sampled traffic statistics, which provide them with coarse-grained visibility at a less than ideal time resolution (seconds or minutes). We present Stroboscope, a system that enables fine-grained monitoring of any traffic flow by instructing routers to mirror millisecond-long traffic slices in a programmatic way. Stroboscope takes as input high-level monitoring queries together with a budget and automatically determines: (i) which flows to mirror; (ii) where to place mirroring rules, using fast and provably correct algorithms; and (iii) when to schedule these rules to maximize coverage while meeting the input budget. We implemented Stroboscope, and show that it scales well: it computes schedules for large networks and query sizes in a few seconds, and produces a number of mirroring rules well within the limits of current routers. We also show that Stroboscope works on existing routers and is therefore immediately deployable.
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
Network operators often need to adapt the configuration of a network in order to comply with changing routing policies. Evolving existing configurations, however, is a complex task as local changes can have unforeseen global effects. Not surprisingly, this often leads to mistakes that result in network downtimes. We present NetComplete, a system that assists operators in modifying existing network-wide configurations to comply with new routing policies. NetComplete takes as input configurations with “holes” that identify the parameters to be completed and “autocompletes” these with concrete values. The use of a partial configuration addresses two important challenges inherent to existing synthesis solutions: (i) it allows the operators to precisely control how configurations should be changed; and (ii) it allows the synthesizer to leverage the existing configurations to gain performance. To scale, NetComplete relies on powerful techniques such as counter-example guided inductive synthesis (for link-state protocols) and partial evaluation (for path-vector protocols). We implemented NetComplete and showed that it can autocomplete configurations using static routes, OSPF, and BGP. Our implementation also scales to realistic networks and complex routing policies. Among others, it is able to synthesize configurations for networks with up to 200 routers within a few minutes.
USENIX NSDI 2018. Renton, Washington, USA (April 2018).
Today, network operators spend a significant amount of time struggling to understand how their network forwards traffic. Even simple questions such as "How is my network handling Google traffic?" often require operators to manually bridge large semantic gaps between low-level forwarding rules distributed across many routers and the corresponding high-level insights. We introduce Net2Text, a system which assists network operators in reasoning about network-wide forwarding behaviors. Out of the raw forwarding state and a query expressed in natural language, Net2Text automatically produces succinct summaries, also in natural language, which efficiently capture network-wide semantics. Our key insight is to pose the problem of summarizing ("captioning") the network forwarding state as an optimization problem that aims to balance coverage, by describing as many paths as possible, and explainability, by maximizing the information provided. As this problem is NP-hard, we also propose an approximation algorithm which generates summaries based on a sample of the forwarding state, with marginal loss of quality. We implemented Net2Text and demonstrated its practicality and scalability. We show that Net2Text generates high-quality interpretable summaries of the entire forwarding state of hundreds of routers with full routing tables, in only a few seconds.
Aaron Gember-Jacobson, Costin Raiciu, Laurent Vanbever
ACM HotNets 2017. Palo Alto, California, USA (November 2017).
Network verification has made great progress recently, yet existing solutions are limited in their ability to handle specific protocols or implementation quirks or to diagnose and repair the cause of policy violations. In this positioning paper, we examine whether we can achieve the best of both worlds: full coverage of control plane protocols and decision processes combined with the ability to diagnose and repair the cause of violations. To this end, we leverage the happens-before relationships that exist between control plane I/Os (e.g., route advertisements and forwarding updates). These relationships allow us to identify when it is safe to employ a data plane verifier and track the root-cause of problematic forwarding updates. We show how we can capture errors before they are installed, automatically trace down the source of the error and roll-back the updates whenever possible.
CNSM 2017. Tokyo, Japan (November 2017).
The deployment of encrypted transport protocols imposes new challenges for network operations. Key in-network functions such as those implemented by firewalls and passive measurement devices currently rely on information exposed by the transport layer. Encryption, in addition to improving privacy, helps to address ossification of network protocols caused by middleboxes that assume certain information to be present in the clear. However, ``encrypting it all'' risks diminishing the utility of these middleboxes for the traffic management tasks for which they were designed. A middlebox cannot use what it cannot see.
We propose an architectural solution to this issue, by introducing a new ``path layer'' for transport-independent, in-band signaling between Internet endpoints and network elements on the paths between them, and using this layer to reinforce the boundary between the hop-by-hop network layer and the end-to-end transport layer. We define a path layer header on top of UDP to provide a common wire image for new, encrypted transports. This path layer header provides information to a transport-independent on-path state machine that replaces stateful handling currently based on exposed header flags and fields in TCP; it enables explicit measurability of transport layer performance; and offers extensibility by sender-to-path and path-to-receiver communications for diagnostics and management. This provides not only a replacement for signals that are not available with encrypted traffic, but also allows integrity-protected, enhanced signaling under endpoint control. We present an implementation of this wire image integrated with the QUIC protocol, as well as a basic stateful middlebox built on Vector Packet Processing (VPP) provided by FD.io.
ACM SIGCOMM 2017. Los Angeles, California, USA (August 2017).
Network operators often face the problem of remote outages in transit networks leading to significant (sometimes on the order of minutes) downtimes. The issue is that BGP, the Internet routing protocol, often converges slowly upon such outages, as large bursts of messages have to be processed and propagated router by router. In this paper, we present SWIFT, a fast-reroute framework which enables routers to restore connectivity within a few seconds upon remote outages. SWIFT is based on two novel techniques. First, SWIFT deals with slow outage notification by predicting the overall extent of a remote failure out of few control-plane (BGP) messages. The key insight is that significant inference speed can be gained at the price of some accuracy. Second, SWIFT introduces a new dataplane encoding scheme, which enables quick and flexible update of the affected forwarding entries. SWIFT is deployable on existing devices, without modifying BGP.
We present a complete implementation of SWIFT and demonstrate that it is both fast and accurate. In our experiments with real BGP traces, SWIFT predicts the extent of a remote outage in a few seconds with an accuracy of ~90% and can restore connectivity for 99% of the affected destinations.
CAV 2017. Heidelberg, Germany (July 2017).
Computer networks are hard to manage. Given a set of high-level requirements (e.g., reachability, security), operators have to manually figure out the individual configuration of potentially hundreds of devices running complex distributed protocols so that they, collectively, compute a compatible forwarding state. Not surprisingly, operators often make mistakes which lead to downtimes.
To address this problem, we present a novel synthesis approach that automatically computes correct network configurations that comply with the operator’s requirements. We capture the behavior of existing routers along with the distributed protocols they run in stratified Datalog. Our key insight is to reduce the problem of finding correct input configurations to the task of synthesizing inputs for a stratified Datalog program. To solve this synthesis task, we introduce a new algorithm that synthesizes inputs for stratified Datalog programs. This algorithm is applicable beyond the domain of networks.
We leverage our synthesis algorithm to construct the first network-wide configuration synthesis system, called SyNET, that supports multiple interacting routing protocols (OSPF and BGP) and static routes. We show that our system is practical and can infer correct input configurations, in a reasonable amount of time, for networks of realistic size (>50 routers) that forward packets for multiple traffic classes.
Pavlos Lamprakis, Ruggiero Dargenio, David Gugelmann, Vincent Lenders, Markus Happe, Laurent Vanbever
DIMVA 2017. Bonn, Germany (July 2017).
HTTP is the main protocol used by attackers to establish a command and control (C&C) channel to infected hosts in a network. Identifying such C&C channels in network traffic is however a challenge because of the large volume and complex structure of benign HTTP requests emerging from regular user browsing activities. A common approach to C&C channel detection has been to use supervised learning techniques which are trained on old malware samples. However, these techniques require large training datasets which are generally not available in the case of advanced persistent threats (APT); APT malware are often custom-built and used against selected targets only, making it difficult to collect malware artifacts for supervised machine learning and thus rendering supervised approaches ineffective at detecting APT traffic.
In this paper, we present a novel and highly effective unsupervised approach to detect C&C channels in Web traffic. Our key observation is that APT malware typically follow a specific communication pattern that is different from regular Web browsing. Therefore, by reconstructing the dependencies between Web requests, that is, the Web request graph, and filtering away the nodes pertaining to regular Web browsing, we can identify malware requests without training a malware model. We evaluated our approach on real Web traces and show that it can detect the C&C requests of nine APTs with a true positive rate of 99.5-100% and a true negative rate of 99.5-99.7%. These APTs had been used against several hundred organizations for years without being detected.
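As a rough illustration of the request-graph idea (the linking heuristic below is deliberately minimal; the paper reconstructs dependencies far more carefully than a plain referer check): requests that belong to a browsing session form chains, whereas a beaconing C&C request dangles outside every chain.

```python
def find_cnc_candidates(requests):
    """Flag HTTP requests disconnected from any browsing chain.

    requests: list of (url, referer) pairs, with referer None when absent.
    Regular browsing forms chains rooted at user-initiated pages; a
    request that nothing links to and that links to nothing is an
    isolated node in the request graph, hence suspicious.
    """
    # URLs that appear as the referer of some other request.
    referers = {ref for _, ref in requests if ref is not None}
    suspicious = []
    for url, referer in requests:
        # Isolated node: not triggered by any page, and triggers no page.
        if referer is None and url not in referers:
            suspicious.append(url)
    return suspicious

trace = [("site.com/", None),                 # user-typed root page
         ("site.com/img.png", "site.com/"),   # loaded by the root page
         ("evil.com/beacon", None)]           # no links in or out
assert find_cnc_candidates(trace) == ["evil.com/beacon"]
```

The appeal of the unsupervised formulation is visible even in this toy version: no malware sample is needed, only the structure of benign browsing.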
Stefano Vissicchio, Laurent Vanbever, Luca Cittadini, Geoffrey G. Xie, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 25, Issue 3, pp. 1649-1662 (June 2017).
The support for safe network updates, i.e., live modification of device behavior without service disruption, is a critical primitive for current and future networks. Several techniques have been proposed by previous works to implement such a primitive. Unfortunately, existing techniques are not generally applicable to any network architecture, and typically require high overhead (e.g., additional memory) to guarantee strong consistency (i.e., traversal of either initial or final paths, but never a mix of them) during the update. In this paper, we deeply study the problem of computing operational sequences to safely and quickly update arbitrary networks. We characterize cases, for which this computation is easy, and revisit previous algorithmic contributions in the new light of our theoretical findings. We also propose and thoroughly evaluate a generic sequence-computation approach, based on two new algorithms that we combine to overcome limitations of prior proposals. Our approach always finds an operational sequence that provably guarantees strong consistency throughout the update, with very limited overhead. Moreover, it can be applied to update networks running any combination of centralized and distributed control-planes, including different families of IGPs, OpenFlow or other SDN protocols, and hybrid SDN networks. Our approach therefore supports a large set of use cases, ranging from traffic engineering in IGP-only or SDN-only networks to incremental SDN roll-out and advanced requirements (e.g., per-flow path selection or dynamic network function virtualization) in partial SDN deployments.
IEEE Symposium on Security and Privacy 2017. San Jose, CA, USA (May 2017).
As the most successful cryptocurrency to date, Bitcoin constitutes a target of choice for attackers. While many attack vectors have already been uncovered, one important vector has been left out though: attacking the currency via the Internet routing infrastructure itself. Indeed, by manipulating routing advertisements (BGP hijacks) or by naturally intercepting traffic, Autonomous Systems (ASes) can intercept and manipulate a large fraction of Bitcoin traffic.
This paper presents the first taxonomy of routing attacks and their impact on Bitcoin, considering both small-scale attacks, targeting individual nodes, and large-scale attacks, targeting the network as a whole. While challenging, we show that two key properties make routing attacks practical: (i) the efficiency of routing manipulation; and (ii) the significant centralization of Bitcoin in terms of mining and routing. Specifically, we find that any network attacker can hijack a few (<100) BGP prefixes to isolate 50% of the mining power, even when considering that mining pools are heavily multi-homed. We also show that on-path network attackers can considerably slow down block propagation by interfering with a few key Bitcoin messages.
We demonstrate the feasibility of each attack against the deployed Bitcoin software. We also quantify their effectiveness on the current Bitcoin topology using data collected from a Bitcoin supernode combined with BGP routing data.
The potential damage to Bitcoin is worrying. By isolating parts of the network or delaying block propagation, attackers can cause a significant amount of mining power to be wasted, leading to revenue losses and enabling a wide range of exploits such as double spending. To prevent such effects in practice, we provide both short and long-term countermeasures, some of which can be deployed immediately.
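The "few prefixes isolate much mining power" observation comes down to a coverage argument, which a toy greedy model makes concrete. All numbers and the one-pool-per-prefix assumption below are illustrative; the paper's analysis accounts for multi-homing and real BGP data:

```python
def prefixes_to_isolate(pool_prefixes, target_share):
    """Greedily pick BGP prefixes covering the most mining power.

    pool_prefixes: dict mapping a prefix to the mining share hosted
    behind it (toy model: one pool per prefix, no multi-homing).
    Returns the greedy set of prefixes whose combined share reaches
    target_share.
    """
    chosen, covered = [], 0.0
    # Hijack the prefixes hosting the largest shares first.
    for prefix, share in sorted(pool_prefixes.items(), key=lambda kv: -kv[1]):
        if covered >= target_share:
            break
        chosen.append(prefix)
        covered += share
    return chosen

# Five pools with a skewed (centralized) distribution of mining power:
pools = {"p1": 0.30, "p2": 0.25, "p3": 0.20, "p4": 0.15, "p5": 0.10}
assert prefixes_to_isolate(pools, 0.50) == ["p1", "p2"]
```

Because mining power is heavily concentrated, the greedy set stays small, which is exactly what makes the hijack cheap for the attacker.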
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
Software-Defined Internet eXchange Points (SDXes) are recently gaining momentum, with several SDXes now running in production. The deployment of multiple SDXes on the Internet raises the question of whether the interactions between these SDXes will cause correctness problems, since SDX policies can deflect traffic away from the default BGP route for a prefix, effectively breaking the congruence between the control plane and data plane. Although one deflection on a path will never cause loops to occur, combining multiple deflections at different SDXes can lead to persistent forwarding loops that the control plane never sees.
In this paper, we introduce SIDR, a coordination framework that enables SDXes to verify the end-to-end correctness (i.e., loop freedom) of an SDX policy. The challenge behind SIDR is to strike a balance between privacy, scalability, and flexibility. SIDR addresses these challenges by: (i) not requiring SDXes to disclose the flow space their SDX policies act on, only the next-hop they deflect to; and (ii) minimizing the number of SDXes that must exchange state to detect correctness problems. SIDR manages to preserve the flexibility of SDX policies by activating the vast majority of the safe policies, the policies that do not create a loop. We implemented SIDR on the SDX platform and showed its practical effectiveness: SIDR can activate 91% of all safe policies while preserving privacy and scalability and can perform correctness checks in about one second.
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
By operating in highly asynchronous environments, SDN controllers often suffer from bugs caused by concurrency violations. Unfortunately, state-of-the-art concurrency analyzers for SDNs often report thousands of true violations, limiting their effectiveness in practice.
This paper presents BigBug, an approach for automatically identifying the most representative concurrency violations: those that capture the cause of the violation. The two key insights behind BigBug are that: (i) many violations share the same root cause, and (ii) violations with the same cause share common characteristics. BigBug leverages these observations to cluster reported violations according to the similarity of events in them as well as SDN-specific features. BigBug then reports the most representative violation for each cluster using a ranking function.
We implemented BigBug and showed its practical effectiveness. In more than 100 experiments involving different controllers and applications, BigBug systematically produced 6 clusters or fewer, corresponding to a median decrease of 95% over state-of-the-art analyzers. The number of violations reported by BigBug also closely matched that of actual bugs, indicating that BigBug is effective at identifying root causes of SDN races.
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
With the rise of stateful programmable data planes, a lot of the network functions that used to be implemented in the controller or at the end-hosts are now moving to the data plane to benefit from line-rate processing. Unfortunately, stateful data planes also mean more complex network updates as not only flows, but also the associated states, must now be migrated consistently to guarantee correct network behaviors. The main challenge is that data-plane states are maintained at line rate, according to possibly runtime criteria, rendering controller-driven migration impossible. We present Swing State, a general state-management framework and runtime system supporting consistent state migration in stateful data planes. The key insight behind Swing State is to perform state migration entirely within the data plane by piggybacking state updates on live traffic. To minimize the overhead, Swing State only migrates the states that cannot be safely reconstructed at the destination switch. We implemented a prototype of Swing State for P4. Given a P4 program, Swing State performs static analysis to compute which states require consistent migration and automatically augments the program to enable the transfer of these states at runtime. Our preliminary results indicate that Swing State is practical in migrating data-plane states at line rate with small overhead.
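The piggybacking idea can be sketched in a few lines: the old switch stamps its freshest per-flow state onto packets it forwards, and the new switch installs that state on arrival, with no controller round-trip. This is a hypothetical Python simulation of the mechanism, not the actual P4 implementation:

```python
class Switch:
    def __init__(self):
        self.state = {}  # per-flow register, updated at line rate

    def process(self, pkt, migrate_to=None):
        flow = pkt["flow"]
        # Stateful processing, e.g. a per-flow packet counter.
        self.state[flow] = self.state.get(flow, 0) + 1
        if migrate_to is not None:
            # Piggyback the freshest state on the live packet itself,
            # so migration keeps pace with line-rate updates.
            pkt["piggyback"] = (flow, self.state[flow])
            migrate_to.receive(pkt)

    def receive(self, pkt):
        flow, value = pkt.pop("piggyback")
        self.state[flow] = value  # install the migrated state

old, new = Switch(), Switch()
for _ in range(3):
    old.process({"flow": "f1"}, migrate_to=new)
# The destination switch tracks the source exactly, packet by packet.
assert new.state["f1"] == old.state["f1"] == 3
```

Even in this toy form, the key property is visible: consistency holds because state travels with the very traffic that mutates it.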
ACM SOSR 2017. Santa Clara, CA, USA (April 2017).
Advances in layer 2 networking technologies have fostered the deployment of large, geographically distributed LANs. Due to their large diameter, such LANs provide many vantage points for wiretapping. As an example, Google's internal network was reportedly tapped by governmental agencies, forcing the Web giant to encrypt its internal traffic. While using encryption certainly helps, eavesdroppers can still access traffic metadata which often reveals sensitive information, such as who communicates with whom and which are the critical hubs in the infrastructure.
This paper presents iTAP, a system for providing strong anonymity guarantees within a network. iTAP is network-based and can be partially deployed. Akin to onion routing, iTAP rewrites packet headers at the network edges by leveraging SDN devices. As large LANs can see millions of flows, the key challenge is to rewrite headers in a way that guarantees strong anonymity while, at the same time, scaling the control-plane (number of events) and the data-plane (number of flow rules). iTAP addresses these challenges by adopting a hybrid rewriting scheme. Specifically, iTAP scales by reusing rewriting rules across distinct flows and by distributing them on multiple switches. As reusing headers leaks information, iTAP monitors this leakage and adapts the rewriting rules before any eavesdropper could provably de-anonymize any host.
We implemented iTAP and evaluated it using real network traffic traces. We show that iTAP works in practice, on existing hardware, and that deploying a few SDN switches is enough to protect a large share of the network traffic.
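The edge-rewriting core reduces to a bidirectional pseudonym table held only at the network edge. The sketch below is a deliberately simplified illustration (deterministic pseudonyms, no rule reuse); iTAP itself randomizes headers, reuses rules across flows for scalability, and monitors the resulting leakage:

```python
class EdgeRewriter:
    """Toy model of header rewriting at an SDN network edge."""

    def __init__(self):
        self.to_pseudo, self.to_real, self.next_id = {}, {}, 0

    def anonymize(self, real_addr):
        # Assign a pseudonym per source; only the edge holds the mapping,
        # so an eavesdropper inside the LAN sees no real addresses.
        if real_addr not in self.to_pseudo:
            pseudo = f"10.255.0.{self.next_id}"
            self.next_id += 1
            self.to_pseudo[real_addr] = pseudo
            self.to_real[pseudo] = real_addr
        return self.to_pseudo[real_addr]

    def deanonymize(self, pseudo):
        # Reverse rewrite at the egress edge, restoring the real header.
        return self.to_real[pseudo]

edge = EdgeRewriter()
p = edge.anonymize("192.168.1.7")
assert p != "192.168.1.7" and edge.deanonymize(p) == "192.168.1.7"
```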
João Luís Sobrinho, Laurent Vanbever, Franck Le, André Sousa, Jennifer Rexford
IEEE/ACM Transactions on Networking. Volume 24, Issue 6, pp. 3462-3476 (December 2016)
The Internet routing system faces serious scalability challenges due to the growing number of IP prefixes that need to be propagated throughout the network. Although IP prefixes are assigned hierarchically and roughly align with geographic regions, today’s Border Gateway Protocol (BGP) and operational practices do not exploit opportunities to aggregate routing information. We present DRAGON, a distributed route-aggregation technique whereby nodes analyze BGP routes across different prefixes to determine which of them can be filtered while respecting the routing policies for forwarding data-packets. DRAGON works with BGP, can be deployed incrementally, and offers incentives for Autonomous Systems (ASs) to upgrade their router software. We illustrate the design of DRAGON through a number of examples, prove its properties while developing a theoretical model of route aggregation, and evaluate its performance. Our experiments with realistic AS-level topologies, assignments of IP prefixes, and routing policies show that DRAGON reduces the number of prefixes in each AS by at least 70% with minimal stretch in the lengths of AS-paths traversed by data packets.
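The simplest instance of the filtering condition is easy to sketch: a route for a more-specific prefix is redundant when a covering less-specific prefix already forwards through the same next-hop. This is a coarse illustration only; DRAGON's actual condition reasons about route preferences and policies, not just next-hops:

```python
import ipaddress

def filterable(rib):
    """Return child prefixes whose route can be dropped because a
    covering prefix already points to the same next-hop.

    rib: dict mapping a prefix string to its next-hop.
    """
    nets = {ipaddress.ip_network(p): nh for p, nh in rib.items()}
    removable = []
    for child, nh in nets.items():
        for parent, parent_nh in nets.items():
            # Dropping the child leaves forwarding unchanged: traffic
            # falls through to the parent and exits via the same next-hop.
            if child != parent and child.subnet_of(parent) and nh == parent_nh:
                removable.append(str(child))
                break
    return removable

rib = {"10.0.0.0/8": "AS2", "10.1.0.0/16": "AS2", "10.2.0.0/16": "AS3"}
assert filterable(rib) == ["10.1.0.0/16"]
```

Applied network-wide, repeated filtering of such redundant routes is what yields the large reductions in table sizes reported above.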
ACM HotNets 2016. Atlanta, Georgia, USA (November 2016).
For Internet Service Provider (ISP) operators, getting an accurate picture of how their network behaves is challenging. Given the traffic volumes that their networks carry and the impossibility to control end-hosts, ISP operators are typically forced to randomly sample traffic, and rely on aggregated statistics. This provides coarse-grained visibility, at a time resolution that is far from ideal (seconds or minutes).
In this paper, we present Mille-Feuille, a novel monitoring architecture that provides fine-grained visibility over ISP traffic. Mille-Feuille schedules activation and deactivation of traffic-mirroring rules, which are then provisioned network-wide from a central location, within milliseconds. By doing so, Mille-Feuille combines the scalability of sampling with the visibility and controllability of traffic mirroring. As a result, it supports a set of monitoring primitives, ranging from checking key performance indicators (e.g., one-way delay) for single destinations to estimating traffic matrices at sub-second timescales. Our preliminary measurements on existing routers confirm that Mille-Feuille is viable in practice.
ACM PLDI 2016. Santa Barbara, CA, USA (June 2016).
Usenix NSDI 2016. Santa Clara, CA, USA (March 2016).
Arpit Gupta, Nick Feamster, Laurent Vanbever
ACM SOSR 2016. Santa Clara, CA, USA (March 2016).
Software Defined Internet Exchange Points (SDXes) increase the flexibility of interdomain traffic delivery on the Internet. Yet, an SDX inherently requires multiple participants to have access to a single, shared physical switch, which creates the need for an authorization mechanism to mediate this access. In this paper, we introduce a logic and mechanism called FLANC (A Formal Logic for Authorizing Network Control), which authorizes each participant to control forwarding actions on a shared switch and also allows participants to delegate forwarding actions to other participants at the switch (e.g., a trusted third party). FLANC extends “says” and “speaks for” logic that have been previously designed for operating system objects to handle expressions involving network traffic flows. We describe FLANC, explain how participants can use it to express authorization policies for realistic interdomain routing settings, and demonstrate that it is efficient enough to operate in operational settings.
Karla Saur, Joseph Collard, Nate Foster, Arjun Guha, Laurent Vanbever, Michael Hicks
ACM SOSR 2016. Santa Clara, CA, USA (March 2016).
Nick Shelly, Brendan Tschaen, Klaus-Tycho Forster, Michael Chang, Theophilus Benson, Laurent Vanbever
ACM HotNets 2015. Philadelphia, PA, USA (November 2015).
ACM IMC 2015. Tokyo, Japan (October 2015).
Stefano Vissicchio, Olivier Tilmans, Laurent Vanbever, Jennifer Rexford
ACM SIGCOMM 2015. London, UK (August 2015).
Media coverage: ipSpace
Yixin Sun, Anne Edmundson, Laurent Vanbever, Oscar Li, Jennifer Rexford, Mung Chiang, Prateek Mittal
USENIX Security 2015. Washington, D.C., USA (August 2015).
ACM SOSR 2015. Santa Clara, CA, USA (June 2015).
Peng Sun, Laurent Vanbever, Jennifer Rexford
ACM SOSR 2015. Santa Clara, CA, USA (June 2015).
With the rise of video streaming and cloud services, enterprise and access networks receive much more traffic than they send, and must rely on the Internet to offer good end-to-end performance. These edge networks often connect to multiple ISPs for better performance and reliability, but have only limited ways to influence which of their ISPs carries the traffic for each service. In this paper, we present Sprite, a software-defined solution for flexible inbound traffic engineering (TE). Sprite offers direct, fine-grained control over inbound traffic, by announcing different public IP prefixes to each ISP, and performing source network address translation (SNAT) on outbound request traffic. Our design achieves scalability in both the data plane (by performing SNAT on edge switches close to the clients) and the control plane (by having local agents install the SNAT rules). The controller translates high-level TE objectives, based on client and server names, as well as performance metrics, to a dynamic network policy based on real-time traffic and performance measurements. We evaluate Sprite with live data from “in the wild” experiments on an EC2-based testbed, and demonstrate how Sprite dynamically adapts the network policy to achieve high-level TE objectives, such as balancing YouTube traffic among ISPs to improve video quality.
Stefano Vissicchio, Luca Cittadini, Olivier Bonaventure, Geoffrey Xie, Laurent Vanbever
IEEE INFOCOM 2015. Hong Kong (April 2015).
Network operators can and do deploy multiple routing control-planes, e.g., by running different protocols or instances of the same protocol. With the rise of SDN, multiple control-planes are likely to become even more popular, e.g., to enable hybrid SDN or multi-controller deployments. Unfortunately, previous works do not apply to arbitrary combinations of centralized and distributed control-planes. In this paper, we develop a general theory for coexisting control-planes. We provide a novel, exhaustive classification of existing and future control-planes (e.g., OSPF, EIGRP, and OpenFlow) based on fundamental control-plane properties that we identify. Our properties are general enough to study centralized and distributed control-planes under a common framework. We show that multiple uncoordinated control-planes can cause forwarding anomalies whose type solely depends on the identified properties. To show the wide applicability of our framework, we leverage our theoretical insight to (i) provide sufficient conditions to avoid anomalies, (ii) propose configuration guidelines, and (iii) define a provably-safe procedure for reconfigurations from any (combination of) control-planes to any other. Finally, we discuss prominent consequences of our findings on the deployment of new paradigms (notably, SDN) and previous research works.
João Luís Sobrinho, Laurent Vanbever, Franck Le, Jennifer Rexford
ACM CoNEXT 2014. Sydney, Australia (December 2014).
The Internet routing system faces serious scalability challenges, due to the growing number of IP prefixes it needs to propagate throughout the network. For example, the Internet suffered significant outages in August 2014 when the number of globally routable prefixes went past 512K, the default size of the forwarding tables in many older routers. Although IP prefixes are assigned hierarchically, and roughly align with geographic regions, today's Border Gateway Protocol (BGP) and operational practices do not exploit opportunities to aggregate routes. We present a distributed route-aggregation technique (called DRAGON) where nodes analyze BGP routes across different prefixes to determine which of them can be filtered while respecting the routing policies for forwarding data-packets. DRAGON works with BGP, can be deployed incrementally, and offers incentives for ASs to upgrade their router software. We present a theoretical model of route-aggregation, and the design and analysis of DRAGON. Our experiments with realistic assignments of IP prefixes, network topologies, and routing policies show that DRAGON reduces the number of prefixes in each AS by about 80% and significantly curtails the number of routes exchanged during transient periods of convergence.
Laurent Vanbever, Oscar Li, Jennifer Rexford, Prateek Mittal
ACM HotNets 2014. Los Angeles, CA, USA (October 2014).
Anonymity systems like Tor are known to be vulnerable to malicious relay nodes. Another serious threat comes from the Autonomous Systems (ASes) that carry Tor traffic due to their powerful eavesdropping capabilities. Indeed, an AS (or set of colluding ASes) that lies between the client and the first relay, and between the last relay and the destination, can perform timing analysis to compromise user anonymity. In this paper, we show that AS-level adversaries are much more powerful than previously thought. First, routine BGP routing changes can significantly increase the number of ASes that can analyze a user's traffic successfully. Second, ASes can actively manipulate BGP announcements to put themselves on the paths to and from relay nodes. Third, an AS can perform timing analysis even when it sees only one direction of the traffic at both communication ends. Actually, asymmetric routing increases the fraction of ASes able to analyze a user's traffic. We present a preliminary evaluation of our attacks using measurements of BGP and Tor. Our findings motivate the design of approaches for anonymous communication that are resilient to AS-level adversaries.
Stefano Vissicchio, Laurent Vanbever, Jennifer Rexford
ACM HotNets 2014. Los Angeles, CA, USA (October 2014).
Link-state routing protocols (e.g., OSPF and IS-IS) are widely used because they are scalable, robust, and based on simple abstractions. Unfortunately, these protocols are also relatively inflexible, since they direct all traffic over shortest paths. In contrast, Software Defined Networking (SDN) offers fine-grained control over routing, at the expense of controller overhead, failover latency, and deployment challenges.
We argue that future networks can achieve the benefits of both approaches through central control over the distributed route computation. The key idea, which we call Fibbing, is to have the controller trick the routers into seeing a fake topology that is carefully constructed to achieve the desired Forwarding Information Base (FIB). Given an acyclic forwarding graph for each destination, the controller computes an augmented topology with fake nodes (and destinations to announce there) and fake links (and link weights). The controller injects these "lies" into the link-state routing protocol, and the routers simply compute the paths accordingly. The controller can also select an augmented topology that triggers the use of specific backup paths when real links and routers fail. To reduce router load, our Fibbing algorithms compute augmented topologies of minimal size. Our preliminary evaluation on realistic ISP topologies shows that Fibbing works well in practice.
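The "lying to routers" trick can be demonstrated with plain Dijkstra. In the sketch below (a hypothetical toy, not Fibbing's minimal-augmentation algorithm), attaching a fake node that announces the destination at low cost flips a router's next-hop without touching the protocol:

```python
import heapq

def next_hop(graph, src, dst):
    """First hop of src's shortest path to dst, via Dijkstra over an
    OSPF-like weighted topology: {node: {neighbor: weight}}."""
    pq, seen = [(0, src, None)], {}
    while pq:
        d, u, first = heapq.heappop(pq)
        if u in seen:
            continue
        seen[u] = first
        if u == dst:
            return first
        for v, w in graph.get(u, {}).items():
            heapq.heappush(pq, (d + w, v, first if first else v))
    return None

# Real topology: A reaches D via B (cost 2) rather than via C (cost 3).
topo = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "D": 1},
        "C": {"A": 1, "D": 2}, "D": {}}
assert next_hop(topo, "A", "D") == "B"

# The "lie": a fake node F, attached to C, announces D at cost 0.
# The routers rerun their usual shortest-path computation and A now
# steers its D-bound traffic through C, as the controller intended.
topo["C"]["F"] = 0
topo["F"] = {"D": 0}
assert next_hop(topo, "A", "D") == "C"
```

The augmentation is invisible to the routers: they keep running an unmodified link-state protocol, which is what makes the approach deployable.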
Shuyuan Zhang, Sharad Malik, Sanjai Narain, Laurent Vanbever
IEEE ICNP 2014 (Concise Papers Track). Raleigh, North Carolina, USA (October 2014).
Network operators often need to change their routing policy in response to network failures, new load balancing strategies, or stricter security requirements. While several recent works have aimed at solving this problem, they all assume that a fast and conveniently dimensioned out-of-band network is available to communicate with any device. Unfortunately, such a parallel network is often not practical. This paper presents a technique for performing such updates in-band: it enables reconfiguration control messages to be sent directly within the fast production network. Performing such updates is hard because intermediate configurations can lock out the controller from devices before they are updated. Thus, updates have to be carefully sequenced. Our technique also minimizes the total update time by updating the network in parallel, whenever possible. Our technique takes into account in-band middleboxes, such as firewalls. We have implemented our framework using Integer Linear Programming, and experimentally validated it on problems of realistic scale.
Arpit Gupta, Laurent Vanbever, Muhammad Shahbaz, Sean Donovan, Brandon Schlinker, Nick Feamster, Jennifer Rexford, Scott Shenker, Russ Clark, Ethan Katz-Bassett
ACM SIGCOMM 2014. Chicago, IL, USA (August 2014).
BGP severely constrains how networks can deliver traffic over the Internet. Today's networks can only forward traffic based on the destination IP prefix, by selecting among routes offered by their immediate neighbors. We believe Software Defined Networking (SDN) could revolutionize wide-area traffic delivery, by offering direct control over packet-processing rules that match on multiple header fields and perform a variety of actions. Internet exchange points (IXPs) are a compelling place to start, given their central role in interconnecting many networks and their growing importance in bringing popular content closer to end users.
To realize a Software Defined IXP (an "SDX"), we must create compelling applications, such as "application-specific peering"---where two networks peer only for (say) streaming video traffic. We also need new programming abstractions that allow participating networks to create and run these applications and a runtime that both behaves correctly when interacting with BGP and ensures that applications do not interfere with each other. Finally, we must ensure that the system scales, both in rule-table size and computational overhead. In this paper, we tackle these challenges and demonstrate the flexibility and scalability of our solutions through controlled and in-the-wild experiments. Our experiments demonstrate that our SDX implementation can implement representative policies for hundreds of participants who advertise full routing tables while achieving sub-second convergence in response to configuration changes and routing updates.
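"Application-specific peering" boils down to a match-action rule that deflects selected traffic away from the default BGP next-hop. The sketch below is purely illustrative (the prefix, port, and peer names are made up; the real SDX compiles such policies into switch rules and reconciles them with BGP):

```python
def sdx_policy(pkt, bgp_next_hop):
    """Match-action sketch of an SDX deflection policy.

    Traffic toward a (hypothetical) video provider's prefix on port 443
    is handed to a dedicated peering; everything else follows the route
    that BGP selected.
    """
    if pkt["dst_prefix"] == "203.0.113.0/24" and pkt["dport"] == 443:
        return "video-peer"   # deflection installed by the SDX policy
    return bgp_next_hop       # default: regular BGP forwarding

assert sdx_policy({"dst_prefix": "203.0.113.0/24", "dport": 443}, "AS1") == "video-peer"
assert sdx_policy({"dst_prefix": "198.51.100.0/24", "dport": 80}, "AS1") == "AS1"
```

Matching on multiple header fields, rather than on the destination prefix alone, is precisely the flexibility that plain BGP cannot express.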
Stefano Vissicchio, Laurent Vanbever, Luca Cittadini, Geoffrey Xie, Olivier Bonaventure
IEEE INFOCOM 2014. Toronto, ON, Canada (April 2014).
Simultaneously providing flexibility, evolvability and correctness of routing is one of the basic and still unsolved problems in networking. Route redistribution provides a tool, used in many enterprise networks, to either partition a network into multiple routing domains or merge previously independent networks. However, no general technique exists for changing a live network's route redistribution configuration without incurring packet losses and service disruptions. In this paper, we study the problem of how to safely transition between route redistribution configurations. We investigate what anomalies may occur in the reconfiguration process, showing that many long-lasting forwarding loops can and do occur if naive techniques are applied. We devise new sufficient conditions for anomaly-free reconfigurations, and we leverage them to build provably safe and practical reconfiguration procedures. Our procedures enable seamless network re-organizations to accomplish both short-term objectives, such as local repair or traffic engineering, and long-term requirement changes.
Stefano Vissicchio, Laurent Vanbever, Olivier Bonaventure
ACM SIGCOMM Computer Communications Review. Editorial Zone (April 2014)
Software Defined Networking (SDN) promises to ease design, operation and management of communication networks. However, SDN comes with its own set of challenges, including incremental deployability, robustness, and scalability. Those challenges make a full SDN deployment difficult in the short-term and possibly inconvenient in the longer-term. In this paper, we explore hybrid SDN models that combine SDN with a more traditional networking approach based on distributed protocols. We show a number of use cases in which hybrid models can mitigate the respective limitations of traditional and SDN approaches, providing incentives to (partially) transition to SDN. Further, we expose the qualitatively diverse tradeoffs that are naturally achieved in hybrid models, making them convenient for different transition strategies and long-term network designs. For those reasons, we argue that hybrid SDN architectures deserve more attention from the scientific community.
Laurent Vanbever, Stefano Vissicchio
Open Network Summit (Research Track). Santa Clara, CA, USA (March 2014).
Xin Jin, Li Erran Li, Laurent Vanbever, Jennifer Rexford
ACM CoNEXT 2013. Santa Barbara, CA, USA (December 2013).
Cellular core networks suffer from inflexible and expensive equipment, as well as from complex control-plane protocols. To address these challenges, we present SoftCell, a scalable architecture that supports fine-grained policies for mobile devices in cellular core networks, using commodity switches and servers. SoftCell enables operators to realize high-level service policies that direct traffic through sequences of middleboxes based on subscriber attributes and applications. To minimize the size of the forwarding tables, SoftCell aggregates traffic along multiple dimensions---the service policy, the base station, and the mobile device---at different switches in the network. Since most traffic originates from mobile devices, SoftCell performs fine-grained packet classification at the access switches, next to the base stations, where software switches can easily handle the state and bandwidth requirements. SoftCell guarantees that packets belonging to the same connection traverse the same sequence of middleboxes in both directions, even in the presence of mobility. We demonstrate that SoftCell improves the scalability and flexibility of cellular core networks by analyzing real LTE workloads, performing micro-benchmarks on our prototype controller as well as large-scale simulations.
Marco Chiesa, Luca Cittadini, Giuseppe Di Battista, Laurent Vanbever, Stefano Vissicchio
IEEE ICNP 2013. Göttingen, Germany (October 2013).
Fun: Our paper is listed on Accidentally Turing-Complete
Because of its practical relevance, the Border Gateway Protocol (BGP) has been the target of a huge research effort for more than a decade. In particular, many contributions aimed at characterizing the computational complexity of BGP-related problems. In this paper, we answer computational complexity questions by unveiling a fundamental mapping between BGP configurations and logic circuits. Namely, we describe simple networks containing routers with elementary BGP configurations that simulate logic gates, clocks, and flip-flops, and we show how to interconnect them to simulate arbitrary logic circuits. We then investigate the implications of such a mapping on the feasibility of solving BGP fundamental problems, and prove that, under realistic assumptions, BGP has the same computing power as a Turing Machine. We also investigate the impact of restrictions on the expressiveness of BGP policies and route propagation (e.g., route propagation rules in iBGP and Local Transit Policies in eBGP) and the impact of different message timing models. Finally, we show that the mapping is not limited to BGP and can be applied to generic routing protocols that use several metrics.
Laurent Vanbever, Joshua Reich, Theophilus Benson, Nate Foster, Jennifer Rexford
ACM SIGCOMM HotSDN 2013 Workshop. Hong Kong, China (August 2013).
Like any complex software, SDN programs must be updated periodically, whether to migrate to a new controller platform, repair bugs, or address performance issues. Nowadays, SDN operators typically perform such upgrades by stopping the old controller and starting the new one---an approach that wipes out all installed flow table entries and causes substantial disruption, including packet loss, increased latency, and even correctness violations.
This paper presents HotSwap, a system for upgrading SDN controllers in a disruption-free and correct manner. HotSwap is a hypervisor (sitting between the switches and the controller) that maintains a history of network events. To upgrade from an old controller to a new one, HotSwap bootstraps the new controller (by replaying the history) and monitors its output (to determine which parts of the network state may be reused with the new controller). To ensure good performance, HotSwap filters the history using queries specified by programmers. We describe our design and preliminary implementation of HotSwap, and present experimental results demonstrating its effectiveness for managing upgrades to third-party controller programs.
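The record-and-replay idea at the core of HotSwap can be sketched in a few lines. The following toy (hypothetical class and event names, not HotSwap's actual implementation) shows a hypervisor that logs events, then warms up a replacement controller by replaying only the events matching a programmer-supplied query before cutting over.

```python
# Minimal record-and-replay sketch: a hypervisor sits between switches
# and the controller, logs every network event, and bootstraps a new
# controller by replaying a filtered history.

class Hypervisor:
    def __init__(self, controller):
        self.controller = controller
        self.history = []               # every event seen so far

    def on_event(self, event):
        self.history.append(event)
        return self.controller.handle(event)

    def upgrade(self, new_controller, query=lambda e: True):
        """Warm up new_controller by replaying only the events matching
        the query, then cut over to it."""
        for event in filter(query, self.history):
            new_controller.handle(event)  # output monitored, not emitted
        self.controller = new_controller

class LearningSwitchController:
    """Toy controller: learns which port each MAC address lives on."""
    def __init__(self):
        self.mac_table = {}
    def handle(self, event):
        kind, mac, port = event
        if kind == "packet_in":
            self.mac_table[mac] = port

hv = Hypervisor(LearningSwitchController())
hv.on_event(("packet_in", "aa:aa", 1))
hv.on_event(("lldp", None, 2))          # topology chatter, filtered out
hv.on_event(("packet_in", "bb:bb", 3))

fresh = LearningSwitchController()
hv.upgrade(fresh, query=lambda e: e[0] == "packet_in")
assert fresh.mac_table == {"aa:aa": 1, "bb:bb": 3}
```

The query-based filtering matters for performance: replaying an unfiltered history would make upgrades of long-running deployments prohibitively slow.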
Stefano Vissicchio, Laurent Vanbever, Cristel Pelsser, Luca Cittadini, Pierre Francois, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 21, Issue 3, pp. 990-1002 (June 2013)
The network infrastructure of Internet Service Providers (ISPs) undergoes constant evolution. Whenever new requirements arise (e.g., the deployment of a new Point of Presence, or a change in the business relationship with a neighboring ISP), operators need to change the configuration of the network. Due to the complexity of BGP and to the lack of methodologies and tools, maintaining service availability during reconfigurations that involve BGP is a challenge for operators. In this paper, we show that the current best practices to reconfigure BGP do not provide guarantees with respect to traffic disruptions. Then, we study the problem of finding an operational ordering of BGP reconfiguration steps which guarantees no packet loss. Unfortunately, finding such an operational ordering, when it exists, is computationally hard. To enable lossless reconfigurations, we propose a framework that extends current features of carrier-grade routers to run two BGP control planes in parallel. We present a prototype implementation and we show the effectiveness of our framework through a case study.
Laurent Vanbever, Stefano Vissicchio, Luca Cittadini, Olivier Bonaventure
IEEE INFOCOM 2013. Turin, Italy (April 2013).
Network upgrades, performance optimizations and traffic engineering activities often force network operators to adapt their IGP configuration. Recently, several techniques have been proposed to change an IGP configuration (e.g., link weights) in a disruption-free manner. Unfortunately, none of these techniques considers the impact of IGP changes on BGP correctness. In this paper, we show that known reconfiguration techniques can trigger various kinds of BGP anomalies. First, we illustrate the relevance of the problem by performing simulations on a Tier-1 network. Our simulations highlight that even a few link weight changes can produce long-lasting BGP anomalies affecting a significant part of the BGP routing table. Then, we study the problem of finding a reconfiguration ordering which maintains both IGP and BGP correctness. Unfortunately, we show examples in which such an ordering does not exist. Furthermore, we prove that deciding if such an ordering exists is NP-hard. Finally, we provide sufficient conditions and configuration guidelines that enable graceful operations for both IGP and BGP.
Xin Jin, Li Erran Li, Laurent Vanbever, Jennifer Rexford
Open Networking Summit (Research Track). Santa Clara, CA, USA (April 2013).
Laurent Vanbever, Stefano Vissicchio, Cristel Pelsser, Pierre Francois, Olivier Bonaventure
IEEE/ACM Transactions on Networking. Volume 20, Issue 6, pp. 1842-1855 (December 2012)
Network-wide migrations of a running network, such as the replacement of a routing protocol or the modification of its configuration, can improve the performance, scalability, manageability, and security of the entire network. However, such migrations are an important source of concerns for network operators as the reconfiguration campaign can lead to long, service-disrupting outages. In this paper, we propose a methodology that addresses the problem of seamlessly modifying the configuration of link-state Interior Gateway Protocols (IGPs). We illustrate the benefits of our methodology by considering several migration scenarios, including the addition and the removal of routing hierarchy in a running IGP, and the replacement of one IGP with another. We prove that a strict operational ordering can guarantee that the migration will not create any service outage. Although finding a safe ordering is NP-complete, we describe techniques that efficiently find such an ordering and evaluate them using several real-world and inferred ISP topologies. Finally, we describe the implementation of a provisioning system that automatically performs the migration by pushing the configurations on the routers in the appropriate order while monitoring the entire migration process.
Stefano Vissicchio, Luca Cittadini, Laurent Vanbever, Olivier Bonaventure
IEEE INFOCOM 2012. Orlando, FL, USA (March 2012).
Internal BGP (iBGP) is used to distribute interdomain routes within a single ISP. The interaction between iBGP and the underlying IGP can lead to routing and forwarding anomalies. For this reason, several research contributions aimed at defining sufficient conditions to guarantee anomaly-free configurations and providing design guidelines for network operators. In this paper, we show several anomalies caused by defective dissemination of routes in iBGP. We define the dissemination correctness property, which models the ability of routers to learn at least one route to each destination. By distinguishing between dissemination correctness and existing correctness properties, we show counterexamples that invalidate some results in the literature. Further, we prove that deciding whether an iBGP configuration is dissemination correct is computationally intractable. Even worse, determining whether the addition of a single iBGP session can adversely affect dissemination correctness of an iBGP configuration is also computationally intractable. Finally, we provide sufficient conditions that ensure dissemination correctness, and we leverage them to both formulate design guidelines and revisit prior results.
Laurent Vanbever, Stefano Vissicchio, Cristel Pelsser, Pierre Francois, Olivier Bonaventure
ACM SIGCOMM 2011. Toronto, ON, Canada (August 2011).
Network-wide migrations of a running network, such as the replacement of a routing protocol or the modification of its configuration, can improve the performance, scalability, manageability, and security of the entire network. However, such migrations are an important source of concerns for network operators as the reconfiguration campaign can lead to long and service-affecting outages.
In this paper, we propose a methodology which addresses the problem of seamlessly modifying the configuration of commonly used link-state Interior Gateway Protocols (IGP). We illustrate the benefits of our methodology by considering several migration scenarios, including the addition or the removal of routing hierarchy in an existing IGP and the replacement of one IGP with another. We prove that a strict operational ordering can guarantee that the migration will not create IP transit service outages. Although finding a safe ordering is NP-complete, we describe techniques which efficiently find such an ordering and evaluate them using both real-world and inferred ISP topologies. Finally, we describe the implementation of a provisioning system which automatically performs the migration by pushing the configurations on the routers in the appropriate order, while monitoring the entire migration process.
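The ordering problem at the heart of this line of work can be stated very compactly: migrate routers one at a time, and accept only orderings whose every intermediate state satisfies a safety check (e.g., loop freedom). The sketch below (the safety predicate and names are hypothetical; real instances derive the check from the forwarding state) uses exhaustive search, which is exponential and thus consistent with the NP-completeness result; the paper's techniques find orderings far more efficiently.

```python
# Search for a per-router migration order such that every intermediate
# migration state passes a user-supplied safety predicate.
import itertools

def find_safe_ordering(routers, is_safe_state):
    """Return a migration order whose intermediate states all satisfy
    is_safe_state(migrated_set), or None if no such order exists."""
    for order in itertools.permutations(routers):
        migrated, ok = set(), True
        for r in order:
            migrated = migrated | {r}
            if not is_safe_state(migrated):
                ok = False
                break
        if ok:
            return list(order)
    return None

# Toy safety predicate: router "c" must be migrated before "a"
# (say, to avoid a transient forwarding loop between them).
safe = lambda migrated: not ("a" in migrated and "c" not in migrated)
order = find_safe_ordering(["a", "b", "c"], safe)
assert order is not None and order.index("c") < order.index("a")
```

When `find_safe_ordering` returns `None`, no per-router ordering is safe, which is exactly the situation in which the authors' two-control-plane style of workaround (running old and new configurations side by side) becomes necessary.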
Laurent Vanbever, Bruno Quoitin, Olivier Bonaventure
ACM SIGCOMM PRESTO Workshop. Barcelona, Spain (August 2009).
BGP routing policies are mainly used by network operators to enforce business relationships between Autonomous Systems (AS), and to prefer some routes over others. In this paper, we propose a hierarchical policy model to express each policy at the most appropriate level of abstraction. The model is structured around chains of filters that apply at a specific level of abstraction. To validate our approach, we implemented the model in a Java prototype and used it to reproduce several network-wide routing policies. In all studied networks, the model produced a high-level view of routing policies while preserving their semantics. Our model offers numerous advantages to network operators including, but not limited to, better network documentation, an improved understanding of routing policies, redundancy suppression, and an easier way of managing their network.
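The chain-of-filters idea can be illustrated with a small sketch. The structure below is hypothetical (the paper's prototype is in Java and its levels differ): filters attach at increasingly specific abstraction levels, are evaluated in order, and the last filter that does not pass decides, so a session-level rule can refine a network-wide default.

```python
# Hierarchical filter chain: generic filters first, specific ones last;
# the most specific non-PASS verdict wins.
ACCEPT, REJECT, PASS = "accept", "reject", "pass"

def evaluate(chain, route):
    """Apply filters from most generic to most specific; the last
    filter that does not PASS decides. Default is deny."""
    decision = REJECT
    for f in chain:
        verdict = f(route)
        if verdict != PASS:
            decision = verdict
    return decision

# Network level: accept everything learned from customers.
network = lambda r: ACCEPT if r["from"] == "customer" else PASS
# Session level: but reject overly specific prefixes from this neighbor.
session = lambda r: REJECT if r["prefix_len"] > 24 else PASS

chain = [network, session]
assert evaluate(chain, {"from": "customer", "prefix_len": 16}) == ACCEPT
assert evaluate(chain, {"from": "customer", "prefix_len": 28}) == REJECT
assert evaluate(chain, {"from": "peer", "prefix_len": 16}) == REJECT
```

Expressing a policy once at the network level, and overriding it only where a neighbor or session needs something different, is what yields the redundancy suppression the abstract mentions.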
Laurent Vanbever, Grégory Pardoen, Olivier Bonaventure
Internet Network Management Workshop. Orlando, FL, USA (October 2008).
Today, most IP networks are still configured manually on a router-by-router basis. This is error-prone and often leads to misconfiguration. In this paper, we describe the Network Configuration Safeguard (NCGuard), a tool that allows the network architect to apply a safer methodology. The first step is to define the design rules. Based on a survey of the networking literature, we classify the most common types of rules into three main patterns (presence, uniqueness, and symmetry) and provide several examples. The second step is to write a high-level representation of the network. The third step is to validate the network representation and generate the configuration of each router. This last step is performed automatically by our prototype. Finally, we describe our prototype and apply it to the Abilene network.
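The three rule patterns lend themselves to a compact sketch. The validators below are illustrative only (the data model and attribute names are hypothetical, and NCGuard operates on a richer network representation): presence checks that an attribute exists everywhere, uniqueness that a value never repeats, and symmetry that both ends of a link agree.

```python
# Illustrative validators for the three rule patterns, over a
# dict-based toy network representation.

def check_presence(routers, attr):
    """Presence: every router must define the given attribute."""
    return [r for r, cfg in routers.items() if attr not in cfg]

def check_uniqueness(routers, attr):
    """Uniqueness: no two routers may share a value for the attribute."""
    seen, dups = {}, []
    for r, cfg in routers.items():
        v = cfg.get(attr)
        if v in seen:
            dups.append((seen[v], r, v))
        seen[v] = r
    return dups

def check_symmetry(links):
    """Symmetry: both endpoints of each link must agree on the MTU."""
    return [l for l, (mtu_a, mtu_b) in links.items() if mtu_a != mtu_b]

routers = {
    "r1": {"loopback": "10.0.0.1", "igp_area": 0},
    "r2": {"loopback": "10.0.0.1"},        # duplicate loopback, no area
}
links = {("r1", "r2"): (1500, 9000)}       # MTU mismatch

assert check_presence(routers, "igp_area") == ["r2"]
assert check_uniqueness(routers, "loopback") == [("r1", "r2", "10.0.0.1")]
assert check_symmetry(links) == [("r1", "r2")]
```

Running such checks on the high-level network representation, before any router configuration is generated, is what lets the tool catch misconfigurations that manual, router-by-router editing routinely introduces.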