Distributed Systems

Organization and topics of the Seminar and Advanced Seminar

A Seminar for Bachelor students and an Advanced Seminar for Master students are offered each semester.

Both modules are organized in the style of a scientific conference. Each student works on an assigned topic from the distributed systems domain. After submitting a written report on the results of the literature research, each participant writes reviews for other seminar papers and presents their work in the "conference session" of the seminar.

Seminar: Modern Internet Technologies

Driven by the requirements of innovative applications and services, the past few years have produced new technologies for networked or distributed systems. In the Internet of Things, a large number of devices and everyday objects equipped with sensors and actuators are connected to the Internet and communicate mostly through wireless communication technologies. In one of its sub-domains, the industrial Internet of Things (IIoT, Industry 4.0), machines, tools, transport equipment, etc. are networked.
Virtualisation and "software-defined" systems, e.g. Software-Defined Networking (SDN), increase the flexibility and efficiency of distributed systems through dynamic adaptation and scaling.
Driven by the popularity of the Bitcoin system, distributed ledger technologies and related concepts such as smart contracts have been developed, which are not only the foundations of electronic currencies, but can also support any application in which a consensus between different parties must be reached and documented. 
Another focus has been the reduction of latency in networked and distributed systems, e.g. by using nearby edge and fog computing resources in addition to the remote cloud, or by using optimised communication protocols to, for instance, connect client and server more rapidly.
In addition to stationary networks, mobile (5G) communication technologies and systems have developed rapidly. For example, COVID tracing applications use mobile devices to track contacts. This method, known as crowdsensing, can be applied more generally to collect large amounts of geographically distributed sensor data.

This seminar will discuss a wide range of current technologies, protocols and standards that enable the above networked and distributed applications and services. 



The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.

Flexible, scalable, and efficient networks

1) Low-Latency Internet

Supervisor: Robin Laidig

Low latency is crucial for many networked applications and often improves performance and user experience. For example, in online gaming, high latency can cause lag and potentially make the game unplayable; at Amazon, even a small increase in latency on the online shopping platform reportedly leads to lower sales, with losses amounting to almost 1 billion dollars. But why are latencies in the Internet so high, and how can we improve them?

The objective of this seminar topic is to review computer science research that proposes solutions to overcome high latencies in the Internet.


2) QUIC – The Fast TCP Alternative

Supervisor: Michael Schramm

For large web companies and Content Distribution Networks (CDN), the performance and security of web protocols is a major concern. For this reason, Google proposed the QUIC protocol. It is supported by all Google mobile apps, and as of October 2020, more than 75% of Meta's Internet traffic used QUIC, making it Meta's standard Internet protocol. But how does this novel protocol ensure security? How is the higher performance achieved, and how much higher is it?
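
Part of QUIC's performance advantage comes from needing fewer round trips before the first byte of application data: TCP plus TLS 1.3 spends two round trips on handshakes, QUIC combines the transport and cryptographic handshake into one, and 0-RTT resumption removes it entirely. A rough sketch of the effect on time to first byte (the RTT value is an arbitrary assumption; loss, congestion control, and server processing time are ignored):

```python
# Back-of-the-envelope time to first byte (TTFB), counting only
# handshake round trips. The 50 ms RTT is an assumed example value.

def ttfb(handshake_rtts, rtt_ms):
    # handshake round trips, plus one RTT for request and response
    return (handshake_rtts + 1) * rtt_ms

RTT = 50  # ms, assumed client-server round-trip time

for name, rtts in [("TCP + TLS 1.3", 2),
                   ("QUIC (first contact)", 1),
                   ("QUIC 0-RTT resumption", 0)]:
    print(f"{name}: {ttfb(rtts, RTT)} ms")  # 150, 100, 50 ms
```

On long paths the saved round trips dominate total page-load time, which is one reason CDNs adopted QUIC quickly.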

The objective of this seminar topic is to explain the QUIC protocol and compare it with TCP protocol variants. It shall also review scientific papers to extract performance measurements and experiences with the QUIC protocol.


3) Fairness of Bandwidth Sharing Between Transport Layer Protocols (TCP, UDP, QUIC, …)

Supervisor: Robin Laidig

There are many transport layer protocols for computer networks that are used for different purposes. Traditional examples are the connection-oriented TCP and the connectionless UDP. A more recent example is the Google QUIC protocol, which is based on UDP but provides features similar to TCP with higher efficiency. All of these protocols are used on the Internet today and must share the same infrastructure. But how well do they work together? Do they share the available bandwidth equally? What measures are taken to prevent unfair bandwidth allocation between network flows of different protocols?
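
One classic reason TCP flows tend toward a fair share is their additive-increase/multiplicative-decrease (AIMD) congestion control. The toy model below (two idealized flows with synchronized losses; all numbers invented) shows Jain's fairness index improving as AIMD runs; protocols that do not back off multiplicatively can break this mechanism:

```python
# Toy model of two AIMD flows sharing a bottleneck of capacity C with
# synchronized losses. A sketch of the fairness dynamics, not a
# network simulator.

def aimd(x1, x2, capacity, steps):
    for _ in range(steps):
        if x1 + x2 > capacity:        # congestion: multiplicative decrease
            x1, x2 = x1 / 2, x2 / 2
        else:                          # additive increase
            x1, x2 = x1 + 1, x2 + 1
    return x1, x2

def jain_index(x1, x2):
    # 1.0 means perfectly equal rates
    return (x1 + x2) ** 2 / (2 * (x1 ** 2 + x2 ** 2))

x1, x2 = aimd(1.0, 30.0, capacity=40.0, steps=500)
print(jain_index(1.0, 30.0), jain_index(x1, x2))  # fairness improves
```

Each multiplicative-decrease event halves the rate difference between the flows, which is why the index converges toward 1.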

The goal of this seminar paper is to review existing computer science research that investigates the fairness of bandwidth sharing between different transport layer protocols. Further, possible improvements or solutions that were proposed in computer science research shall be pointed out.


4) Modern Scalable Network Topologies for Datacenters

Supervisor: Michael Schramm

Modern data centers consist of thousands of servers. To connect these servers with a scalable, high-bandwidth network, suitable network topologies are needed. Fat trees and Clos networks are prominent examples of such topologies. Only recently, Google described how they evolved their Jupiter data center network from a Clos network to a direct-connect network.
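
To get a feel for how such topologies scale, the standard formulas for a k-ary fat tree built from identical k-port switches can be sketched as follows (the port counts are examples):

```python
# Size of a k-ary fat tree built from identical k-port switches
# (classic fat-tree datacenter design; k must be even).

def fat_tree(k):
    assert k % 2 == 0
    edge = agg = k * (k // 2)     # k pods, k/2 switches per layer per pod
    core = (k // 2) ** 2
    hosts = (k ** 3) // 4         # k/2 hosts per edge switch
    return {"hosts": hosts, "edge": edge, "agg": agg, "core": core}

print(fat_tree(4))    # 16 hosts from tiny 4-port switches
print(fat_tree(48))   # 27648 hosts from commodity 48-port switches
```

The cubic growth in hosts per switch port count is what makes fat trees attractive for building large networks out of cheap, identical switches.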

The objective of this seminar topic is to (1) motivate the use of scalable network topologies, (2) present and compare different topologies, and (3) describe the topology of a modern data center. The description of a modern data center topology should include the reasons for choosing this topology and possible drawbacks.


Secure and privacy preserving networks

5) Tor - The Onion Router

Supervisor: Lukas Epple

Tor is a popular anonymity network that allows users to browse the Internet without revealing their true identity, location, or activity. By routing Internet traffic through a series of encrypted nodes or relays, the source of the traffic is obscured and user privacy is maintained.
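
The layering idea can be illustrated in a few lines. The XOR "cipher" below is only a stand-in for real cryptography (Tor negotiates proper symmetric keys with each relay); the point is that each relay peels exactly one layer and learns only its neighbors:

```python
import hashlib

# Illustration of onion routing's layered encryption. The XOR keystream
# "cipher" is NOT secure; it only demonstrates the peel-one-layer idea.

def xor_cipher(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))  # XOR is its own inverse

def wrap(message: bytes, relay_keys):
    # apply the exit relay's layer first, so the guard's layer is outermost
    for key in reversed(relay_keys):
        message = xor_cipher(key, message)
    return message

keys = [b"guard", b"middle", b"exit"]   # one shared key per relay
onion = wrap(b"GET / HTTP/1.1", keys)
for key in keys:                        # each relay peels one layer
    onion = xor_cipher(key, onion)
print(onion)  # plaintext emerges only after the last relay
```

Because each relay sees only a still-encrypted payload (except the exit), no single relay can link the sender to the destination.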

The objective of this seminar topic is to provide a comprehensive overview of Tor and its technical underpinnings. This will be done by exploring how Tor works internally and how it achieves privacy. Additionally, it should review the potential risks and limitations of Tor, as well as its impact on Internet privacy and security. 


6) I2P - The Invisible Internet Project

Supervisor: Lukas Epple

I2P offers a self-contained network that ensures secure and anonymous communication between users. Unlike other anonymity networks, I2P focuses on providing a private communication layer within its own boundaries, rather than enabling access to the wider Internet.

The objective of this seminar topic is to provide an insightful overview of I2P, highlighting its pivotal role in the realm of anonymous online communication. To this end, we will navigate through the key points of the I2P architecture, explore its distinctive use of garlic routing to encrypt and bundle messages, examine the various applications intrinsic to the I2P ecosystem, and analyze the network's limitations and vulnerabilities.


Time-sensitive networks

7) Deterministic Real-Time Communication With Time-Sensitive Networking

Supervisor: Lucas Haug

Real-time communication with deterministic bounds on network delay and jitter is critical for the efficient and safe operation of networked real-time systems. As distributed Cyber-Physical Systems (CPSs) with networked sensors, actuators, and controllers become increasingly popular in various domains such as the Industrial Internet of Things (IIoT) and autonomous vehicles, the demand for deterministic real-time communication with a high reliability and bounded network delay and delay variance (jitter) has grown.

Major standardization organizations like the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF) have acknowledged the necessity for deterministic networks, leading to a set of standards under the term Time-Sensitive Networking (TSN), which extend standard wired IEEE 802.3 networks (Ethernet) with real-time communication mechanisms.

The objective of this seminar topic is to give an overview of the aforementioned TSN standards and describe how they enable deterministic real-time communication.


8) Time Synchronization in the Internet and in Local Networks – NTP and PTP

Supervisor: Lucas Haug

Time synchronization is a critical component in modern networks, e.g. for network monitoring or deterministic real-time communication. The Network Time Protocol (NTP) and the Precision Time Protocol (PTP) are among the most widely used time synchronization protocols and serve different purposes.

NTP uses a hierarchical system of time sources to distribute time information over a variable-latency network, primarily the Internet. It is generally intended for wide area networks with relatively low time precision requirements, typically within tens of milliseconds. PTP, on the other hand, is designed for local networks and aims for sub-microsecond time precision. It uses the master-slave architecture for clock synchronization and employs hardware timestamping techniques to minimize latency and jitter. Understanding the underlying mechanisms and trade-offs between NTP and PTP is essential for selecting the appropriate time synchronization protocol for a given application or network scenario.
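
NTP's core arithmetic fits in two lines: from the four timestamps of one request/response exchange, the client estimates the clock offset and the round-trip delay. A sketch with synthetic timestamps (server clock 0.5 s ahead, 0.1 s one-way delay; symmetric network paths are assumed, as in NTP itself):

```python
# NTP clock offset and round-trip delay from one request/response
# exchange (t1: client send, t2: server receive, t3: server send,
# t4: client receive). Standard NTP formulas.

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated server minus client clock
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Synthetic example: server clock 0.5 s ahead, 0.1 s one-way delay,
# no server processing time.
offset, delay = ntp_offset_delay(0.0, 0.6, 0.6, 0.2)
print(offset, delay)  # ~0.5, ~0.2
```

The asymmetry assumption is exactly where PTP improves on NTP: hardware timestamping removes most of the variable, asymmetric delay inside hosts and switches.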

The objective of this seminar topic is to describe the mechanisms of the two time synchronization protocols NTP and PTP, give an overview of their respective advantages and disadvantages, and compare their use cases.


9) Planning Time-Triggered Traffic in IEEE 802.1Qbv Time-Sensitive Networks

Supervisor: Heiko Geppert

Time-Sensitive Networking (TSN) is an evolution in Ethernet-based communication systems enabling real-time communication. At the forefront of this innovation is IEEE 802.1Qbv, which introduces a comprehensive set of features that ensure predictable and reliable network performance.

One of the outstanding features of IEEE 802.1Qbv is its ability to schedule time-triggered traffic. With its Time-Aware Shaper, time slots can be allocated specifically for critical data streams to ensure that essential information is transmitted with minimal latency and jitter, enabling seamless synchronization and coordination between devices and systems.
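
The Time-Aware Shaper can be pictured as a cyclic gate control list (GCL) that opens and closes the gates of the eight egress queues. A minimal sketch with an invented two-entry GCL:

```python
# Sketch of an IEEE 802.1Qbv Time-Aware Shaper: a cyclic gate control
# list (GCL) decides which egress queues may transmit at each instant.
# The entries and durations below are made up for illustration.

GCL = [  # (duration in microseconds, bitmask of open queue gates)
    (40, 0b10000000),   # only queue 7 (time-triggered traffic) open
    (60, 0b01111111),   # best-effort queues 0-6 open
]
CYCLE = sum(d for d, _ in GCL)

def open_queues(t_us):
    t = t_us % CYCLE                     # the GCL repeats every cycle
    for duration, mask in GCL:
        if t < duration:
            return [q for q in range(8) if mask >> q & 1]
        t -= duration

print(open_queues(10))   # [7]
print(open_queues(50))   # [0, 1, 2, 3, 4, 5, 6]
```

Reserving an exclusive window for queue 7 is what gives time-triggered streams their deterministic latency: no best-effort frame can occupy the link when their slot begins.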

The objective of this seminar topic is to provide an overview of the challenges in real-time networks for time-triggered traffic planning. Further, it should be explored how these challenges are addressed by the IEEE 802.1Qbv standard.


IoT applications and technology

10) Proactive Adaptation for Self-Adaptive Systems

Supervisor: Michael Matthé

Traditionally, implementations of self-adaptive systems use reactive decision making: the system monitors its environment and makes adaptations based on observed changes. Proactive adaptation provides an alternative approach to making adaptation decisions. The system makes predictions about its future context and thus has the potential to perform adaptations in a more timely manner. A challenge of this approach is dealing with the uncertainty of accurately predicting the future state of the environment.

The objective of this seminar topic is to compare proactive approaches of self-adaptive systems and to evaluate their performance and handling of uncertainty both in comparison to each other and with respect to reactive approaches.


11) Multi-Armed Bandit Learning in Edge Computing Approaches

Supervisor: Michael Matthé

In edge computing, computing resources are not only provided by a central cloud server, but also by many smaller servers located closer to the user at the edge of the network. One goal is to improve the latency of interactive applications by distributing data and allocating resources closer to the user. Multi-armed bandit learning is a type of online decision problem where the goal is to repeatedly choose the arm with the highest expected reward. In the case of edge computing, an arm can be a particular server or a data distribution configuration. The goal of using multi-armed bandits in combination with edge computing is to optimize a system performance metric such as user satisfaction.
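
A minimal epsilon-greedy bandit choosing among three hypothetical edge servers might look as follows (the per-server success probabilities are invented, and published approaches often use UCB or contextual bandits instead):

```python
import random

# Epsilon-greedy bandit: each "arm" is a hypothetical edge server, and
# the reward is whether a request succeeded within its latency budget.
# The success probabilities are invented for illustration.

random.seed(0)
means = [0.3, 0.7, 0.5]        # true success rates, unknown to the learner
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]       # running mean reward per arm
EPS = 0.1

for _ in range(2000):
    if random.random() < EPS:                       # explore a random arm
        arm = random.randrange(3)
    else:                                           # exploit best estimate
        arm = max(range(3), key=lambda a: values[a])
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values.index(max(values)))  # the learner settles on server 1
```

The epsilon parameter trades exploration against exploitation; in edge settings this trade-off matters because every exploratory request may cost a user-visible latency penalty.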

The objective of this seminar topic is to explore approaches that use multi-armed bandits for edge computing use cases and to describe the modeling of the approaches in detail.


12) Making Energy Grids Smart – A Network Perspective

Supervisor: Heiko Geppert

The integration of advanced digital technologies and real-time data analytics into power grids makes them "smart grids". A key feature is two-way communication between utilities and consumers, enabling efficient energy distribution and management. They also incorporate renewable energy sources, energy storage, and automated control to optimize energy use, reduce waste, and improve overall reliability.

Communication in smart grids is not trivial: real-time requirements must be met for a large number of network participants, which can be distributed over areas the size of continents. In addition, different types of actors may have conflicting interests; e.g., quality aspects must be balanced against privacy concerns.

The objective of this seminar topic is to provide an overview of the energy grid infrastructure, present the transformation to smart grids, and explore the communication network challenges that arise in smart grids.


13) Blockchain Technology for Food Supply Chain Tracking

Supervisor: Melanie Heck

The Food and Agriculture Organization of the United Nations estimates that 13.3 % of agricultural products are lost during transport, storage, wholesale, and processing. Tracking and monitoring the handling conditions of food along the supply chain is therefore essential to ensure that the products that end up in supermarkets and, ultimately, households are indeed safe for consumption. A violation of requirements on cooling, hygiene, and pressure on the surface of food can have serious implications for food safety. Sensors and other IoT technologies can improve the monitoring of the conditions during transport. With many independent parties involved in the supply chain, this requires reliable data exchange. Distributed data management solutions provide robust data access, where a central database would be a single point of failure and potential performance bottleneck. Blockchain is a promising contender for food tracking systems, as it provides full traceability of the origin of food.
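
The tamper-evidence idea at the core of such systems can be sketched as a minimal hash-chained ledger (the shipment readings are invented; real systems add consensus, signatures, and replication):

```python
import hashlib
import json

# Minimal hash-chained ledger of hypothetical shipment sensor readings:
# each block commits to its predecessor, so altering an old temperature
# reading breaks every later hash. Only a sketch of tamper evidence.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, reading):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "reading": reading})

def valid(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"shipment": "A17", "temp_c": 4.1})
append(chain, {"shipment": "A17", "temp_c": 4.3})
print(valid(chain))                      # True
chain[0]["reading"]["temp_c"] = 9.9      # attempt to hide a cooling failure
print(valid(chain))                      # False: the tampering is evident
```

This is why blockchains suit multi-party supply chains: no single participant can quietly rewrite the recorded handling conditions.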

The objective of this seminar topic is to describe and compare food monitoring systems that use blockchain technology to track conditions along the supply chain that may affect food safety.


14) Improving Global Health With Geographic Information Systems

Supervisor: Melanie Heck

Geographic Information Systems (GIS) are widely used in disease management and public health policy-making to analyze the relationship between health and location. Even before COVID-19, GIS had been used for epidemiological surveillance and modelling, but the pandemic brought the technology into the spotlight for early detection and control of infectious diseases. The idea is to collect information such as outbreak source and spreading dynamics for spatial modelling and forecasting in order to help control the outbreak and perform risk assessments. In addition, geographic data on environmental measures such as particulate matter or land surface temperature provide information on how ecological degradation affects global health. In public health policy, GIS can reveal local hotspots and spatial inequality of access to healthcare, making it an effective tool for determining the optimum distribution of health resources.

The objective of this seminar topic is to review GIS in the health domain with a particular reference to the technology for monitoring health-related metrics, the integration and analysis of the collected data, and the various applications for which the derived insights are used.

Advanced Seminar (Hauptseminar): Trends in Distributed and Context-Aware Systems

Distributed systems are a cornerstone of many services today. Distribution provides the scalability of cloud services, which are implemented atop a massive number of servers. For instance, Google’s data centers host an estimated 2.5 million servers! At the same time, replicating functions and data ensures reliability. This applies not only to cloud services, but also to peer-to-peer networks as used, for instance, by the Bitcoin network, and to mobile systems such as vehicular networks or networks of unmanned aerial vehicles. As in the example in Figure 1, such mobile systems are inherently geographically distributed and are supported by edge cloud services located close to the mobile devices to reduce network latency. Last but not least, the Internet is evolving into an Internet of Things (IoT), where virtually everything can communicate through the Internet.
Such distributed systems come with many challenges, as pointed out by Urs Hölzle (Senior Vice President for Technical Infrastructure at Google): “At scale, everything breaks ... Keeping things simple and yet scalable is actually the biggest challenge. It's really, really hard.” Other challenges include consistency of replicated services, privacy, and protection against attacks if untrusted devices are involved.

Adaptation is one of the key mechanisms that enable distributed systems to cope with the demands of increasingly dynamic environments. Figure 2 shows an example of a system that monitors the user’s context and adapts its layout and functions to provide a more efficient interaction.

In this seminar, we take a deep dive into specific distributed and context-aware systems concepts that tackle the above challenges.



The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.

Flexible, scalable, and efficient networks

1) Deterministic Latency Guarantees for Event-Based Network Traffic

Supervisor: Robin Laidig

With the Internet of Things and Industry 4.0, an increasing number of devices are connected via computer networks. These devices vary greatly in functionality and importance, so Quality of Service (QoS) models are essential to separate and prioritize their network traffic. Of particular importance is real-time network traffic, which must arrive at its destination before a certain deadline; otherwise, catastrophic failures can occur. To guarantee delivery before a strict deadline, recent research has focused on scheduling algorithms that provide deterministic latency bounds. However, many of these algorithms are designed to work with periodic, time-triggered network traffic that can be precisely predicted. In reality, there is often event-based (sporadic) network traffic that can emerge randomly.

The goal of this seminar paper is to review existing computer science research that proposes network scheduling algorithms for event-based network traffic. Further, the seminar paper shall point out future research directions.


2) Software-Defined Networks: OpenFlow and P4

Supervisor: Lucas Haug

Modern computing environments, including server virtualization, cloud computing, and the rapid scaling of networked services, have exposed the limitations of traditional network infrastructures. Their dynamic nature often requires networks to be programmatically and dynamically configurable. By decoupling the control plane from the data plane, Software-defined Networking (SDN) aims to fulfill these demands.

OpenFlow, one of the first enablers of SDN, has redefined how network devices interact with control logic by providing a standardized interface for manipulating so-called flow tables. These flow tables contain entries that specify match criteria based on packet headers and describe the actions to be taken when a match occurs. However, OpenFlow has some limitations, particularly in the areas of pipeline flexibility and protocol extensibility. These limitations have led to the emergence of P4, a high-level programming language designed explicitly for network data layers. P4 extends the capabilities of SDN by providing a greater degree of programmability, enabling custom packet processing pipelines, and thus allowing for a broader array of supported protocols and more nuanced performance optimizations.
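
Flow-table matching can be sketched as follows, with simplified field names and actions (real OpenFlow additionally supports wildcards with masks, multiple tables, and a rich set of actions):

```python
# Sketch of OpenFlow-style flow-table matching: entries pair match
# criteria with an action, the highest-priority match wins, and a
# table-miss entry sends unmatched packets to the controller.
# Field names and actions are simplified for illustration.

FLOW_TABLE = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.2", "tcp_dst": 80},
     "action": "output:2"},
    {"priority": 100, "match": {"ip_dst": "10.0.0.2"},
     "action": "output:3"},
    {"priority": 0, "match": {}, "action": "controller"},  # table miss
]

def lookup(packet):
    # check entries from highest to lowest priority
    for entry in sorted(FLOW_TABLE, key=lambda e: -e["priority"]):
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]

print(lookup({"ip_dst": "10.0.0.2", "tcp_dst": 80}))   # output:2
print(lookup({"ip_dst": "10.0.0.2", "tcp_dst": 22}))   # output:3
print(lookup({"ip_dst": "10.0.0.9"}))                  # controller
```

P4 generalizes this picture: instead of a fixed set of header fields and actions, the programmer defines the parser, the match fields, and the actions themselves.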

The objective of this seminar topic is to provide a detailed description of both OpenFlow and P4 and to compare their respective advantages and disadvantages.


3) Scheduling Algorithms for Time-Aware Shaping in Time-Sensitive Networking

Supervisor: Lucas Haug

With the rise of Cyber-Physical Systems (CPS) in various domains such as the Industrial Internet of Things (IIoT) and autonomous vehicles, the demand for deterministic real-time communication with high reliability, bounded network delay, and delay variance (jitter) has grown. In CPS with networked sensors and actuators, these deterministic real-time bounds are often critical to provide safety guarantees. Standardization organizations like the IEEE have acknowledged the need for deterministic networks, resulting in a set of standards known as Time-Sensitive Networking (TSN) that enable deterministic communication in wired Ethernet networks.

One critical part of the TSN standards is the IEEE 802.1Qbv amendment for scheduled traffic. This amendment specifies how TSN-capable switches enable scheduled traffic using multiple priority levels and a Time Division Multiple Access (TDMA)-based gating mechanism. However, it does not define how to calculate TDMA schedules. As scheduling is an NP-hard problem, calculating optimal schedules is a time-consuming task. Various scheduling algorithms with different optimization objectives have been developed.
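
To illustrate the problem shape, here is a deliberately simple greedy heuristic that places invented periodic streams into non-overlapping transmission slots on a single link over the hyperperiod; published schedulers typically use SMT/ILP solvers or metaheuristics and handle multi-hop networks and many more constraints:

```python
from math import lcm

# Greedy single-link TDMA placement for periodic streams: each stream
# gets the earliest offset whose transmissions collide with nothing
# already placed over the hyperperiod. The streams are invented, and
# this heuristic makes no optimality claim.

streams = [  # (name, period_us, transmission_time_us)
    ("control", 100, 10),
    ("sensor", 200, 20),
    ("video", 200, 40),
]

hyper = lcm(*(p for _, p, _ in streams))  # schedule repeats every hyperperiod
busy = set()                              # occupied microseconds

def place(period, length):
    for offset in range(period - length + 1):
        slots = {t for start in range(offset, hyper, period)
                 for t in range(start, start + length)}
        if not slots & busy:              # no collision with placed streams
            busy.update(slots)
            return offset
    return None                           # infeasible with this greedy order

schedule = {name: place(p, l) for name, p, l in streams}
print(schedule)  # {'control': 0, 'sensor': 10, 'video': 30}
```

Even in this toy form the combinatorial nature is visible: the order in which streams are placed can decide whether a feasible schedule is found at all.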

The objective of this seminar topic is to provide an overview of different scheduling algorithms for TSN. This includes a detailed description of the scheduling algorithms, their optimization goals, and a comparison of their advantages and disadvantages.


4) Scaling Real-Time Multicasts in IEEE 802.1Qbv Systems

Supervisor: Heiko Geppert

Real-time communication with guarantees on delay and jitter has become an essential requirement for implementing time-sensitive networked systems in application domains such as manufacturing (Industrial Internet of Things), automotive, or any kind of cyber-physical system, where physical processes are controlled via networked sensors, actuators, and controllers. The TSN standard defined by the IEEE extends Ethernet networks to be able to provide real-time guarantees.

Multicast is a network communication technique that enables efficient transmission of data from one sender to multiple receivers simultaneously. Unlike unicast, where data is sent separately to each receiver, multicast delivers a single copy of data to a group of interested receivers, optimizing bandwidth usage and reducing network congestion if properly implemented at Layer 2. Both bandwidth and congestion optimization can be especially valuable in real-time networks.

The objective of this seminar topic is to provide an overview of the challenges of using multicast in real-time networks and show how these challenges are overcome in the TSN standards. Special emphasis should be placed on the time-triggered traffic planning introduced in IEEE 802.1Qbv.


Learning in distributed (edge) systems

5) Online Learning in Self-Adaptive Systems

Supervisor: Michael Matthé

Self-adaptive systems use adaptation control to make configuration decisions based on their current environmental context. Their adaptation logic consists of monitoring, analysis, planning, and execution components. Online learning techniques, including various machine learning methods, can be used in the adaptation logic to learn configuration decisions. One goal is to continuously optimize the decision making of the adaptation logic to improve system performance.

The objective of this seminar topic is to look at specific approaches of self-adaptive systems that use online learning as part of their adaptation logic, and to outline their advantages and disadvantages compared to an offline approach.


6) Self-Adaptive Decision-Making Using Markov Decision Processes

Supervisor: Michael Matthé

Markov decision processes are stochastic sequential decision processes. They can be used to model the adaptation logic of self-adaptive systems by representing configurations as states and adaptation decisions as transitions to other states. Finding the optimal policy, i.e., the mapping from states to adaptation decisions that maximizes the expected cumulative reward, optimizes system performance. This task becomes more difficult when the transitions are stochastic and thus non-deterministic.
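
A toy example: value iteration on an invented two-state self-adaptation MDP, where paying a one-time adaptation cost leads to a state with a higher recurring reward:

```python
# Value iteration on an invented two-state self-adaptation MDP. In the
# "degraded" state the system can pay a one-time adaptation cost (-1)
# to reach the "optimal" state, which yields reward 2 per step.

GAMMA = 0.9
# (state, action) -> (reward, next_state); deterministic for brevity
MDP = {
    ("degraded", "stay"):  (0.0, "degraded"),
    ("degraded", "adapt"): (-1.0, "optimal"),
    ("optimal", "stay"):   (2.0, "optimal"),
    ("optimal", "adapt"):  (1.0, "optimal"),
}
states = ["degraded", "optimal"]
actions = ["stay", "adapt"]

V = {s: 0.0 for s in states}
for _ in range(500):  # the approximation error shrinks by GAMMA per sweep
    V = {s: max(MDP[s, a][0] + GAMMA * V[MDP[s, a][1]] for a in actions)
         for s in states}

policy = {s: max(actions,
                 key=lambda a: MDP[s, a][0] + GAMMA * V[MDP[s, a][1]])
          for s in states}
print(policy, V)  # adapting pays off: V(optimal) = 20, V(degraded) = 17
```

With stochastic transitions the max simply runs over expected values across successor states, which is exactly where the difficulty mentioned above enters.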

The objective of this seminar topic is to look in detail at approaches that use Markov decision processes to implement the adaptation logic of self-adaptive systems.


7) Edge Intelligence

Supervisor: Michael Schramm

With the evolution of mobile communication technologies, edge computing has attracted increasing interest from researchers and engineers around the world. By communicating with nearby edge nodes instead of the cloud, it can help speed up content delivery and reduce network load.

Another hot topic in computer science is new artificial intelligence (AI) applications, made possible by breakthroughs in deep learning and improvements in hardware architectures. However, as billions of data bytes are generated at the network edge, they cannot all be sent to a central cloud server where the AI infrastructure is hosted. This leads to a strong demand for bringing AI to the edge to reduce the communication overhead, improve privacy, and reduce latency.

The objective of this seminar topic is to review existing scientific publications on Edge Intelligence. It should describe different types of Edge Intelligence in terms of their improvements over cloud solutions. What are the current problems that the research community is working on? Furthermore, applications of Edge Intelligence should be discussed.


8) Federated Learning for the Edge

Supervisor: Michael Schramm

Federated Learning is a machine learning setting where many clients train a model together without making the data itself accessible to collaborators. This enables institutions with smaller datasets to gain insights they could not get from their own data alone. In addition, Federated Learning can help in privacy-sensitive areas such as medical research.
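
The aggregation step of the basic FedAvg algorithm is plain weighted averaging of client parameters by local dataset size; the clients and numbers below are invented, and flat lists stand in for real model parameters:

```python
# FedAvg server-side aggregation: combine client model updates weighted
# by local dataset size, without ever seeing the raw training data.

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Hypothetical round: a small clinic (10 samples) and a large hospital
# (30 samples) each trained locally and report their model weights.
print(fed_avg([[1.0, 1.0], [3.0, 3.0]], [10, 30]))  # [2.5, 2.5]
```

In edge deployments the interesting questions sit around this one line: which clients participate in a round, how stragglers and unreliable links are handled, and where aggregation itself should run.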

At the same time, edge computing has attracted the interest of researchers and engineers to accelerate content delivery and reduce network load by communicating with nearby edge nodes instead of the cloud. However, it introduces new issues such as task scheduling, data replication, and data placement.

The objective of this seminar topic is to review scientific publications on Federated Learning for the edge. What are the challenges in edge networks? What Federated Learning approaches exist to tackle these challenges?


Blockchain technology and cryptocurrency

9) Utreexo 

Supervisor: Lukas Epple

Utreexo offers a pathway to minimize Bitcoin’s Unspent Transaction Output (UTXO) set storage through the use of cryptographic accumulators. This enables the operation of lightweight, fully validating nodes while maintaining network security and integrity, using only a few kilobytes per node.
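
The building block behind such accumulators is a Merkle tree: a single root commits to a whole set, and membership can be proven with a logarithmic number of hashes. A plain Merkle sketch (not Utreexo's forest of perfect trees; a power-of-two leaf count is assumed):

```python
import hashlib

# Merkle tree with inclusion proofs: the root commits to the whole set,
# and a proof of log(n) sibling hashes shows a leaf belongs to it.
# Assumes a power-of-two number of leaves for brevity.

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    proof, level = [], [H(l) for l in leaves]
    while len(level) > 1:
        sibling = index ^ 1                       # the neighbor at this level
        proof.append((level[sibling], sibling < index))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = H(leaf)
    for sibling, is_left in proof:
        node = H(sibling + node) if is_left else H(node + sibling)
    return node == root

utxos = [b"utxo-a", b"utxo-b", b"utxo-c", b"utxo-d"]
root = merkle_root(utxos)
print(verify(root, b"utxo-c", prove(utxos, 2)))  # True
```

A node storing only the root (a few dozen bytes) can still validate spends, as long as spenders supply such proofs, which is the storage reduction Utreexo builds on.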

The objective of this seminar topic is to provide a succinct overview of Utreexo and its transformative potential by examining its impact on node management, decentralization, and the future prospects of blockchain technology. To this end, it should explore the internal workings of Utreexo and its potential to improve scalability and participation in the Bitcoin network.


10) Monero

Supervisor: Lukas Epple

Monero stands out in the cryptocurrency landscape for its emphasis on user privacy and anonymity. Through the use of advanced cryptographic techniques such as ring signatures, stealth addresses, Confidential Transactions (CT), and Dandelion++, Monero obscures the sender, receiver, and transaction amount, ensuring that all transfers are confidential and untraceable.

The objective of this seminar topic is to dissect the technological mechanisms that underpin Monero, specifically Dandelion++, providing insight into how it achieves a secure and private transaction environment.


Applications of distributed systems

11) Potential Real-Time Network Applications for Modern Smart Grids

Supervisor: Heiko Geppert

The integration of advanced digital technologies and real-time data analytics into power grids makes them "smart grids". A key feature is two-way communication between utilities and consumers, enabling efficient energy distribution and management. They also incorporate renewable energy sources, energy storage, and automated control to optimize energy use, reduce waste, and improve overall reliability.

From a network perspective, there are many different real-time applications for monitoring and managing a smart grid with very different characteristics such as latency requirements and traffic patterns.

The objective of this seminar topic is to provide an overview of the structure of power grids and their main components. In addition, the traffic patterns and real-time characteristics of different potential real-time monitoring and management applications running on a smart grid are to be compared.


12) Meal Planning With Food Ontologies

Supervisor: Melanie Heck

Food and fitness tracking applications like MyFitnessPal have created large databases that allow users to manage their daily food intake based on their physical activity and nutritional requirements. However, knowing the micro- and macronutrients of individual ingredients is of limited use. For instance, it does not take into account that certain food pairings can increase the absorption of valuable nutrients (e.g., black pepper improves the bioavailability of the anti-inflammatory turmeric by 2000 %), which is particularly important in the context of health conditions or diets that exclude entire food categories such as animal products. In addition, the way in which food is prepared can significantly alter its calorie content: While a stalk of raw celery counts as little as 6 kcal, this number increases to about 30 kcal when cooked. In order to assemble balanced meals with appropriate portion sizes and meet the users’ nutritional targets, it is therefore important that applications understand the types, properties, and interrelationships of ingredients.

A wealth of food-related information is available and connected to the Internet. However, it is largely siloed due to the lack of a lingua franca describing the journey of food from diverse agricultural origins along its supply chain. Ontologies define a hierarchical vocabulary with logical relationships, making it usable across applications. Food ontologies together with other relevant non-food ontologies (e.g., about diet-sensitive health conditions) can transform food lookup databases into more useful applications that assist users in making healthy food choices in line with their health, body type, lifestyle, and preferences.

The objective of this seminar topic is to review approaches that aim to either (a) create a universal food ontology, or (b) integrate siloed food information and existing food ontologies with the intention to create one unified database that links food with nutrition, health and other related characteristics.


13) Early Warning Systems: Life Savers or Just an Empty Promise?

Supervisor: Melanie Heck

Natural disasters like the recent floods in Libya or the earthquake in Morocco regularly cause the deaths of thousands of people, and with climate change, extreme climate events are expected to become more frequent. While disasters such as earthquakes are rather unpredictable by nature, Early Warning Systems (EWS) can alert the affected population of possible aftershocks or late effects like tsunamis and guide them in taking appropriate responsive measures. However, according to UNESCO, one third of the world’s population does not have access to any EWS and, where implemented, the technology is not always mature and/or effective. The employed monitoring techniques, integration and analysis of information, as well as dissemination technology play a major role in how effective the systems are in reducing the economic impact of natural disasters and, most importantly, preventing the loss of life.

The objective of this seminar topic is to compare EWS in different countries and to discuss which implementations are most promising in the light of the socio-cultural and technological context of a geographical region.


Melanie Heck

Dr. rer. pol.

