A Seminar for Bachelor students and an Advanced Seminar for Master students are offered each semester.
Both modules are organized in the style of a scientific conference. Each student works on an assigned topic from the distributed systems domain. After submitting a written report of the results of the literature research, each participant writes reviews for other seminar papers, and presents their work in the "conference session" of the seminar.
Seminar: Modern Internet Technologies
Driven by the requirements of innovative applications and services, the past few years have produced new technologies for networked or distributed systems. In the Internet of Things, a large number of devices and everyday objects equipped with sensors and actuators are connected to the Internet and communicate through mostly wireless communication technologies (e.g., BLE, ZigBee, 6LoWPAN, LoRaWAN). In one of its sub-domains, the industrial Internet of Things (IIoT, Industry 4.0), machines, tools, transport equipment, etc. are networked.
Virtualisation (e.g., NFV) and "software-defined" systems (e.g., SDN) increase the flexibility and efficiency of distributed systems through dynamic adaptation and scaling.
Driven by the popularity of the Bitcoin system, distributed ledger technologies and related concepts such as smart contracts have been developed, which are not only the foundations of electronic currencies, but can also support any application in which a consensus between different parties must be reached and documented.
Another focus has been the reduction of latency in networked and distributed systems, e.g., by using nearby edge and fog computing resources in addition to the remote cloud, or by using optimised communication protocols to, for instance, rapidly connect client and server.
In addition to stationary networks, mobile (5G) communication technologies and systems have developed rapidly. For example, Covid-19 tracing applications use mobile devices to track contacts. This method, known as crowdsensing, can be used more generally to collect large amounts of geographically distributed sensor data.
This seminar will discuss a wide range of current technologies, protocols and standards that enable the above networked and distributed applications and services.
Organization: The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.
Formal prerequisites: Successful completion of at least 1 course at the department of Distributed Systems.
Time-Sensitive Networks
Topic 1: TSN Traffic Shaping Deep-dive
Supervisor: Heiko Geppert
Time-Sensitive Networks (TSN) refer to a set of networking standards developed by the IEEE to ensure reliable, low-latency, and deterministic data transmission across Ethernet-based networks. They are designed to handle real-time communication for critical applications in industrial automation, autonomous vehicles, and audio/video streaming. TSN achieves this by incorporating features such as time synchronization, traffic shaping, and reliability enhancements to prioritize and guarantee the delivery of time-critical data. It builds upon the IEEE 802.3 Ethernet technology, enabling it to meet the stringent safety and performance requirements of complex systems. With TSN, industries can converge both real-time and non-real-time data in a single network infrastructure, thus enhancing efficiency and reducing costs.
Traffic shaping within TSN plays a critical role in managing network resources by prioritizing time-sensitive data, reducing latency, and preventing congestion. The IEEE standards define several traffic shapers (Credit-Based Shaper, Time-Aware Shaper, and Asynchronous Traffic Shaper) to achieve different types of guarantees and reliability for real-time applications.
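As a first impression of how such a shaper behaves, the Credit-Based Shaper can be sketched as a simplified fluid model: credit recovers at the idleSlope while frames wait and drains at the (negative) sendSlope during transmission, and a frame may only start once credit is non-negative. The function and parameter names below are our own; the model omits interfering traffic and per-queue details of IEEE 802.1Qav.

```cpp
#include <vector>

// Toy Credit-Based Shaper model: returns the departure times of equally
// sized frames that arrive back to back at t = 0. Simplification of
// IEEE 802.1Qav; sendSlope is negative, idleSlope is positive.
std::vector<double> cbsDepartures(int frames, double frameTxTime,
                                  double idleSlope, double sendSlope) {
    std::vector<double> departures;
    double t = 0.0, credit = 0.0;
    for (int i = 0; i < frames; ++i) {
        if (credit < 0.0) {            // wait until credit recovers to zero
            t += -credit / idleSlope;  // credit gains idleSlope per time unit
            credit = 0.0;
        }
        t += frameTxTime;                   // transmit one frame
        credit += sendSlope * frameTxTime;  // credit drains while sending
        departures.push_back(t);
    }
    return departures;
}
```

With symmetric slopes, the shaper inserts one frame time of idle between back-to-back frames, which is exactly the bandwidth-limiting effect the standard intends.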
In this seminar topic, we will take a deep dive into these traffic shaping methods, highlighting their features and benefits. We will also explore their interaction with other TSN features such as per-stream filtering and policing or time synchronization. The goal is to identify dependencies and synergies that enable more complex and powerful network operations.
Topic 2: Fault Tolerance in the Generalized Precision Time Protocol (gPTP)
Supervisor: Lucas Haug
Accurate clock synchronization is essential in networks that require time-critical data transfer, where even minimal clock deviations can lead to severe operational errors. Given that the internal clocks of network devices inherently drift apart over time, clock synchronization is indispensable for reliable and error-free operation.
In wired Ethernet networks, protocols like the Generalized Precision Time Protocol (gPTP), defined in the IEEE 802.1AS standard, are widely implemented. gPTP employs a master-slave architecture and hardware timestamping to achieve synchronization with sub-microsecond accuracy. However, critical use cases also require fault tolerance if a device or link failure occurs. To this end, the gPTP standard defines the Best Master Clock Algorithm, which is able to dynamically select a new grand master (GM). Additionally, the new Hot Standby amendment (IEEE 802.1ASdm) allows a secondary GM to be predefined, enabling a seamless transition in the event of faults.
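One building block worth knowing up front is gPTP's peer-delay measurement: from four hardware timestamps of a request/response exchange, the mean link delay can be computed. The sketch below assumes a symmetric link and ignores clock rate differences; the function name is ours.

```cpp
// Mean link delay from a gPTP peer-delay exchange (IEEE 802.1AS):
// t1 = request sent, t4 = response received (initiator's clock);
// t2 = request received, t3 = response sent (responder's clock).
// The responder's turnaround time (t3 - t2) is subtracted out, so the
// responder's clock offset cancels; assumes symmetric propagation delay.
double meanLinkDelay(double t1, double t2, double t3, double t4) {
    return ((t4 - t1) - (t3 - t2)) / 2.0;
}
```

Because the turnaround interval is measured entirely on the responder's clock, a constant offset between the two clocks cancels out of the result.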
In this seminar topic, we will investigate the mechanisms through which gPTP achieves precise time synchronization. Additionally, one of the two fault tolerance methods described above shall be examined.
Modern World Wide Web
Topic 3: QUIC and HTTP/3: A Modern Transport Protocol for the Web
Supervisor: Lucas Haug
The Transmission Control Protocol (TCP) has been the dominant transport protocol of the Internet for decades, providing reliable, connection-oriented communication. However, the limitations of TCP, such as connection establishment delay, have motivated the development of alternative transport protocols. Google has introduced QUIC as a solution to these issues, integrating transport-layer capabilities directly with encryption to improve performance, security, and flexibility. Today, QUIC is widely adopted, including in major web browsers like Google Chrome.
HTTP/3 is the latest version of the Hypertext Transfer Protocol, built on top of QUIC instead of TCP. By leveraging the features of QUIC, HTTP/3 reduces connection latency, improves performance in high-loss networks, and eliminates head-of-line blocking at the transport layer. This marks a fundamental shift in web communication, enabling faster and more reliable data transmission.
In this seminar topic, we will analyze QUIC in the context of HTTP/3, covering its motivations, key concepts, and advantages over TCP.
Topic 4: Media Access Control in IEEE 802.11 Networks
Supervisor: Jona Herrmann
Wireless Local Area Networks (WLAN) based on the IEEE 802.11 standard are very important for mobile devices, as they are often the only way for such devices to access the network and, ultimately, the Internet. Typically, many devices are connected to the same WLAN and use it at the same time. Therefore, a Media Access Control (MAC) protocol is required to coordinate access to the shared medium.
In this seminar topic, we will explore why the Media Access Control of IEEE 802.3 networks cannot be used in IEEE 802.11 networks. Additionally, the seminar paper should explain the MAC that is used instead.
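As a pointer, the MAC used by 802.11 is CSMA/CA, which resolves contention with a random backoff drawn from a contention window that doubles after each failed attempt. A minimal sketch of that growth rule (using CWmin = 15 and CWmax = 1023 as in common 802.11 PHYs; the function name is ours):

```cpp
#include <algorithm>

// Binary exponential backoff in 802.11 DCF: after each failed transmission
// attempt the contention window grows from CWmin = 15 towards CWmax = 1023,
// following the sequence 15, 31, 63, ..., 1023.
int nextContentionWindow(int cw) {
    return std::min(2 * (cw + 1) - 1, 1023);
}
```

A station then waits a random number of slots in [0, CW] before retrying, which spreads out competing senders without the collision detection that wired 802.3 relies on.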
Topic 5: Utreexo
Supervisor: Lukas Epple
Utreexo offers a pathway to minimize Bitcoin’s Unspent Transaction Output (UTXO) set storage through the use of cryptographic accumulators. This enables the operation of lightweight, fully validating nodes using only a few kilobytes per node while maintaining network security and integrity.
In this seminar topic, we will gain an overview of Utreexo and its transformative potential by examining its impact on node management, decentralization, and the future prospects of blockchain technology. To this end, the seminar paper should explore the internal workings of Utreexo and its potential to improve scalability and participation in the Bitcoin network.
Topic 6: Tor - The Onion Router
Supervisor: Lukas Epple
Tor is a popular anonymity network that allows users to browse the Internet without revealing their true identity, location, or activity. By routing Internet traffic through a series of encrypted nodes or relays, the source of the traffic is obscured and user privacy is maintained.
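The layering idea behind onion routing can be illustrated with a toy example in which XOR with a per-relay key stands in for real layered encryption. This is a drastic simplification of Tor's actual circuit cryptography (which uses negotiated symmetric keys and authenticated encryption); all names below are ours.

```cpp
#include <string>
#include <vector>

// XOR "encryption" as a stand-in for one onion layer (illustration only).
std::string xorLayer(std::string data, char key) {
    for (char &c : data) c ^= key;
    return data;
}

// The client wraps the message once per relay, innermost (exit) layer first.
std::string buildOnion(const std::string &msg, const std::vector<char> &keys) {
    std::string onion = msg;
    for (auto it = keys.rbegin(); it != keys.rend(); ++it)
        onion = xorLayer(onion, *it);
    return onion;
}

// Each relay on the path removes exactly one layer, in path order, so no
// single relay sees both the sender and the plaintext destination.
std::string peelOnion(std::string onion, const std::vector<char> &keys) {
    for (char k : keys) onion = xorLayer(onion, k);
    return onion;
}
```

The key property being illustrated is that only after the last relay peels its layer does the original message emerge.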
In this seminar topic, we will gain a comprehensive overview of Tor and its technical underpinnings. This will be done by exploring how Tor works internally and how it achieves privacy. Additionally, the seminar paper should review the potential risks and limitations of Tor, as well as its impact on Internet privacy and security.
Topic 7: I2P - The Invisible Internet
Supervisor: Lukas Epple
I2P is a self-contained network that ensures secure and anonymous communication between users. Unlike other anonymity networks, I2P focuses on providing a private communication layer within its own boundaries, rather than enabling access to the wider Internet.
In this seminar topic, we will provide an overview of I2P, highlighting its pivotal role in the realm of anonymous online communication. To this end, we will navigate through the key points of the I2P architecture, explore its distinctive use of garlic routing to encrypt and bundle messages, examine the various applications intrinsic to the I2P ecosystem, and analyze the network's limitations and vulnerabilities.
Concurrent Programs
Topic 8: Concurrency Control
Supervisor: Simon König
Uncontrolled concurrent data access can potentially leave data in an inconsistent state. Therefore, concurrency control algorithms are a central element in distributed data storage systems. Specifically, distributed transaction processing relies heavily on transaction schedulers such as two-phase locking (2PL) or optimistic concurrency control (OCC). These algorithms are widely known and used in state-of-the-art data storage systems.
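The defining rule of 2PL, that a transaction may not acquire any lock after releasing its first one, can be sketched in a few lines. This is a minimal illustration (a strict variant that releases everything at commit); real schedulers add shared/exclusive lock modes, a central lock manager, and deadlock handling. All names are ours.

```cpp
#include <mutex>
#include <stdexcept>
#include <vector>

// Minimal strict two-phase-locking transaction wrapper: a growing phase in
// which locks are acquired, then a shrinking phase (commit) that releases
// them all. Acquiring after the shrinking phase has begun violates 2PL.
class Transaction {
    std::vector<std::mutex *> held_;
    bool shrinking_ = false;
public:
    void lock(std::mutex &m) {
        if (shrinking_)  // 2PL rule: no new lock after the first release
            throw std::logic_error("growing phase is over");
        m.lock();
        held_.push_back(&m);
    }
    void commit() {      // shrinking phase: release everything at once
        shrinking_ = true;
        for (auto *m : held_) m->unlock();
        held_.clear();
    }
};
```

Holding all locks until commit is what makes the resulting schedules serializable, at the price of reduced concurrency, which is exactly the trade-off the lesser-known variants below try to improve on.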
In this seminar topic, we will gain an overview of less-popular concurrency control algorithms. Examples are altruistic locking (AL), ordered sharing of locks (O2PL), tree locking (WTL, RWTL), backward-oriented optimistic concurrency control (BOCC), and many more. Through a comprehensive review of relevant literature and case studies, the seminar paper shall discuss the benefits and drawbacks of such lesser-known concurrency control algorithms compared to their more popular counterparts.
Topic 9: User-Space Mutex: Spinlocks and Atomic Synchronization
Supervisor: Simon König
This seminar topic focuses on the challenges of ensuring proper synchronization without relying on kernel-based synchronization. We will explore the concept of user-space mutex implementations, focusing particularly on spinlocks and their role in low-level synchronization within concurrent systems. We will analyze the mechanics of busy-wait locks, where threads continuously check the availability of a critical section, and the implications this has on processor utilization and cache coherence.
The discussion will center on spinlock implementations as synchronization primitives that avoid blocking and instead rely on active polling. Specifically, we will compare the test-and-set lock, the test-and-test-and-set lock, and MCS locks. The focus shall lie on the trade-offs in terms of efficiency under contention and multithreaded scalability.
Self-adaptation
Topic 10: Managing Scalability in Self-adaptive Systems
Supervisor: Melanie Heck
Software systems are becoming more and more complex, and their management is moving beyond human control. Self-adaptive systems therefore autonomously adjust their settings to the state of their environment based on continuously monitored data. However, the number of possible system states of a networked system – and consequently the number of options that must be assessed by the decision algorithm – grows exponentially with the number of features that define a system and its environment. Researchers have therefore developed multiple approaches and heuristics to reduce the number of considered options and keep computation times within reasonable limits.
In this seminar topic, we will compare how different research approaches address this so-called “state space explosion” problem to improve scalability in self-adaptive systems.
Topic 11: The Neglected Costs of Self-adaptation: How Expensive is the Adaptation Itself?
Supervisor: Melanie Heck
Self-adaptive systems monitor their environment and autonomously configure themselves to the observed context. For example, adaptive web hosting platforms can dynamically increase or reduce the number of active servers in order to provide high responsiveness while keeping operating costs low. While this can facilitate better operation if external or internal conditions change, the adaptation itself is often also associated with costs. Depending on the domain of the adaptive system, relevant costs may, for example, be the energy consumption, latency, or the number of resources that are used to reach a new system configuration. For instance, starting an additional server will allow a web application to meet an increased number of requests, but only after spending the time and energy to start the new server. Especially in highly dynamic environments where adaptations occur very frequently, this can lead to undesirable oscillatory behavior and high accumulated adaptation costs.
In this seminar topic, we will investigate which costs have been considered in SAS research and how they are accounted for in the decision making.
Networks & Computing Paradigms for the IoT
Topic 12: WebAssembly (Wasm) and Edge Computing
Supervisor: Michael Schramm
With the evolution of mobile communication technologies, edge computing theory and techniques have increasingly attracted the interest of researchers and engineers around the world. Edge computing can help to accelerate content delivery and reduce network load by communicating with nearby edge nodes instead of the cloud.
WebAssembly (Wasm) is a low-level, portable binary instruction format for high-performance code execution in web browsers and beyond. Originally designed to enhance web applications, Wasm is now increasingly used in server-side and edge computing environments due to its lightweight nature, security model, and cross-platform compatibility. With the rise of edge computing, Wasm offers an efficient way to execute compute-intensive tasks closer to the user, thus reducing latency and bandwidth consumption.
In this seminar topic, we will explore how WebAssembly enhances edge computing applications. This will be done by investigating Wasm's technical capabilities, its role in modern edge architectures, and real-world edge computing use cases in which Wasm is applied. Additionally, the seminar paper should evaluate the benefits and challenges of integrating Wasm into edge computing and discuss its potential for future distributed computing environments.
Topic 13: Content Delivery Networks (CDNs) and Edge Computing
Supervisor: Michael Schramm
Edge computing is a distributed computing paradigm that brings computation and data storage closer to the source of generated data. Unlike traditional cloud computing, which relies on centralized data centers, edge computing processes data on local devices or edge servers, thus reducing latency and improving responsiveness. This architecture is especially beneficial for applications that require low-latency communication such as video streaming, online gaming, or IoT services.
Traditionally, Content Delivery Networks (CDNs) operate on pre-cached content stored at edge locations, with limited real-time processing capabilities. However, through the integration of edge computing, modern CDNs can become more dynamic by processing and personalizing content on-the-fly before delivering it to the end users.
In this seminar topic, we will explore how edge computing enhances traditional CDNs to improve content delivery. First, the seminar paper should investigate which types of content benefit from using edge computing in CDNs. Additionally, the technical architecture of edge-enabled CDNs should be discussed.
Topic 14: CoAP: Constrained Application Protocol
Supervisor: Simon Egger
In resource-constrained environments, low energy consumption of the devices is of the utmost importance. At the same time, there is a strong need to have a unified protocol that enables the devices of different vendors to communicate with each other. Within this setting, CoAP provides a web protocol with low header/parsing overhead, asynchronous message exchange, and optional reliability support [RFC7252].
In this seminar topic, we will gain an overview of CoAP with an in-depth explanation of its most central design choices.
Topic 15: MQTT: Publish/Subscribe Communication for IoT
Supervisor: Simon Egger
Similar to CoAP, MQTT is specifically designed for resource-constrained devices that have only limited network bandwidth. However, MQTT has a special focus on asynchronous message exchanges by providing a publish/subscribe messaging protocol.
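A small, concrete piece of the publish/subscribe model is topic filtering: subscribers use '+' to match exactly one topic level and '#' to match all remaining levels, as defined in the MQTT specification. The sketch below implements that matching rule (simplified: it ignores the special treatment of topics starting with '$'; function names are ours).

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split an MQTT topic or filter into its '/'-separated levels.
std::vector<std::string> splitLevels(const std::string &s) {
    std::vector<std::string> out;
    std::stringstream ss(s);
    std::string level;
    while (std::getline(ss, level, '/')) out.push_back(level);
    return out;
}

// MQTT topic-filter matching: '+' matches one level, '#' (last level only)
// matches everything that remains.
bool topicMatches(const std::string &filter, const std::string &topic) {
    auto f = splitLevels(filter), t = splitLevels(topic);
    for (std::size_t i = 0; i < f.size(); ++i) {
        if (f[i] == "#") return true;        // multi-level wildcard
        if (i >= t.size()) return false;     // topic ran out of levels
        if (f[i] != "+" && f[i] != t[i]) return false;
    }
    return f.size() == t.size();             // no trailing topic levels left
}
```

A broker evaluates exactly this kind of predicate to decide which subscribers receive a published message.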
In this seminar topic, we will gain an overview of the MQTT protocol specification with a special focus on the guarantees that MQTT can provide in IoT (e.g., with respect to reliability and security mechanisms).
Advanced Seminar (Hauptseminar): Trends in Distributed and Context-Aware Systems
The Internet of Everything (IoE), where virtually everything can now communicate through the Internet, and the increasingly demanding performance requirements of new technologies (e.g., cryptocurrencies) have driven the emergence of new computing paradigms for distributed systems. Scalability is now offered not only by centralized cloud providers, but also by edge computing systems, where geographically distributed servers provide computational resources at the edge of the network and, therefore, close to the end devices. This can significantly reduce latency for time-critical applications like vehicular networks. The advances in edge computing have led to the emergence of edge AI, where powerful AI algorithms are deployed at the edge, without relying on a remote cloud.
But distributed systems come with many challenges, which require a profound understanding of core principles in distributed computing. As pointed out by former Google Senior Vice President Urs Hölzle: “At scale, everything breaks ... Keeping things simple and yet scalable is actually the biggest challenge. It's really, really hard.” This is especially true for dynamic and uncertain environments that we are facing, for instance, in smart buildings or smart energy systems. Self-adaptation is one of the key mechanisms for coping with increasingly large and dynamic systems, often by using machine learning techniques (GNNs, reinforcement learning). Challenges that come with distributed storage systems include consistency and scalability.
Another hot topic, especially in the context of 5G and the development of future 6G networks, is Time Sensitive Networking (TSN), which defines a set of standards to enable reliable, deterministic real-time communication in Ethernet networks. These standards target, among others, time synchronization and traffic shaping/scheduling approaches for both event-based and time-triggered traffic.
In this seminar, we take a deep dive into specific concepts of distributed and context-aware systems that tackle the above challenges. The topics will be published on the department’s website and are assigned according to a standardized procedure as explained during the kick-off.
Organization: The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.
Formal prerequisite: Successful completion of at least 1 Master-level course at the department of Distributed Systems.
TSN
Topic 1: TSN Traffic Shaping Deep-dive
Supervisor: Heiko Geppert
Time-Sensitive Networks (TSN) refer to a set of networking standards developed by the IEEE to ensure reliable, low-latency, and deterministic data transmission across Ethernet-based networks. They are designed to handle real-time communication for critical applications in industrial automation, autonomous vehicles, and audio/video streaming. TSN achieves this by incorporating features such as time synchronization, traffic shaping, and reliability enhancements to prioritize and guarantee the delivery of time-critical data. It builds upon the IEEE 802.3 Ethernet technology, enabling it to meet the stringent safety and performance requirements of complex systems. With TSN, industries can converge both real-time and non-real-time data in a single network infrastructure, thus enhancing efficiency and reducing costs.
Traffic shaping within TSN plays a critical role in managing network resources by prioritizing time-sensitive data, reducing latency, and preventing congestion. The IEEE standards define several traffic shapers (Credit-Based Shaper, Time-Aware Shaper, and Asynchronous Traffic Shaper) to achieve different types of guarantees and reliability for real-time applications.
In this seminar topic, we will take a deep dive into these traffic shaping methods, highlighting their features and benefits. We will also explore their interaction with other TSN features such as per-stream filtering and policing or time synchronization. The goal is to identify dependencies and synergies that enable more complex and powerful network operations.
Efficient and Reliable Networks
Topic 2: gPTP-based Time Synchronization in Converged 6G/TSN networks
Supervisor: Lucas Haug
Accurate clock synchronization is essential in networks that require time-critical data transfer, where even minimal clock deviations can lead to significant operational errors. Given that the internal clocks of network devices inherently drift apart over time, clock synchronization is indispensable for reliable and error-free operations.
In wired Ethernet networks, protocols like the Generalized Precision Time Protocol (gPTP), defined in the IEEE 802.1AS standard, are widely implemented. gPTP employs a master-slave architecture and hardware timestamping to achieve time synchronization with sub-microsecond accuracy. However, in emerging network domains – including integrated 6G/TSN networks – further problems, such as variable link delays, arise. Thus, time synchronization in these fields is still under active research.
In this seminar topic, we will investigate the mechanisms through which gPTP achieves precise time synchronization in traditional Ethernet networks and how they can be employed in integrated 6G/TSN networks.
Topic 3: Radio Resource Grid Allocation for 5G Network Slicing
Supervisor: Lucas Haug
Efficient allocation of radio resources is critical in 5G networks, where diverse services such as enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communication (mMTC) must coexist. Network slicing enables multiple logical networks to share the same physical infrastructure while maintaining performance isolation. However, realizing slicing at the Radio Access Network (RAN) level introduces challenges, particularly in the allocation and scheduling of radio resources within the resource grid. Ensuring that different slices operate efficiently while minimizing interference and maximizing spectral efficiency is a central research problem.
The resource grid in 5G is a structured framework that defines how the radio spectrum is divided into time-frequency units. In RAN slicing, each slice may require a different configuration in terms of bandwidth, latency, and reliability, necessitating dynamic partitioning of the resource grid. This involves defining slice-specific configurations at different protocol layers, such as adaptive scheduling at the Medium Access Control (MAC) layer and customized numerologies at the physical layer (PHY). Furthermore, ensuring isolation between slices while maintaining overall network efficiency requires sophisticated resource management strategies, including dynamic spectrum allocation and interference mitigation techniques.
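To make "customized numerologies" concrete: 3GPP NR defines a numerology index μ that scales the subcarrier spacing as 15 kHz · 2^μ, which shrinks the slot duration by the same factor, so a URLLC slice can be scheduled on much shorter intervals than an eMBB slice sharing the same grid. A small sketch of that relationship (helper names are ours):

```cpp
// 3GPP NR numerology: subcarrier spacing doubles with each step of mu,
// and the slot (14 OFDM symbols with normal cyclic prefix) shrinks
// accordingly, from 1 ms at mu = 0 down to 62.5 us at mu = 4.
int subcarrierSpacingKHz(int mu) { return 15 * (1 << mu); }
double slotDurationMs(int mu)    { return 1.0 / (1 << mu); }
```

Choosing a larger μ for a latency-critical slice is thus one of the physical-layer levers available to the slice scheduler.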
In this seminar topic, we will examine how the resource grid in 5G RAN slicing is structured and allocated across different slices with a possible focus on resource scheduling, isolation mechanisms, or strategies for efficient spectrum utilization.
Concurrent Programs
Topic 4: Working with C++20 Coroutines
Supervisor: Simon König
Coroutines, a language feature introduced in C++20, enable efficient asynchronous programming through suspension and resumption of functions.
In this seminar topic, we will explore how the C++ compiler translates coroutine functions into state machines that handle suspension points and resumptions. We will discuss how the compiler constructs and manages promise objects, suspend points, and awaiter types, and how these elements are tightly integrated into the coroutine’s state and execution flow.
The seminar paper shall also highlight how the compiler generates the necessary control flow for handling coroutine suspension and resumption without blocking other tasks. This includes the transformation of coroutine functions into a form where control is explicitly passed between the suspended and resumed states, as well as the compiler’s handling of memory management, object lifetimes, and exception propagation in such a framework. The objective of the seminar paper is to compare state-of-the-art implementations of coroutine frameworks.
Topic 5: Memory Models in the C++ Standard
Supervisor: Simon König
Controlling access to memory is crucial when building concurrent programs. Programmers often assume the operations of their programs to be executed sequentially and in-order. However, optimizations applied by the compiler, the CPU pipeline, and the memory management unit can cause the operations of a program to be executed out-of-order. In multi-threaded applications, this can change the semantics of the program, resulting in unexpected behavior. Therefore, synchronization is necessary.
Sequential consistency (SC) is a model that ensures that the result of a concurrent execution is the same as if the operations of all processors were executed in some sequential order. However, SC is very expensive to achieve. Therefore, many languages use the SC-DRF (data-race-free) model instead, where consistency is achieved by inter-thread synchronization through the use of memory barriers. Using memory barriers allows OS-level primitives (e.g., semaphores) or lock-free data structures to provide meaningful consistency guarantees.
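The barrier-based synchronization described above can be illustrated with the C++ memory model's release/acquire pair, which establishes a happens-before edge between two threads (a minimal sketch; names are ours):

```cpp
#include <atomic>
#include <thread>

// Release/acquire hand-off: the release store "publishes" every write that
// precedes it, and an acquire load that observes the store is guaranteed
// to see those writes. Without the ordering, reading `payload` would race.
int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                  // plain write
    ready.store(true, std::memory_order_release);  // publish
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // wait for publish
    return payload;   // guaranteed to read 42: no data race, no reordering
}
```

This is exactly the pattern that makes data-race-free programs behave sequentially consistently under the SC-DRF model, without paying for sequentially consistent ordering on every access.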
In this seminar topic, we will delve into the principles and mechanisms used to control and synchronize access to shared memory. Specifically, we will investigate the usage of memory barriers in modern memory models such as the C++ memory model. Through a comprehensive review of relevant literature and case studies, the seminar paper shall provide insight into the challenges associated with this topic.
Topic 6: Coroutines - A new form of Subroutines
Supervisor: Lukas Epple
Subroutines encapsulate some computation and help to break down a large program into smaller parts. Coroutines imitate this property, but the lifetime of a coroutine is not tied to the control flow of the program. When a subroutine returns to the calling program, its control information and local variables are destroyed. In contrast, when a coroutine returns control to its caller, its execution is not finished and thus its state is preserved. Each time control reenters the coroutine, it resumes execution where its local control left off and data state is retained. Hence, coroutines are a form of retentive control.
Coroutines are especially well suited for the implementation of asynchronous or event-driven applications. With the use of coroutines, data or IO stalls can be hidden, improving parallelism at runtime and reducing programming complexity at build-time. For this reason, coroutines are experiencing a resurgence. Many popular languages have recently introduced native support for coroutines. Moreover, coroutine-based implementations of highly parallel applications are becoming more and more popular.
In this seminar topic, we will examine the principles and mechanisms of coroutines. Through a comprehensive review of relevant literature and case studies, the seminar paper could investigate aspects including synchronization primitives for coroutines, state and context switch mechanics for coroutines, and mechanisms for runtime optimization.
Consensus in Distributed Systems
Topic 7: Failure Detectors for Solving the Consensus Problem
Supervisor: Simon Egger
Solving the consensus problem is one of the most prominent and fundamental primitives for any distributed computing system. Given a set of N processes with an initial value, a consensus protocol has to ensure that each "honest" process agrees on the same value, despite potential "malicious" processes in the system. Depending on the employed fault model, "malicious" processes can range from simple crash faults to Byzantine faults where processes can deviate from the protocol in an arbitrary fashion.
A well-known impossibility result by Fischer et al. [1] shows that consensus cannot be reached in an asynchronous system with even a single faulty process. This result spawned an entire research branch that is concerned with the question: "What types of failure detectors are needed to reach consensus in asynchronous systems?"
In this seminar topic, we will take a deep dive into the work of Chandra et al. [2], which provides insights on the information that has to be provided by even the "weakest" failure detector in such a setting.
- [1] Fischer, Michael J., Nancy A. Lynch, and Michael S. Paterson. "Impossibility of distributed consensus with one faulty process." Journal of the ACM (JACM) 32.2 (1985): 374-382.
- [2] Chandra, Tushar Deepak, Vassos Hadzilacos, and Sam Toueg. "The weakest failure detector for solving consensus." Journal of the ACM (JACM) 43.4 (1996): 685-722.
Topic 8: Easy impossibility proofs for distributed consensus problems
Supervisor: Simon Egger
This seminar topic is motivated by the same issue as the previous topic ("Failure Detectors for Solving the Consensus Problem").
In this seminar topic, however, we will explore different impossibility results in distributed systems (e.g., [1]) and identify the key underlying conditions on which the impossibility results depend. To this end, the seminar paper should investigate the paper by Fischer et al. in [2], which combines and simplifies previous known impossibility results.
- [1] Lynch, Nancy. "A hundred impossibility proofs for distributed computing." Proceedings of the eighth annual ACM Symposium on Principles of distributed computing. 1989.
- [2] Fischer, Michael J., Nancy A. Lynch, and Michael Merritt. "Easy impossibility proofs for distributed consensus problems." Distributed Computing 1 (1986): 26-39.
Smart Grids
Topic 9: Demand Response in Smart Grids
Supervisor: Melanie Heck
Renewable energy generation is an essential step towards achieving climate neutrality, but puts stress on the energy grid. Energy supply from renewable sources is not always predictable or constant and energy must be consumed when produced. Demand Response (DR) programs therefore aim to increase energy flexibility by incentivizing energy consumers to shift their power consumption towards periods of high energy availability.
In this seminar topic, we will compare the wide variety of existing DR schemes. Specifically, the incentives offered to consumers shall be reviewed as well as the applied concept of “flexibility” and the optimization algorithms that consumers can use to determine optimal power consumption schedules.
Topic 10: Energy Flexibility of Buildings
Supervisor: Melanie Heck
The activities and preferences of occupants impact a building’s power demand, particularly for heating, ventilation, and air conditioning (HVAC) as well as lighting, EV charging, and other electrical equipment. Predictions of relevant user activities and preferences can be used to shift the energy consumption of the building towards periods of high availability. For example, thermal inertia can be exploited to accumulate heat or cold in the structure of the building when energy is available and reduce consumption when power supply is scarce. In order to calculate optimal power consumption schedules, reliable predictions of user patterns as well as precise information about the impact of shifting the power consumption are needed.
In this seminar topic, we will investigate which user behaviors and preferences impact the energy flexibility potential for different applications and which technologies can be used to monitor them. Additionally, different load scheduling algorithms shall be reviewed.
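The thermal-inertia argument can be made concrete with a first-order RC model, a common simplification in the building-energy literature: the building stores heat in a single thermal capacitance and loses it through a single resistance to the outside. All parameter values below are hypothetical assumptions for illustration:

```python
# Illustrative first-order RC model of a building's thermal inertia.
# R: thermal resistance to outside [K/kW], C: thermal capacitance [kWh/K].
# All numbers are hypothetical assumptions, not measured building data.

def simulate(temps_out, heat_plan, T0=20.0, R=5.0, C=2.0, dt=1.0):
    """Simulate the indoor temperature under a given hourly heating plan."""
    T = T0
    trajectory = []
    for T_out, P in zip(temps_out, heat_plan):
        T += dt / C * (P - (T - T_out) / R)  # heat input minus losses
        trajectory.append(round(T, 2))
    return trajectory

# Pre-heat in hour 0 (energy available), then coast through scarce hours:
traj = simulate(temps_out=[5, 5, 5, 5], heat_plan=[6.0, 0.0, 0.0, 0.0])
print(traj)  # temperature rises first, then decays slowly toward outside
```

A load scheduler would use such a model to check that a pre-heating plan keeps the indoor temperature inside the occupants' comfort band — which is exactly where the predicted user preferences enter.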
Topic 11: Integration of Buildings into the Smart Grid
Supervisor: Sonja Klingert
The European Union’s “new Green Deal” strategy aims at a carbon-neutral Europe by 2045. To successfully achieve this target, the electricity grid requires a drastic increase in energy efficiency and renewable energy resources, while considering their integration with other energy carriers. The challenges for the electricity grid are particularly high, as it needs to be physically balanced at all times. The increased fraction of small, variable and less predictable decentralized generation on the one hand and the continuous electrification of demand on the other hand multiply this challenge and call for increased flexibility in the electricity system.
There is a wide range of consuming devices and entities on the demand side that can serve as assets for flexibility, batteries and EVs being among the most cited. Buildings can also be viewed as a collection of consuming devices whose flexibility contribution can be optimized through coordination of usage, charging processes, and injecting electricity into the grid. Current flexibility schemes for buildings, however, exhibit several important shortcomings. First, the number of flexible assets they address is still quite limited, due to the prevalence of legacy systems and low adoption of smart solutions (i.e., Building Energy Management Systems and IoT/smart devices). Second, although the set of core services that buildings provide to their occupants is rather well-defined and stationary, the technologies and technical equipment required to implement them are heterogeneous and vary dynamically over time.
In this seminar topic, we will gain an overview of current energy informatics-based models that deal with this new area of research. The focus shall be on the interaction between the building and the power grid and on structuring the models, e.g., by creating a taxonomy.
Enabling Technologies for IoT/Industry 4.0
Topic 12: Indoor Positioning in IEEE 802.11 Networks
Supervisor: Jona Herrmann
Indoor positioning is important for the precise navigation and tracking of people and objects in large indoor spaces. By enabling real-time tracking, it can improve the user experience and efficiency in complex environments such as shopping malls, airports, or warehouses. One approach to indoor positioning leverages wireless local area networks (WLAN) based on the IEEE 802.11 standard. Various techniques based on signal strength, round-trip time, or angle of arrival can be used.
In this seminar topic, we will compare the positioning techniques that can be used in WLAN.
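One of these techniques, signal-strength lateration, can be sketched compactly: the log-distance path-loss model turns a received signal strength (RSSI) into a distance estimate per access point, and the circle equations are then linearized into a small least-squares system. The path-loss parameters and AP layout below are hypothetical assumptions:

```python
# A minimal sketch of RSSI-based indoor positioning (lateration).
# Path-loss parameters and access-point positions are hypothetical.

def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Log-distance path-loss model: estimate distance [m] from RSSI [dBm].
    p0: RSSI at 1 m reference distance, n: path-loss exponent."""
    return 10 ** ((p0 - rssi) / (10 * n))

def trilaterate(aps, dists):
    """Position from three APs: subtracting the circle equations
    (x-xi)^2 + (y-yi)^2 = di^2 pairwise yields a linear 2x2 system."""
    (x1, y1), (x2, y2), (x3, y3) = aps
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

aps = [(0, 0), (10, 0), (0, 10)]            # known AP positions [m]
dists = [rssi_to_distance(-57)] * 3         # equal RSSI at all three APs
print(trilaterate(aps, dists))              # → (5.0, 5.0)
```

In practice, RSSI is noisy and multipath-prone, which is why fingerprinting and round-trip-time ranging (e.g., 802.11 Fine Timing Measurement) are often preferred — a contrast the seminar paper can work out.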
Topic 13: AI for the Edge
Supervisor: Michael Schramm
With the evolution of mobile communication technologies, edge computing theory and techniques have increasingly attracted the interest of researchers and engineers around the world. Edge computing can help to accelerate content delivery and reduce network load by communicating with nearby edge nodes instead of the cloud.
Artificial Intelligence (AI) is increasingly used to improve the performance, scalability, and efficiency of edge computing environments. AI techniques are applied for dynamic resource allocation, predictive maintenance, and real-time workload distribution. By leveraging machine learning and deep learning models, edge networks can self-optimize, adapt to changing conditions, and reduce operation costs.
In this seminar topic, we will explore how AI can optimize edge computing environments. This will be done by investigating how AI can be used for resource management, network optimization, security, and energy efficiency at the edge. Additionally, the seminar paper should discuss the benefits and limitations of AI-driven edge optimization and examine the trade-offs between computational overhead and performance gains.
Topic 14: Federated Learning for the Edge
Supervisor: Michael Schramm
Federated Learning is a machine learning setting in which many clients jointly train a model without making their local data accessible to collaborators. This enables institutions with smaller datasets to gain insights that they could not get from their own data alone. In addition, federated learning can help in privacy-sensitive areas such as medical research.
At the same time, edge computing has attracted the interest of researchers and engineers to accelerate content delivery and reduce network load by communicating with nearby edge nodes instead of the cloud. However, it introduces new issues such as task scheduling, data replication, and data placement.
In this seminar topic, we will review scientific publications on federated learning for the edge. What are the challenges in edge networks? What federated learning approaches exist to tackle these challenges?
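The standard aggregation rule in this setting, federated averaging (FedAvg), is simple to sketch: each client improves the model on its private data, and the server averages the results weighted by local dataset size. The toy model below (a single scalar weight fit by gradient descent) is an illustrative assumption, not a realistic edge deployment:

```python
# A minimal sketch of federated averaging (FedAvg). The model -- a single
# scalar weight fitting y = w*x -- is a toy assumption for illustration.

def local_update(w, data, lr=0.1, epochs=5):
    """Client step: gradient descent on the private data, starting from
    the global weight w. The raw data never leaves the client."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(w, clients):
    """Server step: average client updates, weighted by dataset size."""
    total = sum(len(d) for d in clients)
    return sum(local_update(w, d) * len(d) for d in clients) / total

# Two edge clients hold disjoint samples of the same relation y = 3x.
clients = [[(1, 3), (2, 6)], [(3, 9)]]
w = 0.0
for _ in range(20):          # communication rounds
    w = fed_avg(w, clients)
print(round(w, 2))           # converges toward 3.0
```

At the edge, the open questions are exactly the ones this setup glosses over: stragglers, unreliable links, non-IID client data, and where on the edge hierarchy the aggregation should run.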