Seminar

Distributed Systems

Organization and topics of the Seminar and Advanced Seminar

A Seminar for Bachelor students and an Advanced Seminar for Master students are offered each semester.

Both modules are organized in the style of a scientific conference. Each student works on an assigned topic from the distributed systems domain. After submitting a written report on the results of their literature research, each participant writes reviews for other seminar papers and presents their work in the "conference session" of the seminar.

Seminar: Modern Internet Technologies

Driven by the requirements of innovative applications and services, the past few years have produced new technologies for networked and distributed systems. In the Internet of Things, a large number of devices and everyday objects equipped with sensors and actuators are connected to the Internet and communicate through mostly wireless communication technologies. In one of its sub-domains, the industrial Internet of Things (IIoT, Industry 4.0), machines, tools, transport equipment, etc. are networked.
Virtualization and "software-defined" systems (e.g., Software-Defined Networking (SDN)) increase the flexibility and efficiency of distributed systems through dynamic adaptation and scaling.
Driven by the popularity of the Bitcoin system, distributed ledger technologies and related concepts such as smart contracts have been developed, which are not only the foundations of electronic currencies, but can also support any application in which a consensus between different parties must be reached and documented. 
Another focus has been the reduction of latency in networked and distributed systems, e.g. by using nearby edge and fog computing resources in addition to the remote cloud, or by using optimized communication protocols to, for instance, rapidly connect client and server.
In addition to stationary networks, mobile (5G) communication technologies and systems have developed rapidly. For example, COVID contact-tracing applications use mobile devices to record contacts. This method, known as crowdsensing, can be used more generally to collect large amounts of geographically distributed sensor data.

This seminar will discuss a wide range of current technologies, protocols and standards that enable the above networked and distributed applications and services. 

 

Flyer

The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.

Flexible, scalable, and efficient networks

1) Low-Latency Internet

Supervisor: Robin Laidig

Low latency is crucial for many networked applications and often improves performance and user experience. For example, in online gaming, high latency can cause lag and potentially make the game unplayable; at Amazon, even a small increase in latency on the online shopping platform leads to lower sales, with losses amounting to almost 1 billion dollars. But why are latencies in the Internet so high, and how can we reduce them?

The objective of this seminar topic is to review computer science research that proposes solutions to overcome high latencies in the Internet.

 

2) Fairness of Bandwidth Sharing Between Transport Layer Protocols (TCP, UDP, QUIC, …)

Supervisor: Robin Laidig

There are many transport layer protocols for computer networks that are used for different purposes. Traditional examples are the connection-oriented TCP and the connectionless UDP. A more recent example is the Google QUIC protocol, which is based on UDP but provides features similar to TCP with higher efficiency. All of these protocols are used on the Internet today and must share the same infrastructure. But how well do they work together? Do they share the available bandwidth equally? What measures are taken to prevent unfair bandwidth allocation between network flows of different protocols?

The goal of this seminar paper is to review existing computer science research that investigates the fairness of bandwidth sharing between different transport layer protocols. Furthermore, the paper should point out possible improvements or solutions proposed in this research.

 

3) Data Plane Verification Techniques

Supervisor: Simon Egger

In networking, the data plane specifies a match-action table for each network switch. When a packet's header fields match a table entry, the corresponding action specifies how the packet is forwarded and which header fields are modified. Research on data plane verification investigates mechanisms to identify network errors (e.g., forwarding loops and black holes), in the simplest case by analyzing a complete snapshot of the network.
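
To make the notion of a match-action table concrete, the following toy Python sketch (purely illustrative, not taken from any particular verifier or switch) models a snapshot of one switch's forwarding table as prefix-match rules checked in priority order; a data plane verifier would analyze such snapshots across all switches to detect loops or black holes.

```python
# Illustrative sketch of a single switch's match-action table (not from any real verifier).
from dataclasses import dataclass

@dataclass
class Rule:
    dst_prefix: str   # simplified prefix match, e.g. "10.0.1." stands for 10.0.1.0/24
    action: str       # e.g. "forward:port1", "drop"

TABLE = [
    Rule("10.0.1.", "forward:port1"),
    Rule("10.0.",   "forward:port2"),
    Rule("",        "drop"),          # default rule, matches everything
]

def lookup(dst_ip: str) -> str:
    """Return the action of the first (highest-priority) matching rule."""
    for rule in TABLE:
        if dst_ip.startswith(rule.dst_prefix):
            return rule.action
    return "drop"

print(lookup("10.0.1.7"))     # forward:port1
print(lookup("192.168.0.1"))  # drop (default rule)
```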

The objective of this seminar topic is to compare state-of-the-art data plane verifiers. Given the promising results achieved by recent distributed verifiers (e.g., Xiang et al.), their applicability in networks with a (logically) centralized controller (e.g., Software-Defined Networks and Time-Sensitive Networks) should be discussed.

 

4) Mininet: A Network Emulator

Supervisor: Jona Herrmann

Testing and evaluating new network protocols typically requires expensive hardware testbeds, which are not always available. An alternative is emulation, which has the advantages of being cost-effective and highly configurable. Mininet is a network emulator that makes it possible to emulate a large network with many hosts and switches on a single Linux computer. Arbitrary topologies can be emulated, and the links can be configured with properties such as delay, bandwidth, and loss.
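
As a flavor of how Mininet is used, the following minimal sketch builds on Mininet's Python API; the topology and the link parameters (bandwidth, delay, loss) are chosen arbitrarily for illustration, and the script must be run with root privileges on a Linux machine with Mininet installed.

```python
# Minimal Mininet sketch: one switch, two hosts, configurable links (values are illustrative).
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink
from mininet.cli import CLI

class TwoHostTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        # With TCLink, bandwidth (Mbit/s), delay, and loss (%) can be set per link.
        self.addLink(h1, s1, bw=10, delay='5ms', loss=1)
        self.addLink(h2, s1, bw=10, delay='5ms')

if __name__ == '__main__':
    net = Mininet(topo=TwoHostTopo(), link=TCLink)
    net.start()
    net.pingAll()   # basic connectivity test between all emulated hosts
    CLI(net)        # drop into the interactive Mininet shell
    net.stop()
```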

The objective of this seminar topic is to explain the underlying concepts of Mininet and to give a brief overview of its use.

 

Secure and privacy-preserving networks

5) Tor - The Onion Router

Supervisor: Lukas Epple

Tor is a popular anonymity network that allows users to browse the Internet without revealing their true identity, location, or activity. By routing Internet traffic through a series of encrypted nodes or relays, the source of the traffic is obscured and user privacy is maintained.
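
The following toy Python sketch illustrates the layering idea only; it is not Tor's actual protocol or cryptography (it uses symmetric Fernet keys from the cryptography package, whereas real Tor negotiates keys per circuit and adds routing headers). Each relay can remove exactly one layer, so no single relay sees both the source and the plaintext payload.

```python
# Toy onion-layering illustration (NOT Tor's real protocol); requires the "cryptography" package.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit relay

def build_onion(payload: bytes) -> bytes:
    onion = payload
    for key in reversed(relay_keys):      # innermost layer belongs to the exit relay
        onion = Fernet(key).encrypt(onion)
    return onion

def route(onion: bytes) -> bytes:
    for key in relay_keys:                # each relay peels exactly one layer
        onion = Fernet(key).decrypt(onion)
    return onion

print(route(build_onion(b"GET / HTTP/1.1")))
```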

The objective of this seminar topic is to provide a comprehensive overview of Tor and its technical underpinnings. This will be done by exploring how Tor works internally and how it achieves privacy. Additionally, it should review the potential risks and limitations of Tor, as well as its impact on Internet privacy and security. 

 

6) I2P - The Invisible Internet Project

Supervisor: Lukas Epple

I2P offers a self-contained network that ensures secure and anonymous communication between users. Unlike other anonymity networks, I2P focuses on providing a private communication layer within its own boundaries, rather than enabling access to the wider Internet.

The objective of this seminar topic is to provide an insightful overview of I2P, highlighting its pivotal role in the realm of anonymous online communication. To this end, we will navigate through the key points of the I2P architecture, explore its distinctive use of garlic routing to encrypt and bundle messages, examine the various applications intrinsic to the I2P ecosystem, and analyze the network's limitations and vulnerabilities.

 

Time-sensitive networks

7) Deterministic Real-Time Communication With Time-Sensitive Networking

Supervisor: Lucas Haug

Real-time communication with deterministic bounds on network delay and jitter is critical for the efficient and safe operation of networked real-time systems. As distributed Cyber-Physical Systems (CPSs) with networked sensors, actuators, and controllers become increasingly popular in various domains such as the Industrial Internet of Things (IIoT) and autonomous vehicles, the demand for deterministic real-time communication with high reliability and bounded network delay and delay variance (jitter) has grown.

Major standardization organizations like the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF) have acknowledged the necessity of deterministic networks, leading to a set of standards under the term Time-Sensitive Networking (TSN), which extend standard wired IEEE 802.3 networks (Ethernet) with real-time communication mechanisms.

The objective of this seminar topic is to give an overview of the aforementioned TSN standards and describe how they enable deterministic real-time communication.

 

8) gPTP – Time Synchronization for Time-Sensitive Networking

Supervisor: Lucas Haug

Synchronized clocks are a crucial requirement in networks with time-critical data transfers. Accurate synchronization is essential, as even small clock deviations can result in operational errors. Given that the internal clocks of network devices inherently drift apart over time, regular clock synchronization is indispensable to ensure seamless and error-free operations.

The Precision Time Protocol (PTP), specified in IEEE 1588, aims to achieve high-accuracy clock synchronization within local networks. It uses a master-slave architecture and hardware timestamping to reach sub-microsecond synchronization accuracy. The generalized Precision Time Protocol (gPTP), specified in IEEE 802.1AS, adapts PTP for Time-Sensitive Networking (TSN). It introduces modifications, such as an adapted Best Master Clock Algorithm (BMCA), to improve synchronization efficiency and accuracy in networks that carry critical real-time traffic.
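
As background, the textbook PTP offset estimation from a single Sync/Delay_Req exchange can be summarized as follows (this is the generic end-to-end form; gPTP uses a peer-to-peer delay measurement based on the same idea). Let t1/t2 be the send/receive times of the Sync message and t3/t4 those of the Delay_Req message, and assume a symmetric path delay d and a slave clock offset θ:

```latex
% One Sync/Delay_Req exchange with symmetric path delay d and slave offset \theta:
%   t_2 = t_1 + d + \theta, \qquad t_4 = t_3 + d - \theta
\[
  d = \frac{(t_2 - t_1) + (t_4 - t_3)}{2},
  \qquad
  \theta = \frac{(t_2 - t_1) - (t_4 - t_3)}{2}
\]
```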

The objective of this seminar topic is to explain the mechanisms by which PTP achieves precise time synchronization and to examine the specific adaptations and enhancements introduced by gPTP in the context of TSN.

 

9) Impossibility Results and Lower Bounds for Synchronization Problems

Supervisor: Simon Egger

There is a surprising number of impossibility results in distributed computing. Well-known results include the CAP theorem, the Byzantine Generals Problem, and the FLP theorem.

This seminar topic focuses on impossibility results and known lower bounds for synchronization problems, i.e., the time it takes for multiple parties to synchronize in an asynchronous reliable network. The objective of the seminar paper is to explore known results and to discuss their implications for time synchronization in wireless Time-Sensitive Networks. A good starting point is provided by Lynch.

 

10) Traffic Shaping in Time-Sensitive Networks

Supervisor: Heiko Geppert

In recent years, the IEEE has created a series of standards to enable real-time communication in Ethernet networks, known as Time-Sensitive Networks (TSN). There are several traffic shapers defined in TSN, such as the Credit-based Shaper (CBS) to stretch out event-based traffic, the Time-aware Shaper (TAS) for precise and deterministic frame forwarding, or the Asynchronous Traffic Shaper (ATS) to limit the rate of event-based traffic while enabling bursts. The different traffic shapers provide a variety of features for different use cases.

The objective of this seminar topic is to delve into the principles, mechanisms, and applications of CBS, TAS, and ATS, and to examine their effectiveness in ensuring Quality-of-Service (QoS) and minimizing latency in TSN environments. Through a comprehensive review of relevant literature and case studies, the seminar paper should provide insight into the performance and challenges associated with these traffic shaping techniques.

 

Shared resource coordination and synchronization

11) Media Access Control in IEEE 802.11 Networks

Supervisor: Jona Herrmann

Wireless Local Area Networks (WLANs) based on the IEEE 802.11 standard allow mobile devices to access a local network and, ultimately, the Internet. Typically, many devices are connected to and simultaneously use the same WLAN. Therefore, a Media Access Control (MAC) protocol is required to coordinate access to the shared medium.

The objective of this seminar topic is to discuss why the Media Access Control of IEEE 802.3 networks cannot be used and to describe the MAC used in IEEE 802.11 networks.

 

12) Memory Barriers: The Primitives Behind Shared-Memory Synchronization

Supervisor: Simon König

Controlling access to memory is critical to building concurrent and highly optimized programs. Programmers often assume that the operations of their programs are executed sequentially and in order. However, optimizations applied by the compiler, the CPU pipeline and the memory management unit can cause the operations of a program to be executed out of order. In multi-threaded applications, this has the potential to change the semantics of the program, resulting in unexpected observed behavior. Therefore, synchronization is a fundamental problem in computer science.

Sequential consistency (SC) is a model that ensures that the result of a concurrent execution is the same as if the operations of all processors were executed in some sequential order. However, SC is very expensive to achieve in practice. Instead, many languages use the SC-DRF (data-race-free) model. This consistency model is achieved by inter-thread synchronization through the use of memory barriers. Memory barriers allow OS-level primitives (e.g., semaphores) or lock-free data structures to provide meaningful consistency guarantees.

The objective of this seminar topic is to explore the principles and mechanisms used to control and synchronize access to shared memory. In particular, it should investigate the use of memory barriers in modern memory models and, through a comprehensive literature review, provide insight into the associated challenges.

 

Design & coordination of distributed IoT applications

13) Distributed Coordination Patterns for Adaptation Logic in Self-Adaptive Systems

Supervisor: Michael Matthé

Self-adaptive systems use adaptation control to make configuration decisions based on their current environmental context. Their adaptation logic consists of monitoring, analysis, planning, and execution (MAPE) components. In a distributed system, it may be beneficial to distribute these components across the participating nodes to improve the performance or efficiency of the application. The distribution of components can range from a full MAPE loop per node to a single centralized full MAPE loop for the whole network.
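
As a rough illustration of what one such loop contains, the following minimal Python skeleton (names, thresholds, and actions are hypothetical, not taken from any specific framework) wires the four MAPE components together; in a decentralized variant, each node would run some or all of these methods locally.

```python
# Minimal, illustrative MAPE loop skeleton (all names and values are hypothetical).
class MapeLoop:
    def __init__(self, sensors, actuators, knowledge):
        self.sensors, self.actuators, self.knowledge = sensors, actuators, knowledge

    def monitor(self):
        return {name: read() for name, read in self.sensors.items()}

    def analyze(self, data):
        # e.g. flag a violation if the observed latency exceeds a stored threshold
        return data["latency_ms"] > self.knowledge["max_latency_ms"]

    def plan(self, violation):
        return [("scale_out", 1)] if violation else []

    def execute(self, plan):
        for action, arg in plan:
            self.actuators[action](arg)

    def step(self):
        self.execute(self.plan(self.analyze(self.monitor())))

loop = MapeLoop(
    sensors={"latency_ms": lambda: 120.0},
    actuators={"scale_out": lambda n: print(f"scale out by {n}")},
    knowledge={"max_latency_ms": 100.0},
)
loop.step()   # monitors 120 ms, detects a violation, plans and executes a scale-out
```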

The objective of this seminar topic is to compare self-adaptive approaches with different degrees of decentralized adaptation control and to discuss their advantages and disadvantages with respect to each other and their specific application context.

 

14) Building Energy Models for Online Load Scheduling and Shifting 

Supervisor: Melanie Heck

Smart homes connect a growing number of smart devices to the Internet. In most buildings, however, the devices operate in ways that reduce user comfort and waste energy. Energy flexibility schemes seek to reduce peak demand and improve demand response by shifting energy consumption to times of high availability. At the core of these schemes are Building Energy Models (BEM) that forecast energy consumption and enable online control and optimization of scheduling and load-shifting plans. However, building energy models are heavily influenced by weather conditions, building operations, and occupant schedules. Additionally, sensor data is often of low quality, and sub-meters that measure energy for specific parts of the building are rarely available.

The objective of this seminar topic is to review approaches for building simplified BEM that can produce accurate forecasts using low-quality and sparse sensor data, making them suitable for online control and optimization.

 

15) Kubernetes in Edge Computing

Supervisor: Michael Schramm

With the evolution of mobile communication technologies, edge computing theory and techniques have increasingly attracted the interest of researchers and engineers around the world. By communicating with nearby edge nodes instead of the cloud, edge computing can help accelerate content delivery and reduce network load.

Kubernetes is fast becoming the de facto standard for orchestrating large container deployments in the cloud. However, edge computing has very particular characteristics that have to be taken into account when placing and scheduling containers. In contrast to the cloud setting, container orchestration for edge computing, and Kubernetes in particular, has not yet been researched in detail.

The objective of this seminar topic is to introduce and explain containers and their orchestration using the example of Docker and Kubernetes. In addition, the state of the art of container orchestration in edge computing should be investigated.

 

Advanced Seminar (Hauptseminar): Trends in Distributed and Context-Aware Systems

Distributed systems are a cornerstone of many services today. Distribution provides scalability of cloud services, implemented atop a massive number of servers. For instance, Google's data centers host an estimated 2.5 million servers! At the same time, replicating functions and data ensures reliability. This applies not only to cloud services, but also to peer-to-peer networks, as used for instance by the Bitcoin network, and to mobile systems such as vehicular networks or networks of unmanned aerial vehicles. As in the example in Figure 1, such mobile systems are inherently geographically distributed and are supported by edge cloud services located close to the mobile devices to reduce network latency. Last but not least, the Internet is evolving into an Internet of Things (IoT), where virtually everything can communicate through the Internet.
 
Such distributed systems come with many challenges, as pointed out by Urs Hölzle (Senior Vice President for Technical Infrastructure at Google): “At scale, everything breaks ... Keeping things simple and yet scalable is actually the biggest challenge. It's really, really hard.” Other challenges include consistency of replicated services, privacy, and protection against attacks if untrusted devices are involved.

Adaptation is one of the key mechanisms that enable distributed systems to cope with the demands of increasingly dynamic environments. Figure 2 shows an example of a system that monitors the user’s context and adapts its layout and functions to provide a more efficient interaction.

In this seminar, we take a deep dive into specific distributed and context-aware systems concepts that tackle the above challenges.

 

Flyer

The seminar is organized in the style of a scientific conference. Following the submission of a written paper on the assigned topic, students write reviews for other seminar papers and participate in a final presentation session where they present their work and discuss the work of others. Attendance at the kick-off and final presentation session is mandatory.

Flexible, scalable, and efficient networks

1) Deterministic Latency Guarantees for Event-Based Network Traffic

Supervisor: Robin Laidig

With the Internet of Things and Industry 4.0, an increasing number of devices are connected via computer networks. These devices vary greatly in functionality and importance, so Quality of Service (QoS) models are essential to separate and prioritize their network traffic. Of particular importance is real-time network traffic, which must arrive at its destination before a certain deadline, otherwise catastrophic failures can occur. To guarantee delivery before a strict deadline, recent research has focused on scheduling algorithms that provide deterministic latency bounds. However, many of these algorithms are designed to work with periodic, time-triggered network traffic that can be precisely predicted. In reality, there is often event-based (sporadic) network traffic that can occur at unpredictable times.

The goal of this seminar paper is to review existing computer science research that proposes network scheduling algorithms for event-based network traffic. Further, the seminar paper shall point out future research directions.

 

2) Software-Defined Networks for TSN: OpenFlow and P4

Supervisor: Lucas Haug

Modern computing environments demand programmatically configurable networks. Software-Defined Networking (SDN) addresses these demands by decoupling the control and data planes. Time-Sensitive Networking (TSN), which is crucial for deterministic data delivery, is becoming increasingly important, especially for industrial and automotive networks. TSN integrated with SDN promises flexible, deterministic networking.

OpenFlow, an early SDN enabler, provides a standardized interface for flow table manipulation that dictates packet processing based on header matches. Its limitations in terms of pipeline flexibility and protocol support have led to the emergence of P4. P4 offers extensive programmability for custom packet processing, which enhances the adaptability and performance of SDN.

A key component of TSN is the IEEE 802.1Qbv standard, also known as Time-Aware Shaper. It is crucial for achieving deterministic data delivery by scheduling traffic in time-gated windows, ensuring that high-priority traffic can pass through the network with minimal latency and jitter. Incorporating Qbv into SDN frameworks through OpenFlow and P4 can significantly improve network performance for time-sensitive applications.

The objective of this seminar topic is to provide a detailed description of both OpenFlow and P4. In addition, it should compare their advantages and disadvantages, especially in the context of integrating TSN, and Qbv in particular, to achieve deterministic, low-latency networking for critical applications.

 

3) In-Depth Exploration of Asynchronous Traffic Shaping   

Supervisor: Heiko Geppert

In recent years, the IEEE has created a series of standards to enable real-time communication in Ethernet networks, known as Time-Sensitive Networks (TSN). Among the various traffic shapers defined in TSN, the Asynchronous Traffic Shaper (ATS) stands out for its ability to limit the rate of event-based traffic while enabling bursts. While the general functionality of ATS is well described by its use of token buckets, the standard specifies a number of details and additional features.
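
For intuition, the following simplified Python sketch shows the token-bucket idea that underlies ATS; it is illustrative only, as the standardized shaper (IEEE 802.1Qcr) assigns per-frame eligibility times and keeps additional per-flow state rather than simply accepting or rejecting frames.

```python
# Simplified token-bucket sketch (illustrative only, not the IEEE 802.1Qcr state machine).
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # long-term committed rate (bits per second)
        self.capacity = burst_bits    # maximum burst size (bits)
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, frame_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens according to the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True   # frame is eligible for transmission now
        return False      # frame must wait (ATS would compute an eligibility time instead)

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=12_000)
print(bucket.allow(8_000), bucket.allow(8_000))  # first frame passes, second exceeds the burst
```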

The objective of this seminar topic is to take a deep dive into the Asynchronous Traffic Shaper, highlighting the shaper’s capabilities and interplay with other TSN standards.

 

4) Scheduling Heuristics for Time-Triggered Real-Time Data Flows   

Supervisor: Heiko Geppert

Modern Industrial IoT applications, such as automated industrial plants, require real-time communication between the different machines. Otherwise, a delayed message could lead to crashes, resulting in human harm or financial loss. The IEEE 802.1 Time-Sensitive Networks (TSN) standards extend Ethernet networks to provide real-time properties. Features like the Time-Aware Shaper (TAS) and precise time synchronization enable deterministic schedules for time-triggered flows. However, the standards do not specify how to compute these schedules. Instead, researchers have developed countless scheduling strategies. Large or dynamic systems with rapidly changing traffic demands require fast and scalable scheduling solutions. Heuristics offer a way to achieve fast computations with reasonable scheduling quality.

The objective of this seminar topic is to review and present fast and scalable scheduling heuristics, which do not rely on formal solver tools, for time-triggered traffic in TSN networks. The review should consider different feature sets such as multicast and scalability to large problem instances.

 

Concurrency coordination & control

5) Understanding the Underdogs of Concurrency Control

Supervisor: Simon König

Uncontrolled concurrent data access can potentially leave data in an inconsistent state. Therefore, concurrency control algorithms are a central element in distributed data storage systems. In particular, distributed transaction processing relies heavily on transaction schedulers such as two-phase locking (2PL) or optimistic concurrency control (OCC). These algorithms are widely known and used in state-of-the-art data storage systems.

The goal of this seminar topic is to provide an overview of less popular concurrency control algorithms. Examples include altruistic locking (AL), ordered sharing of locks (O2PL), tree locking (WTL, RWTL), and backward-oriented optimistic concurrency control (BOCC). Through a comprehensive review of relevant literature and case studies, the seminar paper should discuss the benefits and drawbacks of these less well-known concurrency control algorithms compared to their more popular counterparts.

 

6) Coroutines   

Supervisor: Simon König

Subroutines encapsulate computation and help to break down a large program into smaller parts. Coroutines replicate this property, but the lifetime of a coroutine is not tied to the control flow of the program. When a subroutine returns to the calling program, its control information and local variables are destroyed. On the other hand, when a coroutine returns control to its caller, its execution is not finished and so its state is preserved. Each time control returns to the coroutine, it resumes execution with the local control and data state from where it left off. Hence, coroutines are a form of retentive control.
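
The following short Python example (generator-based coroutines; other languages such as C++20 or Kotlin provide similar constructs) shows this retained state: the local variables survive every suspension and resumption.

```python
# Generator-based coroutine: local state survives each transfer of control.
def running_average():
    total, count = 0.0, 0
    while True:
        value = yield (total / count if count else None)  # suspend and hand a result back
        total += value                                    # resume here with preserved state
        count += 1

avg = running_average()
next(avg)            # advance to the first yield
print(avg.send(10))  # 10.0
print(avg.send(20))  # 15.0 -- total and count were retained between calls
```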

Coroutines are especially well suited for the implementation of asynchronous or event-driven applications. With the use of coroutines, data or IO stalls can be hidden, improving parallelism at runtime and reducing programming complexity at design time. As a result, the use of coroutines is experiencing a resurgence. Many popular languages have recently introduced native support for coroutines, and coroutine-based implementations of highly parallel applications are becoming more and more popular.

The objective of this seminar topic is to examine the principles and mechanisms of coroutines. Through a comprehensive review of relevant literature and case studies, the seminar paper could investigate, for instance, synchronization primitives for coroutines, state and context switching for coroutines, or mechanisms for runtime optimization such as deadlock detection.

 

Privacy & security

7) DNS Resolution: Attacks and the Need for Formal Frameworks

Supervisor: Simon Egger

The Domain Name System (DNS) is a central building block of today's Internet, resolving domain names into IP addresses. However, with the growing complexity of DNS (e.g., caching and security amendments), the surface area allowing for misconfiguration and attacks (e.g., cache poisoning and denial of service attacks) increases.

The objective of this seminar topic is to review past attacks on DNS and to examine recent efforts to design a formal framework for DNS (e.g., Liu et al.). It should investigate the effects of different attack detection mechanisms and of the heuristics that are employed to cover the attack search space.

 

8) Monero

Supervisor: Lukas Epple

Monero stands out in the cryptocurrency landscape for its emphasis on user privacy and anonymity. Through the use of advanced cryptographic techniques such as ring signatures, stealth addresses, Confidential Transactions (CT), and Dandelion++, Monero obscures the sender, receiver, and transaction amount, ensuring that all transfers are confidential and untraceable.

The objective of this seminar topic is to dissect the technological mechanisms that underpin Monero, specifically Dandelion++, providing insight into how it achieves a secure and private transaction environment.

 

Edge computing          

9) Edge Intelligence

Supervisor: Michael Schramm

With the evolution of mobile communication technologies, edge computing theory and techniques have increasingly attracted the interest of researchers and engineers around the world. By communicating with nearby edge nodes instead of the cloud, edge computing can help accelerate content delivery and reduce network load.

At the same time, breakthroughs in deep learning and improvements in hardware architectures have enabled new artificial intelligence (AI) applications. However, the billions of bytes of data generated at the network edge cannot all be transmitted to a central cloud server hosting the AI hardware infrastructure. This leads to a strong demand for bringing AI to the edge in order to reduce communication overhead, improve privacy, and reduce latency.

The goal of this seminar topic is to review existing scientific publications on Edge Intelligence and to describe its different types in terms of their advantages over cloud solutions, as well as the current issues that research is working on. In addition, applications of Edge Intelligence should be discussed.

 

10) Federated Learning for the Edge

Supervisor: Michael Schramm

Federated Learning is a machine learning setting where many clients train together without making the data itself accessible to collaborators. This enables institutions with smaller datasets to gain insights that they could not get from their own data alone. In addition, federated learning can help in privacy-sensitive areas such as medical research.
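
The following minimal Python sketch illustrates the basic federated-averaging idea under simplifying assumptions (the local training step is a placeholder and the function names are hypothetical, not a specific framework's API): clients only exchange model parameters, never raw data.

```python
# Minimal federated-averaging sketch (illustrative; local training is only a placeholder).
import numpy as np

def local_update(weights: np.ndarray) -> np.ndarray:
    # Placeholder for local training; in practice this would be several epochs of
    # SGD on the client's private data, which never leaves the client.
    return weights + np.random.normal(scale=0.01, size=weights.shape)

def federated_round(global_weights: np.ndarray, client_sizes: list[int]) -> np.ndarray:
    updates = [local_update(global_weights.copy()) for _ in client_sizes]
    total = sum(client_sizes)
    # Server aggregates parameters, weighted by each client's data set size.
    return sum((n / total) * w for n, w in zip(client_sizes, updates))

weights = np.zeros(10)
for _ in range(5):                                   # five communication rounds
    weights = federated_round(weights, client_sizes=[100, 200, 50])
```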

At the same time, edge computing has attracted the interest of researchers and engineers to accelerate content delivery and reduce network load by communicating with nearby edge nodes instead of the cloud. However, it introduces new issues such as task scheduling, data replication, and data placement.

The objective of this seminar topic is to review scientific publications on federated learning for the edge. What are the challenges in edge networks? What federated learning approaches exist to tackle these challenges?

 

11) Multi-Armed Bandit Learning in Edge Computing Approaches

Supervisor: Michael Matthé

In edge computing, computing resources are not only provided by a central cloud server, but also by many smaller servers located closer to the user at the edge of the network. One goal is to improve the latency of interactive applications by distributing data and allocating resources closer to the user. Multi-armed bandit learning is a type of online decision problem in which the goal is to choose the arm with the highest reward. In the case of edge computing, an arm can be a particular server or a data distribution configuration. The goal of using multi-armed bandits in combination with edge computing is to optimize system performance metrics such as user satisfaction.
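
As a concrete illustration, the following Python sketch implements a simple epsilon-greedy bandit in which each arm is a candidate edge server and the reward is a stand-in for measured latency; all names and numbers are hypothetical.

```python
# Illustrative epsilon-greedy bandit for server selection (names and rewards are hypothetical).
import random

class EpsilonGreedyBandit:
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.means = {a: 0.0 for a in self.arms}

    def select(self):
        if random.random() < self.epsilon:                   # explore a random arm
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.means[a])   # exploit best estimate

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.means[arm] += (reward - self.means[arm]) / n    # incremental mean

bandit = EpsilonGreedyBandit(["edge-1", "edge-2", "cloud"])
for _ in range(100):
    server = bandit.select()
    reward = -random.uniform(5, 50)   # stand-in for measured response latency in ms
    bandit.update(server, reward)
```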

The objective of this seminar topic is to explore approaches that use multi-armed bandits for edge computing use cases and to describe the modeling of the approaches in detail.

 

IoT applications & technologies

12) Managing Uncertainty in Self-Adaptive Systems

Supervisor: Michael Matthé

Many modern software systems are becoming so complex that human control is no longer possible. One solution is self-adaptive systems. They monitor their environmental context and, in their simplest implementation, make configuration decisions based only on the monitored data. However, there are some uncertainties in this process. The monitored data may be incomplete or inaccurate, and it may be beneficial to reason about future monitoring data by including predictions about the future behavior of the system's environmental context.

The objective of this seminar topic is to highlight existing approaches for dealing with uncertainty in self-adaptive systems. The effectiveness of each approach with respect to its application domain should be highlighted, and their differences should be compared.

 

13) Indoor Positioning Technologies

Supervisor: Jona Herrmann

The Global Positioning System (GPS) is the most widely used technology for outdoor positioning. However, as GPS relies on satellites whose signals are attenuated by objects such as walls, it cannot be used for indoor positioning. Therefore, other technologies are required for indoor positioning, using, for example, WiFi, Bluetooth, or RFID.

The objective of this seminar topic is to provide an overview of existing indoor positioning technologies and to compare their advantages and disadvantages.

 

14) Building Function Virtualization

Supervisor: Melanie Heck

A sensing environment is key to optimizing and automating smart building operations. However, smart homes often use low-cost sensors with low reading frequency, or receive no sensor data at all from relevant sources due to high initial costs or difficulties in placing the sensors (e.g., in pipelines and hidden spaces). Systematic errors in sensor readings further complicate their use for smart applications and the optimization of building operations.

Virtual sensors are therefore used to estimate measurements when physical sensors are difficult or impossible to deploy. These sensors, consisting of data-driven mathematical models (e.g., unsupervised data mining or deep learning models), can forecast performance and operational states of smart building components in order to improve energy consumption, control efficiency, and comfort in the building.

The objective of this seminar topic is to conduct a scoping review of virtual sensors in order to identify current trends in, e.g., applications, sensor modelling, and data calibration approaches.

 

15) Smart Building Digital Twins  

Supervisor: Melanie Heck

Although the services that buildings provide to their occupants (e.g., cooling/heating, lighting, household appliances) are typically the same, they use different technologies, and the technical equipment is often replaced over time. Occupants using these services are typically not aware of, or do not take into account, the current availability and price of electricity from the grid. Digital twins therefore aim to create a virtual replica of a building's static and dynamic characteristics. They can monitor the current state, predict the future state, and take proactive measures to optimize the operation of the building and schedule power consumption more efficiently.

The objective of this seminar topic is to review digital twin frameworks from both industry and research and to identify their strengths and weaknesses.

 

16) Energy System Modeling for Africa

Supervisor: Sonja Klingert

As part of the energy transition, many African countries need to expand access to energy, mostly electricity, to supply a higher share of the population. However, this expansion requires energy system models that capture the specific characteristics of the current energy systems in African countries.

The objective of this seminar topic is to review current energy systems in order to extract the main differences between energy systems in the Western world and Africa. In a second step, prevailing energy system models shall be analyzed in terms of how well they meet the requirements of African countries and what challenges energy models for Africa face.

 


