Workshops and Tutorials

Workshop I: Workshop on Data-Driven Engineering

Samuel Fricker

The software domain is being changed radically by a new dominant paradigm: data-driven engineering. Data is essential for developing a new generation of systems, exemplified by autonomous driving, adaptive manufacturing, personalised medicine, risk-aware investments, and other applications of artificial intelligence. The paradigm is enabled by a massive increase in deployed sensors, improvements in learning technologies such as deep learning, and global networks. This change also implies that end-users unavoidably become data subjects and that systems learn about these users and adapt as they are being used. The change creates new forms of engineering processes and supply networks that can span legal entities and countries: system owners harvest data from end-users and offer it to data scientists in exchange for algorithms that allow systems to take intelligent decisions themselves and to self-adapt.
This workshop offers a forum for debating the role of privacy and identity management in data-driven engineering. The goal is to find approaches that enable democratic forms of data-driven engineering and stimulate the uptake of this paradigm by industry for broadly accepted and sustainable innovations. To support the discussion, we would like to confront ethical and legal challenges with concrete cases of data-driven engineering that are currently being explored by academia and industry. The participants are encouraged to bring their opinions, past research, and cases to the workshop. The result of the workshop should be a position paper that summarises the opinions of the participants and grounds the arguments in current critical thinking and literature.

Workshop II: Interactive Workshop on GDPR transparency requirements and data privacy vocabularies

Eva Schlehahn, Rigo Wenning and Harald Zwingelberg

The workshop introduces participants to the transparency requirements of the European General Data Protection Regulation (GDPR). As a necessary precondition for the data subject to exercise his or her rights, transparency is a key element of fair and lawful personal data processing. The discussions will address how privacy-enhancing technologies may enable or support transparency of the whole processing lifecycle, thereby focusing especially on technical specifications such as vocabularies and GDPR-related taxonomies supporting management of consent and objections.
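As a purely illustrative example of what such a vocabulary-backed record might look like, the sketch below encodes a consent entry as a small machine-readable structure. The field and taxonomy names are hypothetical, not taken from any published specification; they only hint at how shared terms make consent and objections recordable, queryable, and auditable across the processing lifecycle.

```python
import json

# Hypothetical consent record sketched as a Python dict; term names are
# illustrative only and loosely echo the kind of shared vocabulary the
# workshop discusses, not any published taxonomy.
consent_record = {
    "dataSubject": "user-123",
    "dataController": "example-org",
    "personalDataCategory": "health",         # taxonomy term
    "processingPurpose": "medical-research",  # taxonomy term
    "legalBasis": "consent",                  # GDPR Art. 6(1)(a)
    "given": "2018-08-20T10:00:00Z",
    "withdrawn": None,                        # filled in if consent is revoked
    "objections": [],                         # Art. 21 objections, if any
}
print(json.dumps(consent_record, indent=2))   # serialisable for exchange/audit
```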

Workshop III: Towards Empowering the Human for Privacy Online

Kovila P.L. Coopamootoo

Background. While users are encouraged and supported to share information online, we argue that they are currently disempowered with respect to their privacy. This is continuously evidenced by the public uproar following privacy breaches. In addition, in recent research, we found that the affect dimension of the privacy concept is fear-focused, and fear can act as a de-motivator of protective privacy behaviour. In consequence, we ask whether the field would benefit from a new generation of usable privacy methods, tools, and approaches that focuses on empowering the user, and what the requirements of such a paradigm would be.
Aim. This 2-3 hour workshop aims to enable a discussion on a next generation of usable privacy approaches that empower the user, for example by developing and sustaining human ability with privacy technologies.
Method. We intend to combine a brief presentation reviewing the state of the art in usable privacy with group work and self-guided exploration. The participants have the option of bringing case studies of privacy technologies of their own to the workshop or of working from a prepared list for evaluation.
Results. At the end of the workshop, the participants shall be able to discuss ways of enabling and empowering the human in privacy technologies.
Conclusions. We believe that apart from being supported to make informed decisions with respect to their privacy, users ought to be supported in ways that enable protective actions, as well as their (justifiable) belief in their ability to take privacy actions. Whereas the area of usable privacy has seen considerable attention, we seek to make a first step by sensitizing our PhD students and researchers to themes of privacy empowerment, with a focus on inter-disciplinary work with human factors.

Workshop IV: Secure and Usable Mobile Identity Management Solutions: a Methodology for their Design and Assessment

Roberto Carbone, Silvio Ranise and Giada Sciarretta

The widespread use of digital identities in our everyday life, along with the release of our sensitive data in many online transactions, calls for Identity Management (IdM) solutions that are secure, usable, privacy-aware, and compatible with new technologies, such as mobile and cloud. While there exist many secure IdM solutions for web applications, their adaptation to the mobile context is a new and open challenge. Due to the lack of specifications and security guidelines, designing a mobile IdM solution that covers different authentication aspects from scratch is not a simple task; and because its security depends on several trust and communication assumptions, it could in most cases result in a solution with hidden vulnerabilities. To overcome this difficulty, we provide a reference model and a design methodology that different organizations can use to implement mobile Single Sign-On and multi-factor authentication. The main objectives of the workshop are to create awareness of privacy and security issues, together with legal provisions related to authentication in mobile computing, and to perform an experimental evaluation of security versus usability of widespread second-factor authentication solutions for mobile applications.
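As a concrete point of reference for that security-versus-usability discussion, the sketch below shows the mechanism behind one widespread second factor, time-based one-time passwords (TOTP, RFC 6238) as generated by common authenticator apps. It illustrates only the cryptographic mechanism; the workshop's reference model and methodology cover far more (trust and communication assumptions, SSO flows, and usability). The shared secret is an illustrative example value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, step=30):
    """Accept the current or previous time step to tolerate small clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now - d * step), submitted)
               for d in (0, 1))

shared_secret = "JBSWY3DPEHPK3PXP"       # example base32 secret, illustrative only
code = totp(shared_secret)               # the code shown by the authenticator app
assert verify(shared_secret, code)
```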

Workshop V: Interactive Workshop on Data Breaches: Who you gonna Call when there’s Something Wrong in your Processing?

Felix Bieker and Susan Gonscherowski

Breaches of personal data occur in the public as well as the private sector; however, the data subjects affected by these breaches are frequently not informed. Thus, the EU legislator included the obligation to notify and communicate breaches of personal data to the supervisory authorities and data subjects, respectively. This workshop provides insight into the research of the EIDI project and the Forum Privacy project, which have developed a method to assess the risk to the rights and freedoms of natural persons, a crucial notion for personal data breaches, as well as best practice with regard to notification and communication. These issues will be discussed among participants based on different scenarios, and the discussion will provide feedback for the projects in their future work.

Workshop VI: An exploration of attitudes to dynamic consent in research

Arianna Schuler Scott, Michael Goldsmith and Harriet Teare

Consent is defined in the General Data Protection Regulation (GDPR) as "any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her". Consent will be used as the legal basis for processing an individual's data, so the importance of genuine consent cannot be overstated. If the legal basis for processing information proves invalid, an external party loses the ability to use that information. A dynamic framework for consent consists of three things: the ability to revoke consent, effective engagement with individuals whose data is being collected, and the persistence of data over time. This addresses current issues with the largely static model of consent we work from at the moment: once information is given, there is no way to rescind that access, which may amount to a violation of privacy; communication breakdowns erode confidence; and 'one-time' consent offers no points at which to corroborate data access. The workshop will provide a vital opportunity to validate the research methodology being applied to this wider programme of research, while providing an opportunity for mutual learning amongst participants.
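The sketch below is a hypothetical illustration, not taken from the authors' study, of how those three elements might be captured in a data structure: a persistent history of consent events per purpose, in which consent can be given and revoked, and in which the latest statement governs whether processing may continue.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch of a dynamic consent record: revocability, ongoing
# engagement (new events over time), and persistence of the consent history.
@dataclass
class ConsentEvent:
    purpose: str                        # specific purpose, as GDPR requires
    granted: bool                       # True = consent given, False = revoked
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DynamicConsentRecord:
    subject_id: str
    history: List[ConsentEvent] = field(default_factory=list)   # persists over time

    def give(self, purpose: str):
        self.history.append(ConsentEvent(purpose, True))

    def revoke(self, purpose: str):                              # revocability
        self.history.append(ConsentEvent(purpose, False))

    def is_valid(self, purpose: str) -> bool:
        """The latest statement for a purpose decides whether processing may continue."""
        for event in reversed(self.history):
            if event.purpose == purpose:
                return event.granted
        return False                    # no statement means no consent

record = DynamicConsentRecord("participant-42")
record.give("genomic-research")
record.revoke("genomic-research")
assert not record.is_valid("genomic-research")
```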

Tutorial I: Private verification from ancient caves to sigma protocols to SNARKs

Vadym Fedyukovych

This tutorial starts with an introductory part on zero-knowledge protocols, where we will concentrate on sigma protocols, using the "Schnorr protocol" as the "common ground". It will discuss applications to anonymous credential systems such as U-Prove and Idemix. Moreover, we will discuss extensions to polynomials of higher degree in the challenge and, as a particular case, quadratic polynomials with applications to verifying statements about distance. Then, we will cover characteristic polynomials, a case with a set characteristic polynomial, refinements on sets from "The Incredible Machine", and a case with a set of attribute-value pairs as a model for identity-related applications.
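To make the "common ground" concrete, the following is a minimal sketch of the Schnorr identification protocol, the prototypical sigma protocol, over a toy group (p = 23, q = 11, g = 2) that is far too small to be secure and serves only to illustrate the commit-challenge-response structure.

```python
import secrets

# Toy group parameters for illustration only (assumption): q divides p - 1
# and g generates the subgroup of order q in Z_p*. Real systems use large,
# standardised groups.
p, q, g = 23, 11, 2

def keygen():
    """Prover's secret x and public key y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def prove_commit():
    """Step 1: prover commits to a random nonce r and sends a = g^r."""
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)

def prove_respond(r, x, c):
    """Step 3: prover answers the verifier's challenge c with s = r + c*x mod q."""
    return (r + c * x) % q

def verify(y, a, c, s):
    """Verifier checks g^s == a * y^c (mod p)."""
    return pow(g, s, p) == (a * pow(y, c, p)) % p

x, y = keygen()
r, a = prove_commit()
c = secrets.randbelow(q)        # Step 2: verifier's random challenge
s = prove_respond(r, x, c)
assert verify(y, a, c, s)
```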

Finally, we will provide an overview of SNARKs, "quadratic" R1CS circuits as the language to express statements, and the "compression" of multiple verification equations into a single one with polynomial interpolation, divisibility, and the Schwartz-Zippel lemma. We will also provide a short discussion of circuit complexity for Sudoku verification.
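As a small illustration of the "compression" idea, the sketch below interpolates the error values of a handful of R1CS-style constraints into a single polynomial and evaluates it at one random point; by the Schwartz-Zippel lemma, a nonzero polynomial of degree d vanishes at a random point with probability at most d/|F|, so one evaluation catches a violated constraint with overwhelming probability. The field size and constraints are toy values chosen for illustration.

```python
import secrets

# Toy prime field (assumption) and a few "R1CS-style" constraints a_i*b_i = c_i.
P = 2**61 - 1

def lagrange_eval(points, t):
    """Evaluate at t the unique polynomial through (x_i, y_i) over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P   # den^-1 mod P
    return total

# Three satisfied constraints, e.g. multiplication gates of a small circuit.
constraints = [(3, 4, 12), (5, 6, 30), (7, 7, 49)]

# Error polynomial E interpolates e_i = a_i*b_i - c_i at the points 1..n;
# all constraints hold iff E is identically zero.
errors = [(i + 1, (a * b - c) % P) for i, (a, b, c) in enumerate(constraints)]

t = secrets.randbelow(P)                 # verifier's random evaluation point
assert lagrange_eval(errors, t) == 0     # one check "compresses" all equations
```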

Tutorial II: Exploring transparency through a data flow mapping of existing innovations (group exercise)

Rob Heyman

Students can partake in an exercise in which they apply one method to map how an existing innovation processes personal data. We will use this exercise as a starting point to explore the question: how much transparency is enough, and for whom?
The goal of this session is to learn to use an easy mapping method to map data-driven innovations as an addition to the data register proposed by the GDPR. The resulting map will be used as a boundary object: a discussion object that can be used by collaborators from different disciplinary backgrounds. This leads to the final goal of the workshop, a debate on what kind of transparency is needed, by whom, for particular innovations.

Tutorial III: Trust and Distrust: On Sense and Nonsense in Big Data

Stefan Rass, Andreas Schorn, Florian Skopik

Big data is an appealing source and is often perceived to bear all sorts of hidden information. Filtering out the gemstones of information from the rubbish that is equally easy to "deduce" is, however, a nontrivial issue. The tutorial will open with a list of a few do's and don'ts about big data, and then dig deeper into the (semi-)automated evaluation of a company's risk situation. Ideally, this assessment – gained from big data – should be interpretable, justified, up-to-date and comprehensible in order to provide a maximum level of information with minimal additional manual effort. Here, we shall discuss an example model of trust management relying on big data, and present the synERGY project ("security for cyber-physical value networks Exploiting smaRt Grid sYstems") as a case study to show the (unexplored) potential, application and difficulties of using big data in practice. The ultimate goal of projects like synERGY is to establish trust in a system, based on observed behavior and its resilience to anomalies. This calls for a distinction of "normal" from "abnormal" behavior, and trust can intuitively be understood as the expectation of "normal" behavior.
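As a toy illustration of that intuition, and not of the models actually used in synERGY, the sketch below learns a baseline of "normal" behavior from historical observations and reports trust as the fraction of recent observations that stay within a z-score threshold.

```python
import statistics

# Minimal sketch (assumption): "normal" behavior is characterised by the mean
# and standard deviation of a baseline metric (e.g. events per hour); an
# observation is anomalous when its z-score exceeds a threshold, and trust is
# the fraction of recent observations that look normal.
def trust_score(baseline, recent, threshold=3.0):
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0      # avoid division by zero
    normal = [abs(x - mu) / sigma <= threshold for x in recent]
    return sum(normal) / len(normal)                # 1.0 = fully "normal"

baseline = [100, 104, 98, 101, 99, 103, 97, 102]    # learned from historical data
recent = [101, 99, 250, 100]                        # one spike -> one anomaly
print(trust_score(baseline, recent))                # 0.75
```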

Tutorial IV: Functional encryption – decrypting statistical functions over encrypted messages

Miha Stopar, Tilen Marc and Jolanda Modic

Functional Encryption (FE) aims to overcome the all-or-nothing limitations of classical encryption. In an FE system it is possible to finely control the amount of information that is revealed by a ciphertext to a given receiver. The decryptor deciphers only a function over the message plaintext: such functional decryptability makes it feasible to process encrypted data and obtain a partial view of the message plaintext. This extra flexibility over classical encryption is a powerful enabler for many emerging security technologies (e.g. controlled access, searching and computing on encrypted data, program obfuscation). An example of a function that can be computed over the plaintext is a simple but useful tool from statistics: the weighted mean. This tutorial demonstrates how to use functional encryption schemes developed by the FENTEC project. While some recent papers have focused on constructing schemes for general functionalities at the expense of efficiency, FENTEC aims to design and implement less general but efficient functionalities that are still expressive enough for practical scenarios.
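As an illustration of the idea, the sketch below follows the spirit of DDH-based inner-product functional encryption (e.g. Abdalla et al., 2015); schemes of this kind are among those targeted by projects such as FENTEC, but the tiny modulus and brute-force discrete logarithm here make it a demonstration of the interface only, not a secure implementation. A weighted mean is just an inner product of the data with integer weights, divided by the sum of the weights.

```python
import secrets

# Toy parameters (assumption); real schemes use proper prime-order groups.
p = 2**61 - 1
g = 3

def keygen(n):
    msk = [secrets.randbelow(p - 1) for _ in range(n)]       # master secret key
    mpk = [pow(g, s, p) for s in msk]                        # master public key
    return msk, mpk

def encrypt(mpk, x):
    r = secrets.randbelow(p - 1)
    ct0 = pow(g, r, p)
    cts = [pow(h, r, p) * pow(g, xi, p) % p for h, xi in zip(mpk, x)]
    return ct0, cts

def derive_key(msk, y):
    return sum(s * yi for s, yi in zip(msk, y)) % (p - 1)     # sk_y = <s, y>

def decrypt(ct, sk_y, y, bound):
    ct0, cts = ct
    num = 1
    for c, yi in zip(cts, y):
        num = num * pow(c, yi, p) % p
    target = num * pow(ct0, (p - 1) - sk_y, p) % p            # equals g^{<x, y>}
    for k in range(bound + 1):                                # small discrete log
        if pow(g, k, p) == target:
            return k
    raise ValueError("inner product outside bound")

# Weighted mean of ages with integer weights, recovered from the ciphertext:
ages, weights = [30, 40, 50], [1, 2, 1]
msk, mpk = keygen(len(ages))
ct = encrypt(mpk, ages)
sk = derive_key(msk, weights)                 # reveals only <ages, weights>
inner = decrypt(ct, sk, weights, bound=1000)
print(inner / sum(weights))                   # 40.0, the weighted mean
```

The point of the example is that the key holder learns only the inner product (and hence the weighted mean), not the individual encrypted values.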