Invited Speakers and Talks

Invited Talk I: Fairness, Accountability, and Transparency: The Other Side of the Coin

Sonja Buchegger (KTH Royal Institute of Technology, Sweden)

Fairness, accountability, and transparency are good properties to require from those that analyze, learn and infer from collected data, be they commercial or public entities. This does not mean they are suitable properties to require from the individuals the data is about. On the contrary, to decrease power asymmetries, fairness, transparency, and accountability may need to be countered with (private) verifiability, deniability, and privacy, respectively. In this talk, we explore the related notions and give some examples from our own research on how to achieve them.

Sonja Buchegger is an associate professor of Computer Science at KTH Royal Institute of Technology in Stockholm, at the School of Electrical Engineering and Computer Science (EECS). She was a senior research scientist at Deutsche Telekom Laboratories in Berlin, a post-doctoral scholar at the University of California at Berkeley, and a pre-doctoral researcher at the IBM Zurich Research Laboratory. She holds a Ph.D. in Communication Systems from EPFL, and her current research focus is on privacy-enhancing technologies and decentralized systems.

Invited Talk II: Risk assessment in personal data processing: from DPIA to a broader perspective

Alessandro Mantelero (Politecnico di Torino, Italy)

Risk assessment models play an increasing role in data protection, as confirmed by the Data Protection Impact Assessment (DPIA) adopted by the EU legislator. Nevertheless, at this early stage, the DPIA largely consists of an internal process, with a very limited role played in the assessment by participatory approaches and transparency. Moreover, the DPIA only partially addresses the main issues and challenges associated with data-intensive systems. It remains primarily focused on data security and data quality, while today’s AI and Big Data applications raise new issues concerning the collective dimension of data protection and the ethical and social consequences of data use. Against this background, this talk investigates the adoption of a different value-oriented approach, focused on the societal impact of data use. This impact encompasses the potential negative outcomes of data processing on a variety of fundamental rights and principles, and also takes into account the ethical and social consequences of data use. Building on the first results of the H2020 Virt-EU project and the author’s ongoing research on Human Rights, Social and Ethical Impact Assessment (HRSEIA), this talk sets out to embed this new perspective in the GDPR framework.

Alessandro Mantelero is Associate Professor of Private Law at the Polytechnic University of Turin. He is Council of Europe Rapporteur on Artificial Intelligence and data protection. In 2016, he was appointed expert consultant by the Council of Europe to draft the Guidelines on personal data in a world of Big Data (2017). He is also a member of IPEN (Internet Privacy Engineering Network, European Data Protection Supervisor) and has served as an expert on data regulation for the UN–ILO, the EU Agency for Fundamental Rights, the UN-OHCHR, the American Chamber of Commerce in Italy, the Italian Ministry of Justice and the Italian Communications Authority (AGCOM). He is the author of over a hundred articles and book chapters on law & technology.

Invited Talk III: Privacy Technologies for Machine Learning

Morten Dahl (OpenMined.org, France)

In this talk we give a high-level overview of privacy-enhancing technologies that have recently been applied in the setting of machine learning. Without going deeply into details, we describe the underlying principles in order to understand their differences, weaknesses, and strengths, and, as an example, illustrate how a model can be trained on data that remain encrypted throughout the whole process. Finally, to facilitate further exploration, we point to existing tools, successful applications, and key players in the field.

Morten Dahl, PhD in computer science, works at the intersection of privacy, cryptography, and machine learning, with a focus on practical tools and concrete applications. On the side, he enjoys helping to make these technologies accessible and participating in the OpenMined community.

Invited Talk IV: TBD

Silvia Chiappa (DeepMind, United Kingdom)

Silvia Chiappa is a senior research scientist in Machine Learning at DeepMind, where she works on deep models of high-dimensional time series and machine learning fairness. Silvia received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Statistical Machine Learning from the École Polytechnique Fédérale de Lausanne. Before joining DeepMind, she worked in several Machine Learning and Statistics research groups: the Empirical Inference Department at the Max Planck Institute for Intelligent Systems, the Machine Intelligence and Perception Group at Microsoft Research Cambridge, and the Statistical Laboratory, University of Cambridge. Silvia’s research interests centre on Bayesian and causal reasoning, graphical models, approximate inference, time-series models, and machine learning fairness.

Invited Talk V: An exploration of transparency in data-driven innovation: what is needed, when, and by whom?

Rob Heyman (Vrije Universiteit Brussel, Belgium)

There is a need for more transparency in data processing from a legal perspective, an economic perspective and, lastly, an organizational perspective. The EU’s General Data Protection Regulation requires that data subjects be informed about processing operations involving data about them and that this information be provided ‘in a concise, transparent, intelligible and easily accessible form, using clear and plain language’. When Data Protection Authorities (DPAs) audit or prepare for a trial, they also struggle with transparency and with the literacies required to understand complex systems such as algorithms, online advertising business models or Internet of Things technology. Secondly, as the use of AI becomes more commonplace, there is a need for transparency in the value chain that delivers data-driven solutions. Lastly, in my own research I have encountered many occasions where transparency was required to organise technological development.

In this talk I wish to open up the need for transparency by asking for whom, what and when. ‘For whom’ refers to which actors need transparency. ‘What’ refers to the kind of information they require. And lastly, ‘when’ implies that different transparency needs may arise during the development of a project. This presentation is based on past experiences with Facebook, online advertising, algorithm development and smart cities. The goal is to explore transparency from a bottom-up perspective by considering different cases and linking these to existing methods (value network mapping, data registers, data flow mapping) or new ones.

After the presentation, students can take part in an exercise (Tutorial II: Exploring transparency through a data flow mapping of existing innovations (group exercise)) in which they apply one method to map how an existing innovation processes personal data. We will use this exercise as a starting point to explore how much transparency is enough, and for whom.
The goal of this session is to learn to use a simple mapping method for data-driven innovations as an addition to the data register proposed by the GDPR. The resulting mapping will be used as a boundary object: a discussion object that can be used by collaborators from different disciplinary backgrounds. This leads us to the final goal of the workshop, a debate on what kind of transparency is needed, by whom, for particular innovations.

Rob Heyman, PhD, is a senior researcher at SMIT-VUB and Lead of the Expert Group PETS at City of Things Antwerp. He currently works on privacy, data protection and data transparency in the following application areas: smart cities, IoT, online and programmatic advertising, social media, big data and AI. At the moment he is working on SPECTRE, a Flemish-funded SBO project on DPIAs in smart cities and on methods to expand the relevance of DPIAs for other challenges facing smart cities and their stakeholders.

Invited Talk VI: Information Privacy, Accountability and Ethics

Charles Raab (University of Edinburgh, United Kingdom)

The accountability of data controllers and the ethics of data processing have come to prominence as part of regulatory provisions for protecting personal data. They are represented in the GDPR and in many other legal or other instruments, including self-regulation, but it is not always clear what they mean and how far they can be effective in practice. This lecture takes a close and to some extent new look at accountability and ethics in terms of the processes and principles involved, and asks some questions about these novel provisions.

Charles Raab is a Professorial Fellow at the University of Edinburgh, having held the Chair of Government from 1999 to 2007 and from 2012 to 2015. He has served as a member of the academic staff since 1964, and has held visiting positions at the Oxford Internet Institute, the Tilburg Institute for Law, Technology, and Society (Tilburg University, The Netherlands), Queen’s University, Kingston, Ontario, and the Victoria University of Wellington (NZ). He was a Fellow at the Hanse-Wissenschaftskolleg (Institute for Advanced Study) in Delmenhorst, Germany. With colleagues at the University of Stirling and the Open University, he is a Director of CRISP (Centre for Research into Information, Surveillance and Privacy), and he is a founder of the Scottish Privacy Forum. He is a Fellow of the Academy of Social Sciences (FAcSS) and a Fellow of the Royal Society of Arts (FRSA). His main general research interests are in public policy, governance and regulation, and more specifically in information policy (privacy protection and public access to information; surveillance and security; identity and anonymity; information technology and systems in democratic politics, government and commerce; and the ethical and human rights implications of information processes).

Invited Talk VII: Quantitative Models of Behavior for Privacy and Security

Joachim Meyer (Tel Aviv University, Israel)

Research on behavioral aspects of privacy and security poses major challenges. In it, we attempt to describe and predict human behavior in settings where technologies, circumstances, threats and opportunities all rapidly change. It is impossible to conduct empirical studies of the effects of all relevant variables, so simply describing observations may not be very useful. Instead, I argue that research should strive to develop quantitative models that can be validated with specific data and can provide predictions for changing conditions. I discuss a number of our studies and models that are relevant for privacy and security and show some implications this research has for system design and policy decisions.

Joachim Meyer is Professor, and currently also chair, of the Department of Industrial Engineering at Tel Aviv University. He holds an M.A. in Psychology and a Ph.D. (1994) in Industrial Engineering from Ben-Gurion University of the Negev, Beer Sheva, Israel. He was a post-doctoral fellow and researcher at the Technion – Israel Institute of Technology, was on the faculty of Ben-Gurion University of the Negev, and was a visiting scholar at Harvard Business School, a research scientist at the MIT Center for Transportation Studies, and a visiting professor at the MIT Media Lab. His research deals with cognitive engineering, focusing on the development of quantitative models of decision processes involving automation and computer systems, considering properties of the task, the system and the human operator. The models are based on empirical research in laboratory settings, as well as on field studies and research on applications in cybersecurity, IT design, manufacturing, process control, transportation, business administration, communication, law and medicine. The models are applied in the design of systems and in their operation and evaluation.

Invited Talk VIII: Security and Privacy Foundations of Blockchain Technologies

Matteo Maffei (TU Vienna, Austria)

Blockchain technologies promise to revolutionize distributed systems, enabling mutually distrustful parties to reach a consensus on distributed data and decentralized operations. At the core of this technology lie distributed consensus algorithms, which embrace randomness and rational arguments to bypass long-standing impossibility results. The applications of blockchain technologies are manifold and go well beyond cryptocurrencies, encompassing smart contracts, auctions, accountable data storage, and more.

In this lecture, we will give a gentle introduction to blockchain technologies, focusing in particular on the associated security and privacy challenges. Along with basic concepts, such as consensus and the distributed ledger, we will also cover some advanced topics, such as smart contracts and payment channels.

Matteo Maffei is a professor and head of the Security and Privacy group at the Vienna University of Technology. Previously, he was a research group leader and professor at Saarland University and CISPA. He obtained his PhD in 2006 at the Ca’ Foscari University of Venice. In 2009 he received an Emmy Noether fellowship from the German Research Foundation for the project “Formal Design and Verification of Modern Cryptographic Applications”, and in 2018 an ERC Consolidator Grant from the European Research Council for the project “Foundations and Tools for Client-Side Web Security”.
His current research interests include program analysis, cryptography, and distributed computation. In particular, he designs formal verification techniques for security properties in cryptographic protocols, mobile code, web applications, and smart contracts, and he develops privacy-enhancing technologies for cloud storage, analytics, and blockchain technologies.

Invited Talk IX: Surveillance by intelligence services: fundamental rights safeguards and remedies in the EU

Mario Oetheimer (European Union Agency for Fundamental Rights, Austria)

The session will present the key findings of the second surveillance report of the European Union Agency for Fundamental Rights (FRA), published in October 2017, namely Surveillance by intelligence services: fundamental rights safeguards and remedies in the EU. Emphasis will be placed on the institutional guarantees that oversight mechanisms need to incorporate in order to be independent, efficient and transparent. Challenges for oversight bodies arising from a field traditionally shrouded in secrecy, such as limited powers, access to intelligence files, resources and expertise, will be discussed in depth.

Mario Oetheimer, PhD, is Head of Sector Information Society, Privacy and Data Protection at the European Union Agency for Fundamental Rights (FRA), where he manages the Agency’s research project on national intelligence authorities and surveillance in the EU. His areas of expertise with respect to the FRA’s work include data protection, freedom of expression and international human rights, in particular the case law of the European Court of Human Rights. He coordinates the cooperation between the FRA and the Council of Europe, where he previously worked for thirteen years: first with the Council of Europe’s media division (Human Rights Directorate) and then with the research division of the European Court of Human Rights. Mario studied law and is the author of the book Harmonisation of Freedom of Expression in Europe (2001), published in French, as well as several articles on freedom of expression and the European Court of Human Rights.

Invited Talk X: Artificial Intelligence, Big Data and Human Rights – discrimination and other potential challenges

David Reichel (European Union Agency for Fundamental Rights, Austria)

The session will provide an overview of current discussions on artificial intelligence, big data and fundamental rights. After a general introduction to current discussions and developments in the area, the problem of identifying discrimination in data-supported decisions will be presented and discussed in an interactive session.

David Reichel, PhD, is a researcher in the Freedoms and Justice Department at the European Union Agency for Fundamental Rights (FRA). He is responsible for managing FRA’s work concerning artificial intelligence, big data and fundamental rights. His areas of expertise include statistical data analysis, data quality and statistical data visualisation. He has extensive experience in working with data and statistics in an international context.
Prior to joining FRA in 2014, he worked for the research department of the International Centre for Migration Policy Development (ICMPD) and as a lecturer at the Institute for Sociology and the Institute for International Development at the University of Vienna. He has published numerous articles, working papers and book chapters on issues related to migration and integration statistics, citizenship and human rights.

Invited Talk XI: Privacy Challenges of Artificial Intelligence

Maja Brkan (Maastricht University, The Netherlands)

The purpose of this presentation is to analyse the rules of the General Data Protection Regulation and the Directive on Data Protection in Criminal Matters on automated decision-making, and to explore how to ensure transparency of such decisions, in particular those taken with the help of algorithms. Both legal acts impose limitations on automated individual decision-making, including profiling. While these limitations might come across as a fortress strongly protecting individuals, potentially even hampering the future development of AI in decision-making, the relevant provisions nevertheless contain numerous exceptions allowing for such decisions. While the Directive on Data Protection in Criminal Matters worryingly does not seem to give the data subject the possibility to familiarise herself with the reasons for such a decision, the GDPR obliges the controller to provide the data subject with ‘meaningful information about the logic involved’ (Articles 13(2)(f), 14(2)(g) and 15(1)(h)), thus raising the much-debated question of whether the data subject should be granted a ‘right to explanation’ of the automated decision. This talk seeks to go beyond the semantic question of whether this right should be designated as the ‘right to explanation’ and argues that the GDPR obliges the controller to inform the data subject of the reasons why an automated decision was taken. While such a right would in principle fit well within the broader framework of the GDPR’s quest for a high level of transparency, it also raises several queries: What exactly needs to be revealed to the data subject? How can an algorithm-based decision be explained? The presentation aims to explore these questions and to identify challenges for further research regarding the explainability of automated decisions.

Maja Brkan has been Assistant Professor of European Union Law at Maastricht University since 2013, where she is responsible for coordinating the core course on EU institutions and for supervising students researching the data privacy aspects of Big Data and Artificial Intelligence. She is Associate Director of the Maastricht Centre for European Law and a member of the European Centre on Privacy and Cybersecurity (ECPC), holds the position of Associate Editor of the European Data Protection Law Review, and regularly presents her work at international conferences in Europe and in the US. She has published widely on privacy and data protection in peer-reviewed journals. She is the author of an award-winning PhD thesis in EU law and holds a prestigious Diploma of the Academy of European Law from the European University Institute. Before moving to Maastricht, she worked as a legal advisor (référendaire) at the Court of Justice of the EU (2007-2013). Under her supervision, students achieved first and second place at the widely recognised European Law Moot Court Competition.

Invited Talk XII: Legal consciousness: conceptualising the privacy paradox

Katharine Sarikakis (University of Vienna, Austria)

Scholarship explores citizens’ attitudes to privacy from a wide range of perspectives but struggles to move beyond the recorded attitudes towards an understanding of privacy problems in media usage. In particular, a great deal has been written about the seeming paradox between users’ knowledge and users’ actions in allowing the violation of their privacy. In what ways, however, can this paradox, or indeed users’ needs and expectations, be better understood beyond the gap between their actions and their knowledge?

In order to understand people’s motivations, it is important to refrain from treating them as ‘simple’ ‘users’ and instead to introduce, more systematically, elements of agency, structure and consciousness as pillars of a complex, fluid, yet surprisingly stable core of understandings of what privacy means to citizens. The concept of legal consciousness can support efforts towards building theory and research designs in which citizens’ responses to law and policy are understood more comprehensively, as part of a process of supporting, enabling or resisting policy and law. The lecture will discuss the ways in which the concept of legal consciousness can provide anchoring points from which to examine variations in responses to law and policy, covering both formal law and privatised policy such as terms and conditions, and to take a closer look at processes of negotiation with such legal frameworks. The core dimensions that unfold through a systematic exploration of expressions of legal consciousness are not simply a question of ‘understanding’ the law but rather of living, experiencing and contesting it, circumventing or accepting it.

Katharine Sarikakis, PhD, is Professor of Media Governance at the Department of Communication, University of Vienna. Her research interests are in the field of European and international communication, in particular across two intersecting directions: the role of institutions in supra- and international communication policy processes, and the implications of policy for the empowerment of citizens and the exercise of enlarged citizenship. Major areas she is investigating at the moment are copyright, privacy and public media. Currently she is working on a research monograph on Communication and Control. She is the vice chair of the Communication Law and Policy Division of the International Communication Association; she founded and led the Communication Law and Policy Section of the European Communication Research and Education Association and was twice elected Head of the section. Katharine is also past Vice President of the International Association for Media and Communication Research (IAMCR) and now serves as an elected member of its International Council. She is also the managing editor of the International Journal of Media and Cultural Politics. Katharine regularly consults with international organisations on matters of media and communication policy, regulation and rights.