Invited Speakers and Talks

Invited Talk I: Fairness, Accountability, and Transparency: The Other Side of the Coin

Sonja Buchegger (KTH Royal Institute of Technology, Sweden)

Fairness, accountability, and transparency are good properties to require from those that analyze, learn and infer from collected data, be they commercial or public entities. This does not always make them suitable properties to require from the individuals the data is about. On the contrary, to decrease power asymmetries, fairness, transparency, and accountability may need to be countered with (private) verifiability, deniability, and privacy, respectively. In this talk, we explore the related notions and give some examples from our own research on how to achieve them.

Sonja Buchegger is an associate professor of Computer Science at KTH Royal Institute of Technology in Stockholm, at the School of Electrical Engineering and Computer Science (EECS). She was a senior research scientist at Deutsche Telekom Laboratories in Berlin, a post-doctoral scholar at the University of California at Berkeley, and a pre-doc at the IBM Zurich Research Laboratory. Her Ph.D. is in Communication Systems from EPFL, and her current research focus is on privacy-enhancing technologies and decentralized systems.

Invited Talk II: Risk assessment in personal data processing: from DPIA to a broader perspective

Alessandro Mantelero (Politecnico di Torino, Italy)

Risk assessment models play an increasing role in data protection, as confirmed by the Data Protection Impact Assessment (DPIA) adopted by the EU legislator. Nevertheless, at this early stage, the DPIA largely consists of an internal process, with a very limited role played in the assessment by participatory approaches and transparency. Moreover, the DPIA only partially addresses the main issues and challenges associated with data-intensive systems. The DPIA remains primarily focused on data security and data quality, while today’s AI and Big Data applications raise new issues concerning the collective dimension of data protection and the ethical and social consequences of data use. Against this background, this talk investigates the adoption of a different value-oriented approach, focused on the societal impact of data use. This impact encompasses the potential negative outcomes of data processing on a variety of fundamental rights and principles and also takes into account the ethical and social consequences of data use. Building on the first results of the H2020 Virt-EU project and the author’s ongoing research on Human Rights, Social and Ethical Impact Assessment (HRSEIA), this talk sets out to embed this new perspective in the GDPR framework.

Alessandro Mantelero is Associate Professor of Private Law at the Polytechnic University of Turin. He is Council of Europe Rapporteur on Artificial Intelligence and data protection. In 2016, he was appointed expert consultant by the Council of Europe to draft the Guidelines on personal data in a world of Big Data (2017). He is also a member of IPEN, the Internet Privacy Engineering Network (European Data Protection Supervisor), and has served as an expert on data regulation for the UN–ILO, the EU Agency for Fundamental Rights, the UN-OHCHR, the American Chamber of Commerce in Italy, the Italian Ministry of Justice and the Italian Communications Authority (AGCOM). He is the author of over a hundred articles and book chapters on law & technology.

Invited Talk III: Privacy Technologies for Machine Learning

Morten Dahl (OpenMined.org, France)

In this talk we give a high-level overview of privacy-enhancing technologies that have recently been applied in the setting of machine learning. Without going deeply into details, we describe the underlying principles in order to understand their differences, weaknesses, and strengths, and as an example illustrate how a model can be trained on data that remain encrypted throughout the whole process. Finally, to facilitate further exploration we point to existing tools, successful applications, and key players in the field.

Morten Dahl, PhD in computer science, works in the intersection of privacy, cryptography, and machine learning with a focus on practical tools and concrete applications. On the side he enjoys spending time helping make these technologies accessible and participating in the OpenMined community.

Invited Talk IV: TBD

Silvia Chiappa (DeepMind, United Kingdom)

Silvia Chiappa is a senior research scientist in Machine Learning at DeepMind, where she works on deep models of high-dimensional time-series and machine learning fairness. Silvia received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Statistical Machine Learning from École Polytechnique Fédérale de Lausanne. Before joining DeepMind, she worked in several Machine Learning and Statistics research groups: the Empirical Inference Department at the Max Planck Institute for Intelligent Systems, the Machine Intelligence and Perception Group at Microsoft Research Cambridge, and the Statistical Laboratory, University of Cambridge. Silvia’s research interests centre on Bayesian and causal reasoning, graphical models, approximate inference, time-series models, and machine learning fairness.

Invited Talk V: An exploration of transparency in data driven innovation, what is needed when by whom?

Rob Heyman (Vrije Universiteit Brussel, Belgium)

There is a need for more transparency in data processing from a legal perspective, an economic perspective and, lastly, an organizational perspective. The EU’s General Data Protection Regulation requires that data subjects be informed about processing operations involving data about them and that this information be provided ‘in a concise, transparent, intelligible and easily accessible form, using clear and plain language’. When Data Protection Authorities (DPAs) audit or prepare for a trial, they too struggle with transparency and with the literacies required to understand complex systems such as algorithms, online advertising business models or Internet of Things technology. Secondly, as the use of AI becomes more commonplace, there is a need for transparency in the value chain that delivers data-driven solutions. Lastly, in my own research I have encountered many occasions where transparency was required to organise technological development.

In this talk I wish to open up the need for transparency by asking: for whom, what and when? ‘For whom’ refers to which actors need transparency, ‘what’ to the kind of information they require, and ‘when’ to the different transparency needs that may arise during the development of a project. This presentation will be based on past experiences with Facebook, online advertising, algorithm development and smart cities. The goal is to explore transparency from a bottom-up perspective by considering different cases and linking these to existing methods (value network mapping, data registers, data flow mapping) or new ones.

After the presentation, students can take part in an exercise (Tutorial I: Exploring transparency through a data flow mapping of existing innovations (group exercise)) in which they apply one method to map how an existing innovation processes personal data. We will use this exercise as a starting point to explore how much transparency is enough, and for whom.
The goal of this session is to learn an easy mapping method for data-driven innovations as an addition to the data register proposed by the GDPR. The resulting mapping will be used as a boundary object: a discussion object that can be used by collaborators from different disciplinary backgrounds. This leads us to the final goal of the workshop, a debate on what kind of transparency is needed, by whom, for particular innovations.

Rob Heyman, PhD, is a senior researcher at SMIT-VUB and Lead of the Expert Group PETS at City of Things Antwerp. He currently works on privacy, data protection and data transparency in the following application areas: smart cities, IoT, online and programmatic advertising, social media, big data and AI. At the moment he is working on SPECTRE, a Flemish-funded SBO on DPIAs in smart cities and on methods to expand the relevance of DPIAs for other challenges facing smart cities and their stakeholders.

Invited Talk VI: Information Privacy, Accountability and Ethics

Charles Raab (University of Edinburgh, United Kingdom)

The accountability of data controllers and the ethics of data processing have come to prominence as part of regulatory provisions for protecting personal data. They are represented in the GDPR and in many other legal or other instruments, including self-regulation, but it is not always clear what they mean and how far they can be effective in practice. This lecture takes a close and to some extent new look at accountability and ethics in terms of the processes and principles involved, and asks some questions about these novel provisions.

Charles Raab is Professorial Fellow, having held the Chair of Government from 1999 to 2007 and from 2012 to 2015. He has served as a member of the academic staff since 1964, and has held visiting positions in the Oxford Internet Institute, the Tilburg Institute for Law, Technology, and Society (Tilburg University, The Netherlands), Queen’s University, Kingston, Ontario, and the Victoria University of Wellington (NZ). He was a Fellow at the Hanse-Wissenschaftskolleg (Institute for Advanced Study) in Delmenhorst, Germany. With colleagues at the University of Stirling and the Open University, he is a Director of CRISP (the Centre for Research into Information, Surveillance and Privacy), and is a founder of the Scottish Privacy Forum. He is a Fellow of the Academy of Social Sciences (FAcSS) and a Fellow of the Royal Society of Arts (FRSA). His main general research interests are in public policy, governance and regulation, and more specifically in information policy (privacy protection and public access to information; surveillance and security; identity and anonymity; information technology and systems in democratic politics, government and commerce; and ethical and human rights implications of information processes).

Invited Talk VII: Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?

Ben Wagner (Vienna University of Economics and Business, Austria)

A strange confusion among technology policy makers can be witnessed at present. While almost all are able to agree on the common chorus of voices chanting ‘something must be done,’ it is very difficult to identify what exactly must be done and how. In this confused environment it is perhaps unsurprising that the idea of ‘ethics’ is presented as a concrete policy option. Striving for ethics and ethical decision-making, it is argued, will make technologies better. While this may be true in many cases, much of the debate about ethics seems increasingly focussed on private companies avoiding regulation. Unable or unwilling to properly provide regulatory solutions, ethics is seen as the ‘easy’ or ‘soft’ option which can help structure and give meaning to existing self-regulatory initiatives. In this world, ‘ethics’ is the new ‘industry self-regulation.’

Ben Wagner, PhD, is an Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business. In 2014 he founded the Centre for the Internet and Human Rights (CIHR) at European University Viadrina and served as CIHR Director from 2014 to 2016. His research focuses on communications technology at the intersection of rights, ethics and governance. Ben holds a PhD in Political and Social Sciences from the European University Institute in Florence. He previously worked at the German Institute for International and Security Affairs, the University of Pennsylvania, Human Rights Watch and the European Council on Foreign Relations. His research has been published in Telecommunications Policy, JITP and the International Journal of Communications.

Invited Talk VIII: Security and Privacy Foundations of Blockchain Technologies

Matteo Maffei (TU Vienna, Austria)

Blockchain technologies promise to revolutionize distributed systems, enabling mutually distrustful parties to reach a consensus on distributed data and decentralized operations. At the core of this technology lie distributed consensus algorithms, which embrace randomness and rational arguments to bypass long-standing impossibility results. The applications of blockchain technologies are multifold and go well beyond cryptocurrencies, embracing smart contracts, auctions, accountable data storage, and more.

In this lecture, we will give a gentle introduction to blockchain technologies, focusing in particular on the associated security and privacy challenges. Along with basic concepts, such as consensus and the distributed ledger, we will also give an overview of some advanced topics, such as smart contracts and payment channels.

Matteo Maffei is professor and head of the Security and Privacy group at the Vienna University of Technology. Previously, he was research group leader and professor at Saarland University and CISPA. He obtained his PhD in 2006 at the Ca’ Foscari University of Venice. He received in 2009 an Emmy Noether fellowship from the German Research Foundation with the project “Formal Design and Verification of Modern Cryptographic Applications” and in 2018 an ERC Consolidator Grant from the European Research Council with the project “Foundations and Tools for Client-Side Web Security”.
His current research interests include program analysis, cryptography, and distributed computation. In particular, he designs formal verification techniques for security properties in cryptographic protocols, mobile code, web applications, and smart contracts, and he develops privacy-enhancing technologies for cloud storage, analytics, and blockchain technologies.

Invited Talk IX: Surveillance by intelligence services: fundamental rights safeguards and remedies in the EU

Mario Oetheimer (European Union Agency for Fundamental Rights, Austria)

The session will provide the key findings of the recently published (October 2017) second surveillance report of the European Union Agency for Fundamental Rights (FRA), namely Surveillance by intelligence services: fundamental rights safeguards and remedies in the EU. Emphasis will be placed on the institutional guarantees that oversight mechanisms need to incorporate in order to be independent, efficient and transparent. Challenges for oversight bodies, which arise from a field traditionally shrouded in secrecy, such as limited powers, access to intelligence files, resources and expertise, will be thoroughly discussed.

Mario Oetheimer, PhD, is Head of Sector Information Society, Privacy and Data Protection at the European Union Agency for Fundamental Rights (FRA). Mario is managing the Agency’s research project on National intelligence authorities and surveillance in the EU. His areas of expertise with respect to the FRA’s work include data protection, freedom of expression and international human rights, in particular the European Court of Human Rights’ case law. Mario coordinates the cooperation between the FRA and the Council of Europe. He previously worked for the Council of Europe for thirteen years – first with the Council of Europe media division, human rights directorate and then with the European Court of Human Rights research division. Mario studied law and is the author of the book Harmonisation of Freedom of Expression in Europe (2001), written in French. He has authored several articles on freedom of expression and the European Court of Human Rights.

Invited Talk X: Artificial Intelligence, Big Data and Human Rights – discrimination and other potential challenges

David Reichel (European Union Agency for Fundamental Rights, Austria)

The session will provide an overview of current discussions on artificial intelligence, big data and fundamental rights. After a general input on current discussions and developments in the area, the problem of identifying discrimination in data-supported decisions will be presented and discussed in an interactive session.

David Reichel, PhD, is a researcher in the Freedoms and Justice Department at the European Union Agency for Fundamental Rights (FRA). He is responsible for managing FRA’s work concerning artificial intelligence, big data and fundamental rights. His areas of expertise include statistical data analysis, data quality and statistical data visualisation. He has extensive experience in working with data and statistics in an international context.
Prior to joining FRA in 2014, he worked for the research department of the International Centre for Migration Policy Development (ICMPD) and as a lecturer at the Institute for Sociology and the Institute for International Development at the University of Vienna. He has published numerous articles, working papers and book chapters on issues related to migration and integration statistics, citizenship and human rights.

Invited Talk XI: Privacy Challenges of Artificial Intelligence

Maja Brkan (Maastricht University, The Netherlands)

TBA

Invited Talk XII: Legal consciousness: conceptualising the privacy paradox

Katharine Sarikakis (University of Vienna, Austria)

Scholarship explores citizens’ attitudes to privacy from a wide range of perspectives, but struggles to move beyond the attitudes recorded towards an understanding of privacy problems in media usage. In particular, a great deal has been written about the seeming paradox between users’ knowledge and users’ actions in allowing the violation of their privacy. How, however, can this paradox, or indeed users’ needs and expectations, be better understood beyond the gap between their actions and their knowledge?

In order to understand people’s motivations, it is important to refrain from treating them as ‘simple users’ and instead to introduce more systematically elements of agency, structure and consciousness as pillars of a complex, fluid, yet surprisingly stable core of understandings of what privacy means to citizens. The concept of legal consciousness can support efforts towards building theory and research designs in which citizens’ responses to law and policy are understood more comprehensively, as part of a process of supporting, enabling or resisting policy and law. The lecture will discuss the ways in which the concept of legal consciousness can provide anchoring points from which to examine variations in responses to law and policy, including both formal law and privatised policy, as in the case of terms and conditions, and to take a closer look at processes of negotiation with such legal frameworks. Not simply ‘understanding’ the law, but rather living, experiencing and contesting it, circumventing or accepting it: these are the core dimensions that unfold through a systematic exploration of expressions of legal consciousness.

Katharine Sarikakis, PhD, is Professor of Media Governance at the Department of Communication, University of Vienna. Her research interests are in the field of European and international communication, in particular across two intersecting directions: the role of institutions in supra- and international communication policy processes, and the implications of policy for the empowerment of citizens and the exercise of enlarged citizenship. Major areas she is investigating at the moment are copyright, privacy and public media. Currently she is working on a research monograph on Communication and Control. She is the vice chair of the Communication Policy and Law Division of the International Communication Association; she founded and led the Communication Law and Policy Section of the European Communication Research and Education Association and was twice elected Head of the section. Katharine is also past Vice President of the International Association for Media and Communication Researchers and now serves as an elected member of the International Council of IAMCR. She is also the managing editor of the International Journal of Media and Cultural Politics. Katharine regularly consults with international organisations on matters of media and communication policy, regulation and rights.