AI Ethics #4: Pandemic Privacy, fixing unethical design, CORD-19 dataset, infodemic, AI governance and more ...

Our fourth weekly edition covering research and news in the world of AI Ethics

Welcome to the fourth edition of our weekly newsletter that will help you navigate the fast-changing world of AI Ethics! Every week we dive into research papers that caught our eye, sharing a summary of each with you and presenting our thoughts on how it links with other work in the research landscape. We also share brief thoughts on interesting articles and developments in the field. More about us at: https://montrealethics.ai/about/


If someone has forwarded this to you and you want to get one delivered to you every week, you can subscribe to receive this newsletter by clicking below


With not much to cheer about as we all #stayhome (those of us who can) to curb the spread, for this edition I took a stab at being a bit creative and am sharing this original artwork and accompanying verse with you. If you come across AI-themed creative content that can cheer everyone up, hit reply to this email and let me know!

The Mechanistic Grokking

Some saw red where others saw black
Veneered by ripples revealing through slits
Why how humans perceive must come back
Us machines, well without hominids we're lost in our bits

With machines mediating our understanding of the world around us, this art and accompanying verse highlight why the unique lens of human subjectivity will be even more crucial as we move into a highly automated world.

By Abhishek Gupta


Our AI ethics community’s contributions to the Office of the Privacy Commissioner of Canada’s consultation

Our team spent many hours putting together technical and legal recommendations for the OPCC consultation. The submission also includes the feedback we gathered from our community, drawing on a great diversity of backgrounds.

You can read the community insights here. For the entire report along with a summary of the recommendations, read here.


Research:

Let's look at some highlights of research papers that caught our attention at MAIEI:

Apps Gone Rogue: Maintaining Personal Privacy in an Epidemic by Raskar et al.

With COVID-19 cases rising worldwide, extensive measures are being taken to minimize the spread and to mitigate harm to the economy and to our way of life. Yet some of these measures creep toward invading people's privacy and create real potential harms in how the collected data is used and managed. The paper presents some of the contact tracing solutions being used around the world and their associated risks. The authors also share information on an open-source solution called Private Kit: Safe Paths, a privacy-preserving way of doing contact tracing. While there are very clear benefits in containing the spread of the epidemic, the privacy and other social harms arising from the use of such technology need to be weighed and judged in line with the culture and values of the society in which it is deployed. It is also important to ensure the inclusivity of the solutions so developed, because those with minimal access to technology are often the most vulnerable to the negative impacts of the epidemic. Ultimately, it is critical to weigh the benefits of deploying contact tracing technology against the intended and unintended harms that can arise with its use.
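The privacy-preserving idea behind tools like Safe Paths can be sketched in a few lines: instead of sharing raw location trails, each device coarsens and hashes its location-time points so that only matches, not movements, are revealed. This is an illustrative sketch of the general approach, not Safe Paths' actual implementation; the grid size, time window, and function names here are our own assumptions.

```python
import hashlib
from datetime import datetime, timedelta, timezone

def location_token(lat, lon, t, cell=0.001, window_min=5):
    """Coarsen a GPS point to a grid cell and time window, then hash it.

    Only the hash ever leaves the device, so raw trails stay private;
    two people who were in the same cell during the same window
    produce the same token and can be matched without revealing paths.
    (Grid size and window length are illustrative assumptions.)
    """
    grid_lat = round(lat / cell)
    grid_lon = round(lon / cell)
    bucket = int(t.timestamp() // (window_min * 60))  # 5-minute time bucket
    return hashlib.sha256(f"{grid_lat}:{grid_lon}:{bucket}".encode()).hexdigest()

def exposures(my_tokens, patient_tokens):
    """Intersect locally stored tokens with the published tokens of a
    confirmed case; a non-empty result flags a possible exposure."""
    return set(my_tokens) & set(patient_tokens)
```

The matching step only ever compares hashes, which is what lets a health authority publish a patient's tokens without publishing their movements.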

To delve deeper, read our full summary here.

AI Governance: A Holistic Approach to Implement Ethics in AI by the World Economic Forum 

This report is a great getting-started guide for people looking to implement governance and regulatory mechanisms for AI systems. While many of its recommendations are high-level, it lays out the landscape very clearly and posits mini-frameworks for reasoning about the various tensions one will encounter when trying to implement governance for AI systems. One of the overarching themes in the report is that the AI system must be viewed as embedded in a larger socio-technical ecosystem, and hence effective interventions will involve both technical and legal/policy-making approaches. When thinking about policy making, it is important to position it within the context of the local culture so that it is in line with what that community values. Because of the diversity of ethical principle sets, the report advocates for an approach that utilizes cross-sectoral expertise and leans on those who have context and knowledge of the domain. Finally, the report stresses the importance of balancing the tradeoff between early regulation, which can catch and mitigate harms, and overzealous regulation made without understanding the technology, which might stifle innovation without creating meaningful safeguards.

To delve deeper, read our full summary here.

Challenges in Supporting Exploratory Search through Voice Assistants by Xiao Ma and Ariel Lu 

A high-level position paper from Google, this work succinctly brings forth some of the challenges faced in designing useful and efficacious voice-activated AI systems. The authors do a great job of providing short examples, along with references to relevant literature, that position the current challenges in a socio-technical context. While design challenges abound in any technology, users have very high expectations of voice systems because of their increasing ubiquity and anthropomorphization. These challenges become especially important in exploratory search, which consists of open-ended questions where the user seeks a set of meaningful responses rather than the single best answer expected from fact-based searches. Users come in with pre-set notions from how they interact with each other using natural language and seek a similar experience from the system. Especially in cases where the voice interface is the only possible mode of interaction, such as when driving, it becomes essential that people get the results they are seeking expeditiously, compared to pulling out a device and using the traditional visual and touch modalities. The development of voice interfaces can also usher in novel paradigms of mixed-modal interaction for optimizing the user experience, such as presenting pieces of information through part-visual, part-voice outputs. The systems also need to be sensitive to demographic differences in dialects, accents, modes of use, etc. More research is still needed on how exploratory search is done in voice compared to text, and the research and challenges highlighted in this paper serve as good starting points.

To delve deeper, read our full summary here.


Articles:

Let’s look at highlights of some recent articles that we found interesting at MAIEI:

Here’s how social media can combat the coronavirus ‘infodemic’

Disinformation is harmful even when we aren't going through large-scale changes, but this year the US has elections, the once-in-a-decade census, and the COVID-19 pandemic. Malicious agents are having a field day dispersing false information, overwhelming people with a mixture of true and untrue pieces of content. The article gives the example of a rumored lockdown and people reflecting on their experience with the Boston Marathon bombings, including stockpiling essentials out of panic. This was later found to have originated from conspiracy theorists, but in an environment where contact with the outside world has become limited and local touch points such as speaking with your neighbor have dwindled, we're struggling to combat this infodemic. Social media is playing a critical role in getting information to people, but if that information is untrue, we end up risking lives, especially when it involves falsehoods about how to protect yourself from contracting a disease. But wherever there is a challenge, there lies a corresponding opportunity: social media companies have a unique window into the issues that concern a local population, and, used effectively, this can be a source for providing crisis response to those most in need with resources that are specific and meaningful.

You Can’t Fix Unethical Design by Yourself

Individual actions are powerful: they create bottom-up change and empower advocates with the ability to catalyze larger change. But when we look at products and services with millions of users, where inherently unethical designs have become part of everyday practice and are met with a resigned shrug of the shoulders, we need a more systematized approach that is standardized and widely practiced. Ethics in AI is having its moment in the spotlight, with talks and conferences focusing on it as a core theme, yet the field falls short of putting its espoused principles into practice. More often than not, it is individuals, rank-and-file employees, who go out of their way, often on personal time, to advocate for ethics, safety, and inclusivity in the design of systems, sometimes even at the risk of their employment. While such efforts are laudable, they lack the widespread impact and awareness necessary to move the needle. We need leaders at the top who can effect sweeping changes, adopting these guidelines not just in letter but in spirit and transmitting them as actionable policies to their workforce. We need to arrive at a point where people advocating for this change don't have to do so from a place of moral and ethical obligation, which customers can dispute, but from a place of policy decisions that force disengagement for non-adherence. We need to move from talk to action not just at a micro but at a macro scale.

How Much Privacy Are You Entitled to During a Pandemic?

Many countries are looking at utilizing existing surveillance and counter-terrorism tools to help track the spread of the coronavirus, and are urging tech companies and carriers to assist. The US is looking at how it can tap into location data from smartphones, following on the heels of Israel and South Korea, which have deployed similar measures. While extraordinary measures might be justified in a time of crisis, we mustn't lose sight of the behaviors we are normalizing as part of the response to the pandemic. Russia and China are also using facial recognition technology to track people's movements, while Iran is endorsing an app that might be used as a diagnosis tool. Expansion of surveillance capabilities and government powers is hard to rein back in once a crisis is over. In some cases it has been: the signing of the Freedom Act in the USA reduced the government data collection abilities that had been expanded under the Patriot Act. But that's not always the case, and even so, the powers today exceed those that existed prior to the enactment of the Patriot Act. What's most important is that the decisions policy makers take today keep in mind time limits on such expansions of power and don't trigger a future privacy crisis.

Translating a Surveillance Tool into a Virus Tracker for Democracies

While no replacement for social distancing, a virus-tracking tool that puts contact tracing into practice is largely unpalatable to Western democracies because of expectations of privacy and freedom of movement. A British effort is underway to create an app that meets democratic ideals of privacy and freedom while still being useful in collecting geolocation data to aid virus containment efforts. It is based on the notion of participatory sharing, which we've covered in this week's research summary: it essentially relies on people's sense of civic duty to contribute their data if they test positive. While discussions between the administration and technology companies in the USA have focused on large-scale aggregate data collection, in a place like the UK, with a centralized healthcare system, there might be higher levels of trust in sharing data with the government. The app doesn't require uptake by everyone to be effective, but a majority of people would need to use it to bring down the rate of spread. The efficacy of the solution will rely on being able to collect granular location data from multiple sources, including Bluetooth, Wi-Fi, cell tower data, and app check-ins.

Don’t like dystopian surveillance? Flatten the coronavirus curve

High-level CDC officials are advising that if people in the USA don't follow the best practices of social distancing, sheltering in place, and washing hands regularly, the outbreak will not peak and the infection will continue to spread, especially hitting the most vulnerable, including the elderly and those with pre-existing conditions. On top of the public health impacts, there are also concerns about growing tech-enabled surveillance, which is being seriously explored as an additional measure to curb the spread. While privacy and freedom rights are enshrined in the constitution, during times of crisis government and justice powers are expanded to allow extraordinary measures to restore public safety. This is one of those times, and the US administration is actively exploring options, in partnership with various governments, on how to effectively combat the spread of the virus, including the use of facial recognition technology. This comes shortly after the techlash and a nascent bipartisan movement to curb the degree of data collection by large firms, which seems to have come to a halt as everyone scrambles to battle the coronavirus.

When Humans Attack

AI systems differ from other software systems when it comes to security vulnerabilities. While traditional cybersecurity mechanisms rely heavily on securing the perimeter, AI security vulnerabilities run deeper: the systems can be manipulated through their interactions with the real world, the very mechanism that makes them intelligent. Numerous examples of using audio samples from TV commercials to trigger voice assistants have demonstrated new attack surfaces for which we need to develop defense techniques. Visual systems are also fooled, especially in autonomous vehicles where, in one example, manipulating STOP signs with innocuous stripes of tape made a system read the STOP sign as a speed limit sign, which could cause fatal crashes. There are also examples of hiding such adversarial examples under the guise of white noise and other changes imperceptible to human senses. We need to think of AI systems as inherently socio-technical in order to come up with effective protection techniques that don't rely on technical measures alone but also look at the human factors surrounding them. Other useful suggestions include abusability testing, red-teaming, white-hat hacking, bug bounty programs, and consulting with civil society advocates who have deep experience with how vulnerable communities interact with technology.
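To make the attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), the classic recipe behind many adversarial examples: nudge every input feature by a tiny, uniform amount in the direction that raises the model's loss. The toy logistic classifier and all the names here are ours for illustration; they are not from the article.

```python
import math

def predict(x, w, b):
    """Toy linear classifier: label 1 if w.x + b > 0, else 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic model p = sigmoid(w.x + b).

    Each feature moves by exactly +/-eps in the direction that increases
    the cross-entropy loss for the true label y -- a small per-feature
    change that can nonetheless flip the prediction.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-score))      # model's confidence for label 1
    grad = [(p - y) * wi for wi in w]       # dLoss/dx for logistic regression
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]
```

The point of the exercise: the perturbation is bounded by eps in every coordinate, which is exactly why such attacks can stay imperceptible while still changing the output, and why perimeter-style defenses don't help.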


From the archives:

Here’s an article from our blogs that we think is worth another look:

Approaches to Deploying a Safe Artificial Moral Agent by Olivier Couttolenc (Philosophy & Political Science, McGill University)

This paper explains and evaluates the basic moral theories based on their feasibility and propensity to produce existential risk if they were to be deployed by artificial moral agents.


Guest contributions:

We invite researchers and practitioners working in different domains studying the impacts of AI-enabled systems to share their work with the larger AI ethics community, here’s this week’s featured post:

Consequentialism and Machine Ethics – Towards a Foundational Machine Ethic to Ensure the Ethical Conduct of Artificial Moral Agents by Josiah Della Foresta (Philosophy, McGill University)

This paper argues that Consequentialism represents the kind of ethical theory most plausible to serve as the basis for a machine ethic. It begins by outlining the concept of an artificial moral agent and the essential properties of Consequentialism. Second, it presents a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Third, an alternative Deontological approach is evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to the development of machine ethics are presented and briefly challenged.

If you’re working on something interesting and would like to share that with our community, please email us at support@montrealethics.ai


Events:

As part of our public competence building efforts, we host frequent events spanning different subjects as they relate to building responsible AI systems; you can see a complete list here: https://montrealethics.ai/meetup

Given the advice from various health agencies, we’re avoiding physical events to curb the spread of COVID-19. Stay tuned for updates!


From elsewhere on the web:

Things from our network and more that we found interesting and worth your time.

Given the urgency of having to fight the pandemic using all the resources we have, we found this to be the best place to start for those looking to utilize AI to fight COVID-19. COVID-19 Open Research Dataset (CORD-19) is a resource of more than 44,000 articles in JSON format about COVID-19 and the coronavirus family of viruses for use by the global machine learning community. The dataset represents the most extensive machine-readable coronavirus literature collection available for data and text mining to date.
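For those who want to start mining the corpus, here is a hedged sketch of loading the per-paper JSON files. The field names (paper_id, metadata.title, and abstract/body_text as lists of text blocks) match the schema the dataset shipped with at release, but the schema has evolved, so check the readme of the version you download.

```python
import json
from pathlib import Path

def load_cord19_paper(path):
    """Pull title, abstract, and body text out of one CORD-19 JSON file.

    Assumes the release-era schema: paper_id, metadata.title, and
    abstract/body_text as lists of {"text": ...} paragraph blocks.
    """
    record = json.loads(Path(path).read_text(encoding="utf-8"))
    return {
        "paper_id": record["paper_id"],
        "title": record["metadata"]["title"],
        "abstract": " ".join(p["text"] for p in record.get("abstract", [])),
        "body": " ".join(p["text"] for p in record.get("body_text", [])),
    }

def iter_corpus(root):
    """Yield parsed papers from every .json file under root, lazily,
    so the 44,000+ articles never need to sit in memory at once."""
    for f in Path(root).rglob("*.json"):
        yield load_cord19_paper(f)
```

From here, the flattened text fields plug directly into standard text-mining pipelines (tokenization, TF-IDF, topic models, and so on).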


Signing off for this week; we look forward to seeing you again in a week! If you enjoyed this and know someone else who can benefit from this newsletter, please share it with them!

Share Montreal AI Ethics Institute


If you have feedback for this newsletter or think there is an interesting piece of research, development or event that we missed, please feel free to email us at support@montrealethics.ai


If someone has forwarded this to you and you like what you read, you can subscribe to receive this weekly newsletter by clicking below