Published on Tue Sep 15 2020

Report prepared by the Montreal AI Ethics Institute (MAIEI) on Publication Norms for Responsible AI

Abhishek Gupta, Camylle Lanteigne, Victoria Heath

The Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI. In its submission, MAIEI provides six initial recommendations, including the creation of tools to navigate publication decisions.

Abstract

The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it is difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers. In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.

Related papers

Wed Oct 02 2019
Machine Learning
The tension between openness and prudence in AI research
This paper explores the tension between openness and prudence in AI research. While the AI community has strong norms around open sharing of research, concerns about the potential harms arising from misuse of research are growing. We discuss how different beliefs and values can lead to different perspectives on how to manage this tension.
Wed Nov 25 2020
Machine Learning
Like a Researcher Stating Broader Impact For the Very First Time
NeurIPS program chairs required that a statement of broader impact accompany all submissions to this year's conference. This paper examines how individual researchers reacted to the new requirement.
Sun Oct 11 2020
Machine Learning
ArXiving Before Submission Helps Everyone
We claim that allowing arXiv publication before a conference or journal submission benefits researchers, especially early-career researchers. We see no reason why anyone but the authors should decide whether a paper is published.
Fri Dec 27 2019
Artificial Intelligence
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but it can also contribute to protections against misuse.
Wed May 19 2021
Artificial Intelligence
AI and Ethics -- Operationalising Responsible AI
Building and maintaining public trust in AI has been identified as the key to successful and sustainable innovation. This chapter discusses the challenges related to operationalizing ethical AI principles and presents an integrated view.
Mon Jul 26 2021
Artificial Intelligence
Measuring Ethics in AI with AI: A Methodology and Dataset Construction
The use of sound measures and metrics in Artificial Intelligence has become a subject of interest for academia, government, and industry. We propose to use these newfound capabilities of AI technologies to augment our ability to measure AI. We do so by training a model to classify publications related to ethical issues and concerns.