Score ELSI (Measuring the Ethics of Science and Technology) / スコアELSI (科学技術の倫理を測る)

This research project started in January 2020. ELSI refers to the Ethical, Legal and Social Issues of science and technology. The goal of this project is to develop a simple scale that allows scientists and engineers in R&D settings to measure the ELSI of advanced science and technology such as AI and genome editing.

本研究プロジェクトは、2020年1月に始まりました。ELSIとは、科学技術の倫理的・法的・社会的課題(Ethical, Legal and Social Issues)を指します。本プロジェクトはAIやゲノム編集等の先端的な科学技術のELSIを、研究開発現場の科学技術者が簡易に測定できるための尺度開発を目標にしています。


Papers / 論文

2022/9/2 OPEN ACCESS Ikkatai, Y., Hartwig, T., Takanashi, N. et al. Segmentation of ethics, legal, and social issues (ELSI) related to AI in Japan, the United States, and Germany. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00207-y

Abstract: Artificial intelligence (AI) is often accompanied by public concern. In this study, we quantitatively evaluated a source of public concern using the framework for ethics, legal, and social issues (ELSI). Concern was compared among people in Japan, the United States, and Germany using four different scenarios: (1) the use of AI to replicate the voice of a famous deceased singer, (2) the use of AI for customer service, (3) the use of AI for autonomous weapons, and (4) the use of AI for preventing criminal activities. The results show that the most striking difference was in the response to the “weapon” scenario. Respondents from Japan showed greater concern than those in the other two countries. Older respondents had more concerns, and respondents who had a deeper understanding of AI were more likely to have concerns related to the legal aspects of it. We also found that attitudes toward legal issues were the key to segmenting their attitudes toward ELSI related to AI: Positive, Less skeptical of laws, Skeptical of laws, and Negative.

Using the three items that measure E, L, and S (a result of the second paper), we measured the four scenarios in Japan, the US, and Germany. From this we found that, within the three E/L/S dimensions, it works best to divide people into four groups with different attitude tendencies. We expect this four-group segmentation to be useful in future discussions.
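For illustration only, here is a minimal sketch of how such a four-group segmentation could be reproduced on survey data: it clusters hypothetical respondents on their three E/L/S item scores with k-means. The synthetic data, the -2..+2 rating scale, and the choice of k-means are assumptions made for this example, not the procedure published in the paper.

# Illustrative sketch (not the authors' published procedure): segmenting
# respondents into four attitude groups from three ELSI item scores.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical survey data: one row per respondent, columns = (E, L, S),
# each rated on an assumed -2..+2 agreement scale.
scores = rng.integers(-2, 3, size=(500, 3)).astype(float)

# Four segments, echoing the "Positive / Less skeptical of laws /
# Skeptical of laws / Negative" grouping reported in the paper.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)

for k in range(4):
    e, l, s = kmeans.cluster_centers_[k]
    print(f"segment {k}: mean E={e:+.2f}, L={l:+.2f}, S={s:+.2f}")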


2022/1/22 OPEN ACCESS Hartwig, T., Ikkatai, Y., Takanashi, N. & Yokoyama, H.M. (2022). 'Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US'. AI & Soc. https://doi.org/10.1007/s00146-021-01323-9

Abstract: For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items based on a theoretical framework for ethics, legal, and social issues (ELSI). We found that country and age are the most informative sociodemographic categories for predicting attitudes for AI ethics. Our proposed scale, which consists of 13 questions, can be reduced to only three, covering ethics, tradition, and policies. This new AI ethics scale will help to quantify how AI research is accepted in society and which area of ELSI people are most concerned with.

We prepared 13 items in total covering the E, L, and S of ELSI and measured responses to four dilemma scenarios in Japan and the US. We found that attitudes were best distinguished by country, followed by age. We also found that the key items can be narrowed down to one each for E, L, and S. Measuring AI ethics on the basis of ELSI may thus be feasible.
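As a rough sketch of how "most informative sociodemographic categories" could be quantified, the example below ranks country, age, gender, and self-rated AI understanding by random-forest feature importance on synthetic data. The variable encodings, the generated data, and the use of a random forest are assumptions for illustration; the paper's actual analysis pipeline is not reproduced here.

# Illustrative sketch on synthetic data (not the authors' analysis):
# ranking sociodemographic variables by how well they predict an
# overall AI-ethics attitude score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),    # country (0 = Japan, 1 = US), hypothetical coding
    rng.integers(18, 80, n),  # age in years
    rng.integers(0, 2, n),    # gender
    rng.integers(0, 5, n),    # self-rated understanding of AI
])
# Hypothetical attitude score driven mainly by country and age, plus noise.
y = 0.8 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["country", "age", "gender", "AI understanding"],
                     model.feature_importances_):
    print(f"{name:>16}: importance {imp:.2f}")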

talk by T. Hartwig
2022/1/11 OPEN ACCESS Ikkatai, Y., Hartwig, T., Takanashi, N. & Yokoyama, H.M. (2022). 'Octagon Measurement: Public Attitudes toward AI Ethics', International Journal of Human–Computer Interaction (press release in Japanese, English press release)

Abstract: Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI differently related depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be.

The common elements of the AI guidelines issued by various countries and organizations have been narrowed down to eight themes. Focusing on these, we prepared and measured four different dilemma scenarios that probe AI ethics. We call this the "Octagon measurement" and propose it as a way to measure AI ethics.
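To make the idea concrete, here is a small sketch that averages hypothetical ratings for the eight themes in one scenario and draws them as an octagon (radar) chart. The -2..+2 rating scale, the synthetic data, and the matplotlib plotting choices are assumptions for illustration, not the published instrument.

# Illustrative sketch (synthetic data, not the published instrument):
# average ratings for the eight AI-ethics themes in one scenario and
# draw them as an octagon (radar) chart.
import numpy as np
import matplotlib.pyplot as plt

themes = ["privacy", "accountability", "safety and security",
          "transparency and explainability", "fairness and non-discrimination",
          "human control of technology", "professional responsibility",
          "promotion of human values"]

rng = np.random.default_rng(2)
ratings = rng.integers(-2, 3, size=(300, 8))   # respondents x themes, -2..+2
means = ratings.mean(axis=0)

angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False)
# Close the polygon by repeating the first point.
values = np.concatenate([means, means[:1]])
closed = np.concatenate([angles, angles[:1]])

ax = plt.subplot(polar=True)
ax.plot(closed, values)
ax.set_xticks(angles)
ax.set_xticklabels(themes, fontsize=7)
ax.set_ylim(-2, 2)
ax.set_title("Octagon profile for one scenario (synthetic data)")
plt.savefig("octagon.png", dpi=150, bbox_inches="tight")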

English version: Octagon measurement for AI technology ELSI

日本語バージョン: AIテクノロジーELSIに対するオクタゴン尺度

talk by H. Yokoyama

メンバー / Members

横山広美 東京大学 国際高等研究所カブリ連携宇宙研究機構 教授 / Prof. Hiromi M Yokoyama: Kavli IPMU, UTokyo

研究代表。研究の方針を策定、調査設計、実施等を通して全体を統括する。/ Principal investigator. Oversees the entire research process, including the formulation of research policy, survey design, and implementation.

ティルマン・ハートウィグ 東京大学 大学院理学系研究科物理学専攻 知の物理学センター 助教 / Assist. Prof. Tilman Hartwig: IPI Center, UTokyo

物理学者。主にAIを用いた解析を担当する。/ Physicist. Mainly responsible for AI-based analysis.

一方井祐子 金沢大学 人間社会研究域 人間科学系 准教授 / A/Prof. Yuko Ikkatai: Kanazawa Univ.

量的調査の調査設計とデータ解析・検証を担当する。/ Responsible for survey design and data analysis/validation for quantitative research.

高梨直紘 東京大学 エグゼクティブ・マネジメント 特任准教授 / Project A/Prof. Naoshi Takanashi: EMP, UTokyo

主に倫理学者、宗教学者への質的調査を担当する。/ Responsible for conducting qualitative research mainly with ethicists and religious scholars.

松山桃世 東京大学 生産技術研究所 准教授 / A/Prof. Momoyo Matsuyama: IIS, UTokyo

主にゲノム編集の倫理測定を担当する。/ Mainly responsible for measuring the ethics of genome editing.


This project has further developed into the following research.

AIダイバーシティ研究 / AI Diversity Research

ビヨンドAI / Beyond AI

Research period: April 2021 – / Principal investigators: Kaori Hayashi and Hiromi M. Yokoyama

We are conducting research to identify the causes of diversity-related discrimination arising from AI and to improve the situation.