2023 July: The Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), The University of Tokyo, receives support from The Kavli Foundation for the Score ELSI research project on Science and Society, led by Professor Hiromi Yokoyama, Deputy Director of Kavli IPMU. The project will embark on data-driven studies that bring Asian perspectives into the global discussions surrounding the ethical, legal and social issues (ELSI) associated with artificial intelligence and climate engineering, quantifying and visualizing the concerns people hold about these technologies. (press release)
Within climate engineering, the project focuses on solar geoengineering. It will also develop an ELSI-RRI sheet based on an ethics framework, and will collect and analyze data from Japan and the United States as well as West, South, and East Asia.
The first-phase members and B'AI members are conducting research on AI diversity.
We are collaborating on ELSI research on facial recognition, mainly with the Technical University of Munich.
The ELSI project (January 2020 – December 2022), supported by the SECOM Science and Technology Foundation's specific-area research grant in the ELSI field (area leader: Tadashi Kobayashi) under the title "Construction of Science and Technology Ethics Indicators for Advanced Science and Technology such as AI and Genome Editing," aims to develop a simple scale with which scientists and engineers in R&D can measure the ELSI (ethical, legal and social issues) of advanced science and technology such as AI and genome editing.
Japanese version: Octagon Scale for the ELSI of AI Technology
Abstract: Artificial intelligence (AI) is often accompanied by public concern. In this study, we quantitatively evaluated a source of public concern using the ethical, legal, and social issues (ELSI) framework. Concern was compared among people in Japan, the United States, and Germany using four different scenarios: (1) the use of AI to replicate the voice of a famous deceased singer, (2) the use of AI for customer service, (3) the use of AI for autonomous weapons, and (4) the use of AI for preventing criminal activities. The results show that the most striking difference was in the response to the "weapon" scenario: respondents from Japan showed greater concern than those in the other two countries. Older respondents had more concerns, and respondents with a deeper understanding of AI were more likely to have concerns related to its legal aspects. We also found that attitudes toward legal issues were the key to segmenting respondents' attitudes toward AI-related ELSI into four groups: Positive, Less skeptical of laws, Skeptical of laws, and Negative.
Using the three items measuring E, L, and S (from the results of Paper 2), we measured the four scenarios in Japan, the US, and Germany. We found that, within the three-dimensional ELS space, it is ideal to divide people into four groups with distinct opinion tendencies. We expect this four-group segmentation to be useful in future discussions.
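Purely as an illustrative sketch (not the study's actual analysis code), segmenting respondents in the three-dimensional E/L/S score space into four groups could be done with a simple clustering step. The data below are synthetic, and the minimal k-means routine is a stand-in for whatever method the paper used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for survey data: each row is one respondent's
# (Ethical, Legal, Social) concern score on a 1-7 Likert-style scale.
# These numbers are illustrative only, not the study's data.
scores = rng.uniform(1.0, 7.0, size=(200, 3))

def kmeans(X, k=4, n_iter=50, seed=0):
    """Minimal k-means: returns cluster labels and centroids."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each respondent to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(scores, k=4)
print(np.bincount(labels, minlength=4))  # sizes of the four groups
```

In practice the group labels would then be interpreted against the legal-issues axis, which the paper identifies as the key segmenting dimension.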
Abstract: For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items based on a theoretical framework for ethical, legal, and social issues (ELSI). We found that country and age are the most informative sociodemographic categories for predicting attitudes toward AI ethics. Our proposed scale, which consists of 13 questions, can be reduced to only three, covering ethics, tradition, and policies. This new AI ethics scale will help to quantify how AI research is accepted in society and which areas of ELSI people are most concerned with.
We prepared 13 items in total covering the E, L, and S of ELSI and measured four dilemma scenarios in Japan and the US. We found that country, followed by age, best distinguishes attitudes, and that the key items can be narrowed down to one each for E, L, and S. ELSI-based measurement of AI ethics may thus be feasible.
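As a hedged sketch of how a 13-item scale might be reduced to one representative item per dimension: for each of E, L, and S, keep the single item most correlated with that dimension's mean score. The item-to-dimension grouping and the data below are hypothetical, not the paper's actual items or selection procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic responses: 300 respondents x 13 items (Likert 1-7).
# Assumed (hypothetical) grouping of the 13 items into dimensions:
# items 0-4 -> Ethical, 5-8 -> Legal, 9-12 -> Social.
responses = rng.integers(1, 8, size=(300, 13)).astype(float)
dims = {"E": range(0, 5), "L": range(5, 9), "S": range(9, 13)}

kept = {}
for name, idx in dims.items():
    block = responses[:, list(idx)]
    dim_mean = block.mean(axis=1)  # dimension summary score
    # Correlation of each item with its dimension's mean score.
    corrs = [np.corrcoef(block[:, j], dim_mean)[0, 1]
             for j in range(block.shape[1])]
    kept[name] = list(idx)[int(np.argmax(corrs))]

print(kept)  # one representative item index per dimension
```

Any real reduction would of course be validated against the full scale's measurement properties rather than raw correlation alone.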
talk by T. Hartwig
Abstract: Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: "privacy," "accountability," "safety and security," "transparency and explainability," "fairness and non-discrimination," "human control of technology," "professional responsibility," and "promotion of human values." We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario in which AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be.
The common elements of the AI guidelines issued by various countries and organizations have been distilled into eight themes. Focusing on these, we prepared four different dilemma scenarios probing AI ethics and measured attitudes toward them. We have named this the "Octagon measurement" and propose it as a way to measure AI ethics.
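An Octagon profile for a single scenario amounts to a per-theme summary of concern across respondents. A minimal sketch on synthetic ratings (the eight theme names are taken from the abstract above; the data and the averaging step are illustrative assumptions, not the study's scoring procedure):

```python
import numpy as np

THEMES = ["privacy", "accountability", "safety and security",
          "transparency and explainability",
          "fairness and non-discrimination",
          "human control of technology", "professional responsibility",
          "promotion of human values"]

rng = np.random.default_rng(2)
# Synthetic ratings for one scenario: 100 respondents x 8 themes,
# each on a 1-7 concern scale (illustrative stand-in for survey data).
ratings = rng.integers(1, 8, size=(100, len(THEMES))).astype(float)

# The "octagon" profile for the scenario: mean concern per theme.
profile = dict(zip(THEMES, ratings.mean(axis=0)))
for theme, score in profile.items():
    print(f"{theme}: {score:.2f}")
```

The eight per-theme means would typically be drawn as the vertices of an octagonal radar chart, one chart per scenario, which is what makes cross-scenario comparison easy to read.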
talk by H. Yokoyama