[News] Interview with AI fairness lead Saito published in Asahi Shimbun DIGITAL: Gender discrimination reflected in AI - To avoid distortions in human society (September 23, 2023)

An interview with Asumi Saito, the AI Fairness Lead, has been published in Asahi Shimbun DIGITAL. Saito discusses the risks of AI reflecting gender and racial biases and proposes measures to avoid these issues.
Main points:
- AI bias issues: AI learns from data drawn from human society, so it can reproduce that society's discrimination and prejudice.
- Role and challenges of developers: Developers take measures to prevent discriminatory responses, but they may themselves hold unconscious biases, and these must also be addressed.
- Specific examples and measures: Generative AI such as ChatGPT is designed to answer questions about gender fairly, but a fundamental solution requires a change in developers' own awareness.
More details on the interview are available via the "Related Links."
- Asahi Shimbun DIGITAL (paid membership required for access)

Inquiries about this news
Contact Us Online