“AI will often need to be culturally or environmentally sensitive” – Prof. Anupam Chander on Artificial Intelligence and Human Rights
December 1, 2022
Prof. Anupam Chander, Scott K. Ginsburg Professor of Law at Georgetown University Law Center, delivered an insightful speech on “Artificial Intelligence and Human Rights” at an event organised by the National Law School of India University, Bengaluru this week.
The event, held at the Bangalore International Centre, was part of NLSIU’s AI and Human Rights project, supported by the Consulate General of the Federal Republic of Germany, Bengaluru. The project aims to develop a multidisciplinary approach toward understanding the impact of Artificial Intelligence on human rights.
During his speech, Prof. Chander highlighted the geopolitics governing artificial intelligence, identifying several tensions in the policy conversation around AI. These included concerns that the global south could be left behind in the race to develop advanced AI technologies, and anxiety among western policymakers that China’s rise as a technology leader may signal a global power shift away from the West.
In a related vein, Prof. Chander highlighted the spectre of ‘data colonialism’. This refers to practices followed by industrialized nations, especially in the European Union (EU), to deny service providers from developing countries access to their markets, all in the name of data protection.
Limitations of AI
Prof. Chander illustrated the limitations of AI with insightful examples. Citing a case of geographical mismatch, he said Volvo had developed a feature called “Large Animal Detection.” When the feature was tested on Australian roads, however, the AI failed to recognize kangaroos as large animals because they jump, unlike any animal it had been trained on. Volvo recognized the problem and began training the system on kangaroos.
He said that AI will often need to be culturally or environmentally sensitive—that an AI trained on the behavior of the United States population will likely produce erroneous results when applied in China—or vice versa.
He argued that such examples demonstrate the importance of having diverse teams involved in the creation and management of AI systems. He added, however, that this is not true just of AI systems, but of all important systems that impact diverse people.
“Today, decisions about people and machines are being made by machines. AI helps people file tax returns, it helps offer or deny loans, it matches individuals for dating, it makes investment decisions, sorts through job applications, and delivers search results. Given that AI is making decisions that affect people’s lives, governments should insist on what we might call ‘locally responsible AI’,” he said.
Imagining the future of AI
Concluding on an optimistic note, Prof. Chander urged the audience to imagine a future in which the race to develop advanced AI technologies is not merely a zero-sum geopolitical game, but one in which human welfare is central to AI policy.
“There is a more optimistic possibility that we should strive for—of using the latest computing and AI tools to lift people up. Financial tools that increase inclusion; Opportunities for trade that were previously impossible; And access to the sum of human knowledge previously available only in the greatest libraries of the world. That’s the AI future I hope we seek to build,” he said.
Panel Discussion
Prof. Chander’s speech was followed by a panel discussion where he was joined by co-panelists Anita Gurumurthy, Executive Director, IT for Change; and Siddharth Das, Co-founder, Univ.AI. The discussion was moderated by NLSIU Vice-Chancellor Prof. Sudhir Krishnaswamy.
Follow this page for further updates.