The growing sophistication and diffusion of artificial intelligence (AI) and digital surveillance technologies have raised concerns about privacy and human rights. China is unquestionably one of the leaders in developing these technologies for both domestic and international use. However, other countries are also active in this space, including the United States, Israel, Russia, several European countries, Japan, and South Korea. American companies play a particularly important role in supplying the hardware underlying surveillance technologies.
In turn, these technologies are used in a range of contexts. Their most serious use cases include spying on political dissidents and suppressing Uyghur and other Turkic Muslim populations across China. However, concerns arise even in their most "mundane" uses, such as identity verification at banks and gyms. The high-quality data collected through these routine applications helps companies improve the accuracy of their facial recognition technology, and over time these increasingly effective technologies can be repurposed elsewhere for authoritarian ends.
The United States and partner democracies have implemented sanctions, export controls, and investment bans to curb the uncontrolled spread of surveillance technology, but the opaque nature of supply chains makes it unclear how well these efforts are working. A major gap remains in international standard setting at institutions such as the United Nations' International Telecommunication Union (ITU), where Chinese companies have been the only ones to propose facial recognition standards, which are now being fast-tracked for adoption across vast regions of the world.
To address these policy challenges, this note offers five recommendations for democratic governments and three for civil society. In brief, these recommendations are:
- The United States and its allies should demonstrate that they can offer a viable alternative model by proving that they can use facial recognition, predictive policing, and other AI surveillance tools responsibly at home.
- The State Department should work with technical experts, such as those convened by the Global Partnership on Artificial Intelligence (GPAI), to propose alternative facial recognition standards to the ITU.
- The United States and like-minded countries should jointly develop systems to improve the regulation of data transfers and reduce risk.
- The United States and partner democracies should subsidize companies to help them create standards to submit to bodies like the ITU.
- The National Science Foundation and the Defense Advanced Research Projects Agency should fund research into privacy-preserving computer vision, the field of AI that derives information from images or video.
- Civil society organizations (CSOs) should conduct outreach with local communities and community leaders to strengthen public discourse on the pros and cons of using AI in policing and surveillance.
- CSOs should conduct or support research on rights violations involving AI and digital surveillance technologies and on the export of such technologies.
- CSOs should actively participate in setting international technology standards.