The AI Now Institute, a research center at New York University, published a report on Thursday, December 12, 2019, on the implications of artificial intelligence (AI) for society. This year, the researchers focused on the technology's harms.
The conclusion is clear: this technology must be regulated much more strictly. “It is becoming increasingly clear that in various areas, AI amplifies inequalities, placing information and the means of control in the hands of those who already have power and further depriving those who do not,” the document states.
The danger of algorithmic bias
“The AI industry is terribly homogeneous,” the report warns. This lack of diversity has a direct consequence: biased algorithms. These biases can come from two sources. The first is the developers themselves, who embed their own cognitive assumptions and biases in the algorithms they write. The second is the data that feeds the system and from which it learns.
The report “Algorithms: bias, discrimination and equity,” published in March 2019 by researchers from Télécom ParisTech and the University of Nanterre, takes the example of Amazon. In 2015, the e-commerce giant decided to use an automated system to help screen job applicants. The initiative was abandoned after it emerged that the system selected only men. “The data entered were completely unbalanced between men and women, with men constituting the overwhelming majority of managers recruited in the past. The algorithm gave newly qualified candidates no chance,” the report explains.
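The mechanism behind this kind of data-driven bias can be illustrated with a minimal sketch. The toy hiring history below is entirely hypothetical (it does not reproduce Amazon's actual data or system); it simply shows how a naive model that mirrors past decision rates will replicate a historical imbalance:

```python
from collections import Counter

# Hypothetical, illustrative toy data: past hiring decisions
# heavily skewed toward men (not real data).
history = (
    [("man", "hired")] * 90
    + [("man", "rejected")] * 5
    + [("woman", "hired")] * 2
    + [("woman", "rejected")] * 3
)

# A naive "model" that simply reproduces historical decision rates per group.
counts = Counter(history)

def hire_rate(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

print(f"predicted hire rate for men:   {hire_rate('man'):.2f}")    # 0.95
print(f"predicted hire rate for women: {hire_rate('woman'):.2f}")  # 0.40
```

Nothing in the code mentions discrimination: the skew comes entirely from the training data, which is exactly the failure mode the report describes.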
To combat these biases, the researchers at the AI Now Institute advocate genuinely opening engineering positions to women and minorities. They also believe that the design of algorithms should not remain in the hands of computer scientists alone, but should be opened up to the social sciences.
Studying the risks of facial recognition
Two points particularly concern the American researchers: facial recognition and algorithmic bias. “Governments and companies should stop using facial recognition in sensitive social and political contexts until the risks are fully investigated and appropriate regulations are in place,” says the AI Now Institute.
The report notes that this technology gained considerable momentum in 2019, particularly in China, where citizens are now required to have their faces scanned to purchase a mobile phone plan. Yet these technologies have been shown to be far from perfect. In July 2019, for example, the National Institute of Standards and Technology (NIST) in the United States published a study showing that these systems struggle to distinguish the faces of Black women.
The report advises lawmakers to adopt a moratorium, coupled with transparency requirements that would allow researchers, policymakers, and civil society to determine the best approach to regulating facial recognition. The public must also be able to understand how these technologies work in order to form its own opinion.
How to regulate such an industry?
This report is not the first to sound the alarm about the lack of regulation around AI. It must be said that states are struggling to legislate on the subject. At the end of August 2019, it emerged that the European Commission was working on a text. At the end of November 2019, UNESCO was mandated to draft a “global standard-setting instrument” within 18 months. But this exceptional initiative runs up against the concrete reality of international law: in the vast majority of cases, international sanctions cannot be made binding, and therefore effective, because states can evade them all too easily.