By Tom Simonite
In early April, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU's 500 million citizens trustworthy. The bloc's commissioner for digital economy and society, Bulgaria's Mariya Gabriel, called them "a solid foundation based on EU values."
One of the 52 experts who worked on the guidelines argues that foundation is flawed, thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons and government social scoring systems similar to those under development in China. But Metzinger alleges tech's allies later convinced the broader group that it shouldn't draw any "red lines" around uses of AI.
Metzinger says that spoiled a chance for the EU to set an influential example that, like the bloc's GDPR privacy rules, showed technology must operate within clear limits. "Now everything is up for negotiation," he says.
When a formal draft was released in December, uses that had been suggested as requiring "red lines" were presented as examples of "critical concerns." That shift appeared to please Microsoft. The company didn't have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft's senior director for EU government affairs, said the group had "taken the right approach in choosing to cast these as 'concerns,' rather than as 'red lines.'" Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, not tilted toward industry. "We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI."
The brouhaha over Europe's guidelines for AI was an early skirmish in a debate that's likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest, and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.
Harvard law professor Yochai Benkler warned in the journal Nature this month that "industry has mobilized to shape the science, morality and laws of artificial intelligence."
Benkler cited Metzinger's experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into "Fairness in Artificial Intelligence" that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a royalty-free license to any intellectual property developed.
Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. "Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power," he says.
Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company's cloud unit offers such technology, but it has also said that the technology should be subject to new federal regulation.
In February, Microsoft loudly supported a privacy bill being considered in Washington's state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.
By April, Microsoft found itself fighting against a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft's director of government affairs, testified against that version of the bill, saying it "would effectively ban facial recognition technology [which] has many beneficial uses." The House bill stalled. With lawmakers unable to reconcile differing visions for the legislation, Washington's attempt to pass a new privacy law collapsed.
In a statement, a Microsoft spokesperson said that the company's actions in Washington sprang from its belief in "strong regulation of facial recognition technology to ensure it is used responsibly."
Shankar Narayan, director of the technology and liberty project of the ACLU's Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser rules for AI. But, Narayan says, they won't always succeed. "My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities," he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.
Washington lawmakers (and Microsoft) hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.
Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon), and Representative Yvette Clarke (D-New York), introduced companion bills dubbed the Algorithmic Accountability Act. The legislation would require companies to assess whether AI systems and their training data have built-in biases, or could harm consumers through discrimination.
Mutale Nkonde, a researcher at Data & Society, participated in discussions during the bill's drafting. She is hopeful it will trigger discussion in DC about AI's societal impacts, which she says is long overdue.
The tech industry will make itself a part of any such conversations. Nkonde says that when she talks with lawmakers about topics such as racial disparities in face-analysis algorithms, some seem surprised and say they have been briefed by tech companies on how AI technology benefits society.
Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient "in the vast majority of instances."
Metzinger, the German philosophy professor, believes the EU can still break free from industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe's competitiveness.
Metzinger wants some of it to fund a new center to study the effects and ethics of AI, and similar work throughout Europe. That would create a new class of experts who could keep evolving the EU's AI ethics guidelines in a less industry-centric direction, he says.