"Is there anyone working on regulation protecting Ethical AI researchers, similar to whistleblower protection?" Timnit Gebru asked on Twitter. "Because with the amount of censorship and intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?"
Gebru, a Black woman, was the co-lead of Google's "Ethical Artificial Intelligence (AI)" team. On December 2, two days after posting this message, Google fired Gebru. The firing reportedly came in response to an ethics research paper she co-authored that included criticism of the environmental impact of large AI models.
Then, on March 5, Google fired AI researcher Dr. Margaret Mitchell, who also helped lead the "Ethical AI" team, after locking her "out of her work account for five anxious weeks." The retaliation came after she shared a document with Google's public relations department that questioned the company's stated reasons for terminating Gebru.
Google Walkout For Real Change (GWRC), a group of Google employees, has seized the moment to demand that Congress and state legislatures strengthen whistleblower protections for tech employees like Gebru and Mitchell.
"The existing legal infrastructure for whistleblowing at corporations developing technologies is wholly insufficient," GWRC declares. "Researchers and other tech workers need protections, which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies."
As UC Berkeley Center for Law and Technology co-director Sonia Katyal told VentureBeat's Khari Johnson in December, "What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential."
Johnson noted Katyal is "concerned about a clash between the rights of a business to not disclose information about an algorithm and the civil rights of an individual to live in a world free of discrimination. This will increasingly become a problem, she warned, as government agencies take data or AI service contracts from private companies."
A number of examples show the need for whistleblower protections for AI researchers who challenge corporations from within their industries. As Johnson highlighted, "A fall 2019 study in Nature found that an algorithm used in hospitals may have been involved in the discrimination against millions of Black people in the United States. A more recent story reveals how an algorithm prevented Black people from receiving kidney transplants."
"Drs. Mitchell and Gebru also built one of the most diverse teams in Google Research, people who could connect their lived experiences to practices of power, subjection, and domination which get encoded into AI and other data-driven systems," according to GWRC.
The group maintains Gebru and Mitchell were "working in the public interest" and spent time critically examining the "benefits and risks of powerful AI systems, especially those whose potential harms outside of the Google workplace were likely to be overlooked or minimized in the name of profit or efficiency." (Both also complained about workplace conditions to the human resources department.)
"Google workers have been organizing from within, raising inextricably linked issues of toxic workplace conditions and unethical and harmful tech to leadership and to the public," GWRC adds. "With the firing of Drs. Mitchell and Gebru, Google has made it clear that they believe they are powerful enough to withstand the public backlash and are not concerned with publicly damaging their employees' careers and mental health."
"They have also shown that they are willing to crack down hard on anyone who would perturb the company's quest for growth and profit."
The group concludes, "Google is not committed to making itself better and has to be held accountable by organized workers with the unwavering support of social movements, civil society, and the research community beyond."
In addition to a call for whistleblower protections, GWRC urges academic conferences to "decline sponsorship from organizations, such as Google, engaged in retaliatory actions towards researchers."
The Association for Computing Machinery's Fairness, Accountability, and Transparency (FAccT) conference "suspended" Google's sponsorship on March 2.
GWRC encourages "potential recruits to Google" to help "break the tech talent pipeline" and refuse to work at Google as it creates "harmful and unethical technology."
"We call on universities, especially those that claim to be human-centered, such as Stanford's Human Centered AI Institute and MIT's Schwarzman College of Computing, to publicly reject Google funding," they further suggest.
Gebru was previously involved in exposing the "gender and skin-type bias in commercial AI facial recognition systems." Her work garnered widespread praise.
Protocol reported that, along with raising concerns about the impact of technology on marginalized communities, Gebru supported activism at Google and spoke up for employees who faced retaliation for engaging in protests.
The same day that Google terminated Gebru, the National Labor Relations Board accused the corporation of spying on employees involved in protest organizing.
It is hard to fathom how Google's retaliation against conscientious employees who command respect and influence in the world of technology will not prolong a backlash that has already spanned several months.
Ultimately, if Google cannot tolerate employees on its "Ethical AI" team who raise objections, it should stop using this team as cover for its business. Simply rename the team "AI," and lean into the corporate culture with the mantra, "Don't Be Ethical."