
Google’s brand-new AI ethics board is already falling apart

[Photo: The Google office in Berlin, at its opening in January 2019.]

One member resigned and two more are under fire. It’s only a week old.

Just a week after it was announced, Google’s new AI ethics board is already in trouble.

The board, founded to guide “responsible development of AI” at Google, was to have eight members and meet four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms can produce disparate outcomes, whether to work on military applications of AI, and more.

Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.

James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”

Altogether, it’s not the most promising start for the board.

The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one.

Three of the board’s eight members have resigned or are under fire

Google announced its AI ethics board last week, saying the board would “consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.”

From the start, the board attracted criticism. Many people were outraged about the inclusion of Kay Coles James, the Heritage Foundation president.

“In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants,” argues an open letter signed by more than 1,800 Google employees. A particular cause for concern was James’s stance that the trans rights movement is seeking to “change the definition of women to include men” in order to “erase” women’s rights.

“Google cannot claim to support trans people and its trans employees — a population that faces real and material threats — and simultaneously appoint someone committed to trans erasure to a key AI advisory position,” concludes the open letter.

Others called on Google to remove Dyan Gibbens from the board. Gibbens is the CEO of Trumbull Unmanned, a drone technology company, and she previously worked on drones for the US military. A year ago, Google employees were outraged when it was revealed that the company had been working with the US military on drone technology as part of Project Maven. After employees resigned in protest, Google promised not to renew the Maven contract. Collaborating with the military on drone technology remains a touchy subject internally, and one where many Google employees don’t have a lot of trust in Google leadership.

On Saturday, Alessandro Acquisti, the privacy researcher, announced that he would not serve on the panel, saying, “I’d like to share that I’ve declined the invitation to the ATEAC [Advanced Technology External Advisory Council] council. While I’m devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don’t believe this is the right forum for me to engage in this important work.”

Even before the outrage, this panel was not set up for success

But the collapse of Google’s panel and the controversy over its makeup almost obscure a deeper problem: this was not an entity set up to do a good job.

Google’s announcement states that the panel would serve over the course of 2019 and meet four times. That’s not much time together, given the complexity of the issues members would be advising on. It’s not enough time to hear about even a fraction of Google’s ongoing projects, which suggests the board wouldn’t be weighing in on most of them.

Second, the board positions are unpaid. Some have contended that a paid oversight committee would be worse, because board members would be indebted to Google, but others think unpaid board positions advantage the independently wealthy. These critics see the unpaid positions as another sign that Google isn’t taking the AI ethics board very seriously, and that the company doesn’t expect members to spend much time on it, either.

Third, the ethics panel — as has been the case with ethics panels at other top tech companies — does not have the power to do anything. Google says “we hope this effort will inform both our own work and the broader technology sector,” but it’s very unclear who, if anyone, at Google will rely on these recommendations, or which decisions the board will even get to weigh in on.

Overall, it’s not clear whether the panel will be used for guidance on internal Google matters at all. What it definitely will be used for is PR.

Panel member Joanna Bryson, defending Coles James’s inclusion on Twitter, said, “I know that I have pushed [Google] before on some of their associations, and they say they need diversity in order to be convincing to society broadly, e.g. the GOP.” This makes sense as a strategic priority for Google, whose products, of course, are used by nearly everyone.

But if Google’s goal with the panel is to “be convincing to society broadly” without necessarily changing anything the company does, that’s not really AI ethics — it’s AI marketing.

And fundamentally, that’s what’s wrong with AI ethics panels. Google is not the only tech company to have one, and while Microsoft’s AI ethics committee and Facebook’s center for ethics research have not been embroiled in quite as much drama, they don’t have official decision-making power, either.

Ethical deployment of powerful emerging technologies involves tough decisions. Will a company work with Immigration and Customs Enforcement (ICE)? Or with the Chinese government on technology that aids it in its ongoing, horrifying campaign to imprison a million Uighurs? If a facial recognition tool works better on white Americans than black Americans, what does it mean to fairly deploy it? If AI is creating and exacerbating inequalities, what’s the plan to tackle them? If a line of AI research looks dangerous — if some experts are warning it could have catastrophic effects for the world — will Google pursue it anyway?

“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.

All of those calls have to be made at the highest level of the company. Google quite reasonably doesn’t want to give control of these decisions to outsiders, but that means that the people tasked with providing guidance on AI ethics are removed from the context where key AI policy decisions will happen. A better panel would contain both decision makers at Google and outside voices; would issue formal, specific, detailed recommendations; and would announce publicly whether Google followed them.

Neither Google nor anyone else appears genuinely comfortable with meaningful external oversight. Neither Google nor anyone else seems to have a principled, systematic way to handle the power these companies have stumbled into. That’s why companies are formulating these panels with goals like “be convincing to society broadly” — as Google aimed for with the inclusion of James — rather than “review the process for approving collaborations with the US military.” The brouhaha has convinced me that Google needs an AI ethics board quite badly — but not the kind it seems to want to build.

