Artificial intelligence is becoming a bigger part of the world of photography and videography every single day.
Whether it’s AI-based programs that can edit photos for you or even remove and add objects wholesale, AI is becoming very, very powerful.
That’s to say nothing of the technology that can create a new person out of thin air.
To monitor the development of its AI technology and make sure it was used ethically, Google convened a panel it called its AI Ethics Board, formally the Advanced Technology External Advisory Council (ATEAC).
But it didn’t even last a week, and you can already guess the culprit that killed it: internet scandal and controversy.
One member of the board was Heritage Foundation president Kay Coles James, whose past comments expressing climate change skepticism, along with her positions regarding transgender people, caused an uproar.
A petition to remove her from the board gathered 2,300 signatures from Google employees, according to Vox.
From here, the whole board fell into disarray and internal scandal – all within one week.
Not only was Kay Coles James the subject of controversy, but so was fellow board member Dyan Gibbens, founder and CEO of the drone technology company Trumbull Unmanned. Her inclusion called into question Google’s close ties to the US military, given that the board’s ostensible purpose was to make sure governments around the world don’t abuse AI.
The drama really kicked into high gear when Alessandro Acquisti resigned, putting pressure on Joanna Bryson to explain her decision to stay. She did so by alluding to yet another board member with an even more controversial past: “Believe it or not, I know worse about one of the other people.”
Let’s keep in mind that Google thought these people were going to work well together.
In a statement that put the drama out to pasture, Google said, “It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”
What this means for the company’s efforts to police its own AI technology remains to be seen, but this first attempt was, without a doubt, a shambolic one.