Microsoft Engineer Issues Warning About Copilot AI


Generative AI is having a rough couple of weeks, particularly when it comes to creating images that aren’t controversial.

A white and blue square object in the shape of the Windows logo on a white background. Photo by Sunder Muthukumaran

As readers might remember, Google had to pull its service after it started displaying a unique form of bias, and now it looks like problems have emerged over at Microsoft, according to one engineer at the company.

Shane Jones told financial news outlet CNBC that Copilot was generating violent images, among others, as well as images that potentially violate copyright, and that he has brought the issue both to the company's attention and to the United States Federal Trade Commission (FTC).

That’s a pretty big deal if true, particularly the copyright infringement aspect, because it draws attention to how all of these systems are trained in the first place. As for repulsive and violent imagery, even that of a sexual nature, we expect it will be cleaned up over time. After all, the early Internet was not a chapel of moral purity, and it isn’t one today. The real takeaway, as Jones points out to CNBC, is that Copilot might not be ready for primetime just yet.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in a letter to the FTC Chair, according to CNBC. The article also details the difficulty he had getting his message through to management, which is in itself an interesting look at the bureaucracy behind a large corporation.

Naturally, Microsoft does have a response to all of this:

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns,” a spokesperson told CNBC.

Your thoughts on AI image generators, particularly concerning their potential to violate copyright law, are welcome in the comments.

Read some of our other headlines at this link right here.

About Author

Kehl has been our staff photography news writer since 2017 and has over a decade of experience in online media and publishing. You can get to know him better here and follow him on Insta.
