OpenAI says it can detect images made by its own software… mostly


We all think we’re pretty good at identifying images made by AI. It’s the weird alien text in the background. It’s the bizarre inaccuracies that seem to break the laws of physics. Most of all, it’s those gruesome hands and fingers. However, the technology is constantly evolving, and it won’t be too long until we can’t tell what’s real and what isn’t. Industry leader OpenAI is trying to get ahead of the problem with a tool that detects images created by its own DALL-E 3 generator. The results are a mixed bag.


The company says it can accurately detect pictures whipped up by DALL-E 3 98 percent of the time, which is great. There are, though, some fairly big caveats. First of all, the image has to be created by DALL-E and, well, it’s not the only image generator on the block. The internet overfloweth with them. According to the company, the system only managed to successfully classify five to ten percent of images made by other AI models.

Also, it runs into trouble if the image has been modified in any way. This didn’t seem to be a huge deal in the case of minor modifications, like cropping, compression and changes in saturation. In those cases, the success rate was lower but still within an acceptable range at around 95 to 97 percent. Adjusting the hue, however, dropped the success rate down to 82 percent.

Results from the test. (OpenAI)

Now here’s where things get really sticky. The toolset struggled when used to classify images that underwent more extensive changes. OpenAI didn’t even publish the success rate in these cases, stating simply that “other modifications, however, can reduce performance.”


This is a bummer because, well, it’s an election year and the vast majority of AI-generated images are going to be modified after the fact so as to better enrage people. In other words, the tool will likely recognize an image of Joe Biden asleep in the Oval Office surrounded by baggies of white powder, but not after the creator slaps on a bunch of angry text and Photoshops in a crying bald eagle or whatever.

At least OpenAI is being transparent regarding the limitations of its detection technology. It’s also giving external testers access to the aforementioned tools to help fix these issues. The company, along with bestie Microsoft, has poured $2 million into a fund that hopes to expand AI education and literacy.

Unfortunately, the idea of AI mucking up an election is not some faraway concept. It’s happening right now. AI-generated images have already been made and used this election cycle, and there are likely more to come as we slowly, slowly, slowly (slowly) crawl toward November.
