AI-powered tool transforming pixelated images has ‘racial bias’

An artificial intelligence-powered tool that transforms pixelated images into clear photos has been accused of racial bias.

Called Face Depixelizer, the tool was released on Twitter for users to test, and they found the technology is unable to process Black faces properly.

The tool transformed a pixelated image of Barack Obama into a white man, and other examples, including Representative Alexandria Ocasio-Cortez and actress Lucy Liu, were also reconstructed to look white.

The Face Depixelizer is based on an AI tool developed by a team at Duke University, which uses a method called PULSE that searches through AI-generated high-resolution faces for ones that look like the input image when downscaled to the same size.
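As a rough sketch of that idea (an illustration of the general approach, not the Duke team's released code), the search can be thought of as optimising a face generator's latent vector until the generated face, once shrunk back down, matches the pixelated input. The generator here is a hypothetical stand-in for a pretrained face GAN.

```python
# Illustrative PULSE-style latent search (assumption: `generator` is a pretrained
# face GAN mapping a latent vector to a high-resolution face tensor).
import torch
import torch.nn.functional as F

def pulse_style_search(lr_image, generator, latent_dim=512, steps=200, step_size=0.1):
    """Find a latent vector whose generated face, downscaled, matches the low-res input."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        hr_candidate = generator(z)                        # candidate high-res face
        downscaled = F.interpolate(hr_candidate, size=lr_image.shape[-2:],
                                   mode='bilinear', align_corners=False)
        loss = F.mse_loss(downscaled, lr_image)            # must 'downscale correctly'
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()                           # the 'depixelized' output
```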

The Duke team used a machine learning system with two neural networks: one generates AI-created human faces that mimic the ones it was trained on, and the other takes this output and decides whether it is convincing enough to be mistaken for the real thing.
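The snippet below illustrates that two-network arrangement in miniature (a generic generative adversarial network, not the Duke team's actual model): one network turns random noise into a candidate face, and a second network scores how real that face looks.

```python
# Minimal generator/discriminator pair, for illustration only.
import torch
import torch.nn as nn

latent_dim = 100
image_pixels = 64 * 64 * 3              # a small flattened colour image

generator = nn.Sequential(              # network 1: invents faces from random noise
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_pixels), nn.Tanh(),
)

discriminator = nn.Sequential(          # network 2: judges whether an image looks real
    nn.Linear(image_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(8, latent_dim)              # a batch of random latent vectors
fake_faces = generator(z)                   # candidate faces
realism_scores = discriminator(fake_faces)  # close to 1 means 'could be real'
```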

Duke University boasted that its system is capable of converting a 16 x 16-pixel image to 1024 x 1024 pixels in a few seconds, which is 64 times the resolution along each side.

A developer, who goes by the name Bomze on Twitter, used the system to develop Face Depixelizer and shared it on the social media site over the weekend.

‘Given a low-resolution input image, model generates high-resolution images that are perceptually realistic and downscale correctly,’ reads the post.

A day later, users spotted that the tool was not accurate when it came to processing Black faces.

One user ran a pixelated image of Obama through the tool, producing a clear image of a white man.

And another user ran the same image several times, with the same result.

Robert Osazuwa Ness, a machine learning blogger, conducted a test with his own face along with images of Alexandria Ocasio-Cortez and actress Lucy Liu.

All of the results were faces that looked white.

Business Insider noted that the failure may be due to the dataset used to train the AI.

If the images fed to the machine learning algorithm lack diversity, the system will not perform properly for under-represented groups.

However, this problem is nothing new: MIT researchers released a report in 2018 revealing that the way artificial intelligence systems collect data often makes them racist and sexist.

Lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson, said: ‘Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms.’

‘But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.’

In one example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income.

They found that if they had increased the dataset by a factor of 10, those mistakes would happen 40 percent less often.

In another dataset, the researchers found that a system’s ability to predict intensive care unit (ICU) mortality was less accurate for Asian patients.

However, the researchers warned that existing approaches for reducing discrimination would make the predictions for non-Asian patients less accurate.

Chen says that one of the biggest misconceptions is that more data is always better.

Instead, Chen said, researchers should get more data from under-represented groups.
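As a rough illustration of that advice, the hypothetical snippet below rebalances a toy training set so an under-represented group appears as often as the majority group (resampling is only a stand-in for actually collecting more data, and the column names and numbers here are invented):

```python
# Hypothetical example: rebalance a toy dataset by upsampling the smaller group.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A"] * 90 + ["B"] * 10,   # group B is under-represented
    "income": list(range(100)),
})

target = df["group"].value_counts().max()             # size of the largest group
balanced = pd.concat(
    [g.sample(target, replace=True, random_state=0)   # upsample each smaller group
     for _, g in df.groupby("group")],
    ignore_index=True,
)

print(balanced["group"].value_counts())               # both groups now have 90 rows
```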

‘We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,’ says Sontag.