Dear AI, Black youth are beautiful too

By Junior Bernadin, Dean of Students and Director of Information Technology for The Ron Clark Academy

As technology advances and artificial intelligence (AI) becomes more prevalent in our daily lives, it is crucial that the creators and developers of these technologies accurately represent and include all diverse groups.

For example, on January 3, I generated a series of about one hundred images pairing the word “beautiful” with the words “kids,” “baby,” and “girl” in DALL-E, a tool that uses AI to generate images from written descriptions.
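
For readers who want to try a similar experiment, a minimal Python sketch using OpenAI’s image-generation API might look like the following; the model name, image count, and size here are illustrative assumptions rather than my exact settings.

```python
# A minimal sketch of reproducing this kind of prompt experiment.
# Assumes the `openai` Python package (v1 or later) and an
# OPENAI_API_KEY set in the environment; the model name, image
# count, and size are illustrative assumptions, not exact settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = ["beautiful baby", "beautiful kids", "beautiful girls"]
image_urls = []

for prompt in prompts:
    # DALL-E 2 allows up to 10 images per request via the n parameter.
    response = client.images.generate(
        model="dall-e-2",
        prompt=prompt,
        n=10,
        size="512x512",
    )
    image_urls.extend(image.url for image in response.data)

# The URLs can then be downloaded and reviewed by hand to tally
# how often Black babies, kids, and girls appear in the results.
print(f"Generated {len(image_urls)} images across {len(prompts)} prompts")
```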

Unfortunately, hardly any Black babies, kids, or girls were represented in the art generated for the terms “beautiful baby,” “beautiful kids,” or “beautiful girls.”

The word “beautiful” is often used to describe art and photographs, and it can be especially significant when it is used to describe images of Black people. There are a few reasons why this is the case.

Historically, Black people have often been marginalized and excluded from mainstream definitions of beauty. This has contributed to harmful stereotypes and biases about Black people’s appearance.

When Black youth are rarely represented in a positive light in mainstream media and art, it can be difficult for them to feel confident and valued in society.

This lack of representation is not only harmful and offensive, but it also negatively impacts the development and accuracy of AI systems.

I believe that when there is not enough representation in the data sets, or among those involved in the machine learning process, such as developers and stakeholders, bias occurs in the AI.

In other words, if the people creating and inputting data into these systems are not diverse and representative of different cultures and communities, the resulting AI will also be biased and exclude certain groups.
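
As a toy illustration of what representation in the data means in practice, the short sketch below tallies demographic labels in a hypothetical training set; the groups and counts are invented for demonstration only.

```python
# A toy illustration of checking a data set for representation.
# The demographic labels and counts below are invented for
# demonstration; real training data and labels would differ.
from collections import Counter

training_labels = (
    ["white"] * 800 + ["Black"] * 50 + ["Asian"] * 100 + ["Latino"] * 50
)

counts = Counter(training_labels)
total = len(training_labels)

for group, count in counts.most_common():
    # A group that makes up only 5% of the data gives a model far
    # fewer chances to learn that group's features well, which is
    # one way bias creeps into the resulting AI.
    print(f"{group:>8}: {count:4d} examples ({count / total:.0%})")
```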

This is especially concerning in the case of facial recognition technology, which has been shown to have significant biases against people of color.

For example, in a study conducted by the National Institute of Standards and Technology (NIST), it was found that “most facial recognition algorithms were more accurate for lighter-skinned males than darker-skinned females” (NIST, 2019).

The technology is more likely to recognize and identify white men accurately while consistently misidentifying and excluding Black women.

These biases and exclusions have real-life consequences, as seen in the case of biometric border control systems. In 2018, the Electronic Frontier Foundation (EFF) reported that “customs officers have been using facial recognition to screen travelers at border crossings, including US citizens, for several years” (EFF, 2018).

However, the technology has been shown to have higher error rates for people of color, leading to false positives and wrongful detentions.

The lack of diversity in the development and creation of AI also perpetuates harmful stereotypes and reinforces systemic racism.

For example, the lack of beautiful Black babies, kids, and girls in the DALL-E-generated images reinforces the harmful stereotype that beauty exists only in certain races and further perpetuates the exclusion of Black women and children from societal beauty standards.

Diverse developers and scientists must be included in creating and developing AI systems to ensure that the technology accurately represents and includes all groups.
