In this short video, a Black actor plays a person waiting for an audition, alongside other participants of different skin colours who are unaware of the setup. After a brief acquaintance, the Black actor tells the others that he has received a text written in Lithuanian, pretends that he cannot read it, and asks the Lithuanian participants to translate it for him; in fact, the message contains unpleasant racist language. The participants' reactions vary: some refuse to translate it, believing it would do the Black actor no good; some look grim and, after some thought, soften the message with euphemistic wording; one child, on the other hand, translates it without pretending very much (though this child was not racist in any way; his age simply limited his expression). All of the participants are embarrassed, even slightly angry, that the Black actor has received this racist message. The negative and hurtful impact here is stereotyping and racism. As we can see from the video, every participant is sorry about the hateful comment and does nothing that would endorse such a lousy message.

Recommendation algorithms make "personalized ad delivery" possible by precisely matching content to viewers' habits and preferences. More and more ads can be recommended according to viewers' needs, and algorithmic tools play an essential role in calculating each viewer's potential value.
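To make "calculating the viewers' potential value" concrete, here is a minimal sketch of how a platform might score an ad against a viewer profile. The profile fields, the candidate ads, and the value formula are all hypothetical illustrations, not the actual logic of any real ad platform.

```python
# Hypothetical sketch: scoring ads for a viewer by interest overlap.
# Profile, ads, and the "potential value" formula are illustrative only.

def potential_value(viewer_interests, ad_topics, bid_per_click, base_click_rate=0.01):
    """Estimate how valuable showing this ad to this viewer might be.

    Overlap between the viewer's interests and the ad's topics raises the
    predicted click probability; value = predicted click rate * bid per click.
    """
    overlap = len(set(viewer_interests) & set(ad_topics))
    predicted_click_rate = base_click_rate * (1 + overlap)  # crude boost per shared topic
    return predicted_click_rate * bid_per_click

viewer = ["documentary", "social issues", "racism awareness"]
ads = {
    "anti-racism campaign": (["social issues", "racism awareness"], 0.50),
    "sports shoes":         (["sport", "running"], 0.80),
}

# Rank the ads by estimated value for this particular viewer.
for name, (topics, bid) in sorted(
        ads.items(),
        key=lambda kv: potential_value(viewer, kv[1][0], kv[1][1]),
        reverse=True):
    print(name, round(potential_value(viewer, topics, bid), 4))
```

Even a toy score like this already ranks the campaign ad above the shoe ad for this viewer, which is the sense in which the algorithm "knows" a viewer's potential value to an advertiser.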
Why is this video targeted at viewers?
Social media platforms hold viewers' browsing histories, and contemporary networks can also recognise and remember viewers' search preferences. Suppose a viewer has watched a great deal of content about racism. In that case, the platform absorbs this information and uses a quick algorithmic method to work out what the viewer is interested in. This also explains what happened when I searched for these racism videos on YouTube: afterwards, my YouTube homepage was filled with related videos about racism.
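A minimal sketch of that "quick algorithmic method" might look like the following: it counts the topic tags of videos in a made-up watch history and ranks candidate videos by how strongly they overlap with those tags. The titles, tags, and scoring rule are assumptions for illustration, not YouTube's actual recommendation system.

```python
from collections import Counter

# Hypothetical watch history: each watched video is represented by its topic tags.
watch_history = [
    {"racism", "social experiment"},
    {"racism", "documentary"},
    {"interview", "racism"},
]

# Candidate videos the platform could recommend next (invented titles and tags).
candidates = {
    "Hidden camera: translating a racist text": {"racism", "social experiment"},
    "Cute cat compilation":                     {"cats", "funny"},
    "History of civil rights":                  {"racism", "documentary", "history"},
}

# Build an interest profile by counting how often each tag appears in the history.
interest = Counter(tag for video in watch_history for tag in video)

def score(tags):
    """Sum the viewer's interest counts over a candidate video's tags."""
    return sum(interest[t] for t in tags)

# Recommend candidates in order of overlap with the interest profile.
for title, tags in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(score(tags), title)
```

Because the history is dominated by one topic, every highly ranked candidate is about that same topic, which is exactly the "homepage full of related videos" effect described above.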
In addition, the search engines built into social networking software have an automatic association (autocomplete) function. For example, when I search for "male" in Google, the first suggestion is "male gaze". These messages about discrimination are subliminally embedded in our networks, which is the racist ideology mentioned in Noble's book. Perhaps the people who create these search engine algorithms hold a racist ideology and try to influence searchers by setting up keywords and targeting them at the areas that interest each searcher.
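The "automatic association" here is autocomplete. The sketch below shows, under strong simplifying assumptions, how such suggestions can emerge purely from logged query frequencies: the query log is invented, and real search engines use far more complex and curated ranking, but the point is that biased inputs become biased suggestions without anyone hand-writing them.

```python
from collections import Counter

# Hypothetical query log: whatever users have typed most becomes the top suggestion.
query_log = [
    "male gaze", "male gaze", "male gaze",
    "male fashion", "male pattern baldness",
    "male gaze", "male fashion",
]

def suggest(prefix, log, k=3):
    """Return the k most frequent past queries that start with the given prefix."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("male", query_log))
# ['male gaze', 'male fashion', 'male pattern baldness']
```

Whether the bias comes from users' past queries, as in this sketch, or from deliberate choices by the people who build the ranking, the searcher only ever sees the finished suggestion.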

We assume that artificial intelligence is intelligent, but that is not always the case. In 2016, ProPublica published an investigation into a machine learning program that courts used to predict which defendants were likely to re-offend. The journalists found that the software rated Black people as higher risk than white people. Caliskan has researched Black women in the US, and the results show a significant link between the associations a machine learns and the keywords that surround what viewers search for. On the Internet, African Americans are more likely to have their names surrounded by "bad" words, not because they are bad people, but because users post nasty comments that lead the AI to update its learned associations.
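Caliskan and colleagues measured this kind of association using word embeddings. Below is a toy sketch of the idea: it compares how close different names sit to "pleasant" versus "unpleasant" words in a tiny, hand-made vector space. The names, word lists, and vectors are invented for illustration; real studies use embeddings trained on billions of words, but the mechanism is the same: if biased text surrounds certain names, the learned vectors pull those names toward the unpleasant words.

```python
import math

# Toy, hand-made "embeddings"; real studies learn these vectors from huge text corpora.
vectors = {
    "emily":     (0.9, 0.1),
    "greg":      (0.8, 0.2),
    "lakisha":   (0.2, 0.8),
    "jamal":     (0.1, 0.9),
    "wonderful": (1.0, 0.0),
    "joy":       (0.9, 0.1),
    "nasty":     (0.0, 1.0),
    "terrible":  (0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def association(word, pleasant, unpleasant):
    """Positive: the word sits closer to pleasant terms; negative: closer to unpleasant ones."""
    pos = sum(cosine(vectors[word], vectors[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(vectors[word], vectors[u]) for u in unpleasant) / len(unpleasant)
    return pos - neg

pleasant = ["wonderful", "joy"]
unpleasant = ["nasty", "terrible"]
for name in ["emily", "greg", "lakisha", "jamal"]:
    print(name, round(association(name, pleasant, unpleasant), 3))
```

In this made-up space the last two names score negative only because the toy vectors were placed near the unpleasant words; the lesson is that an embedding faithfully reproduces whatever associations its training text contains, hateful ones included.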
Algorithms reflect contemporary technological advances and algorithm makers' ability to push information that matches people's needs. That has made our lives easier and made us dependent on the Web. But when we rely on the associative search of some search engines, we inadvertently come to identify with the ideology of the algorithm makers, which can intrude on and shape our thoughts. This may be why the public sometimes feels uneasy about social software.