Abstract
Technology, particularly Artificial Intelligence (AI), is often presented as having the potential to revolutionize the future. It is crucial to recognize, however, that AI is not free of bias. AI learns from data, and data reflects our history, which has been structurally organized around racist and misogynist ideologies. This study explores the issue of bias in AI, with a focus on facial recognition. We start from the observation that the structure of an algorithm is not in itself racist or sexist; rather, as O'Neil (2017) points out, the data that feeds these algorithms incorporates our past, including its dark aspects of discrimination and prejudice, making such systems a new expression of racism (LIPPOLD; FAUSTINO, 2022) and of the racialization of subalternized bodies. One of the greatest challenges, therefore, is mitigating these biases and prejudices, a problem bound up with the auditing and transparency of these systems (VIEIRA, 2023). Although technological biases, grounded in a supposed white, male, and Western supremacy, existed before the development of algorithms, we work with the following question: how could algorithms, through their applications, rid humanity of these biases? Is such a future possible, or is AI doomed to perpetuate these biases under the guise of technological neutrality? This paper aims to shed light on these questions and to contribute to a future in which AI is used equitably, promoting futures grounded in social justice, feminist agendas, and freedom from racial discrimination.
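
To make the auditing challenge concrete, the sketch below illustrates, in Python, the kind of disaggregated evaluation that studies such as Buolamwini and Gebru's Gender Shades perform at scale: computing a system's error rate separately for each demographic group and measuring the gap between the best- and worst-served groups. Every record below is a hypothetical placeholder invented for illustration, not data from any real system, and the sketch is not this paper's method.

# A minimal, hypothetical disparity audit; all records are invented.
from collections import defaultdict

# Each record: (demographic group, ground-truth label, model prediction).
records = [
    ("lighter-skinned men",  "match",    "match"),
    ("lighter-skinned men",  "no-match", "no-match"),
    ("darker-skinned women", "match",    "no-match"),
    ("darker-skinned women", "no-match", "match"),
    ("darker-skinned women", "match",    "match"),
]

def error_rates_by_group(rows):
    """Share of records the model got wrong, computed per group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, prediction in rows:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates_by_group(records)
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} error rate")

# The audit flags the system when the gap between the best- and
# worst-served groups exceeds a chosen fairness threshold.
print(f"disparity gap: {max(rates.values()) - min(rates.values()):.0%}")

Disaggregated evaluation of exactly this kind is what revealed, in Gender Shades, commercial gender classifiers with error rates below 1% for lighter-skinned men but as high as 34.7% for darker-skinned women (BUOLAMWINI; GEBRU, 2018).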
References
ABBAS DA SILVA, Lorena; FRANQUEIRA, Bruna Diniz; HARTMANN, Ivar A. O que os olhos não veem, as câmeras monitoram: reconhecimento facial para segurança pública e regulação na América Latina. Revista Digital de Direito Administrativo, v. 8, n. 1, p. 171–204, 2021. Available at: https://revistas.usp.br/rdda/article/view/173903. Accessed: 19 May 2025.
ADAM, Alison. Constructions of gender in the history of artificial intelligence. IEEE Annals of the History of Computing, v. 18, n. 3, p. 47–53, 1996. Available at: https://ieeexplore.ieee.org/abstract/document/511944. Accessed: 12 February 2021.
AHMED, Saadaldeen Rashid; NASSAR, Mahmoud; SATTAR, Awni; MAJZUB, Rania; ELSAYED, Muawya. Analysis survey on deepfake detection and recognition with convolutional neural networks. [S. l.], 2022. Available at: https://ieeexplore.ieee.org/document/9799858. Accessed: 29 June 2024.
ALVI, Mohsan; ZISSERMAN, Andrew; NELLÅKER, Christoffer. Turning a blind eye: explicit removal of biases and variation from deep neural network embeddings. [S. l.], 2018. Available at: https://openaccess.thecvf.com/content_ECCVW_2018/papers/11129/Alvi_Turning_a_Blind_Eye_Explicit_Removal_of_Biases_and_Variation_ECCVW_2018_paper.pdf. Accessed: 30 June 2024.
ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine bias. ProPublica, 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed: 17 April 2025.
ANGWIN, Julia; TOBIN, Ariana; VARNER, Madeleine. Facebook (still) letting housing advertisers exclude users by race. ProPublica, 2017. Available at: https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin. Accessed: 21 May 2024.
BAROCAS, Solon; SELBST, Andrew D. Big data’s disparate impact. California Law Review, v. 104, n. 3, p. 671–732, 2016. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899. Accessed: 17 April 2025.
BROUSSARD, Meredith. Artificial unintelligence. Cambridge: The MIT Press, 2018.
BROWNE, Jude; DRAGE, Eleanor; MCINERNEY, Kerry. Tech workers’ perspectives on ethical issues in AI development: foregrounding feminist approaches. Big Data & Society, v. 11, n. 1, 2024. Available at: https://www.repository.cam.ac.uk/items/26f0861d-9ed5-4ab8-a338-6480384ffe09. Accessed: 28 June 2024.
BUOLAMWINI, Joy; GEBRU, Timnit. Gender shades: intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, p. 77–91, 2018. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html. Accessed: 30 June 2024.
COSTA, Ramon Silva; KREMER, Bianca. Inteligência artificial e discriminação: desafios e perspectivas para a proteção de grupos vulneráveis diante das tecnologias de reconhecimento facial. Revista Brasileira de Direitos Fundamentais & Justiça, v. 16, n. 1, 2022. Available at: https://dfj.emnuvens.com.br/dfj/article/view/1316. Accessed: 19 May 2025.
DILLON, Sarah; COLLETT, Clementine. AI and gender: four proposals for future research. Apollo – University of Cambridge Repository, 2019.
DOLHANSKY, Brian; BITTON, Joanna; PFLAUM, Ben; LU, Jikuo; HOWES, Russ; WANG, Menglin; CANTON FERRER, Cristian. The DeepFake Detection Challenge (DFDC) dataset. arXiv, v. 4, 2020.
FEUERRIEGEL, Stefan; HARTMANN, Jochen; JANIESCH, Christian; ZSCHECH, Patrick. Generative AI. Business & Information Systems Engineering, v. 66, n. 1, p. 111–126, 2023. Available at: https://link.springer.com/article/10.1007/s12599-023-00834-7. Accessed: 30 June 2024.
GERRARD, Juliet A.; MACLAURIN, James; WALTON, Michael. By 2030, AI will contribute $15 trillion to the global economy. World Economic Forum, 2019. Available at: https://www.weforum.org/agenda/2019/08/by-2030-ai-will-contribute-15-trillion-to-the-global-economy/. Accessed: 24 May 2024.
GIBBS, Samuel. Google says sorry over racist Google Maps White House search results. The Guardian, 2015. Available at: https://www.theguardian.com/technology/2015/may/20/google-apologises-racist-google-maps-white-house-search-results. Accessed: 21 May 2024.
GROSS, Nicole. What ChatGPT tells us about gender: a cautionary tale about performativity and gender biases in AI. Social Sciences, v. 12, n. 8, p. 435, 2023. Available at: https://www.mdpi.com/2076-0760/12/8/435. Accessed: 28 June 2024.
INTERNATIONAL ORGANIZATION FOR STANDARDIZATION. ISO/IEC 22989:2022 – Information technology – Artificial intelligence – Artificial intelligence concepts and terminology. Geneva: ISO, 2022.
KAUR, Paramjit; KRISHAN, Kewal; SHARMA, Suresh K.; KANCHAN, Tanuj. Facial-recognition algorithms: a literature review. Medicine, Science and the Law, v. 60, n. 2, p. 131–139, 2020. Available at: https://journals.sagepub.com/doi/abs/10.1177/0025802419893168. Accessed: 30 June 2024.
KIM, Ahyeon; SONG, Haeyeon; LEE, Heekyung. Effects of gender and relationship type on the response to artificial intelligence. Cyberpsychology, Behavior, and Social Networking, v. 22, n. 4, p. 249–253, 2019.
LECUN, Yann; BENGIO, Yoshua; HINTON, Geoffrey. Deep learning. Nature, v. 521, n. 7553, p. 436–444, 2015.
LEE, Kai-Fu. AI superpowers: China, Silicon Valley, and the new world order. Boston: Houghton Mifflin Harcourt, 2018.
MAGNO, Madja Elayne da Silva Penha; BEZERRA, Josenildo Soares. Vigilância negra: o dispositivo de reconhecimento facial e a disciplinaridade dos corpos. Novos Olhares, v. 9, n. 2, p. 45–52, 2020. Available at: https://www.revistas.usp.br/novosolhares/article/view/165698/169548. Accessed: 30 June 2024.
MANCINI, Malena Beatriz. Deepfaked: propuesta de regulación de réplicas digitales. Repositorio UDESA, 2024. Available at: https://repositorio.udesa.edu.ar/jspui/handle/10908/23851. Accessed: 30 June 2024.
MARTIN, Noelle. Sexual predators edited my photos into porn – how I fought back. TEDxPerth, 2018. Available at: https://www.ted.com/talks/noelle_martin_sexual_predators_edited_my_photos_into_porn_how_i_fought_back. Accessed: 30 June 2024.
MEHRABI, Ninareh; MORSTATTER, Fred; SAXENA, Nripsuta; LERMAN, Kristina; GALSTYAN, Aram. A survey on bias and fairness in machine learning. ACM Computing Surveys, v. 54, n. 6, p. 1–35, 2021. Available at: https://dl.acm.org/doi/10.1145/3457607. Accessed: 17 April 2025.
METZ, Cade. Who is making sure the A.I. machines aren’t racist? The New York Times, 15 March 2021. Available at: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html. Accessed: 22 May 2024.
MIT TECH REVIEW. Novo aplicativo de inteligência artificial coloca mulheres em vídeos pornôs com um clique. MIT Technology Review, 2021. Available at: https://mittechreview.com.br/novo-aplicativo-de-inteligencia-artificial-coloca-mulheres-em-videos-pornos-com-um-clique/. Accessed: 28 June 2024.
MIT TECH REVIEW. Taylor Swift: mais uma vítima do deepfake. MIT Technology Review, 2024. Available at: https://mittechreview.com.br/taylor-swift-mais-uma-vitima-do-deepfake/. Accessed: 30 June 2024.
NOBLE, Safiya Umoja. Algoritmos da opressão. São Paulo: Editora Rua do Sabão, 2022.
O’NEIL, Cathy. Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown, 2016.
O GLOBO. Busca por termo racista no Google Maps mostra Casa Branca, e site pede desculpas. O Globo, 2015. Available at: https://oglobo.globo.com/economia/busca-por-termo-racista-no-google-maps-mostra-casa-branca-site-pede-desculpas-16210255. Accessed: 30 June 2024.
ÖHMAN, Carl. Introducing the pervert’s dilemma: a contribution to the critique of deepfake pornography. Ethics and Information Technology, v. 22, 2019.
RANA, Md Shohel; NOBI, Mohammad Nur; MURALI, Beddhu; SUNG, Andrew H. Deepfake detection: a systematic literature review. IEEE Access, v. 10, 2022.
RICH, Elaine. Artificial intelligence and the humanities. Computers and the Humanities, v. 19, n. 2, p. 117–122, 1985. Available at: https://www.jstor.org/stable/30204398. Accessed: 20 June 2024.
RÖHE, Anderson; SANTAELLA, Lucia. Prognósticos das deepfakes na política eleitoral. Organicom, v. 21, n. 44, p. 187–196, 2024. Available at: https://revistas.usp.br/organicom/article/view/221294. Accessed: 30 June 2024.
RUSSELL, Stuart; NORVIG, Peter. Artificial intelligence: a modern approach. Hoboken: Pearson, 2021.
SALAS, Javier. Google conserta seu algoritmo “racista” apagando os gorilas. El País, 2018. Available at: https://brasil.elpais.com/brasil/2018/01/14/tecnologia/1515955554_803955.html. Accessed: 21 May 2024.
SHINDE, P. P.; SHAH, S. A. A review of machine learning and deep learning applications. [S. l.], 2018. Available at: https://ieeexplore.ieee.org/document/8697857. Accessed: 17 April 2025.
SILVA, Tarcízio. Racismo algorítmico. São Paulo: Edições Sesc SP, 2022.
SILVA, Rosane Leal da; SILVA, Fernanda dos Santos Rodrigues. Reconhecimento facial e segurança pública: os perigos do uso da tecnologia no sistema penal seletivo brasileiro. [S. l.: s. n.], 2019. Available at: https://www.ufsm.br/app/uploads/sites/563/2019/09/5.23.pdf. Accessed: 30 June 2024.
SULEIMENOV, Ibragim E.; ZHANBURGIN, Yerbolat A.; AIDARKHANOV, Almas; MUKHANBETZHANOV, Nurlybek. Artificial intelligence. Proceedings of the 2020 6th International Conference on Computer and Technology Applications, [S. l.], 2020.
SUVOROVA, Inna A. Deepfake pornography as a male gaze on fan culture. arXiv (Cornell University), [S. l.], 2022. Available at: https://arxiv.org/abs/2202.00374. Accessed: 30 June 2024.
TONG, Anna. “AI godfather”, others urge more deepfake regulation in open letter. Reuters, 2024. Available at: https://www.reuters.com/technology/cybersecurity/ai-godfather-others-urge-more-deepfake-regulation-open-letter-2024-02-21/. Accessed: 30 June 2024.
TORKINGTON, Simon. The US has plans to tackle AI-generated deepfakes. World Economic Forum, 2024. Available at: https://www.weforum.org/agenda/2024/02/ai-deepfakes-legislation-trust/. Accessed: 30 June 2024.
TOUPIN, Sophie. Shaping feminist artificial intelligence. New Media & Society, v. 26, n. 1, 2023.
UK GOVERNMENT. Government cracks down on “deepfakes” creation. UK Government, 2024. Available at: https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation. Accessed: 30 June 2024.
UNITED NATIONS. India: attacks against woman journalist Rana Ayyub must stop – UN experts. [S. l.], 2022. Available at: https://www.ohchr.org/en/press-releases/2022/02/india-attacks-against-woman-journalist-rana-ayyub-must-stop-un-experts. Accessed: 30 June 2024.
VINCENT, James. Twitter taught Microsoft’s AI chatbot to be a racist in less than a day. The Verge, 2016. Available at: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. Accessed: 22 May 2024.
WAKABAYASHI, Daisuke. Lawsuit accuses Google of bias against Black employees. The New York Times, 18 March 2022. Section: Technology. Available at: https://www.nytimes.com/2022/03/18/technology/google-discrimination-suit-black-employees.html. Accessed: 22 May 2024.
WATSON, Angus. Teenager questioned after explicit AI deepfakes of dozens of schoolgirls shared online. CNN, 2024. Available at: https://edition.cnn.com/2024/06/13/australia/australia-boy-arrested-deepfakes-schoolgirls-intl-hnk/index.html. Accessed: 30 June 2024.
ZAFEIRIOU, Stefanos; ZHANG, Cha; ZHANG, Zhengyou. A survey on face detection in the wild: past, present and future. Computer Vision and Image Understanding, v. 138, p. 1–24, 2015.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2025 Ellen Gomes Passos, Cícero Passos Lisboa, Joice da Silva Ferreira