CONTROVERSIES ON ALGORITHMIC HARMS: corporate discourses on coded discrimination
DOI:
https://doi.org/10.20873/uft.2447-4266.2020v6n4a1pt
Keywords:
Algorithms; Algorithmic Auditing; Explainability; Technology Journalism; Platforms.
Abstract
Discriminatory impacts and harms caused by algorithmic systems have opened discussions about the scope of responsibility of communication technology and artificial intelligence companies. The article examines public controversies triggered by eight cases of algorithmic harm and discrimination that prompted public responses from technology companies, addressing the efforts these companies make to frame the debate about responsibility in the planning, training, and implementation of their systems. It then discusses how the commercial companies that develop these systems defend their opacity, invoking prerogatives such as “industry secrets” and algorithmic inscrutability.
License
Authors who publish in this journal agree to the following terms:
1. Authors retain copyright and grant the journal, free of charge, the right of first publication, with the work simultaneously licensed under the Creative Commons Attribution License (CC BY-NC 4.0), which permits sharing the work with acknowledgement of its authorship and of its initial publication in this journal.
Read the full copyright terms here.