
When algorithms rule, values can wither : building responsible AI systems with recognizing that technology solutions implicitly prioritize efficiency /

Organizations that aim to develop and deploy responsible AI systems must begin by considering what implicit and systemic biases reside in a technological system itself. Not doing so can result in catastrophic failures resulting from unintended consequences of the system's normal operations. The authors advise managers on how to mitigate the risk that algorithmic systems can cause organizational or social harm if left unchecked.


Bibliographic Details
Classification: Electronic Book
Main Authors: Lindebaum, Dirk (Author), Glaser, Vern (Author), Moser, Christine (Author), Ashraf, Mehreen (Author)
Format: Electronic eBook
Language: English
Published: [Cambridge, Massachusetts] : MIT Sloan Management Review, [2022]
Edition: [First edition].
Subjects:
Online Access: Full text (requires prior registration with an institutional email address)

MARC

LEADER 00000cam a22000007i 4500
001 OR_on1361715337
003 OCoLC
005 20231017213018.0
006 m o d
007 cr cnu|||unuuu
008 230118s2022 mau ob 000 0 eng d
040 |a ORMDA  |b eng  |e rda  |e pn  |c ORMDA  |d OCLCF 
024 8 |a 53863MIT64223 
029 1 |a AU@  |b 000073337208 
035 |a (OCoLC)1361715337 
037 |a 53863MIT64223  |b O'Reilly Media 
050 4 |a Q335 
082 0 4 |a 006.301  |2 23/eng/20230118 
049 |a UAMI 
100 1 |a Lindebaum, Dirk,  |e author. 
245 1 0 |a When algorithms rule, values can wither :  |b building responsible AI systems with recognizing that technology solutions implicitly prioritize efficiency /  |c Dirk Lindebaum, Vern Glaser, Christine Moser, and Mehreen Ashraf. 
250 |a [First edition]. 
264 1 |a [Cambridge, Massachusetts] :  |b MIT Sloan Management Review,  |c [2022] 
300 |a 1 online resource (6 pages) 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
500 |a "Reprint 64223." 
504 |a Includes bibliographical references. 
520 |a Organizations that aim to develop and deploy responsible AI systems must begin by considering what implicit and systemic biases reside in a technological system itself. Not doing so can result in catastrophic failures resulting from unintended consequences of the system’s normal operations. The authors advise managers on how to mitigate the risk that algorithmic systems can cause organizational or social harm if left unchecked. 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
650 0 |a Artificial intelligence  |x Social aspects. 
650 0 |a Algorithms  |x Social aspects. 
650 7 |a Artificial intelligence  |x Social aspects.  |2 fast  |0 (OCoLC)fst00817279 
700 1 |a Glaser, Vern,  |e author. 
700 1 |a Moser, Christine,  |e author. 
700 1 |a Ashraf, Mehreen,  |e author. 
856 4 0 |u https://learning.oreilly.com/library/view/~/53863MIT64223/?ar  |z Texto completo (Requiere registro previo con correo institucional) 
994 |a 92  |b IZTAP