
Can we solve AI's 'trust problem'? To address users' wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification

Bibliographic Details
Main Author: Davenport, Thomas H., 1954- (Author)
Format: Electronic eBook
Language: English
Published: [Cambridge, Massachusetts] : MIT Sloan Management Review, [2018]
Subjects:
Online Access: Full text (requires prior registration with an institutional email address)
Description
Summary: Many people don't trust decisions, answers, or recommendations from artificial intelligence. To address that problem, makers of AI applications and systems should stop overpromising, be more transparent about how systems are used, and consider third-party certification.
Notes: Place of publication from publisher's website. - Adapted from the author's The AI advantage (MIT Press, 2018). - "Reprint #60217." - Includes bibliographical references. - Description based on online resource; title from cover (Safari, viewed April 29, 2019).
Physical Description: 1 online resource (1 volume)