Can we solve AI's 'trust problem'? To address users' wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification

Bibliographic Details
Main Author: Davenport, Thomas H., 1954- (Author)
Format: Electronic eBook
Language: English
Published: [Cambridge, Massachusetts]: MIT Sloan Management Review, [2018]
Subjects:
Online Access: Full text (requires prior registration with an institutional email)
Description
Summary:Many people don't trust decisions, answers, or recommendations from artificial intelligence. To address that problem, makers of AI applications and systems should stop overpromising, be more transparent about how systems are used, and consider third-party certification.
Item Description: Place of publication from publisher's website. - Adapted from the author's The AI advantage (MIT Press, 2018). - "Reprint #60217." - Includes bibliographical references. - Description based on online resource; title from cover (Safari, viewed April 29, 2019).
Physical Description: 1 online resource (1 volume)