Summary: "Word embeddings first emerged as a revolutionary technique in natural language processing (NLP) in the last decade, allowing machines to read large reams of unlabeled text and automatically answer analogical questions such as, 'What is to man as queen is to woman?'" Modern embeddings leverage advances in deep neural networks to be effective. Following the success of word embeddings, there have been massive efforts in both academia and industry to embed all kinds of data, including images, speech, video, entire sentences, phrases and documents, structured data, and even computer programs. These piecemeal approaches are now starting to converge, drawing on a similar mix of techniques. Mayank Kejriwal (USC Information Sciences Institute) explores the ongoing movement that's attempting to embed every conceivable kind of data, sometimes jointly, to build ever-more powerful predictive models. Mayank makes a business case for why you should care about embeddings and how you can position them as your organization's secret sauce within a broader AI strategy. This session is from the 2019 O'Reilly Artificial Intelligence Conference in San Jose, CA. --Resource description page