LEADER 00000cam a22000007a 4500
001 OR_on1285525864
003 OCoLC
005 20231017213018.0
006 m o d
007 cr cnu||||||||
008 211117s2021 xx o 000 0 eng d
040 __ |a AU@ |b eng |c AU@ |d ORMDA |d OCLCO |d OCLCF |d OCLCQ
020 __ |z 9781098109066
024 8_ |a 9781098109073
029 0_ |a AU@ |b 000070164780
029 1_ |a AU@ |b 000073556358
035 __ |a (OCoLC)1285525864
037 __ |a 9781098109073 |b O'Reilly Media
050 _4 |a Q325.5
082 04 |a 006.3/1 |2 23
049 __ |a UAMI
100 1_ |a Eovito, Austin, |e author.
245 10 |a Language Models in Plain English |h [electronic resource] / |c Eovito, Austin.
250 __ |a 1st edition.
264 _1 |b O'Reilly Media, Inc., |c 2021.
300 __ |a 1 online resource (65 pages)
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
347 __ |a text file
520 __ |a Recent advances in machine learning have lowered the barriers to creating and using ML models, but understanding what these models are doing has only become more difficult. We discuss technological advances with little understanding of how they work and struggle to develop a comfortable intuition for new functionality. In this report, authors Austin Eovito and Marina Danilevsky from IBM focus on how to think about neural network-based language model architectures. They guide you through various models (neural networks, RNN/LSTM, encoder-decoder, attention/transformers) to convey a sense of their abilities without getting entangled in the complex details. The report uses simple examples of how humans approach language in specific applications to explore and compare how different neural network-based language models work, empowering you to better understand how machines understand language. You will: dive deep into the basic task of a language model, predicting the next word, and use it as a lens to understand neural network language models; explore encoder-decoder architecture through abstractive text summarization; use machine translation to understand the attention mechanism and transformer architecture; and examine the current state of machine language understanding to discern what these language models are good at, along with their risks and weaknesses.
542 __ |f Copyright © O'Reilly Media, Inc.
550 __ |a Made available through: Safari, an O'Reilly Media Company.
588 __ |a Online resource; title from title page (viewed October 25, 2021).
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
650 _0 |a Machine learning.
650 _6 |a Apprentissage automatique.
650 _7 |a Machine learning. |2 fast |0 (OCoLC)fst01004795
700 1_ |a Danilevsky, Marina, |e author.
710 2_ |a Safari, an O'Reilly Media Company.
856 40 |u https://learning.oreilly.com/library/view/~/9781098109073/?ar |z Full text (requires prior registration with an institutional email address)
994 __ |a 92 |b IZTAP