
Catalyst Conference : NLP with ChatGPT (and other large language models) : transformer architectures from development to deployment.


Bibliographic Details
Classification: Electronic Book
Format: Electronic Video
Language: English
Published: [Place of publication not identified] : Addison-Wesley Professional, [2023]
Edition: [First edition].
Subjects:
Online access: Full text (requires prior registration with an institutional email address)

MARC

LEADER 00000cgm a22000007i 4500
001 OR_on1375495514
003 OCoLC
005 20231017213018.0
006 m o c
007 vz czazuu
007 cr cnannnuuuuu
008 230411s2023 xx 173 o vleng d
040 |a ORMDA  |b eng  |e rda  |e pn  |c ORMDA  |d OCLCF  |d OCLCO 
019 |a 1390762202 
020 |a 9780138224912  |q (electronic video) 
020 |a 0138224919  |q (electronic video) 
035 |a (OCoLC)1375495514  |z (OCoLC)1390762202 
037 |a 9780138224912  |b O'Reilly Media 
050 4 |a Q335 
082 0 4 |a 006.3  |2 23/eng/20230411 
049 |a UAMI 
245 0 0 |a Catalyst Conference :  |b NLP with ChatGPT (and other large language models) : transformer architectures from development to deployment. 
246 3 0 |a NLP with ChatGPT (and other large language models) :  |b transformer architectures from development to deployment 
250 |a [First edition]. 
264 1 |a [Place of publication not identified] :  |b Addison-Wesley Professional,  |c [2023] 
300 |a 1 online resource (1 video file (2 hr., 53 min.)) :  |b sound, color. 
306 |a 025300 
336 |a two-dimensional moving image  |b tdi  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
344 |a digital  |2 rdatr 
347 |a video file  |2 rdaft 
380 |a Instructional films  |2 lcgft 
511 0 |a Jon Krohn, host. 
520 |a 3 Hours of Video.
Discover the astounding state of the art in Natural Language Processing (NLP) that is enabled by Large Language Models (LLMs) like ChatGPT and T5. Understand Attention and Transformers, as well as how these essential modern NLP concepts relate to Deep Learning and LLMs. Survey the staggeringly broad range of LLMs' natural-language capabilities. Learn how to use LLMs in practice, including how to train and deploy them into production NLP applications.
Large Language Models (LLMs) such as the GPT series of architectures have dramatically accelerated the natural language processing (NLP) capabilities of machines in recent years. These capabilities, facilitated by LLMs' hundreds of billions of model parameters, approach or exceed human-level performance on a staggeringly broad set of natural-language tasks -- often without any task-specific training being required. In this event, leading subject-matter experts introduce LLMs and their associated concepts (e.g., Transformers, Attention), survey LLMs' breadth of capabilities, and provide best practices on how to leverage LLMs efficiently and confidently in order to supercharge your own natural-language applications.
AI Catalyst: The AI Catalyst Conference from Pearson brings together leading voices in AI to make complex topics understandable and actionable. Host Jon Krohn guides the conversation and explains how to bring state-of-the-art methods into practice. Gain new information or a different perspective to make an impact in your job and in the world.
By the end of the course, you'll understand: Large Language Models (LLMs); Attention; Transformers; and the breadth of state-of-the-art NLP applications.
And you'll be able to: select an appropriate LLM architecture for a given NLP application; prompt pre-trained LLMs like ChatGPT and GPT-3 to effectively produce your desired output; train and deploy LLMs into production NLP applications; and potentially accelerate your data science roadmap by months or years by leveraging a pre-trained LLM instead of needing to train individual task-specific models from scratch yourself.
This course is for you because you'd like to appreciate the staggering breadth of NLP and Deep Learning capabilities, or because you are a data scientist, software developer, ML engineer, or other technical professional who would like to be able to incorporate new NLP approaches into real-world applications.
Prerequisites: All you need is an interest in how AI can impact you and your organization.
Recommended follow-up:
Read: Quick Start Guide to LLMs by Sinan Ozdemir: https://learning.oreilly.com/library/view/quick-start-guide/9780138199425/
Attend: Deploying GPT and Large Language Models by Sinan Ozdemir: https://learning.oreilly.com/search/?q=Sinan%20Ozdemir&type=live-event-series&rows=10&publishers=Pearson
Attend: Hands-on Natural Language Generation and GPT by Sinan Ozdemir: https://learning.oreilly.com/search/?q=Sinan%20Ozdemir&type=live-event-series&rows=10&publishers=Pearson
Read: Chapter 15 of Learning Deep Learning by Dr. Magnus Ekman: https://learning.oreilly.com/library/view/learning-deep-learning/9780137470198/
Watch: NLP using Transformer Architectures by Aurélien Géron: https://learning.oreilly.com/videos/natural-language-processing/0636920373605/0636920373605-video329383/
For a more general introduction to deep learning, check out the Deep Learning: The Complete Guide playlist by Dr. Jon Krohn: https://learning.oreilly.com/playlists/a40ea8fe-994d-4370-8b29-0d6c0f519a89/
Course Schedule:
Jon Krohn: Welcome.
Sinan Ozdemir: Introduction to Large Language Models (30 minutes). We can't talk about state-of-the-art Natural Language Processing (NLP) without talking about Transformers and large language models (LLMs) like ChatGPT, BERT, GPT, and T5. Sinan explores a brief history of modern NLP up to the rise of attention-based models and Transformers, including the proliferation of LLMs that continues to this day, along with all of the good and sometimes the not-so-good outcomes. He overviews the major architectures that influence the tasks and models that dominate NLP while peeking under the hood to understand how LLMs learn to read, write, and do so much more. Sinan Ozdemir is an active lecturer focusing on large language models and a former lecturer of data science at Johns Hopkins University. He is the author of multiple textbooks on data science and machine learning, including The Principles of Data Science. Sinan is the founder and CTO of LoopGenius, where he uses state-of-the-art AI to help people create and run their businesses. He holds a master's degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco.
Jon and Sinan: Discussion + Q&A.
Melanie Subbiah: The Broad Range of LLM Capabilities. Large language models have unlocked a huge number of exciting applications in the real world that were not possible before -- capabilities that are creative, useful, and profitable. Through interactive demos of GPT-3, Melanie explores a broad range of these use cases, giving participants more intuition for how large language models have been effective. Melanie Subbiah is a third-year PhD student in NLP at Columbia University, where she researches narrative summarization and aspects of online text safety. Before starting graduate school, she was one of the lead authors on the GPT-3 paper, building out the evaluation suite for that work and helping early customers use the OpenAI API for their projects. Prior to that, she researched autonomous systems at Apple. Melanie obtained her bachelor's degree in computer science from Williams College.
Jon and Melanie: Discussion + Q&A.
Shaan Khosla: Training and Deploying LLMs. Shaan covers practical LLM tips across the full NLP lifecycle, including topics such as efficient training practices, validation methods, and productionization considerations to ensure your design is optimized for implementation within your real-world natural-language application. Shaan Khosla is a data scientist at Nebula, where he researches, designs, and develops NLP models. He previously worked at Bank of America on an internal machine learning consulting team, where he used LLMs to build proof-of-concept systems for various lines of business. Shaan holds a BSBA in Computer Science and Finance from the University of Miami and is currently completing a master's degree in Data Science at NYU. He has published multiple peer-reviewed papers applying LLMs, topic modeling, and recommendation systems to the fields of biochemistry and healthcare.
Jon and Shaan: Discussion + Q&A.
Jon Krohn: Closing Remarks.
About the Host: Jon Krohn is Co-Founder and Chief Data Scientist at the machine learning company Nebula. He authored the book Deep Learning Illustrated, an instant #1 bestseller that was translated into seven languages. He is also the host of SuperDataScience, the data science industry's most listened-to podcast. Jon is renowned for his compelling lectures, which he offers at leading universities and conferences, as well as via his award-winning YouTube channel. He holds a PhD from Oxford and has been publishing on machine learning in prominent academic journals since 2010.
588 |a Online resource; title from title details screen (O'Reilly, viewed April 11, 2023). 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
630 0 0 |a ChatGPT. 
650 0 |a Artificial intelligence. 
650 0 |a Natural language processing (Computer science) 
650 6 |a Intelligence artificielle. 
650 6 |a Traitement automatique des langues naturelles. 
650 7 |a artificial intelligence.  |2 aat 
650 7 |a Artificial intelligence  |2 fast 
650 7 |a Natural language processing (Computer science)  |2 fast 
655 7 |a Instructional films  |2 fast 
655 7 |a Internet videos  |2 fast 
655 7 |a Nonfiction films  |2 fast 
655 7 |a Instructional films.  |2 lcgft 
655 7 |a Nonfiction films.  |2 lcgft 
655 7 |a Internet videos.  |2 lcgft 
655 7 |a Films de formation.  |2 rvmgf 
655 7 |a Films autres que de fiction.  |2 rvmgf 
655 7 |a Vidéos sur Internet.  |2 rvmgf 
700 1 |a Krohn, Jon,  |e host. 
710 2 |a Addison-Wesley Professional (Firm),  |e publisher. 
856 4 0 |u https://learning.oreilly.com/videos/~/9780138224912/?ar  |z Texto completo (Requiere registro previo con correo institucional) 
994 |a 92  |b IZTAP