Distributed machine learning with Python : accelerating model training and serving with distributed systems /
Call Number: | Electronic Book |
Main Author: | |
Format: | Electronic eBook |
Language: | English |
Published: | Birmingham : Packt Publishing, Limited, 2022. |
Subjects: | |
Online Access: | Full text (requires prior registration with an institutional email) |
Summary: | Chapter 2: Parameter Server and All-Reduce -- Technical requirements -- Parameter server architecture -- Communication bottleneck in the parameter server architecture -- Sharding the model among parameter servers -- Implementing the parameter server -- Defining model layers -- Defining the parameter server -- Defining the worker -- Passing data between the parameter server and worker -- Issues with the parameter server -- The parameter server architecture introduces a high coding complexity for practitioners -- All-Reduce architecture -- Reduce -- All-Reduce -- Ring All-Reduce. |
Item Description: | Pros and cons of pipeline parallelism. |
Physical Description: | 1 online resource (284 pages) : color illustrations |
ISBN: | 1801817219 9781801817219 |