This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. Implementing machine learning algorithms in a distributed environment has therefore become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementation, the book introduces three essential techniques for designing gradient optimization algorithms that train distributed machine learning models: parallel strategies, data compression, and synchronization protocols.
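As an illustration of the data-compression idea mentioned above (this sketch is not taken from the book), a common approach in distributed gradient optimization is top-k gradient sparsification: each worker transmits only its k largest-magnitude gradient entries as (index, value) pairs, and the receiver reconstructs a sparse gradient.

```python
# Illustrative sketch of top-k gradient sparsification, a typical
# data-compression technique in distributed gradient optimization.
# Function names here are hypothetical, chosen for clarity.

def compress_top_k(gradient, k):
    """Keep the k entries with the largest absolute value; drop the rest.

    Returns a list of (index, value) pairs -- the compressed message
    a worker would send instead of the full dense gradient.
    """
    indexed = sorted(enumerate(gradient), key=lambda iv: abs(iv[1]), reverse=True)
    return indexed[:k]

def decompress(pairs, length):
    """Rebuild a dense gradient vector, with zeros where entries were dropped."""
    dense = [0.0] * length
    for i, v in pairs:
        dense[i] = v
    return dense

grad = [0.1, -2.0, 0.05, 3.0, -0.2]
pairs = compress_top_k(grad, k=2)          # only 2 of 5 entries are sent
restored = decompress(pairs, len(grad))    # [0.0, -2.0, 0.0, 3.0, 0.0]
```

The trade-off is bandwidth versus accuracy: sending fewer entries reduces communication cost per iteration, at the price of a sparsified gradient update.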
Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
Product details
- Item number: 9789811634222
- Medium: Book
- ISBN: 978-981-16-3422-2
- Publisher: Springer Nature Singapore
- Publication date: 25.02.2023
- Language(s): English
- Edition: 1st edition 2022
- Series: Big Data Management
- Binding: Paperback
- Weight: 289 g
- Pages: 169
- Dimensions (W x H x D): 155 x 235 x 11 mm