
Hadoop MapReduce v2 Cookbook: explore the Hadoop MapReduce v2 ecosystem to gain insights from very large datasets

If you are a Big Data enthusiast and wish to use Hadoop v2 to solve your problems, then this book is for you. It is aimed at Java programmers with little to moderate knowledge of Hadoop MapReduce, and is also a one-stop reference for developers and system administrators who want to quickly get up to speed...


Bibliographic Details
Classification: Electronic book
Main Author: Gunarathne, Thilina (Author)
Other Authors: Blaminsky, Jarek (Cover designer), Gordon, Edward (Editor), Lalwani, Puja (Editor), Paiva, Alfida (Editor), Subramanian, Laxmi (Editor)
Format: Electronic eBook
Language: English
Published: Birmingham, England : Packt Publishing, 2015.
Edition: Second edition.
Series: Community experience distilled.
Subjects:
Online Access: Full text
Table of Contents:
  • Cover; Copyright; Credits; About the Author; Acknowledgments; About the Reviewers; www.PacktPub.com; Table of Contents; Preface
  • Chapter 1: Getting Started with Hadoop v2; Introduction; Setting up Hadoop v2 on your local machine; Writing a WordCount MapReduce application, bundling it, and running it using Hadoop local mode; Adding a combiner step to the WordCount MapReduce program; Setting up HDFS; Setting up Hadoop YARN in a distributed cluster environment using Hadoop v2; Setting up Hadoop ecosystem in a distributed cluster environment using a Hadoop distribution; HDFS command-line file operations; Running the WordCount program in a distributed cluster environment; Benchmarking HDFS using DFSIO; Benchmarking Hadoop MapReduce using TeraSort
  • Chapter 2: Cloud Deployments - Using Hadoop YARN on Cloud Environments; Introduction; Running Hadoop MapReduce v2 computations using Amazon Elastic MapReduce; Saving money using Amazon EC2 Spot Instances to execute EMR job flows; Executing a Pig script using EMR; Executing a Hive script using EMR; Creating an Amazon EMR job flow using the AWS Command Line Interface; Deploying an Apache HBase cluster on Amazon EC2 using EMR; Using EMR bootstrap actions to configure VMs for the Amazon EMR jobs; Using Apache Whirr to deploy an Apache Hadoop cluster in a cloud environment
  • Chapter 3: Hadoop Essentials - Configurations, Unit Tests, and Other APIs; Introduction; Optimizing Hadoop YARN and MapReduce configurations for cluster deployments; Shared user Hadoop clusters - using Fair and Capacity schedulers; Setting classpath precedence to user-provided JARs; Speculative execution of straggling tasks; Unit testing Hadoop MapReduce applications using MRUnit; Integration testing Hadoop MapReduce applications using MiniYarnCluster; Adding a new DataNode; Decommissioning DataNodes; Using multiple disks/volumes and limiting HDFS disk usage; Setting the HDFS block size; Setting the file replication factor; Using the HDFS Java API
  • Chapter 4: Developing Complex Hadoop MapReduce Applications; Introduction; Choosing appropriate Hadoop data types; Implementing a custom Hadoop Writable data type; Implementing a custom Hadoop key type; Emitting data of different value types from a Mapper; Choosing a suitable Hadoop InputFormat for your input data format; Adding support for new input data formats - implementing a custom InputFormat; Formatting the results of MapReduce computations - using Hadoop OutputFormats; Writing multiple outputs from a MapReduce computation; Hadoop intermediate data partitioning; Secondary sorting - sorting Reduce input values; Broadcasting and distributing shared resources to tasks in a MapReduce job - Hadoop DistributedCache; Using Hadoop with legacy applications - Hadoop Streaming; Adding dependencies between MapReduce jobs; Hadoop counters for reporting custom metrics
  • Chapter 5: Analytics; Introduction
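The table of contents opens with a WordCount recipe and a combiner step. As a rough, Hadoop-free sketch of the map/combine/reduce idea behind those recipes (the class name, method shapes, and sample input below are illustrative, not taken from the book; a real Hadoop job would instead extend `Mapper` and `Reducer` from the `org.apache.hadoop.mapreduce` API):

```java
import java.util.*;
import java.util.stream.*;

// Conceptual WordCount sketch: the mapper emits (word, 1) pairs,
// and the combiner/reducer sums the counts per word. In Hadoop the
// combiner runs the same summing logic locally on each mapper's output
// before the shuffle, reducing the data sent to reducers.
public class WordCountSketch {
    // Mapper: tokenize one input line into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // Combiner/Reducer: sum counts per word (same logic for both roles).
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (var p : pairs) counts.merge(p.getKey(), p.getValue(), Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("to be or not to be", "to see or not to see");
        // Map phase over all input lines, then a single reduce.
        List<Map.Entry<String, Integer>> mapped = lines.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.toList());
        System.out.println(reduce(mapped)); // {be=2, not=2, or=2, see=2, to=4}
    }
}
```

Because summing is associative and commutative, the same `reduce` method can safely serve as both combiner and reducer, which is exactly the property the book's "Adding a combiner step" recipe relies on.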