Master big data ingestion and analytics with Flume, Sqoop, Hive and Spark /

"In this course, you will start by learning about the Hadoop Distributed File System (HDFS) and the most common Hadoop commands required to work with HDFS. Next, you'll be introduced to Sqoop Import, which will help you gain insights into the lifecycle of the Sqoop command and how to use t...

Descripción completa

Bibliographic Details
Classification: Electronic Book
Other Authors: Kaur, Navdeep (Speaker)
Format: Electronic Video
Language: English
Published: [Place of publication not identified] : Packt Publishing, 2019.
Subjects:
Online Access: Full text (Requires prior registration with an institutional email)

MARC

LEADER 00000cgm a2200000 i 4500
001 OR_on1144107815
003 OCoLC
005 20231017213018.0
006 m o c
007 cr cna||||||||
007 vz czazuu
008 200311s2019 xx 339 o vleng d
040 |a UMI  |b eng  |e rda  |e pn  |c UMI  |d OCLCF  |d ERF  |d OCLCO  |d OCLCQ  |d OCLCO 
035 |a (OCoLC)1144107815 
037 |a CL0501000103  |b Safari Books Online 
050 4 |a QA76.76.A65 
049 |a UAMI 
100 1 |a Kaur, Navdeep,  |e speaker. 
245 1 0 |a Master big data ingestion and analytics with Flume, Sqoop, Hive and Spark /  |c Navdeep Kaur. 
264 1 |a [Place of publication not identified] :  |b Packt Publishing,  |c 2019. 
300 |a 1 online resource (1 streaming video file (5 hr., 38 min., 37 sec.)) 
336 |a two-dimensional moving image  |b tdi  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
337 |a video  |b v  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
511 0 |a Presenter, Navdeep Kaur. 
500 |a Title from resource description page (Safari, viewed March 11, 2020). 
520 |a "In this course, you will start by learning about the Hadoop Distributed File System (HDFS) and the most common Hadoop commands required to work with HDFS. Next, you'll be introduced to Sqoop Import, which will help you gain insights into the lifecycle of the Sqoop command and how to use the import command to migrate data from MySQL to HDFS, and from MySQL to Hive. In addition to this, you will get up to speed with Sqoop Export for migrating data effectively, along with using Apache Flume to ingest data. As you progress, you will delve into Apache Hive, external and managed tables, working with different files, and Parquet and Avro. Toward the concluding section, you will focus on Spark DataFrames and Spark SQL. By the end of this course, you will have gained comprehensive insights into big data ingestion and analytics with Flume, Sqoop, Hive, and Spark."--Resource description page 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
630 0 0 |a Apache Hadoop. 
630 0 7 |a Apache Hadoop.  |2 fast  |0 (OCoLC)fst01911570 
650 0 |a Big data. 
650 6 |a Données volumineuses. 
650 7 |a Big data.  |2 fast  |0 (OCoLC)fst01892965 
655 4 |a Electronic videos. 
856 4 0 |u https://learning.oreilly.com/videos/~/9781839212734/?ar  |z Texto completo (Requiere registro previo con correo institucional) 
994 |a 92  |b IZTAP