Azure Databricks Cookbook: Accelerate and Scale Real-Time Analytics Solutions Using the Apache Spark-Based Analytics Service

Get to grips with building and productionizing end-to-end big data solutions in Azure, and learn best practices for working with large datasets. Key Features: Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka Cluster to scale and analyze your projects and build pipelines. Use...


Bibliographic Details
Classification: Electronic book
Main authors: Raj, Phani (Author), Jaiswal, Vinod (Author)
Format: Electronic eBook
Language: English
Published: Birmingham: Packt Publishing, 2021.
Online access: Full text (requires prior registration with an institutional email address)
Table of Contents:
  • Cover
  • Title Page
  • Copyright and Credits
  • Contributors
  • Table of Contents
  • Preface
  • Chapter 1: Creating an Azure Databricks Service
  • Technical requirements
  • Creating a Databricks workspace in the Azure portal
  • Getting ready
  • How to do it...
  • How it works...
  • Creating a Databricks service using the Azure CLI (command-line interface)
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Creating a Databricks service using Azure Resource Manager (ARM) templates
  • Getting ready
  • How to do it...
  • How it works...
  • Adding users and groups to the workspace
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Creating a cluster from the user interface (UI)
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Getting started with notebooks and jobs in Azure Databricks
  • Getting ready
  • How to do it...
  • How it works...
  • Authenticating to Databricks using a PAT
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Chapter 2: Reading and Writing Data from and to Various Azure Services and File Formats
  • Technical requirements
  • Mounting ADLS Gen2 and Azure Blob storage to Azure DBFS
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Reading and writing data from and to Azure Blob storage
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Reading and writing data from and to ADLS Gen2
  • Getting ready
  • How to do it...
  • How it works...
  • Reading and writing data from and to an Azure SQL database using native connectors
  • Getting ready
  • How to do it...
  • How it works...
  • Reading and writing data from and to Azure Synapse SQL (dedicated SQL pool) using native connectors
  • Getting ready
  • How to do it...
  • How it works...
  • Reading and writing data from and to Azure Cosmos DB
  • Getting ready
  • How to do it...
  • How it works...
  • Reading and writing data from and to CSV and Parquet
  • Getting ready
  • How to do it...
  • How it works...
  • Reading and writing data from and to JSON, including nested JSON
  • Getting ready
  • How to do it...
  • How it works...
  • Chapter 3: Understanding Spark Query Execution
  • Technical requirements
  • Introduction to jobs, stages, and tasks
  • Getting ready
  • How to do it...
  • How it works...
  • Checking the execution details of all the executed Spark queries via the Spark UI
  • Getting ready
  • How to do it...
  • How it works...
  • Deep diving into schema inference
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Looking into the query execution plan
  • Getting ready
  • How to do it...
  • How it works...
  • How joins work in Spark
  • Getting ready
  • How to do it...
  • How it works...
  • There's more...
  • Learning about input partitions
  • Getting ready
  • How to do it...
  • How it works...
  • Learning about output partitions
  • Getting ready
  • How to do it...
  • How it works...
  • Learning about shuffle partitions
  • Getting ready
  • How to do it...
  • How it works...