LEADER 00000cam a22000001i 4500
001    OR_on1268134426
003    OCoLC
005    20231017213018.0
006    m d
007    cr |||||||||||
008    210709s2021 enk o 000 0 eng d
040    |a UKMGB |b eng |e rda |e pn |c UKMGB |d OCLCO |d EBLCP |d OCLCF |d UKAHL |d OCLCO |d OCLCQ |d IEEEE |d OCLCO
015    |a GBC1B2411 |2 bnb
016 7  |a 020259547 |2 Uk
020    |a 9781789618556 |q (ebook)
020    |a 178961855X
020    |z 9781789809718 (pbk.)
029 0  |a UKMGB |b 020259547
035    |a (OCoLC)1268134426
037    |a 9781789618556 |b Packt Publishing Pvt. Ltd
037    |a 10162379 |b IEEE
050  4 |a QA76.9.B45
082 04 |a 005.7 |2 23/eng/20230811
049    |a UAMI
100 1  |a Raj, Phani, |e author.
245 10 |a Azure Databricks cookbook : |b accelerate and scale real-time analytics solutions using the Apache Spark-based analytics service / |c Phani Raj, Vinod Jaiswal.
264  1 |a Birmingham : |b Packt Publishing, |c 2021.
300    |a 1 online resource
336    |a text |2 rdacontent
337    |a computer |2 rdamedia
338    |a online resource |2 rdacarrier
588    |a Description based on CIP data; resource not viewed.
505 0  |a Cover -- Title Page -- Copyright and Credits -- Contributors -- Table of Contents -- Preface -- Chapter 1: Creating an Azure Databricks Service -- Technical requirements -- Creating a Databricks workspace in the Azure portal -- Getting ready -- How to do it... -- How it works... -- Creating a Databricks service using the Azure CLI (command-line interface) -- Getting ready -- How to do it... -- How it works... -- There's more... -- Creating a Databricks service using Azure Resource Manager (ARM) templates -- Getting ready -- How to do it... -- How it works... -- Adding users and groups to the workspace
505 8  |a Getting ready -- How to do it... -- How it works... -- There's more... -- Creating a cluster from the user interface (UI) -- Getting ready -- How to do it... -- How it works... -- There's more... -- Getting started with notebooks and jobs in Azure Databricks -- Getting ready -- How to do it... -- How it works... -- Authenticating to Databricks using a PAT -- Getting ready -- How to do it... -- How it works... -- There's more... -- Chapter 2: Reading and Writing Data from and to Various Azure Services and File Formats -- Technical requirements -- Mounting ADLS Gen2 and Azure Blob storage to Azure DBFS -- Getting ready
505 8  |a How to do it... -- How it works... -- There's more... -- Reading and writing data from and to Azure Blob storage -- Getting ready -- How to do it... -- How it works... -- There's more... -- Reading and writing data from and to ADLS Gen2 -- Getting ready -- How to do it... -- How it works... -- Reading and writing data from and to an Azure SQL database using native connectors -- Getting ready -- How to do it... -- How it works... -- Reading and writing data from and to Azure Synapse SQL (dedicated SQL pool) using native connectors -- Getting ready -- How to do it... -- How it works...
505 8  |a Reading and writing data from and to Azure Cosmos DB -- Getting ready -- How to do it... -- How it works... -- Reading and writing data from and to CSV and Parquet -- Getting ready -- How to do it... -- How it works... -- Reading and writing data from and to JSON, including nested JSON -- Getting ready -- How to do it... -- How it works... -- Chapter 3: Understanding Spark Query Execution -- Technical requirements -- Introduction to jobs, stages, and tasks -- Getting ready -- How to do it... -- How it works... -- Checking the execution details of all the executed Spark queries via the Spark UI -- Getting ready
505 8  |a How to do it... -- How it works... -- Deep diving into schema inference -- Getting ready -- How to do it... -- How it works... -- There's more... -- Looking into the query execution plan -- Getting ready -- How to do it... -- How it works... -- How joins work in Spark -- Getting ready -- How to do it... -- How it works... -- There's more... -- Learning about input partitions -- Getting ready -- How to do it... -- How it works... -- Learning about output partitions -- Getting ready -- How to do it... -- How it works... -- Learning about shuffle partitions -- Getting ready -- How to do it... -- How it works...
520    |a Get to grips with building and productionizing end-to-end big data solutions in Azure and learn best practices for working with large datasets. Key Features: Integrate with Azure Synapse Analytics, Cosmos DB, and Azure HDInsight Kafka Cluster to scale and analyze your projects and build pipelines; Use Databricks SQL to run ad hoc queries on your data lake and create dashboards; Productionize a solution using CI/CD for deploying notebooks and Azure Databricks Service to various environments. Book Description: Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse. The book starts by teaching you how to create an Azure Databricks instance within the Azure portal, Azure CLI, and ARM templates. You'll work through clusters in Databricks and explore recipes for ingesting data from sources, including files, databases, and streaming sources such as Apache Kafka and Event Hubs. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse by using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline as well as deploy notebooks and the Azure Databricks service using continuous integration and continuous delivery (CI/CD). By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps. What you will learn: Read and write data from and to various Azure resources and file formats; Build a modern data warehouse with Delta Tables and Azure Synapse Analytics; Explore jobs, stages, and tasks and see how Spark lazy evaluation works; Handle concurrent transactions and learn performance optimization in Delta tables; Learn Databricks SQL and create real-time dashboards in Databricks SQL; Integrate Azure DevOps for version control, deploying, and productionizing solutions with CI/CD pipelines; Discover how to use RBAC and ACLs to restrict data access; Build an end-to-end data processing pipeline for near real-time data analytics. Who this book is for: This recipe-based book is for data scientists, data engineers, big data professionals, and machine learning engineers who want to perform data analytics on their applications. Prior experience of working with Apache Spark and Azure is necessary to get the most out of this book.
590    |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
650  0 |a Big data.
650  0 |a Microsoft Azure (Computing platform)
650  6 |a Données volumineuses.
650  6 |a Microsoft Azure (Plateforme informatique)
650  7 |a Big data |2 fast
650  7 |a Microsoft Azure (Computing platform) |2 fast
700 1  |a Jaiswal, Vinod, |e author.
776 08 |i Print version: |z 9781789809718
856 40 |u https://learning.oreilly.com/library/view/~/9781789809718/?ar |z Full text (requires prior registration with an institutional email address)
938    |a Askews and Holts Library Services |b ASKH |n AH37307265
938    |a ProQuest Ebook Central |b EBLB |n EBL6724609
994    |a 92 |b IZTAP