LEADER 00000cam a2200000 a 4500
001    KNOVEL_on1228037528
003    OCoLC
005    20231027140348.0
006    m o d
007    cr un|---aucuu
008    201226s2021 cau o 001 0 eng d
040    |a EBLCP |b eng |e pn |c EBLCP |d YDX |d ERF |d OCLCF |d GW5XE |d OCLCO |d VT2 |d SFB |d N$T |d OCL |d K6U |d OCLCQ |d OCLCO |d OCLCQ
019    |a 1227448842 |a 1232856870 |a 1235845480 |a 1240522822
020    |a 9781484265000 |q (electronic bk.)
020    |a 1484265009 |q (electronic bk.)
020    |a 9781484265017 |q (print)
020    |a 1484265017
020    |z 1484264991
020    |z 9781484264997
024 7  |a 10.1007/978-1-4842-6500-0 |2 doi
029 1  |a AU@ |b 000068472374
029 1  |a AU@ |b 000069025930
035    |a (OCoLC)1228037528 |z (OCoLC)1227448842 |z (OCoLC)1232856870 |z (OCoLC)1235845480 |z (OCoLC)1240522822
050  4 |a QA76.9.B45
072  7 |a U. |2 bicssc
072  7 |a COM000000 |2 bisacsh
072  7 |a UX. |2 thema
082 04 |a 005.7 |2 23
082 04 |a 004 |2 23
049    |a UAMI
100 1  |a Kakarla, Ramcharan.
245 10 |a Applied data science using PySpark : |b learn the end-to-end predictive model-building cycle / |c Ramcharan Kakarla, Sundar Krishnan, Sridhar Alla.
260    |a Berkeley, CA : |b Apress, |c 2021.
300    |a 1 online resource (427 pages)
336    |a text |b txt |2 rdacontent
337    |a computer |b c |2 rdamedia
338    |a online resource |b cr |2 rdacarrier
347    |a text file
347    |b PDF
588 0  |a Print version record.
505 0  |a Intro -- Table of Contents -- About the Authors -- About the Technical Reviewer -- Acknowledgments -- Foreword 1 -- Foreword 2 -- Foreword 3 -- Introduction -- Chapter 1: Setting Up the PySpark Environment -- Local Installation using Anaconda -- Step 1: Install Anaconda -- Step 2: Conda Environment Creation -- Step 3: Download and Unpack Apache Spark -- Step 4: Install Java 8 or Later -- Step 5: Mac & Linux Users -- Step 6: Windows Users -- Step 7: Run PySpark -- Step 8: Jupyter Notebook Extension -- Docker-based Installation -- Why Do We Need to Use Docker? -- What Is Docker?
505 8  |a Create a Simple Docker Image -- Download PySpark Docker -- Step-by-Step Approach to Understanding the Docker PySpark run Command -- Databricks Community Edition -- Create Databricks Account -- Create a New Cluster -- Create Notebooks -- How Do You Import Data Files into the Databricks Environment? -- Basic Operations -- Upload Data -- Access Data -- Calculate Pi -- Summary -- Chapter 2: PySpark Basics -- PySpark Background -- PySpark Resilient Distributed Datasets (RDDs) and DataFrames -- Data Manipulations -- Reading Data from a File -- Reading Data from Hive Table -- Reading Metadata
505 8  |a Counting Records -- Subset Columns and View a Glimpse of the Data -- Missing Values -- One-Way Frequencies -- Sorting and Filtering One-Way Frequencies -- Casting Variables -- Descriptive Statistics -- Unique/Distinct Values and Counts -- Filtering -- Creating New Columns -- Deleting and Renaming Columns -- Summary -- Chapter 3: Utility Functions and Visualizations -- Additional Data Manipulations -- String Functions -- Registering DataFrames -- Window Functions -- Other Useful Functions -- Collect List -- Sampling -- Caching and Persisting -- Saving Data -- Pandas Support -- Joins
505 8  |a Dropping Duplicates -- Data Visualizations -- Introduction to Machine Learning -- Summary -- Chapter 4: Variable Selection -- Exploratory Data Analysis -- Cardinality -- Missing Values -- Missing at Random (MAR) -- Missing Completely at Random (MCAR) -- Missing Not at Random (MNAR) -- Code 1: Cardinality Check -- Code 2: Missing Values Check -- Step 1: Identify Variable Types -- Step 2: Apply StringIndexer to Character Columns -- Step 3: Assemble Features -- Built-in Variable Selection Process: Without Target -- Principal Component Analysis -- Mechanics -- Singular Value Decomposition
505 8  |a Built-in Variable Selection Process: With Target -- ChiSq Selector -- Model-based Feature Selection -- Custom-built Variable Selection Process -- Information Value Using Weight of Evidence -- Monotonic Binning Using Spearman Correlation -- How Do You Calculate the Spearman Correlation by Hand? -- How Is Spearman Correlation Used to Create Monotonic Bins for Continuous Variables? -- Custom Transformers -- Main Concepts in Pipelines -- Voting-based Selection -- Summary -- Chapter 5: Supervised Learning Algorithms -- Basics -- Regression -- Classification -- Loss Functions -- Optimizers -- Gradient Descent.
500    |a Includes index.
520    |a Discover the capabilities of PySpark and its application in the realm of data science. This comprehensive guide, with hand-picked examples of daily use cases, will walk you through the end-to-end predictive model-building cycle using the latest techniques and tricks of the trade. Applied Data Science Using PySpark is divided into six sections. In Section 1, you start with the basics of PySpark, focusing on data manipulation. We make you comfortable with the language, then build on it to introduce the mathematical functions available off the shelf. In Section 2, you will dive into the art of variable selection, where we demonstrate various selection techniques available in PySpark. In Section 3, we take you on a journey through machine learning algorithms, implementations, and fine-tuning techniques. We will also talk about different validation metrics and how to use them to pick the best models. Sections 4 and 5 go through machine learning pipelines and the various methods available to operationalize the model and serve it through Docker or an API. In the final section, you will cover reusable objects for easy experimentation and learn some tricks that can help you optimize your programs and machine learning pipelines. By the end of this book, you will have seen the flexibility and advantages of PySpark in data science applications. This book is recommended for those who want to unleash the power of parallel computing by working with big datasets simultaneously. You will: build an end-to-end predictive model; implement multiple variable selection techniques; operationalize models; and master multiple algorithms and implementations.
590    |a Knovel |b ACADEMIC - Software Engineering
590    |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
650  0 |a Big data.
650  0 |a Machine learning.
650  0 |a Python (Computer program language)
650  0 |a Parallel processing (Electronic computers)
650  6 |a Données volumineuses.
650  6 |a Apprentissage automatique.
650  6 |a Python (Langage de programmation)
650  6 |a Parallélisme (Informatique)
650  7 |a Python (Computer program language) |2 fast |0 (OCoLC)fst01084736
650  7 |a Parallel processing (Electronic computers) |2 fast |0 (OCoLC)fst01052928
650  7 |a Big data. |2 fast |0 (OCoLC)fst01892965
650  7 |a Computer software. |2 fast |0 (OCoLC)fst00872527
650  7 |a Machine learning. |2 fast |0 (OCoLC)fst01004795
700 1  |a Krishnan, Sundar.
700 1  |a Alla, Sridhar.
776 08 |i Print version: |a Kakarla, Ramcharan. |t Applied Data Science Using Pyspark : Learn the End-To-End Predictive Model-Building Cycle. |d Berkeley, CA : Apress L.P., ©2021 |z 9781484264997
856 40 |u https://appknovel.uam.elogim.com/kn/resources/kpADSUPSL1/toc |z Full text
938    |a ProQuest Ebook Central |b EBLB |n EBL6427452
938    |a EBSCOhost |b EBSC |n 2709855
938    |a YBP Library Services |b YANK |n 301814712
994    |a 92 |b IZTAP