Data simplification : taming information with open source tools / Jules J. Berman.

Data Simplification: Taming Information With Open Source Tools addresses the simple fact that modern data is too big and complex to analyze in its native form. Data simplification is the process whereby large and complex data is rendered usable. Complex data must be simplified before it can be analyzed.

Bibliographic Details
Classification: Electronic book
Main author: Berman, Jules J. (Author)
Format: Electronic eBook
Language: English
Published: Cambridge, MA : Morgan Kaufmann, an imprint of Elsevier, [2016]
Subjects:
Online access: Full text
Table of Contents:
  • Front cover; Data Simplification: Taming Information With Open Source Tools; Copyright; Dedication; Contents; Foreword; Preface; Organization of this book; Chapter Organization; How to Read this Book; Nota Bene; Glossary; References; Author Biography.
  • Chapter 1: The Simple Life; 1.1. Simplification Drives Scientific Progress; 1.2. The Human Mind is a Simplifying Machine; 1.3. Simplification in Nature; 1.4. The Complexity Barrier; 1.5. Getting Ready; Open Source Tools; Perl; Python; Ruby; Text Editors; OpenOffice; LibreOffice; Command Line Utilities; Cygwin, Linux Emulation for Windows; DOS Batch Scripts; Linux Bash Scripts; Interactive Line Interpreters; Package Installers; System Calls; Glossary; References.
  • Chapter 2: Structuring Text; 2.1. The Meaninglessness of Free Text; 2.2. Sorting Text, the Impossible Dream; 2.3. Sentence Parsing; 2.4. Abbreviations; 2.5. Annotation and the Simple Science of Metadata; 2.6. Specifications Good, Standards Bad; Open Source Tools; ASCII; Regular Expressions; Format Commands; Converting Nonprintable Files to Plain-Text; Dublin Core; Glossary; References.
  • Chapter 3: Indexing Text; 3.1. How Data Scientists Use Indexes; 3.2. Concordances and Indexed Lists; 3.3. Term Extraction and Simple Indexes; 3.4. Autoencoding and Indexing with Nomenclatures; 3.5. Computational Operations on Indexes; Open Source Tools; Word Lists; Doublet Lists; Ngram Lists; Glossary; References.
  • Chapter 4: Understanding Your Data; 4.1. Ranges and Outliers; 4.2. Simple Statistical Descriptors; 4.3. Retrieving Image Information; 4.4. Data Profiling; 4.5. Reducing Data; Open Source Tools; Gnuplot; MatPlotLib; R, for Statistical Programming; Numpy; Scipy; ImageMagick; Displaying Equations in LaTeX; Normalized Compression Distance; Pearson's Correlation; The Ridiculously Simple Dot Product; Glossary; References.
  • Chapter 5: Identifying and Deidentifying Data; 5.1. Unique Identifiers; 5.2. Poor Identifiers, Horrific Consequences; 5.3. Deidentifiers and Reidentifiers; 5.4. Data Scrubbing; 5.5. Data Encryption and Authentication; 5.6. Timestamps, Signatures, and Event Identifiers; Open Source Tools; Pseudorandom Number Generators; UUID; Encryption and Decryption with OpenSSL; One-Way Hash Implementations; Steganography; Glossary; References.
  • Chapter 6: Giving Meaning to Data; 6.1. Meaning and Triples; 6.2. Driving Down Complexity With Classifications; 6.3. Driving Up Complexity With Ontologies; 6.4. The Unreasonable Effectiveness of Classifications; 6.5. Properties That Cross Multiple Classes; Open Source Tools; Syntax for Triples; RDF Schema; RDF Parsers; Visualizing Class Relationships; Glossary; References.
  • Chapter 7: Object-oriented Data; 7.1. The Importance of Self-Explaining Data; 7.2. Introspection and Reflection; 7.3. Object-Oriented Data Objects; 7.4. Working With Object-Oriented Data; Open Source Tools; Persistent Data; SQLite Databases; Glossary; References.