Website Scraping with Python: Using BeautifulSoup and Scrapy
Closely examine website scraping and data processing: the technique of extracting data from websites in a format suitable for further analysis. You'll review which tools to use and compare their features and efficiency. Focusing on BeautifulSoup4 and Scrapy, this concise, focused book highlights…
Classification: eBook
Main Author:
Format: Electronic eBook
Language: English
Published: New York, NY : Apress, [2018]
Subjects:
Online Access: Full text (requires prior registration with an institutional email address)
Table of Contents (hedged code sketches for Chapters 1–4 follow the list):
- Intro; Table of Contents; About the Author; About the Technical Reviewer; Acknowledgments; Introduction; Chapter 1: Getting Started; Website Scraping; Projects for Website Scraping; Websites Are the Bottleneck; Tools in This Book; Preparation; Terms and Robots; robots.txt; Technology of the Website; Using Chrome Developer Tools; Set-up; Tool Considerations; Starting to Code; Parsing robots.txt; Creating a Link Extractor; Extracting Images; Summary; Chapter 2: Enter the Requirements; The Requirements; Preparation; Navigating Through "Meat & Fish"; Selecting the Required Information
- Outlining the Application; Navigating the Website; Creating the Navigation; The requests Library; Installation; Getting Pages; Switching to requests; Putting the Code Together; Summary; Chapter 3: Using Beautiful Soup; Installing Beautiful Soup; Simple Examples; Parsing HTML Text; Parsing Remote HTML; Parsing a File; Difference Between find and find_all; Extracting All Links; Extracting All Images; Finding Tags Through Their Attributes; Finding Multiple Tags Based on Property; Changing Content; Adding Tags and Attributes; Changing Tags and Attributes; Deleting Tags and Attributes
- Finding Comments; Converting a Soup to HTML Text; Extracting the Required Information; Identifying, Extracting, and Calling the Target URLs; Navigating the Product Pages; Extracting the Information; Using Dictionaries; Using Classes; Unforeseen Changes; Exporting the Data; To CSV; Quick Glance at the csv Module; Line Endings; Headers; Saving a Dictionary; Saving a Class; To JSON; Quick Glance at the json Module; Saving a Dictionary; Saving a Class; To a Relational Database; To a NoSQL Database; Installing MongoDB; Writing to MongoDB; Performance Improvements; Changing the Parser
- Parse Only What's Needed; Saving While Working; Developing on a Long Run; Caching Intermediate Step Results; Caching Whole Websites; File-Based Cache; Database Cache; Saving Space; Updating the Cache; Source Code for This Chapter; Summary; Chapter 4: Using Scrapy; Installing Scrapy; Creating the Project; Configuring the Project; Terminology; Middleware; Pipeline; Extension; Selectors; Implementing the Sainsbury Scraper; What's This allowed_domains About?; Preparation; Using the Shell; def parse(self, response); Navigating Through Categories; Navigating Through the Product Listings
- Extracting the Data; Where to Put the Data?; Why Items?; Running the Spider; Exporting the Results; To CSV; To JSON; To Databases; MongoDB; SQLite; Bring Your Own Exporter; Filtering Duplicates; Silently Dropping Items; Fixing the CSV File; CSV Item Exporter; Caching with Scrapy; Storage Solutions; File System Storage; DBM Storage; LevelDB Storage; Cache Policies; Dummy Policy; RFC2616 Policy; Downloading Images; Using Beautiful Soup with Scrapy; Logging; (A Bit) Advanced Configuration; LOG_LEVEL; CONCURRENT_REQUESTS; DOWNLOAD_DELAY; Autothrottling; COOKIES_ENABLED; Summary
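Chapter 1 covers parsing robots.txt and building a link and image extractor. A minimal sketch of that workflow, using the standard library's urllib.robotparser together with Beautiful Soup; the target URL is a placeholder, not a site from the book:

```python
from urllib.robotparser import RobotFileParser
from urllib.request import urlopen

from bs4 import BeautifulSoup

BASE = "https://example.com"  # placeholder target site

# download and parse the site's robots.txt before fetching anything else
robots = RobotFileParser(BASE + "/robots.txt")
robots.read()

if robots.can_fetch("*", BASE + "/"):
    soup = BeautifulSoup(urlopen(BASE + "/"), "html.parser")
    # a simple link extractor: every <a> with an href attribute
    links = [a["href"] for a in soup.find_all("a", href=True)]
    # and an image extractor: every <img> with a src attribute
    images = [img["src"] for img in soup.find_all("img", src=True)]
    print(links, images)
```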
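Chapter 2 switches page fetching to the requests library. A sketch of the "Getting Pages" step, again against a placeholder URL:

```python
import requests

# fetch one page; the timeout keeps a hung server from stalling the scraper
response = requests.get("https://example.com/", timeout=10)
response.raise_for_status()  # raise on 4xx/5xx instead of parsing an error page
html = response.text         # decoded page body, ready to hand to a parser
```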
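Chapter 3's "Difference Between find and find_all" is the central Beautiful Soup distinction: find returns the first match (or None), while find_all returns every match as a list. A self-contained sketch using made-up HTML:

```python
from bs4 import BeautifulSoup

html = """<div class="product"><a href="/p/1">Soup</a>
<img src="/i/1.png"/><a href="/p/2">Bowl</a></div>"""
soup = BeautifulSoup(html, "html.parser")

first_link = soup.find("a")                        # first <a> tag only
all_links = soup.find_all("a")                     # list of every <a> tag
by_class = soup.find_all("div", class_="product")  # filter by attribute

print(first_link["href"])                 # -> /p/1
print([a.get_text() for a in all_links])  # -> ['Soup', 'Bowl']
```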
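The same chapter's export sections (To CSV, To JSON, Line Endings, Headers) map onto the standard-library csv and json modules. A sketch with illustrative field names; opening the CSV file with newline="" avoids the usual blank-row pitfall that the "Line Endings" heading refers to:

```python
import csv
import json

products = [{"name": "Soup", "price": "1.50"},
            {"name": "Bowl", "price": "3.00"}]

with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()          # the "Headers" step
    writer.writerows(products)    # one row per scraped dictionary

with open("products.json", "w") as f:
    json.dump(products, f, indent=2)
```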
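Chapter 4 builds a Scrapy spider and tunes the settings its configuration section names (LOG_LEVEL, CONCURRENT_REQUESTS, DOWNLOAD_DELAY, autothrottling, COOKIES_ENABLED, and the HTTP cache). A skeletal spider showing where those pieces live; the domain and CSS selectors are placeholders, not the book's Sainsbury's scraper:

```python
import scrapy

class ProductSpider(scrapy.Spider):
    name = "products"
    allowed_domains = ["example.com"]      # keeps the crawl on one site
    start_urls = ["https://example.com/"]

    custom_settings = {
        "LOG_LEVEL": "INFO",
        "CONCURRENT_REQUESTS": 8,
        "DOWNLOAD_DELAY": 0.5,        # seconds between requests
        "AUTOTHROTTLE_ENABLED": True,
        "COOKIES_ENABLED": False,
        "HTTPCACHE_ENABLED": True,    # file-system cache by default
    }

    def parse(self, response):
        # follow category links back into parse, yield one item per product
        for href in response.css("a.category::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
        for href in response.css("a.product::attr(href)").getall():
            yield {"url": response.urljoin(href)}
```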