Using Scrapy's items, pipelines, and settings

2022-05-31



bookstoscrape (the spider):

import scrapy
from spider_01_books.items import BookItem

class BookstoscrapeSpider(scrapy.Spider):
    """Spider class, inherits from scrapy.Spider."""
    # Spider name -- the unique identifier of every spider
    name = 'bookstoscrape'
    # Domains the spider is allowed to crawl
    allowed_domains = ['books.toscrape.com']
    # Initial URLs to crawl
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        """Parse a downloaded response and extract data."""
        # Each book's information sits in an <article class="product_pod">;
        # find all article elements with xpath() and iterate over them.
        for book in response.xpath('//article[@class="product_pod"]'):
            # Create the item object
            item = BookItem()
            # The title is in the title attribute of article > h3 > a, e.g.:
            # <a href="catalogue/a-light-in-the-attic_1000/index.html" title="A Light in the Attic">A Light in the ...</a>
            item['name'] = book.xpath('./h3/a/@title').extract_first()
            # extract_first() returns the first string of the result list (or None).
            # The price is in the text of article > div.product_price, e.g.:
            # <p class="price_color">£51.77</p>
            # Keep the full string, £ sign included -- the pipeline strips it.
            item['price'] = book.xpath('./div[2]/p[1]/text()').extract_first()
            # The rating is in the class attribute of article > p, e.g.:
            # <p class="star-rating Three">
            item['rate'] = book.xpath('./p/@class').extract_first().split(" ")[1]

            # Yield a single book item
            yield item

        # Extract the link to the next page.
        # The next page's URL is in the href attribute of li.next > a, e.g.:
        # <li class="next"><a href="catalogue/page-2.html">next</a></li>
        next_url = response.xpath('//li[@class="next"]/a/@href').extract_first()

        if next_url:
            # If a next-page URL was found, turn it into an absolute URL
            # and build a new Request object for it.
            next_url = response.urljoin(next_url)
            # Yield the new Request
            yield scrapy.Request(next_url, callback=self.parse)
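With the spider, the item, and the pipeline below in place, the crawl is normally started from the project root with "scrapy crawl bookstoscrape -o books.csv" (the -o flag writes the yielded items to a file). As a minimal sketch, the same crawl can also be driven from a plain Python script; the module path of the spider file is an assumption here:

# run.py -- a minimal sketch for starting the crawl from a script.
# Assumes it is run from the project root (next to scrapy.cfg) and that
# the spider lives in spider_01_books/spiders/bookstoscrape.py.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from spider_01_books.spiders.bookstoscrape import BookstoscrapeSpider

process = CrawlerProcess(get_project_settings())
process.crawl(BookstoscrapeSpider)
process.start()  # blocks until the crawl finishes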

 

items:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class BookItem(scrapy.Item):
    # Define the fields for your item here.
    # The field names must match the keys used in the bookstoscrape spider.
    name = scrapy.Field()
    price = scrapy.Field()
    rate = scrapy.Field()
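
A scrapy.Item behaves much like a dict, except that only the declared fields may be assigned. A quick sketch of that behaviour, assuming the BookItem class above:

from spider_01_books.items import BookItem

item = BookItem(name='A Light in the Attic', price='£51.77', rate='Three')
print(item['name'])   # A Light in the Attic
print(dict(item))     # a plain dict with the name/price/rate keys
# item['isbn'] = 'x'  # would raise KeyError: BookItem does not support field: isbn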

pipelines:



# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class PriceConverterPipeline(object):
    """Convert book prices from GBP to CNY."""

    # GBP-to-CNY exchange rate: 1 GBP = 8.7091 CNY
    exchange_rate = 8.7091

    def process_item(self, item, spider):
        """Process each incoming item and return it."""
        # Take the item's price field (e.g. £53.74),
        # strip the leading £ sign, convert to float, multiply by the rate.
        price = float(item['price'][1:]) * self.exchange_rate
        # Round to 2 decimal places and write back to the price field.
        item['price'] = '¥%.2f' % price
        # process_item() must return the item
        return item
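
Because a pipeline is a plain class, the conversion can be sanity-checked without running a crawl. A minimal sketch (process_item() never touches the spider argument here, so None is passed):

from spider_01_books.items import BookItem
from spider_01_books.pipelines import PriceConverterPipeline

pipeline = PriceConverterPipeline()
item = BookItem(name='A Light in the Attic', price='£51.77', rate='Three')
result = pipeline.process_item(item, spider=None)
print(result['price'])  # ¥450.87 (51.77 * 8.7091, rounded to 2 decimals)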

settings:

# Scrapy settings for spider_01_books project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'spider_01_books'

SPIDER_MODULES = ['spider_01_books.spiders']
NEWSPIDER_MODULE = 'spider_01_books.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'spider_01_books (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'spider_01_books.middlewares.Spider01BooksSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'spider_01_books.middlewares.Spider01BooksDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'spider_01_books.pipelines.PriceConverterPipeline': 300,
}
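# The value (300 here) sets the pipeline's order: Scrapy runs enabled
# pipelines in ascending order of this value (conventionally 0-1000).
# A second, hypothetical entry such as
#     'spider_01_books.pipelines.DuplicatesPipeline': 400,
# would therefore receive each item after PriceConverterPipeline.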

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

 



Source: https://www.cnblogs.com/anhao-world/p/16328834.html
