Scrapy has a setting, DOWNLOAD_DELAY (or the spider attribute download_delay), that adds a delay between downloads. However, its value is fixed when the Spider is initialized and cannot be changed while the crawler is running.
A random delay can lower the risk of getting the IP banned.
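For reference, the fixed delay is normally configured in one of these two ways (a minimal sketch; the spider name and the 2-second value are placeholders):

# settings.py -- fixed, project-wide delay in seconds
DOWNLOAD_DELAY = 2

# or per spider, via the download_delay attribute
import scrapy

class MySpider(scrapy.Spider):
    name = "my_spider"
    download_delay = 2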
Code example
random_delay_middleware.py
# -*- coding:utf-8 -*-
import logging
import random
import time


class RandomDelayMiddleware(object):

    def __init__(self, delay):
        self.delay = delay

    @classmethod
    def from_crawler(cls, crawler):
        delay = crawler.spider.settings.get("RANDOM_DELAY", 10)
        if not isinstance(delay, int):
            raise ValueError("RANDOM_DELAY need a int")
        return cls(delay)

    def process_request(self, request, spider):
        delay = random.randint(0, self.delay)
        logging.debug("### random delay: %s s ###" % delay)
        time.sleep(delay)
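Note that process_request returns None, so after the sleep the request simply continues through the rest of the downloader middleware chain and is fetched as usual. Also keep in mind that time.sleep is a blocking call, so it pauses the whole crawler, not just the current request, for the chosen number of seconds.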
Usage:
custom_settings = {
    "RANDOM_DELAY": 3,
    "DOWNLOADER_MIDDLEWARES": {
        "middlewares.random_delay_middleware.RandomDelayMiddleware": 999,
    }
}
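custom_settings goes on the Spider subclass itself. Here is a minimal sketch, assuming the middleware file sits at middlewares/random_delay_middleware.py inside the project (the spider name and start URL are placeholders):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    # per-spider settings: enable the middleware and set the delay range
    custom_settings = {
        "RANDOM_DELAY": 3,
        "DOWNLOADER_MIDDLEWARES": {
            "middlewares.random_delay_middleware.RandomDelayMiddleware": 999,
        },
    }

    def parse(self, response):
        self.logger.info("fetched %s", response.url)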
Notes:
RANDOM_DELAY: the range of the random download delay, [0, RANDOM_DELAY] seconds.
For example, with the value of 3 set above, the random delay will fall within [0, 3] seconds.
If DOWNLOAD_DELAY is also set, the total delay is the sum of the two:
total_delay = DOWNLOAD_DELAY + RANDOM_DELAY
More precisely, since the random part is drawn from [0, RANDOM_DELAY]:
DOWNLOAD_DELAY <= total_delay <= DOWNLOAD_DELAY + RANDOM_DELAY
For example, DOWNLOAD_DELAY = 2 and RANDOM_DELAY = 3 gives a total delay between 2 and 5 seconds per request.