I start a crawl with a CrawlSpider-derived class and pause it by pressing Ctrl+C. When I run the same command again to resume, the crawl does not pick up where it left off.
The command I use both to start and to resume:
scrapy crawl mycrawler -s JOBDIR=crawls/test5_mycrawl
Scrapy creates the JOBDIR folder (its permissions are 777).
When I resume the crawl, it outputs only the following:
/home/adminuser/.virtualenvs/rg_harvest/lib/python2.7/site-packages/twisted/internet/_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
verifyHostname, VerificationError = _selectVerifyImplementation()
2014-11-21 11:05:10-0500 [scrapy] INFO: Scrapy 0.24.4 started (bot: rg_harvest_scrapy)
2014-11-21 11:05:10-0500 [scrapy] INFO: Optional features available: ssl, http11, django
2014-11-21 11:05:10-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'rg_harvest_scrapy.spiders', 'SPIDER_MODULES': ['rg_harvest_scrapy.spiders'], 'BOT_NAME': 'rg_harvest_scrapy'}
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled item pipelines: ValidateMandatory, TypeConversion, ValidateRange, ValidateLogic, RestegourmetImagesPipeline, SaveToDB
2014-11-21 11:05:10-0500 [mycrawler] INFO: Spider opened
2014-11-21 11:05:10-0500 [mycrawler] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Crawled (200) <GET http://eatsmarter.de/suche/rezepte> (referer: None)
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Filtered duplicate request: <GET http://eatsmarter.de/suche/rezepte?page=1> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Closing spider (finished)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 225,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 19242,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'dupefilter/filtered': 29,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 733196),
'log_count/DEBUG': 4,
'log_count/INFO': 7,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/disk': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/disk': 1,
'start_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 528629)}
There is only one start_url. Could that be the reason? My crawler uses a single start_url, then follows the pagination through a Rule with a LinkExtractor, and calls parse_item for URLs that match a specific pattern.
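Following the hint in the DEBUG line above, I could turn on the dupefilter's debug output to see every request the resumed run drops (a settings.py sketch; DUPEFILTER_DEBUG is the stock Scrapy setting the log itself mentions):

# settings.py -- log every filtered duplicate request instead of only
# the first one, so I can see exactly what the resumed run is dropping
DUPEFILTER_DEBUG = True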
My spider code:
from datetime import datetime

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

# Project-specific imports (module paths assumed):
from rg_harvest_scrapy.items import RecipeItem
from rg_harvest_scrapy.loaders import MyItemLoader


class MyCrawlSpiderBase(CrawlSpider):
    name = 'test_spider'
    testmode = True
    crawl_start = datetime.utcnow().isoformat()

    def __init__(self, testmode=True, urls=None, *args, **kwargs):
        self.testmode = bool(int(testmode))
        super(MyCrawlSpiderBase, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        # Item values shared by all derived spiders
        l = MyItemLoader(RecipeItem(), response=response)
        l.replace_value('url', response.url)
        l.replace_value('crawl_start', self.crawl_start)
        return l.load_item()


class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']
    start_urls = [
        "http://example.de",
    ]

    rules = (
        # Pagination links: follow only, no callback
        Rule(
            LinkExtractor(
                allow=(r'/search/entry\?page=', )
            )
        ),
        # Detail pages: scrape an item
        Rule(
            LinkExtractor(
                allow=(r'/entry/[0-9A-Za-z\-]{3,}$', ),
            ),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        item = super(MyCrawlSpider, self).parse_item(response)
        l = MyItemLoader(item=item, response=response)
        l.replace_xpath("name", "//h1[@class='fn title']/text()")
        (...)
        return l.load_item()
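If the resumed run is dropping everything because the rebuilt requests are already listed in the JOBDIR's requests.seen file, would exempting the seed request from the duplicates filter be a reasonable workaround? A minimal sketch of what I mean (start_requests and dont_filter are stock Scrapy APIs; whether this is the right fix is exactly what I am asking):

from scrapy.http import Request

class MyCrawlSpider(MyCrawlSpiderBase):
    # name, allowed_domains, start_urls and rules exactly as above

    def start_requests(self):
        # dont_filter=True exempts the seed request from the dupefilter,
        # so a resumed job re-fetches the entry page; the CrawlSpider
        # rules still extract the pagination and detail links from it.
        for url in self.start_urls:
            yield Request(url, dont_filter=True)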