
I am building this spider so that I can get into the shell, go through the HTML page, and test my XPath queries.

I am not sure what I am doing wrong. Any help would be greatly appreciated. I have reinstalled Twisted, but nothing changed.

My spider looks like this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from spider_scrap.items import spiderItem

class spider(BaseSpider):
name="spider1"
#allowed_domains = ["example.com"]
start_urls = [                  
              "http://www.example.com"
            ]

def parse(self, response):
 items = [] 
    hxs = HtmlXPathSelector(response)
    sites = hxs.select('//*[@id="search_results"]/div[1]/div')

    for site in sites:
        item = spiderItem()
        item['title'] = site.select('div[2]/h2/a/text()').extract
        item['author'] = site.select('div[2]/span/a/text()').extract
        item['price'] = site.select('div[3]/div[1]/div[1]/div/b/text()').extract()
    items.append(item)
    return items

When I run the spider with "scrapy crawl Spider1", I get the following error:

    2012-09-25 17:56:12-0400 [scrapy] DEBUG: Enabled item pipelines:
    2012-09-25 17:56:12-0400 [Spider1] INFO: Spider opened
    2012-09-25 17:56:12-0400 [Spider1] INFO: Crawled 0 pages (at 0 pages/min), scraped  0 items (at 0 items/min)
    2012-09-25 17:56:12-0400 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
    2012-09-25 17:56:12-0400 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
    2012-09-25 17:56:15-0400 [Spider1] DEBUG: Crawled (200) <GET http://www.example.com> (referer: None)
    2012-09-25 17:56:15-0400 [Spider1] ERROR: Spider error processing <GET http://www.example.com>
    Traceback (most recent call last):
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1178, in mainLoop
        self.runUntilCurrent()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 800, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 368, in callback
        self._startRunCallbacks(result)
      File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 464, in _startRunCallbacks
        self._runCallbacks()
    --- <exception caught here> ---
      File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 551, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "C:\Python27\lib\site-packages\scrapy\spider.py", line 62, in parse
        raise NotImplementedError
    exceptions.NotImplementedError:

    2012-09-25 17:56:15-0400 [Spider1] INFO: Closing spider (finished)
    2012-09-25 17:56:15-0400 [Spider1] INFO: Dumping spider stats:
    {'downloader/request_bytes': 231,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 186965,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 9, 25, 21, 56, 15, 326000),
     'scheduler/memory_enqueued': 1,
     'spider_exceptions/NotImplementedError': 1,
     'start_time': datetime.datetime(2012, 9, 25, 21, 56, 12, 157000)}
    2012-09-25 17:56:15-0400 [Spider1] INFO: Spider closed (finished)
    2012-09-25 17:56:15-0400 [scrapy] INFO: Dumping global stats:
    {}

3 Answers

Leo is right, the indentation is not correct. You probably have some tabs and spaces mixed in your script, because you pasted some of the code and typed other parts yourself, and your editor allowed both tabs and spaces in the same file. Convert all tabs to spaces so that it looks like this:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from spider_scrap.items import spiderItem

class spider(BaseSpider):
    name = "spider1"
    start_urls = ["http://www.example.com"]

    def parse(self, response):
        items = []
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="search_results"]/div[1]/div')

        for site in sites:
            item = spiderItem()
            item['title'] = site.select('div[2]/h2/a/text()').extract()
            item['author'] = site.select('div[2]/span/a/text()').extract()
            item['price'] = site.select('div[3]/div[1]/div[1]/div/b/text()').extract()
            items.append(item)

        return items
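To track down mixed indentation of this sort, a small helper can flag every line whose leading whitespace contains a tab (a sketch; the path is whatever file your spider module is saved as):

```python
def find_tab_indented_lines(path):
    """Return the 1-based numbers of lines whose leading whitespace contains a tab."""
    flagged = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            # Everything before the first non-whitespace character is indentation.
            indent = line[:len(line) - len(line.lstrip())]
            if "\t" in indent:
                flagged.append(lineno)
    return flagged
```

The standard library's tabnanny module (python -m tabnanny yourfile.py) performs a similar check, and on Python 2 running the interpreter with the -tt flag turns inconsistent tab/space use into an error.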
answered Sep 26, 2012 at 15:14

Your parse method is outside the class body. Use the code below:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from spider_scrap.items import spiderItem

class spider(BaseSpider):
    name="spider1"
    allowed_domains = ["example.com"]
    start_urls = [
      "http://www.example.com"
     ]

    def parse(self, response):
        items = []
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="search_results"]/div[1]/div')

        for site in sites:
            item = spiderItem()
            item['title'] = site.select('div[2]/h2/a/text()').extract()
            item['author'] = site.select('div[2]/span/a/text()').extract()
            item['price'] = site.select('div[3]/div[1]/div[1]/div/b/text()').extract()
            items.append(item)
        return items
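The NotImplementedError in the traceback is exactly what BaseSpider's default parse raises when a subclass never overrides it, and a misplaced def reproduces the mechanism in plain Python (a sketch with stand-in classes, not Scrapy itself):

```python
class BaseSpider(object):
    def parse(self, response):
        # Scrapy's BaseSpider.parse behaves like this when not overridden.
        raise NotImplementedError

class Spider(BaseSpider):
    name = "spider1"

# Because of the bad indentation, parse ends up at module level,
# so Spider never overrides the base-class method.
def parse(self, response):
    return []

try:
    Spider().parse(None)
except NotImplementedError:
    print("parse was never overridden")
```

Once the def is indented back inside the class, Scrapy finds the override and the error disappears.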
answered Mar 21, 2013 at 18:44