I am using a Scrapy CrawlSpider and trying to parse the output pages to pick out the input-tag parameters (type, id, name). Each data type is selected into an item and later saved to a database.
Database Table_1
╔════════════════╗
║      text      ║
╠════════════════╣
║  id  │  name   ║
╟──────┼─────────╢
║      │         ║
╟──────┼─────────╢
║      │         ║
╚══════╧═════════╝
The same goes for password and file, but the problem I am facing is that the XPath extracts the whole tag!
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field
from isa.items import IsaItem


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['testaspnet.vulnweb.com']
    start_urls = ['http://testaspnet.vulnweb.com']

    rules = (
        Rule(SgmlLinkExtractor(allow=('/*',)), callback='parse_item'),
    )

    def parse_item(self, response):
        self.log('%s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = IsaItem()
        text_input = hxs.select("//input[(@id or @name) and (@type = 'text')]").extract()
        pass_input = hxs.select("//input[(@id or @name) and (@type = 'password')]").extract()
        file_input = hxs.select("//input[(@id or @name) and (@type = 'file')]").extract()
        print text_input, pass_input, file_input
        return item
Output
me@me-pc:~/isa/isa$ scrapy crawl example.com -L INFO -o file_nfffame.csv -t csv
2012-07-02 12:42:02+0200 [scrapy] INFO: Scrapy 0.14.4 started (bot: isa)
2012-07-02 12:42:02+0200 [example.com] INFO: Spider opened
2012-07-02 12:42:02+0200 [example.com] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[] [] []
[] [] []
[] [] []
[u'<input name="tbUsername" type="text" id="tbUsername" class="Login">'] [u'<input name="tbPassword" type="password" id="tbPassword" class="Login">'] []
[] [] []
[u'<input name="tbUsername" type="text" id="tbUsername" class="Login">'] [u'<input name="tbPassword" type="password" id="tbPassword" class="Login">'] []
[] [] []
2012-07-02 12:42:08+0200 [example.com] INFO: Closing spider (finished)
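The XPath above returns the whole `<input>` tag because it selects element nodes; selecting the attribute node instead (e.g. `//input[@type = 'text']/@id`) makes `extract()` return just the attribute value. A minimal stdlib sketch of the same idea, using sample HTML that mirrors the crawled login form (ElementTree's limited XPath stands in for Scrapy's selector here, reading attributes off the matched elements):

```python
import xml.etree.ElementTree as ET

# Sample markup mirroring the form found during the crawl.
html = """
<form>
  <input name="tbUsername" type="text" id="tbUsername" class="Login"/>
  <input name="tbPassword" type="password" id="tbPassword" class="Login"/>
</form>
"""

root = ET.fromstring(html)

# Match the text inputs, then pull out only the attribute values
# instead of serializing the whole element.
text_ids = [el.get('id') for el in root.findall(".//input[@type='text']")]
text_names = [el.get('name') for el in root.findall(".//input[@type='text']")]

print(text_ids, text_names)  # ['tbUsername'] ['tbUsername']
```

With the Scrapy 0.14 selector this corresponds to something like `hxs.select("//input[(@id or @name) and (@type = 'text')]/@id").extract()`, giving one list entry per matched attribute rather than a serialized tag.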