I'm trying to parse this XML (http://www.reddit.com/r/videos/top/.rss) but I'm not having any luck. I'm trying to save the YouTube link from each item, but the "channel" child node is causing problems. How do I get down to that level so I can iterate over the items?

import urllib2
from xml.etree import ElementTree as etree

#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
#close file because we dont need it anymore:
reddit_file.close()

#entire feed
reddit_root = etree.fromstring(reddit_data)
channel = reddit_root.findall('{http://purl.org/dc/elements/1.1/}channel')
print channel

reddit_feed=[]
for entry in channel:   
    #get description, url, and thumbnail
    desc = #not sure how to get this

    reddit_feed.append([desc])
2 Answers

You can try findall('channel/item')

import urllib2
from xml.etree import ElementTree as etree
#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
print reddit_data
#close file because we dont need it anymore:
reddit_file.close()

#entire feed
reddit_root = etree.fromstring(reddit_data)
item = reddit_root.findall('channel/item')
print item

reddit_feed=[]
for entry in item:   
    #get description, url, and thumbnail
    desc = entry.findtext('description')  
    reddit_feed.append([desc])
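The loop above only pulls the description, but the question also asked for the URL and thumbnail. A minimal sketch over a hypothetical feed fragment (the element names and the Media RSS namespace URI are assumptions based on the feed's typical shape, not copied from the live feed) showing that plain ElementTree reaches namespaced tags like media:thumbnail via Clark notation:

```python
from xml.etree import ElementTree as etree

# Hypothetical fragment shaped like the reddit RSS feed; the real feed
# declares the Media RSS namespace on the <rss> element.
sample = """<rss xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <item>
      <title>A video</title>
      <link>http://example.com/watch?v=abc</link>
      <description>some html</description>
      <media:thumbnail url="http://example.com/thumb.jpg"/>
    </item>
  </channel>
</rss>"""

root = etree.fromstring(sample)
feed = []
for entry in root.findall('channel/item'):
    desc = entry.findtext('description')
    link = entry.findtext('link')
    # Namespaced tags use Clark notation ({uri}tag) with plain ElementTree:
    thumb = entry.find('{http://search.yahoo.com/mrss/}thumbnail')
    feed.append([desc, link, thumb.get('url')])
# feed -> [['some html', 'http://example.com/watch?v=abc',
#           'http://example.com/thumb.jpg']]
```

The same findtext/find calls would slot into the loop above once the real feed is parsed.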
Answered 2012-10-14T03:41:57.463

I wrote it for you using XPath expressions (tested successfully):

from lxml import etree
import urllib2

headers = { 'User-Agent' : 'Mozilla/5.0' }
req = urllib2.Request('http://www.reddit.com/r/videos/top/.rss', None, headers)
reddit_file = urllib2.urlopen(req).read()

reddit = etree.fromstring(reddit_file)

for item in reddit.xpath('/rss/channel/item'):
    print "title =", item.xpath("./title/text()")[0]
    print "description =", item.xpath("./description/text()")[0]
    print "thumbnail =", item.xpath("./*[local-name()='thumbnail']/@url")[0]
    print "link =", item.xpath("./link/text()")[0]
    print "-" * 100
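The local-name() predicate works, but with lxml you can also register a prefix via the namespaces argument to xpath(), which reads more naturally. A sketch over a hypothetical fragment (the Media RSS URI is the standard one, assumed to match reddit's feed):

```python
from lxml import etree

# Hypothetical fragment; the real feed declares this namespace on <rss>.
sample = b"""<rss xmlns:media="http://search.yahoo.com/mrss/">
  <channel><item>
    <media:thumbnail url="http://example.com/t.jpg"/>
  </item></channel>
</rss>"""

reddit = etree.fromstring(sample)
ns = {'media': 'http://search.yahoo.com/mrss/'}
# The prefix in the expression maps through the namespaces dict:
urls = reddit.xpath('/rss/channel/item/media:thumbnail/@url', namespaces=ns)
# urls -> ['http://example.com/t.jpg']
```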
Answered 2012-10-14T03:41:38.023