The item pipeline is mainly used to collect the items scraped from web pages by a spider and write them to a database or file. After the spider produces an item, it hands the item to the item pipeline for this follow-up processing.
The item pipeline class is registered by its import path in the project settings, and the Scrapy framework then instantiates and invokes it. For the framework to invoke it correctly, the class must implement the methods the framework expects; the user only needs to implement those methods.
The file below implements a simple item pipeline class that post-processes the scraped news data and writes it to a file. The purpose of each method is described in its comments and docstrings.
1. File: pipelines.py
Notes: 1. The initializer is entirely free-form and need not take any particular parameters; it only has to be callable from the from_crawler class method so that an instance can be created. 2. The methods invoked by the framework have fixed signatures (this guarantees the framework can call them correctly).
# -*- coding: utf-8 -*-

# Define your item pipelines here.
# Don't forget to add your pipeline to the ITEM_PIPELINES setting.
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy.exceptions import DropItem


class News2FileFor163Pipeline(object):
    """Pipeline: process the items given by the spider."""

    def __init__(self, filepath, filename):
        """Initialize the pipeline with the output file location."""
        self.fullname = filepath + '/' + filename
        self.id = 0

    def process_item(self, item, spider):
        """Process each item produced by the spider.

        Typical uses: check that the item is valid or raise DropItem;
        do some processing before writing into a database;
        check whether the item already exists and drop it.
        """
        for element in ("url", "source", "title", "editor", "time", "content"):
            if item[element] is None:
                raise DropItem("invalid item, url: %s" % str(item["url"]))
        self.fs.write("news id: %s\n" % self.id)
        self.id += 1
        self.fs.write("url: %s\n" % item["url"][0].strip().encode('utf-8'))
        self.fs.write("source: %s\n" % item["source"][0].strip().encode('utf-8'))
        self.fs.write("title: %s\n" % item["title"][0].strip().encode('utf-8'))
        # Keep only the editor's name, i.e. the text after the label colon.
        self.fs.write("editor: %s\n" % item["editor"][0].strip()
                      .encode('utf-8').split(':')[1])
        # The scraped time field holds date and time as separate tokens.
        time_string = item["time"][0].strip().split()
        datetime = time_string[0] + ' ' + time_string[1]
        self.fs.write("time: %s\n" % datetime.encode('utf-8'))
        # Join the paragraphs of the article body into a single line.
        content = ""
        for para in item["content"]:
            content += para.strip().replace('\n', '').replace('\t', '')
        self.fs.write("content: %s\n" % content.encode('utf-8'))
        return item

    def open_spider(self, spider):
        """Called when the spider is opened.

        Do setup before the pipeline starts processing items,
        e.g. read settings or open a database connection.
        """
        self.fs = open(self.fullname, 'w+')

    def close_spider(self, spider):
        """Called when the spider is closed.

        Do cleanup after the pipeline has processed all items,
        e.g. close the database connection.
        """
        self.fs.flush()
        self.fs.close()

    @classmethod
    def from_crawler(cls, crawler):
        """Return a pipeline instance built from the crawler's settings
        and components."""
        return cls(crawler.settings.get('ITEM_FILE_PATH'),
                   crawler.settings.get('ITEM_FILE_NAME'))
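Once this class is registered in settings.py (see the next section), the framework drives it automatically: open_spider() runs before the first item arrives, process_item() runs once per item, and close_spider() runs after the spider finishes. The custom settings consumed by from_crawler() can also be overridden on the command line with Scrapy's -s option; the spider name news163 below is hypothetical:

scrapy crawl news163 -s ITEM_FILE_PATH=/tmp -s ITEM_FILE_NAME=news.txt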
2. File: settings.py (the relevant configuration in the scraping project)
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES =
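The value of ITEM_PIPELINES maps each pipeline's import path to an integer that sets its run order (0-1000, lower numbers run first). A minimal sketch, assuming the project module is named news163 (hypothetical), together with the two custom settings that from_crawler() reads:

ITEM_PIPELINES = {
    'news163.pipelines.News2FileFor163Pipeline': 300,
}

# Custom settings read by News2FileFor163Pipeline.from_crawler();
# the values here are placeholders.
ITEM_FILE_PATH = '/tmp'
ITEM_FILE_NAME = 'news163.txt'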
If the volume of scraped data is large, the right approach is to have the item pipeline process the data and write it into a database rather than a file.
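As a rough illustration of that approach, the sketch below stores items in SQLite instead of a plain file. It is not part of the original project: the News2DbPipeline name, the table layout, and the NEWS_DB_PATH setting are all assumptions.

import sqlite3

from scrapy.exceptions import DropItem


class News2DbPipeline(object):
    """Write news items into an SQLite database (illustrative sketch)."""

    def __init__(self, dbpath):
        self.dbpath = dbpath

    @classmethod
    def from_crawler(cls, crawler):
        # NEWS_DB_PATH is a hypothetical custom setting,
        # analogous to ITEM_FILE_PATH above.
        return cls(crawler.settings.get('NEWS_DB_PATH', 'news.db'))

    def open_spider(self, spider):
        self.conn = sqlite3.connect(self.dbpath)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS news "
            "(url TEXT PRIMARY KEY, source TEXT, title TEXT, content TEXT)")

    def close_spider(self, spider):
        self.conn.commit()
        self.conn.close()

    def process_item(self, item, spider):
        if not item.get("url"):
            raise DropItem("item without url")
        # INSERT OR IGNORE drops items whose url is already stored.
        self.conn.execute(
            "INSERT OR IGNORE INTO news (url, source, title, content) "
            "VALUES (?, ?, ?, ?)",
            (item["url"][0], item["source"][0], item["title"][0],
             ''.join(p.strip() for p in item["content"])))
        return item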