Crawler review:
fake_useragent is a module that bundles a large pool of real User-Agent strings. On Windows it can be installed with "pip3 install fake_useragent --user" (the original post also passed -i to point pip at a package mirror).
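Typical usage is just UserAgent().random, which returns a random real-browser UA string each call. The sketch below also adds a small fallback pool of my own (an assumption, not part of the library) so the function still works on a machine where the package is not installed:

```python
import random

try:
    from fake_useragent import UserAgent  # third-party: pip3 install fake_useragent

    def random_ua():
        # Returns a random real browser User-Agent string
        return UserAgent().random
except ImportError:
    # Hypothetical fallback pool so the rest of the script still runs
    # without the package; these are ordinary hand-picked UA strings.
    _POOL = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
        'Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0',
    ]

    def random_ua():
        return random.choice(_POOL)

print(random_ua())
```

Rotating the User-Agent like this makes each request look like it comes from a different browser, which is the whole point of using the module in a crawler.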
Honestly, this code is not hard at all, so without further ado, here it is:
"""
爬"""
from urllib import request, parse
from fake_useragent import useragent
import time, random
class
tiebaspider
:def
__init__
(self)
: self.url =
''# 獲取url
defget_url
(self, kw, pn)
: kw = parse.quote(
'kw'
) url = self.url.
format
(kw, pn)
print
(url)
return url
# 獲取user-agent
defget_useragent
(self)
: ua = useragent(
)return ua.random
# 儲存html頁面
defwrite_html
(self, html, filename)
:with
open
(filename,
'w', encoding=
'utf-8'
)as f:
f.write(html)
# 獲取html
defget_html
(self, url)
: headers =
req = request.request(url, headers=headers)
res = request.urlopen(req)
html = res.read(
).decode(
)return html
# 執行主程式
defrun
(self,kw,start,end)
:for i in
range
(start, end +1)
: pn =
(i -1)
*50url = self.get_url(kw, pn)
html = self.get_html(url)
filename =
'{}-第{}.html'
.format
(kw, i)
self.write_html(html, filename)
print
(filename,
' 完成啦'
) time.sleep(random.randint(1,
3))if __name__ ==
'__main__'
: kw =
input
('請輸入貼吧:'
) start =
int(
input
('請輸入起始頁:'))
end =
int(
input
('請輸入結束頁:'))
tieba = tiebaspider(
)while
true
:try
: tieba.run(kw,start,end)
break
except exception as e:
print
(e) time.sleep(
0.5)
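One detail worth calling out: parse.quote percent-encodes the forum name so that non-ASCII keywords survive inside the URL. A quick stdlib-only illustration (the forum names here are just sample inputs):

```python
from urllib import parse

# ASCII keywords pass through unchanged
print(parse.quote('lol'))     # lol

# Chinese keywords are UTF-8 percent-encoded
print(parse.quote('海賊王'))  # %E6%B5%B7%E8%B3%8A%E7%8E%8B

# parse.unquote reverses the encoding
print(parse.unquote('%E6%B5%B7%E8%B3%8A%E7%8E%8B'))  # 海賊王
```

This is why get_url must quote the user's actual input: quoting the literal string 'kw', as the unfixed version did, would fetch the forum literally named "kw" on every request.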