# Scrape the page data of the Sogou home page
import requests
# 1. Specify the URL
url = 'https://www.sogou.com/'
# 2. Send the request
response = requests.get(url=url)
# 3. Get the response data
page_text = response.text  # .text returns the response body as a string
# 4. Persist it to disk
with open('./sogou.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
print('over!')
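A slightly more defensive variant of the same request can help in practice: send a browser-like User-Agent, set a timeout, and check the status code before writing the file. This is a minimal sketch with illustrative values (the User-Agent string and timeout are assumptions, not part of the original):

# Defensive version of the GET request above (illustrative values)
import requests

url = 'https://www.sogou.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36'}

response = requests.get(url=url, headers=headers, timeout=10)
response.raise_for_status()                      # raise an HTTPError for 4xx/5xx responses
response.encoding = response.apparent_encoding   # guess the encoding from the body

with open('./sogou.html', 'w', encoding='utf-8') as fp:
    fp.write(response.text)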
Example 1:
import requests

wd = input('enter a word:')
# Sogou web search endpoint; the query string is built from params below
url = 'https://www.sogou.com/web'
# Package the request parameters
param = {'query': wd}
# UA spoofing: pretend to be a regular browser (example User-Agent string)
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36'}
response = requests.get(url=url, params=param, headers=headers)
# Manually fix the encoding of the response data
response.encoding = 'utf-8'
page_text = response.text
filename = wd + '.html'
with open(filename, 'w', encoding='utf-8') as fp:
    fp.write(page_text)
print(filename, 'scraped successfully!!!')
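A quick note on params: requests URL-encodes the dictionary and appends it to the URL as a query string, which you can verify from response.url. A minimal sketch (the placeholder User-Agent and the query value are just illustrative):

import requests

headers = {'User-Agent': 'Mozilla/5.0'}  # placeholder UA; reuse the fuller one above in real code
resp = requests.get('https://www.sogou.com/web', params={'query': 'python'}, headers=headers)
print(resp.url)           # the params dict has been URL-encoded into the request URL
print(resp.status_code)   # 200 if the request went through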
Example 2:
import requests

# URL of the translation interface (left blank in the original; fill in the POST endpoint you are targeting)
url = ''
word = input('enter an english word:')
# Package the request parameters; the key name ('kw' here) depends on the target API
data = {'kw': word}
# UA spoofing (example User-Agent string)
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36'}
response = requests.post(url=url, data=data, headers=headers)
# .text returns a string; .json() returns the deserialized object
obj_json = response.json()
print(obj_json)
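If you want to persist the JSON response instead of just printing it, json.dump works directly on the deserialized object. A minimal sketch, assuming obj_json holds the response from above (the file name is arbitrary):

import json

with open('./translation.json', 'w', encoding='utf-8') as fp:
    json.dump(obj_json, fp, ensure_ascii=False)  # ensure_ascii=False keeps non-ASCII text readable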
Example 3:
# Scrape the locations of KFC restaurants for any given city
# The data is loaded dynamically (via an AJAX POST), so we request that endpoint directly
import requests

city = input('enter a city name:')
# URL of the KFC store-list AJAX endpoint (left blank in the original; fill it in)
url = ''
# Form parameters; 'keyword' carries the city name here, and the exact field names depend on the endpoint
data = {'keyword': city, 'pageIndex': '1', 'pageSize': '10'}
# UA spoofing (example User-Agent string)
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36'}
response = requests.post(url=url, headers=headers, data=data)
json_text = response.text
print(json_text)
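Since the endpoint returns JSON text, it is usually more convenient to deserialize it than to print the raw string. A minimal sketch, assuming json_text holds the response body from above (the structure of the result depends on what the endpoint returns):

import json

store_data = json.loads(json_text)  # turn the JSON string into Python objects (dict/list)
print(type(store_data))             # usually a dict at the top level
print(store_data)                   # the exact keys depend on the endpoint's response format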