● Polynomial regression with sklearn
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Load the data (row 0 is the CSV header; column 1 is level, column 2 is salary)
data = np.genfromtxt("job.csv", delimiter=",")
x_data = data[1:, 1, np.newaxis]  # keep a 2-D (n_samples, 1) shape for sklearn
y_data = data[1:, 2, np.newaxis]
plt.scatter(x_data, y_data)
plt.show()

Output: scatter plot of position level against salary.
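Side note: because job.csv has a text header row and a text position column, np.genfromtxt fills every non-numeric cell with nan, which is why the code skips row 0 and only uses columns 1 and 2. A minimal inspection sketch (not part of the original note) to confirm what the parsed array looks like:

import numpy as np

data = np.genfromtxt("job.csv", delimiter=",")
print(data.shape)  # (11, 3): the header row plus 10 samples
print(data[0])     # [nan nan nan]: the header cannot be parsed as numbers
print(data[1])     # roughly [nan, 1.0, 45000.0]: 'business analyst' becomes nan; level and salary parse fine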
# Define the polynomial feature expansion; the degree parameter controls how many polynomial terms are generated
poly_reg = PolynomialFeatures(degree=5)
# Feature transformation
x_poly = poly_reg.fit_transform(x_data)
# Define the regression model
lin_reg = LinearRegression()
# Train the model
lin_reg.fit(x_poly, y_data)

# Plot the data and the fitted curve
plt.plot(x_data, y_data, 'b.')
plt.plot(x_data, lin_reg.predict(poly_reg.fit_transform(x_data)), c='r')
plt.title('truth or bluff (polynomial regression)')
plt.xlabel('position level')
plt.ylabel('salary')
plt.show()

Output: the fitted curve evaluated only at the 10 training levels.
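To see what the feature transformation step actually produces, you can inspect the expanded design matrix: with degree=5 each level value x becomes the six columns 1, x, x^2, ..., x^5, and the linear regression then learns one weight per column. A quick inspection sketch (get_feature_names_out exists in recent scikit-learn releases; older versions expose get_feature_names instead):

print(x_poly.shape)                      # (10, 6): 10 samples, columns 1, x, ..., x^5
print(poly_reg.get_feature_names_out())  # ['1' 'x0' 'x0^2' 'x0^3' 'x0^4' 'x0^5']
print(poly_reg.transform([[2]]))         # [[ 1.  2.  4.  8. 16. 32.]]
print(lin_reg.coef_, lin_reg.intercept_) # one learned weight per polynomial column, plus the intercept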
# Plot the data together with a smooth fitted curve
plt.plot(x_data, y_data, 'b.')
# Evaluate the model on a dense grid of levels so the curve is smooth between the training points
x_test = np.linspace(1, 10, 100)
x_test = x_test[:, np.newaxis]
plt.plot(x_test, lin_reg.predict(poly_reg.fit_transform(x_test)), c='r')
plt.title('truth or bluff (polynomial regression)')
plt.xlabel('position level')
plt.ylabel('salary')
plt.show()

Output: smooth degree-5 polynomial curve through the data points.
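The same model can be written more compactly with a scikit-learn Pipeline, which chains the feature expansion and the regression so they are fitted and applied together. This is only a restyling of the code above (make_pipeline is a standard scikit-learn helper), not a different model:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

model = make_pipeline(PolynomialFeatures(degree=5), LinearRegression())
model.fit(x_data, y_data)                 # x_data, y_data as loaded above
x_test = np.linspace(1, 10, 100)[:, np.newaxis]
y_pred = model.predict(x_test)            # same curve as lin_reg.predict(poly_reg.fit_transform(x_test))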
● Dataset "job.csv":
position,level,salary
business analyst,1,45000
junior consultant,2,50000
senior consultant,3,60000
manager,4,80000
country manager,5,110000
region manager,6,150000
partner,7,200000
senior partner,8,300000
c-level,9,500000
ceo,10,1000000
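With only 10 samples and a salary that grows very steeply with level, a straight line (degree=1) underfits badly while degree=5 tracks the points closely; this is exactly what the degree parameter in the code above controls. A small comparison sketch (a hypothetical loop, reusing x_data and y_data from the code above, not part of the original note):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

x_plot = np.linspace(1, 10, 100)[:, np.newaxis]
plt.plot(x_data, y_data, 'b.', label='data')
for degree in (1, 2, 5):
    poly = PolynomialFeatures(degree=degree)
    reg = LinearRegression().fit(poly.fit_transform(x_data), y_data)
    plt.plot(x_plot, reg.predict(poly.fit_transform(x_plot)), label=f'degree={degree}')
plt.legend()
plt.show()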