**Algorithm**
To keep readers from getting lost in my code, I'll first define a few formulas here:
$$\nabla_{\theta_j} J(\theta) = -x^{(i)}\left(1\{y^{(i)}=j\} - p(y^{(i)}=j \mid x^{(i)};\theta)\right) + \lambda\theta_j \tag{1}$$

$$p(y^{(i)}=j \mid x^{(i)};\theta) = \frac{e^{\theta_j^T x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^T x^{(i)}}} \tag{2}$$

$$e^{\theta_l^T x^{(i)}} \tag{3}$$
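To make formulas (2) and (3) concrete, here is a small numeric sketch (my own toy numbers, not part of the classifier below) that computes the class probabilities for a 3-class problem:

```python
import numpy as np

# toy example: k = 3 classes, 2 features (made-up numbers)
theta = np.array([[ 0.5, -0.2],
                  [ 0.1,  0.3],
                  [-0.4,  0.8]])        # one row theta_l per class
x = np.array([1.0, 2.0])

logits = theta @ x                      # theta_l^T x for each class
exps = np.exp(logits)                   # formula (3)
probs = exps / exps.sum()               # formula (2)
print(probs, probs.sum())               # the probabilities sum to 1
```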
**Dataset**
Features: the entire image is used as the feature vector.
The code has been uploaded to GitHub.
The code this time is for Python 3.
```python
# encoding=utf8
import math
import random
import time

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


class softmax(object):

    def __init__(self):
        self.learning_step = 0.000001   # learning rate
        self.max_iteration = 100000     # maximum number of iterations
        self.weight_lambda = 0.01       # weight-decay (regularization) coefficient

    def cal_e(self, x, l):
        '''Compute formula (3) above: e^{theta_l^T x}'''
        theta_l = self.w[l]
        product = np.dot(theta_l, x)
        return math.exp(product)

    def cal_probability(self, x, j):
        '''Compute formula (2) above: the softmax probability p(y = j | x)'''
        molecule = self.cal_e(x, j)
        denominator = sum([self.cal_e(x, i) for i in range(self.k)])
        return molecule / denominator
```
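One caveat before moving on: `math.exp(product)` in `cal_e` can overflow once the dot products grow large; the tiny learning rate here mostly avoids that, but a standard remedy (my addition, not in the original code) is to subtract the largest logit before exponentiating, which leaves formula (2) unchanged:

```python
import numpy as np

def stable_softmax(logits):
    """Softmax with the max-logit shift; mathematically identical to formula (2)."""
    shifted = logits - np.max(logits)   # largest entry becomes 0, so exp never overflows
    exps = np.exp(shifted)
    return exps / exps.sum()

print(stable_softmax(np.array([1000.0, 1001.0, 1002.0])))  # naive exp(1000) overflows
```

The original code keeps the naive version. Next comes the gradient of formula (1):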
```python
    def cal_partial_derivative(self, x, y, j):
        '''Compute formula (1) above: the gradient with respect to theta_j'''
        first = int(y == j)                     # the indicator 1{y == j}
        second = self.cal_probability(x, j)     # the softmax probability p(y = j | x)

        return -x * (first - second) + self.weight_lambda * self.w[j]
```
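A quick way to sanity-check formula (1) is to compare it against finite differences of the cross-entropy loss −log p(y = j | x). This standalone sketch (hypothetical helper names, λ = 0 for simplicity) does exactly that:

```python
import numpy as np

def softmax_probs(theta, x):
    exps = np.exp(theta @ x - np.max(theta @ x))
    return exps / exps.sum()

def loss(theta, x, y):
    return -np.log(softmax_probs(theta, x)[y])      # cross-entropy, no regularization

def grad_formula_1(theta, x, y, j):
    return -x * (int(y == j) - softmax_probs(theta, x)[j])   # formula (1), lambda = 0

rng = np.random.default_rng(0)
theta, x, y = rng.normal(size=(3, 4)), rng.normal(size=4), 1

eps = 1e-6
for j in range(3):
    numeric = np.zeros_like(x)
    for d in range(len(x)):
        tp, tm = theta.copy(), theta.copy()
        tp[j, d] += eps
        tm[j, d] -= eps
        numeric[d] = (loss(tp, x, y) - loss(tm, x, y)) / (2 * eps)
    assert np.allclose(numeric, grad_formula_1(theta, x, y, j), atol=1e-5)
print('formula (1) matches finite differences')
```

The next method turns the learned weights into a class prediction: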
```python
    def predict_(self, x):
        result = np.dot(self.w, x)      # scores for all k classes, shape (k, 1)
        row, column = result.shape

        # find the position of the largest score
        _position = np.argmax(result)
        m, n = divmod(_position, column)

        return m
```
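`divmod` here undoes `np.argmax`, which returns a flat index; because `x` arrives as a k×1 column vector, the recovered row is the predicted class. A tiny demonstration with made-up scores:

```python
import numpy as np

result = np.matrix([[0.1], [2.5], [0.7]])   # scores for 3 classes, shape (3, 1)
row, column = result.shape                  # column == 1
m, n = divmod(np.argmax(result), column)    # flat index -> (row, column)
print(m)                                    # 1, the highest-scoring class
```

Training is plain stochastic gradient descent: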
```python
    def train(self, features, labels):
        self.k = len(set(labels))
        # one weight vector per class; the +1 column is for the bias term
        self.w = np.zeros((self.k, len(features[0]) + 1))

        iteration = 0
        while iteration < self.max_iteration:
            print('loop %d' % iteration)
            iteration += 1

            # pick one random sample per step
            index = random.randint(0, len(labels) - 1)
            x = features[index]
            y = labels[index]

            x = list(x)
            x.append(1.0)               # append the bias feature
            x = np.array(x)

            derivatives = [self.cal_partial_derivative(x, y, j) for j in range(self.k)]
            for j in range(self.k):
                self.w[j] -= self.learning_step * derivatives[j]
```
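Each iteration draws a single random sample, evaluates formula (1) for every class, and takes a step of size `learning_step`. For reference, a vectorized mini-batch version of the same update (my own sketch, not the author's code) would look like:

```python
import numpy as np

def minibatch_step(w, X, y, lr=1e-6, lam=0.01):
    """One softmax-regression step on a batch; rows of X already carry the bias 1."""
    logits = X @ w.T                                    # (batch, k)
    logits -= logits.max(axis=1, keepdims=True)         # stability shift
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(w.shape[0])[y]                      # 1{y == j} per sample
    grad = -(onehot - probs).T @ X / len(y) + lam * w   # formula (1), batch-averaged
    return w - lr * grad
```

Finally, prediction over the test set and the driver script: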
```python
    def predict(self, features):
        labels = []
        for feature in features:
            x = list(feature)
            x.append(1)                 # same bias feature as in training
            x = np.matrix(x)
            x = np.transpose(x)
            labels.append(self.predict_(x))
        return labels


if __name__ == '__main__':
    print('start read data')
    time_1 = time.time()

    raw_data = pd.read_csv('../data/train.csv', header=0)
    data = raw_data.values

    imgs = data[0::, 1::]       # pixel columns
    labels = data[::, 0]        # the first column is the digit label

    # use 2/3 of the data for training and 1/3 for testing
    train_features, test_features, train_labels, test_labels = train_test_split(
        imgs, labels, test_size=0.33, random_state=23323)

    time_2 = time.time()
    print('read data cost ' + str(time_2 - time_1) + ' second')

    print('start training')
    p = softmax()
    p.train(train_features, train_labels)
    time_3 = time.time()
    print('training cost ' + str(time_3 - time_2) + ' second')

    print('start predicting')
    test_predict = p.predict(test_features)
    time_4 = time.time()
    print('predicting cost ' + str(time_4 - time_3) + ' second')

    score = accuracy_score(test_labels, test_predict)
    print('the accuracy score is ' + str(score))
```