With all of the support functions in place, we can move on to the k-means algorithm itself. It creates k centroids, assigns every point to its nearest centroid, recomputes the centroids, and repeats that process until the cluster assignments of the data points no longer change.

from numpy import *

def loaddataset(filename):
    datamat = []                              # assume last column is target value
    fr = open(filename)
    for line in fr.readlines():
        curline = line.strip().split('\t')
        fltline = list(map(float, curline))   # map all elements to float
        datamat.append(fltline)
    return datamat
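As a quick sanity check, the loader can be exercised against a throwaway tab-separated file. The file contents and temporary path below are illustrative assumptions; the parsing logic restates what loaddataset() does.

```python
import os
import tempfile

# Self-contained restatement of loaddataset(): read tab-separated floats,
# one row per line, into a list of lists.
def loaddataset(filename):
    datamat = []
    with open(filename) as fr:
        for line in fr:
            datamat.append([float(x) for x in line.strip().split('\t')])
    return datamat

# Write a tiny sample file, parse it back, then clean up.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('1.0\t2.0\n3.5\t4.5\n')
    path = f.name
rows = loaddataset(path)
print(rows)   # [[1.0, 2.0], [3.5, 4.5]]
os.remove(path)
```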
def disteclud(veca, vecb):
    return sqrt(sum(power(veca - vecb, 2)))   # Euclidean distance between two vectors
def randcent(dataset, k):                     # build a set of k random centroids for the given dataset
    n = shape(dataset)[1]
    centroids = mat(zeros((k, n)))
    for j in range(n):                        # create random cluster centers
        minj = min(dataset[:, j])             # keep the centroids within the bounds of the data
        rangej = float(max(dataset[:, j]) - minj)
        centroids[:, j] = mat(minj + rangej * random.rand(k, 1))
    return centroids
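The point of drawing each coordinate from [min, max] per column is that the random centroids always land inside the data's bounding box. A minimal sketch of that idea, using a plain numpy array and a made-up dataset rather than the matrix class above:

```python
import numpy as np

# Random centroids bounded by the data's per-column min/max, as in randcent().
# The dataset, seed, and k here are illustrative assumptions.
rng = np.random.default_rng(0)
data = np.array([[0.0, 10.0], [2.0, 14.0], [1.0, 12.0]])
k = 3

mins, maxs = data.min(axis=0), data.max(axis=0)
centroids = mins + (maxs - mins) * rng.random((k, data.shape[1]))

# Every coordinate of every centroid stays inside the data's bounding box.
print(((centroids >= mins) & (centroids <= maxs)).all())   # True
```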
def kmeans(dataset, k, distmeas=disteclud, createcent=randcent):
    m = shape(dataset)[0]
    clusterassment = mat(zeros((m, 2)))       # per-point assignment: cluster index, squared error
    centroids = createcent(dataset, k)
    clusterchanged = True
    while clusterchanged:
        clusterchanged = False
        for i in range(m):                    # find the closest centroid for each data point
            mindist = inf; minindex = -1
            for j in range(k):
                distji = distmeas(centroids[j, :], dataset[i, :])
                if distji < mindist:
                    mindist = distji; minindex = j
            if clusterassment[i, 0] != minindex: clusterchanged = True
            clusterassment[i, :] = minindex, mindist**2
        print(centroids)
        for cent in range(k):                 # recompute the centroids
            ptsinclust = dataset[nonzero(clusterassment[:, 0].A == cent)[0]]  # all points in this cluster
            centroids[cent, :] = mean(ptsinclust, axis=0)   # update each centroid to the mean of its cluster
    return centroids, clusterassment
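The assign/recompute loop can be seen converging on a tiny example. The sketch below restates the same two steps with plain numpy arrays; the four-point dataset, the fixed starting centroids, and k=2 are illustrative assumptions, not part of the code above.

```python
import numpy as np

# Two well-separated pairs of points and two deliberately bad start centroids.
data = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 8.5]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])

for _ in range(10):
    # Assignment step: index of the nearest centroid for every point.
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: move each centroid to the mean of its assigned points.
    new_centroids = np.array([data[labels == j].mean(axis=0) for j in range(2)])
    if np.allclose(new_centroids, centroids):   # assignments stable: done
        break
    centroids = new_centroids

print(labels)      # [0 0 1 1]
print(centroids)   # [[1.25 1.5 ] [8.5  8.25]]
```

After one pass the centroids jump to the cluster means and the assignments stop changing, which is exactly the termination condition kmeans() checks with clusterchanged.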
def bikmeans(dataset, k, distmeas=disteclud):
    m = shape(dataset)[0]
    clusterassment = mat(zeros((m, 2)))
    centroid0 = mean(dataset, axis=0).tolist()[0]
    centlist = [centroid0]                    # start from a single initial centroid
    for j in range(m):
        clusterassment[j, 1] = distmeas(mat(centroid0), dataset[j, :])**2
    while (len(centlist) < k):
        lowestsse = inf
        for i in range(len(centlist)):
            ptsincurrcluster = dataset[nonzero(clusterassment[:, 0].A == i)[0], :]  # all points in cluster i
            centroidmat, splitclustass = kmeans(ptsincurrcluster, 2, distmeas)
            ssesplit = sum(splitclustass[:, 1])   # compare this SSE against the current minimum
            ssenotsplit = sum(clusterassment[nonzero(clusterassment[:, 0].A != i)[0], 1])
            print("ssesplit, and notsplit: ", ssesplit, ssenotsplit)
            if (ssesplit + ssenotsplit) < lowestsse:
                bestcenttosplit = i
                bestnewcents = centroidmat
                bestclustass = splitclustass.copy()
                lowestsse = ssesplit + ssenotsplit
        bestclustass[nonzero(bestclustass[:, 0].A == 1)[0], 0] = len(centlist)   # update the cluster assignments
        bestclustass[nonzero(bestclustass[:, 0].A == 0)[0], 0] = bestcenttosplit
        print('the bestcenttosplit is: ', bestcenttosplit)
        print('the len of bestclustass is: ', len(bestclustass))
        centlist[bestcenttosplit] = bestnewcents[0, :].tolist()[0]   # replace the split centroid
        centlist.append(bestnewcents[1, :].tolist()[0])              # ...with the two new centroids
        clusterassment[nonzero(clusterassment[:, 0].A == bestcenttosplit)[0], :] = bestclustass
    return mat(centlist), clusterassment
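The heart of bikmeans() is the SSE bookkeeping: for each existing cluster, add the SSE it would have after being split in two to the SSE of all untouched clusters, and split the cluster that yields the lowest total. A minimal sketch of that comparison, with made-up clusters and a crude midpoint split standing in for the 2-means call:

```python
import numpy as np

def sse(points, centroid):
    # Sum of squared distances of points to a centroid.
    return float(((points - centroid) ** 2).sum())

# Hypothetical current clusters: cluster 0 is really two groups glued
# together (loose), cluster 1 is tight.
clusters = {
    0: np.array([[1.0, 1.0], [1.2, 0.8], [9.0, 9.0], [9.3, 8.7]]),
    1: np.array([[5.0, 5.0], [5.1, 4.9]]),
}

totals = {}
for i, pts in clusters.items():
    # Split cluster i at the mean of its first coordinate -- a stand-in
    # for running kmeans(..., 2) on it as bikmeans() does.
    mid = pts[:, 0].mean()
    left, right = pts[pts[:, 0] <= mid], pts[pts[:, 0] > mid]
    ssesplit = sse(left, left.mean(axis=0)) + sse(right, right.mean(axis=0))
    ssenotsplit = sum(sse(p, p.mean(axis=0)) for j, p in clusters.items() if j != i)
    totals[i] = ssesplit + ssenotsplit

best = min(totals, key=totals.get)
print(best)   # 0 -- splitting the loose cluster lowers total SSE the most
```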