Replace all separators such as , . ? ! ' " : with spaces.
Convert all uppercase letters to lowercase.
Generate the word list.
Generate the word frequency counts.
Sort the counts and exclude grammatical words: pronouns, articles and conjunctions.
Output the top 20 words by frequency.
Save the text to be analysed as a UTF-8 encoded file, and obtain the content for the frequency analysis by reading that file.
fo = open('news.txt', 'r')
news = fo.read()
fo.close()
sep = ''',.!?'":;()'''
# Example stop-word set; the original exclusion list is not shown in the source
exclude = {'the', 'a', 'an', 'and', 'or', 'but', 'of', 'to', 'in', 'it', 'i', 'you', 'he', 'she', 'we', 'they'}
for c in sep:
    news = news.replace(c, ' ')
wordlist = news.lower().split()
worddict = {}
for w in wordlist:
    worddict[w] = worddict.get(w, 0) + 1
for w in exclude:
    if w in worddict:   # guard against stop-words that never appear in the text
        del worddict[w]
# Method 2
# wordset = set(wordlist)
# for w in wordset:
#     worddict[w] = wordlist.count(w)
dictlist = list(worddict.items())
dictlist.sort(key=lambda x: x[1], reverse=True)
for i in range(20):
    print(dictlist[i])
# Print the frequency of every word
for w in worddict:
    print(w, worddict[w])
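As a more concise alternative (a sketch, not part of the original assignment), the standard library's collections.Counter can build the same frequency table and return the top 20 directly via most_common:

from collections import Counter

# Assumes `wordlist` and `exclude` are defined as above
counter = Counter(w for w in wordlist if w not in exclude)
for word, freq in counter.most_common(20):
    print(word, freq)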
2. Chinese word frequency statistics
Read the text to be analysed from a file.
news = open('gzccnews.txt', 'r', encoding='utf-8').read()   # read the contents; jieba needs a string, not a file object
Install and use jieba for Chinese word segmentation.
pip install jieba
import jieba
jieba.lcut(news)   # lcut already returns a list, so an extra list() call is unnecessary
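For reference, jieba.cut returns a generator while jieba.lcut returns a list, which is why only cut needs to be wrapped in list(). A minimal sketch (the sentence below is an illustrative example, not taken from the analysed text):

import jieba

text = '小明硕士毕业于中国科学院计算所'
print(jieba.lcut(text))                      # list of tokens, accurate mode
print(list(jieba.cut(text, cut_all=True)))   # full mode; cut returns a generator, hence list()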
Generate the word frequency counts.
Sort the counts and exclude grammatical words: pronouns, articles and conjunctions.
Output the top 20 words by frequency (or save the results to a file).
import jieba

# Read the file
fo = open('jueshi.txt', 'r', encoding='utf-8')
file = fo.read()
fo.close()
# Exclude whitespace, pronouns and conjunctions
str1 = ''',。『』「」:;()!?、··· '''
# Example exclusion set; the original list is not shown in the source.
# It must be a set, because it is subtracted from set(tempwords) below.
dele = {' ', '', '的', '了', '是', '在', '他', '她', '我'}
# Add domain-specific terms to jieba's dictionary so they are segmented as single tokens
jieba.add_word('唐門')
jieba.add_word('魂師')
jieba.add_word('武魂')
jieba.add_word('魂導器')
for c in str1:
    file = file.replace(c, ' ')
tempwords = jieba.lcut(file)
count = {}
words = list(set(tempwords) - dele)
for i in range(len(words)):
    # Count occurrences in the token list rather than raw substring matches,
    # so short words are not over-counted inside longer ones
    count[words[i]] = tempwords.count(words[i])
countlist = list(count.items())
countlist.sort(key=lambda x: x[1], reverse=True)
print(countlist)
# Save the results to a file
fo = open(r'f:\cipintj.txt', 'a', encoding='utf-8')   # raw string avoids accidental escape sequences in the Windows path
for i in range(20):
    fo.write(countlist[i][0] + ':' + str(countlist[i][1]) + '\n')
fo.close()
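For comparison, a condensed sketch of the same Chinese pipeline using jieba plus collections.Counter, which handles both the counting and the top-20 selection; the input file name follows the code above, and the output path is only illustrative:

import jieba
from collections import Counter

with open('jueshi.txt', 'r', encoding='utf-8') as f:
    text = f.read()

stopchars = set(''',。『』「」:;()!?、 ''')   # punctuation and whitespace to drop
tokens = [t for t in jieba.lcut(text) if t.strip() and t not in stopchars]
top20 = Counter(tokens).most_common(20)

with open('cipintj.txt', 'w', encoding='utf-8') as out:   # hypothetical output path
    for word, freq in top20:
        out.write('%s:%d\n' % (word, freq))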