Processing data with multiple threads in Python
Read data from a file and process it with multiple threads.
#!/usr/bin/env python
# encoding=utf-8
import threading
import time
from queue import Queue

def readfile():
    file_object = open('/opt/dev/python/list.dat')
    global queue
    for line in file_object:
        queue.put(line)

class Consumer(threading.Thread):
    def run(self):
        global queue
        while queue.qsize() > 0:
            msg = self.name + ' consumed ' + queue.get()
            print(msg)
            time.sleep(0.01)

queue = Queue()

def main():
    readfile()
    for i in range(5):
        c = Consumer()
        c.start()

if __name__ == '__main__':
    main()
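One caveat with the consumer above: checking `queue.qsize() > 0` and then calling `queue.get()` is racy. Another thread can drain the queue between the two calls, leaving `get()` blocked forever, and a consumer that starts before `readfile()` has filled the queue may exit immediately. A common fix (a sketch of my own, not part of the original post: the `SENTINEL`, `results`, and `run` names are made up for illustration) is to push one sentinel value per worker so each thread shuts down deterministically:

```python
import threading
from queue import Queue

q = Queue()
SENTINEL = None          # marker telling a consumer thread to exit
results = []
lock = threading.Lock()  # protects the shared results list

def consumer():
    while True:
        item = q.get()          # blocks until an item is available
        if item is SENTINEL:
            break               # clean shutdown, no qsize() race
        with lock:
            results.append(item)

def run(items, workers=5):
    for item in items:
        q.put(item)
    threads = [threading.Thread(target=consumer) for _ in range(workers)]
    for t in threads:
        t.start()
    for _ in threads:
        q.put(SENTINEL)         # one sentinel per worker
    for t in threads:
        t.join()

run(['line%d' % i for i in range(16)])
print(len(results))  # 16
```

With blocking `get()` plus sentinels, workers never spin on `qsize()` and never miss items that arrive after they start.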
Test data:
佇列1
佇列2
佇列3
佇列4
佇列5
佇列6
佇列7
佇列8
佇列9
佇列10
佇列11
佇列12
佇列13
佇列14
佇列15
佇列16
Run results:
Thread-2 consumed 佇列879
Thread-3 consumed 佇列880
Thread-5 consumed 佇列881
Thread-4 consumed 佇列882
Thread-6 consumed 佇列883
Thread-2 consumed 佇列884
Thread-2 consumed 佇列885
Thread-6 consumed 佇列886
Thread-3 consumed 佇列88
Counting matches in the shell
grep -io '佇列' 1.log|wc -l
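The same count can be done in Python, which is handy when `grep` is not available. This is a small sketch of my own (the `count_occurrences` helper is a made-up name, not from the original post); like `grep -o … | wc -l`, it counts every occurrence, even when several appear on one line:

```python
# Python equivalent of `grep -io '佇列' 1.log | wc -l`:
# sum the number of substring occurrences over all lines of the file.
def count_occurrences(path, needle='佇列'):
    with open(path, encoding='utf-8') as f:
        return sum(line.count(needle) for line in f)
```

For example, `count_occurrences('1.log')` on the log produced by the script above should match the `wc -l` total.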
Appendix: a producer-consumer example in Python
#!/usr/bin/env python
# encoding=utf-8
import threading
import time
from queue import Queue

class Producer(threading.Thread):
    def run(self):
        global queue
        count = 0
        while True:
            for i in range(100):
                if queue.qsize() > 1000:
                    pass  # queue is full enough; skip producing
                else:
                    count = count + 1
                    msg = 'produced item ' + str(count)
                    queue.put(msg)
                    print(msg)
            time.sleep(1)

class Consumer(threading.Thread):
    def run(self):
        global queue
        while True:
            for i in range(3):
                if queue.qsize() < 100:
                    pass  # keep a minimum stock; skip consuming
                else:
                    msg = self.name + ' consumed ' + queue.get()
                    print(msg)
            time.sleep(1)

queue = Queue()

def test():
    for i in range(500):
        queue.put('initial item ' + str(i))
    for i in range(2):
        p = Producer()
        p.start()
    for i in range(5):
        c = Consumer()
        c.start()

if __name__ == '__main__':
    test()
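The polling on `qsize()` plus `sleep(1)` above works, but `queue.Queue` can do the flow control itself: `put()` blocks when the queue is full (given a `maxsize`), `get()` blocks when it is empty, and `task_done()`/`join()` let the main thread wait for all items to be processed. A more idiomatic sketch (my own variant, not the original author's code; the `producer`/`consumer` function names and counts are made up for illustration):

```python
import threading
from queue import Queue

q = Queue(maxsize=1000)           # put() blocks automatically when full

def producer(n_items):
    for i in range(n_items):
        q.put('product %d' % i)   # blocks instead of busy-polling qsize()

def consumer(out):
    while True:
        item = q.get()            # blocks instead of sleeping in a loop
        if item is None:          # sentinel: stop this worker
            q.task_done()
            break
        out.append(item)          # list.append is atomic under the GIL
        q.task_done()

consumed = []
workers = [threading.Thread(target=consumer, args=(consumed,)) for _ in range(5)]
for t in workers:
    t.start()
producer(200)
q.join()                          # wait until every item has been processed
for _ in workers:
    q.put(None)                   # one sentinel per worker
for t in workers:
    t.join()
print(len(consumed))  # 200
```

Because the queue itself blocks producers and consumers, no thread ever spins or sleeps on a counter, and `q.join()` gives a clean "all work finished" barrier.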