When running a neural network, the speed-up from a GPU is obvious. Below is a comparison of the same computation run on the CPU and on the GPU.
First, run it on the CPU:
import torch
import time

for i in range(1, 10):
    start_time = time.time()  # time.time() returns the current time in seconds
    # torch.rand(i*100, 1000, 1000) builds a tensor of i*100 matrices of size 1000x1000,
    # with every element drawn uniformly from [0, 1)
    a = torch.rand(i * 100, 1000, 1000)
    a = torch.mul(a, a)
    cul_end_time = time.time() - start_time
    print(cul_end_time)
Output:
0.441303014755249
0.8848202228546143
1.287822961807251
1.7319905757904053
2.177103281021118
2.555281162261963
2.9773526191711426
3.3875885009765625
3.9365322589874268
Process finished with exit code 0
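Note that each figure above also includes the time torch.rand spends building the tensor, not only the multiplication. A minimal sketch (same shapes as above, hypothetical variable names) that starts the clock only after the tensor exists, so that just torch.mul is measured:

import torch
import time

for i in range(1, 10):
    a = torch.rand(i * 100, 1000, 1000)  # build the tensor before starting the clock
    start_time = time.time()
    a = torch.mul(a, a)                  # element-wise multiply on the CPU
    print(time.time() - start_time)      # only the multiplication is timed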
Now run the same loop on the GPU, as follows:
import torch
import time

for i in range(1, 10):
    start_time = time.time()  # time.time() returns the current time in seconds
    a = torch.rand(i * 100, 1000, 1000)
    # The only difference from the CPU version: a.cuda() moves the tensor a onto the GPU,
    # so the computation below runs on the GPU
    a = a.cuda()
    a = torch.mul(a, a)
    cul_end_time = time.time() - start_time
    print(cul_end_time)
Output:
526.2110984325409
1.0246520042419434
1.4614639282226562
1.8706181049346924
2.5586912631988525
3.071241617202759
3.5114991664886475
3.9417364597320557
4.653223752975464
Process finished with exit code 0
The first iteration took 526 seconds, nearly 9 minutes; that time was spent waiting for the data to be loaded onto the graphics card.
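That first-run cost is a one-off: it covers CUDA initialisation and the host-to-device copy, and CUDA kernels also launch asynchronously, so time.time() on its own can be misleading. A minimal sketch of a fairer measurement, assuming a CUDA-capable PyTorch build, which warms the GPU up once and calls torch.cuda.synchronize() before reading the clock:

import torch
import time

# Warm-up: the very first CUDA call pays for context creation, so run one
# throwaway operation before timing anything.
warm = torch.rand(100, 1000, 1000).cuda()
warm = torch.mul(warm, warm)
torch.cuda.synchronize()

for i in range(1, 10):
    a = torch.rand(i * 100, 1000, 1000).cuda()  # copy the data onto the GPU first
    torch.cuda.synchronize()                    # make sure the copy has finished
    start_time = time.time()
    a = torch.mul(a, a)                         # element-wise multiply on the GPU
    torch.cuda.synchronize()                    # wait for the kernel to complete
    print(time.time() - start_time)             # only the multiplication is timed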