Three ways to do matrix multiplication:
1. torch.mm (only works for 2-D matrices)
2. torch.matmul
3. the @ operator

>>> a = torch.randn(3,3)
>>> b = torch.rand(3,3)
>>> a
tensor([[-0.6505,  0.0167,  2.2106],
        [ 0.8962, -0.3319, -1.2871],
        [-0.0106, -0.8484,  0.6174]])
>>> b
tensor([[0.3518, 0.5478, 0.9848],
        [0.0434, 0.2797, 0.2140],
        [0.3784, 0.8357, 0.7813]])
>>> torch.mm(a,b)
tensor([[ 0.6084,  1.4958,  1.0901],
        [-0.1862, -0.6776, -0.1940],
        [ 0.1931,  0.2729,  0.2904]])
>>> torch.matmul(a,b)
tensor([[ 0.6084,  1.4958,  1.0901],
        [-0.1862, -0.6776, -0.1940],
        [ 0.1931,  0.2729,  0.2904]])
>>> a@b
tensor([[ 0.6084,  1.4958,  1.0901],
        [-0.1862, -0.6776, -0.1940],
        [ 0.1931,  0.2729,  0.2904]])
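As a quick sanity check, here is a minimal sketch (shapes and random values chosen only for illustration) confirming that all three spellings agree, and that they differ from element-wise multiplication with *:

import torch

a = torch.randn(3, 3)
b = torch.rand(3, 3)

# All three spellings of matrix multiplication give the same result.
assert torch.allclose(torch.mm(a, b), torch.matmul(a, b))
assert torch.allclose(torch.matmul(a, b), a @ b)

# Element-wise multiplication (*) is a different operation.
print(torch.allclose(a @ b, a * b))   # almost certainly False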
# Linear-layer style multiplication can be used to compress (project) a matrix, for example:
>>> a = torch.rand(4,784)
>>> x = torch.rand(4,784)
>>> w = torch.rand(512,784)
>>> (x@w.t()).shape
torch.Size([4, 512])
>>> w = torch.rand(784,512)
>>> (x@w).shape
torch.Size([4, 512])
Note: the PyTorch convention stores the weight as w = torch.rand(ch_out, ch_in), i.e. (512, 784) here, which is why x is multiplied by w.t().
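This is the same weight layout that torch.nn.functional.linear uses, so x @ w.t() should match F.linear(x, w) when there is no bias. A minimal sketch under that assumption:

import torch
import torch.nn.functional as F

x = torch.rand(4, 784)       # batch of 4 flattened 28x28 inputs
w = torch.rand(512, 784)     # weight stored as (ch_out, ch_in)

out = x @ w.t()              # manual linear layer, no bias
print(out.shape)             # torch.Size([4, 512])

# F.linear computes x @ w.t() (+ bias) with this (ch_out, ch_in) layout.
assert torch.allclose(out, F.linear(x, w))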
>>> a = torch.rand(4,3,28,64)
>>> b = torch.rand(4,3,64,28)
>>> torch.mm(a,b).shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: self must be a matrix
>>> torch.matmul(a,b).shape
torch.Size([4, 3, 28, 28])
>>> b = torch.rand(4,64,28)
>>> torch.matmul(a,b).shape
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 1
>>> b = torch.rand(3,64,28)
>>> torch.matmul(a,b).shape
torch.Size([4, 3, 28, 28])
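In other words, for tensors with more than two dimensions torch.matmul multiplies the last two dimensions and broadcasts the leading batch dimensions. A minimal sketch checking this against torch.bmm on an explicitly flattened batch (shapes taken from the session above):

import torch

a = torch.rand(4, 3, 28, 64)
b = torch.rand(4, 3, 64, 28)

out = torch.matmul(a, b)     # batched matmul over the last two dims -> [4, 3, 28, 28]

# Equivalent: merge the batch dims, use bmm, then split them back.
ref = torch.bmm(a.reshape(-1, 28, 64), b.reshape(-1, 64, 28)).reshape(4, 3, 28, 28)
assert torch.allclose(out, ref)

# A [3, 64, 28] operand is broadcast against a's leading [4, 3] batch dims.
b2 = torch.rand(3, 64, 28)
print(torch.matmul(a, b2).shape)   # torch.Size([4, 3, 28, 28])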
Power operations
pow(a, n): raise a to the nth power (2, 3, 4, ...)
a**2       square
a.sqrt()   square root
a.rsqrt()  reciprocal of the square root
a**(0.5)   equivalent to taking the square root
clamp can be used for gradient clipping; for example, a.clamp(10) floors every element of the tensor at 10,
and a.clamp(0, 10) clips every element into the range [0, 10].
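A minimal sketch of these element-wise operations (the values are arbitrary and only for illustration):

import torch

a = torch.full((2, 2), 4.0)

print(a.pow(2))      # same as a**2: element-wise square -> 16
print(a ** 0.5)      # same as a.sqrt(): square root -> 2
print(a.rsqrt())     # 1 / sqrt(a) -> 0.5

# clamp clips values into a range (a crude form of gradient clipping).
g = torch.randn(2, 2) * 20
print(g.clamp(10))       # min only: every element floored at 10
print(g.clamp(0, 10))    # every element clipped into [0, 10]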