tf.concat concatenates tensors along a specified dimension, leaving the other dimensions unchanged. Since TensorFlow 1.0 the function is used as follows:
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
# concatenate along dimension 0
tf.concat( [t1, t2],0) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
# concatenate along dimension 1
tf.concat([t1, t2],1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
For reference, when merging network outputs (as in inception_v3) the concatenation is done along the depth axis, which is axis 3 in a [batch, height, width, depth] layout.
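As a minimal runnable sketch (the branch names and shapes below are made up for illustration, not taken from inception_v3 itself), concatenating two feature maps along the depth axis looks like this:

```python
import tensorflow as tf

# Two hypothetical feature maps with layout [batch, height, width, depth].
branch_a = tf.zeros([8, 35, 35, 64])
branch_b = tf.zeros([8, 35, 35, 96])

# Concatenate along axis 3 (the depth axis); all other dimensions must match.
merged = tf.concat([branch_a, branch_b], axis=3)
print(merged.shape)  # (8, 35, 35, 160)
```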
Usage: stack(values, axis=0, name="stack"):
"""Stacks a list of rank-R tensors into one rank-(R+1) tensor."""
x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])
tf.stack([x,y,z]) ==> [[1,4],[2,5],[3,6]]
tf.stack([x,y,z],axis=0) ==> [[1,4],[2,5],[3,6]]
tf.stack([x,y,z],axis=1) ==> [[1, 2, 3], [4, 5, 6]]
tf.stack turns a list of rank-r tensors into a single rank-(r+1) tensor. Note that tf.pack has been renamed to tf.stack.
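A small runnable check of the same shapes, written in the TF 1.x Session style used by the rest of this post:

```python
import tensorflow as tf

x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])

# axis=0 stacks the inputs as rows; axis=1 stacks them as columns.
stacked_0 = tf.stack([x, y, z], axis=0)  # shape (3, 2)
stacked_1 = tf.stack([x, y, z], axis=1)  # shape (2, 3)

with tf.Session() as sess:
    print(sess.run(stacked_0))  # [[1 4] [2 5] [3 6]]
    print(sess.run(stacked_1))  # [[1 2 3] [4 5 6]]
```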
3. tf.squeeze
tf.squeeze reduces a tensor's rank by removing only dimensions of size 1; if no axes are specified, every dimension of length 1 is removed.
import tensorflow as tf
arr = tf.Variable(tf.truncated_normal([3, 4, 1, 6, 1], stddev=0.1))
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(arr).shape
# (3, 4, 1, 6, 1)
sess.run(tf.squeeze(arr, [2])).shape
# (3, 4, 6, 1)
sess.run(tf.squeeze(arr, [2, 4])).shape
# (3, 4, 6)
sess.run(tf.squeeze(arr)).shape
# (3, 4, 6)
tf.split behaves differently depending on whether its second argument is a scalar or a vector: a scalar splits the tensor into that many equal parts along axis, while a vector splits it into pieces whose sizes along axis are given by the vector's elements.
def split(value, num_or_size_splits, axis=0, num=None, name="split"):
    """Splits a tensor into sub tensors.

    If `num_or_size_splits` is an integer type, `num_split`, then splits `value`
    along dimension `axis` into `num_split` smaller tensors.
    Requires that `num_split` evenly divides `value.shape[axis]`.

    If `num_or_size_splits` is not an integer type, it is presumed to be a Tensor
    `size_splits`, then splits `value` into `len(size_splits)` pieces. The shape
    of the `i`-th piece has the same size as the `value` except along dimension
    `axis` where the size is `size_splits[i]`.

    For example:
```python
# 'value' is a tensor with shape [5, 30]
# split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1
split0, split1, split2 = tf.split(value, [4, 15, 11], 1)
tf.shape(split0) # [5, 4]
tf.shape(split1) # [5, 15]
tf.shape(split2) # [5, 11]
# split 'value' into 3 tensors along dimension 1
split0, split1, split2 = tf.split(value, num_or_size_splits=3, axis=1)
tf.shape(split0) # [5, 10]
```
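The following sketch runs both variants on a concrete [5, 30] tensor (the tensor itself is just zeros for illustration):

```python
import tensorflow as tf

value = tf.zeros([5, 30])

# Scalar: split into 3 equal pieces of size 10 along axis 1.
equal_parts = tf.split(value, num_or_size_splits=3, axis=1)
for p in equal_parts:
    print(p.get_shape().as_list())  # [5, 10] for each piece

# Vector: split into pieces of sizes 4, 15 and 11 along axis 1.
split0, split1, split2 = tf.split(value, [4, 15, 11], axis=1)
print(split0.get_shape().as_list())  # [5, 4]
print(split1.get_shape().as_list())  # [5, 15]
print(split2.get_shape().as_list())  # [5, 11]
```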
tf.slice: slice(input_, begin, size, name=None) extracts a slice from a tensor; begin gives the starting index in each dimension and size gives how many elements to take along each dimension.
Suppose input is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]], [[5, 5, 5], [6, 6, 6]]]; then:
(1)tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
(2)tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3], [4, 4, 4]]]
(3)tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]], [[5, 5, 5]]]
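A runnable version of these three cases, again in the TF 1.x Session style:

```python
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])

# begin marks where the slice starts, size how many elements to take per axis.
s1 = tf.slice(t, [1, 0, 0], [1, 1, 3])  # [[[3, 3, 3]]]
s2 = tf.slice(t, [1, 0, 0], [1, 2, 3])  # [[[3, 3, 3], [4, 4, 4]]]
s3 = tf.slice(t, [1, 0, 0], [2, 1, 3])  # [[[3, 3, 3]], [[5, 5, 5]]]

with tf.Session() as sess:
    print(sess.run([s1, s2, s3]))
```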
tf.cast(tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32)
When reading the cifar10 example you will inevitably run into this function. The official docstring is long and obscure and of little help, and the explanations found online are poor or practically nonexistent. The prototype of the function looks like this:
def strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None):
    """Extracts a strided slice from a tensor.
'input' = [[[1, 1, 1], [2, 2, 2]],
           [[3, 3, 3], [4, 4, 4]],
           [[5, 5, 5], [6, 6, 6]]]
Think of the input as a 3-D tensor; counting from the outermost dimension inward, the dimensions are 1, 2 and 3.
Take the call tf.strided_slice(input, [0,0,0], [2,2,2], [1,2,1]) as an example: start = [0,0,0], end = [2,2,2], stride = [1,2,1]. The slice covers the half-open interval [start, end); note that end is exclusive.
In dimension 1, start = 0, end = 2, stride = 1, so rows 0 and 1 are taken, and the intermediate output is
[[[1, 1, 1], [2, 2, 2]],
 [[3, 3, 3], [4, 4, 4]]]
In dimension 2, start = 0, end = 2, stride = 2, so only row 0 can be taken, and the output becomes
[[[1, 1, 1]],
 [[3, 3, 3]]]
In dimension 3, start = 0, end = 2, stride = 1, so elements 0 and 1 are taken, which gives the final output
[[[1, 1]],
 [[3, 3]]]
Code along these lines:
import tensorflow as tf
data = [[[1, 1, 1], [2, 2, 2]],
        [[3, 3, 3], [4, 4, 4]],
        [[5, 5, 5], [6, 6, 6]]]
x = tf.strided_slice(data, [0, 0, 0], [1, 1, 1])
with tf.Session() as sess:
    print(sess.run(x))  # [[[1]]]
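And a slightly extended variant that verifies the [1, 2, 1] stride example walked through above:

```python
import tensorflow as tf

data = [[[1, 1, 1], [2, 2, 2]],
        [[3, 3, 3], [4, 4, 4]],
        [[5, 5, 5], [6, 6, 6]]]

# Each dimension is sliced over [start, end), stepping by the given stride.
y = tf.strided_slice(data, [0, 0, 0], [2, 2, 2], [1, 2, 1])

with tf.Session() as sess:
    print(sess.run(y))  # [[[1 1]] [[3 3]]], shape (2, 1, 2)
```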