Transform List to Tensor more accurately
I want to return a list from my Dataloader. But to return it, it needs to be a tensor, right? So I converted it, but information was lost in the process. Is there another way?
pt_tensor_from_list = torch.tensor(pose_transform)
pt_tensor_from_list = torch.FloatTensor(pose_transform)
I expect the output:
([[-0.0003000000142492354, -0.0008999999845400453, 0.00039999998989515007, 0],
  [0.0010000000474974513, -0.00019999999494757503, 0.0003000000142492354, 0],
  [0.00019999999494757503, -0.0005000000237487257, -0.0008999999845400453, 0],
  [5.484399795532227, -24.28619956970215, 117.5000991821289, 1]])
But it is:
([[ -0.0003, -0.0009, 0.0004, 0.0000],
[ 0.0010, -0.0002, 0.0003, 0.0000],
[ 0.0002, -0.0005, -0.0009, 0.0000],
[ 5.4844, -24.2862, 117.5001, 1.0000]])
You don't lose any information during this kind of conversion. It only looks more compact because printing a tensor calls its __str__() or __repr__() method, which pretty-prints the values. As you can find here, torch.Tensor uses a kind of internal tensor formatter called _tensor_str. If you look inside the code (link), you will see that by default the parameter precision is set to 4:
precision: Number of digits of precision for floating point output (default = 4).
That is why only 4 digits of each value are shown when you print the tensor. The values actually stored in the tensor, however, are the same as in your original list.
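If you want the printed tensor to show more digits, you can raise the print precision with torch.set_printoptions; note that this is a display-only setting and does not change what is stored. A minimal sketch (using two of the values from the question):

```python
import torch

t = torch.FloatTensor([[-0.0003000000142492354, 5.484399795532227]])
print(t)                              # default: 4 digits of precision shown

torch.set_printoptions(precision=10)  # display-only: show more digits
print(t)                              # same stored values, printed in full

torch.set_printoptions(precision=4)   # restore the default
```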
Here is a small example to get the idea:
Code:
import torch

test_list = [[-0.0003000000142492354, -0.0008999999845400453, 0.00039999998989515007, 0],
             [0.0010000000474974513, -0.00019999999494757503, 0.0003000000142492354, 0],
             [0.00019999999494757503, -0.0005000000237487257, -0.0008999999845400453, 0],
             [5.484399795532227, -24.28619956970215, 117.5000991821289, 1]]

print('Original values:')
for i in test_list:
    for j in i:
        print(j)

pt_tensor_from_list = torch.FloatTensor(test_list)
print('When printing FloatTensor:')
print(pt_tensor_from_list.dtype, pt_tensor_from_list, sep='\n')

print('When printing each value separately:')
for i in pt_tensor_from_list:
    for j in i:
        print(j.item())
Output:
Original values:
-0.0003000000142492354
-0.0008999999845400453
0.00039999998989515007
0
0.0010000000474974513
-0.00019999999494757503
0.0003000000142492354
0
0.00019999999494757503
-0.0005000000237487257
-0.0008999999845400453
0
5.484399795532227
-24.28619956970215
117.5000991821289
1
When printing FloatTensor:
torch.float32
tensor([[-3.0000e-04, -9.0000e-04, 4.0000e-04, 0.0000e+00],
[ 1.0000e-03, -2.0000e-04, 3.0000e-04, 0.0000e+00],
[ 2.0000e-04, -5.0000e-04, -9.0000e-04, 0.0000e+00],
[ 5.4844e+00, -2.4286e+01, 1.1750e+02, 1.0000e+00]])
When printing each value separately:
-0.0003000000142492354
-0.0008999999845400453
0.00039999998989515007
0.0
0.0010000000474974513
-0.00019999999494757503
0.0003000000142492354
0.0
0.00019999999494757503
-0.0005000000237487257
-0.0008999999845400453
0.0
5.484399795532227
-24.28619956970215
117.5000991821289
1.0
As you can see, we get the same values when printing each value separately.
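The same check can be done programmatically instead of by eye; a minimal sketch using a small subset of the values above (which all happen to fit exactly into float32, so equality holds bit-for-bit):

```python
import torch

test_list = [[-0.0003000000142492354, 0.00039999998989515007],
             [5.484399795532227, 117.5000991821289]]

t = torch.FloatTensor(test_list)

# Every stored element equals the original list entry exactly,
# because these particular values are representable in float32.
for row, t_row in zip(test_list, t):
    for x, tx in zip(row, t_row):
        assert tx.item() == x
```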
But you can lose some information if you choose the wrong tensor type, for example HalfTensor instead of FloatTensor. Here is an example:
Code:
pt_tensor_from_list = torch.HalfTensor(test_list)
print('When printing HalfTensor:')
print(pt_tensor_from_list.dtype, pt_tensor_from_list, sep='\n')

print('When printing each value separately:')
for i in pt_tensor_from_list:
    for j in i:
        print(j.item())
Output:
When printing HalfTensor:
torch.float16
tensor([[-2.9993e-04, -8.9979e-04, 4.0007e-04, 0.0000e+00],
[ 1.0004e-03, -2.0003e-04, 2.9993e-04, 0.0000e+00],
[ 2.0003e-04, -5.0020e-04, -8.9979e-04, 0.0000e+00],
[ 5.4844e+00, -2.4281e+01, 1.1750e+02, 1.0000e+00]],
dtype=torch.float16)
When printing each value separately:
-0.0002999305725097656
-0.0008997917175292969
0.0004000663757324219
0.0
0.0010004043579101562
-0.00020003318786621094
0.0002999305725097656
0.0
0.00020003318786621094
-0.0005002021789550781
-0.0008997917175292969
0.0
5.484375
-24.28125
117.5
1.0
You will now notice that the values are (slightly) different. Visit the pytorch tensor docs to learn more about the different types of torch.tensor.
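A related caveat worth noting: Python floats are 64-bit, so a list value that is not exactly representable in 32 bits will itself be rounded when you build a FloatTensor. If you need the full double precision of your list, you can request torch.float64 explicitly. A minimal sketch with a made-up value chosen to not fit into float32:

```python
import torch

x = 0.1234567890123456  # a Python float that is NOT exactly representable in float32

t32 = torch.tensor([x])                       # default dtype: float32
t64 = torch.tensor([x], dtype=torch.float64)  # keeps the full Python float

print(t32.item())  # rounded to the nearest float32
print(t64.item())  # exactly x
```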