Does pytorch broadcast consume less memory than expand?
Do PyTorch operations that rely on broadcasting consume less memory than ones that call expand? For example, is there any difference in memory usage between the following two programs?
import torch

# Program 1: rely on implicit broadcasting
x = torch.randn(20, 1)
y = torch.randn(1, 20)
z = x * y

import torch

# Program 2: expand both tensors explicitly before multiplying
x = torch.randn(20, 1).expand(-1, 20)
y = torch.randn(1, 20).expand(20, -1)
z = x * y
According to the documentation page for torch.Tensor.expand:
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor
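The same is true of implicit broadcasting: internally, the multiplication expands both operands to views with stride 0 along the broadcast dimension, so neither program copies its inputs; only the result z is newly allocated. A minimal sketch confirming that expand returns a view sharing the original storage:

import torch

x = torch.randn(20, 1)
x_exp = x.expand(-1, 20)

# Same underlying storage: expand returns a view, not a copy
print(x.data_ptr() == x_exp.data_ptr())  # True

# The broadcast dimension gets stride 0, so no element is duplicated in memory
print(x_exp.stride())  # (1, 0)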
You can experiment with this yourself by profiling the calls (here in Colab):
>>> x = torch.randn(200,1)
>>> y = torch.randn(1,200)
>>> %memit z = x*y
peak memory: 286.85 MiB, increment: 0.31 MiB
>>> x = torch.randn(200,1).expand(-1,200)
>>> y = torch.randn(1,200).expand(200,-1)
>>> %memit z = x*y
peak memory: 286.86 MiB, increment: 0.00 MiB

The peak is essentially identical in both cases. The small increment in the first run mostly reflects the freshly allocated result tensor z (plus allocator overhead); the second run can reuse that memory, so its increment shows as zero. Neither version copies the expanded inputs.
%memit is a magic command provided by memory_profiler:
pip install memory_profiler
%load_ext memory_profiler
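If you prefer to stay inside PyTorch, you can also inspect the storage directly. A short sketch, assuming a recent PyTorch where Tensor.untyped_storage() is available (older versions expose storage() instead):

import torch

x = torch.randn(200, 1).expand(-1, 200)
print(x.shape)                       # torch.Size([200, 200])
# Still backed by the original 200-element buffer: 200 floats * 4 bytes
print(x.untyped_storage().nbytes())  # 800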