How can I send a Data Loader to the GPU in Google Colab?
I have two data loaders and I am trying to send them to the GPU with .to(device), but it doesn't work.
Here is the code I am using:
# to create a batch iterator
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MyData(Dataset):
    def __init__(self, X, y):
        self.data = X
        self.target = y
        # TODO: convert this into torch code if possible
        self.length = [np.sum(1 - np.equal(x, 0)) for x in X]

    def __getitem__(self, index):
        x = self.data[index]
        y = self.target[index]
        x_len = self.length[index]
        xx_len = torch.tensor(x_len)
        return {"src": x, "trg": y, "x_len": xx_len}

    def __len__(self):
        return len(self.data)

dataset = DataLoader(train_dataset, batch_size=BATCH_SIZE,
                     drop_last=True,
                     shuffle=True)
test_Dataset = DataLoader(val_dataset, batch_size=BATCH_SIZE,
                          drop_last=True,
                          shuffle=True)
I have also tried pin_memory=True, but that doesn't work either.
You don't move the DataLoader itself to the GPU. Instead, take the batch tensors the loader produces and move those tensors to the GPU inside the training loop.
train_dataloader = DataLoader(MyData(X, y), batch_size=BS)
...
def train(nn, optim, train_dataloader, etc...):
    for batch in train_dataloader:
        # move the batch to the GPU inside the loop, one batch at a time
        batch = batch.to('cuda')
        ...
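Note that because your __getitem__ returns a dict, the default collate_fn gives you a dict of stacked tensors for each batch, so .to('cuda') has to be applied to each value rather than to the batch object itself. Below is a minimal sketch under that assumption; the forward pass, loss, and optimizer step are omitted, and pin_memory=True only speeds up the host-to-device copy, it does not put the loader on the GPU:

import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# pin_memory=True makes the CPU->GPU copy faster; the loader stays on the CPU
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE,
                          shuffle=True, drop_last=True, pin_memory=True)

for batch in train_loader:
    # the default collate_fn keeps the dict structure, so move each tensor
    src = batch["src"].to(device, non_blocking=True)
    trg = batch["trg"].to(device, non_blocking=True)
    x_len = batch["x_len"].to(device, non_blocking=True)
    # ... forward pass, loss, and optimizer step using src / trg / x_len ...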