Spycsh
1
Hi, I’ve run into the following graph compilation error when doing a torch.repeat. Could you give me some hints on why it happens?
import torch
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.gpu_migration
t=torch.zeros([2,4,16,64,64])
# Correct!!
t.to('cpu').unsqueeze(1).unsqueeze(1).repeat(1, 2, 1, 1, 1, 1, 1)
# Graph compile failed!!
t.to('hpu').unsqueeze(1).unsqueeze(1).repeat(1, 2, 1, 1, 1, 1, 1)
Sayantan_S
2
There is a limitation in Synapse on the number of allowed tensor dimensions (which I believe is 5).
So even this will fail (6 dims):
t.to('hpu').unsqueeze(1).repeat(1, 2, 1, 1, 1, 1)
but this works (5 dims):
t.to('hpu').repeat(2, 4, 1, 1, 1)
Spycsh
3
Thanks @Sayantan_S , will this limitation be solved in the future?
Sayantan_S
4
I think this limitation has been around for a long time, and I don’t know of any plans to remove it.
Usually, for cases with >5D tensors, we can rewrite the PyTorch code a bit to keep everything at 5D or fewer as a workaround.
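A sketch of that kind of rewrite, applied to the 7D repeat from the first post (verified here on CPU; the dimension folding shown is one possible choice, not an official recipe):

```python
import torch

t = torch.zeros([2, 4, 16, 64, 64])

# Original >5D formulation (a 7D repeat), which fails to compile on HPU:
ref = t.unsqueeze(1).unsqueeze(1).repeat(1, 2, 1, 1, 1, 1, 1)

# Workaround: fold the trailing 64x64 dims into one so the repeat itself
# runs on a 5D tensor, then reshape back to the intended 7D result.
out = (
    t.reshape(2, 1, 4, 16, 64 * 64)    # 5D view with a singleton dim to repeat over
     .repeat(1, 2, 1, 1, 1)            # 5D repeat -> shape (2, 2, 4, 16, 4096)
     .reshape(2, 2, 1, 4, 16, 64, 64)  # restore the 7D shape
)

assert out.shape == ref.shape
assert torch.equal(out, ref)
```

Because the repeated dim is a singleton and the folded dims are contiguous and trailing, the reshaped result is element-for-element identical to the 7D version, so moving `.to('hpu')` into this pipeline keeps every intermediate at 5 dims.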