Device acquire failed: "RuntimeError: synStatus=8 [Device not found] Device acquire failed."

  1. I use a Docker image on a VM.
  2. I allocated two nodes, each with 1 HPU, and created a Ray cluster from these 2 nodes with the following commands:
ray start --head                   # on one node
ray start --address=x.x.x.x:port   # on the other node
  3. I want to fine-tune Llama-2-7b with this Ray cluster in DDP mode on those 2 nodes.
    After loading the model from the pretrained model path, I tried:
model =, device="hpu")

but the code failed at runtime:

  File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/", line 173, in wrapped_to
    result = self.original_to(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 1163, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 810, in _apply
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 810, in _apply
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 810, in _apply
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 833, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/", line 1161, in convert
    return, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/core/", line 53, in __torch_function__
    return super().__torch_function__(func, types, new_args, kwargs)
RuntimeError: synStatus=8 [Device not found] Device acquire failed.

Does anyone know how to fix this problem?

Can you run the code without Ray?

That is, are you able to run model =, device="hpu") in a non-distributed, single-node setting?

Yes, there is no error when fine-tuning on 1 or multiple HPUs with a single-node Ray cluster.
The failure only occurs when fine-tuning on a Ray cluster with 2 or more nodes.
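Before digging into the cluster setup itself, one quick diagnostic (a sketch, assuming the Habana integration registers accelerators under Ray's custom resource name "HPU") is to check whether every alive node in ray.nodes() actually reports an HPU resource. If the second node joined the cluster without device visibility, any task scheduled there will fail to acquire a device exactly like the traceback above.

```python
def nodes_missing_hpu(nodes):
    """Return the addresses of alive nodes that report no "HPU" resource.

    `nodes` is the list of dicts returned by ray.nodes(). The "HPU"
    resource name is an assumption about how the Habana integration
    registers devices with Ray; adjust it if your cluster uses a
    different name.
    """
    return [
        n["NodeManagerAddress"]
        for n in nodes
        if n["Alive"] and n.get("Resources", {}).get("HPU", 0) == 0
    ]

if __name__ == "__main__":
    import ray
    ray.init(address="auto")  # attach to the already-running cluster
    missing = nodes_missing_hpu(ray.nodes())
    if missing:
        print("Nodes without a visible HPU:", missing)
    else:
        print("All alive nodes report at least one HPU.")
```

If a node shows up in the missing list, the problem is device visibility inside that node's container rather than the training code.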

Hi, can you please verify the following:

  1. Whether there is network connectivity between the 2 Docker containers on the 2 nodes
  2. Whether passwordless communication between the containers works
  3. Whether the dataset is accessible from both containers at the same path
  4. Whether all Gaudi network interfaces are up on both nodes: /opt/habanalabs/qual/gaudi2/bin/ --status
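The first three checks above can be scripted; here is a minimal sketch (the peer address and dataset path are placeholders you would substitute, and it assumes ping and ssh are available inside the container). The fourth check still uses the interface-status tool shown above.

```python
import os
import subprocess

def container_reachable(host: str) -> bool:
    """Check 1: basic network connectivity to the peer node/container."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host], capture_output=True
    )
    return result.returncode == 0

def passwordless_ssh(host: str) -> bool:
    """Check 2: BatchMode forbids password prompts, so this command
    only succeeds if key-based (passwordless) SSH is configured."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, "true"], capture_output=True
    )
    return result.returncode == 0

def dataset_at_same_path(path: str) -> bool:
    """Check 3: run this on both nodes with the identical path."""
    return os.path.exists(path)

if __name__ == "__main__":
    peer = "x.x.x.x"           # placeholder: the other node's address
    data = "/data/my_dataset"  # placeholder: your dataset path
    print("connectivity:     ", container_reachable(peer))
    print("passwordless ssh: ", passwordless_ssh(peer))
    print("dataset present:  ", dataset_at_same_path(data))
```

Run it from inside the container on each node; all three should print True before a multi-node Ray job is expected to work.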