Graph compile failed error when running from Habana Model-References repo

Environment: AWS DL1, Ubuntu 22.04 (bare metal driver install), Python 3.10.12, SynapseAI 1.12.1

Running in habanalabs-venv on the host OS (no container)

Followed instructions from

$ python3 scripts/ --prompt "a professional photograph of an astronaut riding a horse" --ckpt v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 --n_samples 1 --n_iter 3 --use_hpu_graph

Seed set to 42
Loading model from v2-1_768-ema-pruned.ckpt
Global Step: 110000
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
============================= HABANA PT BRIDGE CONFIGURATION ===========================
PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
---------------------------: System Configuration :---------------------------
Num CPU Cores : 96
CPU RAM : 784282744 KB
Data shape for DDIM sampling is (1, 4, 96, 96), eta 0.0
Compiling HPU graph encode_with_transformer
Traceback (most recent call last):
File "/home/ubuntu/habanalabs-venv/Model-References/PyTorch/generative_models/stable-diffusion-v-2-1/scripts/", line 360, in
File "/home/ubuntu/habanalabs-venv/Model-References/PyTorch/generative_models/stable-diffusion-v-2-1/scripts/", line 300, in main
c_in =, tokens)
File "/home/ubuntu/habanalabs-venv/Model-References/PyTorch/generative_models/stable-diffusion-v-2-1/scripts/", line 222, in run
File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/habana_frameworks/torch/hpu/", line 34, in capture_begin
_hpu_C.capture_begin(self.hpu_graph, dry_run)
RuntimeError: Graph compile failed. synStatus=synStatus 26 [Generic failure].

Issue also logged on Github:

Could you please let me know which stable-diffusion model you are using? There are 3 here:

Probably one of stable-diffusion-v-2-1 or stable-diffusion-finetuning?

Given your command line, I assume it is this one: ?

I am able to run this on Gaudi2 with the 1.13-463 docker (1.13.0 branch of Model-References) and 1.13 firmware (as shown by hl-smi).

On the 1.12.1 docker, if I check out the 1.12.1 branch of Model-References, I can run it as well.

However, if I run with Model-References on branch 1.12.1 while the docker and firmware are on 1.13, it errors out. Can you please confirm that your Model-References branch, firmware, and docker are all on the same version?
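For reference, a quick way to sanity-check that the three components are on the same release train. The query commands in the comments are assumptions about your setup (exact hl-smi flags and package names can differ by release), so the sketch below uses stand-in values; substitute the real outputs from your machine:

```shell
#!/bin/sh
# Hypothetical version-alignment check. Replace the stand-in values with
# real queries from your environment, for example:
#   driver=$(hl-smi | grep -i 'driver version')          # firmware/driver
#   synapse=$(pip show habana-torch-plugin | awk '/^Version/{print $2}')
#   branch=$(git -C Model-References rev-parse --abbrev-ref HEAD)
driver="1.12.1"; synapse="1.12.1"; branch="1.12.1"   # stand-in values

# Compare only the major.minor release train (e.g. 1.12 vs 1.13).
major_minor() { printf '%s' "$1" | cut -d. -f1,2; }

if [ "$(major_minor "$driver")" = "$(major_minor "$synapse")" ] \
   && [ "$(major_minor "$synapse")" = "$(major_minor "$branch")" ]; then
    echo "versions aligned: $driver"
else
    echo "version mismatch: driver=$driver synapse=$synapse branch=$branch"
fi
```

With all three on 1.12.1 this prints "versions aligned"; a 1.12.1 branch against a 1.13 docker/firmware would flag the mismatch that reproduces the error above.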