Associate II
March 23, 2025
Solved

X-CUBE-AI failed to convert my model and reported the following error.

  • March 23, 2025
  • 1 reply
  • 545 views

I have provided my ONNX model, which has been tested.

INTERNAL ERROR: H not found in shape with shape map (BATCH, CH, W)


1 reply

Julian E.
Technical Moderator
March 24, 2025

Hello @Nephalem,

 

Something is probably going wrong during the conversion of the model because of the original shape of your input.

I'll take a look and update you once I know more.

 

Have a good day,

Julian

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
Julian E.
Julian E.Best answer
Technical Moderator
March 26, 2025

Hello @Nephalem,

 

Your issue comes from the Einsum layer.

In your case, the operation that does not work is einsum("bhqk,bkhd->bqhd").

 

[Image: JulianE_0-1742984898707.png]

 

It seems that we don't fully support Einsum currently, so you need to replace the einsum layers (your model contains several) with simple matrix operations instead. For example:

 

Equivalent Operations:
Instead of einsum("bhqk,bkhd->bqhd", A, B), use:

import torch

# Example dimensions (placeholders; use your model's actual sizes)
batch, heads, query, key, dim = 2, 4, 8, 8, 16

# Example tensors
A = torch.randn(batch, heads, query, key)  # (b, h, q, k)
B = torch.randn(batch, key, heads, dim)    # (b, k, h, d)

# Transpose B to (b, h, k, d) so that k aligns for matmul
B_transposed = B.permute(0, 2, 1, 3)  # (b, h, k, d)

# Batched matrix multiplication: (b, h, q, k) @ (b, h, k, d) -> (b, h, q, d)
result = torch.matmul(A, B_transposed)  # (b, h, q, d)

# Swap axes to match the expected output shape (b, q, h, d)
result = result.permute(0, 2, 1, 3)  # (b, q, h, d)
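To convince yourself the matmul/permute rewrite is numerically identical to the original einsum before re-exporting the model, you can compare the two directly (the dimensions below are placeholders):

```python
import torch

# Placeholder dimensions for the check
batch, heads, query, key, dim = 2, 4, 8, 8, 16

A = torch.randn(batch, heads, query, key)  # (b, h, q, k)
B = torch.randn(batch, key, heads, dim)    # (b, k, h, d)

# Reference result from einsum
expected = torch.einsum("bhqk,bkhd->bqhd", A, B)

# matmul-based replacement: align k, multiply, then reorder to (b, q, h, d)
replacement = torch.matmul(A, B.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)

# The two should agree up to floating-point tolerance
print(torch.allclose(expected, replacement, atol=1e-5))  # True
```

Once the rewritten module passes this check, exporting it to ONNX should produce MatMul and Transpose nodes instead of Einsum.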

Have a good day,

Julian
