
Commit 375d689

Hanxian97 authored and facebook-github-bot committed
support skip atten in export (#16104)
Summary: Support export for llama model variants with attention-layer skipping. The attention skip pattern only needs to be specified in config.json via layer_types, e.g. "layer_types": ["full_attention", "full_attention", "full_attention", "skip_attention", "skip_attention", "skip_attention"]. Differential Revision: D88399533
1 parent 4014597 commit 375d689
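
As the summary notes, enabling attention skipping for export only requires listing the per-layer pattern under `layer_types` in the model's config.json. A minimal illustrative fragment (only the `layer_types` key is shown; the six-layer pattern is taken from the summary above, not from a real checkpoint):

```json
{
  "layer_types": [
    "full_attention",
    "full_attention",
    "full_attention",
    "skip_attention",
    "skip_attention",
    "skip_attention"
  ]
}
```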

File tree

1 file changed: +4 -0 lines changed


examples/models/llama/llama_transformer.py

Lines changed: 4 additions & 0 deletions
@@ -272,6 +272,10 @@ def construct_transformer(model_args: ModelArgs) -> Transformer:
                     norm_eps=model_args.norm_eps,
                 )
             )
+        elif model_args.layer_types and model_args.layer_types[layer_id] == "skip_attention":
+            attention = AttentionSkip()
+            transformer_block = TransformerBlock(model_args, attention)
+            layers.append(transformer_block)
         else:
             attention = cls(
                 model_args, layer_id, rope, **model_args.attention_kwargs
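
The new branch references `AttentionSkip`, whose definition is not part of this diff. A minimal sketch of what such a module could look like, assuming it simply passes the hidden states through so the rest of `TransformerBlock` (norms and feed-forward) still runs; this is an illustration under that assumption, not the actual implementation:

```python
import torch


class AttentionSkip(torch.nn.Module):
    """Hypothetical no-op attention for layers marked "skip_attention".

    Assumption: the surrounding TransformerBlock can consume the input
    hidden states directly in place of an attention output.
    """

    def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor:
        # No attention is computed; the hidden states pass through unchanged.
        return x
```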
