Compare commits


4 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Xingkai Yu | 88d6547df2 | Merge pull request #816 from KPCOFGS/main (Update README.md) | 2025-04-08 17:27:09 +08:00 |
| Xingkai Yu | 741b06ebca | Merge pull request #720 from xiaokongkong/main (modify the explanation of MLA) | 2025-04-08 17:20:37 +08:00 |
| Shixian Sheng | a5d2ad229e | Update README.md | 2025-03-26 08:58:35 -04:00 |
| huxuedan | d29a967601 | modify the explanation of MLA | 2025-02-26 17:07:39 +08:00 |
2 changed files with 3 additions and 3 deletions

README.md

@@ -321,7 +321,7 @@ For comprehensive step-by-step instructions on running DeepSeek-V3 with LMDeploy
 ### 6.4 Inference with TRT-LLM (recommended)
-[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/deepseek/examples/deepseek_v3.
+[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) now supports the DeepSeek-V3 model, offering precision options such as BF16 and INT4/INT8 weight-only. Support for FP8 is currently in progress and will be released soon. You can access the custom branch of TRTLLM specifically for DeepSeek-V3 support through the following link to experience the new features directly: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/deepseek_v3.
 ### 6.5 Inference with vLLM (recommended)
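For context on the README section this hunk touches: the snippet below is a minimal, hypothetical sketch of serving DeepSeek-V3 through TensorRT-LLM's high-level Python `LLM` API. It is not taken from the README or the linked branch; the model identifier, sampling settings, and whether a given TRT-LLM build supports this model out of the box are all assumptions.

```python
# Hypothetical sketch: generating text with TRT-LLM's high-level LLM API.
# The model id and parameters are illustrative; a DeepSeek-V3-capable
# TensorRT-LLM build (e.g. from the branch linked above) is assumed.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-V3")  # weights fetched/converted by TRT-LLM
params = SamplingParams(max_tokens=64, temperature=0.7)

for output in llm.generate(["Explain Multi-Head Latent Attention."], params):
    print(output.outputs[0].text)
```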

inference/model.py

@@ -392,7 +392,7 @@ def apply_rotary_emb(x: torch.Tensor, freqs_cis: torch.Tensor) -> torch.Tensor:
 class MLA(nn.Module):
     """
-    Multi-Headed Attention Layer (MLA).
+    Multi-Head Latent Attention (MLA) Layer.
 
     Attributes:
         dim (int): Dimensionality of the input features.
@@ -442,7 +442,7 @@ class MLA(nn.Module):
     def forward(self, x: torch.Tensor, start_pos: int, freqs_cis: torch.Tensor, mask: Optional[torch.Tensor]):
         """
-        Forward pass for the Multi-Headed Attention Layer (MLA).
+        Forward pass for the Multi-Head Latent Attention (MLA) Layer.
 
         Args:
             x (torch.Tensor): Input tensor of shape (batch_size, seq_len, dim).
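The docstring fix above matters because MLA is not ordinary multi-head attention: it compresses keys and values into a small shared latent vector, and it is this latent (not full per-head keys/values) that a KV cache would store. The sketch below illustrates only that low-rank idea; it is not the repository's `MLA` implementation (no RoPE, no KV cache, no decoupled query path, and the dimensions are hypothetical).

```python
# Minimal sketch of the latent-attention idea, under stated assumptions:
# keys/values are reconstructed from a compressed latent instead of being
# projected at full rank. Simplified relative to DeepSeek-V3's MLA.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleLatentAttention(nn.Module):
    def __init__(self, dim: int = 512, n_heads: int = 8, kv_latent_dim: int = 64):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, dim)                 # full-rank queries
        self.kv_down = nn.Linear(dim, kv_latent_dim)  # compress to latent
        self.k_up = nn.Linear(kv_latent_dim, dim)     # reconstruct keys
        self.v_up = nn.Linear(kv_latent_dim, dim)     # reconstruct values
        self.wo = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        latent = self.kv_down(x)  # (b, t, kv_latent_dim): what a KV cache would hold
        q = self.wq(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_up(latent).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.v_up(latent).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.wo(out.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 512)
print(SimpleLatentAttention()(x).shape)  # torch.Size([2, 16, 512])
```

The memory saving comes from caching `latent` (kv_latent_dim values per token) rather than full keys and values (2 × dim values per token), which is why the renamed docstring term "latent" is the accurate one.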