vllm.model_executor.layers.quantization.utils.nvfp4_utils ¶
apply_nvfp4_linear ¶
apply_nvfp4_linear(
backend: NvFp4LinearBackend,
layer: Module,
x: Tensor,
bias: Tensor | None = None,
swizzle: bool | None = None,
) -> Tensor
Apply NVFP4 linear transformation using the specified backend.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
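As background for the functions on this page: NVFP4 stores 4-bit E2M1 values in 16-element blocks, each block carrying its own FP8 (E4M3) scale. A minimal pure-Python sketch of the per-block quantization idea follows; the E2M1 value grid and block size of 16 come from the NVFP4 format, while the function name and rounding details are illustrative only, not vLLM's implementation:

```python
# Hedged sketch: per-block NVFP4-style quantization in pure Python.
# E2M1_VALUES and the 16-element block size follow the NVFP4 format;
# the function name and nearest-value rounding are illustrative only.
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # non-negative E2M1 grid

def quantize_block(block):
    """Quantize one 16-element block to the nearest E2M1 value per element."""
    assert len(block) == 16
    amax = max(abs(v) for v in block)
    scale = amax / 6.0 if amax > 0 else 1.0  # 6.0 is the largest E2M1 magnitude
    q = []
    for v in block:
        sign = -1.0 if v < 0 else 1.0
        target = abs(v) / scale
        nearest = min(E2M1_VALUES, key=lambda g: abs(g - target))
        q.append(sign * nearest)
    return q, scale  # dequantized value is q[i] * scale

vals, s = quantize_block([0.75] * 8 + [-3.0] * 8)
```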
convert_to_nvfp4_linear_kernel_format ¶
convert_to_nvfp4_linear_kernel_format(
backend: NvFp4LinearBackend, layer: Module
) -> None
Convert layer to NVFP4 linear kernel format.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
pad_nvfp4_activation_for_cutlass ¶
Pad packed FP4 activations to match the K-dimension padding applied to weights. The padding is in bytes (tensor dimension), not FP4 elements.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
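The "bytes, not FP4 elements" distinction matters because two FP4 elements pack into one byte, so a packed tensor's K dimension is K_elems / 2 bytes and padding is applied in that byte unit. A minimal sketch of the idea, with illustrative names (not vLLM's API), which pads plain lists rather than torch tensors:

```python
# Hedged sketch of the byte-level padding idea: two FP4 elements pack
# into one byte, so the packed K dimension is counted in bytes
# (K_elems / 2) and padding is applied in that byte unit.
# The function and variable names here are illustrative, not vLLM's API.
def pad_packed_k(row_bytes, target_k_bytes, pad_value=0):
    """Right-pad one row of packed FP4 bytes to target_k_bytes."""
    assert len(row_bytes) <= target_k_bytes
    return row_bytes + [pad_value] * (target_k_bytes - len(row_bytes))

# 60 FP4 elements -> 30 packed bytes; pad to match a weight padded to 32 bytes
row = list(range(30))
padded = pad_packed_k(row, 32)
```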
pad_nvfp4_weight_for_cutlass ¶
Pad packed NVFP4 weights so that both N (rows) and K (columns) satisfy the alignment constraints required by CUTLASS / FlashInfer FP4 kernels.
CUTLASS FP4 kernel requires both K and N matrix dimensions to be divisible by 32 for aligned memory access and efficient tensor core operations.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
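The divisibility-by-32 constraint amounts to rounding each dimension up to the next multiple of 32. A minimal sketch of that arithmetic (names are illustrative, not vLLM's API):

```python
# Hedged sketch of the alignment arithmetic behind the padding;
# function names are illustrative, not vLLM's API.
def round_up(x, multiple=32):
    """Round x up to the nearest multiple (CUTLASS FP4 requires 32)."""
    return ((x + multiple - 1) // multiple) * multiple

def padded_nk(n, k):
    """Padded (N, K) a weight of logical shape (n, k) would be stored as."""
    return round_up(n), round_up(k)
```

For example, a weight of logical shape (100, 60) would be stored padded to (128, 64).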
prepare_weights_for_nvfp4_cutlass ¶
prepare_weights_for_nvfp4_cutlass(
weight: Tensor, weight_scale: Tensor
) -> tuple[Tensor, Tensor, int]
Prepare weights and scales for CUTLASS/FlashInfer-CUTLASS FP4 GEMM. This involves padding weights for alignment (K and N divisible by 32).
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
prepare_weights_for_nvfp4_fbgemm ¶
Prepare weights and scales for FBGEMM FP4 GEMM.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
prepare_weights_for_nvfp4_flashinfer_trtllm ¶
prepare_weights_for_nvfp4_flashinfer_trtllm(
weight: Tensor, weight_scale: Tensor
) -> tuple[Tensor, Tensor]
Prepare weights and scales for FlashInfer TRTLLM FP4 GEMM.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
select_nvfp4_linear_backend ¶
Select the best available NVFP4 GEMM backend based on environment configuration and platform capabilities.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
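Backend selection of this kind typically walks an ordered preference list and returns the first backend the platform supports. The sketch below illustrates that pattern only; the candidate names echo the helpers on this page, but the priority order and the availability flags are assumptions, not vLLM's actual logic:

```python
# Hedged sketch of priority-ordered backend selection. The candidate
# names echo this page's helpers, but the preference order and the
# availability predicates are illustrative assumptions.
def select_backend(available):
    """Return the first available backend from a fixed preference order."""
    preference = ["flashinfer-trtllm", "cutlass", "fbgemm"]
    for name in preference:
        if available.get(name, False):
            return name
    raise RuntimeError("no NVFP4 GEMM backend available on this platform")

choice = select_backend({"flashinfer-trtllm": False, "cutlass": True})
```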
slice_nvfp4_output ¶
Slice the output tensor to remove padding in N dimension if weight was padded.
Source code in vllm/model_executor/layers/quantization/utils/nvfp4_utils.py
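Since a GEMM against an N-padded weight produces extra output columns, the trim is a plain slice back to the original N. A sketch of the idea using nested lists in place of torch tensors (names are illustrative, not vLLM's API):

```python
# Hedged sketch: if the weight's N dimension was padded for alignment,
# the GEMM output has extra columns that must be sliced away. Names
# are illustrative; plain lists stand in for torch tensors here.
def slice_output(out_rows, orig_n):
    """Drop padded columns so each row has the original N width."""
    return [row[:orig_n] for row in out_rows]

out = [[1, 2, 3, 0], [4, 5, 6, 0]]  # padded N = 4, original N = 3
trimmed = slice_output(out, 3)
```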
swizzle_blockscale ¶
Pad and block-interleave the FP4 block-scales so that they match the data layout expected by the CUTLASS / FlashInfer kernels.
Parameters¶
scale: torch.Tensor
    The FP4 block-scale tensor to pad and interleave.
Returns¶
torch.Tensor
    The swizzled tensor with the same logical shape as scale.
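The "pad" half of this operation brings the scale matrix up to whole tiles before interleaving. The sketch below covers only that padding step; the 128x4 tile shape is an assumption about the kernel layout, and the actual interleave order is kernel-specific and not reproduced here:

```python
# Hedged sketch of only the padding step of a block-scale swizzle:
# CUTLASS-style layouts tile the scale matrix in fixed-size chunks
# (128x4 is assumed here). The interleave order itself is
# kernel-specific and not reproduced; names are illustrative.
def padded_scale_shape(m, k, tile_m=128, tile_k=4):
    """Scale-matrix shape after padding each dim up to a full tile."""
    pad = lambda x, t: ((x + t - 1) // t) * t
    return pad(m, tile_m), pad(k, tile_k)

shape = padded_scale_shape(200, 6)
```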