Tag: attention
49 topics
- Multimodal Secure Alignment: Multimodal secure alignment is the problem of making a model's safety behavior consistent across text, images, audio, and mixed-modal inputs. It matters because a model can reconstruct harmful intent across modalities or through images that evade text-only filters, so defenses must align the fused system rather than just one input channel.
- Key-Value Memory Networks: Key-Value Memory Networks store each memory slot as a key for retrieval and a separate value for the returned content. This decouples matching from payload and is a direct conceptual precursor to modern query-key-value attention.
- Luong Attention (Global and Local): Luong attention is a sequence-to-sequence attention mechanism that scores decoder states against encoder states using multiplicative forms such as dot or bilinear attention. It distinguishes global attention over all source positions from local attention over a predicted window, helping make neural machine translation more scalable.
- Neural Turing Machine (NTM): A Neural Turing Machine augments a neural controller with a differentiable external memory that it can read from and write to using soft attention over memory locations. It was an early attempt to learn algorithm-like behavior such as copying and sorting while remaining trainable end to end.
- PagedAttention: PagedAttention stores the KV cache in fixed-size non-contiguous blocks, like virtual-memory pages, instead of requiring one contiguous allocation per sequence. This largely removes fragmentation, enables prompt-prefix sharing, and is a key reason vLLM can serve many more concurrent requests.
- Key-Value (KV) Caching: Key-value caching stores the attention keys and values from earlier tokens during autoregressive decoding so they do not need to be recomputed at every step. It speeds up generation dramatically, but the cache grows with sequence length and turns inference into a memory-management problem. See the decoding sketch after this list.
- Rotary Positional Embedding (RoPE): Rotary Positional Embedding encodes position by rotating query and key vectors with token-index-dependent angles before attention is computed. Because the resulting dot products depend only on relative offsets, RoPE gives Transformers a simple and widely used way to represent order. See the rotation sketch after this list.
- Causal (Masked) Self-Attention: Causal masked self-attention is self-attention with a mask that prevents each position from attending to future tokens. Applying the mask before softmax enforces autoregressive order, so the model can predict the next token without seeing the answer in advance.
- Self-Attention: Self-attention lets each token compute a weighted combination of representations from other tokens in the same sequence, with weights determined by query-key similarity. It is the mechanism that gives Transformers flexible, content-dependent context mixing without recurrence.
- Attention Score: An attention score is the compatibility value computed between a query and a key before normalization, often by dot product or a learned variant. Higher scores mean the corresponding token or memory slot should receive more weight after the softmax.
- What is a scaled attention score? A scaled attention score is a query-key dot product divided by \( \sqrt{d_k} \) before softmax. The scaling keeps the variance of the logits from growing with the key dimension, which helps prevent softmax saturation and keeps gradients well behaved. See the masked-attention sketch after this list.
- Masked Attention Score: A masked attention score is an attention logit after adding a mask that blocks forbidden positions, typically by adding a very large negative value before softmax. This forces the resulting attention weight to be effectively zero at those positions.
- Attention Weights: Attention weights are the normalized coefficients, usually produced by a softmax over attention scores, that determine how much each value vector contributes to the output. They form a distribution over positions or memory entries for each query.
- Attention Mask: An attention mask is a tensor that tells an attention layer which positions may interact and which must be blocked. It is used for causal generation, padding suppression, and task-specific visibility patterns, and it must be applied before softmax, not after.
- Causal Mask: A causal mask blocks attention to future positions by masking entries above the sequence diagonal. It enforces left-to-right autoregressive prediction, ensuring that token \( t \) can depend only on tokens \( \le t \).
- Multi-Head Attention: Multi-head attention runs several attention mechanisms in parallel on different learned projections of the same input, then concatenates their outputs. This lets the model capture multiple relational patterns at once instead of forcing all interactions through a single attention map.
- Attention Head: An attention head is one parallel query-key-value attention computation inside multi-head attention. Different heads can specialize to different patterns, such as local syntax, long-range dependencies, or induction-like copying behavior.
- Grouped-Query Attention (GQA): Grouped-query attention shares key and value heads across groups of query heads, reducing KV-cache size and bandwidth during inference. It sits between full multi-head attention and multi-query attention, preserving most quality while making long-context serving cheaper. See the head-sharing sketch after this list.
- Query, Key, Value (QKV): Query, key, and value are the three learned projections used by attention: the query asks what to look for, the key says what each position offers, and the value is the content returned if that position is attended to. Attention weights come from query-key similarity, but outputs are weighted sums of values.
- Switch Transformer: Switch Transformer is a simplified MoE Transformer that routes each token to exactly one expert in each sparse feed-forward layer. Top-1 routing reduces communication and implementation complexity, enabling very large sparse models, but makes router stability and load balancing especially important.
- Cross-Attention: Cross-attention lets one sequence or modality attend to representations produced by another sequence or modality. In encoder-decoder models the decoder queries encoder states, and in multimodal models text tokens often query visual features the same way.
- FlashAttention: FlashAttention is an exact attention algorithm that uses tiling and kernel fusion to minimize reads and writes between GPU HBM and on-chip SRAM. It preserves standard attention outputs while greatly reducing memory traffic, which yields large speed and memory gains on long sequences.
- Context Parallelism: Context parallelism distributes a long sequence across devices so context tokens and their attention-related work are sharded instead of fully replicated. It helps long-context training or inference scale beyond one device, but requires extra communication to preserve exact attention across chunks.
- Attention Mechanism: Attention computes a context-dependent weighted combination of values, where the weights come from similarities between queries and keys. It lets a model focus on the most relevant parts of an input instead of compressing everything into one fixed vector.
- Attention Visualization: Attention visualization renders attention weights as heatmaps or token-to-token graphs so we can see which positions a model attends to. It is a useful diagnostic tool, but attention weights alone are not a complete explanation of what the model is computing.
- Linear Attention: Linear attention is the family of attention mechanisms that rewrite or approximate softmax attention so sequence processing scales roughly linearly instead of quadratically with length. The benefit is efficiency on long contexts, but the tradeoff is that exact softmax behavior is usually lost. See the kernel-trick sketch after this list.
- ALiBi (Attention with Linear Biases): ALiBi is a positional method that adds head-specific linear distance penalties directly to attention logits instead of injecting separate position embeddings. Because the bias is built into the score function, models trained with ALiBi often extrapolate to longer contexts better than models tied to a fixed embedding table. See the bias sketch after this list.
- YaRN / NTK-aware RoPE Scaling: YaRN and other NTK-aware RoPE-scaling methods extend the usable context of RoPE-based models by rescaling or interpolating rotary frequencies rather than retraining the model from scratch. Their goal is to preserve short-context behavior while making long-range positions less distorted.
- Sliding-Window Attention: Sliding-window attention restricts each token to attending only within a local context window rather than over the entire sequence. This reduces the compute and memory of full-context attention and is effective when most useful dependencies are nearby, though it can miss long-range interactions unless combined with global mechanisms. See the window-mask sketch after this list.
- Multi-head Latent Attention (MLA): Multi-head Latent Attention compresses the keys and values of multi-head attention into a smaller latent representation before use. Its main advantage is a much smaller KV cache and lower decode-time memory bandwidth, which is why it is attractive for long-context serving.
- Attention Sinks / StreamingLLM: Attention sinks are the first few tokens in a causal Transformer that absorb disproportionate attention from later positions, even when they carry little semantic content. StreamingLLM exploits this by keeping sink tokens and a short recent window in the KV cache, enabling long streaming inference with bounded memory.
- Induction Heads: A two-head attention circuit that copies the token that followed a previous occurrence of the current token, a hypothesized computational basis for in-context learning. Anthropic showed that induction heads form suddenly during training, coinciding with a sharp jump in in-context learning ability.
- Transformer-XL / Segment-Level Recurrence: Dai et al. (2019) extend Transformers beyond fixed context by caching hidden states of the previous segment and allowing attention to read from them, a simple "segment-level recurrence" that gives an effective receptive field of \( N \cdot L \) for \( L \) layers and segment length \( N \). Paired with relative positional encoding, it was a key bridge between pure attention and long-context models.
- Longformer / BigBird (Sparse Long-Context Attention): Fixed sparsity patterns that reduce attention from \( O(n^2) \) to \( O(n) \) for long documents. Longformer combines sliding-window + global attention; BigBird adds random attention and proves the result retains full-attention universal-approximation properties. Both were pre-2022 answers to scaling Transformers to 4k–16k tokens.
- RetNet / Retention Networks: Sun et al. (2023) introduce a Transformer-alternative block whose retention operator admits three equivalent forms: parallel (for training), recurrent (for \( O(1) \) inference per token), and chunkwise-recurrent (for long-sequence training). RetNet aims for RNN-like inference cost with Transformer-like parallelisable training.
- RWKV: An RNN-Transformer hybrid (Peng et al., 2023) whose block is a parallelisable linear-attention operation at training time and a simple recurrent state update at inference time. RWKV scales to 14B+ parameters with Transformer-competitive perplexity, offering constant-memory inference.
- Hyena / Long Convolutions: Poli et al. (2023) propose replacing attention with a data-controlled long-range convolution: a filter parameterised implicitly by an MLP-of-positions, applied via FFT for \( O(n \log n) \) cost. Hyena approaches Transformer quality on pretraining perplexity at a fraction of the compute.
- xFormers / Memory-Efficient Attention: A library / pattern of attention implementations that avoid materialising the \( n \times n \) attention matrix, reducing memory from \( O(n^2) \) to \( O(n) \). xFormers bundles FlashAttention, Memory-Efficient Attention (Rabe & Staats), block-sparse variants, and ALiBi/RoPE patches under a unified API, a precursor to the default attention kernels shipped in PyTorch 2.0+.
- KV Cache Compression (H2O, SnapKV): Inference-time methods that shrink a long-context KV cache by evicting tokens that contribute little to future attention. H2O (Zhang et al., 2023) evicts by cumulative attention score; SnapKV (Li et al., 2024) observes that recent queries already reveal which past tokens matter, enabling one-shot pre-fill-time compression.
- Paged vs Block KV Cache: Two allocation strategies for an LLM's growing KV cache. Block (contiguous) allocation pre-reserves the worst-case length per request and wastes memory. Paged allocation (PagedAttention, vLLM 2023) allocates fixed-size pages on demand and chains them like OS virtual memory, yielding 2–4× larger batch sizes at the cost of kernel-level bookkeeping.
- Ring Attention / Context Parallel for Long Sequences: A distributed-attention algorithm that shards an \( n \)-token sequence across \( P \) devices and computes each attention output via a ring of key-value rotations. Ring Attention (Liu et al., 2023) enables context lengths of millions of tokens on multi-GPU clusters with near-linear scaling.
- Graph Attention Network (GAT): GAT (Veličković et al., 2018) replaces GCN's fixed degree-normalised aggregation with attention: each node learns per-edge weights via a shared attention mechanism. This gives inductive generalisation (no dependence on the full graph's degree matrix), handles heterogeneous neighbourhoods, and approaches Transformer-style flexibility, though at higher computational cost than GCN.
- Memory-Augmented Transformers: Transformer variants that extend effective context length with an external memory: Recurrent Memory Transformers (RMT) pass summary tokens across chunks, Memorizing Transformers retrieve past kNN keys, and Infini-attention compresses the tail of the context into a linear-attention state. A bridge between fixed-context Transformers and sequence models with unbounded memory.
- Set Transformer & Deep Sets (Permutation Invariance): Deep Sets shows that any permutation-invariant function on sets can be written as \( \rho(\sum_i \phi(x_i)) \) for learnable \( \phi, \rho \). Set Transformer replaces the sum with self-attention via Induced Set Attention Blocks, giving element-wise interactions while remaining permutation-equivariant.
- FlashAttention-2 and FlashAttention-3: FlashAttention-2 and FlashAttention-3 are follow-on attention kernels that keep exact attention outputs while running much faster through better tiling, parallelism, and data movement. FA-2 improves work partitioning on modern GPUs, while FA-3 adds Hopper-specific asynchronous pipelines and low-precision support.
- Quantized KV Cache (int4 / int8 / KIVI): Store the KV cache at lower precision (int8 or int4) instead of fp16. This halves or quarters the memory footprint of long contexts at negligible quality cost. Different quantisation per key / value (K usually int8, V int4 via grouping) and per-head asymmetric scales are the main tricks.
- Compressed Sparse Attention (CSA): Compressed Sparse Attention (CSA) is a long-context attention scheme that first compresses the KV cache into block summaries and then performs sparse attention only over the top-k relevant compressed blocks. An added sliding-window branch preserves exact local dependencies, so CSA cuts both KV memory and long-context attention compute without collapsing into a purely local window.
- Bahdanau Attention: Bahdanau attention is the original additive attention mechanism for sequence-to-sequence models, where the decoder scores each encoder state before producing the next token. It solved the fixed-context bottleneck of early seq2seq RNNs by letting the decoder look back over the whole source sequence at every step.
- Seq2Seq with Attention: Seq2seq with attention augments the encoder-decoder architecture so the decoder conditions on a context vector built from all encoder states at each output step. That change made neural machine translation far more effective than fixed-context seq2seq and directly paved the way to modern cross-attention and Transformer models.
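
The sketches below expand on a few of the entries above. First, KV caching: a minimal single-head decoding loop in NumPy, where each step appends one key row and one value row to a cache instead of recomputing them for the whole prefix. The tiny projections and shapes are illustrative assumptions, not any particular library's API.

```python
import numpy as np

d = 8                                   # head dimension (illustrative)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Attention for a single query over all cached keys/values."""
    scores = q @ K.T / np.sqrt(d)       # one score per cached position
    w = np.exp(scores - scores.max())
    w /= w.sum()                        # softmax over cached positions
    return w @ V                        # weighted sum of cached values

K_cache = np.empty((0, d))              # grows by one row per decoded token
V_cache = np.empty((0, d))

x = rng.standard_normal(d)              # embedding of the current token
for step in range(5):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Only this step's key/value are computed; earlier rows are reused from the cache.
    K_cache = np.vstack([K_cache, k])
    V_cache = np.vstack([V_cache, v])
    x = attend(q, K_cache, V_cache)     # stand-in for the rest of the model
print(K_cache.shape)                    # (5, 8): the cache grows with sequence length
```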
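
For Rotary Positional Embedding, the sketch below rotates the two halves of a vector by position-dependent angles (the common "rotate-half" layout with base 10000; exact layouts vary between implementations) and checks that query-key dot products depend only on the relative offset.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate dimension pairs of x by position-dependent angles."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # one frequency per dimension pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    # 2D rotation of each (x1_i, x2_i) pair by its angle
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q_vec = rng.standard_normal(8)                  # same query content at two positions
k_vec = rng.standard_normal(8)                  # same key content at two positions
lhs = rope(q_vec, pos=3) @ rope(k_vec, pos=5)
rhs = rope(q_vec, pos=7) @ rope(k_vec, pos=9)
print(np.allclose(lhs, rhs))                    # True: only the offset (-2) matters
```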
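
For scaled attention scores, masking, and attention weights, here is a minimal single-head causally masked attention: scores are scaled by \( \sqrt{d_k} \), future positions get a large negative value before the softmax, and the resulting weights mix the value vectors. Shapes are illustrative; real implementations batch this over heads and sequences.

```python
import numpy as np

def causal_self_attention(Q, K, V):
    n, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)                 # scaled attention scores (n, n)
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores = np.where(future, -1e9, scores)         # mask future positions BEFORE softmax
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # attention weights: each row sums to 1
    return weights @ V                              # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 16))
out = causal_self_attention(X, X, X)                # toy case: Q = K = V = X
print(out.shape)                                    # (6, 16)
```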
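
For grouped-query attention, the sketch below uses 8 query heads that share 2 key/value heads, so a KV cache would hold 2 heads' worth of keys and values rather than 8. Head counts and dimensions are illustrative assumptions.

```python
import numpy as np

n, d_head = 5, 4
n_q_heads, n_kv_heads = 8, 2
group = n_q_heads // n_kv_heads                      # 4 query heads per KV head

rng = np.random.default_rng(0)
Q = rng.standard_normal((n_q_heads, n, d_head))      # per-query-head queries
K = rng.standard_normal((n_kv_heads, n, d_head))     # only 2 key heads are stored
V = rng.standard_normal((n_kv_heads, n, d_head))     # only 2 value heads are stored

outputs = []
for h in range(n_q_heads):
    kv = h // group                                  # which shared KV head this query head uses
    scores = Q[h] @ K[kv].T / np.sqrt(d_head)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    outputs.append(w @ V[kv])
out = np.concatenate(outputs, axis=-1)               # concatenate heads as in multi-head attention
print(out.shape)                                     # (5, 32), while KV storage covers only 2 heads
```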
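
For linear attention, this sketch uses the positive feature map \( \mathrm{elu}(x) + 1 \) (as in Katharopoulos et al., 2020) only to show the reassociation that avoids the \( n \times n \) matrix; it is a toy non-causal version, not a drop-in replacement for softmax attention.

```python
import numpy as np

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))       # elu(x) + 1, a positive feature map

rng = np.random.default_rng(0)
n, d = 6, 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

qf, kf = phi(Q), phi(K)
# Quadratic form: build the full (n, n) attention-like matrix, then normalize rows.
A = qf @ kf.T
out_quadratic = (A / A.sum(axis=-1, keepdims=True)) @ V

# Linear form: associate the matrix product the other way; no (n, n) matrix appears.
kv = kf.T @ V                                        # (d, d) summary of keys and values
z = kf.sum(axis=0)                                   # (d,) normalizer
out_linear = (qf @ kv) / (qf @ z)[:, None]

print(np.allclose(out_quadratic, out_linear))        # True: same output, different cost
```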
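
For ALiBi, the sketch adds a head-specific linear distance penalty to causal attention logits. The power-of-two slope schedule follows the paper's recipe for 8 heads; the logits themselves are random stand-ins for query-key scores.

```python
import numpy as np

n, n_heads = 6, 8
slopes = 2.0 ** (-np.arange(1, n_heads + 1))          # head-specific slopes: 1/2, 1/4, ..., 1/256

pos = np.arange(n)
distance = pos[None, :] - pos[:, None]                # j - i; negative or zero for past tokens
causal = distance <= 0                                # only attend to positions j <= i

rng = np.random.default_rng(0)
logits = rng.standard_normal((n_heads, n, n))         # stand-in for q.k / sqrt(d_k) scores
# ALiBi bias: 0 at distance 0, increasingly negative the further back a token is.
bias = slopes[:, None, None] * np.where(causal, distance, 0)
logits = np.where(causal, logits + bias, -1e9)        # add the bias, then mask the future
weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(weights[0].round(2))                            # nearer tokens systematically get more weight
```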
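
Finally, for sliding-window attention, the sketch builds the banded causal mask for a window of 3 and applies it exactly like an ordinary causal mask, by setting disallowed logits to \( -\infty \) before the softmax.

```python
import numpy as np

def sliding_window_mask(n, window):
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j <= i) & (j > i - window)        # True where attention is allowed

n, window = 6, 3
allowed = sliding_window_mask(n, window)
print(allowed.astype(int))                    # banded lower-triangular pattern

scores = np.zeros((n, n))                     # stand-in logits
scores = np.where(allowed, scores, -np.inf)   # block everything outside the window
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
print(weights.round(2))                       # each row spreads weight over at most `window` positions
```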