Feat: canvas project boards #9170
Draft
Pfannkuchensack wants to merge 69 commits into
Foundation + TI2V-5B MVP + A14B dual-expert MoE for Wan 2.2 image
generation. Wan was trained on video but is competitive with leading
open-source image models when run at num_frames=1; this commit wires
that path into InvokeAI.
Phase 0 — Foundation:
- BaseModelType.Wan + WanVariantType {T2V_A14B, TI2V_5B}
- SubModelType.Transformer2 for the dual-expert MoE
- MainModelDefaultSettings per variant
- step_callback Wan branch (16-channel preview; 48-channel TI2V-5B
falls back to slicing first 16 channels until proper factors land)
- Frontend enums + node colour
Phase 1 — TI2V-5B Diffusers MVP:
- Main_Diffusers_Wan_Config probe (variant from transformer_2/ +
vae/config.json::z_dim, with filename heuristic fallback)
- WanDiffusersModel loader (subclasses GenericDiffusersLoader)
- WanT5EncoderField, WanTransformerField (with dual-expert slots),
WanConditioningField, WanConditioningInfo
- New invocations: wan_model_loader, wan_text_encoder, wan_denoise,
wan_image_to_latents, wan_latents_to_image
- FlowMatchEulerDiscreteScheduler integration with on-disk config load
- RectifiedFlowInpaintExtension reused for inpaint
- 5D <-> 4D shape juggling: latents stay 4D in InvokeAI's pipeline,
re-add T=1 only inside the transformer call / VAE encode-decode
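The 4D↔5D juggling can be sketched as follows (NumPy stands in for torch here so the sketch is self-contained; function names are illustrative, not the actual helpers):

```python
import numpy as np

def to_5d(latents: np.ndarray) -> np.ndarray:
    # Re-add the singleton time dim only at the boundary:
    # (B, C, H, W) -> (B, C, T=1, H, W)
    return np.expand_dims(latents, axis=2)

def to_4d(latents: np.ndarray) -> np.ndarray:
    # Back to InvokeAI's 4D convention after the transformer / VAE call
    assert latents.shape[2] == 1, "expected a single frame (T=1)"
    return np.squeeze(latents, axis=2)

x = np.zeros((1, 16, 64, 64), dtype=np.float32)
assert to_5d(x).shape == (1, 16, 1, 64, 64)
assert to_4d(to_5d(x)).shape == x.shape
```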
Phase 2 — A14B dual-expert MoE:
- Probe reads boundary_ratio from model_index.json
- Loader emits both transformer (high-noise) and transformer_low_noise
(low-noise expert at transformer_2/) for A14B
- _ExpertSwapper in wan_denoise drives GPU residency between experts:
high-noise for t >= boundary_ratio * num_train_timesteps, low-noise
below. Only one expert locked at a time so the cache can evict the
other - relies on existing CachedModelWithPartialLoad to handle
oversized models on lower-VRAM GPUs.
- guidance_scale_low_noise field for separate low-noise CFG override
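The expert-routing rule above reduces to a single threshold check (a sketch; the boundary_ratio value below is illustrative, the real value comes from model_index.json):

```python
def pick_expert(timestep: float, boundary_ratio: float,
                num_train_timesteps: int = 1000) -> str:
    # High-noise expert covers the early, noisy steps; low-noise the rest.
    boundary = boundary_ratio * num_train_timesteps
    return "high" if timestep >= boundary else "low"

# Illustrative boundary_ratio of 0.9: steps at t >= 900 use the high-noise expert.
assert pick_expert(950, 0.9) == "high"
assert pick_expert(850, 0.9) == "low"
```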
Tests:
- 24 passing tests covering probe variant detection, default settings,
noise sampling, end-to-end denoise on a synthetic transformer (CPU),
dual-expert boundary swap, CFG branch
- 1 heavy-test placeholder gated by INVOKEAI_HEAVY_TESTS=1 for the
real-weights smoke test
Phase 3+ deferred: standalone VAE/encoder configs, GGUF, LoRA,
ControlNet, ref image, inpaint UI, frontend wiring, starter models.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 3 adds standalone VAE and UMT5-XXL encoder configs so users can run
GGUF-quantized Wan transformers (Phase 4) without installing the full
~30 GB Diffusers pipeline.
VAE configs:
- VAE_Checkpoint_Wan_Config + VAE_Diffusers_Wan_Config (16-channel A14B
vs 48-channel TI2V-5B, distinguished by decoder.conv_in z_dim).
- 16-channel files share the AutoencoderKLWan architecture with Qwen
Image; disambiguated via filename heuristic ("wan" in name -> Wan,
otherwise -> Qwen Image). Mirror exclusion in QwenImage's probe.
- VAELoader gets a Wan branch that builds AutoencoderKLWan(z_dim=...)
via init_empty_weights, mirroring the QwenImage single-file pattern.
- Existing standard VAE probe excludes both QwenImage- and Wan-style
state dicts.
UMT5-XXL encoder:
- New ModelType.WanT5Encoder + ModelFormat.WanT5Encoder.
- WanT5Encoder_WanT5Encoder_Config probes the diffusers folder layout
(text_encoder/config.json with model_type=umt5, or flat layout with
config.json at root). Refuses full Wan pipelines.
- WanT5EncoderLoader handles both layouts and loads UMT5EncoderModel +
AutoTokenizer.
Component-source plumbing:
- WanModelLoaderInvocation now exposes wan_t5_encoder_model and
component_source pickers (mirrors QwenImage pattern). Resolution
order: standalone > main (if Diffusers) > component_source. Required
when the main model is a single-file format in Phase 4.
Bug fix in wan_text_encoder:
- Tokenizer was loading via AutoTokenizer.from_pretrained(<root>)
directly, which fails for nested layouts where files live in
<root>/tokenizer/. Now routed through the model cache so the
registered loaders handle layout differences correctly.
Frontend:
- New type guards (isWanVAEModelConfig, isWanT5EncoderModelConfig,
isWanMainModelConfig, isWanDiffusersMainModelConfig) and hooks/
selectors (useWanVAEModels, useWanT5EncoderModels,
useWanDiffusersModels). New zSubModelType / zModelType / zModelFormat
enum entries for transformer_2 and wan_t5_encoder.
Tests:
- 16 new tests covering z_dim detection, VAE checkpoint/diffusers
probes, the bidirectional Qwen-vs-Wan filename deferral, and the
UMT5 encoder probe (nested + flat + T5 + full-pipeline rejection).
- Total Wan test count: 41 passing, 1 heavy-test placeholder skipped.
- Full config test suite (63 tests) still passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): unbreak frontend lint after Wan additions
Five issues turned up while running `make frontend-lint`:
1. wan_denoise.py used `from __future__ import annotations`, which made
the `invoke()` return annotation a string ('LatentsOutput'). The
InvocationRegistry's `get_output_annotation()` returns the raw
annotation, so OpenAPI generation crashed with
`'str' object has no attribute '__name__'`. Removed the future-import
and added `Any` to the typing imports.
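The failure mode in (1) is a general PEP 563 pitfall, reproducible in isolation (class and function names here are hypothetical stand-ins):

```python
from __future__ import annotations  # PEP 563: all annotations become strings
import typing

class LatentsOutput:
    pass

def invoke() -> LatentsOutput:
    return LatentsOutput()

raw = invoke.__annotations__["return"]
assert isinstance(raw, str)        # 'LatentsOutput' -- raw.__name__ would raise
resolved = typing.get_type_hints(invoke)["return"]
assert resolved is LatentsOutput   # resolving the string works; raw access does not
```

The commit takes the other way out and simply removes the future-import; `typing.get_type_hints` would be the alternative if the import had to stay.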
2. ModelRecordChanges.variant didn't list WanVariantType, so the
generated schema's install/update endpoints rejected `t2v_a14b` and
`ti2v_5b`. Added it.
3. Regenerated frontend/web/src/services/api/schema.ts from the live
backend so it now includes BaseModelType.wan, ModelType.wan_t5_encoder,
SubModelType.transformer_2, ModelFormat.wan_t5_encoder, the Wan
variants, all Wan invocation types and their conditioning/transformer
field types.
4. modelManagerV2/models.ts: added `wan_t5_encoder` to the category map,
`wan` to the base color/long-name/short-name maps, the two Wan
variants to the variant-name map, and `wan_t5_encoder` to the
format-name map.
5. ModelManagerPanel/ModelFormatBadge.tsx: added `wan_t5_encoder` to
FORMAT_NAME_MAP and FORMAT_COLOR_MAP.
`make frontend-lint` now passes cleanly (tsc, dpdm, eslint, prettier).
All 41 Wan Python tests still pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
chore(wan): drop unused FE exports flagged by knip
These were forward-compatibility wiring for Phase 9 (the FE graph
builder) that has no consumers yet; knip rightly flagged them. Removed
or de-exported. They'll come back when the graph builder lands and
needs them.
- common.ts: zWanVariantType drops `export` (still used internally by
zAnyModelVariant).
- types.ts: drop isWanMainModelConfig, isWanDiffusersMainModelConfig,
isWanVAEModelConfig (no callers). The remaining
isWanT5EncoderModelConfig is used by models.ts. WanT5EncoderModelConfig
type drops `export` (still used as the type guard's narrowing target).
- modelsByType.ts: drop the six unused useWan*/selectWan* hooks +
selectors and their type-guard imports.
`make frontend-lint` (tsc + dpdm + eslint + prettier + knip) now green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
docs(wan): use *-Diffusers HF repo names in plan
The Wan-AI org publishes two flavours of each release:
* Wan-AI/Wan2.2-{TI2V-5B,T2V-A14B,I2V-A14B} ← upstream native
* Wan-AI/Wan2.2-{TI2V-5B,T2V-A14B,I2V-A14B}-Diffusers ← convertible
The native release has _class_name=WanModel in config.json and ships
weights flat at the repo root with no transformer/, vae/, text_encoder/
subdirs. It is not loadable by Diffusers' WanPipeline.from_pretrained.
Update plan doc to reference the -Diffusers repos throughout (probe
notes, starter-model entries) so the plumbing path matches what the
Diffusers loader actually expects.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): accept 0 as 'unset' sentinel for guidance_scale_low_noise
The frontend renders Optional[float] inputs with default 0 in the
numeric input rather than passing null/unset. Combined with ge=1.0,
this caused every wan_denoise invocation to fail Pydantic validation
with "Input should be greater than or equal to 1" until the user
manually entered a value (or knew to leave the field disconnected).
The request failed validation before invocation logging, so the error
never showed up in the server log either, making the failure hard to
diagnose.
Relax the constraint to ge=0.0 and treat values below 1.0 as the
"fall back to primary Guidance Scale" sentinel. The user's natural FE
default (0) now works as expected.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): correct preview dimensions and colors for TI2V-5B
Two bugs in the Wan branch of the diffusion step callback:
1. Wrong dimensions. The reported preview size hardcoded `* 8` for the
spatial downscale ratio, but TI2V-5B's Wan2.2-VAE uses 16x. A
1024x1024 target was being announced to the FE as 512x512.
2. Wrong colors. The previous fallback for 48-channel TI2V-5B latents
sliced the first 16 channels and applied the standard 16-channel
Wan-VAE projection. Those channel layouts are unrelated, so the
projection produced meaningless colors.
Add the proper Wan2.2-VAE 48-channel RGB projection matrix (and
bias) from ComfyUI's Wan22 latent format, and select the right
matrix + spatial scale by latent channel count: 16 → A14B (Wan VAE,
8x), 48 → TI2V-5B (Wan2.2-VAE, 16x).
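The channel-count dispatch can be sketched like this (the matrix names are placeholders for the actual projection tensors):

```python
def wan_preview_params(latent_channels: int) -> tuple[str, int]:
    # Returns (projection-matrix name, spatial downscale) per latent layout.
    if latent_channels == 16:
        return ("wan_vae_rgb_16ch", 8)     # A14B: Wan VAE, 8x downscale
    if latent_channels == 48:
        return ("wan22_vae_rgb_48ch", 16)  # TI2V-5B: Wan2.2-VAE, 16x downscale
    raise ValueError(f"unexpected Wan latent channel count: {latent_channels}")

assert wan_preview_params(48) == ("wan22_vae_rgb_48ch", 16)
```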
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): honor model's _class_name when building scheduler
TI2V-5B's scheduler_config.json declares _class_name=UniPCMultistepScheduler
with flow_shift=5.0. The previous code hardcoded
FlowMatchEulerDiscreteScheduler.from_pretrained(...), which silently
constructed a default-config FlowMatch instead of the UniPC the model
expects. The mismatched noise schedule manifests as soft / under-denoised
faces and global graininess in the final images.
Now: read scheduler_config.json, look up the named class on the diffusers
module, and instantiate that class via from_pretrained. UniPC and
FlowMatch share the same step()/set_timesteps()/sigmas/num_train_timesteps
interfaces, so the denoise loop works transparently for either.
A14B continues to use FlowMatchEulerDiscreteScheduler when its scheduler
config says so (its reference is FlowMatchEuler with shift=8.0). Falls
back to FlowMatchEulerDiscreteScheduler defaults when no on-disk config
is available.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): match diffusers WanPipeline tokenizer length and latent dtype
Two divergences from the Diffusers reference that were hurting image
quality (soft / grainy / distorted faces at default settings):
1. Tokenizer max_sequence_length was 226 in wan_text_encoder, but the
model was trained with 512-token sequences. The upstream native
config.json has text_len: 512, and Diffusers' WanPipeline.__call__
default is 512 (overriding _get_t5_prompt_embeds's stale 226 default).
Wan's cross-attention sees padded zeros past the prompt's actual
length but expects to be looking at a 512-position context window.
2. Latents were stored in bf16 throughout the denoise loop. Diffusers'
WanPipeline.prepare_latents explicitly uses dtype=torch.float32 and
only casts to the transformer's dtype right at the forward call:
latent_model_input = latents.to(transformer_dtype)
Storing in bf16 between steps accumulates ~40 steps of bf16
quantization on the scheduler's small per-step deltas. Now
latent_dtype = torch.float32 throughout, with a per-step cast for
the transformer forward pass.
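The fp32-with-per-step-cast pattern, sketched with NumPy in place of torch (step math simplified to a plain Euler-style update for illustration):

```python
import numpy as np

def denoise_step(latents: np.ndarray, transformer, step_size: float = 0.01):
    # Latents live in fp32 between steps; cast only for the forward pass
    # (mirrors diffusers: latent_model_input = latents.to(transformer_dtype)).
    model_input = latents.astype(np.float16)           # transformer dtype stand-in
    noise_pred = transformer(model_input).astype(np.float32)
    return latents - step_size * noise_pred            # scheduler math stays fp32

latents = np.random.default_rng(0).standard_normal((1, 16, 8, 8)).astype(np.float32)
latents = denoise_step(latents, lambda x: x)
assert latents.dtype == np.float32
```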
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
chore(wan): add diffusers reference comparison script
scripts/wan_diffusers_reference.py runs a Diffusers-format Wan 2.2
checkpoint directly via WanPipeline.from_pretrained, with the same
arguments InvokeAI's wan_denoise uses. Use it to A/B against InvokeAI
output when image quality is in question.
Defaults to enable_model_cpu_offload so the script fits on 16 GB cards
where the full pipeline (transformer + UMT5-XXL + VAE) would otherwise
OOM. --offload {model,sequential,none} controls the strategy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds single-file GGUF support for Wan 2.2 transformers, the path that
makes A14B usable on consumer GPUs (~7 GB/expert at Q4_K_M instead of
~28 GB at bf16).
Probe (configs/main.py):
- New helpers: _has_wan_keys (Wan vs Qwen/FLUX/Z-Image fingerprint via
condition_embedder.text_embedder.linear_1 + patch_embedding);
_detect_wan_gguf_variant (16ch -> A14B, 48ch -> TI2V-5B from
patch_embedding.weight.shape[1]); _detect_wan_gguf_expert (filename
heuristic for high_noise / low_noise / none).
- Main_GGUF_Wan_Config(base=Wan, format=GGUFQuantized, variant, expert).
Tolerates the ComfyUI 'model.diffusion_model.' / 'diffusion_model.'
prefixes via _has_wan_keys' multi-prefix scan.
- Registered in factory.py.
Loader (model_loaders/wan.py):
- WanGGUFCheckpointModel mirrors the QwenImage GGUF pattern:
gguf_sd_loader -> strip ComfyUI prefix -> auto-detect arch from state
dict shapes (num_layers, inner_dim, ffn_dim, text_dim, in_channels,
num_heads = inner_dim/128) -> init_empty_weights +
load_state_dict(strict=False, assign=True).
Loader invocation (wan_model_loader.py):
- New 'Transformer (Low Noise)' picker: optional second GGUF for the
A14B dual-expert MoE. Auto-swaps if the user wired the experts in the
wrong order. Warns when an A14B GGUF is loaded without a paired
low-noise expert (single-expert run, degraded quality).
- GGUF mains require either a standalone VAE+encoder or a Diffusers
Component Source (which can also supply boundary_ratio).
- Diffusers main path unchanged (still pulls both experts from
transformer/ + transformer_2/).
Tests (tests/.../test_wan_gguf_config.py):
- 14 tests across key fingerprint, variant detection, expert filename
heuristic, and the full probe (A14B high/low, TI2V-5B, GGUF rejection,
unrecognised state-dict rejection, explicit override).
Total Wan tests: 55 passing (no regressions). FE lint clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): support QuantStack-style GGUFs and standalone Diffusers VAE
The city96 Wan 2.2 GGUF repos have been removed from Hugging Face,
leaving QuantStack as the surviving distributor. QuantStack ships the
native upstream Wan key layout (text_embedding.0/2, self_attn/cross_attn,
ffn.0/2, head.head, head.modulation, ...) rather than the diffusers
naming city96 used; biases are stored as F16 rather than BF16; and the
standalone Wan VAE installs as a flat AutoencoderKLWan folder which the
generic loader rejects.
Three fixes:
1. Probe now recognises both diffusers and native key layouts via a new
_is_native_wan_layout helper; _has_wan_keys accepts either text-proj
fingerprint.
2. GGUF loader converts native -> diffusers keys (mirroring diffusers'
convert_wan_transformer_to_diffusers) and unwraps non-quantized
GGMLTensors to plain tensors at compute_dtype. The unwrap is needed
because conv3d isn't in GGMLTensor's dispatch table, so the F16
patch_embedding bias would otherwise hit conv3d against bf16 latents.
3. VAELoader gains a VAE_Diffusers_Wan_Config branch that loads
AutoencoderKLWan directly; the generic path can't handle a flat
single-class folder when a submodel_type is provided.
Adds 12 tests covering the native layout (probe + converter + unwrap).
Verified end-to-end against Wan2.2-T2V-A14B-Q4_K_M from QuantStack:
1095 tensors round-trip key-for-key against WanTransformer3DModel.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Probe + config (LoRA_LyCORIS_Wan_Config):
- Detects Wan LoRAs in three layouts: diffusers PEFT, native upstream PEFT
(ComfyUI), and Kohya (both naming variants).
- Anti-pattern guards prevent collisions with Anima (Cosmos DiT q_proj
convention), QwenImage (transformer_blocks), Flux (double/single blocks),
and Z-Image (diffusion_model.layers).
- Optional ``expert: "high" | "low" | None`` field; auto-detected from
filename (high_noise / low_noise / hyphenated / concatenated variants).
Key conversion (wan_lora_conversion_utils):
- Native upstream keys (self_attn/cross_attn, ffn.0/2) -> diffusers
(attn1/attn2, ffn.net.0.proj / ffn.net.2).
- Strips ``transformer.``, ``diffusion_model.``, ``base_model.model.transformer.``
prefixes from PEFT-style keys.
- Kohya layer names mapped through an explicit longest-match table.
- Output paths use diffusers naming so the LayerPatcher can resolve them
against WanTransformer3DModel parameter paths.
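The prefix stripping and native→diffusers renames above can be sketched as a prefix scan plus substring table (a simplification of the stated mapping; the real table also renames the q/k/v projection leaves and handles Kohya via longest-match):

```python
# Longest prefix listed first so base_model.* is stripped before transformer.*
PEFT_PREFIXES = ("base_model.model.transformer.", "transformer.", "diffusion_model.")
NATIVE_TO_DIFFUSERS = {
    ".self_attn.": ".attn1.",
    ".cross_attn.": ".attn2.",
    ".ffn.0.": ".ffn.net.0.proj.",
    ".ffn.2.": ".ffn.net.2.",
}

def convert_key(key: str) -> str:
    for prefix in PEFT_PREFIXES:
        if key.startswith(prefix):
            key = key[len(prefix):]
            break
    for native, diff in NATIVE_TO_DIFFUSERS.items():
        key = key.replace(native, diff)
    return key

assert convert_key("diffusion_model.blocks.3.cross_attn.k.lora_A.weight") == \
    "blocks.3.attn2.k.lora_A.weight"
```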
Loader integration:
- Adds BaseModelType.Wan branch to LoRALoader._load_model.
Invocation nodes (wan_lora_loader.py):
- WanLoRALoaderInvocation: single LoRA with auto/both/high/low target field.
- WanLoRACollectionLoader: list of LoRAs, auto-routed by each LoRA's
recorded expert tag.
- Output WanLoRALoaderOutput carries the WanTransformerField with updated
``loras`` / ``loras_low_noise`` lists.
Denoise integration:
- _ExpertSwapper now manages both the model_on_device context and the
LayerPatcher.apply_smart_model_patches context per expert. LoRA patches
are entered after device load and exited before device release, with
fresh iterators per swap.
- GGUF (quantized) experts request sidecar patching so GGMLTensor weights
aren't touched directly.
- Low-noise expert falls back to the primary loras list when
``loras_low_noise`` is empty (matches WanTransformerField semantics).
Tests: 81 new tests covering probe accept/reject across formats, anti-pattern
guards on competing architectures, converter round-trips for all three
layouts, invocation target resolution + routing + duplicate guards, and the
_ExpertSwapper lifecycle (lora context opens/closes in the right order
around the device swap, quantized flag forwards, no-LoRA path skips the
patch context, re-entering the same label is a no-op).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): probe Wan LoRA before Anima in the config union
Native-PEFT Wan LoRAs (lightx2v's Lightning, most ComfyUI-trained Wan
LoRAs) carry keys like ``diffusion_model.blocks.X.cross_attn.k.lora_A.weight``.
Anima's probe matches on the bare ``cross_attn``/``self_attn`` substring —
it does not require the Anima-specific ``_proj`` suffix nor any of the
``mlp``/``adaln_modulation`` Cosmos DiT markers — so these Wan LoRAs were
classified as ``BaseModelType.Anima`` because Anima happened to run first.
Reorder the LyCORIS section of ``AnyModelConfig`` so Wan probes first.
Wan's probe is strictly more restrictive (it rejects Anima's ``_proj``
attention suffix via the anti-pattern guard added in the previous commit),
so Anima LoRAs are still correctly classified after this reorder.
Existing users with mis-tagged installs need to delete the affected LoRA
records and reinstall.
Adds two regression tests: a union-ordering assertion, and a sanity check
that demonstrates Anima's probe *would* match Wan native keys if asked
directly — pinning the constraint that motivates the ordering.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
chore(i18n): add Wan2.2 T5 Encoder model-manager label
The frontend source already references ``modelManager.wanT5Encoder``;
the locale key was added with a casing typo (``want5Encoder``). Fix
the key so the Wan T5 Encoder model type renders its display name
correctly in the model manager UI.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This is a re-implementation; the first attempt, which used CLIP-vision
conditioning, was reverted. Wan 2.2 I2V-A14B does NOT use a CLIP-vision
encoder (the Diffusers repo ships ``image_encoder: [null, null]`` in
``model_index.json``); instead it conditions on a reference image by
VAE-encoding it and concatenating the resulting latents (plus a
first-frame mask) to the noise latents along the channel dim. The I2V
transformer therefore has ``in_channels=36`` (16 noise + 16 ref-image
latents + 4 mask) vs ``in_channels=16`` for T2V.
Taxonomy:
- Re-adds ``WanVariantType.I2V_A14B``.
Probes:
- Diffusers: ``_detect_wan_variant`` reads ``transformer/config.json::in_channels``;
36 → I2V_A14B, 16 → T2V_A14B (both share the dual-expert layout).
- GGUF: ``_detect_wan_gguf_variant`` recognises ``in_channels=36`` from the
patch_embedding tensor shape and emits I2V_A14B.
Backend extension (``backend/wan/extensions/wan_ref_image_extension.py``):
- ``preprocess_reference_image`` resizes + normalises to a 5D pixel tensor.
- ``encode_reference_image_to_condition`` VAE-encodes the image and stacks
a 4-channel first-frame mask on top, producing the
``[1, 20, 1, H/8, W/8]`` condition tensor the denoise loop consumes.
- Mirrors diffusers ``WanImageToVideoPipeline.prepare_latents`` with
``num_frames=1`` and ``expand_timesteps=False``.
Invocation node (``wan_ref_image_encoder.py``):
- "Reference Image - Wan 2.2": image + VAE + width/height pickers.
- Output ``WanRefImageConditioningField`` carries the condition tensor
name plus the dimensions used (so the denoise step can validate dim
parity).
Denoise integration:
- ``WanDenoiseInvocation`` gains an optional ``ref_image`` field.
- Variant gate: rejects ref_image on T2V_A14B and TI2V-5B with a clear
error before doing any work.
- Dimension gate: rejects ref-image width/height mismatch vs denoise.
- At every transformer call, concatenates the 20-channel condition
tensor to the 16-channel noise latents along the channel dim before
passing to the transformer (giving the 36-channel input I2V expects).
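The channel arithmetic at the transformer call, sketched with NumPy in place of torch:

```python
import numpy as np

# 16 noise latents, plus a 20-channel condition tensor:
# 16 VAE-encoded ref-image latent channels + 4 first-frame mask channels.
noise_latents = np.zeros((1, 16, 1, 64, 64), dtype=np.float32)
condition = np.zeros((1, 20, 1, 64, 64), dtype=np.float32)

# Concatenate along the channel dim right before the transformer forward.
model_input = np.concatenate([noise_latents, condition], axis=1)
assert model_input.shape == (1, 36, 1, 64, 64)  # matches I2V in_channels=36
```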
Tests: 14 new across the probe, the extension, and the denoise loop.
The synthetic ``_ZeroTransformer`` test stand-in now mirrors the real
I2V transformer's ``in_channels=36, out_channels=16`` asymmetry by
slicing its zero output back to 16 channels when the input is 36-wide.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): derive GGUF out_channels from proj_out shape (I2V support)
The GGUF loader was setting ``out_channels = in_channels`` which is wrong for
Wan 2.2 I2V-A14B: that variant has ``in_channels=36`` (16 noise + 16 ref-image
latents + 4 first-frame mask, concatenated by the denoise loop) but
``out_channels=16`` since the transformer only predicts the noise component
back. Loading an I2V GGUF would build a transformer with the wrong proj_out
shape and crash:
RuntimeError: Error(s) in loading state_dict for WanTransformer3DModel:
size mismatch for proj_out.weight: copying a param with shape
torch.Size([64, 5120]) from checkpoint, the shape in current model is
torch.Size([144, 5120]).
(144 = 36 * 4, 64 = 16 * 4 — patch_size=(1, 2, 2) → prod=4)
Read out_channels directly from the ``proj_out.weight`` shape in the state
dict. This is correct for all three Wan 2.2 variants without needing to know
the variant in advance.
Also tighten the num_layers fallback: T2V_A14B and I2V_A14B share 40 layers;
only TI2V-5B has 30. The fallback is rarely hit in practice (the per-block
count comes from the state dict scan), but the previous code would have
defaulted I2V_A14B to 30 layers.
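The fix's arithmetic, as a sketch (the helper name is illustrative):

```python
import math

def derive_out_channels(proj_out_weight_shape: tuple,
                        patch_size: tuple = (1, 2, 2)) -> int:
    # proj_out maps inner_dim -> out_channels * prod(patch_size), so the
    # first weight dim divided by prod(patch_size) recovers out_channels.
    return proj_out_weight_shape[0] // math.prod(patch_size)

assert derive_out_channels((64, 5120)) == 16    # I2V checkpoint: predicts 16ch noise
assert derive_out_channels((144, 5120)) == 36   # the wrong shape the old code built
```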
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(model): make Anima LoRA probe mutually exclusive with Wan
InvokeAI's ``Config_Base.CONFIG_CLASSES`` is a Python ``set``, so iteration
order during model probing is non-deterministic across process restarts.
First-match-wins ordering in ``AnyModelConfig`` is documentation only — it
has no effect on which config is iterated first.
Anima's previous probe accepted any state dict containing the substring
``cross_attn`` or ``self_attn``, which collides with Wan's native LoRA key
layout (``diffusion_model.blocks.X.cross_attn.q.lora_down.weight``). Both
probes accepted Wan native LoRAs (including lightx2v's Lightning T2V and I2V
distillations), and the ``matches.sort_key`` tiebreaker only disambiguates
by ModelType, not within LoRA configs. So which config "won" depended on
dict hash order — sometimes Wan, sometimes Anima.
The previous mitigation reordered the AnyModelConfig union to put Wan
before Anima. That worked by luck and was inherently fragile.
Tighten Anima's probe to require Cosmos-DiT-exclusive subcomponents:
``mlp``, ``adaln_modulation``, or ``_proj``-suffixed attention names
(``q_proj``/``k_proj``/``v_proj``/``output_proj``) — none of which appear
in any Wan LoRA. Wan native uses bare ``.q``/``.k``/``.v``/``.o`` on
``self_attn``/``cross_attn``, and ``ffn.N``/``ffn.net.N`` instead of ``mlp``.
The new strict detectors live alongside the original loose ones so the
Anima conversion utility (which runs after probing) still works.
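The strict probe's core check can be sketched as a marker scan (a simplification; names follow the commit's description, not the actual probe code):

```python
# Cosmos-DiT-exclusive markers: none of these appear in any Wan LoRA key.
ANIMA_MARKERS = ("q_proj", "k_proj", "v_proj", "output_proj",
                 "mlp", "adaln_modulation")

def is_anima_lora(keys) -> bool:
    return any(marker in key for key in keys for marker in ANIMA_MARKERS)

wan_native = ["diffusion_model.blocks.0.cross_attn.q.lora_down.weight",
              "diffusion_model.blocks.0.ffn.0.lora_down.weight"]
anima = ["blocks.0.self_attn.q_proj.lora_A.weight"]
assert not is_anima_lora(wan_native)  # Wan uses bare .q/.k/.v and ffn.N
assert is_anima_lora(anima)
```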
Regression tests in ``test_wan_lora_probe_independence.py`` cover:
- I2V Lightning V1 (the bug-triggering LoRA), T2V Lightning V2, Wan Kohya
and Wan diffusers PEFT layouts — Wan probe accepts, Anima probe rejects.
- Anima PEFT and Kohya layouts — Anima accepts, Wan rejects.
- A meta-test that runs every LoRA config in CONFIG_CLASSES against the
Lightning state dicts and asserts exactly one accepts — this catches
ANY future probe collision, not just Wan vs Anima.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): defer expert model loading in _ExpertSwapper to avoid cache thrash
The swapper used to take pre-loaded ``LoadedModel`` handles at construction:
high_info = context.models.load(self.transformer.transformer)
low_info = context.models.load(self.transformer.transformer_low_noise)
swapper = _ExpertSwapper(high_info=high_info, low_info=low_info, ...)
With dual ~9 GB A14B GGUF experts plus the ~10 GB UMT5-XXL encoder competing
for the same RAM cache, the LRU policy frequently dropped one expert by the
time the denoise loop swapped into it. The model manager then emitted
[MODEL CACHE] Locking model cache entry ... but it has already been
dropped from the RAM cache. This is a sign that the model loading
order is non-optimal in the invocation code (See ... invoke-ai#7513).
and reloaded the weights from disk (~1.2s extra per swap).
Refactor the swapper to take the ``ModelIdentifierField`` plus the
``InvocationContext`` and call ``context.models.load(model_id)`` lazily
inside ``get()``. Each swap obtains a fresh handle, the LRU window is
small, and the warning goes away.
Config metadata (used to compute ``is_quantized``) is read upfront via
``context.models.get_config()`` — that's metadata, not weights, so it
doesn't put pressure on the cache.
Tests: existing swapper lifecycle tests refactored to use a fake context
whose ``models.load`` is logged. A new ``test_lazy_load_per_swap_not_upfront``
pins the regression — it asserts ``models.load`` is NOT called at swapper
construction, only at first get() per expert.
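The before/after shape of the refactor, as a sketch (the real _ExpertSwapper also manages device residency and the LoRA patch contexts):

```python
class ExpertSwapper:
    """Lazy-load pattern: hold identifiers, not LoadedModel handles."""

    def __init__(self, context, high_id, low_id):
        self._context = context                      # no models.load() here
        self._ids = {"high": high_id, "low": low_id}

    def get(self, label: str):
        # Fresh handle per swap: the LRU window stays small, so the cache
        # never has to keep both large experts resident at once.
        return self._context.models.load(self._ids[label])

# Fake context that logs load calls, mirroring the commit's test approach.
class FakeModels:
    def __init__(self): self.loads = []
    def load(self, model_id):
        self.loads.append(model_id)
        return f"loaded:{model_id}"

class FakeContext:
    def __init__(self): self.models = FakeModels()

ctx = FakeContext()
swapper = ExpertSwapper(ctx, "high-gguf", "low-gguf")
assert ctx.models.loads == []                 # nothing loaded at construction
swapper.get("high")
assert ctx.models.loads == ["high-gguf"]      # loaded lazily on first get()
```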
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The denoise_mask wiring + RectifiedFlowInpaintExtension integration in
wan_denoise.py was put in place during Phase 2/3 alongside the rest of
the denoise loop. Phase 8 of the plan was about verifying that this
path works and locking it in with tests.
Three new tests under TestWanDenoiseInpaint:
1. test_preserved_region_matches_init_exactly: builds a half/half mask
(left = preserve, right = regenerate in user-side convention), runs
full denoise with the synthetic zero-output transformer, and asserts
the preserved half of the final latents equals the init exactly while
the regenerated half does not. Pins the mask-inversion + per-step
merge behavior.
2. test_inpaint_requires_init_latents: a mask without init latents must
raise a clear ValueError — the merge has nothing to weld back to.
3. test_no_mask_path_is_unchanged: regression that adding the inpaint
extension didn't perturb the non-inpaint codepath (with init latents
+ denoising_start=0.5 but no mask, the loop just runs img2img).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(frontend): add I2V_A14B to Wan variant zod enum + manager label
Phase 7 added the I2V_A14B backend variant. The frontend's zod enum
(features/nodes/types/common.ts:zWanVariantType) and the model manager's
variant-label map (features/modelManagerV2/models.ts) were still on the
two-variant list, so:
- ModelIdentifierField inputs with ui_model_variant filters on Wan
couldn't list I2V models.
- The model manager UI showed a raw 'i2v_a14b' string instead of the
human label.
Phase 9 (full linear-view wiring — type guards, hooks, params slice,
graph builder, tab UI) is in progress on a follow-up commit; this lands
the two small enum fixes first so the I2V probe / install paths work
correctly end-to-end with the existing FE.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the minimum frontend wiring needed to generate Wan 2.2 images from
the linear view:
- buildWanGraph.ts (new): text-to-image graph (model_loader →
text_encoder × 2 → denoise → l2i). Diffusers main model only —
transformer, VAE and UMT5 encoder all resolve from the same repo, so
no Wan-specific params slice fields are required yet. CFG-skip
branch when guidance_scale ≤ 1.0.
- useEnqueueGenerate / useEnqueueCanvas dispatchers: route
base === 'wan' to buildWanGraph.
- graph/types.ts: add wan_l2i / wan_i2l / wan_denoise / wan_model_loader
to the relevant node-type unions.
- addTextToImage / addImageToImage: include wan_denoise / wan_l2i so
width/height are wired correctly and the txt2img helper accepts the
Wan l2i node.
- isMainModelWithoutUnet: include wan_model_loader (Wan has no UNet,
same as the other modern bases).
- metadata.py: add wan_txt2img / wan_img2img / wan_inpaint to the
generation_mode enum (img2img / inpaint pieces land next).
- schema.ts: regenerated to pick up the metadata enum + new
Wan invocations.
Pieces left in Phase 9: params slice (standalone VAE / T5 / GGUF
low-noise / LoRA / ref-image fields + selectors), img2img + I2V + inpaint
branches in the graph builder, and Wan-specific UI components.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): Phase 9 piece invoke-ai#2 - GGUF support and CFG-Low control in linear view
Adds the three Wan-specific params + UI controls that gate GGUF workflows
plus a separate low-noise CFG slider for A14B users.
Params slice:
- wanTransformerLowNoise (the second-expert GGUF for A14B)
- wanComponentSource (Diffusers Wan model providing VAE + UMT5-XXL
when the main is a GGUF)
- wanGuidanceScaleLowNoise (optional separate CFG for the low-noise
expert; null = fall back to the primary CFG)
Plus a `selectIsWan` selector for accordion gating.
UI components:
- ParamWanModelSelects.tsx (Advanced accordion): two model pickers —
Transformer (Low Noise) filtered to Wan GGUF mains, and VAE/Encoder
Source filtered to Wan Diffusers mains. Mirrors the
ParamQwenImageComponentSourceSelect structure.
- ParamWanGuidanceScaleLowNoise.tsx (Generation accordion): slider +
number input with an "auto" indicator when cleared. Default 3.5
matches the diffusers reference 4.0 / 3.0 split.
Wiring:
- Generation accordion: ParamWanGuidanceScaleLowNoise shown when base
is wan, scheduler excluded for wan (same pattern as Anima/Qwen).
- Advanced accordion: ParamWanModelSelects shown when base is wan, and
Wan excluded from the SD-family VAE/CFG-rescale blocks.
- buildWanGraph.ts: forwards the three new params to the model loader
and denoise nodes (transformer_low_noise_model, component_source,
guidance_scale_low_noise) and adds them to the graph metadata.
Hooks/types:
- useWanDiffusersModels + useWanGGUFModels in modelsByType.ts.
- isWanDiffusersMainModelConfig + isWanGGUFMainModelConfig type guards.
- Three new locale strings (wanComponentSource, wanTransformerLowNoise,
wanGuidanceScaleLowNoise[Auto]).
GGUF workflow now works end-to-end in the linear view: pick a Wan GGUF
main, set Transformer (Low Noise) to the paired second-expert GGUF, set
VAE/Encoder Source to any Diffusers Wan repo (TI2V-5B is convenient at
~12 GB) — generate produces an image.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): UX polish on the Wan linear-view controls
Bundles five small fixes applied during a usability review of the Wan
linear-view section (piece invoke-ai#2):
1. **Filter Main vs Transformer (Low Noise) dropdowns by expert tag.**
The Wan GGUF probe records each file's ``expert`` field
(``"high"`` / ``"low"`` / ``"none"``) via filename heuristic.
- ``MainModelPicker``: hides ``expert === 'low'`` Wan GGUFs so users
can't accidentally wire a low-noise expert as the primary main.
- Transformer (Low Noise) picker (``useWanGGUFLowNoiseModels``):
shows ``expert === 'low'`` Wan GGUFs only.
Diffusers Wan mains and TI2V-5B aren't affected — they don't carry
the ``expert`` field on their config schema. The backend's auto-swap
safety net stays in place.
2. **Match the primary CFG slider's range.** The Wan low-noise CFG
slider was constrained to 1–10 while the primary CFG ranges 1–20.
With the diffusers reference 4/3 split, the low-noise slider thumb
sat noticeably further right than the primary — visually misleading.
Both sliders now share the 1–20 range with marks at [1, 10, 20].
3. **Label fits the form column.** "CFG (Low Noise)" → "CFG (Low)" so
the slider fits cleanly next to its label instead of overlapping.
4. **Indicator state for the low-noise CFG slider.** Replaced the inline
"(auto)" / "(same as cfg)" text — which kept overlapping the slider
regardless of how short the label got — with an X-only reset button
that's only visible when the user has set an explicit value. Absence
of the X conveys auto/fallback state without any text overhang.
5. **Friendlier Transformer (Low Noise) placeholder.** "Second-expert
GGUF for A14B (pair with the high-noise main)" → "Add for full
detail" — concise nudge for users who haven't paired the second
expert yet.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): Phase 9 piece invoke-ai#3 - linear-view img2img branch
Adds Wan 2.2 image-to-image to the linear view, mirroring the Qwen Image
pattern. The mode switches on the canvas state — pure-prompt runs go
through addTextToImage as before; canvas runs with an init image go
through addImageToImage which wires a fresh wan_i2l (Image to Latents -
Wan 2.2) node between the init image and the denoise's `latents` input,
honoring the existing denoise_start slider.
buildWanGraph:
- Drops the txt2img-only guard, branches on generationMode.
- img2img: spins up a wan_i2l node and hands it to addImageToImage
alongside the existing denoise / l2i / modelLoader (as vaeSource).
- inpaint / outpaint still fail loudly — pieces invoke-ai#4-invoke-ai#6.
graphBuilderUtils.getDenoisingStartAndEnd:
- Adds 'wan' to the simple-linear case (denoising_start = 1 -
denoisingStrength). Note: Wan's flow-matching schedule is "sticky"
on the init compared to SDXL — users will likely need denoisingStrength
≥ 0.7 to see substantial change, matching the user-found 0.15-0.3
denoising_start sweet spot from earlier img2img testing. We may
revisit this with an exponent rescale (like FLUX uses) if the
response curve feels off.
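The simple-linear mapping can be sketched directly (hypothetical helper
name; the real logic lives in graphBuilderUtils.getDenoisingStartAndEnd):

```python
def wan_denoising_start(denoising_strength: float) -> float:
    """Simple-linear img2img mapping: higher strength = start earlier
    in the schedule = more change to the init image."""
    if not 0.0 <= denoising_strength <= 1.0:
        raise ValueError("denoising_strength must be in [0, 1]")
    return 1.0 - denoising_strength
```

Strengths of 0.7-0.85 land in the user-found 0.15-0.3 denoising_start
sweet spot under this mapping.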
addImageToImage:
- Adds 'wan_i2l' to the i2l-node-type union so the Wan i2l can be
threaded through the shared helper.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): add wan_denoise to addImageToImage/addInpaint/addOutpaint type checks
Three sibling graph-helper utilities still carried the modern-base list
addTextToImage had before Wan was added, and the buildWanGraph img2img
branch tripped one of them at canvas-Generate time:
error [generation]: Failed to build graph
{name: 'Error', message: 'Wrong assertion encountered'}
The else-branch in each helper assumes 'denoise_latents' (the SD1.5/SDXL
legacy path) and asserts that — failing for any modern base not listed
above the branch. addTextToImage was already updated in Phase 9 piece #1;
this catches the parallel cases that the img2img/inpaint/outpaint flows
go through.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): Phase 9 piece invoke-ai#4 - linear-view inpaint and outpaint branches
Wires Wan 2.2 inpaint and outpaint through the existing addInpaint /
addOutpaint helpers. The backend's RectifiedFlowInpaintExtension was
plumbed into wan_denoise.py back in Phase 8 (commit ab54617); this
just connects the FE.
buildWanGraph:
- generationMode === 'inpaint' → spin up a wan_i2l, call addInpaint
with denoise + l2i + modelLoader (used as both vaeSource and
modelLoader since the Wan model loader carries the VAE).
- generationMode === 'outpaint' → parallel branch with addOutpaint.
addInpaint:
- i2l-node-type union now includes 'wan_i2l' (the addImageToImage and
addOutpaint type unions already do — different union shapes).
metadata.py:
- generation_mode literal adds "wan_outpaint" alongside the existing
wan_txt2img / wan_img2img / wan_inpaint entries.
isMainModelWithoutUnet already includes wan_model_loader (added in an
earlier Phase 9 piece), so the helpers skip the UNet wiring on
create_gradient_mask when Wan is the main.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): Phase 9 piece invoke-ai#5 - linear-view I2V branch (raster as reference image)
Wan 2.2 I2V-A14B models condition on a reference image whose VAE-encoded
latents are concatenated to the noise along the channel dim each step
(in_channels=36 on the I2V transformer). In the linear view this maps
cleanly onto the existing canvas raster layer: pick an I2V model, drag
an image to raster, generate.
buildWanGraph:
- Fetch the modelConfig early so the variant gate (i2v_a14b vs the
rest) can drive the branch shape instead of being a post-hoc check.
- I2V + txt2img: fail loudly ("Switch to the canvas tab and drag an
image to the raster layer"). I2V models won't produce useful output
without a reference, and the backend would crash trying to
concatenate a missing condition tensor.
- I2V + img2img: pull the raster image via the canvas compositor,
wire it through a wan_ref_image_encoder (which VAE-encodes it and
builds the 4-mask + 16-latent condition tensor backend-side), then
feed the result into denoise.ref_image. Denoise runs from fresh
noise (denoising_start=0, no init_latents) — the ref image is
cross-attention/concat conditioning, not a noise-trajectory anchor.
- I2V + inpaint/outpaint: fail clearly. Combining ref-image
conditioning with a denoise mask is conceptually possible but the
backend interaction hasn't been validated end-to-end.
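The channel bookkeeping behind in_channels=36 can be sketched with plain
arithmetic (constant and helper names are illustrative; the real tensors
are built backend-side in wan_ref_image_encoder):

```python
NOISE_CH = 16       # latent channels the denoiser predicts over
REF_LATENT_CH = 16  # VAE latents of the reference image
MASK_CH = 4         # first-frame conditioning mask

def condition_shape(batch: int, t: int, h: int, w: int) -> tuple:
    """Shape of the condition concatenated to the noise each step."""
    return (batch, MASK_CH + REF_LATENT_CH, t, h, w)

def transformer_in_channels() -> int:
    """Noise + condition, matching in_channels=36 on the I2V transformer."""
    return NOISE_CH + MASK_CH + REF_LATENT_CH

def first_frame_mask(t: int) -> list:
    """Per-latent-frame mask value: frame 0 conditioned, the rest zero."""
    return [1.0 if i == 0 else 0.0 for i in range(t)]
```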
metadata.py:
- Adds "wan_i2v" to the generation_mode literal so the metadata field
on I2V renders correctly.
T2V flows (txt2img / img2img / inpaint / outpaint) are unchanged for
non-I2V Wan variants (T2V-A14B and TI2V-5B).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): enforce multiple-of-16 dimensions to match transformer patch grid
Wan 2.2's transformer has ``patch_size=(1, 2, 2)``: it patch-embeds with
stride 2 then un-patches by 2. Combined with the VAE's 8x spatial scale,
canvas H/W must be a multiple of ``8 * 2 = 16`` — not just 8 — for the
patch round-trip to land exactly. Otherwise the latents and noise
prediction disagree by one in the spatial dim and the scheduler step
fails:
RuntimeError: The size of tensor a (147) must match the size of
tensor b (146) at non-singleton dimension 3
(here latent_w=147 → patch_w=73 → un-patched_w=146 ≠ 147)
This was silent for T2V at 1024x1024 (already a multiple of 16) but
fired for I2V at non-multiple-of-16 canvas sizes.
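The off-by-one reproduces with plain integer arithmetic (a sketch of
the failure mode, not the transformer code):

```python
VAE_SCALE = 8  # VAE spatial downscale
PATCH = 2      # transformer spatial patch stride

def round_trip_latent_width(canvas_w: int) -> tuple:
    """Return (latent width, latent width after patch -> un-patch)."""
    latent_w = canvas_w // VAE_SCALE
    patch_w = latent_w // PATCH  # floor division drops the odd column
    return latent_w, patch_w * PATCH

def snap16(dim: int) -> int:
    """Snap a canvas dimension down to the 8 * 2 = 16 grid."""
    return dim - (dim % 16)
```

1176 px gives latent 147 -> 73 patches -> 146 after un-patching,
exactly the tensor-size mismatch above; 1024, a multiple of 16,
round-trips cleanly.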
Fixes:
- ``optimalDimension.getGridSize``: Wan moves from the default 8 case to
the multiple-of-16 case (alongside flux / sd-3 / qwen-image / z-image
which have the same patch arithmetic). The canvas bbox UI now snaps
Wan dimensions to multiples of 16.
- ``wan_denoise.py`` and ``wan_ref_image_encoder.py``: bump width/height
``multiple_of`` from 8 to 16. Defense-in-depth — workflow-editor
users won't be able to send a non-16-aligned dim either.
Existing backend tests (23 passing) still hold — 1024 is divisible by 16
so the test fixtures didn't exercise the off-by-one path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): show negative prompt box in Wan linear-view
Wan was missing from SUPPORTS_NEGATIVE_PROMPT_BASE_MODELS, so the
linear-view negative-prompt input was hidden even though the Wan denoise
node already wires negative conditioning when CFG > 1
(buildWanGraph.ts:67-75). Adds 'wan' to the list.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): Phase 9 piece invoke-ai#6 - Wan LoRA collection in linear view
Adds Wan LoRA wiring to buildWanGraph, mirroring the Qwen Image pattern.
The shared LoRASelect / LoRAList UI in the linear view already filters
LoRAs by the selected main model's base, so Wan LoRAs surface
automatically when a Wan main is picked — no UI changes needed.
addWanLoRAs (new):
- Filters state.loras.loras to enabled Wan LoRAs.
- For each LoRA: spawns a ``lora_selector`` node and threads it
through a single ``collect`` collector.
- Routes the collector into a ``wan_lora_collection_loader`` which
sits between modelLoader and denoise — modelLoader.transformer →
loader, then loader.transformer → denoise (rerouting the original
modelLoader → denoise edge).
- Emits per-LoRA metadata so PNG metadata + workflow restore work.
The dual-expert routing (high-noise vs low-noise vs untagged) is
handled entirely on the backend by ``WanLoRACollectionLoader`` based on
each LoRA's recorded ``expert`` tag (set by the probe from the filename
heuristic in piece invoke-ai#5 of Phase 5). The FE just hands over the bag of
LoRAs; no per-list FE plumbing needed.
buildWanGraph:
- Calls addWanLoRAs(state, g, denoise, modelLoader) after the base
transformer edge is in place. The helper is a no-op when no Wan
LoRAs are enabled, so it's safe to call unconditionally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fix(wan): detect LoRA variant and filter by main model
Wan 2.2 A14B (inner_dim=5120) and TI2V-5B (inner_dim=3072) LoRAs are not
interchangeable — applying one against the wrong main model crashes the
layer patcher with a tensor-shape error (e.g. A14B Lightning on TI2V-5B
mains produced ``shape '[3072, 3072]' is invalid for input of size 26214400``).
Probe Wan LoRAs' inner-dim at install time and record the family on a new
``variant`` field (``a14b`` / ``5b`` / null). The LoRA picker in the linear
view hides incompatible variants when the user selects a main, and the
graph builder filters any still-enabled mismatches at submit time with a
warning. Untagged LoRAs (probe couldn't identify) pass through so they
aren't silently hidden.
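The family check can be sketched as a scan over the LoRA's weight shapes
(the input format here is a hypothetical key-to-shape mapping; the real
probe reads the checkpoint itself):

```python
from typing import Optional

# Inner dims that mark each Wan 2.2 family (from the commit message).
A14B_INNER_DIM = 5120
TI2V_5B_INNER_DIM = 3072

def detect_wan_lora_variant(shapes: dict) -> Optional[str]:
    """Guess the LoRA's family from its weight shapes: LoRA up/down
    matrices touching the transformer's projections carry the model's
    inner dim on one axis."""
    for shape in shapes.values():
        if A14B_INNER_DIM in shape:
            return "a14b"
        if TI2V_5B_INNER_DIM in shape:
            return "5b"
    return None  # untagged: pass through rather than silently hide
```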
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
feat(wan): ref-image panel, GGUF readiness, and auto-default sources
Wan 2.2 I2V now uses the global Reference Images panel (same UX as Qwen
Image Edit and FLUX.2 Klein) instead of pulling the conditioning image
from a canvas raster layer. Adds:
- WanReferenceImageConfig zod type + isWanReferenceImageConfig guard;
integrated into the ref-image discriminated union, settings panel,
layer hooks, and validators.
- 'wan' added to SUPPORTS_REF_IMAGES_BASE_MODELS, but the panel only
shows for the i2v_a14b variant (T2V and TI2V-5B don't consume ref
images, so the panel is hidden for them).
- buildWanGraph I2V branch reads the first enabled wan_reference_image
from refImagesSlice; the canvas-raster-as-ref path is removed. I2V
now only supports txt2img mode (canvas img2img/inpaint/outpaint
assert with a clear message).
GGUF Wan readiness check: GGUF mains carry only the transformer, so the
loader needs a Diffusers Component Source (or standalone VAE + UMT5-XXL
encoder) to resolve the VAE and text encoder. Without one, enqueue is
now blocked with a clear reason. The low-noise A14B partner expert
remains optional (loader falls back to the high-noise expert when it's
missing).
Adds standalone Wan VAE and Wan T5 Encoder selectors to the Advanced
accordion (Qwen pattern). Wires them as vae_model / wan_t5_encoder_model
on the wan_model_loader node — backend priority is standalone > diffusers
main > component source.
Auto-default on Wan selection (so GGUF users don't have to fiddle with
Advanced): when the new main is a Wan GGUF, fill the Component Source,
standalone VAE, and standalone T5 encoder with first available matches
if not already set. Component Source is matched by variant family
(A14B GGUF prefers an A14B Diffusers; TI2V-5B prefers a TI2V-5B
Diffusers) since the two families use different VAE channel counts
(16 vs 48); within A14B, T2V and I2V share VAE/encoder so they're
interchangeable as a source. Runs on every Wan selection (including
Diffusers -> GGUF switches), only fills empty slots.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Wan 2.2 starter pack (installed when the user picks the Wan 2.2
bundle) covers the minimal-cost path to running A14B T2V end-to-end:
- Standalone UMT5-XXL encoder and A14B VAE (so GGUF mains don't need
a full Diffusers download for their VAE/encoder sources).
- T2V A14B Q4_K_M and Q8_0 GGUF expert pairs (high + low noise).
- T2V Lightning V1.1 Seko rank-64 LoRA pair (4-step inference).
Additional Wan 2.2 starter models browseable from the model manager:
- Full Diffusers T2V A14B, I2V A14B, and TI2V-5B.
- I2V A14B Q4_K_M and Q8_0 GGUF expert pairs + Lightning V1 LoRA pair.
- TI2V-5B Q4_K_M and Q8_0 GGUFs + the 48-channel TI2V-5B VAE.
Each "high noise" GGUF lists its low-noise partner plus the shared VAE
and UMT5-XXL encoder as dependencies, so installing one of them pulls
in everything the loader needs. QuantStack's HighNoise/LowNoise file
naming and lightx2v's high_noise_model/low_noise_model.safetensors are
both picked up by the existing filename heuristic in the GGUF probe.
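A sketch of that filename heuristic (patterns taken from the two naming
schemes named above; the real probe may match more):

```python
import re

def expert_from_filename(filename: str) -> str:
    """Classify a Wan GGUF as the high- or low-noise expert by filename.
    Matches QuantStack's HighNoise/LowNoise and lightx2v's
    high_noise_model/low_noise_model conventions."""
    name = filename.lower()
    if re.search(r"high[_-]?noise", name):
        return "high"
    if re.search(r"low[_-]?noise", name):
        return "low"
    return "none"
```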
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
docs(wan): add Wan 2.2 hardware requirements
Adds Wan 2.2 A14B (T2V/I2V) and TI2V-5B rows to the hardware
requirements table with rough VRAM/RAM guidance per quantization.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…one VAE/T5

Wan-specific metadata fields embedded by the graph builder
(wan_transformer_low_noise, wan_component_source, wan_vae_model,
wan_t5_encoder_model, wan_guidance_scale_low_noise) had no recall
handlers in features/metadata/parsing.tsx, so recalling an image's
parameters would leave these fields empty. Adds a handler for each that
dispatches the matching paramsSlice action and renders a row in the
metadata viewer.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Ships two default workflows in the library, tagged so they appear in
"Browse Workflows" under the wan2.2 / text to image / image to image
tags:
- Text to Image - Wan 2.2: full T2V/TI2V-5B graph (model loader,
positive + negative encoders, denoise, l2i). Exposes the five
model slots, prompts, steps, dual CFG, and dimensions.
- Image to Image - Wan 2.2: I2V A14B graph that adds a
wan_ref_image_encoder. Exposes the reference image input plus
the standard fields.
Both follow default-workflow rules: IDs prefixed with default_,
meta.category = "default", and no references to user-installed
resources.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a parallel video pipeline alongside the existing image pipeline so the
gallery can host MP4 alongside PNGs. Implements:
- New service modules (parallel to image equivalents):
video_records/ record store + sqlite impl
video_files/ disk file store (mp4 + first-frame webp thumb)
videos/ orchestrating service
board_video_records/ board <-> video association
- migration_32 creates `videos` and `board_videos` tables
- /api/v1/videos/ router: upload, list, get DTO, /full (with HTTP Range
so HTML5 <video> seek/scrub works), /thumbnail, /metadata, star/unstar,
delete, batch delete, board add/remove
- LocalUrlService.get_video_url and SimpleNameService.create_video_name
- imageio[ffmpeg] dep for video encode (used in later phases)
- Wires all four new services into InvocationServices, dependencies.py,
api_app.py, and three test fixtures
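The Range handling needed for native <video> scrubbing can be sketched
with a stdlib single-range parser (a simplified sketch; the real router
presumably leans on framework helpers):

```python
import re

def parse_range(header: str, file_size: int):
    """Parse a 'bytes=start-end' Range header into an inclusive byte
    span, or None for malformed/unsatisfiable ranges. A 206 response
    would then carry: Content-Range: bytes {start}-{end}/{file_size}."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m or (m.group(1) == "" and m.group(2) == ""):
        return None
    if m.group(1) == "":  # suffix range: the last N bytes
        start = max(file_size - int(m.group(2)), 0)
        end = file_size - 1
    else:
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else file_size - 1
    if start >= file_size or start > end:
        return None
    return start, min(end, file_size - 1)
```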
Verified end-to-end against an in-memory db + tmp output dir: upload,
probe, save (file + thumbnail + record), DTO build, list, delete.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds /api/v1/gallery/items/ and /api/v1/gallery/items/names returning a
unified time-sorted stream of images + videos so the frontend can render
them interleaved with a single virtualized query.
- gallery_common: GalleryItem discriminated union (kind + name + shared
  fields + nullable video duration/fps), GalleryItemRef, names result
- gallery_default: SqliteGalleryService implements UNION ALL across the
  images and videos tables, applying identical filters (origin/category/
  is_intermediate/board_id/search) to each half; pagination via outer
  ORDER BY + LIMIT/OFFSET; counts are summed across the two halves
- URLs are resolved at row -> DTO conversion time so each item routes to
  the correct /api/v1/images or /api/v1/videos endpoint
- Wired into InvocationServices, dependencies.py, api_app.py, and the
  three test fixtures
Existing /api/v1/images endpoints are unchanged so any non-gallery
consumers (queue, recall, metadata workflows) continue to work as-is.
Verified e2e: 2 images + 2 videos inserted in alternating order; both
list_items and list_item_names return the correct interleaved order;
category filter narrows to a single kind; starring an item bumps it to
the top when starred_first=True.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
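The UNION ALL pagination pattern can be sketched against a toy schema
(illustrative table/column names, not the real InvokeAI tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE images (name TEXT, created_at INTEGER);
    CREATE TABLE videos (name TEXT, created_at INTEGER);
    INSERT INTO images VALUES ('a.png', 1), ('c.png', 3);
    INSERT INTO videos VALUES ('b.mp4', 2), ('d.mp4', 4);
""")
# One compound SELECT: tag each half with its kind, then let the outer
# ORDER BY + LIMIT/OFFSET interleave and paginate the merged stream.
rows = conn.execute(
    """
    SELECT 'image' AS kind, name, created_at FROM images
    UNION ALL
    SELECT 'video' AS kind, name, created_at FROM videos
    ORDER BY created_at DESC
    LIMIT ? OFFSET ?
    """,
    (3, 0),
).fetchall()
```

The same filters would be applied identically inside each half of the
union before the outer ORDER BY.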
Adds the typed API surface and upload integration so videos can be
uploaded through the same gallery upload button that handles images.
Schema: re-ran pnpm typegen against the running backend to pick up
VideoDTO, VideoRecordChanges, GalleryItem, GalleryItemKind,
GalleryItemRef, GalleryItemNamesResult and the two new paginated result
types.
RTK Query (services/api/endpoints/videos.ts, parallel to images.ts):
listVideos, getVideoDTO, getVideoMetadata, getVideoNames, uploadVideo,
deleteVideo / deleteVideos, changeVideoIsIntermediate, starVideos /
unstarVideos, addVideoToBoard / removeVideoFromBoard. Imperative helpers
(getVideoDTO, getVideoDTOSafe, uploadVideo, uploadVideos) and the
useVideoDTO convenience hook ride alongside, mirroring the image side.
Tag types and invalidation: added Video / VideoList / VideoMetadata /
VideoNameList / BoardVideosTotal / GalleryItemList / GalleryItemNameList
to the api root. Board-affecting mutations now invalidate the
polymorphic gallery list/name caches so videos and images stay coherent
once the gallery wiring lands in Phase 4. Added a sibling
getTagsToInvalidateForVideoMutation helper.
Upload UX: useImageUploadButton.tsx's dropzone now accepts video/mp4,
video/webm, video/quicktime alongside the existing image MIMEs. The
drop handler splits files into image/video sets and routes each through
its own mutation; a new onUploadVideo callback parallels the existing
onUpload. Existing image-only callers pass through unchanged.
Polymorphic gallery query endpoints + the useGalleryItemDTO hook will
land with Phase 4 where they have actual consumers; the schema types
they'll need are already in place under @knipignore tags.
Verified: pnpm lint (knip, dpdm, eslint, prettier, tsc) all green;
pnpm test 1103/1103 pass; live curl against the running dev server
uploads an MP4 and serves both the webp thumbnail and the MP4 with a
working HTTP Range response (206 + Content-Range).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Videos now appear in the same gallery grid as images, interleaved by
created_at. Video thumbnails get a centered play-button badge so they
read as videos at a glance; everything else (selection, virtualization,
search, paged/virtual gallery views, keyboard nav) is unchanged.
Approach: selection state stays `string[]` of names. The kind is
recovered from the filename extension (.mp4 = video, anything else =
image), which is reliable because the backend's SimpleNameService
always emits `<uuid>.png` for images and `<uuid>.mp4` for videos. This
sidesteps a 32-file cross-cut from changing the selection shape to a
discriminated union, and selection is persist-denylisted so no
migration is needed.
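The extension-based kind recovery and the per-kind split can be sketched
in a few lines (Python stand-ins for the TS helpers; partition_names is
a hypothetical name for what the fetching hook does inline):

```python
def is_video_name(name: str) -> bool:
    """Kind recovery by extension: safe because the backend name
    service always emits <uuid>.png for images and <uuid>.mp4 for
    videos."""
    return name.endswith(".mp4")

def partition_names(names: list) -> tuple:
    """Split a mixed selection into (image names, video names) so each
    kind can hit its own DTO endpoint."""
    images = [n for n in names if not is_video_name(n)]
    videos = [n for n in names if is_video_name(n)]
    return images, videos
```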
Frontend:
- new isVideoName helper in features/gallery/store/types
- new endpoints/gallery.ts (deferred from Phase 3): useGetGalleryItemNamesQuery
- new ImageGrid/GalleryItemPlayBadge: centered triangular badge over thumbnail
- new ImageGrid/GalleryItemVideoStarIconButton: video-typed star toggle
- new ImageGrid/GalleryVideoItem: counterpart to GalleryImage; reuses
galleryItemContainerSX, GalleryItemSizeBadge (width/height-only stand-in),
selection handling (single/shift/ctrl/cmd); alt-click falls through to a
normal select since comparison is image-only
- use-gallery-image-names now calls the polymorphic gallery names endpoint
and exposes a mixed flat name list (existing callers - paged grid, search,
navigation hotkeys - get the same shape)
- useRangeBasedImageFetching partitions visible names by extension; images
bulk-fetch via the existing getImageDTOsByNames mutation, videos dispatch
individual getVideoDTO queries (no batch endpoint yet)
- GalleryImageGrid's ImageAtPosition dispatches on isVideoName to render
GalleryImage or GalleryVideoItem; star hotkey dispatches to the right
star/unstar mutation based on kind
- pruned the now-unused useGetImageNamesQuery / isImageName exports
Verified: pnpm lint (knip, dpdm, eslint, prettier, tsc) all green;
pnpm test 1103/1103 pass; live curl of /api/v1/gallery/items returns
57 polymorphic items with video duration populated and image duration
null, /api/v1/gallery/items/names returns matching {kind, name} refs.
The useGalleryItemDTO hook is intentionally deferred to Phase 5 where
the polymorphic viewer is its first real consumer.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Selecting a video now renders a polymorphic preview inside the existing
viewer panel: thumbnail with a centered play button by default; clicking
play swaps in an HTML5 <video controls autoplay>. Switching to a
different item drops the video element back to idle (auto-pauses), and
selecting an image again returns to the normal image preview.
New components (features/gallery/components/ImageViewer/):
- VideoPlayButtonOverlay: large centered play button with hover/shadow,
  used over the thumbnail in the idle state.
- CurrentVideoPreview: idle/playing state machine. Resets on video_name
  change. The <video> src points at /api/v1/videos/i/.../full which
  supports HTTP Range, so seek/scrub work natively in the browser.
New hook:
- common/hooks/useGalleryItemDTO: polymorphic DTO resolver that
  dispatches between useImageDTO and useVideoDTO based on filename
  extension (isVideoName). Centralizes the kind-dispatch the viewer and
  toolbar both need.
Wiring:
- ImageViewer dispatches on galleryItem.kind to render
  CurrentImagePreview or CurrentVideoPreview. The compare-image DnD
  drop target is hidden when a video is selected (comparison is
  image-only).
- ImageViewerToolbar hides the image-specific action row
  (CurrentImageButtons — load workflow, recall metadata, edit, etc.)
  and the metadata viewer toggle when a video is selected. The
  general-purpose ToggleProgressButton stays.
Out of scope (per the plan): video deletion from the viewer (use gallery
hover icons), video-specific metadata viewer, comparison-mode support
for videos.
Verified: pnpm lint (knip, dpdm, eslint, prettier, tsc) all green;
pnpm test 1103/1103 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…pzone

The gallery-wide drag-and-drop target lives in FullscreenDropzone, not
in useImageUploadButton (which only powers the upload button). It had
its own hardcoded image-only zod allowlist that rejected MP4 files with
"File type / extension is not supported".
- Broaden the zod refines to accept video/mp4, video/webm,
  video/quicktime, video/x-matroska and the matching extensions
- Add isVideoFile helper, split dropped files into image/video sets,
  and route each set through its own uploader (uploadImages /
  uploadVideos). Both update their respective RTK caches and invalidate
  the polymorphic gallery list/names.
- Skip the canvas-paste fast-path for single-video drops — the canvas
  doesn't host videos as layers.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a three-item context menu (delete, change board, download) on
right-click / long-press of any gallery video item. Mirrors the image
context menu's singleton-portal architecture so re-renders stay cheap.
New files:
- features/gallery/contexts/VideoDTOContext: small React context that
scopes the active video DTO to the menu items (parallels
ImageDTOContext).
- features/gallery/components/ContextMenu/MenuItems/
ContextMenuItemDeleteVideo: window.confirm + deleteVideo mutation.
Videos can't be referenced from canvas/nodes/refs, so the image
modal's usage analysis is unnecessary; a one-step confirm matches
the "minimal" scope.
ContextMenuItemDownloadVideo: reuses the existing useDownloadItem
hook against videoDTO.video_url / video_name.
ContextMenuItemChangeBoardVideo: dispatches videosToChangeSelected
and opens the (now polymorphic) ChangeBoardModal.
- features/gallery/components/ContextMenu/VideoContextMenu: singleton
pattern lifted from ImageContextMenu — registers gallery video
elements via a Map; right-click looks up the target node and opens
the menu at the cursor.
Extended files:
- features/changeBoardModal/store/slice: added video_names alongside
image_names plus a videosToChangeSelected action. The two arrays are
mutually exclusive — setting one clears the other.
- features/changeBoardModal/components/ChangeBoardModal: now dispatches
the matching video board mutations (add/removeVideoToBoard, plural
endpoints don't exist yet so videos move one at a time — the menu
acts on a single selection so this is a one-iteration loop).
- features/gallery/components/ImageGrid/GalleryVideoItem: registers
itself with useVideoContextMenu.
- app/components/GlobalModalIsolator: mounts the singleton.
Verified: pnpm lint (knip, dpdm, eslint, prettier, tsc) all green;
pnpm test 1103/1103 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two new invocation nodes that produce MP4 videos from a Wan 2.2
A14B transformer + VAE, plus the supporting plumbing.
New invocations:
- WanVideoDenoise (wan_video_denoise) — multi-frame counterpart to
  WanDenoise. Same per-step logic (CFG, MoE expert swap at the boundary
  timestep, LoRA patching, scheduler dispatch) — reuses _ExpertSwapper,
  _resolve_variant, and the scheduler/LoRA helpers from wan_denoise.
  Difference: the noise tensor has a real temporal dim built from
  num_frames, and the I2V condition is built across all latent frames
  (frame 0 conditioned, rest zero). Defaults match the Wan 2.2
  reference: 832x480 / 81 frames / 40 steps / CFG 5.0 (high) + 4.0
  (low). Inpaint / img2img are out of scope for this first cut. TI2V-5B
  is rejected; T2V/I2V A14B only.
- WanLatentsToVideo (wan_l2v) — VAE-decodes 5D latents to RGB frames
  via AutoencoderKLWan (T_pixel = (T_lat - 1) * 4 + 1), then encodes an
  MP4 with imageio[ffmpeg] (libx264, yuv420p for browser
  compatibility). The temp file is moved into outputs/videos/ via
  context.videos.save().
Backend shared pieces:
- make_noise gains num_latent_frames (default 1, backward compatible).
- Added num_latent_frames_for(num_frames, scale=4) helper.
- New encode_reference_image_to_video_condition mirrors diffusers'
  WanImageToVideoPipeline.prepare_latents with last_image=None and
  expand_timesteps=False: pads the reference image with zero
  pixel-frames, VAE-encodes the full pseudo-video, normalises, and
  builds the 4-channel temporal-rearranged first-frame mask. Verified
  numerically: 21 latent frames for num_frames=81; first latent frame's
  4 mask channels = 1, rest = 0.
- The existing single-frame encoder is left untouched.
Schema / context:
- New VideoField primitive (parallel to ImageField) and VideoOutput
  invocation output (width/height/num_frames/fps/duration/video).
- New VideosInterface on InvocationContext with .save(source_path,
  width, height, duration, fps, ...) returning VideoDTO. Mirrors
  ImagesInterface — falls back to WithBoard / WithMetadata mixins and
  embeds the queue item's workflow/graph as a JSON sidecar.
- WanRefImageConditioningField now carries num_frames so the denoise
  nodes can sanity-check the I2V condition. WanRefImageEncoder bumps to
  v1.1.0 and gains a num_frames=1 input (use 81+ for video I2V; the
  encoder dispatches between the single- and multi-frame helpers).
- Image WanDenoise now rejects multi-frame conditions with a clear
  message pointing at WanVideoDenoise.
Verified: pnpm lint (5/5) green; pnpm tests (multiuser auth 122/122 +
broader suite via prior runs); numerical shape checks for noise and
ref-image condition; end-to-end smoke via VideoService.create. A
restart of the InvokeAI server is required to pick up the new
invocations in the workflow editor.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
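The temporal arithmetic both nodes rely on can be written out directly
(the first signature is from the commit above; the inverse helper name
is hypothetical):

```python
def num_latent_frames_for(num_frames: int, scale: int = 4) -> int:
    """Latent frame count under the Wan VAE's 4x temporal compression:
    the first pixel frame gets its own latent, every subsequent `scale`
    pixel frames share one."""
    return (num_frames - 1) // scale + 1

def num_pixel_frames_for(num_latent_frames: int, scale: int = 4) -> int:
    """Inverse used on decode: T_pixel = (T_lat - 1) * 4 + 1."""
    return (num_latent_frames - 1) * scale + 1
```

At the reference default of 81 frames this gives 21 latent frames,
matching the numerical check above.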
Two new default workflows for the workflow editor 'Browse' modal:
- 'Text to Video - Wan 2.2' — model loader -> two text encoders ->
  wan_video_denoise -> wan_l2v. Exposes prompt, model picks, CFG (high
  + low), dimensions, frames, fps, and steps.
- 'Image to Video - Wan 2.2' — same shape plus a wan_ref_image_encoder
  feeding the denoise node's ref_image input. Exposes the reference
  image and the frames field on the ref-image node (must match the
  denoise node's frames — there is a clear validation error if they
  diverge, but the starter has them in sync at 81).
Both default to the Wan 2.2 reference settings: 832x480, 81 frames @ 16
FPS (~5 s), 40 steps, CFG 5.0 (high expert) + 4.0 (low expert), seeded
by a rand_int. Pass the existing _sync_default_workflows validator (id
starts with default_, meta.category=default).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
run_app.py validates every invocation's return-type annotation against
the output-class registry. wan_latents_to_video.py had a stray
'from __future__ import annotations' which made the `invoke()` return
annotation a string ('VideoOutput') at runtime. The registry mismatch
triggered the unregistered-output warning path, which itself crashed
on output_annotation.__name__ because the annotation was a str:
AttributeError: 'str' object has no attribute '__name__'
The other Wan invocations don't use future annotations — drop the
import to match. Verified post-fix: api_app import populates 95
output classes, wan_l2v annotation resolves to the real VideoOutput
class and is in the registry.
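The failure mode reproduces without the future import by writing the
annotation as a string literal, which is exactly what PEP 563 turns
every annotation into; typing.get_type_hints is the standard way to
resolve either form back to the class (a sketch, not the registry code):

```python
import typing

class VideoOutput:
    pass

# Equivalent at runtime to `from __future__ import annotations`:
def invoke() -> "VideoOutput":
    return VideoOutput()

raw = invoke.__annotations__["return"]
# `raw` is the string 'VideoOutput' — registry code doing
# raw.__name__ crashes, since str has no __name__ attribute.
resolved = typing.get_type_hints(invoke)["return"]  # the actual class
```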
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Same graph as 'Text to Video - Wan 2.2' but with two Apply LoRA - Wan
2.2 nodes chained between the model loader and the denoise node, and
defaults retuned for the Lightning distillation: 4 steps and CFG 1.0 on
both experts (CFG=1 skips the negative-conditioning forward pass
entirely, ~20x faster than the 40-step / CFG-5.0 baseline at similar
quality).
Adapted from a user-saved workflow; cleaned for distribution by
stripping the install-specific model/LoRA key bindings (defaults should
not bake in local UUIDs), bumping to a fresh default_-prefixed id with
meta.category=default, exposing the two LoRA fields (lora + weight) so
users can swap LoRAs without diving into the canvas, and flagging the
negative-prompt node as unused at CFG=1.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two new default workflows that wire the Lightning LoRA pair into the
T2V and I2V video pipelines for a ~20x speedup:
- 'Text to Video - Wan 2.2 Lightning' — model loader -> apply LoRA
  (high) -> apply LoRA (low) -> text encoders -> wan_video_denoise ->
  wan_l2v. Defaults to 4 steps and CFG 1.0 (no negative branch).
  Cleaned-up version of Lincoln's saved Lightning workflow: stripped
  per-install model/LoRA keys, switched meta.category to 'default' with
  a default_ id, and exposed both LoRA loaders' lora/weight/target
  fields so users can swap LoRAs without diving into the canvas.
- 'Image to Video - Wan 2.2 Lightning' — same chain plus a
  wan_ref_image_encoder (v1.1.0 with num_frames) feeding the denoise
  ref_image input. Defaults match the non-Lightning I2V starter
  (832x480, 81 frames @ 16 FPS) but with 4 steps / CFG 1.0.
LoRA target defaults to 'auto' so properly-tagged Lightning LoRAs route
themselves; both workflow descriptions tell users to set explicit
'high'/'low' targets if their LoRAs are untagged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
wan_latents_to_video was passing plugin='pyav' to iio.imwrite, but the runtime only has imageio-ffmpeg installed (no PyAV). The encode step at the very end of generation crashed with:

    ImportError: The `pyav` plugin is not installed. Use `pip install imageio[pyav]` to install it

Switch to plugin='FFMPEG' — backed by the bundled imageio-ffmpeg binary that pyproject already requires via imageio[ffmpeg]. libx264 yuv420p is the FFMPEG plugin's default for .mp4, so the explicit pixel_format is dropped (specifying it just produced a "Multiple -pix_fmt options" warning).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The video VAE decode + MP4 encode tail can take 30-90s on top of the denoise loop, and the toast-style signal_progress() messages don't land in the server log. Add context.logger.info() at:
- VAE decode start: latent frame count -> pixel frame count + resolution
- MP4 encode start: frames, fps, duration, dimensions
- MP4 encode complete: encoded file size
- Video saved: final video_name
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After wan_l2v wrote a successful libx264 MP4 to disk, the invocation would hang in DiskVideoFileStorage.save() during the cv2.VideoCapture thumbnail-extraction step. cv2 wheels on this build can't reliably decode our libx264/yuv420p output (most often the wheel was compiled without an h264 decoder, but the failure mode is silent hang rather than a clear error). The net effect: the MP4 ends up in outputs/videos but the queue item never completes, so the frontend spinner spins forever and the gallery doesn't pick up the new entry.

Fix: rewrite extract_video_frame and probe_video to try imageio's FFMPEG plugin first (same backend that did the encoding — so reading our own output is guaranteed to work), with cv2 retained only as a fallback for uploaded videos in formats imageio can't decode.

Also add fine-grained log lines + exception guards inside DiskVideoFileStorage.save() so a future thumbnail failure can no longer hang the whole save — it now logs a warning and continues, leaving the video record in place even if the thumbnail step errored. With logging at each step (video written, thumbnail written, sidecar written) any future hang will be obvious from the last log line.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After wan_l2v wrote its MP4 successfully, the gallery and viewer were
never updated: the new video didn't appear and the viewer stayed stuck
on the previous "Saving video" progress spinner indefinitely.
Root cause: onInvocationComplete.tsx only inspected results for
isImageField / isImageFieldCollection. VideoField outputs were silently
dropped, so the polymorphic gallery list never invalidated and no
auto-switch happened. The viewer therefore kept rendering
CurrentImagePreview, whose ImageViewerContext-local $progressEvent /
$progressImage atoms intentionally aren't cleared on queue completion
when autoSwitch is on — they rely on the new image's DndImage onLoad
to clear them, which never fires for a video.
Fix: add isVideoField (mirrors isImageField against {video_name}) and
plumb video outputs through onInvocationComplete:
- getResultVideoDTOs pulls VideoDTOs via getVideoDTOSafe
- addVideosToGallery invalidates GalleryItemNameList / GalleryItemList
so the polymorphic gallery refetches and the new video shows up
- auto-switch dispatches the video name into selection (selection is a
polymorphic string[]; useGalleryItemDTO already discriminates by
filename extension)
The selection change swaps CurrentImagePreview for CurrentVideoPreview,
which unmounts the stale progress overlay along with it — so the stuck
spinner clears as a side-effect of the auto-switch.
Also drops the now-stale @knipignore on getVideoDTOSafe, which has a
real consumer now.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extracts a single frame from a VideoField input and saves it as a regular ImageDTO via context.images.save, so it appears in the gallery like any other generated image.

Primary use case is I2V "shot extension": take the last frame of a Wan-generated clip (default frame_index=-1) and feed it back as the reference image for the next clip, then stitch the MP4s to get videos longer than the model's single-shot frame budget at a given VRAM.

Negative frame_index is resolved against the actual decoded frame count via probe_video() rather than passed through to imageio — not all imageio plugins handle index=-1 uniformly, and being explicit lets us emit a precise out-of-range error.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
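A minimal sketch of the negative-index resolution described above; the function name and error wording are illustrative, not the node's actual code:

```python
def resolve_frame_index(frame_index: int, total_frames: int) -> int:
    """Resolve a possibly-negative frame index against the decoded frame count.

    Mirrors Python sequence semantics (-1 is the last frame) and raises a
    precise out-of-range error instead of letting a decoder plugin guess.
    """
    resolved = frame_index + total_frames if frame_index < 0 else frame_index
    if not 0 <= resolved < total_frames:
        raise ValueError(
            f"frame_index {frame_index} is out of range for a video with "
            f"{total_frames} frames (valid: {-total_frames}..{total_frames - 1})"
        )
    return resolved
```

With an 81-frame clip, `resolve_frame_index(-1, 81)` yields frame 80, the last frame used as the next clip's reference image.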
Joins two or more videos into a single MP4 with one of three transition modes between consecutive clips:
- cut: hard splice, no blending. Total length = sum of inputs.
- crossfade: linear A→B dissolve over transition_frames. Each boundary consumes N frames from both surrounding clips, shrinking total length by N per boundary.
- fade_through_black: A fades to black, then B fades in. Each boundary consumes N/2 from each side and emits N output frames — total length is preserved.

Implementation decodes via imageio's FFMPEG plugin (matching wan_l2v on the encode side) and runs the blends in numpy. All decoded frames are kept in memory at once; fine for the few-hundred-frame I2V chains that motivated this, would want streaming if anyone ever feeds in hour-long uploads.

Up-front validation enforces matching dimensions across inputs and checks that each clip has enough frames to spare from its head and tail for the requested transitions — saves a wasted decode pass when the transition window is too wide for one of the clips.

Pairs with 'Frame from Video' for I2V shot extension: generate N clips chained via last-frame-as-ref-image, then glue them with a crossfade.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
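The crossfade mode reduces to a per-frame linear mix in numpy. A sketch, assuming `[N, H, W, C]` uint8 frame stacks taken from the end of clip A and the start of clip B (not the node's actual code):

```python
import numpy as np


def crossfade(tail: np.ndarray, head: np.ndarray) -> np.ndarray:
    """Linear A->B dissolve over N frames.

    tail: last N frames of clip A; head: first N frames of clip B.
    Returns N blended frames, so each boundary shrinks total length by N.
    """
    n = tail.shape[0]
    # alpha ramps 0 -> 1 across the transition window, one weight per frame
    alpha = np.linspace(0.0, 1.0, n, dtype=np.float32)[:, None, None, None]
    blended = (1.0 - alpha) * tail.astype(np.float32) + alpha * head.astype(np.float32)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)
```

The first output frame equals the tail frame exactly (alpha=0) and the last equals the head frame (alpha=1), so the splice points are seamless by construction.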
The viewer used a chakra <Image src={thumbnail_url}> in the idle (not-
playing) state, so once a clip auto-selected after generation the
preview snapped from the full-resolution denoise progress image to the
small WebP gallery thumbnail upscaled to fit — visibly soft compared to
what the user was watching seconds earlier.
Switch to a single <video> element that spans both states:
- idle: muted, no controls, preload="metadata". With no `poster` attr
the browser decodes and shows the video's actual first frame at full
resolution (this is the documented HTMLVideoElement default).
- playing: same DOM node with controls+audio toggled on, kicked off via
ref.play(). No reload between states — the decoded buffer carries
over.
`key={videoName}` swaps the element cleanly when the user moves to a
different clip, dropping any in-progress playback state.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Computes Wan I2V-compatible (width, height) for a source W×H at a target short-side resolution (e.g. 720 for "720p"), snapping each output to a multiple of 16 (Wan's transformer patch_size × VAE 8x pixel-grid constraint enforced by wan_ref_image_encoder).

Replaces the 6-node math chain (Float Math × 4 + Float To Integer × 2) that was otherwise required to compute these dimensions from an arbitrary input image. Wire the Image Primitive's width/height outputs into this node, and feed its (width, height) outputs into both wan_ref_image_encoder and wan_denoise (they must match).

Three rounding modes:
- nearest (default): minimizes aspect-ratio drift
- floor: guaranteed not to exceed unsnapped target (safer for VRAM)
- ceiling: rounds up

Output schema reuses IdealSizeOutput so it slots into existing pipes that already consume Ideal Size — SD1.5, SDXL.

Includes regression tests covering the documented common-case table, all three rounding modes, postcondition invariants (multiple of 16, aspect ratio within 1.2%, never zero), and input validation.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
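The scale-then-snap logic can be sketched as follows (hypothetical helper; the real node reuses IdealSizeOutput and carries its own validation):

```python
import math


def wan_ideal_dimensions(
    width: int,
    height: int,
    target_short_side: int,
    mode: str = "nearest",
    multiple: int = 16,
) -> tuple[int, int]:
    """Scale (width, height) so the short side hits target_short_side, then
    snap both dims to a multiple of `multiple` (16 = 2x transformer patch
    times the VAE's 8x pixel grid)."""
    if width <= 0 or height <= 0:
        raise ValueError("source dimensions must be positive")
    scale = target_short_side / min(width, height)
    snap = {"nearest": round, "floor": math.floor, "ceiling": math.ceil}[mode]
    def _snap(v: float) -> int:
        # never emit zero, even for extreme aspect ratios
        return max(multiple, int(snap(v / multiple)) * multiple)
    return _snap(width * scale), _snap(height * scale)
```

For a 1920x1080 source at target 480, the long side scales to 853.3 and snaps to 848 in nearest mode (864 in ceiling mode); the resulting 848x480 is within ~0.6% of the source aspect ratio.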
…down Wan 2.2 was trained at 480p and 720p; a free integer encouraged users to pick noncanonical short sides that the model handles poorly. Replace the int field with a Literal dropdown of "480p" / "720p" / "1080p" (via ui_choice_labels) so the UI surfaces the canonical choices. 1080p is included with a label noting it's extrapolated from training (not a Wan native size) — useful for users with VRAM headroom but shouldn't be the default. Version bumped to 1.1.0 since the field schema changed (the node was only committed locally; no published workflow needs migrating). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two-sided fix to avoid VRAM allocator fragmentation that was causing the subsequent denoise-transformer partial load to OOM:
- Before vae.encode(): clears blocks left over from earlier nodes (the denoise expert swap especially leaves the cache fragmented).
- After the condition tensor is on CPU: returns the VAE encode's intermediates so the next partial_load_to_vram sees a real free contiguous range.

Mirrors the same pattern in wan_latents_to_image.py and wan_latents_to_video.py — those are the existing precedent. The cost is a handful of microseconds per encoder invocation and only the cache state is touched; model weights stay resident.

Observed symptom from a workflow review: at encoder=480x720 and a source image of 880x1184, the encoder ran fine but the I2V high-noise expert failed to partial-load with a cryptic CUDA OOM at _load_state_dict_with_fast_device_conversion. Pre-resizing the source to 80% incidentally cleared the allocator state and let the run succeed; this fix removes the incidental dependency on source size.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The video denoise node previously hard-errored on TI2V-5B with "not supported." Most of the surrounding machinery (variant-aware spatial scale, variant-aware scheduler, single-expert ExpertSwapper path) was already in place — the gate just needed lifting and the hard-coded A14B latent channel count needed to follow the variant.

Changes:
- Drop the upfront "TI2V-5B is not supported" raise.
- Use get_default_latent_channels(variant) so latents are 48-channel for TI2V-5B and 16-channel for the A14B family (matches the image denoise node's existing logic).
- For TI2V-5B with a Reference Image input, raise a sharper, accurate error that explains TI2V-5B's I2V uses diffusers' expand_timesteps path (first-frame-mask blend + per-position timestep gating) which this node does not implement yet — pointing the user at the working T2V path or the I2V-A14B model.
- Update the transformer field description to reflect what's now supported.

Image-to-video with TI2V-5B remains a follow-up; the conditioning math is genuinely different from A14B (no 36-channel concat) and warrants a separate code path rather than parameterising this one.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The single-file Wan VAE loader was always calling ``AutoencoderKLWan(z_dim=config.latent_channels)`` and relying on diffusers' constructor defaults for every other parameter — but those defaults match the Wan 2.1 / A14B VAE (base_dim=96, in/out=3, 8x spatial, no patchify). For TI2V-5B's Wan 2.2-VAE the architecture is materially different:
- base_dim=160, decoder_base_dim=256
- in_channels=12, out_channels=12 (3 RGB x 2x2 patch)
- patch_size=2
- scale_factor_spatial=16
- is_residual=True
- 48-vector latents_mean / latents_std (required for the model's encode/decode normalisation to produce non-garbage outputs)

Loading the TI2V-5B VAE state_dict into the default-constructed model failed with shape mismatches throughout the encoder + decoder, surfaced in wan_l2v as "Error(s) in loading state_dict for AutoencoderKLWan."

This commit routes z_dim=48 to a verbatim copy of the TI2V-5B VAE config (from vae/config.json in Wan-AI/Wan2.2-TI2V-5B-Diffusers); z_dim=16 keeps the previous A14B / Wan 2.1 default behaviour.

Verified end-to-end: both kwargs construct cleanly and produce the expected layer shapes (decoder.conv_out emits 12 channels for TI2V-5B, 3 channels for A14B).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
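The z_dim dispatch reduces to a config-table lookup. A sketch using the override values quoted above (the 48-vector latents_mean/latents_std are elided; the helper name is illustrative):

```python
# Override values for z_dim=48 as listed in the commit message (sourced from
# vae/config.json in Wan-AI/Wan2.2-TI2V-5B-Diffusers). The 48-entry
# latents_mean / latents_std vectors are omitted here for brevity.
TI2V_5B_VAE_OVERRIDES = {
    "base_dim": 160,
    "decoder_base_dim": 256,
    "in_channels": 12,
    "out_channels": 12,
    "patch_size": 2,
    "scale_factor_spatial": 16,
    "is_residual": True,
}


def wan_vae_kwargs(z_dim: int) -> dict:
    """Select constructor kwargs for AutoencoderKLWan by latent channel count."""
    if z_dim == 48:
        # TI2V-5B: Wan 2.2-VAE, materially different architecture
        return {"z_dim": 48, **TI2V_5B_VAE_OVERRIDES}
    # 16-channel latents: keep the Wan 2.1 / A14B constructor defaults
    return {"z_dim": z_dim}
```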
When the main model has no on-disk ``scheduler/`` directory (every standalone GGUF / single-file install), ``_build_scheduler`` previously fell back to ``FlowMatchEulerDiscreteScheduler()`` for every variant. That's correct for the A14B family but wrong for TI2V-5B, which ships ``UniPCMultistepScheduler`` with ``flow_shift=5.0`` + ``prediction_type="flow_prediction"`` + ``use_flow_sigmas=True``. The mismatch produces drifty samples on TI2V-5B.

Add a ``_default_scheduler_for_variant`` helper that reconstructs the right scheduler from the variant tag (values verbatim from each variant's ``scheduler/scheduler_config.json`` in the matching Wan-AI/Wan2.2-*-Diffusers repo). The on-disk-config-present path is unchanged — if the model ships a scheduler dir, that wins.

Full scheduler-selection UI is deferred to a future PR per discussion; this special-case keeps the standalone TI2V-5B path producing the right sampler without surfacing a new field.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
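The variant dispatch amounts to a small table. A sketch that returns (class name, kwargs) rather than constructing diffusers objects; the TI2V-5B values are those quoted above, and the variant tags are assumed to match the codebase's enum values:

```python
# Fallback table used only when the model ships no scheduler/ directory.
# TI2V-5B values from the variant's scheduler/scheduler_config.json as
# quoted in the commit message; A14B keeps the plain flow-match default.
DEFAULT_SCHEDULER_FOR_VARIANT = {
    "T2V_A14B": ("FlowMatchEulerDiscreteScheduler", {}),
    "I2V_A14B": ("FlowMatchEulerDiscreteScheduler", {}),
    "TI2V_5B": (
        "UniPCMultistepScheduler",
        {
            "flow_shift": 5.0,
            "prediction_type": "flow_prediction",
            "use_flow_sigmas": True,
        },
    ),
}


def default_scheduler_for_variant(variant: str) -> tuple[str, dict]:
    """Pick the default scheduler spec for a Wan variant tag."""
    try:
        return DEFAULT_SCHEDULER_FOR_VARIANT[variant]
    except KeyError:
        # Unknown variants keep the previous behaviour: plain flow-match Euler.
        return ("FlowMatchEulerDiscreteScheduler", {})
```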
TI2V-5B I2V uses a fundamentally different conditioning scheme from
A14B I2V. Implement diffusers' ``expand_timesteps`` path so the same
``Reference Image - Wan 2.2`` node and ``Denoise Video - Wan 2.2`` node
work for both variants, dispatched by VAE z_dim / transformer variant.
Encoder side (wan_ref_image_extension.py / wan_ref_image_encoder.py)
- Add ``encode_reference_image_to_ti2v_condition`` that VAE-encodes a
single image frame to ``[1, 48, 1, H/16, W/16]`` with the Wan2.2-VAE
normalisation, no mask channels.
- ``WanRefImageEncoderInvocation`` dispatches on ``vae.config.z_dim``:
z_dim=48 → TI2V-5B path, z_dim=16 → existing A14B path.
- Enforce ``multiple_of=32`` for width/height in the TI2V-5B case
(16x VAE * 2 transformer patch = pixel dims must divide by 32) with
a clear error message pointing at the constraint.
Denoise side (wan_video_denoise.py)
- Replace the "TI2V-5B I2V not supported" raise with a variant-aware
dispatch on ``ref_condition.shape`` and ``variant``.
- For TI2V-5B I2V build a ``first_frame_mask`` once (0 at frame 0, 1
elsewhere). At each step:
latent_model_input = (1 - mask) * condition + mask * latents
temp_ts = (mask[0,0,:,::2,::2] * t).flatten()
timestep = temp_ts.unsqueeze(0).expand(B, -1)
Per-token timesteps gate the model: frame 0 sees t=0 (locked to
condition), other frames see t (normal denoise).
- After the denoise loop, re-clamp frame 0 to the clean condition so
the locked first frame doesn't show scheduler drift in the final VAE
decode. Mirrors WanImageToVideoPipeline:813-814.
- Skip the encoder-num_frames-must-match check for TI2V-5B (its
condition is always single-frame regardless of output length).
Tests
- Three new tests on encode_reference_image_to_ti2v_condition covering
output shape at small and Wan-realistic dims plus the no-mask-channels
invariant. Full video-denoise integration tests would need a new
fixture stack (none exist for wan_video_denoise yet) — deferred.
A14B I2V is unchanged. TI2V-5B T2V (added in the previous commit) is
unchanged. Verified at the import + encoder-shape level; end-to-end
verification requires a TI2V-5B I2V workflow.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
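The per-step blend and per-token timestep construction above can be sketched in numpy as a stand-in for the torch code; shapes and the 2x2 patch stride follow the formulas in the message, and the function name is illustrative:

```python
import numpy as np


def ti2v_step_inputs(latents: np.ndarray, condition: np.ndarray, t: float):
    """One denoise step's model inputs for TI2V-5B I2V (numpy stand-in).

    latents: [B, C, T, H, W]; condition: [B, C, 1, H, W] (always single-frame).
    Frame 0 is pinned to the clean condition; per-token timesteps give frame 0
    t=0 (locked) and every other frame t (normal denoise).
    """
    b, _, nt, h, w = latents.shape
    mask = np.ones((b, 1, nt, h, w), dtype=latents.dtype)
    mask[:, :, 0] = 0.0  # first frame locked to the condition
    # (1 - mask) is zero everywhere except frame 0, so the single-frame
    # condition broadcasts cleanly across the temporal axis
    latent_model_input = (1.0 - mask) * condition + mask * latents
    # ::2 strides mirror the 2x2 spatial patching: one timestep per token
    temp_ts = (mask[0, 0, :, ::2, ::2] * t).reshape(-1)
    timestep = np.broadcast_to(temp_ts, (b, temp_ts.shape[0]))
    return latent_model_input, timestep
```

Frame 0's tokens all carry timestep 0, so the transformer treats the reference frame as already clean while denoising the rest of the clip normally.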
CurrentVideoPreview rendered only the <video> element, so when the last-selected gallery item was a video, a freshly-started render's denoise preview images had nowhere to display — the user saw the static first-frame still of the previously-loaded video until the new render's final video swapped in.

Mirror CurrentImagePreview's progress-overlay pattern: subscribe to $progressImage / $progressEvent, gate on selectShouldShowProgressInViewer, and render a ProgressImage stack on top of the video when a render is in progress. Hide the play-button overlay while progress is showing so it doesn't sit on top of the preview.

Reported by Lincoln during TI2V-5B testing: previews started working after restarting the server only because there was no video loaded at that point; once a video was selected, the previews silently dropped.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ruff check found one I001 (import order) in ``invokeai/backend/model_manager/load/model_loaders/vae.py`` and ruff format flagged five files. All cosmetic; no behaviour changes.
- vae.py: import reorder
- video_concat.py: minor reflow
- test_wan_ideal_dimensions.py / test_boards_multiuser.py / test_videos_multiuser.py: prettier-style wrapping

Verified: full ruff check + ruff format --check clean, 141 backend tests pass, and ``pnpm lint`` (knip + dpdm + eslint + prettier + tsc) all green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Comprehensive guide covering:
- The three Wan 2.2 variants (T2V-A14B, I2V-A14B, TI2V-5B), their conditioning differences, and the dual-expert MoE explanation
- Lightning LoRA distillation for 4-step A14B inference
- Starter bundles (Text-to-Video and Image-to-Video splits)
- Workflow setup for T2V and I2V with the constraint matrix:
  * frame count: (num_frames - 1) % 4 == 0
  * pixel dims: multiple of 16 for A14B, 32 for TI2V-5B
  * encoder + denoise must agree on width/height
- The chain-and-concat trick for making longer videos, with the bridge-frame degradation mitigations
- Troubleshooting: OOM, late-frame artifacts, dim mismatches, VAE load errors, scheduler issues, preview-not-appearing, MP4 glitches

Lands under Features → Video Generation (experimental). Astro auto-generates the sidebar from features/ so no nav config change needed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CurrentImageNode unconditionally called useImageDTO(lastSelectedItem) even when the selected gallery item was a video, firing GET /api/v1/images/i/<uuid>.mp4 on every video thumbnail click. The endpoint 404s and the backend logged "Image record not found" each time — benign but noisy.

Apply the same null-skip pattern useGalleryItemDTO uses: pass the name only when it's not a video, so RTK Query skips the request for video selections. Current Image is image-only by design, so videos rendering the empty fallback matches existing behaviour.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two viewer bugs after auto-switching to a freshly-rendered video:
- The denoise progress overlay never cleared. CurrentImagePreview clears the ImageViewerContext $progressImage/$progressEvent atoms via DndImage's onLoad callback; the video viewer had no analog, so the last progress still sat on top of the new video forever — clicking other video thumbnails did nothing visible, and only selecting an image (which fires onLoadImage via DndImage) cleared it.
- Even with the overlay gone, the <video> element rendered its black background instead of the first frame. preload="metadata" loads dimensions/duration but doesn't guarantee a decoded first frame on all browsers; an explicit seek is needed to force a paint.

Wire onLoadedMetadata to (1) call onLoadImage() — mirroring DndImage's onLoad — and (2) nudge currentTime to 0.0001 so the decoder paints the first frame without measurably advancing playback.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Companion to a3bdc33 (CurrentImageNode). GlobalImageHotkeys is a mounted-everywhere singleton that wires recall hotkeys (seed, prompts, remix, etc.) to whatever item is currently selected. It was passing the raw selection name through to useImageDTO unconditionally, so every video thumbnail click fired GET /api/v1/images/i/<uuid>.mp4 → 404 and the "Image record not found" log line.

Gate on isVideoName(), mirroring the polymorphic null-skip pattern in useGalleryItemDTO. Recall hotkeys don't apply to videos anyway, so this just suppresses the noise.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The dual-expert swapper releases the active expert via its context manager exit, but PyTorch's caching allocator retains the freed blocks as reserved-not-yet-claimable space until empty_cache runs. The next partial_load_to_vram for the incoming expert then sees a fragmented free pool and offloads layers it could otherwise have kept on device.

Users running A14B observed the low-noise expert ending up far more CPU-resident than the high-noise one on otherwise identical settings — that was the leftover reservation from the high-noise expert masking real free VRAM.

Call TorchDevice.empty_cache() between the release and the next load. Same pattern as the VAE-encode fix earlier in this branch. Regression test in test_wan_expert_swapper.py mocks empty_cache and asserts it fires on every actual swap but not on a same-label re-get.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Dropping a video thumbnail onto a board in the boards list was a no-op (the dnd target only accepted image sources). Extend addImageToBoardDndTarget and removeImageFromBoardDndTarget to also accept SingleVideoDndSourceData and dispatch the corresponding video mutations.

Permission UX mirrors the image path:
- Same canMoveFromSourceBoard gate (owner / public source board)
- Same "do nothing if dropping on the current board" early-out

Backend enforcement on /api/v1/videos/board already mirrors the image endpoints — _assert_board_write_access on the destination plus _assert_video_direct_owner on the video. The frontend gate intentionally mirrors only the source-board part of that, leaving the direct-owner check to surface as a 403 on attempt (same compromise as images, where the client doesn't have per-item owner info to gate cleanly).

Multi-video drag is not supported yet (the gallery only registers a single-video draggable per item, no multi-select bundle), so this only wires the SingleVideoDndSourceData path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous empty_cache() fix (53b2f4d) was insufficient. unlock() only decrements the cache record's lock counter — the weights stay on GPU until the cache's automatic offload decides to free them on the next lock(). That heuristic uses ``torch.cuda.memory_allocated() - working_mem`` to estimate free space, which under-frees when the previous denoise step's workspace activations are still allocated alongside the just-unlocked expert.

The user-visible symptom was a log line like

    Loaded model '...:transformer' onto cuda device in 0.37s. Total model size: 9203.13MB, VRAM: 2381.18MB (25.9%)

for the incoming low-noise expert, while the high-noise expert continued to hold ~9 GB of VRAM.

The swapper now stashes the LoadedModel info handle and, on each swap, explicitly invokes ``cached_model.full_unload_from_vram()`` on the outgoing expert before locking the incoming one. This sidesteps the heuristic and guarantees the previous expert's weights leave GPU before partial_load_to_vram measures available room.

The access path ``info._cache_record.cached_model`` reaches into a private attribute — there is no public LoadedModel API for "unload from VRAM but keep in RAM" today, and a broader backend refactor felt out of scope. The call is wrapped in getattr/try-except and pinned by a regression test so a future refactor breaks the test, not the swap.

Tests:
- Updated existing dual-expert lifecycle test to expect the new full-unload step in the swap log sequence.
- New test_outgoing_expert_force_unloaded_from_vram covers the per-swap behavior (outgoing only, no initial unload).
- New test_force_unload_failure_does_not_break_swap pins the defensive fallback so swap reliability survives a future LoadedModel refactor.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a server-backed canvas project (.invk) lifecycle parallel to images and videos: upload, list, get, replace, delete, star/unstar, and board add/remove. Projects appear in the polymorphic gallery stream with a preview thumbnail, support drag-and-drop onto boards, share the bulk delete flow with images/videos, and integrate with the viewer toolbar.

Server-side persistence is wired in alongside the existing local-file download path — the save dialog now toggles between "Save to Server" (with board picker and an "Update existing" mode that replaces the ZIP in place, keeping the UUID and board assignment) and "Download as File". Loading a server project from a stale ZIP automatically re-uploads missing images and rewrites the stored ZIP so subsequent loads don't keep duplicating the embedded bytes. DTO URLs carry an `updated_at` cache-buster so browsers refetch the thumbnail after in-place updates.

Backend: migration 33 (`canvas_projects`, `board_canvas_projects`), parallel record/file/service/board-record stack, `canvas_projects` and `board_canvas_projects` routers, polymorphic gallery extended with `CANVAS_PROJECT` kind.

Frontend: RTK endpoints, manifest v2 (with width/height/imageCount/hasPreview), preview rendering via the canvas compositor, gallery item + badge + viewer preview + load button, context menu (load/download/delete), DnD source, mixed-type shift-click selection across kinds.
Summary
Adds a server-backed canvas project (`.invk`) lifecycle parallel to images and videos: upload, list, get, in-place update, delete, star/unstar, and board add/remove. Canvas projects now appear in the polymorphic gallery stream with a preview thumbnail, support drag-and-drop onto boards, share the bulk-delete flow with images/videos, and integrate with the image viewer toolbar.

Save flow — the save dialog toggles between Save to Server (with a board picker and, when a project is currently loaded, an Update Existing mode that replaces the ZIP in place, keeping the UUID, board assignment, starred state and ownership) and Download as File (the original local-download path is preserved).

Load flow — loading a server project from a stale ZIP (i.e. some embedded `image_names` no longer exist on the server because the records were cleaned up) automatically re-uploads the missing images and rewrites the stored ZIP with the new names, so subsequent loads don't keep duplicating the same embedded bytes. The viewer dispatches on `kind === 'canvas_project'` to render the preview WebP and surfaces a Load Canvas Project toolbar button. On successful load the UI switches to the Canvas tab automatically.

Cache-busting — DTO `project_url` and `thumbnail_url` carry an `?v={updated_at}` query so browsers refetch after an in-place update.

Backend
- `canvas_projects` + `board_canvas_projects` tables with FK cascades from `boards`, mirroring the image / video sides
- Parallel `canvas_project_records/`, `canvas_project_files/`, `canvas_projects/`, `board_canvas_project_records/` service stacks
- `names_default` gains `create_canvas_project_name()` (bare UUID), `urls_default` gains `get_canvas_project_url(name, thumbnail)`
- Routes: `POST /api/v1/canvas_projects/upload`, `GET /api/v1/canvas_projects/`, `GET /i/{name}`, `GET /i/{name}/full`, `GET /i/{name}/thumbnail`, `PATCH /i/{name}` (rename/star), `PUT /i/{name}/file` (in-place ZIP replace), `DELETE /i/{name}`, `POST /delete`, `POST /star`, `POST /unstar`, plus `POST`/`DELETE /api/v1/board_canvas_projects/`
- `GalleryItemKind` gains `CANVAS_PROJECT`; `gallery_default` UNION-ALLs the three resource halves (with category filter skipping projects unless `GENERAL` is included)

Frontend
- Manifest v2 with `width`/`height`/`imageCount`/`hasPreview`; v1 still loadable via discriminated union
- `services/api/endpoints/canvasProjects.ts` (list, get, upload, replace-file, update, delete, star, board add/remove) plus tag types (`CanvasProject`, `CanvasProjectList`, `BoardCanvasProjectsTotal`)
- Preview rendered via the canvas compositor to `preview.webp` and uploaded as the server-side thumbnail
- `CurrentCanvasProjectPreview` + `LoadCanvasProjectButton` in the toolbar
- `singleCanvasProjectDndSource` + `addImageToBoardDndTarget` / `removeImageFromBoardDndTarget` handlers extended for projects
- `deleteImageModal/state.ts` now splits mixed selections by name kind so projects/videos go straight to their delete endpoints (no image-usage modal) and only image names hit the existing confirmation flow
- `$currentCanvasProjectName` tracks the loaded project; the Save dialog uses it to surface an Update Existing radio with thumbnail + name + board · dimensions preview so the user can verify which project they're replacing

Related Issues / Discussions
Builds on #8917 (local `.invk` save/load) and depends on #9163 (video stack + polymorphic gallery) — the canvas project pattern follows the video stack 1:1.

QA Instructions
Save & gallery roundtrip
`curl http://localhost:9090/api/v1/canvas_projects/` shows the record; the `.invk` lives under `outputs/canvas_projects/{subfolder}/`

Load + auto-canvas-tab + auto-resave
In-place update
`updated_at` advances

Context menu

Download (`<name>.invk` locally) / Delete (native confirm, removes record + file + thumbnail) / Load Project (same as the toolbar button)

Bulk delete (mixed)
Drag & drop to board
(`board_id` in DTO updates)

Shift-click across kinds
Backwards compatibility
A `.invk` file (v1 manifest, no `width`/`height`/`imageCount`/`hasPreview`) still works — missing fields are reconstructed from `canvas_state.json`

Merge Plan
This PR triggers DB migration 33, so reviewers should run against a test root. The canvas-project pattern is layered on top of #9163 (video stack); merge order should be #9163 → this PR.
Checklist
What's New copy (if doing a release after this PR)