Merged

42 commits
eb0e10a
Update workflow templates to v0.7.62 (#11467)
comfyui-wiki Dec 22, 2025
33aa808
Make denoised output on custom sampler nodes work with nested tensors…
comfyanonymous Dec 22, 2025
f4f44bb
api-nodes: use new custom endpoint for Nano Banana (#11311)
bigcat88 Dec 23, 2025
22ff1bb
chore: update workflow templates to v0.7.63 (#11482)
comfyui-wiki Dec 24, 2025
e4c61d7
ComfyUI v0.6.0
comfyanonymous Dec 24, 2025
650e716
Bump comfyui-frontend-package to 1.35.9 (#11470)
comfy-pr-bot Dec 24, 2025
4f067b0
chore: update workflow templates to v0.7.64 (#11496)
comfyui-wiki Dec 24, 2025
532e285
Add a ManualSigmas node. (#11499)
comfyanonymous Dec 25, 2025
d9a76cf
Specify in readme that we only support pytorch 2.4 and up. (#11512)
comfyanonymous Dec 26, 2025
16fb684
bump comfyui_manager version to the 4.0.4 (#11521)
ltdrdata Dec 26, 2025
1e4e342
Fix noise with ancestral samplers when inferencing on cpu. (#11528)
comfyanonymous Dec 27, 2025
865568b
feat(api-nodes): add Kling Motion Control node (#11493)
bigcat88 Dec 27, 2025
eff4ea0
[V3] converted nodes_images.py to V3 schema (#11206)
bigcat88 Dec 27, 2025
0d2e4bd
fix(api-nodes-gemini): always force enhance_prompt to be True (#11503)
bigcat88 Dec 27, 2025
36deef2
chore(api-nodes): switch to credits instead of $ (#11489)
bigcat88 Dec 27, 2025
2943093
Enable async offload by default for AMD. (#11534)
comfyanonymous Dec 27, 2025
8fd0717
Comment out unused norm_final in lumina/z image model. (#11545)
comfyanonymous Dec 29, 2025
9ca7e14
mm: discard async errors from pinning failures (#10738)
rattus128 Dec 29, 2025
0e6221c
Add some warnings for pin and unpin errors. (#11561)
comfyanonymous Dec 29, 2025
d7111e4
ResizeByLongerSide: support video (#11555)
tavihalperin Dec 30, 2025
25a1bfa
chore(api-nodes-bytedance): mark "seededit" as deprecated, adjust dis…
bigcat88 Dec 30, 2025
178bdc5
Add handling for vace_context in context windows (#11386)
drozbay Dec 30, 2025
f59f71c
ComfyUI version v0.7.0
comfyanonymous Dec 31, 2025
0357ed7
Add support for sage attention 3 in comfyui, enable via new cli arg (…
mengqin Dec 31, 2025
0be8a76
V3 Improvements + DynamicCombo + Autogrow exposed in public API (#11345)
Kosinkadink Dec 31, 2025
6ca3d5c
fix(api-nodes-vidu): preserve percent-encoding for signed URLs (#11564)
bigcat88 Dec 31, 2025
236b9e2
chore: update workflow templates to v0.7.65 (#11579)
comfyui-wiki Dec 31, 2025
d622a61
Refactor: move clip_preprocess to comfy.clip_model (#11586)
comfyanonymous Dec 31, 2025
1bdc9a9
Remove duplicate import of model_management (#11587)
comfyanonymous Jan 1, 2026
65cfcf5
New Year ruff cleanup. (#11595)
comfyanonymous Jan 2, 2026
9e5f677
Ignore all frames except the first one for MPO format. (#11569)
bigcat88 Jan 2, 2026
303b173
Give Mahiro CFG a more appropriate display name (#11580)
throttlekitty Jan 2, 2026
f2fda02
Tripo3D: pass face_limit parameter only when it differs from default …
bigcat88 Jan 2, 2026
9a552df
Remove leftover scaled_fp8 key. (#11603)
comfyanonymous Jan 3, 2026
53e762a
Print memory summary on OOM to help with debugging. (#11613)
comfyanonymous Jan 4, 2026
acbf08c
feat(api-nodes): add support for 720p resolution for Kling Omni nodes…
bigcat88 Jan 4, 2026
38d0493
Fix case where upscale model wouldn't be moved to cpu. (#11633)
comfyanonymous Jan 5, 2026
f2b0023
Support the LTXV 2 model. (#11632)
comfyanonymous Jan 5, 2026
d1b9822
Add LTXAVTextEncoderLoader node. (#11634)
comfyanonymous Jan 5, 2026
d157c32
Refactor module_size function. (#11637)
comfyanonymous Jan 5, 2026
4f3f9e7
Fix name. (#11638)
comfyanonymous Jan 5, 2026
51f88fa
Merge branch 'stable_master' into offload_merge
strint Jan 6, 2026
2 changes: 2 additions & 0 deletions README.md
@@ -212,6 +212,8 @@ Python 3.14 works but you may encounter issues with the torch compile node. The
 
 Python 3.13 is very well supported. If you have trouble with some custom node dependencies on 3.13 you can try 3.12
 
+torch 2.4 and above is supported but some features might only work on newer versions. We generally recommend using the latest major version of pytorch unless it is less than 2 weeks old.
+
 ### Instructions:
 
 Git clone this repo.
4 changes: 2 additions & 2 deletions app/model_manager.py
@@ -44,7 +44,7 @@ async def get_model_folders(request):
         @routes.get("/experiment/models/{folder}")
         async def get_all_models(request):
             folder = request.match_info.get("folder", None)
-            if not folder in folder_paths.folder_names_and_paths:
+            if folder not in folder_paths.folder_names_and_paths:
                 return web.Response(status=404)
             files = self.get_model_file_list(folder)
             return web.json_response(files)
@@ -55,7 +55,7 @@ async def get_model_preview(request):
             path_index = int(request.match_info.get("path_index", None))
             filename = request.match_info.get("filename", None)
 
-            if not folder_name in folder_paths.folder_names_and_paths:
+            if folder_name not in folder_paths.folder_names_and_paths:
                 return web.Response(status=404)
 
             folders = folder_paths.folder_names_and_paths[folder_name]
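The two hunks above are pure style fixes: `x not in d` and `not x in d` compile to the same membership test, but the former is the spelling PEP 8 recommends. A minimal standalone illustration (the dictionary here is a stand-in, not the real `folder_paths` data):

```python
# `not x in d` and `x not in d` are equivalent membership tests;
# the latter reads as a single operator and is the idiomatic form.
folder_paths = {"checkpoints": ["/models/checkpoints"]}

old_style = not "loras" in folder_paths   # works, but reads awkwardly
new_style = "loras" not in folder_paths   # same result, clearer intent

print(old_style, new_style)  # True True
```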
19 changes: 19 additions & 0 deletions comfy/clip_model.py
@@ -2,6 +2,25 @@
 from comfy.ldm.modules.attention import optimized_attention_for_device
 import comfy.ops
 
+def clip_preprocess(image, size=224, mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711], crop=True):
+    image = image[:, :, :, :3] if image.shape[3] > 3 else image
+    mean = torch.tensor(mean, device=image.device, dtype=image.dtype)
+    std = torch.tensor(std, device=image.device, dtype=image.dtype)
+    image = image.movedim(-1, 1)
+    if not (image.shape[2] == size and image.shape[3] == size):
+        if crop:
+            scale = (size / min(image.shape[2], image.shape[3]))
+            scale_size = (round(scale * image.shape[2]), round(scale * image.shape[3]))
+        else:
+            scale_size = (size, size)
+
+        image = torch.nn.functional.interpolate(image, size=scale_size, mode="bicubic", antialias=True)
+        h = (image.shape[2] - size)//2
+        w = (image.shape[3] - size)//2
+        image = image[:,:,h:h+size,w:w+size]
+    image = torch.clip((255. * image), 0, 255).round() / 255.0
+    return (image - mean.view([3,1,1])) / std.view([3,1,1])
+
 class CLIPAttention(torch.nn.Module):
     def __init__(self, embed_dim, heads, dtype, device, operations):
         super().__init__()
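When `crop=True`, the function above scales the image so its shorter side equals `size`, then center-crops the longer side. A hypothetical torch-free helper (`crop_geometry` is not part of ComfyUI) that mirrors only that shape arithmetic:

```python
# Stand-in for the shape math in clip_preprocess: scale so the shorter
# side equals `size`, then compute the centered crop offsets.
def crop_geometry(h, w, size=224):
    scale = size / min(h, w)
    scale_h, scale_w = round(scale * h), round(scale * w)
    off_h = (scale_h - size) // 2
    off_w = (scale_w - size) // 2
    return (scale_h, scale_w), (off_h, off_w)

# A 480x640 frame is resized to 224x299, then 37 columns are trimmed
# from each side to reach the square 224x224 CLIP input.
print(crop_geometry(480, 640))  # ((224, 299), (0, 37))
```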
22 changes: 2 additions & 20 deletions comfy/clip_vision.py
@@ -1,6 +1,5 @@
 from .utils import load_torch_file, transformers_convert, state_dict_prefix_replace
 import os
-import torch
 import json
 import logging
 
@@ -17,24 +16,7 @@ def __getitem__(self, key):
     def __setitem__(self, key, item):
         setattr(self, key, item)
 
-def clip_preprocess(image, size=224, mean=[0.48145466, 0.4578275, 0.40821073], std=[0.26862954, 0.26130258, 0.27577711], crop=True):
-    image = image[:, :, :, :3] if image.shape[3] > 3 else image
-    mean = torch.tensor(mean, device=image.device, dtype=image.dtype)
-    std = torch.tensor(std, device=image.device, dtype=image.dtype)
-    image = image.movedim(-1, 1)
-    if not (image.shape[2] == size and image.shape[3] == size):
-        if crop:
-            scale = (size / min(image.shape[2], image.shape[3]))
-            scale_size = (round(scale * image.shape[2]), round(scale * image.shape[3]))
-        else:
-            scale_size = (size, size)
-
-        image = torch.nn.functional.interpolate(image, size=scale_size, mode="bicubic", antialias=True)
-        h = (image.shape[2] - size)//2
-        w = (image.shape[3] - size)//2
-        image = image[:,:,h:h+size,w:w+size]
-    image = torch.clip((255. * image), 0, 255).round() / 255.0
-    return (image - mean.view([3,1,1])) / std.view([3,1,1])
+clip_preprocess = comfy.clip_model.clip_preprocess # Prevent some stuff from breaking, TODO: remove eventually
 
 IMAGE_ENCODERS = {
     "clip_vision_model": comfy.clip_model.CLIPVisionModelProjection,
@@ -73,7 +55,7 @@ def get_sd(self):
 
     def encode_image(self, image, crop=True):
         comfy.model_management.load_model_gpu(self.patcher)
-        pixel_values = clip_preprocess(image.to(self.load_device), size=self.image_size, mean=self.image_mean, std=self.image_std, crop=crop).float()
+        pixel_values = comfy.clip_model.clip_preprocess(image.to(self.load_device), size=self.image_size, mean=self.image_mean, std=self.image_std, crop=crop).float()
         out = self.model(pixel_values=pixel_values, intermediate_output='all' if self.return_all_hidden_states else -2)
 
         outputs = Output()
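The `clip_preprocess = comfy.clip_model.clip_preprocess` line keeps the old import path alive after the refactor: downstream code importing the function from `clip_vision` still resolves to the relocated implementation. A generic sketch of the pattern (the module objects and function body here are stand-ins, not the real comfy modules):

```python
import types

# The function lives in its new module; the old module re-exports it
# under the old name, so legacy call sites keep working untouched.
new_mod = types.ModuleType("clip_model")

def clip_preprocess(image):
    # Stand-in body for the relocated function.
    return ("processed", image)

new_mod.clip_preprocess = clip_preprocess

old_mod = types.ModuleType("clip_vision")
old_mod.clip_preprocess = new_mod.clip_preprocess  # alias, not a copy

print(old_mod.clip_preprocess("img"))  # ('processed', 'img')
```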
6 changes: 6 additions & 0 deletions comfy/context_windows.py
@@ -188,6 +188,12 @@ def get_resized_cond(self, cond_in: list[dict], x_in: torch.Tensor, window: Inde
                 audio_cond = cond_value.cond
                 if audio_cond.ndim > 1 and audio_cond.size(1) == x_in.size(self.dim):
                     new_cond_item[cond_key] = cond_value._copy_with(window.get_tensor(audio_cond, device, dim=1))
+                # Handle vace_context (temporal dim is 3)
+                elif cond_key == "vace_context" and hasattr(cond_value, "cond") and isinstance(cond_value.cond, torch.Tensor):
+                    vace_cond = cond_value.cond
+                    if vace_cond.ndim >= 4 and vace_cond.size(3) == x_in.size(self.dim):
+                        sliced_vace = window.get_tensor(vace_cond, device, dim=3, retain_index_list=self.cond_retain_index_list)
+                        new_cond_item[cond_key] = cond_value._copy_with(sliced_vace)
                 # if has cond that is a Tensor, check if needs to be subset
                 elif hasattr(cond_value, "cond") and isinstance(cond_value.cond, torch.Tensor):
                     if (self.dim < cond_value.cond.ndim and cond_value.cond.size(self.dim) == x_in.size(self.dim)) or \
3 changes: 2 additions & 1 deletion comfy/hooks.py
@@ -527,7 +527,8 @@ def prepare_current_keyframe(self, curr_t: float, transformer_options: dict[str,
                 if self._current_keyframe.get_effective_guarantee_steps(max_sigma) > 0:
                     break
             # if eval_c is outside the percent range, stop looking further
-            else: break
+            else:
+                break
         # update steps current context is used
         self._current_used_steps += 1
         # update current timestep this was performed on
3 changes: 3 additions & 0 deletions comfy/k_diffusion/sampling.py
@@ -74,6 +74,9 @@ def get_ancestral_step(sigma_from, sigma_to, eta=1.):
 
 def default_noise_sampler(x, seed=None):
     if seed is not None:
+        if x.device == torch.device("cpu"):
+            seed += 1
+
         generator = torch.Generator(device=x.device)
         generator.manual_seed(seed)
     else:
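One plausible reading of this fix (per the commit title, it addresses ancestral-sampler noise on CPU): if the sampler reseeds a CPU generator with the same seed that already produced the initial latent noise, both streams emit identical values; bumping the seed decorrelates them. A stand-in sketch using Python's `random` module rather than torch:

```python
import random

# Two generators seeded identically produce the same stream; offsetting
# one seed by 1, as the diff above does on CPU, decorrelates them.
initial_noise = random.Random(42).random()
ancestral_same_seed = random.Random(42).random()
ancestral_offset_seed = random.Random(42 + 1).random()

print(initial_noise == ancestral_same_seed)    # True: duplicated noise
print(initial_noise == ancestral_offset_seed)  # False after the offset
```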
3 changes: 3 additions & 0 deletions comfy/latent_formats.py
@@ -407,6 +407,9 @@ def __init__(self):
 
         self.latent_rgb_factors_bias = [-0.0571, -0.1657, -0.2512]
 
+class LTXAV(LTXV):
+    pass
+
 class HunyuanVideo(LatentFormat):
     latent_channels = 16
     latent_dimensions = 3
2 changes: 1 addition & 1 deletion comfy/ldm/chroma_radiance/model.py
@@ -270,7 +270,7 @@ def radiance_get_override_params(self, overrides: dict) -> ChromaRadianceParams:
         bad_keys = tuple(
             k
             for k, v in overrides.items()
-            if type(v) != type(getattr(params, k)) and (v is not None or k not in nullable_keys)
+            if not isinstance(v, type(getattr(params, k))) and (v is not None or k not in nullable_keys)
         )
         if bad_keys:
             e = f"Invalid value(s) in transformer_options chroma_radiance_options: {', '.join(bad_keys)}"
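The behavioral difference in this hunk: `type(v) != type(x)` rejects subclasses, while `isinstance(v, type(x))` accepts them, so overrides whose value is a subclass of the default's type are no longer flagged as invalid. A generic illustration with made-up class names:

```python
# `type(...) !=` is an exact-type check; `isinstance` honors inheritance.
class Base:
    pass

class Derived(Base):
    pass

default_value = Base()
override = Derived()

strict = type(override) != type(default_value)           # True: flagged as bad
lenient = not isinstance(override, type(default_value))  # False: accepted

print(strict, lenient)  # True False
```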
3 changes: 2 additions & 1 deletion comfy/ldm/hunyuan_video/upsampler.py
@@ -3,7 +3,8 @@
 import torch.nn.functional as F
 from comfy.ldm.modules.diffusionmodules.model import ResnetBlock, VideoConv3d
 from comfy.ldm.hunyuan_video.vae_refiner import RMS_norm
-import model_management, model_patcher
+import model_management
+import model_patcher
 
 class SRResidualCausalBlock3D(nn.Module):
     def __init__(self, channels: int):