fix: move HuggingFace cache to user home, add gitignore#54

Open
vlordier wants to merge 1 commit into Liquid4All:main from vlordier:fix/huggingface-cache-location

Conversation

@vlordier vlordier commented Mar 6, 2026

Summary

  • Add _models/.cache/ to .gitignore to prevent accidental cache commits
  • Add warning in config about HF_HOME/HF_HUB_CACHE env vars
  • Remove _models/.cache/ directory from repo

This prevents large HuggingFace model caches from being stored in the project directory (fixes a 77 GB cache issue).
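The env-var part of this fix can be sketched as a short shell snippet (a minimal illustration, not the repo's actual script; the paths follow the HuggingFace libraries' conventional user-home defaults):

```shell
# Redirect HuggingFace caches to the user home instead of the project tree.
# ~/.cache/huggingface is the conventional default cache location.
export HF_HOME="${HF_HOME:-$HOME/.cache/huggingface}"
export HF_HUB_CACHE="${HF_HUB_CACHE:-$HF_HOME/hub}"
```

With these exported before any download script runs, model files land under the user cache rather than in _models/.cache/ inside the repo.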

Copilot AI review requested due to automatic review settings March 6, 2026 17:45

Copilot AI left a comment

Pull request overview

This PR aims to prevent large HuggingFace cache directories from being committed into the examples/localcowork project by ignoring _models/.cache/ and documenting correct HuggingFace cache env var usage.

Changes:

  • Add _models/.cache/ to examples/localcowork/.gitignore.
  • Update _models/config.yaml with guidance about HF_HOME / HF_HUB_CACHE.
  • Update parts of _models/config.yaml model registry content (model naming/paths and an LM Studio entry).

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

File descriptions:

  • examples/localcowork/_models/config.yaml: Adds HF cache warning and modifies model config entries (including LFM2 naming/path and new LM Studio entry).
  • examples/localcowork/.gitignore: Ignores _models/.cache/ to prevent accidental cache commits.
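The ignore rule itself is a one-line entry; a sketch of how it might appear in examples/localcowork/.gitignore:

```gitignore
# HuggingFace model cache: never commit (can grow to tens of GB)
_models/.cache/
```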

Comment on lines 113 to +116
  lfm2-24b-a2b:
-   display_name: "LFM2-24B-A2B-Preview"
+   display_name: "LFM2-24B-A2B"
    runtime: llama_cpp
-   model_path: "${LOCALCOWORK_MODELS_DIR:-~/Projects/_models}/LFM2-24B-A2B-Preview-Q4_K_M.gguf"
+   model_path: "${LOCALCOWORK_MODELS_DIR:-~/Projects/_models}/LFM2-24B-A2B-Q4_K_M.gguf"

Copilot AI Mar 6, 2026


Within the same model entry, the display_name/model_path were updated to drop -Preview, but other tooling in the repo still expects LFM2-24B-A2B-Preview-Q4_K_M.gguf (scripts + docs). If this filename change is intentional, the surrounding ecosystem needs to be updated together; otherwise, users will end up with a downloaded file that doesn’t match what the config suggests.

Comment on lines +246 to +253
# LM Studio headless server — any model loaded in LM Studio
# Run `lms server start` or enable "Run LLM server on login" in app settings.
# Default port is 1234. Uses OpenAI-compatible API.
# Note: The model_name here is informational - update to match your loaded model.
lmstudio-default:
display_name: "LM Studio (Default)"
runtime: lmstudio
model_name: "liquid/lfm2-24b-a2b" # Replace with your loaded model ID
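Once `lms server start` is running, the entry above can be exercised against the OpenAI-compatible endpoint. A hedged sketch (port 1234 is LM Studio's default per the comment above; the model ID is illustrative and must match whatever model is actually loaded):

```shell
# Build the request body, then POST it to the local LM Studio server.
payload='{"model": "liquid/lfm2-24b-a2b", "messages": [{"role": "user", "content": "Say hello"}]}'
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload"
```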

Copilot AI Mar 6, 2026


The new lmstudio-default model and runtime: lmstudio addition is a functional/config expansion that isn’t mentioned in the PR description (which focuses on HuggingFace cache + gitignore). If this is intended, please call it out in the PR summary; otherwise consider moving it to a separate PR to keep scope tight and reduce review/rollback risk.

Comment thread on examples/localcowork/_models/config.yaml (Outdated)
Comment on lines +18 to 21
active_model: lfm2-24b-a2b # Sparse MoE: 24B total, 2.3B active, 64 experts top-4 — 80% tool accuracy

# Default model directory for non-Ollama model files (GGUF, MLX, etc.)
models_dir: "${LOCALCOWORK_MODELS_DIR:-~/Projects/_models}"
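The `${LOCALCOWORK_MODELS_DIR:-...}` syntax in the snippet above follows shell parameter-expansion semantics (presumably resolved by the app's config loader; shown here in plain shell for illustration, with $HOME standing in for the `~` in the config):

```shell
# With the variable unset, the fallback after ":-" is used.
unset LOCALCOWORK_MODELS_DIR
echo "${LOCALCOWORK_MODELS_DIR:-$HOME/Projects/_models}"

# With the variable exported, its value takes precedence over the fallback.
export LOCALCOWORK_MODELS_DIR=/tmp/models
echo "${LOCALCOWORK_MODELS_DIR:-$HOME/Projects/_models}"
```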

Copilot AI Mar 6, 2026


This PR introduces a large number of whitespace-only edits (e.g., changing inline-comment spacing) throughout the config. This makes the diff noisier and increases the chance of merge conflicts without affecting behavior. Consider reverting purely-formatting changes and keeping this PR focused on the HuggingFace cache/gitignore fix.

Comment on lines 108 to 112
# LFM2-24B-A2B — Liquid AI's MoE hybrid model (private preview)
# Architecture: 24B total, 2.3B active per token, 64 experts top-4, 40 layers (1:3 attn:conv ratio)
- # Download GGUF from: https://huggingface.co/LiquidAI/LFM2-24B-A2B-Preview (gated — request access)
+ # Download GGUF from: https://huggingface.co/LiquidAI/LFM2-24B-A2B (gated — request access)
# Benchmark plan: docs/model-analysis/lfm2-24b-a2b-benchmark.md
# Run: llama-server --model <path> --port 8080 --ctx-size 32768 --n-gpu-layers 99 --flash-attn

Copilot AI Mar 6, 2026


The LFM2-24B-A2B comments and download URL were changed from the ...-Preview repo to LiquidAI/LFM2-24B-A2B, but the rest of this repo still references the ...-Preview location/filename (e.g., scripts/start-model.sh, scripts/setup-dev.sh, and README.md). As-is, contributors following the scripts/docs will likely download a different filename/repo than this config implies. Please either update the related scripts/docs in the same PR, or keep the config pointing at the ...-Preview repo to stay consistent.

@vlordier vlordier force-pushed the fix/huggingface-cache-location branch 2 times, most recently from 1b4c55e to c787d50 on March 6, 2026 18:27
- Add _models/.cache/ to .gitignore to prevent accidental cache commits
- Set HF_HOME and HF_HUB_CACHE in scripts to ensure cache goes to ~/.cache
- Add warning in config about using local_dir outside project repo
- Add note in .env.example about storing models outside project

This prevents large HuggingFace model caches from being stored
in the project directory (fixes 77GB cache issue).
@vlordier vlordier force-pushed the fix/huggingface-cache-location branch from c787d50 to a22f2af on March 6, 2026 18:36

vlordier commented Mar 6, 2026

@copilot open a new pull request to apply changes based on the comments in this thread

vlordier added a commit to vlordier/cookbook that referenced this pull request Mar 6, 2026
Address Copilot AI review comments from PR Liquid4All#54:
- Add _models/.cache/ to .gitignore to prevent accidental cache commits
- Add comment about HuggingFace cache location (HF_HOME, HF_HUB_CACHE, XDG_CACHE_HOME)