
feat(localcowork): Ubuntu support with ROCm GPU auto-detection #76

Open

ThomasGmeinder wants to merge 3 commits into Liquid4All:main from ThomasGmeinder:localcowork_ubuntu

Conversation

@ThomasGmeinder
Contributor

Summary

  • Ubuntu support: setup-dev.sh now detects the OS (macOS or Ubuntu/Debian) and automatically installs all system prerequisites (Node.js, Python venv, Rust, cmake, Tauri GTK/WebKit deps, inotify watcher limit)
  • Upgraded model from Preview to release: Updated all references from the gated LFM2-24B-A2B-Preview to the public LFM2-24B-A2B-GGUF repo. Resolves #75 (Cannot request access to LFM2-24B-A2B-Preview)
  • GPU-accelerated llama.cpp build: Added a Makefile that builds llama-server from source with automatic ROCm GPU detection via rocm-smi, falling back to CPU. Supports make CPU=1 override
  • Improved usability: setup-dev.sh handles all prerequisites automatically — the README Prerequisites section is condensed to a single ./scripts/setup-dev.sh call instead of manual platform-specific steps
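
The OS dispatch described above can be sketched as follows. This is a hypothetical illustration of the approach, not the actual contents of setup-dev.sh; function names and the apt-get heuristic are assumptions.

```shell
#!/usr/bin/env bash
# Sketch: detect macOS vs Ubuntu/Debian, as setup-dev.sh is described
# as doing. Names and branching are illustrative, not the PR's code.
set -euo pipefail

detect_os() {
  case "$(uname -s)" in
    Darwin) echo "macos" ;;
    Linux)
      # Assumption: treat any system with apt-get as Ubuntu/Debian-like.
      if command -v apt-get >/dev/null 2>&1; then
        echo "debian"
      else
        echo "unsupported"
      fi
      ;;
    *) echo "unsupported" ;;
  esac
}

os="$(detect_os)"
echo "Detected OS family: $os"
# A real script would then branch: brew install on macos,
# apt-get install (Node.js, python3-venv, Rust, cmake, GTK/WebKit
# deps) plus the inotify sysctl bump on debian.
```

Keeping the detection in one small function makes the per-platform install steps easy to extend later (e.g. for Fedora) without touching the dispatch logic.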

Test report

The following agentic functions were tested in the browser on Ubuntu 24.04 with ROCm 7.2 (AMD Ryzen iGPU gfx1151, 48 GB VRAM):

  • "Tell me about my system" — system info retrieved successfully
  • "Scan for leaked secrets" — security scan completed
  • "Find personal data" — PII detection completed
  • "Summarize PDF on desktop" — a concise summary was generated in .pdf, .txt and .docx formats

Made with Cursor

Thomas Gmeinder added 3 commits March 14, 2026 13:37
- Add Ubuntu/Debian prerequisites section to README (Node.js, python3-venv,
  Rust, Tauri GTK/WebKit deps, inotify watcher limit)
- Update model references from gated LFM2-24B-A2B-Preview to public
  LFM2-24B-A2B-GGUF repo
- Fix --flash-attn flag syntax for newer llama.cpp in start-model.sh
- Add python3-venv check to setup-dev.sh

Tested on Ubuntu 24.04 with ROCm 7.2 (gfx1151). The llama-server binary
was symlinked from an existing ROCm/HIP build, not built by this project.
Tauri app launches and onboarding works on the NUC's local browser.
Note: Tauri (WebKit2GTK) requires a native display — it does not work
over SSH remote without X11 forwarding or VNC.

Made-with: Cursor

Add a Makefile that clones and builds llama.cpp from source with automatic
ROCm GPU detection via rocm-smi. Falls back to CPU-only build when ROCm
is not available. Supports CPU=1 override to force CPU build.
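
The detection logic the Makefile performs might look like the sketch below, expressed as shell for clarity. This is a hedged approximation: variable names are invented, and the cmake flag mentioned in the comment is only what a ROCm build of llama.cpp would typically use, not necessarily what this Makefile passes.

```shell
#!/usr/bin/env bash
# Sketch of ROCm auto-detection with a CPU=1 override, mirroring the
# Makefile behavior described above. Illustrative only.
set -euo pipefail

CPU="${CPU:-0}"   # make CPU=1 forces a CPU-only build

# GPU build only if the user did not force CPU, rocm-smi exists,
# and it can actually talk to a GPU.
if [ "$CPU" != "1" ] \
   && command -v rocm-smi >/dev/null 2>&1 \
   && rocm-smi >/dev/null 2>&1; then
  backend="rocm"   # e.g. would configure cmake with -DGGML_HIP=ON
else
  backend="cpu"    # fallback: plain CPU build
fi

echo "llama.cpp backend: $backend"
```

Probing `rocm-smi` both for existence and for a successful run avoids selecting the GPU path on machines where ROCm is installed but no supported device is present.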

start-model.sh now auto-finds ./llama-server built by make, falling back
to PATH. Updated README with Ubuntu prerequisites and build instructions.
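
The lookup order described here (local build first, then PATH) can be sketched as below. The function name and the exact relative path are assumptions for illustration; start-model.sh may structure this differently.

```shell
#!/usr/bin/env bash
# Sketch: prefer a llama-server built by make in the current directory,
# fall back to one found on PATH. Illustrative of the described order.
set -euo pipefail

find_llama_server() {
  if [ -x "./llama-server" ]; then
    echo "./llama-server"          # local build from make wins
  elif command -v llama-server >/dev/null 2>&1; then
    command -v llama-server        # otherwise use PATH
  else
    return 1                       # nothing found
  fi
}

if server="$(find_llama_server)"; then
  echo "Using llama-server at: $server"
else
  echo "llama-server not found; build it with make first" >&2
fi
```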

Tested on Ubuntu 24.04 with ROCm 7.2 (gfx1151), AMD Ryzen iGPU (48 GB VRAM).

Test report (LFM2-24B-A2B-Q4_K_M on ROCm):
- "Tell me about my system" — system info retrieved successfully
- "Scan for leaked secrets" — security scan completed
- "Find personal data" — PII detection completed
- "Summarize PDF on desktop" — concise summary generated in .pdf, .txt and .docx formats

Made-with: Cursor

Simplify the getting-started experience: setup-dev.sh now detects the OS
(macOS or Ubuntu/Debian) and automatically installs all system
prerequisites — Node.js, Python venv, Rust, cmake, Tauri GTK/WebKit
deps, and inotify watcher limit. Users no longer need to manually follow
platform-specific prerequisite steps from the README.

Condensed the README Prerequisites section to reflect this — the manual
install commands are replaced by a single ./scripts/setup-dev.sh call.

Made-with: Cursor
@Paulescu
Collaborator

Hi @ThomasGmeinder,

These are my thoughts:

  • I was able to run LocalCowork on an Ubuntu machine before you opened this PR. I think most of the code you added here is not necessary. I am a big fan of simple things, and I would like this repo to stay away from unnecessary complexity.

  • The item you are absolutely right about is the wrong model name, which I have fixed in this other PR: fix(localcowork): update model from gated Preview to public GGUF release #93

If you don't have any further comments, I will close this in a week.

Pau



Development

Successfully merging this pull request may close these issues.

Cannot request access to LFM2-24B-A2B-Preview
