April 2, 2026 · 12 min read

Locally Uncensored v1.5 Release: Dynamic Workflows, CivitAI Marketplace & Standalone Desktop App

Today we're shipping the biggest update in the history of Locally Uncensored — versions 1.5.0 through 1.5.3, collectively bringing seven major features that transform this from a capable local AI app into a complete creative workstation. If you've been following the project since the early days of chat + image + video in one UI, this is the update where everything clicks into place.

The Locally Uncensored v1.5 release introduces a dynamic workflow builder that queries 600+ ComfyUI nodes in real time, an integrated CivitAI model marketplace, a standalone Tauri v2 desktop .exe with its own CORS proxy and download manager, full privacy hardening with zero external dependencies, six ready-to-go model bundles, local Whisper speech-to-text, and a collection of UI polish improvements that make daily use smoother.

Let's walk through every major feature.

Dynamic Workflow Builder

This is the headline feature of v1.5.0 and the one that took the longest to build. The dynamic workflow builder fundamentally changes how Locally Uncensored talks to ComfyUI.

Previously, the app shipped with a set of hardcoded workflow templates — one for SDXL text-to-image, one for FLUX, one for Wan 2.1 video, and so on. If you wanted to use a different sampler, add ControlNet, or chain together multiple models, you were out of luck without opening ComfyUI's node editor directly.

That's gone. The new workflow builder queries your running ComfyUI instance at startup, discovers all installed nodes (typically 600+ in a standard install with ComfyUI-Manager), and auto-constructs the correct pipeline based on what you're trying to do. Pick a checkpoint, pick a resolution, type a prompt, and the app figures out the right node graph automatically.

How It Works Under the Hood

When ComfyUI boots, it exposes an /object_info endpoint that returns metadata for every registered node — inputs, outputs, types, defaults. The workflow builder parses this schema and builds a dependency graph. When you hit generate, it walks the graph from your selected output node backward, resolving each dependency and injecting your parameters (prompt, negative prompt, seed, steps, CFG, dimensions) into the appropriate node slots.
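The backward walk can be sketched in a few lines of TypeScript. This is a simplified illustration of the idea, not the app's actual internals — the graph shape mirrors ComfyUI's API-format JSON, where an input value of `[nodeId, outputIndex]` is a link to an upstream node:

```typescript
interface GraphNode {
  classType: string;
  // An input is either a literal value (seed, steps, prompt text)
  // or a [nodeId, outputIndex] pair linking to an upstream node.
  inputs: Record<string, unknown>;
}

// Walk from the selected output node backward, collecting every node
// it transitively depends on.
function resolveDependencies(
  graph: Record<string, GraphNode>,
  outputId: string,
  resolved: Set<string> = new Set()
): Set<string> {
  if (resolved.has(outputId)) return resolved;
  resolved.add(outputId);
  for (const value of Object.values(graph[outputId].inputs)) {
    // A [nodeId, outputIndex] pair marks a dependency edge.
    if (Array.isArray(value) && typeof value[0] === "string") {
      resolveDependencies(graph, value[0], resolved);
    }
  }
  return resolved;
}
```

Once the dependency set is known, injecting your parameters is a matter of overwriting the literal inputs (seed, steps, CFG, prompt) on the matching nodes before submitting the graph.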

This means the app automatically supports whatever your install exposes: any sampler or scheduler, ControlNet and other conditioning nodes, custom nodes added through ComfyUI-Manager, and new model architectures as soon as ComfyUI itself supports them.

No more workflow JSON files. No more "this model isn't supported yet" messages. If ComfyUI can run it, Locally Uncensored can drive it.

CivitAI Model Marketplace

Finding and downloading models has always been the most annoying part of local AI. You hunt through CivitAI's website, copy a download link, figure out which directory it goes in, rename it, restart ComfyUI. Version 1.5.1 eliminates all of that.

The new CivitAI model marketplace is built directly into the app. Open the Models tab, search for any checkpoint, LoRA, embedding, or VAE on CivitAI, preview the thumbnail (privacy-proxied — more on that below), and hit download. The model lands in the correct ComfyUI subdirectory automatically. No manual file management.

Privacy-First Thumbnails

Here's where it gets interesting from a privacy standpoint. CivitAI serves model preview images from their CDN. Loading those images directly would leak your IP address and browsing patterns to CivitAI's servers. In the Locally Uncensored v1.5 release, all CivitAI thumbnails are proxied through the app's local backend. The user's browser never contacts CivitAI directly — the Tauri sidecar (or the dev server in browser mode) fetches the image and serves it from localhost. Your IP stays private.
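The client-side half of this can be sketched as a URL rewrite: instead of loading the CDN URL directly, the app points the `<img>` tag at a localhost route and lets the backend do the remote fetch. The route path and port below are hypothetical stand-ins, not the app's real endpoints:

```typescript
// Only proxy hosts we expect; anything else is refused outright.
const ALLOWED_HOSTS = new Set(["image.civitai.com"]);

// Rewrite a CivitAI CDN URL to the local proxy endpoint so the browser
// only ever talks to 127.0.0.1. The backend fetches the remote image
// itself and streams the bytes back, so the client IP never reaches the CDN.
function toProxiedUrl(
  remoteUrl: string,
  proxyBase = "http://127.0.0.1:8420" // illustrative port
): string {
  const url = new URL(remoteUrl);
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`refusing to proxy non-CivitAI host: ${url.hostname}`);
  }
  return `${proxyBase}/api/civitai/image?src=${encodeURIComponent(remoteUrl)}`;
}
```

The host allowlist matters: an open proxy that fetches arbitrary URLs would turn a privacy feature into a server-side request forgery vector.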

This is a small detail that no other local AI desktop client gets right. Most tools either skip CivitAI integration entirely or load external resources without a second thought.

Privacy Hardening

The CivitAI proxy is part of a broader privacy overhaul in v1.5.2. We went through every line of code and eliminated external dependencies: no telemetry, no analytics, no external fonts or CDN assets, and no third-party requests of any kind. Everything the app needs ships inside it.

This isn't theoretical privacy. Run Wireshark while using the app — you'll see connections to 127.0.0.1 and your Ollama/ComfyUI ports. Nothing else. If you've read our guide on how to run uncensored AI locally, you know privacy is foundational to the project. This release makes that guarantee airtight.

Tauri v2 Standalone Desktop App

Until now, Locally Uncensored ran as a Vite dev server that opened in your browser. That works — and it's still supported — but v1.5.0 adds a proper standalone desktop executable built with Tauri v2.

The tech stack: React 19 + TypeScript + Tailwind CSS 4 + Vite 8 for the frontend, Tauri 2 (Rust) for the native shell. The result is a single .exe on Windows (with macOS and Linux builds also available) that weighs under 15 MB installed. Compare that to Electron-based alternatives that start at 150+ MB.

Built-in CORS Proxy

The Tauri backend includes a Rust-powered CORS proxy. This solves one of the most persistent headaches in local AI development: ComfyUI and Ollama don't set CORS headers by default, so browser-based frontends either need you to launch backends with special flags or use a separate proxy tool. The Tauri app handles this transparently — all API requests route through the Rust sidecar, which adds the correct headers.
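What the proxy conceptually does on the response path can be shown in a few lines; the exact header set and policy the Rust sidecar uses may differ, so treat this as a minimal sketch of the technique:

```typescript
// Attach permissive CORS headers to a proxied response so a browser
// frontend can read replies from backends (ComfyUI, Ollama) that
// don't set these headers themselves.
function withCorsHeaders(
  headers: Record<string, string>
): Record<string, string> {
  return {
    ...headers, // keep the upstream headers (Content-Type, etc.)
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}
```

Since the proxy only listens on localhost, the wildcard origin is less alarming than it looks: nothing outside your machine can reach it.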

Download Manager with Pause/Resume

Model files are large. A FLUX checkpoint is 12 GB. Wan 2.1 is 10 GB. If your connection drops halfway through, you shouldn't have to start over. The Tauri app includes a download manager with pause and resume support using HTTP range requests. It also shows real-time progress, speed, and ETA. Downloads run in a background thread so the UI stays responsive.
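Resume support boils down to a single request header. A sketch, assuming the model host honors HTTP range requests (most large-file CDNs do):

```typescript
// "bytes=N-" asks the server for everything from byte offset N to the
// end of the file; a server that honors it replies 206 Partial Content.
function resumeHeaders(bytesAlreadyDownloaded: number): Record<string, string> {
  return { Range: `bytes=${bytesAlreadyDownloaded}-` };
}

async function resumeDownload(url: string, offset: number): Promise<Response> {
  const res = await fetch(url, { headers: resumeHeaders(offset) });
  if (res.status !== 206) {
    // Server ignored the range; restart from zero rather than append
    // a full copy onto a partial file and corrupt it.
    throw new Error(`server did not honor range request (got ${res.status})`);
  }
  return res;
}
```

The status check is the important part: appending a `200 OK` full-body response to a partial file silently corrupts a 12 GB checkpoint.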

6 Complete Model Bundles

New users shouldn't have to research which models to download or worry about missing dependencies. Version 1.5.1 introduces six curated model bundles — three for image generation and three for video generation — each containing everything needed for a specific workflow.

Image Bundles

| Bundle | Includes | VRAM | Best For |
|---|---|---|---|
| FLUX Schnell | FLUX.1 Schnell + T5 encoder + CLIP + VAE | 10 GB | Fastest high-quality generation |
| Juggernaut XL | Juggernaut XL V9 + SDXL VAE + refiner | 8 GB | Photorealistic portraits and scenes |
| Pony Diffusion | Pony V6 + SDXL VAE + style LoRAs | 8 GB | Stylized and anime content |

Video Bundles

| Bundle | Includes | VRAM | Best For |
|---|---|---|---|
| Wan 2.1 T2V | Wan 2.1 text-to-video + T5 + CLIP + VAE | 10 GB | Text prompt to video |
| Wan 2.1 I2V | Wan 2.1 image-to-video + CLIP Vision + VAE | 10 GB | Animate a still image |
| AnimateDiff | AnimateDiff v3 + motion module + SDXL base | 8 GB | Short animated loops |

Each bundle is a one-click install from the Models tab. The app downloads all required files, places them in the correct directories, and pre-validates that the workflow will execute before you generate anything. No "missing model" errors on your first attempt.
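The skip-existing behavior described in the FAQ amounts to a set difference over the bundle manifest. A sketch with a hypothetical manifest shape — the field names and file names are illustrative, not the app's real format:

```typescript
// Hypothetical manifest entry: where a file belongs under the
// ComfyUI models directory and how big it is.
interface BundleFile {
  name: string;
  subdir: string; // e.g. "checkpoints", "vae", "clip"
  sizeBytes: number;
}

// Filter out anything already on disk so shared components
// (VAEs, text encoders) are never downloaded twice across bundles.
function filesToDownload(
  bundle: BundleFile[],
  existing: Set<string>
): BundleFile[] {
  return bundle.filter((f) => !existing.has(f.name));
}
```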

Local Whisper Speech-to-Text

Version 1.5.2 adds local Whisper STT (speech-to-text) directly in the chat interface. Click the microphone icon, speak your prompt, and it transcribes locally using a Whisper model running on your machine. No audio sent to any server.

The implementation uses whisper.cpp under the hood with the base.en model by default (smaller and faster for English). You can swap in the larger medium or large-v3 models if you need better accuracy or multilingual support. Transcription runs on your GPU if CUDA is available, otherwise falls back to CPU.
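As a rough illustration of what driving whisper.cpp from the app layer looks like, the sketch below builds the argument list for its example CLI. The binary path and model location are assumptions about a local build, and newer whisper.cpp releases name the binary differently, so treat this as a shape, not a recipe:

```typescript
// Build the argument vector for a local whisper.cpp CLI invocation.
// "-m" selects the ggml model, "-f" the input audio, and "-nt"
// suppresses timestamps so stdout is just the transcript text.
function whisperCommand(
  audioPath: string,
  model = "models/ggml-base.en.bin" // assumed local model path
): string[] {
  return ["./main", "-m", model, "-f", audioPath, "-nt"];
}
```

In the app this would be spawned as a child process (or called through bindings), with stdout captured and dropped into the chat input.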

This is a natural fit for the project's privacy-first philosophy. Cloud speech-to-text services (Google, OpenAI Whisper API, Azure) are convenient but mean your voice data hits external servers. Local Whisper keeps everything on your hardware.

UI Polish & Quality of Life

The v1.5.x series also includes a stack of UI improvements that don't get their own headline but make a real difference in daily use:

VRAM Unload

A new button in the generation panel lets you unload models from VRAM with one click. This is essential if you're switching between chat and image generation on a GPU with limited memory. Previously you'd need to restart ComfyUI or use the API directly. Now it's a button.

Execution Timer

Every generation now shows an execution timer — both elapsed time during generation and total time on completion. Useful for benchmarking different models, samplers, and step counts. The timer persists in the generation history so you can compare runs later.

Additional Improvements

Beyond the two headline items above, v1.5.3 rounds things out with a long tail of smaller fixes and stability improvements; the full list is in the release notes.

What's New vs. Competitors

Here's how the Locally Uncensored v1.5 release stacks up against other popular local AI desktop tools as of April 2026:

| Feature | Locally Uncensored v1.5 | Open WebUI | LM Studio | Jan.ai | GPT4All |
|---|---|---|---|---|---|
| Chat | ✓ | ✓ | ✓ | ✓ | ✓ |
| Image generation | ✓ | ✗ | ✗ | ✗ | ✗ |
| Video generation | ✓ | ✗ | ✗ | ✗ | ✗ |
| Dynamic workflow builder | ✓ | ✗ | ✗ | ✗ | ✗ |
| CivitAI marketplace | ✓ | ✗ | ✗ | ✗ | ✗ |
| Local Whisper STT | ✓ | ✗ | ✗ | ✗ | ✗ |
| Zero telemetry | ✓ | ✗ | ✗ | ✗ | ✗ |
| No external CDN/fonts | ✓ | ✗ | ✗ | ✗ | ✗ |
| Standalone desktop app | ✓ (Tauri) | ✗ (Docker) | ✓ (Electron) | ✓ (Electron) | ✓ (Qt) |
| Download manager (pause/resume) | ✓ | ✗ | ✗ | ✗ | ✗ |
| Model bundles | ✓ (6 bundles) | ✗ | ✗ | ✗ | ✗ |
| VRAM management | ✓ | ✗ | ✗ | ✗ | ✗ |
| Open source | MIT | MIT | Proprietary | AGPL | MIT |
| App size | < 15 MB | Docker image | ~200 MB | ~180 MB | ~120 MB |

The pattern is clear. Other tools do one thing well — chat. Locally Uncensored is the only local AI desktop app that combines chat, image generation, video generation, and a model marketplace into a single interface, all while maintaining strict privacy guarantees. For a deeper dive into individual comparisons, check out our posts on Locally Uncensored vs Open WebUI, vs LM Studio, vs Jan.ai, and vs GPT4All.

Upgrade Guide

If you're already running Locally Uncensored, upgrading is straightforward:

```shell
cd locally-uncensored
git pull
npm install
npm run dev
```

For the standalone desktop app, download the latest .exe (or .dmg / .AppImage) from the releases page. Your existing models, chat history, and settings carry over automatically — they're stored in your OS's app data directory, not inside the app bundle.

New users can follow the full setup guide in our how to run uncensored AI locally article, or just clone and run:

```shell
git clone https://github.com/PurpleDoubleD/locally-uncensored.git
cd locally-uncensored
# Windows: setup.bat | macOS/Linux: ./setup.sh
```

What's Next

The v1.5.x cycle focused on making the generation pipeline dynamic and the app self-contained. Targets for the next milestone are being tracked in the open on GitHub.

Follow the GitHub repo for updates, or join the Discussions to share feedback and feature requests.

FAQ

What versions are included in this release?

This post covers versions 1.5.0 through 1.5.3. The dynamic workflow builder and Tauri app landed in v1.5.0. CivitAI marketplace and model bundles came in v1.5.1. Privacy hardening and local Whisper STT arrived in v1.5.2. UI polish fixes and stability improvements rounded things out in v1.5.3.

Do I need to use the Tauri desktop app or can I still use the browser version?

Both are fully supported. The browser version (npm run dev) works exactly as before. The Tauri app adds native features like the built-in CORS proxy and download manager with pause/resume, but all core functionality — chat, image gen, video gen, workflow builder — works in both modes.

Does the dynamic workflow builder require any extra setup?

No. If ComfyUI is running and reachable, the builder queries it automatically on startup. It works with a stock ComfyUI install and gets even more powerful if you install additional nodes through ComfyUI-Manager. No configuration required.

How does the CivitAI integration affect my privacy?

All CivitAI API requests and thumbnail fetches are proxied through the app's local backend. Your browser never contacts CivitAI servers directly. Your IP address is not exposed to CivitAI when browsing models inside the app. The only direct connection to CivitAI happens when you initiate a model download, which is unavoidable since the files are hosted on their servers.

Can I use the model bundles if I already have some models downloaded?

Yes. The bundle installer checks for existing files before downloading. If you already have the FLUX.1 Schnell checkpoint, for example, the FLUX bundle will skip that file and only download the missing components (encoder, VAE, etc.).

What Whisper model should I use for speech-to-text?

The default base.en model is fine for English-only use — it's fast and accurate enough for dictating prompts. If you need multilingual transcription or higher accuracy for technical terms, switch to medium or large-v3 in settings. The larger models need more VRAM and take longer to transcribe but produce noticeably better results.

How big is the Tauri desktop app?

Under 15 MB installed on Windows. That's the full application with the Rust backend, CORS proxy, and download manager included. Compare that to Electron-based alternatives like LM Studio (~200 MB) or Jan (~180 MB). Tauri compiles to native code and doesn't bundle a separate Chromium instance.

Is this update free?

Yes. Locally Uncensored is MIT licensed and completely free. No paid tiers, no pro features behind a paywall, no subscriptions. The full source code is on GitHub.


Locally Uncensored v1.5.3 is available now. MIT licensed and free to use. Built by PurpleDoubleD.

Ready to try the biggest update yet?

Download v1.5.3