Skales is a native desktop application: not a web app you access through a browser, but something you install, launch, and keep running in the background. Getting that desktop-native experience while using modern web technologies required some deliberate architectural decisions. This post documents those choices honestly, including the trade-offs we accepted along the way. It is aimed at developers who want to understand the system or contribute to it.
Why Electron?
Electron is controversial, and the criticism is fair: it ships a full Chromium browser with every application, adding 100-200 MB to the installer size and a noticeable RAM overhead at idle. A native application written in Swift or Rust would be leaner, faster to launch, and use less memory. So why did we choose Electron?
Three reasons. First, cross-platform support: one codebase runs on Windows and macOS without maintaining separate native builds. Second, a Node.js runtime ships alongside the UI, which makes system integration (file system, network, child-process spawning, OS notifications) straightforward using well-maintained libraries. Third, the JavaScript/TypeScript ecosystem is where most AI tooling lives: provider SDKs, LangChain-compatible libraries, and the integration adapters for email, calendar, and external services are JavaScript-first. Building in a different language would mean either writing our own bindings or operating outside the main ecosystem.
The accepted trade-offs: approximately 300 MB of RAM at idle and a 150 MB installer. For a productivity tool that runs continuously in the background, these are acceptable. For a utility that launches and closes many times per day, the Electron overhead would be harder to justify.
Why Next.js as the UI Framework
Next.js might seem like an unusual choice for an Electron application; it is primarily associated with web servers and static sites. We use it differently: as a build system and component framework for the renderer process.
The renderer process in Electron is essentially a browser window running HTML, CSS, and JavaScript. Any web framework can run here. We chose Next.js for several reasons. The file-system-based routing maps cleanly to our multi-panel interface. React Server Components let us keep data-fetching logic separate from interactive UI. TypeScript and ESLint are configured correctly out of the box. Hot module replacement during development works through the Electron window exactly as it does in a browser. And the static export output (next export) compiles to flat files that Electron loads from the local file system, so no HTTP server is required in production.
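For concreteness, a static-export configuration might look like the sketch below. This is not the actual Skales config: it assumes a recent Next.js version where the standalone next export command has been folded into the output: 'export' config option, and where the config file may itself be written in TypeScript. The assetPrefix setting is a common companion choice for file:// loading, not something stated in this post.

```typescript
// next.config.ts — illustrative static-export settings (assumption:
// a recent Next.js version; the real Skales config is not shown here).
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  output: 'export',              // emit flat HTML/CSS/JS into out/
  assetPrefix: './',             // relative asset URLs, so they resolve under file://
  images: { unoptimized: true }, // the default image optimizer needs a server
}

export default nextConfig
```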
In development, Next.js runs its dev server at localhost:3000 and Electron opens that URL. In production, the static export is bundled into the Electron app and loaded via file:// paths. Switching between modes is handled by an environment variable check at Electron startup.
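That startup switch can be sketched as a small helper in the main process. The environment-variable name (SKALES_DEV) and the out/ directory location below are illustrative assumptions, not taken from the Skales codebase; only the localhost:3000 dev-server URL comes from the text above.

```typescript
// main.ts (sketch) — decide what the Electron window should load.
// Keeping the decision in a pure function makes it easy to test
// without an Electron runtime.
import * as path from 'path'

type RendererTarget =
  | { kind: 'url'; url: string }    // development: Next.js dev server
  | { kind: 'file'; file: string }  // production: static export on disk

function resolveRendererTarget(isDev: boolean, appRoot: string): RendererTarget {
  if (isDev) {
    // Next.js dev server, with hot module replacement through the window
    return { kind: 'url', url: 'http://localhost:3000' }
  }
  // Static export bundled into the app and loaded via file://
  return { kind: 'file', file: path.join(appRoot, 'out', 'index.html') }
}

// Wiring inside the real main process would look roughly like:
//   const target = resolveRendererTarget(process.env.SKALES_DEV === '1', app.getAppPath())
//   target.kind === 'url' ? win.loadURL(target.url) : win.loadFile(target.file)
```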
The IPC Bridge: Main Process ↔ Renderer
Electron has two process types. The main process has full Node.js access and handles system-level operations: spawning processes, reading and writing files, opening native windows and dialogs, and managing OS integrations. The renderer process runs the web UI and, for security reasons, has restricted access to Node.js APIs by default.
Communication between them uses Electron's IPC (Inter-Process Communication) mechanism. We expose a typed bridge using contextBridge in the preload script:
// preload.ts
import { contextBridge, ipcRenderer } from 'electron'

// EmailOptions and AgentUpdate are declared in the shared types/ipc.d.ts
contextBridge.exposeInMainWorld('skales', {
  executeCommand: (cmd: string) =>
    ipcRenderer.invoke('execute-command', cmd),
  readFile: (path: string) =>
    ipcRenderer.invoke('read-file', path),
  sendEmail: (opts: EmailOptions) =>
    ipcRenderer.invoke('send-email', opts),
  onAgentUpdate: (cb: (update: AgentUpdate) => void) => {
    ipcRenderer.on('agent-update', (_event, update) => cb(update))
  },
})
The main process registers handlers for each IPC channel using ipcMain.handle(). The renderer treats all bridge functions as ordinary async calls; the cross-process boundary is transparent. A shared types/ipc.d.ts file declares the full bridge interface, and TypeScript catches mismatches at compile time on both sides.
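On the main-process side, that registration might be sketched as follows. The channel names mirror the preload bridge above; the handler bodies here are illustrative stubs, not the real Skales implementations.

```typescript
// main.ts (sketch) — one handler per IPC channel. Keeping the handler
// map as plain async functions, separate from the Electron wiring,
// makes each handler unit-testable on its own.
import { promises as fs } from 'fs'

export const handlers: Record<string, (...args: any[]) => Promise<unknown>> = {
  // Privileged file I/O lives in the main process only
  'read-file': (filePath: string) => fs.readFile(filePath, 'utf8'),
  // Stub: the real handler would spawn a sandboxed child process
  'execute-command': async (cmd: string) => ({ cmd, status: 'not-implemented' }),
}

// Electron wiring in the real main process:
//   for (const [channel, fn] of Object.entries(handlers)) {
//     ipcMain.handle(channel, (_event, ...args) => fn(...args))
//   }
```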
This architecture enforces a clean separation: all privileged operations (file I/O, shell execution, network calls to email providers) live in the main process, while the renderer handles only display and user interaction.
Tool Execution Flow
When a user types a message, here is the full execution path from input to response:
1. Input: The chat component sends the message to the main process via window.skales.sendMessage(content).
2. Context assembly: The main process builds the full context: conversation history, the user profile from human.json, the active skill's system prompt, and the JSON schemas for all available tools. Tools are defined as standard OpenAI-format function schemas: name, description, and parameter types.
3. LLM call: The assembled context is sent to the configured AI provider. The response is either plain text (a direct reply) or a tool call request, a structured JSON object specifying which tool to call and with what parameters.
4. Safety check: Before any tool executes, the Safety Mode is checked. In safe mode, operations on the block list (system file modification, mass deletion) are rejected immediately. In advanced mode, the main process emits an IPC event to the renderer (agent-update { type: 'approval-required', tool, params }). The UI renders an approve/decline card. In unrestricted mode, tools execute without confirmation.
5. Execution: Approved tools are dispatched to the relevant handler module. File operations go to the file system module. Email goes to the email integration adapter. Shell commands go to a sandboxed child process. Each handler returns a structured result.
6. Re-injection: The tool result is appended to the conversation context and another LLM call is made if the model needs to process the result further. This is the ReAct loop: the agent reasons about what it has learned and decides whether to call more tools or produce a final response.
7. Streaming: Text responses are streamed from the LLM to the renderer via IPC events, updating the chat UI token by token for a natural feel.
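The steps above can be condensed into a sketch of the loop. All function and type names here (callModel, checkSafety, runTool, askUser) are illustrative stand-ins for the real Skales modules, and the step cap is an assumed guard, not a documented setting.

```typescript
// Sketch of the ReAct-style agent loop described above.
type Msg = { role: string; content: string }

type ModelReply =
  | { type: 'text'; content: string }                                  // final answer
  | { type: 'tool_call'; tool: string; params: Record<string, unknown> } // structured tool request

type SafetyVerdict = 'allow' | 'ask-user' | 'block'

async function agentLoop(
  messages: Msg[],
  callModel: (msgs: Msg[]) => Promise<ModelReply>,
  checkSafety: (tool: string, params: object) => SafetyVerdict,
  askUser: (tool: string, params: object) => Promise<boolean>,
  runTool: (tool: string, params: object) => Promise<string>,
  maxSteps = 8, // assumed guard against infinite tool loops
): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callModel(messages)
    if (reply.type === 'text') return reply.content // direct reply: we are done

    // Safety check before any tool executes
    const verdict = checkSafety(reply.tool, reply.params)
    const approved =
      verdict === 'allow' ||
      (verdict === 'ask-user' && (await askUser(reply.tool, reply.params)))
    if (!approved) {
      messages.push({ role: 'tool', content: `Call to ${reply.tool} was declined.` })
      continue
    }

    // Execute, then re-inject the result so the model can reason over it
    const result = await runTool(reply.tool, reply.params)
    messages.push({ role: 'tool', content: result })
  }
  return 'Stopped: maximum tool steps reached.'
}
```

The final-text case exits the loop, which is what makes a plain reply (step 3) and a multi-tool exchange (steps 4-6) share one code path.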
Security Considerations
Node integration is disabled in the renderer (nodeIntegration: false). Context isolation is enabled (contextIsolation: true). The preload script is the only surface between the privileged main process and the renderer, and it exposes only the specific functions defined in the bridge, not raw IPC channels or Node.js modules. Web security is enabled, preventing the renderer from making arbitrary cross-origin requests.
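Put together, the window construction might look roughly like this. The window dimensions are placeholders, and the sandbox flag is an extra hardening step not mentioned in this post, included here as a common companion setting.

```typescript
// main.ts (sketch) — the hardened BrowserWindow settings described above.
import { BrowserWindow } from 'electron'
import * as path from 'path'

function createWindow(): BrowserWindow {
  return new BrowserWindow({
    width: 1200,  // placeholder dimensions
    height: 800,
    webPreferences: {
      nodeIntegration: false,  // renderer gets no direct Node.js access
      contextIsolation: true,  // preload runs in an isolated world
      webSecurity: true,       // same-origin policy stays enforced
      sandbox: true,           // assumption: extra hardening, not stated in the post
      preload: path.join(__dirname, 'preload.js'), // the only bridge surface
    },
  })
}
```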
This is not a perfect security boundary; Electron apps are a larger attack surface than native apps. But it is appropriate for a local productivity tool where the primary concern is accidental misuse rather than adversarial attack.
The source code is at github.com/skalesapp/skales. The ARCHITECTURE.md in the repo root has more detail. If you want to build on this or contribute, the developer documentation is the starting point.