AI Without Internet.
Ollama, Offline, No Docker.
Run a full AI assistant with no internet connection. Skales connects to Ollama for completely offline operation - no Docker, no cloud, no API keys required. Works on planes, in secure facilities, anywhere.
The reality
Most AI assistants are cloud services. They require a stable internet connection, an active subscription, and trust that the provider is handling your data appropriately. Remove the internet, and they stop working entirely. For anyone who works in locations with restricted or no connectivity, this is a fundamental blocker.
The local AI stack - Ollama for model serving, Skales for the interface and agentic layer - solves this. Models run on your hardware. No network request is made. Whether you are at 35,000 feet, in a secure government facility, or simply in a location with poor coverage, your AI assistant remains fully operational.
How Skales works offline
Ollama provides the model layer. Skales provides the agent layer. Zero internet dependency.
Full offline operation with Ollama
Connect Skales to Ollama and run leading open-source models locally: Llama 3, Mistral, Gemma, Phi, and more. Once the model is downloaded, you have a capable AI assistant that works with no internet connection whatsoever.
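Under the hood, Ollama serves these models over a local HTTP API (port 11434 by default), which is why no internet is needed. As a minimal sketch, here is how that local endpoint can be called directly in Python - the endpoint and payload shape are Ollama's documented `/api/generate` interface; the helper functions themselves are illustrative, not Skales code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generation request for the local Ollama API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the request to the local Ollama server and return the generated text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running and a model pulled:
# print(generate("llama3", "Summarise local AI in one sentence."))
```

Everything stays on localhost; the request never leaves your machine.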
No Docker required
Ollama installs with a standard installer. Skales installs with a double-click. No Docker, no WSL, no environment configuration, no command-line setup. If you can install an application, you can run local AI offline.
Works anywhere - planes, ships, remote sites
In-flight, offshore, on a construction site, in a tunnel, in a hospital without Wi-Fi - Skales with Ollama keeps working. Your AI assistant has no internet dependency, so connectivity outages are irrelevant.
Air-gapped and secure environments
Defence contractors, secure government facilities, classified research environments, and high-security data centres can use Skales. With Ollama providing the model layer, there is zero network communication required. For fully air-gapped deployments, models can be downloaded once on a connected machine and transferred across the air gap by approved media.
Switch between online and offline modes
When you do have connectivity, switch to a cloud model (OpenAI, Anthropic, Gemini) for maximum capability. When offline, fall back to Ollama automatically. The same interface, the same workflows, regardless of which backend is active.
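Skales handles this switching internally. As an illustration of the pattern - not Skales' actual code, and with illustrative backend names - a hedged sketch of "prefer cloud when online, fall back to local Ollama when not":

```python
import socket

def ollama_running(host: str = "localhost", port: int = 11434,
                   timeout: float = 0.5) -> bool:
    """Probe for a local Ollama server by attempting a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_backend(internet_up: bool, local_up: bool) -> str:
    """Prefer a cloud model when online; fall back to local Ollama otherwise."""
    if internet_up:
        return "cloud"    # e.g. OpenAI, Anthropic, Gemini
    if local_up:
        return "ollama"   # fully offline
    raise RuntimeError("No backend available: offline and Ollama not running")
```

The key point is that the interface above the backend never changes; only the routing decision does.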
Model selection and management
Choose the right model for your hardware. Smaller quantised models run on standard laptops; larger models unlock on systems with more RAM and a capable GPU. Skales surfaces model options and lets you switch without leaving the app.
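The sizing logic can be sketched as a simple lookup. The memory figures below are rough ballpark assumptions for 4-bit quantised models, not official requirements, and the helper is illustrative rather than how Skales actually decides:

```python
# Approximate RAM needed (GB) to run common 4-bit quantised models.
# Ballpark assumptions for illustration only, not official requirements.
MODEL_RAM_GB = {
    "phi3": 4,
    "llama3": 8,
    "mistral": 8,
    "gemma2": 12,
}

def recommend_models(available_ram_gb: float) -> list[str]:
    """Return models expected to fit in the given RAM, most demanding first."""
    fits = [(need, name) for name, need in MODEL_RAM_GB.items()
            if need <= available_ram_gb]
    return [name for need, name in sorted(fits, reverse=True)]
```

On an 8 GB laptop this suggests the mid-size models; a 16 GB machine unlocks the full list.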
Get offline in three steps
Install Ollama
Download Ollama from ollama.com - a standard installer for Windows or macOS. No Docker, no WSL, no terminal knowledge required.
Pull a model
Open Ollama and pull your chosen model: llama3, mistral, gemma2, phi3, or any other supported model. Downloads range from 2 GB to 8 GB depending on model size.
Connect Skales to Ollama
Open Skales, go to Settings, select Ollama as your provider. Skales detects the running Ollama instance automatically. Switch off Wi-Fi and start working.
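Detection works because Ollama exposes a local `/api/tags` endpoint listing the models you have pulled. A short sketch of querying it - the endpoint and response shape are part of Ollama's public API; the helper names are our own:

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # lists locally installed models

def parse_models(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def installed_models(url: str = TAGS_URL) -> list[str]:
    """Ask the local Ollama server which models are already pulled."""
    with urllib.request.urlopen(url, timeout=2) as resp:
        return parse_models(json.loads(resp.read()))

# With Ollama running:
# print(installed_models())
```

If the call succeeds, a local Ollama instance is up and its models are ready to use offline.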
“I use it on long flights. I turn on airplane mode and Skales keeps working exactly as normal.”
Free for personal use. Windows and macOS. Offline with Ollama.
Also see: Local AI Setup · Privacy & Local AI · Download