I have been using ChatGPT since its launch, and I have paid for ChatGPT Plus for most of that time. I am building Skales, a local AI agent, so I am obviously not a neutral party here. But I thought the question was worth testing honestly: can a local AI agent actually replace cloud AI for day-to-day use? I ran the experiment for a month. Here is what happened.
The Setup
Rules: no ChatGPT, no Claude.ai, no Perplexity, no cloud AI of any kind for thirty days. Only Skales with Ollama as the backend. For heavy tasks where I would normally have reached for a frontier model, I was allowed to use OpenRouter pay-as-you-go, but only as a deliberate choice, not as the default. Models: Llama 3.1 8B and Qwen 2.5 14B as primaries, with Claude 3.5 Sonnet via OpenRouter for maybe ten percent of tasks.
Week 1: Setup and First Impressions
The setup was smooth. Ollama installed in five minutes, Skales connected in another two. The first thing I noticed: responses were slower. Not unusably slow (typically 3-8 seconds), but perceptibly different from the near-instant responses I was used to from cloud models. I found myself adjusting my workflow: instead of quick back-and-forth conversation, I started writing more complete, self-contained prompts.
The second thing I noticed: for routine tasks (email drafts, summarising documents I provided, explaining concepts, answering questions about my own files) the quality was essentially indistinguishable from ChatGPT Plus. The 8B model handled these confidently. I stopped noticing the difference within two days.
Week 2: What Worked
Email management was excellent. Skales reading my inbox, drafting replies, and surfacing action items worked as well as any cloud-based tool I have tried, and my email content never left my machine. Calendar integration was reliable. File search and summarisation was notably better than anything I had with cloud AI (direct file access, no copy-pasting). Custom skills I had set up saved meaningful time on recurring tasks.
The privacy aspect was genuinely different. I became aware of how much of my work involves information I would prefer not to transmit to cloud providers: client discussions, product plans, financial calculations. Running locally meant I stopped self-censoring my AI queries.
Week 3: What Did Not
Three categories where local models fell short. First: genuinely complex reasoning tasks. A difficult architectural decision I was working through got a noticeably weaker response from the local 8B model than from Claude 3.5 Sonnet. I used OpenRouter for this. Second: anything requiring current information. Local models have a training cutoff. For questions about recent events, new libraries, or anything time-sensitive, I needed a cloud provider. Third: long-context tasks. Summarising a 200-page document worked, but less cleanly than with frontier models and their larger context windows.
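The long-document case is worth unpacking, because there is a standard workaround. A small local model with a limited context window can still summarise a 200-page document by map-reducing it: split the text into overlapping chunks, summarise each, then summarise the summaries. This is a minimal sketch of that pattern; the chunk sizes, the `summarise` callback, and the helper names are my own illustrative assumptions, not Skales internals.

```python
# Map-reduce summarisation sketch for small-context local models.
# chunk sizes and function names are illustrative assumptions.

def chunk_text(text: str, max_chars: int = 8000, overlap: int = 400) -> list[str]:
    """Split a long document into overlapping chunks that each fit
    a small local model's context window."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap so sentences cut at a chunk boundary survive
    return chunks

def summarise_long(text: str, summarise) -> str:
    """Map: summarise each chunk with the local model.
    Reduce: summarise the concatenated partial summaries."""
    partials = [summarise(chunk) for chunk in chunk_text(text)]
    return summarise("\n\n".join(partials))
```

The trade-off is exactly the one I hit in practice: each reduce step loses detail, so the final summary is serviceable but less clean than a frontier model reading the whole document in one pass.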
Week 4: The Rhythm
By week four, the workflow had settled into a natural pattern. Local AI for the vast majority of daily tasks: email, calendar, file management, research on my own documents, drafting, Q&A, automation. OpenRouter for the occasional hard task: complex reasoning, current information, long-document analysis. My OpenRouter bill for the month: $6.40.
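That rhythm is really a routing decision: default to the local model, escalate to the cloud only when a task falls into one of the known weak categories. A sketch of that local-first pattern, assuming Ollama's and OpenRouter's real OpenAI-compatible endpoints; the keyword heuristic and the `choose_backend`/`build_request` helpers are illustrative assumptions, not how Skales actually routes.

```python
# Local-first routing sketch. Endpoints are the real Ollama and
# OpenRouter OpenAI-compatible APIs; the heuristic is an assumption.

LOCAL_URL = "http://localhost:11434/v1/chat/completions"   # Ollama
CLOUD_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenRouter

# Crude stand-in for a real classifier: phrases that suggest complex
# reasoning, current information, or very long documents.
HARD_TASK_HINTS = ("architecture", "trade-off", "latest", "news", "200-page")

def choose_backend(prompt: str) -> str:
    """Default to local; escalate only for tasks small local models
    handle poorly."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in HARD_TASK_HINTS):
        return "openrouter"
    return "local"

def build_request(prompt: str) -> dict:
    """Build the chat-completions request for whichever backend won."""
    backend = choose_backend(prompt)
    return {
        "url": LOCAL_URL if backend == "local" else CLOUD_URL,
        "json": {
            "model": "llama3.1:8b" if backend == "local"
                     else "anthropic/claude-3.5-sonnet",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

In practice I made the escalation manually rather than by keyword, which is why the cloud share stayed around ten percent of tasks and the bill stayed in single digits.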
The Verdict
Can a local AI agent replace ChatGPT for daily use? For most of what I actually use AI for: yes, comfortably. The capability gap is real but narrow for everyday tasks, and the privacy and cost advantages are significant. My ChatGPT Plus subscription ($20/month) versus local AI plus occasional OpenRouter (about $6/month): the math is clear.
I cancelled ChatGPT Plus at the end of the month and have not missed it for anything in my regular workflow. See the full Skales vs ChatGPT comparison or try it yourself.