An AI agent is more powerful than a chatbot, and that means the security questions are more serious. When you give software access to your email, your files, your calendar, and the ability to run commands on your computer, you are extending significant trust. Here is an honest guide to the risks, what good agent design does about them, and what questions to ask before you install anything.
The Core Security Risks of AI Agents
Shell and system access. An agent that can run terminal commands can in principle run any command, including ones that delete files, install software, or modify system settings. This is the most significant risk. Poorly designed agents with shell access and no safeguards are genuinely dangerous.
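To make the risk and the safeguard concrete, here is a minimal sketch of the kind of command filtering a safety layer might apply. The deny-list and function name are hypothetical, and a name check alone is nowhere near sufficient for a real agent (arguments, paths, and shell features like pipes all matter); this only illustrates the idea of blocking dangerous operations before they reach the shell.

```python
import shlex

# Hypothetical deny-list of command names a safety layer might block outright.
# A real agent needs far more than this: argument inspection, path checks,
# handling of pipes and subshells, and so on.
BLOCKED_COMMANDS = {"rm", "dd", "mkfs", "shutdown", "chmod", "chown"}

def is_command_allowed(command_line: str) -> bool:
    """Return False if the command's first token is on the deny-list."""
    tokens = shlex.split(command_line)
    if not tokens:
        return False  # empty input is rejected, not silently allowed
    return tokens[0] not in BLOCKED_COMMANDS
```

The deny-by-default handling of empty input reflects the general principle: when the safety layer cannot classify a request, it should refuse rather than pass it through.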
File system access. An agent with access to your files can read, modify, or delete them. Without logging and confirmation steps, you might not notice until damage is done.
Email access. An agent with email integration can read all your email (necessary for the use case) but can also send email as you. Without approval requirements, a malfunction or a misunderstood instruction could result in emails sent from your account that you never intended.
API key storage. To connect to AI providers, agents store API keys. If those keys sit in plain text, anything that can read the file (malware, another app) can exfiltrate them, and a compromised key can be used to rack up significant costs on your API account.
Third-party data transit. Cloud-based agents send your data to their servers for processing. This creates the standard cloud privacy risks: breach exposure, policy changes, and unclear data retention.
What Good Agent Design Does About This
The safety architecture of a well-designed local agent should include several layers:
Permission tiers. The agent should not have maximum permissions by default. A tiered system (restricted, standard, advanced) lets you calibrate access to match your comfort level. Dangerous operations (deleting files, system modifications) should require elevated permission that you consciously grant.
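A tiered permission check can be sketched in a few lines. The tier names follow the restricted/standard/advanced scheme above; the operation names and the mapping are illustrative assumptions, not any particular agent's API.

```python
from enum import IntEnum

class Tier(IntEnum):
    RESTRICTED = 1
    STANDARD = 2
    ADVANCED = 3

# Hypothetical mapping from operations to the minimum tier allowed to run them.
REQUIRED_TIER = {
    "read_file": Tier.RESTRICTED,
    "send_email": Tier.STANDARD,
    "delete_file": Tier.ADVANCED,
    "modify_system": Tier.ADVANCED,
}

def is_permitted(operation: str, granted: Tier) -> bool:
    """An operation runs only if the user's granted tier meets its minimum."""
    required = REQUIRED_TIER.get(operation)
    if required is None:
        return False  # unknown operations are denied by default
    return granted >= required
```

Note that unrecognized operations are denied rather than allowed: an agent that defaults to "yes" for anything it has not classified has no meaningful tier system at all.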
Action confirmation. Irreversible actions should require explicit approval. The agent proposes, you approve. This prevents mistakes from compounding: a misunderstood instruction sends a draft email to your review queue, not to your client.
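The propose-then-approve flow amounts to a review queue between the agent and the outside world. Here is a minimal sketch of that pattern, assuming a simple split between reversible actions (executed immediately) and irreversible ones (held for approval); the class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    irreversible: bool

@dataclass
class ReviewQueue:
    """Irreversible actions wait here until the user explicitly approves them."""
    pending: list = field(default_factory=list)

    def submit(self, action: ProposedAction) -> str:
        if action.irreversible:
            self.pending.append(action)  # held for user review, not executed
            return "queued for review"
        return "executed"

    def approve_all(self) -> int:
        """Release everything in the queue; returns how many actions ran."""
        count = len(self.pending)
        self.pending.clear()
        return count
```

The key design choice is that the agent never decides an action is safe enough to skip the queue; the irreversibility classification, not the agent's confidence, determines whether the user sees it first.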
Encrypted key storage. API keys should be stored encrypted, not in plain text config files. The encryption key should be tied to your system, so copying the config file to another machine does not expose the keys.
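One way to tie the encryption key to your system is to derive it from a machine-specific identifier, so the same config file decrypts to garbage elsewhere. Here is a sketch of that derivation using a standard key-derivation function; the identifier source and parameters are assumptions, and a real implementation would pair the derived key with an authenticated cipher (such as AES-GCM) from a vetted crypto library.

```python
import hashlib

def derive_storage_key(machine_id: str, salt: bytes) -> bytes:
    """Derive a 32-byte encryption key from a machine-specific identifier.

    Because the key depends on machine_id, copying the encrypted config file
    to another machine does not expose the API keys stored inside it.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        machine_id.encode(),
        salt,
        100_000,   # iteration count; tune upward for real deployments
        dklen=32,
    )
```

The derivation is deterministic on the same machine (so the agent can re-derive the key at startup without storing it) but produces a different key for any other machine identifier.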
Local-only processing. For agents handling sensitive data, processing should stay on your machine. If the data never transits to a third-party server, the breach exposure risk disappears entirely.
Transparent logging. Every action the agent takes should be logged and reviewable. If something unexpected happens, you should be able to see exactly what the agent did and when.
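An action log of this kind is essentially an append-only record with a review view. The sketch below shows the shape of it, with JSON-lines output as one plausible session-history format; the class and field names are illustrative.

```python
import json
import time

class ActionLog:
    """Append-only record of every action the agent takes, for later review."""

    def __init__(self):
        self.entries = []

    def record(self, action: str, detail: str) -> None:
        self.entries.append({
            "timestamp": time.time(),
            "action": action,
            "detail": detail,
        })

    def review(self) -> str:
        """Render the full history as JSON lines, oldest first."""
        return "\n".join(json.dumps(entry) for entry in self.entries)
```

The important property is that recording is unconditional: the agent logs every action, not just the ones it flags as risky, so "what did it do and when" is always answerable after the fact.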
How Skales Handles Security
Skales uses a three-tier Safety Mode system: Safe (dangerous operations blocked), Advanced (risky operations require confirmation), and Unrestricted (no blocks, for advanced users who want full control). The default is Safe mode. API keys are encrypted at rest. All processing with Ollama is local; nothing transits to Skales servers because Skales does not have servers. Every action is logged in the session history.
Questions to Ask Before Installing Any Agent
Before installing any AI agent, ask: Does it have permission tiers, or is access all-or-nothing? Does it require confirmation before irreversible actions? Are API keys stored encrypted? Is there a transparent log of what it has done? If cloud-based, what does the data retention policy actually say? Does the agent have open network ports that could be accessed by other software? Is the source code auditable?
For a thorough look at how Skales answers these questions, see the FAQ and the privacy architecture page.