Free · Open Source · MIT Licensed

Your AI, your models,
your rules.

Arlopass is an open-source browser extension that lets web apps use your AI providers — Ollama, Claude, GPT, Gemini, or Bedrock — without ever seeing your API keys. You choose which apps to trust, pick your model, and stay in control.

View on GitHub
10 lines to integrate
0 keys on your server
Any model, user's choice

Works with your favorite AI providers

Ollama
Claude
OpenAI
Gemini
Bedrock
Perplexity
LM Studio
Vertex AI
Grok
Foundry
Why Arlopass

Everything you need for safe AI access

Arlopass gives users and developers a single layer for model selection, credential security, and per-app permissions — no server changes required.

Security

Your keys, your device

API credentials are encrypted in a local vault on your device — never in the browser, never on a server. AES-256-GCM encryption at rest.

Developer experience

10-line integration

Add AI to any web app with a few lines of code. No backend proxy, no key management.

Providers

Any provider, any model

Switch between Claude, GPT, Gemini, Ollama, and more — with one click.

Permissions

You decide who gets access

Approve connections once, set rate limits, toggle providers — all from one place.

How it works

Up and running in under a minute

No backend to deploy, no accounts to create. Just three quick steps.

01

Install Arlopass

Add the browser extension from the Chrome Web Store. It takes 30 seconds. No accounts, no cloud, no sign-up.

02

Connect your providers

Link your AI accounts — Ollama, Claude, GPT, Gemini, Bedrock. Credentials are encrypted in a local vault on your device, never in the browser.

03

Approve and go

When a web app wants to use AI, you see exactly what it needs. Pick your model, approve the connection, and it just works from there.

For developers

Add AI to your app in 10 lines.

Install @arlopass/web-sdk, call connect() and chat.stream(). The user's Arlopass handles provider selection, auth, and routing. You never see or store an API key.

  • Zero key management — no server proxy, no .env files
  • TypeScript-first, Zod-validated, async iterator streaming
  • User switches providers — your code doesn't change
  • Free infrastructure — users bring their own AI subscriptions
Read the docs
app.ts
import { connect, createChat } from '@arlopass/web-sdk';

// Connect to the user's Arlopass extension
const session = await connect();

// Create a streaming chat
const chat = createChat(session);

// Send a message — user picks the model
const stream = chat.stream('Explain quantum computing simply.');

for await (const chunk of stream) {
  document.body.append(chunk.text); // render tokens as they stream in
}
Built for privacy

Your keys stay home. Always.

No more pasting API keys into apps you don't fully trust. Arlopass keeps your credentials encrypted in a vault on your device — the app gets AI, you keep control.

Local storage

Keys never leave your device

API credentials are encrypted in a vault file on your device — secured by AES-256-GCM with your master password or OS keychain. Never in the browser, never on a server.
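The at-rest encryption described above can be sketched with Node's built-in crypto module. The scrypt key derivation and the record layout below are illustrative assumptions, not Arlopass's actual vault format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto';

// Encrypt one credential with AES-256-GCM, deriving the key from a master password.
function sealCredential(masterPassword: string, apiKey: string) {
  const salt = randomBytes(16);
  const key = scryptSync(masterPassword, salt, 32); // derive a 256-bit key
  const iv = randomBytes(12);                       // 96-bit GCM nonce
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(apiKey, 'utf8'), cipher.final()]);
  return { salt, iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decrypt, re-deriving the key; GCM's auth tag rejects any tampered record.
function openCredential(masterPassword: string, sealed: ReturnType<typeof sealCredential>) {
  const key = scryptSync(masterPassword, sealed.salt, 32);
  const decipher = createDecipheriv('aes-256-gcm', key, sealed.iv);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.ciphertext), decipher.final()]).toString('utf8');
}
```

Because GCM is authenticated encryption, a flipped bit anywhere in the stored record makes decryption fail loudly instead of returning a corrupted key.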

Full visibility

See every request

Every AI request passes through your Arlopass. You see exactly what the app is asking, which model will respond, and where data flows.

Zero cloud

Zero cloud dependency

Arlopass has no backend, no user accounts, no telemetry. The extension runs locally. Pair it with Ollama for fully offline, zero-exposure AI.

1. App requests AI: via the SDK
2. You approve: choose model & provider
3. Securely routed: keys never exposed
What you get

Built around what matters to you

Privacy, choice, and developer experience — without compromise.

Access control

You decide who gets access

When a web app wants AI, you see a clear prompt showing what it needs. Approve the connection once, pick your model, and the app is good to go.

Multi-provider

Switch providers freely

Use Claude for one app, GPT for another, Ollama for everything sensitive. Switch with a click — no code changes, no reconfiguration.

Local-first

Local model support

Connect Ollama or LM Studio for fully local, zero-cloud AI. Your prompts never leave your machine. Everything stays on your hardware.

Governance

Policy-based governance

Signed policy bundles enforce approved providers, model restrictions, and data-handling rules. Ed25519-signed, with JSONL audit trails.
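Signature-checked policy enforcement can be sketched with Node's crypto module; the bundle fields and key handling below are hypothetical stand-ins, not Arlopass's actual policy format:

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// An administrator's signing keypair (illustrative; real deployments would
// distribute only the public key to clients).
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// A policy bundle: approved providers and model restrictions, serialized as JSON.
const bundle = Buffer.from(JSON.stringify({
  approvedProviders: ['ollama', 'anthropic'],
  modelRestrictions: { anthropic: ['example-model'] }, // hypothetical model id
}));

// Ed25519 signs the message directly (no separate digest step), so the
// algorithm argument is null.
const signature = sign(null, bundle, privateKey);

// A client enforcing the policy verifies the signature before applying it;
// any edit to the bundle bytes invalidates the signature.
const ok = verify(null, bundle, publicKey, signature);
```

An audit trail in JSONL form is then just one JSON object per line, each recording a request, the policy decision, and a timestamp.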

Universal

Works everywhere

Any web app can integrate Arlopass. Install the extension once, use it across every Arlopass-enabled site. One pass, every app.

Open source

Open source, MIT licensed

The protocol, SDK, extension, and every adapter — all MIT licensed. Use it, fork it, extend it. AI access on the web should be a standard, not a moat.

Why Arlopass

Other tools ask you to trust them with your keys. Arlopass lets you keep them.

Capability                     | Arlopass       | Paste API Key | AI Gateways   | Vendor SDKs
User chooses the model         | ✓              | Depends       | ✗             | ✗
Keys never leave user's device | ✓              | ✗             | ✗             | ✗
Works with local models        | ✓              | Rarely        | Sometimes     | ✗
Zero server infrastructure     | ✓              | ✗             | ✗             | ✗
Integration effort             | ~10 lines      | ~40+ lines    | ~30 lines     | ~15 lines
Provider switching             | 0 code changes | Full rewrite  | Config change | Moderate
User approves app connection   | ✓              | ✗             | ✗             | ✗
Open source                    | ✓              | Varies        | Some          | Some
FAQ

Frequently asked questions

What is Arlopass?

Arlopass is an open-source browser extension and developer SDK that lets web applications use a user's own AI providers — like Ollama, Claude, GPT, or Bedrock — without the application ever seeing, touching, or storing the user's API keys. API keys routinely leak through client-side code, server logs, and misconfigured deployments. Arlopass eliminates this entire attack surface by keeping credentials in an encrypted vault on the user's device.

How does Arlopass keep my API keys safe?

Your API keys are encrypted in a vault file on your device using AES-256-GCM, secured by a master password or your OS keychain (Windows Credential Manager, macOS Keychain, or Linux Secret Service). They never enter browser storage and never leave your device. When a web app requests AI access, the request routes through the native bridge on your machine, which injects credentials at the last moment. The app receives only the AI response, never your credentials.

Which AI providers does Arlopass support?

Arlopass supports Ollama (local models), Anthropic Claude (including subscription-based access), OpenAI GPT models, Google Gemini, Amazon Bedrock, Google Vertex AI, Perplexity, and Microsoft Foundry. The adapter system is open — any provider with an API can be supported by writing a single TypeScript adapter file.

How do developers integrate Arlopass into their apps?

Developers add the @arlopass/web-sdk package and use three main functions: connect() to establish a session with the user's extension, createChat() to create a conversation, and chat.stream() or chat.send() to exchange messages. The entire integration is approximately 10 lines of code with full TypeScript support, Zod-validated schemas, and async iterator streaming.

Is Arlopass free?

Yes. Arlopass is 100% free and open source under the MIT license. The browser extension, SDK, protocol, and all provider adapters are free to use, modify, and distribute. Users bring their own AI provider subscriptions — Arlopass itself has no pricing tiers, no premium features, and no cloud infrastructure costs.

Can I use Arlopass with local AI models?

Absolutely. Arlopass has first-class support for Ollama and LM Studio, enabling fully local, zero-cloud AI in any web application. When using local models, your prompts and responses never leave your machine — achieving true zero-exposure AI access with no internet dependency.

Ready to take control?

Install Arlopass, connect your AI providers, and start using AI on your terms. Free, open source, and ready right now.

Read the quickstart

Coming soon

We've submitted Arlopass to the browser extension stores and are waiting for approval. Store listings will be available soon — check back shortly.

In the meantime, you can install from source on GitHub.