Why this exists
A small set of tools I've been building to make AI-assisted work on RemoteLock code less painful — starting with connect-web 3.0.
Using agents on a real codebase is harder than the demos make it look. Not because the models can't write code, but because the workflow around them is missing pieces.
What gets in the way
- Limited visibility into what an agent is doing. When you set one off on a real task, you want to see what it's working on, what it's about to commit, and where it got confused — not just read logs after the fact.
- Scattered knowledge. The "why" behind decisions lives across Jira, GitHub PR threads, Confluence, Sentry, Slack. Hard for humans to dig up; harder for agents.
- Local envs can't reach the integrations that matter. connect-backend talks to a long list of external systems that don't run on a laptop. Validating real changes against real services takes more than docker-compose.
What's here
Three of the linked tools chip away at the above directly:
- Multica — workspace for watching and steering multiple agents at once across the cloned repos.
- RAG — ingests GitHub, Jira, Confluence (Sentry and Slack next) and exposes semantic search via MCP, so an agent can actually find the rationale for past decisions.
- Review Apps — spins up real ECS environments per branch with the real integrations wired up, instead of trying to fake them locally.
Release Train is along for the ride — not a blocker, but a nice-to-have that takes some of the mechanical coordination off senior engineers' plates. The rest — Glitchtip, Sentry, Grafana, Jira, Confluence, AWS — are existing tools the experiment relies on, surfaced from the same page so they're one click away.
Where it lives today
The running services — Multica, Review Apps, Release Train, Glitchtip, this landing page — share a single EC2 instance, the agent-box, at dev.remotelock.com.
The repos behind them and the code that ships them to the box (the ai-deploy Terraform project, the sync and configure scripts, RAG itself) currently live only on my laptop. Nothing is in a shared GitHub org yet; deploys are me running a script. That's deliberate for now: once a pattern proves itself, it can move somewhere the team can iterate on it.
How it's set up
One EC2 instance (t3.large) in the dev AWS account, fronted by nginx with one Let's Encrypt cert covering every subdomain. Each tool runs as its own systemd unit on a unique localhost port behind a predictable URL: <name>.dev.remotelock.com. Secrets come from AWS SSM at boot via /etc/ai-deploy.env; nothing lives in the repos.
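The per-tool nginx side of that pattern might look roughly like this; the tool name, port, and cert paths here are placeholders for illustration, not the actual config:

```nginx
# Hypothetical vhost for a tool "exampletool" on localhost:8085.
server {
    listen 443 ssl;
    server_name exampletool.dev.remotelock.com;

    # One Let's Encrypt cert covers every subdomain
    ssl_certificate     /etc/letsencrypt/live/dev.remotelock.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dev.remotelock.com/privkey.pem;

    location / {
        # Each tool listens only on its own localhost port
        proxy_pass http://127.0.0.1:8085;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The point of the shape: TLS terminates once at nginx, and every tool is just another `server` block proxying to a local port, so nothing needs its own cert or public listener.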
The whole thing is one Terraform project (ai-deploy/) plus a few rsync-and-systemctl deploy scripts. The instance is protected with prevent_destroy because the local disk holds Postgres data, Multica workspace state, and certs that aren't trivially recoverable. Adding a new tool is a DNS record, an nginx vhost, a systemd unit, and a tile on the home page — usually an afternoon.
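The guardrail on the instance is Terraform's `lifecycle` block; a minimal sketch, with illustrative attribute values rather than the real ai-deploy/ resource:

```hcl
# Hypothetical shape of the agent-box resource in ai-deploy/.
resource "aws_instance" "agent_box" {
  ami           = var.ami_id   # placeholder; real AMI lookup not shown
  instance_type = "t3.large"

  lifecycle {
    # Local disk holds Postgres data, Multica workspace state, and certs,
    # so refuse any plan that would destroy or replace the instance.
    prevent_destroy = true
  }
}
```

With `prevent_destroy = true`, `terraform apply` errors out instead of executing any plan that would delete the instance, which is the whole protection being described.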