📄️ End-to-end baseline 2026-04-28
Snapshot of the full ingest → dbt → frontend pipeline taken right after PLAN-003 merged. This is the known-good starting point for further development. Re-run the same sweep before any major refactor to catch regressions.
📄️ AGENT runbook — onboard a new data source
This is the autonomous-agent runbook for adding one upstream data source to Atlas's ingest pipeline. It is written to be read by a Cursor Background Agent (or similar cloud agent) running in a sandbox VM with a fresh repo clone. Humans should read contributors/adding-a-source.md instead — that is the canonical 11-step workflow, written for humans. This file is the agent-shaped projection of the same workflow, with explicit invariants, gates, and escalation paths.
📄️ Working Inside the DevContainer
All projects use the DevContainer Toolbox (DCT) for development. The AI must run all commands inside the devcontainer, never on the host machine.
📄️ Git Safety Rules and Operations
Git operations require user confirmation. The AI must never run destructive git commands without explicit approval.
📄️ Implementation Plans
How we plan, track, and implement features and fixes.
📄️ AI Developer Guide
Instructions for AI coding assistants working on this project.
📄️ Talk — AI-to-AI Testing Protocol
Talk is a file-based communication protocol that enables two separate AI coding sessions to collaborate on testing. One session develops and builds, the other tests as a fresh user. They communicate by appending messages to a shared talk.md file.
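The append-to-a-shared-file mechanism could be sketched as follows. This is a minimal illustration, not the actual Talk implementation: the `## [role] timestamp` message header, the `builder`/`tester` role names, and the `post` helper are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the Talk protocol: two AI sessions
# collaborate by appending role-tagged, timestamped messages
# to a shared talk.md file that both sessions poll for updates.
from datetime import datetime, timezone
from pathlib import Path

TALK_FILE = Path("talk.md")  # shared file visible to both sessions

def post(role: str, body: str) -> None:
    """Append one message; the header format here is an assumption."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%MZ")
    with TALK_FILE.open("a", encoding="utf-8") as f:
        f.write(f"\n## [{role}] {stamp}\n{body}\n")

# One round of the exchange: the builder hands off, the tester reports back.
post("builder", "Build is green; please test the signup flow as a fresh user.")
post("tester", "Signup works, but the confirmation link 404s.")
```

Because each session only ever appends, the file doubles as a chronological transcript of the whole test run.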
📄️ Plan to Implementation Workflow
How ideas become implemented features.
📄️ Running Two Claude Sessions in the Same Repo
Running multiple Claude Code (or Claude VS Code) sessions against the same repository.
🗃️ plans
2 items
📄️ Project: Atlas
Atlas is an organisation-neutral information platform that aggregates public data about every large Norwegian NGO. The product surface is a Next.js App Router web app at atlas.helpers.no (TypeScript, React Server Components, Digdir Designsystemet for UI, MapLibre for maps). The data behind it is produced by a separate pipeline (atlas-data/) that ingests Norwegian public-sector sources (SSB, FHI, Brreg, Kartverket, Bufdir, IMDi, NAV, Lottstift, Innsamlingskontrollen, …), transforms them through dbt, and serves them as marts.* tables in PostgreSQL.