Investigate: Public API surface — dogfood pattern, technology choice, contract location
IMPLEMENTATION RULES: Before implementing this plan, read and follow:
- WORKFLOW.md - The implementation process
- PLANS.md - Plan structure and best practices
Status: Completed (2026-04-30)
This parent investigation is functionally resolved. The end-to-end Atlas → PostgREST → real-data curl path was verified live against UIS rancher-desktop on 2026-04-30 (see talk/talk.md Messages 1–4). Implementation arc:
| Question raised here | Resolved by |
|---|---|
| Technology choice for the public HTTP API | PostgREST — Atlas-side wrapper layer in PLAN-004; UIS-side runtime in helpers-no/urbalurba-infrastructure PR #132 (PLAN-002) + #135 (set -e regression fix). |
| Contract location (`marts.*` directly vs wrapper) | `api_v1.*` wrapper views — design rationale in the follow-up INVESTIGATE-postgrest-api-v1-wrapper.md, built by PLAN-004. |
| v1 hosting / auth posture | Anonymous-only PostgREST direct exposure — verified working on api-atlas.localhost with Swagger 2.0 metadata, view rows, hidden-table 404, CORS preflight. |
| [Q19] in INVESTIGATE-semantic-foundation-before-expansion.md ("API now or later?") | Now — resolved by completing this surface. |
What's still open (carried forward into separate plans, not blocking this investigation):
- PLAN-E (Next.js dogfood migration) — frontend swaps from direct `marts.*` Postgres reads to PostgREST `api_v1.*` HTTP calls.
- JWT / Authentik auth layer on PostgREST — UIS PLAN-004 on the urbalurba side.
- API gateway insertion (Gravitee local / APIM prod) — v1.5+ when external consumer volume justifies it.
- FK embeds (`?select=*,kommune(*)`) — deferred per PLAN-004 [Q10]; needs Postgres FK constraints retrofitted across `marts.*`.
Last Updated: 2026-04-30 — moved backlog/ → completed/
Origin: A late-stage decision in the semantic-foundation thread changed the calculus for the public API. Atlas's Next.js frontend will be migrated to call the same API external consumers use — the "dogfood your own API" pattern. This shifts three things in the existing plans:
- PLAN-C in INVESTIGATE-semantic-foundation-before-expansion.md is no longer deferred. The contract surface (the `marts.*` shape that consumers depend on) becomes load-bearing the day Next.js migrates — not "when Tilskuddsmatcher lands."
- [Q19] in that plan ("API now or later?") is resolved → now.
- The MCP-first decision in PR #18 stays correct, complementary to the HTTP API:
  - dbt MCP is the agent interface — Claude / GPT / any MCP client doing exploratory semantic queries against `manifest.json` + Discovery + (read-only) Postgres MCP.
  - HTTP API is the application interface — Next.js, Tilskuddsmatcher, future external devs querying for specific data.
  - Both read from the same `marts.*` and the same conformed dimensions. They serve different access patterns; neither replaces the other.
This investigation is the API-side counterpart to the semantic-foundation work.
Infrastructure context — v1 vs. what's available later
For v1: no API gateway, no auth. PostgREST is exposed directly to consumers via Cloudflare Tunnel. Public read-anonymous. Same surface for Atlas's Next.js (dogfood) and any external consumer.
The Atlas project runs on infrastructure that can provide gateway/auth when needed — but that's a v1.5+ insertion, not a v1 dependency. Capturing it here so future readers know what's already available:
| Concern | Local development (Rancher Desktop k8s, via UIS) | Production (Azure) |
|---|---|---|
| Identity / SSO (later) | Authentik — already in UIS | Okta |
| API gateway (later) | Gravitee — already in UIS | Azure API Management (APIM) |
| Compute (v1) | k8s pods (Rancher Desktop) | Azure Container Apps |
| Database (v1) | Postgres in k8s | Postgres (UIS / Azure-managed) |
What this means for v1:
- PostgREST is the only API service. Single read-only Postgres role.
- No rate-limit, no auth, no per-tenant policies. Public + anonymous, period.
- Cloudflare Tunnel exposes PostgREST at `api.atlas.helpers.no` (or similar) with HTTPS termination.
- OpenAPI auto-generated by PostgREST is published; consumers (Atlas Next.js + external devs) read it directly.
What changes when v1.5+ triggers fire (rate-limit needed, keyed user materialises, write endpoints arrive):
- Insert Gravitee (local) / Azure APIM (prod) in front of PostgREST. They import the OpenAPI spec; the API service stays auth-unaware.
- Wire Authentik (local) / Okta (prod) as the OAuth provider for keyed/auth endpoints. Per-endpoint policies (public-anon vs OAuth-keyed) configured at the gateway.
- Atlas's API service code does not change — only the deployment topology does.
This deferred-but-known posture matches the docs/stack/suggested-stack.md updated 2026-04-27.
Questions to Answer
- [Q1] Confirmed dogfood — Next.js migrates from `marts.*` direct reads to the same API external consumers use. Which migration shape: same PR as the API stand-up (high-risk, definitive), or a follow-up PR after the API stabilises (lower-risk, transitional)?
- [Q2] Which API technology family: auto-API on Postgres (PostgREST / Hasura / Postgraphile), custom standalone service (Fastify/Hono TS, FastAPI Python), or Cube (semantic layer with multi-protocol API)?
- [Q3] Where does the public contract live — `manifest.json` (dbt-side), OpenAPI spec (API-side), or ODCS v3 (vendor-neutral, generated from one of the above)?
- [Q4] Auth model — v1 resolved: none. Public + anonymous read-only API. The auth-handling infrastructure (Authentik / Okta) is available for v1.5+ when keyed/OAuth users land — at that point the gateway (Gravitee / APIM) gets inserted in front of PostgREST. Until then, no auth in v1.
- [Q5] Versioning strategy — URL path (`/v1/`), `Accept` header, or both? Atlas is pre-v1; this matters more for v2 onwards.
- [Q6] Hosting — local: k8s pod under UIS alongside other services; prod: Azure Container Apps. Resolved by infrastructure context. Domain naming (`api.atlas.helpers.no` vs. a path under `atlas.helpers.no/api/`) is still open.
- [Q7] Norwegian localisation — `Accept-Language` header switching, hardcoded Norwegian responses, or dual `*_no` / `*_en` fields per row?
- [Q8] Connection management — pgBouncer or similar in front of Postgres? At what consumer count does this matter?
- [Q9] Cache strategy — v1: none initially. Cloudflare Tunnel may add edge caching for free; PostgREST itself doesn't cache. Aggressive TTLs become available when the gateway lands in v1.5+. Most marts data is static between dbt runs (daily/weekly), so caching is straightforward when wired in.
- [Q10] Read-only or eventual write endpoints? "Meld feil" path in the goal doc could be a write endpoint — but that triggers the v1.5+ auth/gateway insertion, since you don't want unauthenticated write endpoints exposed publicly.
- [Q11] Rate limiting — v1: none in-application. Cloudflare may rate-limit at the edge (default DDoS protection). Application-level rate-limiting waits for the gateway in v1.5+.
- [Q12] Schema evolution — backwards-compatibility guarantees? "We add fields, never remove" is the cheap rule; "stable for 12 months minimum" is stronger.
- [Q13] Error envelope shape — RFC 7807 problem-details, custom JSON, or no envelope (just HTTP status)?
- [Q14] GraphQL vs. REST? Or both (e.g. Hasura gives both)? Atlas's data is dimension-and-fact shape — REST queries get awkward beyond simple lookups; GraphQL handles cross-table well.
- [Q15] Norwegian public-sector API conventions worth adopting? Digdir publishes guidance for API design — worth a 30-minute scan.
- [Q16] When does the dbt MCP path and the HTTP API path converge for the same consumer? E.g. an LLM agent that first calls dbt MCP to understand schema, then calls the HTTP API to fetch data — does Atlas need to expose both pathways consistently, or is the LLM expected to pick one?
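For [Q13], the RFC 7807 option has a small fixed shape worth seeing up front. A hedged sketch of what an Atlas problem-details payload could look like (the `type` URI and the field values are invented for illustration; the member names are RFC 7807's):

```typescript
// RFC 7807 "problem details": a standard error envelope served as
// application/problem+json with a small set of well-known members.
interface ProblemDetails {
  type: string;      // URI identifying the class of problem
  title: string;     // short human-readable summary
  status: number;    // HTTP status code
  detail?: string;   // occurrence-specific explanation
  instance?: string; // URI of this specific occurrence
}

// Hypothetical Atlas example: an unknown kommune number.
const problem: ProblemDetails = {
  type: "https://api.atlas.helpers.no/problems/unknown-kommune",
  title: "Unknown kommune_nr",
  status: 404,
  detail: "No kommune with kommune_nr 9999 exists in dim_kommune.",
  instance: "/kommuner/9999",
};
console.log(JSON.stringify(problem));
```

The no-envelope alternative (just the HTTP status) is what PostgREST does by default, so choosing RFC 7807 would imply shaping errors at a later gateway or wrapper layer.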
Current state
What consumes marts.* today
- Atlas's own Next.js frontend — direct reads via `atlas-frontend/src/lib/db.ts` using `postgres.js` and a read-only Postgres role. 15 routes, all server-rendered, all reading directly. See `atlas-frontend/src/lib/indicators.ts` and `atlas-frontend/src/lib/supply.ts` for query patterns.
- No external consumer. The "Dev" persona in personas.md is tertiary and currently speculative.
What API needs are emerging
- Tilskuddsmatcher / Lisa (goal.md:153) — if Lisa-first wins as the v1 wedge, she's the first external-shaped consumer. Her workflow involves filtering open grant calls against need indicators per kommune — exactly the cross-source pattern `fact_kommune_indicators` was built for.
- Atlas's own next-generation features — Storm mode (Lars persona) needs FRR resources + weather warnings overlaid. Coverage-gap explorer needs cross-source queries. Both fit the same query patterns external consumers would want.
- Public-good positioning (goal.md:88) — "valuable as a public good on its own — for journalists, researchers, policy planners". An API is how that promise becomes real.
What the dogfood pattern actually buys
Critical: Atlas's own frontend exercising the same surface external devs would means:
- Bugs in API shape get caught by Atlas's own dev work, not by external complaint
- Latency, error-handling, edge-case behaviour all get hardened through Atlas's own use
- The contract is real, not theoretical — it's load-bearing for `atlas.helpers.no` itself
- External consumers see the API as it actually performs, not as it was designed
This is the pattern Stripe, Twilio, and AWS use; it's the strongest signal of API maturity.
Three option families, compared
Option A — Auto-API on Postgres (PostgREST / Hasura / Postgraphile)
The DB schema is the API. Views in marts.* become endpoints; Postgres row-level security (RLS) handles auth; OpenAPI generated automatically.
Pros:
- Near-zero code. Days to stand up, not weeks.
- Atlas's read-heavy, dimension-shaped data fits this exactly. `marts.fact_kommune_indicators` becomes `GET /fact_kommune_indicators?kommune_nr=eq.0301&year=eq.2023`.
- Generated OpenAPI for free (PostgREST writes it from schema introspection).
- Battle-tested at scale (PostgREST is used by Supabase under the hood; Hasura by Netflix, Atlassian).
- Aligns with the project-atlas.md "dbt-deterministic-not-interpretive" doctrine — the API stays a thin projection of marts.
Cons:
- API shape is constrained by DB shape. Aggregations, computed fields, response envelopes get awkward.
- "Filtered list of kommuner with 5 specific indicators" becomes a complex multi-call dance from the client, vs. one custom endpoint.
- Norwegian localisation in the response payload (`label_no` / `label_en` switching by `Accept-Language`) requires per-view duplication or PostgREST's stored-procedure RPC pattern.
- Hasura adds a paid-tier nudge for advanced features; PostgREST is fully free but has a smaller community than Hasura.
Cost estimate: ~3-5 days to a working v1 (one Docker container, profile config, schema views, RLS).
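The "schema is the API" point can be made concrete: PostgREST's filter syntax maps column/operator/value triples onto the query string, so a client can assemble endpoints mechanically. A hedged TypeScript sketch (the helper and base URL are invented; the `column=op.value` format is PostgREST's documented filter convention):

```typescript
// Build a PostgREST query string from column filters.
// PostgREST filters look like `?column=op.value`, e.g. `?kommune_nr=eq.0301`.
type Filter = { column: string; op: "eq" | "gte" | "lte" | "like"; value: string };

function postgrestUrl(base: string, view: string, filters: Filter[]): string {
  const qs = filters
    .map((f) => `${encodeURIComponent(f.column)}=${f.op}.${encodeURIComponent(f.value)}`)
    .join("&");
  return `${base}/${view}${qs ? `?${qs}` : ""}`;
}

// The example from the text: one fact view, two equality filters.
const url = postgrestUrl("https://api.atlas.helpers.no", "fact_kommune_indicators", [
  { column: "kommune_nr", op: "eq", value: "0301" },
  { column: "year", op: "eq", value: "2023" },
]);
console.log(url);
// → https://api.atlas.helpers.no/fact_kommune_indicators?kommune_nr=eq.0301&year=eq.2023
```

No server code exists behind this: the endpoint comes into existence the moment the view is granted to the anonymous role.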
Option B — Custom standalone service (Fastify / Hono TypeScript, or FastAPI Python)
Hand-written API. Full control over shape, auth, error handling, localisation.
Pros:
- Response shape evolves with consumer needs, not DB shape.
- Norwegian-localisation done cleanly at the API layer.
- Computed fields (coverage scores, derived metrics) live where they belong.
- Same TypeScript stack as Atlas frontend (if Hono/Fastify) — shared types, shared deploy story.
- Future write endpoints (Meld feil, etc.) fit naturally.
Cons:
- Real code to write and maintain. ~10-20× the LOC of Option A for the same coverage.
- OpenAPI must be hand-generated (or via decorators / TypeBox / Zod-to-OpenAPI tooling).
- The contract surface lives in code, not config — drift risk between what `marts.*` produces and what the API exposes.
- Day-zero is weeks, not days.
Cost estimate: ~3-5 weeks to a working v1 covering current Next.js read patterns.
Option C — Cube (semantic layer + multi-protocol API)
Cube sits on top of dbt models. Speaks REST, GraphQL, and SQL. Metric definitions colocate with the API.
Pros:
- Purpose-built for "expose dbt marts as a queryable API" — exactly Atlas's situation.
- Multi-protocol out of the box.
- Caching, rate-limiting, auth all handled.
- AI-aware (Cube has explicit "agent" features for LLM consumers).
Cons:
- Another tool. Opinionated. Has its own modelling language (cube definitions) that overlaps with dbt.
- Free tier is fine for development; production / enterprise features behind a paywall.
- Forces an architectural commitment that constrains Atlas's choices later (especially around dbt's own evolving semantic-layer story per INVESTIGATE-semantic-foundation-before-expansion.md Q24).
- Atlas's data isn't BI-shaped — it's reference data. Cube's metric-layer strengths are mostly wasted.
Cost estimate: ~1-2 weeks to a working v1, plus ongoing per-cube maintenance.
The contract-location question (Q3)
Three places the public contract could live:
| Location | What it describes | Pros | Cons |
|---|---|---|---|
| `manifest.json` (dbt-side) | Models, columns, descriptions, tests, lineage | Auto-generated, structural, machine-readable | Describes DB shape, not API shape; not what consumers see |
| OpenAPI spec (API-side) | Endpoints, request/response shapes, auth, errors | What consumers actually see; tooling galore (codegen, mocking, testing) | Hand-written if Option B; auto-generated if A or C |
| ODCS v3 (vendor-neutral, on top of either) | Standard data-contract format; portable across tools | Future-proof; if Atlas migrates off dbt or off the chosen API tool, the contract survives | One more artifact to maintain |
Dogfood implication: with dogfooding, the API spec is what consumers actually see. OpenAPI is the right primary contract location; manifest.json becomes implementation detail; ODCS becomes the long-term portable form, generated from the OpenAPI spec.
This is a flip from the existing semantic-foundation plan, which proposed ODCS-from-manifest.json. The reasoning that flips it: with dogfooding, the API surface is real and consumed; the DB shape is a step removed from consumers.
Auth model — resolved (was [Q4])
v1: none. Public + anonymous read-only API. Same surface for Atlas's own Next.js (dogfood) and any external consumer.
When v1 outgrows public-anonymous, the path is clear:
| Trigger | What gets added |
|---|---|
| First real keyed user (e.g. Lisa) | Insert Gravitee (local) / APIM (prod) gateway; wire Authentik (local) / Okta (prod) as OIDC provider; configure OAuth policy on the relevant endpoints. The PostgREST service itself doesn't change. |
| Public abuse / rate-limit pressure | Same gateway insertion; configure IP-based rate-limit policy. |
| Write endpoints (Meld feil, etc.) | Same gateway insertion; require auth on write paths. Read endpoints stay public-anonymous. |
The takeaway: v1 is "none, sit behind Cloudflare Tunnel." The gateway pattern is well-understood and the infrastructure is already provisioned (UIS / Azure) — but inserting it is a v1.5+ change, not a v1 prerequisite.
Recommendation candidates (to discuss, not yet chosen)
Tentative pick: Option A (PostgREST), with three explicit guardrails:
- Wrap PostgREST in a thin Hono service later if response-shaping becomes the dominant concern. Don't migrate prematurely; let the dogfood discipline reveal what's missing.
- Use database views for response shaping — Norwegian-localised labels joined inline, computed fields, response envelopes — at the `marts.*` layer, not in API code. Keeps the "dbt is deterministic" doctrine intact.
- OpenAPI as the canonical contract, generated from PostgREST's introspection. Published statically alongside the API. ODCS v3 generated from OpenAPI when external portability matters.
Why PostgREST is right for v1:
- It is the tool for "expose Postgres views as a REST API with auto-generated OpenAPI" — exactly Atlas's situation.
- A 3-5 day stand-up: Helm chart + read-only role + a few `marts.*` views + Cloudflare Tunnel exposure.
- Single binary, low ops, well-trodden path (Supabase uses it under the hood).
- Public-anonymous + read-only matches v1's posture: no auth complications, no rate-limit complications, no write-path complications.
- When v1.5+ triggers add the gateway, PostgREST stays exactly the same — Gravitee/APIM just gets inserted upstream. Zero refactor of API code.
Why PostgREST over Hasura: simpler, fully free, smaller surface area to learn. Hasura's GraphQL story is nice but Atlas's data shape is REST-friendly anyway. Choose Hasura if/when GraphQL becomes a real consumer demand.
Why not Option B initially: 3-5 days vs. 3-5 weeks of dev cost is a 5-10× delta. The dogfood model means we'll learn fast about what's missing — start cheap, evolve based on real signal. If/when response shaping becomes the dominant concern, wrap PostgREST in Hono later (per guardrail 1), or migrate the affected endpoints. Don't pre-optimise.
Why not Option C: dbt's own semantic-layer evolution per [Q24] in the parent plan introduces tool risk. Cube would be a parallel modelling effort that conflicts with that path. Atlas's data is reference data, not BI metrics — Cube's strengths are mostly wasted.
But this is genuinely a choice, not a default. Option B is right if you already know the Next.js API needs computed fields and response envelopes that PostgREST can't easily provide, OR if you'd rather build the surface deliberately than retrofit it.
Per-route audit (2026-04-27): what backing the existing 15 routes actually requires
Walked every route in `atlas-frontend/app/*` and every query in `atlas-frontend/src/lib/{indicators,supply,db}.ts` to categorise PostgREST-readiness. Categories: 🟢 trivial filter, 🔵 embedded join (PostgREST `?select=...,nested(...)`), 🟡 needs a new dbt view.
| Route | Query pattern | Category | Backing artifact |
|---|---|---|---|
/ | static | — | — |
/data | listIndicators() — CTE max(year) per indicator + group + count(filter) + min/max/upstream_updated | 🟡 | mart_indicator_summary |
/data/[source_id]/[contents_code] | loadIndicatorValues() — fact filtered to latest year per indicator | 🟡 | mart_indicator_latest_values |
/data/[source_id]/[contents_code] | listMissingKommuner() — active kommuner with no value at latest year | 🟡 | mart_indicator_missing_kommuner |
/coverage-gap/barnefattigdom | CTE latest-year + self-join EU60+Personer | 🟡 | mart_coverage_gap_barnefattigdom |
/kommuner/[kommune_nr] | dim_kommune filter, dim_fylke filter, fact_kommune_indicators filter | 🟢 | direct PostgREST queries |
/kommuner/[kommune_nr] | listChaptersInKommune() — distinct multi-join 5 tables | 🟡 | mart_kommune_local_chapters |
/ngo | listNgos() — dim_ngo left-join active-chapter count subquery | 🟡 | mart_ngo_index |
/ngo/[slug] | getNgoBySlug() — simple slug lookup | 🟢 | ?slug=eq.X |
/ngo/redcross | getNgoOverview() — 6 count subqueries (chapters by level, activities, distinct kommuner) | 🟡 | mart_ngo_overview |
/ngo/redcross/aktiviteter | listActivities() — dim_activity + service category + chapter-count subquery | 🟡 | mart_activity_catalog |
/ngo/redcross/aktiviteter | listServiceCategories() — simple ref select | 🟢 | direct |
/ngo/redcross/chapters | listChapters(filters) — chapter+parent+kommune+fylke + optional service-category EXISTS | 🔵 + 🟡 | embedded for most filters; mart_chapters_with_service_categories if service-category filter is kept |
/ngo/redcross/chapters | listFylker() — dim_fylke is_active filter | 🟢 | direct |
/ngo/redcross/chapters/[chapter_id] | getChapterDetail() — 4 queries (chapter+kommune+fylke, parent, children, activities-with-categories) | 🔵 | combinable to 1-2 PostgREST calls via embedded resources, or one mart_chapter_detail view |
/ngo/redcross/distrikter | listDistrikter() — distrikt + child-count + distinct-kommune-coverage subquery | 🟡 | mart_distrikt_summary |
/ngo/redcross/distrikt/[distrikt_id] | getDistriktDetail() — distrikt + children + distinct(kommune_nr) + distinct(service_category) | 🔵 + 🟡 | embedded for children; mart_distrikt_overview for stats |
| /admin/supply/redcross-branches | validation counts (multiple select count(*) from X where Y) | 🟢 | `Prefer: count=exact` per query, or a one-off view |
Summary
| Category | Count | Action |
|---|---|---|
| 🟢 Trivial PostgREST | 5 query patterns | Just configure FKs and exposed schema |
| 🔵 Embedded join | 2-3 query patterns | PostgREST `?select=...,nested(...)` |
| 🟡 Needs new `mart_<feature>` view | ~9 distinct views | New dbt models in `marts/api/` (or flat under `marts/`) |
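The 🔵 category leans on PostgREST resource embedding, where `select` nests FK-related rows into each parent row instead of requiring a client-side join. A hedged sketch of building such a path (the view, column, and relationship names are illustrative; real embeds depend on the FK constraints noted as deferred under PLAN-004 [Q10]):

```typescript
// PostgREST resource embedding: `?select=col,related_table(cols)` nests
// FK-related rows into each parent row, replacing a client-side join.
function embedSelect(view: string, columns: string[], embeds: Record<string, string[]>): string {
  const parts = [
    ...columns,
    ...Object.entries(embeds).map(([table, cols]) => `${table}(${cols.join(",")})`),
  ];
  return `/${view}?select=${parts.join(",")}`;
}

// Hypothetical: chapters with their kommune and fylke rows embedded.
const path = embedSelect("dim_chapter", ["chapter_id", "name"], {
  dim_kommune: ["kommune_nr", "kommune_name"],
  dim_fylke: ["fylke_nr", "fylke_name"],
});
console.log(path);
// → /dim_chapter?select=chapter_id,name,dim_kommune(kommune_nr,kommune_name),dim_fylke(fylke_nr,fylke_name)
```

One such call can replace the multi-query patterns in routes like `getChapterDetail()`, which is why the audit marks them 🔵 rather than 🟡.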
mart_* views this audit produces
Naming follows the docs/stack/naming-conventions.md mart_<feature> pattern (feature-named, not entity-named):
```
atlas-data/dbt/models/marts/api/             -- new subfolder once 5+ exist; flat until then
├── mart_indicator_summary.sql               -- /data
├── mart_indicator_latest_values.sql         -- /data/[source_id]/[contents_code]
├── mart_indicator_missing_kommuner.sql      -- /data/[source_id]/[contents_code]
├── mart_coverage_gap_barnefattigdom.sql     -- /coverage-gap/barnefattigdom
├── mart_kommune_local_chapters.sql          -- /kommuner/[kommune_nr]
├── mart_ngo_index.sql                       -- /ngo
├── mart_ngo_overview.sql                    -- /ngo/redcross
├── mart_activity_catalog.sql                -- /ngo/<slug>/aktiviteter
├── mart_distrikt_summary.sql                -- /ngo/<slug>/distrikter
└── mart_chapters_with_service_categories.sql  -- only if the service-category filter on the chapters page is kept
```
Three observations from the audit
- The "API-shape" views are pre-aggregated reads. They take what the Next.js code currently does in inline CTEs and ad-hoc joins, and persist them as
marts.*views. Extends the dbt doctrine cleanly: query logic lives in dbt, API stays projection. The naming-conventions doc captures this pattern under "When to add a newmart_<feature>". - Naming convention emerges naturally. Feature-named (
mart_coverage_gap_barnefattigdom,mart_ngo_overview) rather than entity-named (mart_dim_ngo_with_chapter_count✗). One row per consumer-meaningful natural key. - PLAN-D (stand up PostgREST) is small (~3-5 days). The new dbt views (~9 models) and Next.js migration (~2-3 weeks) are the real work. The phased plan below splits accordingly.
Recommended phased plan (subject to revision)
[Q17] PLAN-D.1 — Add API-shaped mart_* views (week 1)
Add the ~9 dbt models the audit identified, under atlas-data/dbt/models/marts/ (flat in v1; promote to marts/api/ subfolder once there are 5+). No frontend or API changes yet — Atlas's existing direct-SQL keeps working.
Tasks:
- Pick Option A/B/C (settle [Q2]) — needed to confirm view shapes are PostgREST-friendly.
- For each of the ~9 mart views from the audit: write the SQL, add a `schema.yml` description + tests, run `dbt build` to verify.
- Verify each new view materialises correctly and matches the row shape the equivalent Next.js inline SQL produces today (sample-row diff).
[Q17b] PLAN-D.2 — PostgREST stand-up (week 2)
Stand up PostgREST against marts.* (now including the new mart views). OpenAPI auto-generated. Read-only, public, anonymous. Deployed as a k8s pod under UIS in local; Azure Container Apps in prod. Cloudflare Tunnel for HTTPS exposure. No gateway, no auth in v1.
Tasks:
- Stand up PostgREST as a k8s pod with a single read-only Postgres role.
- Expose via Cloudflare Tunnel at `api.atlas.helpers.no` (or the chosen domain).
- Verify the OpenAPI spec covers every mart view from D.1 plus the directly exposed dim/ref tables.
- Verify the dogfood path: pick one Atlas Next.js page (e.g. `/coverage-gap/barnefattigdom`) and hit the equivalent API call alongside. Don't migrate yet, just validate.
- Document the v1.5+ insertion path for the gateway (Gravitee/APIM) so the next agent / iteration knows what to wire when the trigger fires.
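The OpenAPI-coverage check above can be mechanised: PostgREST serves its generated spec at the API root, and every exposed view appears as a `/view_name` entry under `paths`. A hedged sketch of an offline check (the stub spec and view list are invented; only the `paths`-keyed shape, common to Swagger 2.0 and OpenAPI 3, is assumed):

```typescript
// Given a parsed PostgREST-generated spec, report which expected views
// lack a `/view_name` path. Pure function: feed it JSON.parse(...) of
// the document served at the API root.
function missingViews(spec: { paths: Record<string, unknown> }, expected: string[]): string[] {
  return expected.filter((view) => !(`/${view}` in spec.paths));
}

// Offline check against a stub of the generated spec.
const stubSpec = { paths: { "/": {}, "/mart_indicator_summary": {}, "/dim_kommune": {} } };
const missing = missingViews(stubSpec, ["mart_indicator_summary", "mart_ngo_overview"]);
console.log(missing); // → ["mart_ngo_overview"]
```

Wired into CI against the live spec, this would catch a mart view that dbt built but PostgREST's exposed schema never picked up.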
[Q18] PLAN-E — Next.js dogfood migration (weeks 3-5)
Migrate Atlas frontend from marts.* direct reads to API calls. Per the audit: 18+ query patterns across 15 routes; mostly mechanical but route-by-route to keep PRs reviewable.
Tasks:
- Replace `` sql`...` `` calls in [atlas-frontend/src/lib/{indicators,supply,db}.ts](https://github.com/terchris/atlas/tree/main/atlas-frontend/src/lib/) with `fetch()` calls to PostgREST.
- Add a feature flag (env var) to toggle direct-read vs. API-read per route during the transition.
- Migrate routes in order of complexity: 🟢 trivial first (`/ngo/[slug]`, ref/dim selects), 🔵 embedded next (`/ngo/redcross/chapters/[chapter_id]`), 🟡 view-backed last (`/data`, `/coverage-gap/barnefattigdom`, `/ngo/redcross`, `/ngo/redcross/distrikter`).
- Once all routes are migrated, remove the direct-DB read role from atlas-frontend (it then needs only API access).
- Drop the `postgres.js` dependency from `atlas-frontend/package.json` once direct reads are gone.
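The feature-flag task above can be as small as an env-var lookup consulted per route. A hedged sketch (the `ATLAS_READ_MODE` variable name and the per-route override convention are invented, not existing atlas-frontend config):

```typescript
// Per-route toggle between direct Postgres reads and PostgREST API reads
// during the migration window: a global default plus per-route overrides.
type ReadMode = "direct" | "api";

function readMode(env: Record<string, string | undefined>, route: string): ReadMode {
  // Route-specific override, e.g. ATLAS_READ_MODE__DATA=api for /data.
  const key =
    "ATLAS_READ_MODE__" +
    route.replace(/[^a-zA-Z0-9]+/g, "_").replace(/^_|_$/g, "").toUpperCase();
  const value = env[key] ?? env["ATLAS_READ_MODE"];
  return value === "api" ? "api" : "direct"; // direct reads stay the safe default
}

console.log(readMode({ ATLAS_READ_MODE: "direct", ATLAS_READ_MODE__DATA: "api" }, "/data")); // → api
console.log(readMode({ ATLAS_READ_MODE: "direct" }, "/ngo")); // → direct
```

Defaulting to `direct` means a missing or mistyped flag falls back to today's working path, which keeps the route-by-route PRs low-risk.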
[Q19] PLAN-F — Publish OpenAPI + docs (week 6)
Publish the OpenAPI spec alongside dbt docs. Write a public consumer guide. Add to website/docs/.
Tasks:
- Generate OpenAPI from the running PostgREST instance.
- Render it (Swagger UI / Redoc) and host it.
- Write `website/docs/api/getting-started.md` for external developers.
- Add an API reference page to `website/docs/index.md`.
[Q20] PLAN-G — Lift the freeze on supply-side data adds
Per the parent INVESTIGATE plan, the freeze on NGO supply expansion is gated on the semantic-foundation implementation. With the API + contract layer in place, the freeze can lift on supply-side too. New NGOs can be added knowing the contract surface is stable.
Open questions
- [Q21] Should the atlas-frontend Next.js app be deployed on the same domain (`atlas.helpers.no` with the API at `/api/`) or on separate domains (`api.atlas.helpers.no`)? Same-domain is simpler for CORS; separate-domain is cleaner conceptually.
- [Q22] Caching: dbt models rebuild daily/weekly; the API could cache aggressively (1-hour TTL + stale-while-revalidate). When does this matter — at what consumer load?
- [Q23] Norwegian localisation: simplest is a `?lang=no|en` query param. Cleaner is the `Accept-Language` header. Norwegian-first means defaulting to `no`. Decide per-endpoint or globally.
- [Q24] GraphQL: does any near-term consumer (Tilskuddsmatcher, Storm mode) actually want GraphQL, or is REST enough? Postgraphile / Hasura make GraphQL nearly free; PostgREST doesn't. If GraphQL is a real near-term need, this changes the Option-A pick from PostgREST to Hasura/Postgraphile.
- [Q25] Rate limit thresholds: 1000 requests/hour per IP for unauth, 10 000 for keyed users — placeholder; revisit when usage data exists.
- [Q26] Norwegian public-sector API conventions: Digdir guidance, ELMA, Maskinporten? Worth a brief scan to see if there's a pattern Atlas should follow.
- [Q27] When is the parent INVESTIGATE plan's PLAN-A (publish dbt MCP) still done first, vs. in parallel with this PLAN-D? They consume the same `marts.*` shape; doing both in parallel risks duplicated description-coverage work. Suggest: PLAN-A first (dbt MCP, schema.yml hygiene with dbt-osmosis), then PLAN-D (API surface) — schema.yml descriptions become both the agent surface and the API documentation source.
- [Q28] Write endpoints: Meld feil (feedback flagging) per the goal doc could become a future write endpoint. Out of scope for the v1 read-only API, but worth noting the path exists.
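For [Q23], the options can coexist under a simple precedence rule: an explicit `?lang=` beats `Accept-Language`, which beats the Norwegian-first default. A hedged sketch of that resolution (the parameter name and the two-language whitelist are assumptions):

```typescript
// Resolve response language: ?lang= param > Accept-Language header > "no".
type Lang = "no" | "en";

function resolveLang(langParam: string | null, acceptLanguage: string | null): Lang {
  if (langParam === "no" || langParam === "en") return langParam;
  // Scan Accept-Language tags in order, e.g. "en-GB,en;q=0.9".
  for (const part of (acceptLanguage ?? "").split(",")) {
    const tag = part.trim().split(";")[0].toLowerCase();
    if (tag === "no" || tag.startsWith("no-") || tag === "nb" || tag === "nn") return "no";
    if (tag === "en" || tag.startsWith("en-")) return "en";
  }
  return "no"; // Norwegian-first default
}

console.log(resolveLang("en", null));             // → en
console.log(resolveLang(null, "en-GB,en;q=0.9")); // → en
console.log(resolveLang(null, null));             // → no
```

Note this sketch treats Bokmål (`nb`) and Nynorsk (`nn`) tags as Norwegian; whether that mapping is right is part of the [Q23] decision.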
Cross-references
- `INVESTIGATE-semantic-foundation-before-expansion.md` — the parent plan; this resolves its [Q19] (API now/later) and changes the trigger for its PLAN-C (model contracts).
- `docs/ideas/semantic-data-platform-discussion.md` — the conversation that landed on dbt-MCP-first; this plan extends that thinking to the HTTP API.
- `docs/research/goal.md` — public-good positioning that motivates the API; the Lisa-first decision (Open Decision #1) gates the urgency.
- `docs/research/personas.md` — the Dev / Ola / Lisa personas this serves.
- `atlas-frontend/src/lib/db.ts`, `atlas-frontend/src/lib/indicators.ts`, `atlas-frontend/src/lib/supply.ts` — current direct-DB read patterns that inform which API endpoints are needed first.
- `INVESTIGATE-private-atlas-deployments.md` — the UIS-side hosting story this API will need.
- Urbalurba Infrastructure Stack (UIS) — sibling repo that provides the local development cluster (Authentik + Gravitee + Postgres on Rancher Desktop k8s).
- Authentik — local-dev identity provider.
- Gravitee — local-dev API gateway.
- Okta — production identity provider.
- Azure API Management — production API gateway.
- Azure Container Apps — production compute.
- PostgREST — Option A primary candidate.
- Hasura — Option A alternative with GraphQL.
- Cube — Option C.
- Open Data Contract Standard v3 — vendor-neutral contract format.
- Digdir API guidance — Norwegian public-sector API conventions worth scanning.
Next Steps
- Resolve remaining [Q1]–[Q3], [Q5], [Q7], [Q8], [Q10], [Q12]–[Q16] in conversation with the user. ([Q4], [Q6], [Q9], [Q11] resolved by infrastructure context above.)
- Pick Option A/B/C ([Q2]) — the load-bearing decision.
- Decide ordering vs. parent PLAN-A ([Q27]) — sequential or parallel.
- Once decided, split into `PLAN-D-api-stand-up.md`, `PLAN-E-frontend-dogfood-migration.md`, `PLAN-F-openapi-publish.md`, `PLAN-G-lift-supply-freeze.md`.
Not in scope for this investigation
- Designing the OpenAPI spec in detail — that's PLAN-D's output.
- Choosing specific endpoint URLs and parameters — emerges from migrating actual Next.js routes.
- Auth implementation specifics (key formats, OAuth flows) — separate investigation when a real keyed user exists.
- Write endpoints (Meld feil) — out of scope; future investigation.
Prerequisites
- The parent INVESTIGATE plan (INVESTIGATE-semantic-foundation-before-expansion.md) is in flight. PLAN-A (dbt MCP + schema.yml hygiene via dbt-osmosis) is the natural first step before this plan's PLAN-D — the same `marts.*` description-coverage work feeds both surfaces.