Data export - security view¶
The customer-facing walkthrough lives at guides/data-export. This page is the security-level companion: how the export is generated, signed, retained, and isolated.
Mechanics¶
- Customer calls POST /api/v1/me/export (portal button, or bcdock me export). Idempotent - a second call while one is in flight returns the same request id.
- Platform enqueues a background job which runs the export under a per-company database context, so company isolation scopes the export to the customer's data automatically.
- The job gathers data: per-table records are written to CSVs, which are packed into a ZIP.
- The job uploads the ZIP via the bcdock-infra Go service to Azure Storage. The Platform layer never speaks Azure SDK directly; the infra service holds the storage credentials.
- The job mints a SAS URL with a 24-hour TTL, scoped to the single blob, read-only.
- The job emails the SAS URL to the customer's account email via the standard transactional email channel.
- Customer downloads at any time within the 24-hour window. After expiry, the SAS becomes invalid and the underlying blob is deleted.
The end-to-end flow is bounded by the background-job retry policy - a job failure surfaces in the customer's portal as a "failed" status; the customer can re-request without quota cost.
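The idempotency guarantee above can be sketched as follows. This is an illustrative model, not the platform's actual code; the class and method names are assumptions.

```python
import uuid

# Illustrative sketch (not the platform's actual implementation):
# export requests keyed by company. While a request is in flight,
# repeat calls return the same request id instead of enqueuing a
# second background job.
class ExportRequests:
    def __init__(self):
        self._in_flight = {}  # company_id -> request_id

    def request_export(self, company_id: str) -> str:
        # A second call while one is in flight returns the same id.
        if company_id in self._in_flight:
            return self._in_flight[company_id]
        request_id = str(uuid.uuid4())
        self._in_flight[company_id] = request_id
        # In the real system, a background job is enqueued here.
        return request_id

    def complete(self, company_id: str) -> None:
        # Once the job finishes (success or failure), the customer
        # can re-request without quota cost.
        self._in_flight.pop(company_id, None)
```

A repeated call before completion returns the original request id; after `complete`, a new id is minted.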
What lands in the ZIP¶
The list lives in guides/data-export. All CSVs are UTF-8 with a header row and ISO 8601 UTC timestamps.
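The CSV shape described above (UTF-8, header row, ISO 8601 UTC timestamps) can be sketched with the standard library; the column names here are illustrative, not the actual export schema.

```python
import csv
import io
from datetime import datetime, timezone

# Hedged sketch of one export CSV: UTF-8 bytes, a header row, and
# timestamps rendered as ISO 8601 UTC. Column names are assumptions.
def write_export_csv(rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "name", "created_at"])  # header row
    for row in rows:
        created = row["created_at"].astimezone(timezone.utc)
        writer.writerow([
            row["id"],
            row["name"],
            created.strftime("%Y-%m-%dT%H:%M:%SZ"),  # ISO 8601 UTC
        ])
    return buf.getvalue().encode("utf-8")  # UTF-8 bytes for the ZIP
```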
Cross-company isolation¶
The export job runs under the customer's company context, with company isolation active. A user in company A cannot trigger an export for company B's data, even if they used to be a member of company B (their account leaves; the company's data stays with company B's owner).
Sole owners see all their company's data. Co-members see their personal user data plus the company-shared data they have access to under normal app permissions - same RBAC as the running portal.
SAS URL details¶
- Permission: read-only
- Resource: a single blob (the export ZIP) - not the container
- Lifetime: 24 hours from generation
- Storage account: per-region; all of one customer's exports go to one regional account
- TLS: required (plain HTTP requests are rejected)
After the SAS expires, two things happen:
- The SAS is past its signed expiry time, so Azure rejects it with 403.
- The job that created the SAS schedules the underlying blob for deletion at expiry time. We don't keep an archive.
If you re-request an export within the 24h window, the same SAS comes back. If the window has expired, a fresh export job builds a new ZIP under a new blob with a new SAS.
What's excluded - provisioning telemetry¶
Provisioning logs (the stage-by-stage log lines for every provision / hibernate / resume operation) are not in the ZIP. They're operational telemetry keyed by environment ID, not personal data. Including them would dilute the export's signal-to-noise without giving the customer any information about themselves.
Documented in ADR-015 and data-handling § What's deliberately excluded.
What's excluded - API key bytes¶
The api_keys.csv lists every API key the customer has ever minted (name, scopes, created/last-used/revoked timestamps) but never the key bytes themselves - only a hash of those bytes lives in our database. There is no way for the platform to surface the original key bytes once the customer has left the portal screen where the key was first shown.
If a customer needs a fresh key, they mint a new one in the portal (revoking the old one is a separate action).
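The hash-only property above can be sketched in a few lines. The key format and function names here are illustrative assumptions; the point is that the stored hash lets the platform verify a presented key but never recover or re-display the original bytes.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of "hash only" API key storage. The database
# keeps only the hash; the plaintext key exists solely on the portal
# screen where it was first shown.
def mint_api_key() -> tuple[str, str]:
    key = secrets.token_urlsafe(32)  # shown to the customer once
    stored_hash = hashlib.sha256(key.encode()).hexdigest()  # persisted
    return key, stored_hash

def verify_api_key(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, stored_hash)
```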
What's stored about exports themselves¶
We record one row per export request: identifiers, status, the blob it produced, request and completion timestamps, the SAS expiry, and any error message. That row is included in the customer's next export (so "history of exports" is itself in the export - recursive but useful for audit purposes).
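The per-export bookkeeping row might be modeled like this; field names are illustrative, not the actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Optional

# Hedged sketch of the one-row-per-export record described above.
# Because this row is itself included in the next export, the
# customer's export contains its own audit trail.
@dataclass
class ExportRecord:
    request_id: str
    company_id: str
    status: str                        # e.g. "pending", "complete", "failed"
    blob_path: Optional[str]           # set once the ZIP is uploaded
    requested_at: datetime
    completed_at: Optional[datetime]
    sas_expires_at: Optional[datetime]
    error: Optional[str]               # surfaced as "failed" in the portal
```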
Threat model¶
- Eavesdropping in transit: TLS is required for the SAS URL. The portal generates the link over HTTPS; the email channel uses TLS for transport (Resend → recipient MTA - best-effort beyond our control after the email is delivered).
- SAS leak: a leaked SAS URL is a 24-hour read-only credential to the export blob. The blast radius is limited to one customer's one export. Rotation strategy: customer requests a fresh export, which generates a new blob with a new SAS; old SAS expires on schedule.
- Storage account compromise: a compromise of the storage account's keys would expose all current export blobs. Mitigations: keys live in Key Vault (not in code or env vars); access via managed identity; per-region storage isolation limits blast radius to one region.
Related¶
- Guides → Data export - customer-facing walkthrough
- Data handling - what's stored, where, for how long
- Account deletion - companion GDPR Art. 17 surface