Commits (1)

Author SHA1 Message Date
894c841bb7 Split services into core and request 2026-01-17 16:19:32 -06:00
47 changed files with 201 additions and 1299 deletions


@@ -1 +0,0 @@
1.23.6


@@ -94,8 +94,8 @@ This ensures consistent tooling versions across the team without system-wide ins
 ## Platform Requirements
-Linux or macOS on x86_64. Requires:
+Linux x86_64 only (currently). Requires:
-- Modern libc for Go binaries (Linux)
+- Modern libc for Go binaries
 - docker compose (for full stack)
 - fd, shellcheck, shfmt (for development)


@@ -2,13 +2,16 @@ diachron
 ## Introduction
+Is your answer to some of these questions "yes"? If so, you might like
+diachron. (When it comes to that dev/test/prod one, hear us out first, ok?)
 - Do you want to share a lot of backend and frontend code?
 - Are you tired of your web stack breaking when you blink too hard?
 - Have you read [Taking PHP
-Seriously](https://slack.engineering/taking-php-seriously/) and do you wish
-you had something similar for Typescript?
+Seriously](https://slack.engineering/taking-php-seriously/) and wish you had
+something similar for Typescript?
 - Do you think that ORMs are not all that? Do you wish you had first class
 unmediated access to your database? And do you think that database
@@ -32,9 +35,6 @@ diachron
 you're trying to fix? We're talking authentication, authorization, XSS,
 https, nested paths, all that stuff.
-Is your answer to some of these questions "yes"? If so, you might like
-diachron. (When it comes to that dev/test/prod one, hear us out first, ok?)
 ## Getting started
 Different situations require different getting started docs.
@@ -44,8 +44,9 @@ Different situations require different getting started docs.
 ## Requirements
-To run diachron, you need Linux or macOS on x86_64. Linux requires a new
-enough libc to run golang binaries.
+To run diachron, you currently need to have a Linux box running x86_64 with a
+new enough libc to run golang binaries. Support for other platforms will come
+eventually.
 To run a more complete system, you also need to have docker compose installed.

TODO.md

@@ -1,36 +1,13 @@
 ## high importance
-- [ ] nix services/ and split it up into core/ request/
 - [ ] Add unit tests all over the place.
   - ⚠️ Huge task - needs breakdown before starting
-- [ ] migrations, seeding, fixtures
-```sql
-CREATE SCHEMA fw;
-CREATE TABLE fw.users (...);
-CREATE TABLE fw.groups (...);
-```
-```sql
-CREATE TABLE app.user_profiles (...);
-CREATE TABLE app.customer_metadata (...);
-```
-- [ ] flesh out `mgmt` and `develop` (does not exist yet)
-4.1 What belongs in develop
-- Create migrations
-- Squash migrations
-- Reset DB
-- Roll back migrations
-- Seed large test datasets
-- Run tests
-- Snapshot / restore local DB state (!!!)
-`develop` fails if APP_ENV (or whatever) is `production`. Or maybe even
-`testing`.
 - [ ] Add default user table(s) to database.
@@ -69,8 +46,6 @@ CREATE TABLE app.customer_metadata (...);
 necessary at all, with some sane defaults and an easy to use override
 mechanism
-- [ ] time library
 - [ ] fill in the rest of express/http-codes.ts
 - [ ] fill out express/content-types.ts

cmd

@@ -2,26 +2,20 @@
 # This file belongs to the framework. You are not expected to modify it.
-# Managed binary runner - runs framework-managed binaries like node, pnpm, tsx
-# Usage: ./cmd <command> [args...]
+# FIXME: Obviously this file isn't nearly robust enough. Make it so.
 set -eu
 DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-if [ $# -lt 1 ]; then
-  echo "Usage: ./cmd <command> [args...]"
-  echo ""
-  echo "Available commands:"
-  for cmd in "$DIR"/framework/cmd.d/*; do
-    if [ -x "$cmd" ]; then
-      basename "$cmd"
-    fi
-  done
-  exit 1
-fi
 subcmd="$1"
+# echo "$subcmd"
+#exit 3
 shift
+echo will run "$DIR"/framework/cmd.d/"$subcmd" "$@"
 exec "$DIR"/framework/cmd.d/"$subcmd" "$@"

develop

@@ -1,27 +0,0 @@
#!/bin/bash
# This file belongs to the framework. You are not expected to modify it.
# Development command runner - parallel to ./mgmt for development tasks
# Usage: ./develop <command> [args...]
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ $# -lt 1 ]; then
  echo "Usage: ./develop <command> [args...]"
  echo ""
  echo "Available commands:"
  for cmd in "$DIR"/framework/develop.d/*; do
    if [ -x "$cmd" ]; then
      basename "$cmd"
    fi
  done
  exit 1
fi
subcmd="$1"
shift
exec "$DIR"/framework/develop.d/"$subcmd" "$@"


@@ -1,125 +0,0 @@
# The Three Types of Commands
This framework deliberately separates *how* you interact with the system into three distinct command types. The split is not cosmetic; it encodes safety, intent, and operational assumptions directly into the tooling so that mistakes are harder to make under stress.
The guiding idea: **production should feel boring and safe; exploration should feel powerful and a little dangerous; the application itself should not care how it is being operated.**
---
## 1. Application Commands (`app`)
**What they are**
Commands defined *by the application itself*, for its own domain needs. They are not part of the framework, even though they are built on top of it.
The framework provides structure and affordances; the application supplies meaning.
**Core properties**
* Express domain behavior, not infrastructure concerns
* Safe by definition
* Deterministic and repeatable
* No environment-dependent semantics
* Identical behavior in dev, staging, and production
**Examples**
* Handling HTTP requests
* Rendering templates
* Running background jobs / queues
* Sending emails triggered by application logic
**Non-goals**
* No schema changes
* No data backfills
* No destructive behavior
* No operational or lifecycle management
**Rule of thumb**
If removing the framework would require rewriting *how* it runs but not *what* it does, the command belongs here.
---
## 2. Management Commands (`mgmt`)
**What they are**
Operational, *production-safe* commands used to evolve and maintain a live system.
These commands assume real data exists and must not be casually destroyed.
**Core properties**
* Forward-only
* Idempotent or safely repeatable
* Designed to run in production
* Explicit, auditable intent
**Examples**
* Applying migrations
* Running seeders that assert invariant data
* Reindexing or rebuilding derived data
* Rotating keys, recalculating counters
**Design constraints**
* No implicit rollbacks
* No hidden destructive actions
* Fail fast if assumptions are violated
**Rule of thumb**
If you would run it at 3am while tired and worried, it must live here.
---
## 3. Development Commands (`develop`)
**What they are**
Sharp, *unsafe by design* tools meant exclusively for local development and experimentation.
These commands optimize for speed, learning, and iteration — not safety.
**Core properties**
* Destructive operations allowed
* May reset or mutate large amounts of data
* Assume a clean or disposable environment
* Explicitly gated in production
**Examples**
* Dropping and recreating databases
* Rolling migrations backward
* Loading fixtures or scenarios
* Generating fake or randomized data
**Safety model**
* Hard to run in production
* Requires explicit opt-in if ever enabled
* Clear, noisy warnings when invoked
**Rule of thumb**
If it would be irresponsible to run against real user data, it belongs here.
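A gate like this can be sketched in a few lines. The `APP_ENV` variable, its values, and the function name below are assumptions for illustration, not the framework's actual contract:

```typescript
// Hypothetical production gate for `develop` commands. The environment
// variable name (APP_ENV) and the blocked values are assumptions.
function assertDevelopAllowed(env: Record<string, string | undefined>): void {
  const appEnv = env.APP_ENV ?? "development";
  if (appEnv === "production" || appEnv === "testing") {
    throw new Error(
      `develop commands are disabled when APP_ENV=${appEnv}: ` +
        "they may destroy data and are meant for disposable environments only",
    );
  }
}
```

A dispatcher like `./develop` would call this once, up front, before exec'ing the subcommand.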
---
## Why This Split Matters
Many frameworks blur these concerns, leading to:
* Fearful production operations
* Overpowered dev tools leaking into prod
* Environment-specific behavior and bugs
By naming and enforcing these three command types:
* Intent is visible at the CLI level
* Safety properties are architectural, not cultural
* Developers can move fast *without* normalizing risk
---
## One-Sentence Summary
> **App commands run the system, mgmt commands evolve it safely, and develop commands let you break things on purpose — but only where it's allowed.**


@@ -1,37 +0,0 @@
Let's consider a bullseye with the following concentric circles:
- Ring 0: small, simple systems
- Single jurisdiction
- Email + password
- A few roles
- Naïve or soft deletion
- Minimal audit needs
- Ring 1: grown-up systems
- Long-lived data
- Changing requirements
- Shared accounts
- GDPR-style erasure/anonymization
- Some cross-border concerns
- Historical data must remain usable
- “Oops, we should have thought about that” moments
- Ring 2: heavy compliance
- Formal audit trails
- Legal hold
- Non-repudiation
- Regulatory reporting
- Strong identity guarantees
- Jurisdiction-aware data partitioning
- Ring 3: banking / defense / healthcare at scale
- Cryptographic auditability
- Append-only ledgers
- Explicit legal models
- Independent compliance teams
- Lawyers embedded in engineering
diachron is designed to be suitable for Rings 0 and 1. Occasionally we may
look over the fence into Ring 2, but it's not what we've principally designed
for. Please take this framing into account when evaluating diachron for
greenfield projects.


@@ -1,142 +0,0 @@
# Freedom, Hacking, and Responsibility
This framework is **free and open source software**.
That fact is not incidental. It is a deliberate ethical, practical, and technical choice.
This document explains how freedom to modify coexists with strong guidance about *how the framework is meant to be used* — without contradiction, and without apology.
---
## The short version
* This is free software. You are free to modify it.
* The framework has documented invariants for good reasons.
* You are encouraged to explore, question, and patch.
* You are discouraged from casually undermining guarantees you still expect to rely on.
* Clarity beats enforcement.
Freedom with understanding beats both lock-in and chaos.
---
## Your Freedom
You are free to:
* study the source code
* run the software for any purpose
* modify it in any way
* fork it
* redistribute it, with or without changes
* submit patches, extensions, or experiments
…subject only to the terms of the license.
These freedoms are foundational. They are not granted reluctantly, and they are not symbolic. They exist so that:
* you can understand what your software is really doing
* you are not trapped by vendor control
* the system can outlive its original authors
---
## Freedom Is Not the Same as Endorsement
While you are free to change anything, **not all changes are equally wise**.
Some parts of the framework are carefully constrained because they encode:
* security assumptions
* lifecycle invariants
* hard-won lessons from real systems under stress
You are free to violate these constraints in your own fork.
But the framework's documentation will often say things like:
* “do not modify this”
* “application code must not depend on this”
* “this table or class is framework-owned”
These statements are **technical guidance**, not legal restrictions.
They exist to answer the question:
> *If you want this system to remain upgradeable, predictable, and boring — what should you leave alone?*
---
## The Intended Social Contract
The framework makes a clear offer:
* We expose our internals so you can learn.
* We provide explicit extension points so you can adapt.
* We document invariants so you don't have to rediscover them the hard way.
In return, we ask that:
* application code respects documented boundaries
* extensions use explicit seams rather than hidden hooks
* patches that change invariants are proposed consciously, not accidentally
Nothing here is enforced by technical locks.
It is enforced — insofar as it is enforced at all — by clarity and shared expectations.
---
## Hacking Is Welcome
Exploration is not just allowed; it is encouraged.
Good reasons to hack on the framework include:
* understanding how it works
* evaluating whether its constraints make sense
* adapting it to unfamiliar environments
* testing alternative designs
* discovering better abstractions
Fork it. Instrument it. Break it. Learn from it.
Many of the framework's constraints exist *because* someone once ignored them and paid the price.
---
## Patches, Not Patches-in-Place
If you discover a problem or a better design:
* patches are welcome
* discussions are welcome
* disagreements are welcome
What is discouraged is **quietly patching around framework invariants inside application code**.
That approach:
* obscures intent
* creates one-off local truths
* makes systems harder to reason about
If the framework is wrong, it should be corrected *at the framework level*, or consciously forked.
---
## Why This Is Not a Contradiction
Strong opinions and free software are not enemies.
Freedom means you can change the software.
Responsibility means understanding what you are changing, and why.
A system that pretends every modification is equally safe is dishonest.
A system that hides its internals to prevent modification is hostile.
This framework aims for neither.


@@ -1,27 +0,0 @@
- Role: a named bundle of responsibilities (editor, admin, member)
- Group: a scope or context (org, team, project, publication)
- Permission / Capability (capability preferred in code): a boolean fact about
allowed behavior
## tips
- In the database, capabilities are boolean values. Their names should be
verb-subject. Don't include `can` and definitely do not include `cannot`.
✔️ `edit_post`
❌ `cannot_remove_comment`
- The capabilities table is deliberately flat. If you need to group them, use
`.` as a delimiter and sort and filter accordingly in queries and in your
UI.
✔️ `blog.edit_post`
✔️ `blog.moderate_comment`
or
✔️ `blog.post.edit`
✔️ `blog.post.delete`
✔️ `blog.comment.moderate`
✔️ `blog.comment.edit`
are all fine.
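For the dot-delimited flavor, grouping stays plain string work; a minimal sketch (the function name and sample capability names are invented):

```typescript
// Select all capabilities under a namespace prefix, sorted for display.
// The capability names below are sample data, not the framework's real ones.
function capabilitiesUnder(all: string[], prefix: string): string[] {
  return all
    .filter((c) => c === prefix || c.startsWith(`${prefix}.`))
    .sort();
}

const caps = ["blog.post.edit", "blog.comment.moderate", "shop.order.refund"];
// capabilitiesUnder(caps, "blog") yields the two blog.* capabilities, sorted.
```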


@@ -1,17 +0,0 @@
Misc notes for now. Of course this needs to be written up for real.
## execution context
The execution context represents facts such as the runtime directory, the
operating system, hardware, and filesystem layout, distinct from environment
variables or request-scoped context.
## philosophy
- TODO-DESIGN.md
- concentric-circles.md
- nomenclature.md
- mutability.md
- commands.md
- groups-and-roles.md


@@ -1,34 +0,0 @@
Some database tables are owned by diachron and some are owned by the
application.
This also applies to seeders: some are owned by diachron and some by the
application.
The database's structure is managed by migrations written in SQL.
Each migration gets its own file. These files' names should match
`yyyy-mm-dd_ss-description.sql`, eg `2026-01-01_01-users.sql`.
Files are sorted lexicographically by name and applied in order.
Note: in the future we may relax or modify the restriction on migration file
names, but they'll continue to be applied in lexicographical order.
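A sketch of what enforcing this convention could look like. The regex (in particular the allowed description characters) and the function name are assumptions; the document only fixes the name pattern and the lexicographic order:

```typescript
// Validate migration filenames against yyyy-mm-dd_ss-description.sql and
// return them in application order. Lexicographic sort is application order
// because the dates and sequence numbers are zero-padded.
const MIGRATION_NAME = /^\d{4}-\d{2}-\d{2}_\d{2}-[a-z0-9-]+\.sql$/;

function orderMigrations(files: string[]): string[] {
  const bad = files.filter((f) => !MIGRATION_NAME.test(f));
  if (bad.length > 0) {
    throw new Error(`bad migration name(s): ${bad.join(", ")}`);
  }
  return [...files].sort();
}
```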
## framework and application migrations
Migrations owned by the framework are kept in a separate directory from those
owned by applications. Pending framework migrations, if any, are applied
before pending application migrations, if any.
diachron will go to some lengths to ensure that framework migrations do not
break applications.
## no downward migrations
diachron does not provide them. "The only way out is through."
When developing locally, you can use the command `develop reset-db`. **NEVER
USE THIS IN PRODUCTION!** Always be sure that you can "get back to where you
were". Being careful when creating migrations and seeders can help, but
dumping and restoring known-good copies of the database can also take you a
long way.


@@ -1 +0,0 @@
Describe and define what is expected to be mutable and what is not.


@@ -2,14 +2,3 @@ We use `Call` and `Result` for our own types that wrap `Request` and
 `Response`.
 This hopefully will make things less confusing and avoid problems with shadowing.
-## meta
-- We use _algorithmic complexity_ for performance discussions, when
-things like Big-O come up, etc
-- We use _conceptual complexity_ for design and architecture
-- We use _cognitive load_ when talking about developer experience
-- We use _operational burden_ when talking about production reality


@@ -1,219 +1 @@
-# Framework vs Application Ownership
+.
This document defines **ownership boundaries** between the framework and application code. These boundaries are intentional and non-negotiable: they exist to preserve upgradeability, predictability, and developer sanity under stress.
Ownership answers a simple question:
> **Who is allowed to change this, and under what rules?**
The framework draws a hard line between *framework-owned* and *application-owned* concerns, while still encouraging extension through explicit, visible mechanisms.
---
## Core Principle
The framework is not a library of suggestions. It is a **runtime with invariants**.
Application code:
* **uses** the framework
* **extends** it through defined seams
* **never mutates or overrides its invariants**
Framework code:
* guarantees stable behavior
* owns critical lifecycle and security concerns
* must remain internally consistent across versions
Breaking this boundary creates systems that work *until they don't*, usually during upgrades or emergencies.
---
## Database Ownership
### Framework-Owned Tables
Certain database tables are **owned and managed exclusively by the framework**.
Examples (illustrative, not exhaustive):
* authentication primitives
* session or token state
* internal capability/permission metadata
* migration bookkeeping
* framework feature flags or invariants
#### Rules
Application code **must not**:
* modify schema
* add columns
* delete rows
* update rows directly
* rely on undocumented columns or behaviors
Application code **may**:
* read via documented framework APIs
* reference stable identifiers explicitly exposed by the framework
Think of these tables as **private internal state** — even though they live in your database.
> If the framework needs you to interact with this data, it will expose an API for it.
#### Rationale
These tables:
* encode security or correctness invariants
* may change structure across framework versions
* must remain globally coherent
Treating them as app-owned data tightly couples your app to framework internals and blocks safe upgrades.
---
### Application-Owned Tables
All domain data belongs to the application.
Examples:
* users (as domain actors, not auth primitives)
* posts, orders, comments, invoices
* business-specific joins and projections
* denormalized or performance-oriented tables
#### Rules
Application code:
* owns schema design
* owns migrations
* owns constraints and indexes
* may evolve these tables freely
The framework:
* never mutates application tables implicitly
* interacts only through explicit queries or contracts
#### Integration Pattern
Where framework concepts must relate to app data:
* use **foreign keys to framework-exposed identifiers**, or
* introduce **explicit join tables** owned by the application
No hidden coupling, no magic backfills.
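In application code that pattern might look like the following sketch; every name here is invented for illustration:

```typescript
// An application-owned join table that references a framework-exposed
// identifier. FrameworkUserId stands in for whatever stable id type the
// framework actually exposes; ProjectMembership is purely app-owned.
type FrameworkUserId = string;
type ProjectId = string;

interface ProjectMembership {
  project_id: ProjectId;
  user_id: FrameworkUserId; // FK to the framework-exposed identifier
  role: "owner" | "member";
}

function makeMembership(
  project: ProjectId,
  user: FrameworkUserId,
): ProjectMembership {
  return { project_id: project, user_id: user, role: "member" };
}
```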
---
## Code Ownership
### Framework-Owned Code
Some classes, constants, and modules are **framework-owned**.
These include:
* core request/response abstractions
* auth and user primitives
* capability/permission evaluation logic
* lifecycle hooks
* low-level utilities relied on by the framework itself
#### Rules
Application code **must not**:
* modify framework source
* monkey-patch or override internals
* rely on undocumented behavior
* change constant values or internal defaults
Framework code is treated as **read-only** from the app's perspective.
---
### Extension Is Encouraged (But Explicit)
Ownership does **not** mean rigidity.
The framework is designed to be extended via **intentional seams**, such as:
* subclassing
* composition
* adapters
* delegation
* configuration objects
* explicit registration APIs
#### Preferred Patterns
* **Subclass when behavior is stable and conceptual**
* **Compose when behavior is contextual or optional**
* **Delegate when authority should remain with the framework**
What matters is that extension is:
* visible in code
* locally understandable
* reversible
No spooky action at a distance.
---
## What the App Owns Completely
The application fully owns:
* domain models and data shapes
* SQL queries and result parsing
* business rules
* authorization policy *inputs* (not the engine)
* rendering decisions
* feature flags specific to the app
* performance tradeoffs
The framework does not attempt to infer intent from your domain.
---
## What the Framework Guarantees
In return for respecting ownership boundaries, the framework guarantees:
* stable semantics across versions
* forward-only migrations for its own tables
* explicit deprecations
* no silent behavior changes
* identical runtime behavior in dev and prod
The framework may evolve internally — **but never by reaching into your app's data or code**.
---
## A Useful Mental Model
* Framework-owned things are **constitutional law**
* Application-owned things are **legislation**
You can write any laws you want — but you don't amend the constitution inline.
If you need a new power, the framework should expose it deliberately.
---
## Summary
* Ownership is about **who is allowed to change what**
* Framework-owned tables and code are read-only to the app
* Application-owned tables and code are sovereign
* Extension is encouraged, mutation is not
* Explicit seams beat clever hacks
Respecting these boundaries keeps systems boring — and boring systems survive stress.


@@ -4,12 +4,7 @@
 // password reset, and email verification.
 import type { Request as ExpressRequest } from "express";
-import {
-  type AnonymousUser,
-  anonymousUser,
-  type User,
-  type UserId,
-} from "../user";
+import { AnonymousUser, type User, type UserId } from "../user";
 import { hashPassword, verifyPassword } from "./password";
 import type { AuthStore } from "./store";
 import {
@@ -32,7 +27,7 @@ type SimpleResult = { success: true } | { success: false; error: string };
 // Result of validating a request/token - contains both user and session
 export type AuthResult =
   | { authenticated: true; user: User; session: SessionData }
-  | { authenticated: false; user: AnonymousUser; session: null };
+  | { authenticated: false; user: typeof AnonymousUser; session: null };
 export class AuthService {
   constructor(private store: AuthStore) {}
@@ -88,7 +83,7 @@ export class AuthService {
   }
   if (!token) {
-    return { authenticated: false, user: anonymousUser, session: null };
+    return { authenticated: false, user: AnonymousUser, session: null };
   }
   return this.validateToken(token);
@@ -99,16 +94,16 @@ export class AuthService {
   const session = await this.store.getSession(tokenId);
   if (!session) {
-    return { authenticated: false, user: anonymousUser, session: null };
+    return { authenticated: false, user: AnonymousUser, session: null };
   }
   if (session.tokenType !== "session") {
-    return { authenticated: false, user: anonymousUser, session: null };
+    return { authenticated: false, user: AnonymousUser, session: null };
   }
   const user = await this.store.getUserById(session.userId as UserId);
   if (!user || !user.isActive()) {
-    return { authenticated: false, user: anonymousUser, session: null };
+    return { authenticated: false, user: AnonymousUser, session: null };
   }
   // Update last used (fire and forget)


@@ -3,7 +3,7 @@
 // Authentication storage interface and in-memory implementation.
 // The interface allows easy migration to PostgreSQL later.
-import { AuthenticatedUser, type User, type UserId } from "../user";
+import { User, type UserId } from "../user";
 import { generateToken, hashToken } from "./token";
 import type { AuthMethod, SessionData, TokenId, TokenType } from "./types";
@@ -123,7 +123,7 @@ export class InMemoryAuthStore implements AuthStore {
   }
   async createUser(data: CreateUserData): Promise<User> {
-    const user = AuthenticatedUser.create(data.email, {
+    const user = User.create(data.email, {
       displayName: data.displayName,
       status: "pending", // Pending until email verified
     });
@@ -151,7 +151,7 @@ export class InMemoryAuthStore implements AuthStore {
     const user = this.users.get(userId);
     if (user) {
       // Create new user with active status
-      const updatedUser = AuthenticatedUser.create(user.email, {
+      const updatedUser = User.create(user.email, {
         id: user.id,
         displayName: user.displayName,
         status: "active",


@@ -64,17 +64,17 @@ export const tokenLifetimes: Record<TokenType, number> = {
 };
 // Import here to avoid circular dependency at module load time
-import type { User } from "../user";
+import { AnonymousUser, type MaybeUser } from "../user";
 // Session wrapper class providing a consistent interface for handlers.
 // Always present on Call (never null), but may represent an anonymous session.
 export class Session {
   constructor(
     private readonly data: SessionData | null,
-    private readonly user: User,
+    private readonly user: MaybeUser,
   ) {}
-  getUser(): User {
+  getUser(): MaybeUser {
     return this.user;
   }
@@ -83,7 +83,7 @@ export class Session {
   }
   isAuthenticated(): boolean {
-    return !this.user.isAnonymous();
+    return this.user !== AnonymousUser;
   }
   get tokenId(): string | undefined {
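The `MaybeUser`/`AnonymousUser` pattern this hunk moves to can be sketched in isolation; the types below are simplified stand-ins for the real ones in `../user`:

```typescript
// A single frozen AnonymousUser value shared by the whole process. With a
// singleton, "is this user anonymous?" becomes an identity comparison, as
// in Session.isAuthenticated() above.
const AnonymousUser = Object.freeze({ kind: "anonymous" as const });
type User = { kind: "user"; id: string };
type MaybeUser = User | typeof AnonymousUser;

function isAuthenticated(user: MaybeUser): user is User {
  return user !== AnonymousUser;
}
```

Because the anonymous case is one shared value rather than a class of values, the check cannot drift out of sync with an `isAnonymous()` method.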


@@ -24,14 +24,7 @@ const routes: Record<string, Route> = {
     const me = request.session.getUser();
     const email = me.toString();
-    const showLogin = me.isAnonymous();
-    const showLogout = !me.isAnonymous();
-    const c = await render("basic/home", {
-      email,
-      showLogin,
-      showLogout,
-    });
+    const c = await render("basic/home", { email });
     return html(c);
   },


@@ -5,10 +5,10 @@
 // needing to pass Call through every function.
 import { AsyncLocalStorage } from "node:async_hooks";
-import { anonymousUser, type User } from "./user";
+import { AnonymousUser, type MaybeUser } from "./user";
 type RequestContext = {
-  user: User;
+  user: MaybeUser;
 };
 const asyncLocalStorage = new AsyncLocalStorage<RequestContext>();
@@ -19,9 +19,9 @@ function runWithContext<T>(context: RequestContext, fn: () => T): T {
 }
 // Get the current user from context, or AnonymousUser if not in a request
-function getCurrentUser(): User {
+function getCurrentUser(): MaybeUser {
   const context = asyncLocalStorage.getStore();
-  return context?.user ?? anonymousUser;
+  return context?.user ?? AnonymousUser;
 }
 export { getCurrentUser, runWithContext, type RequestContext };
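A self-contained sketch of the AsyncLocalStorage pattern this module relies on; the context shape here is simplified to a string, where the real module stores a user object:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Minimal request-context sketch: everything called inside run() sees the
// same context without it being threaded through parameters.
type Ctx = { user: string };
const storage = new AsyncLocalStorage<Ctx>();

function getCurrentUserName(): string {
  return storage.getStore()?.user ?? "anonymous";
}

function handleRequest(user: string): string {
  return storage.run({ user }, () => deepInHandler());
}

// Deep in the call stack, no context parameter in sight.
function deepInHandler(): string {
  return `acting as ${getCurrentUserName()}`;
}
```

Outside any `run()` call, `getCurrentUserName()` falls back to the anonymous default, mirroring the `?? AnonymousUser` fallback above.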


@@ -18,8 +18,7 @@ import type {
 } from "./auth/store";
 import { generateToken, hashToken } from "./auth/token";
 import type { SessionData, TokenId } from "./auth/types";
-import type { Domain } from "./types";
-import { AuthenticatedUser, type User, type UserId } from "./user";
+import { User, type UserId } from "./user";
 // Connection configuration
 const connectionConfig = {
@@ -34,52 +33,32 @@ const connectionConfig = {
 // Generated<T> marks columns with database defaults (optional on insert)
 interface UsersTable {
   id: string;
-  status: Generated<string>;
-  display_name: string | null;
-  created_at: Generated<Date>;
-  updated_at: Generated<Date>;
-}
-interface UserEmailsTable {
-  id: string;
-  user_id: string;
   email: string;
-  normalized_email: string;
-  is_primary: Generated<boolean>;
-  is_verified: Generated<boolean>;
-  created_at: Generated<Date>;
-  verified_at: Date | null;
-  revoked_at: Date | null;
-}
-interface UserCredentialsTable {
-  id: string;
-  user_id: string;
-  credential_type: Generated<string>;
-  password_hash: string | null;
+  password_hash: string;
+  display_name: string | null;
+  status: Generated<string>;
+  roles: Generated<string[]>;
+  permissions: Generated<string[]>;
+  email_verified: Generated<boolean>;
   created_at: Generated<Date>;
   updated_at: Generated<Date>;
 }
 interface SessionsTable {
-  id: Generated<string>;
-  token_hash: string;
+  token_id: string;
   user_id: string;
-  user_email_id: string | null;
   token_type: string;
   auth_method: string;
   created_at: Generated<Date>;
   expires_at: Date;
-  revoked_at: Date | null;
-  ip_address: string | null;
+  last_used_at: Date | null;
   user_agent: string | null;
+  ip_address: string | null;
   is_used: Generated<boolean | null>;
 }
 interface Database {
   users: UsersTable;
-  user_emails: UserEmailsTable;
-  user_credentials: UserCredentialsTable;
   sessions: SessionsTable;
 }
@@ -108,13 +87,12 @@ async function raw<T = unknown>(
 // ============================================================================
 // Migration file naming convention:
-// yyyy-mm-dd_ss_description.sql
-// e.g., 2025-01-15_01_initial.sql, 2025-01-15_02_add_users.sql
+// NNNN_description.sql
+// e.g., 0001_initial.sql, 0002_add_users.sql
 //
 // Migrations directory: express/migrations/
-const FRAMEWORK_MIGRATIONS_DIR = path.join(__dirname, "framework/migrations");
-const APP_MIGRATIONS_DIR = path.join(__dirname, "migrations");
+const MIGRATIONS_DIR = path.join(__dirname, "migrations");
 const MIGRATIONS_TABLE = "_migrations";
 interface MigrationRecord {
@@ -143,34 +121,22 @@ async function getAppliedMigrations(): Promise<string[]> {
 }
 // Get pending migration files
-function getMigrationFiles(kind: Domain): string[] {
-  const dir = kind === "fw" ? FRAMEWORK_MIGRATIONS_DIR : APP_MIGRATIONS_DIR;
-  if (!fs.existsSync(dir)) {
+function getMigrationFiles(): string[] {
+  if (!fs.existsSync(MIGRATIONS_DIR)) {
     return [];
   }
-  const root = __dirname;
-  const mm = fs
-    .readdirSync(dir)
+  return fs
+    .readdirSync(MIGRATIONS_DIR)
     .filter((f) => f.endsWith(".sql"))
-    .filter((f) => /^\d{4}-\d{2}-\d{2}_\d{2}-/.test(f))
-    .map((f) => `${dir}/${f}`)
-    .map((f) => f.replace(`${root}/`, ""))
+    .filter((f) => /^\d{4}_/.test(f))
     .sort();
-  return mm;
 }
 // Run a single migration
 async function runMigration(filename: string): Promise<void> {
-  // const filepath = path.join(MIGRATIONS_DIR, filename);
-  const filepath = filename;
+  const filepath = path.join(MIGRATIONS_DIR, filename);
   const content = fs.readFileSync(filepath, "utf-8");
-  process.stdout.write(`  Migration: ${filename}...`);
   // Run migration in a transaction
   const client = await pool.connect();
   try {
@@ -181,11 +147,8 @@ async function runMigration(filename: string): Promise<void> {
       [filename],
     );
     await client.query("COMMIT");
-    console.log(" ✓");
+    console.log(`Applied migration: ${filename}`);
   } catch (err) {
-    console.log(" ✗");
-    const message = err instanceof Error ? err.message : String(err);
-    console.error(`  Error: ${message}`);
     await client.query("ROLLBACK");
     throw err;
   } finally {
@@ -193,31 +156,24 @@ async function runMigration(filename: string): Promise<void> {
   }
 }
-function getAllMigrationFiles() {
-  const fw_files = getMigrationFiles("fw");
-  const app_files = getMigrationFiles("app");
-  const all = [...fw_files, ...app_files];
-  return all;
-}
 // Run all pending migrations
 async function migrate(): Promise<void> {
   await ensureMigrationsTable();
   const applied = new Set(await getAppliedMigrations());
-  const all = getAllMigrationFiles();
-  const pending = all.filter((all) => !applied.has(all));
+  const files = getMigrationFiles();
+  const pending = files.filter((f) => !applied.has(f));
   if (pending.length === 0) {
     console.log("No pending migrations");
     return;
   }
-  console.log(`Applying ${pending.length} migration(s):`);
+  console.log(`Running ${pending.length} migration(s)...`);
   for (const file of pending) {
     await runMigration(file);
   }
-  console.log("Migrations complete");
 }
 // List migration status
@@ -227,10 +183,10 @@ async function migrationStatus(): Promise<{
 }> {
   await ensureMigrationsTable();
   const applied = new Set(await getAppliedMigrations());
-  const ff = getAllMigrationFiles();
+  const files = getMigrationFiles();
   return {
-    applied: ff.filter((ff) => applied.has(ff)),
-    pending: ff.filter((ff) => !applied.has(ff)),
+    applied: files.filter((f) => applied.has(f)),
+    pending: files.filter((f) => !applied.has(f)),
   };
 }
@@ -245,12 +201,12 @@ class PostgresAuthStore implements AuthStore {
     data: CreateSessionData,
   ): Promise<{ token: string; session: SessionData }> {
     const token = generateToken();
-    const tokenHash = hashToken(token);
+    const tokenId = hashToken(token);
     const row = await db
       .insertInto("sessions")
       .values({
-        token_hash: tokenHash,
+        token_id: tokenId,
         user_id: data.userId,
         token_type: data.tokenType,
         auth_method: data.authMethod,
@@ -262,12 +218,13 @@ class PostgresAuthStore implements AuthStore {
.executeTakeFirstOrThrow(); .executeTakeFirstOrThrow();
const session: SessionData = { const session: SessionData = {
tokenId: row.token_hash, tokenId: row.token_id,
userId: row.user_id, userId: row.user_id,
tokenType: row.token_type as SessionData["tokenType"], tokenType: row.token_type as SessionData["tokenType"],
authMethod: row.auth_method as SessionData["authMethod"], authMethod: row.auth_method as SessionData["authMethod"],
createdAt: row.created_at, createdAt: row.created_at,
expiresAt: row.expires_at, expiresAt: row.expires_at,
lastUsedAt: row.last_used_at ?? undefined,
userAgent: row.user_agent ?? undefined, userAgent: row.user_agent ?? undefined,
ipAddress: row.ip_address ?? undefined, ipAddress: row.ip_address ?? undefined,
isUsed: row.is_used ?? undefined, isUsed: row.is_used ?? undefined,
@@ -280,9 +237,8 @@ class PostgresAuthStore implements AuthStore {
     const row = await db
       .selectFrom("sessions")
       .selectAll()
-      .where("token_hash", "=", tokenId)
+      .where("token_id", "=", tokenId)
       .where("expires_at", ">", new Date())
-      .where("revoked_at", "is", null)
       .executeTakeFirst();
     if (!row) {
@@ -290,62 +246,50 @@ class PostgresAuthStore implements AuthStore {
     }
     return {
-      tokenId: row.token_hash,
+      tokenId: row.token_id,
       userId: row.user_id,
       tokenType: row.token_type as SessionData["tokenType"],
       authMethod: row.auth_method as SessionData["authMethod"],
       createdAt: row.created_at,
       expiresAt: row.expires_at,
+      lastUsedAt: row.last_used_at ?? undefined,
       userAgent: row.user_agent ?? undefined,
       ipAddress: row.ip_address ?? undefined,
       isUsed: row.is_used ?? undefined,
     };
   }
-  async updateLastUsed(_tokenId: TokenId): Promise<void> {
-    // The new schema doesn't have last_used_at column
-    // This is now a no-op; session activity tracking could be added later
+  async updateLastUsed(tokenId: TokenId): Promise<void> {
+    await db
+      .updateTable("sessions")
+      .set({ last_used_at: new Date() })
+      .where("token_id", "=", tokenId)
+      .execute();
   }
   async deleteSession(tokenId: TokenId): Promise<void> {
-    // Soft delete by setting revoked_at
     await db
-      .updateTable("sessions")
-      .set({ revoked_at: new Date() })
-      .where("token_hash", "=", tokenId)
+      .deleteFrom("sessions")
+      .where("token_id", "=", tokenId)
       .execute();
   }
   async deleteUserSessions(userId: UserId): Promise<number> {
     const result = await db
-      .updateTable("sessions")
-      .set({ revoked_at: new Date() })
+      .deleteFrom("sessions")
       .where("user_id", "=", userId)
-      .where("revoked_at", "is", null)
       .executeTakeFirst();
-    return Number(result.numUpdatedRows);
+    return Number(result.numDeletedRows);
   }
   // User operations
   async getUserByEmail(email: string): Promise<User | null> {
-    // Find user through user_emails table
-    const normalizedEmail = email.toLowerCase().trim();
     const row = await db
-      .selectFrom("user_emails")
-      .innerJoin("users", "users.id", "user_emails.user_id")
-      .select([
-        "users.id",
-        "users.status",
-        "users.display_name",
-        "users.created_at",
-        "users.updated_at",
-        "user_emails.email",
-      ])
-      .where("user_emails.normalized_email", "=", normalizedEmail)
-      .where("user_emails.revoked_at", "is", null)
+      .selectFrom("users")
+      .selectAll()
+      .where(sql`LOWER(email)`, "=", email.toLowerCase())
       .executeTakeFirst();
     if (!row) {
@@ -355,24 +299,10 @@ class PostgresAuthStore implements AuthStore {
   }
   async getUserById(userId: UserId): Promise<User | null> {
-    // Get user with their primary email
     const row = await db
       .selectFrom("users")
-      .leftJoin("user_emails", (join) =>
-        join
-          .onRef("user_emails.user_id", "=", "users.id")
-          .on("user_emails.is_primary", "=", true)
-          .on("user_emails.revoked_at", "is", null),
-      )
-      .select([
-        "users.id",
-        "users.status",
-        "users.display_name",
-        "users.created_at",
-        "users.updated_at",
-        "user_emails.email",
-      ])
-      .where("users.id", "=", userId)
+      .selectAll()
+      .where("id", "=", userId)
       .executeTakeFirst();
     if (!row) {
@@ -382,149 +312,68 @@ class PostgresAuthStore implements AuthStore {
   }
   async createUser(data: CreateUserData): Promise<User> {
-    const userId = crypto.randomUUID();
-    const emailId = crypto.randomUUID();
-    const credentialId = crypto.randomUUID();
+    const id = crypto.randomUUID();
     const now = new Date();
-    const normalizedEmail = data.email.toLowerCase().trim();
-    // Create user record
-    await db
+    const row = await db
       .insertInto("users")
       .values({
-        id: userId,
-        display_name: data.displayName ?? null,
-        status: "pending",
-        created_at: now,
-        updated_at: now,
-      })
-      .execute();
-    // Create user_email record
-    await db
-      .insertInto("user_emails")
-      .values({
-        id: emailId,
-        user_id: userId,
+        id,
         email: data.email,
-        normalized_email: normalizedEmail,
-        is_primary: true,
-        is_verified: false,
-        created_at: now,
-      })
-      .execute();
-    // Create user_credential record
-    await db
-      .insertInto("user_credentials")
-      .values({
-        id: credentialId,
-        user_id: userId,
-        credential_type: "password",
         password_hash: data.passwordHash,
-        created_at: now,
-        updated_at: now,
-      })
-      .execute();
-    return new AuthenticatedUser({
-      id: userId,
-      email: data.email,
-      displayName: data.displayName,
-      status: "pending",
-      roles: [],
-      permissions: [],
-      createdAt: now,
-      updatedAt: now,
-    });
+        display_name: data.displayName ?? null,
+        status: "pending",
+        roles: [],
+        permissions: [],
+        email_verified: false,
+        created_at: now,
+        updated_at: now,
+      })
+      .returningAll()
+      .executeTakeFirstOrThrow();
+    return this.rowToUser(row);
   }
   async getUserPasswordHash(userId: UserId): Promise<string | null> {
     const row = await db
-      .selectFrom("user_credentials")
+      .selectFrom("users")
       .select("password_hash")
-      .where("user_id", "=", userId)
-      .where("credential_type", "=", "password")
+      .where("id", "=", userId)
       .executeTakeFirst();
     return row?.password_hash ?? null;
   }
   async setUserPassword(userId: UserId, passwordHash: string): Promise<void> {
-    const now = new Date();
-    // Try to update existing credential
-    const result = await db
-      .updateTable("user_credentials")
-      .set({ password_hash: passwordHash, updated_at: now })
-      .where("user_id", "=", userId)
-      .where("credential_type", "=", "password")
-      .executeTakeFirst();
-    // If no existing credential, create one
-    if (Number(result.numUpdatedRows) === 0) {
-      await db
-        .insertInto("user_credentials")
-        .values({
-          id: crypto.randomUUID(),
-          user_id: userId,
-          credential_type: "password",
-          password_hash: passwordHash,
-          created_at: now,
-          updated_at: now,
-        })
-        .execute();
-    }
-    // Update user's updated_at
     await db
       .updateTable("users")
-      .set({ updated_at: now })
+      .set({ password_hash: passwordHash, updated_at: new Date() })
       .where("id", "=", userId)
       .execute();
   }
   async updateUserEmailVerified(userId: UserId): Promise<void> {
-    const now = new Date();
-    // Update user_emails to mark as verified
-    await db
-      .updateTable("user_emails")
-      .set({
-        is_verified: true,
-        verified_at: now,
-      })
-      .where("user_id", "=", userId)
-      .where("is_primary", "=", true)
-      .execute();
-    // Update user status to active
     await db
       .updateTable("users")
       .set({
+        email_verified: true,
         status: "active",
-        updated_at: now,
+        updated_at: new Date(),
       })
       .where("id", "=", userId)
       .execute();
   }
   // Helper to convert database row to User object
-  private rowToUser(row: {
-    id: string;
-    status: string;
-    display_name: string | null;
-    created_at: Date;
-    updated_at: Date;
-    email: string | null;
-  }): User {
-    return new AuthenticatedUser({
+  private rowToUser(row: Selectable<UsersTable>): User {
+    return new User({
       id: row.id,
-      email: row.email ?? "unknown@example.com",
+      email: row.email,
       displayName: row.display_name ?? undefined,
       status: row.status as "active" | "suspended" | "pending",
-      roles: [], // TODO: query from RBAC tables
-      permissions: [], // TODO: query from RBAC tables
+      roles: row.roles,
+      permissions: row.permissions,
       createdAt: row.created_at,
       updatedAt: row.updated_at,
     });

View File

@@ -1,17 +0,0 @@
import { connectionConfig, migrate, pool } from "../database";
import { dropTables, exitIfUnforced } from "./util";
async function main(): Promise<void> {
exitIfUnforced();
try {
await dropTables();
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Failed to clear database:", err.message);
process.exit(1);
});

View File

@@ -1,26 +0,0 @@
// reset-db.ts
// Development command to wipe the database and apply all migrations from scratch
import { connectionConfig, migrate, pool } from "../database";
import { dropTables, exitIfUnforced } from "./util";
async function main(): Promise<void> {
exitIfUnforced();
try {
await dropTables();
console.log("");
await migrate();
console.log("");
console.log("Database reset complete.");
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Failed to reset database:", err.message);
process.exit(1);
});

View File

@@ -1,42 +0,0 @@
// FIXME: this is at the wrong level of specificity
import { connectionConfig, migrate, pool } from "../database";
const exitIfUnforced = () => {
const args = process.argv.slice(2);
// Require explicit confirmation unless --force is passed
if (!args.includes("--force")) {
console.error("This will DROP ALL TABLES in the database!");
console.error(` Database: ${connectionConfig.database}`);
console.error(
` Host: ${connectionConfig.host}:${connectionConfig.port}`,
);
console.error("");
console.error("Run with --force to proceed.");
process.exit(1);
}
};
const dropTables = async () => {
console.log("Dropping all tables...");
// Get all table names in the public schema
const result = await pool.query<{ tablename: string }>(`
SELECT tablename FROM pg_tables
WHERE schemaname = 'public'
`);
if (result.rows.length > 0) {
// Drop all tables with CASCADE to handle foreign key constraints
const tableNames = result.rows
.map((r) => `"${r.tablename}"`)
.join(", ");
await pool.query(`DROP TABLE IF EXISTS ${tableNames} CASCADE`);
console.log(`Dropped ${result.rows.length} table(s)`);
} else {
console.log("No tables to drop");
}
};
export { dropTables, exitIfUnforced };

View File

@@ -1,29 +0,0 @@
-- 0001_users.sql
-- Create users table for authentication
CREATE TABLE users (
id UUID PRIMARY KEY,
status TEXT NOT NULL DEFAULT 'active',
display_name TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE user_emails (
id UUID PRIMARY KEY,
user_id UUID NOT NULL REFERENCES users(id),
email TEXT NOT NULL,
normalized_email TEXT NOT NULL,
is_primary BOOLEAN NOT NULL DEFAULT FALSE,
is_verified BOOLEAN NOT NULL DEFAULT FALSE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
verified_at TIMESTAMPTZ,
revoked_at TIMESTAMPTZ
);
-- Enforce uniqueness only among *active* emails
CREATE UNIQUE INDEX user_emails_unique_active
ON user_emails (normalized_email)
WHERE revoked_at IS NULL;

View File

@@ -1,17 +0,0 @@
-- 0003_user_credentials.sql
-- Create user_credentials table for password storage (extensible for other auth methods)
CREATE TABLE user_credentials (
id UUID PRIMARY KEY,
user_id UUID NOT NULL REFERENCES users(id),
credential_type TEXT NOT NULL DEFAULT 'password',
password_hash TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Each user can have at most one credential per type
CREATE UNIQUE INDEX user_credentials_user_type_idx ON user_credentials (user_id, credential_type);
-- Index for user lookups
CREATE INDEX user_credentials_user_id_idx ON user_credentials (user_id);

View File

@@ -1,20 +0,0 @@
CREATE TABLE roles (
id UUID PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT
);
CREATE TABLE groups (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE user_group_roles (
user_id UUID NOT NULL REFERENCES users(id),
group_id UUID NOT NULL REFERENCES groups(id),
role_id UUID NOT NULL REFERENCES roles(id),
granted_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
revoked_at TIMESTAMPTZ,
PRIMARY KEY (user_id, group_id, role_id)
);

View File

@@ -1,14 +0,0 @@
CREATE TABLE capabilities (
id UUID PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT
);
CREATE TABLE role_capabilities (
role_id UUID NOT NULL REFERENCES roles(id),
capability_id UUID NOT NULL REFERENCES capabilities(id),
granted_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
revoked_at TIMESTAMPTZ,
PRIMARY KEY (role_id, capability_id)
);

View File

@@ -0,0 +1,21 @@
-- 0001_users.sql
-- Create users table for authentication
CREATE TABLE users (
id UUID PRIMARY KEY,
email TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
display_name TEXT,
status TEXT NOT NULL DEFAULT 'pending',
roles TEXT[] NOT NULL DEFAULT '{}',
permissions TEXT[] NOT NULL DEFAULT '{}',
email_verified BOOLEAN NOT NULL DEFAULT FALSE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Index for email lookups (login)
CREATE INDEX users_email_idx ON users (LOWER(email));
-- Index for status filtering
CREATE INDEX users_status_idx ON users (status);

View File

@@ -2,17 +2,15 @@
 -- Create sessions table for auth tokens
 CREATE TABLE sessions (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    token_hash TEXT UNIQUE NOT NULL,
-    user_id UUID NOT NULL REFERENCES users(id),
-    user_email_id UUID REFERENCES user_emails(id),
+    token_id TEXT PRIMARY KEY,
+    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
     token_type TEXT NOT NULL,
     auth_method TEXT NOT NULL,
     created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
     expires_at TIMESTAMPTZ NOT NULL,
-    revoked_at TIMESTAMPTZ,
-    ip_address INET,
+    last_used_at TIMESTAMPTZ,
     user_agent TEXT,
+    ip_address TEXT,
     is_used BOOLEAN DEFAULT FALSE
 );

View File

@@ -1,13 +1,13 @@
 import { AuthService } from "../auth";
 import { getCurrentUser } from "../context";
 import { PostgresAuthStore } from "../database";
-import type { User } from "../user";
+import type { MaybeUser } from "../user";
 import { html, redirect, render } from "./util";
 const util = { html, redirect, render };
 const session = {
-  getUser: (): User => {
+  getUser: (): MaybeUser => {
     return getCurrentUser();
   },
 };

View File

@@ -8,7 +8,12 @@ import { z } from "zod";
 import type { Session } from "./auth/types";
 import type { ContentType } from "./content-types";
 import type { HttpCode } from "./http-codes";
-import type { Permission, User } from "./user";
+import {
+  AnonymousUser,
+  type MaybeUser,
+  type Permission,
+  type User,
+} from "./user";
 const methodParser = z.union([
   z.literal("GET"),
@@ -31,7 +36,7 @@ export type Call = {
   method: Method;
   parameters: object;
   request: ExpressRequest;
-  user: User;
+  user: MaybeUser;
   session: Session;
 };
@@ -97,7 +102,7 @@ export class AuthorizationDenied extends Error {
 // Helper for handlers to require authentication
 export function requireAuth(call: Call): User {
-  if (call.user.isAnonymous()) {
+  if (call.user === AnonymousUser) {
     throw new AuthenticationRequired();
   }
   return call.user;
@@ -112,6 +117,4 @@ export function requirePermission(call: Call, permission: Permission): User {
   return user;
 }
-export type Domain = "app" | "fw";
 export { methodParser, massageMethod };

View File

@@ -51,15 +51,39 @@ const defaultRolePermissions: RolePermissionMap = new Map([
   ["user", ["users:read"]],
 ]);
-export abstract class User {
-  protected readonly data: UserData;
-  protected rolePermissions: RolePermissionMap;
+export class User {
+  private readonly data: UserData;
+  private rolePermissions: RolePermissionMap;
   constructor(data: UserData, rolePermissions?: RolePermissionMap) {
     this.data = userDataParser.parse(data);
     this.rolePermissions = rolePermissions ?? defaultRolePermissions;
   }
+  // Factory for creating new users with sensible defaults
+  static create(
+    email: string,
+    options?: {
+      id?: string;
+      displayName?: string;
+      status?: UserStatus;
+      roles?: Role[];
+      permissions?: Permission[];
+    },
+  ): User {
+    const now = new Date();
+    return new User({
+      id: options?.id ?? crypto.randomUUID(),
+      email,
+      displayName: options?.displayName,
+      status: options?.status ?? "active",
+      roles: options?.roles ?? [],
+      permissions: options?.permissions ?? [],
+      createdAt: now,
+      updatedAt: now,
+    });
+  }
   // Identity
   get id(): UserId {
     return this.data.id as UserId;
@@ -161,72 +185,15 @@ export abstract class User {
   toString(): string {
     return `User(id ${this.id})`;
   }
-  abstract isAnonymous(): boolean;
-}
-export class AuthenticatedUser extends User {
-  // Factory for creating new users with sensible defaults
-  static create(
-    email: string,
-    options?: {
-      id?: string;
-      displayName?: string;
-      status?: UserStatus;
-      roles?: Role[];
-      permissions?: Permission[];
-    },
-  ): User {
-    const now = new Date();
-    return new AuthenticatedUser({
-      id: options?.id ?? crypto.randomUUID(),
-      email,
-      displayName: options?.displayName,
-      status: options?.status ?? "active",
-      roles: options?.roles ?? [],
-      permissions: options?.permissions ?? [],
-      createdAt: now,
-      updatedAt: now,
-    });
-  }
-  isAnonymous(): boolean {
-    return false;
-  }
 }
 // For representing "no user" in contexts where user is optional
-export class AnonymousUser extends User {
-  // FIXME: this is C&Ped with only minimal changes. No bueno.
-  static create(
-    email: string,
-    options?: {
-      id?: string;
-      displayName?: string;
-      status?: UserStatus;
-      roles?: Role[];
-      permissions?: Permission[];
-    },
-  ): AnonymousUser {
-    const now = new Date(0);
-    return new AnonymousUser({
-      id: options?.id ?? crypto.randomUUID(),
-      email,
-      displayName: options?.displayName,
-      status: options?.status ?? "active",
-      roles: options?.roles ?? [],
-      permissions: options?.permissions ?? [],
-      createdAt: now,
-      updatedAt: now,
-    });
-  }
-  isAnonymous(): boolean {
-    return true;
-  }
-}
-export const anonymousUser = AnonymousUser.create("anonymous@example.com", {
+export const AnonymousUser = Symbol("AnonymousUser");
+export const anonymousUser = User.create("anonymous@example.com", {
   id: "-1",
   displayName: "Anonymous User",
-  // FIXME: set createdAt and updatedAt to start of epoch
 });
+export type MaybeUser = User | typeof AnonymousUser;

View File

@@ -10,7 +10,7 @@ cd "$DIR"
 # uv run ruff format .
-shell_scripts="$(fd '.sh$' | xargs)"
+shell_scripts="$(fd .sh | xargs)"
 shfmt -i 4 -w "$DIR/cmd" "$DIR"/framework/cmd.d/* "$DIR"/framework/shims/* "$DIR"/master/master "$DIR"/logger/logger
 # "$shell_scripts"
 for ss in $shell_scripts; do

View File

@@ -1,11 +0,0 @@
#!/bin/bash
# This file belongs to the framework. You are not expected to modify it.
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/../cmd.d/tsx develop/clear-db.ts "$@"

View File

@@ -1 +0,0 @@
../common.d/db

View File

@@ -1 +0,0 @@
../common.d/migrate

View File

@@ -1,9 +0,0 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/../cmd.d/tsx develop/reset-db.ts "$@"

View File

@@ -1 +0,0 @@
../common.d/db

View File

@@ -1 +0,0 @@
../common.d/migrate

View File

@@ -1,26 +0,0 @@
# shellcheck shell=bash
# Detect platform (OS and architecture)
os=$(uname -s | tr '[:upper:]' '[:lower:]')
arch=$(uname -m)
case "$os" in
linux) platform_os=linux ;;
darwin) platform_os=darwin ;;
*) echo "Unsupported OS: $os" >&2; exit 1 ;;
esac
case "$arch" in
x86_64) platform_arch=x86_64 ;;
*) echo "Unsupported architecture: $arch" >&2; exit 1 ;;
esac
platform="${platform_os}_${platform_arch}"
# Platform-specific checksum command
if [ "$platform_os" = "darwin" ]; then
sha256_check() { shasum -a 256 -c -; }
else
sha256_check() { sha256sum -c -; }
fi

View File

@@ -7,15 +7,6 @@ project_root="$node_common_DIR/../.."
 # shellcheck source=../versions
 source "$node_common_DIR"/../versions
-# shellcheck source=../platform
-source "$node_common_DIR"/../platform
-# Get platform-specific node directory
-nodejs_dirname_var="nodejs_dirname_${platform}"
-nodejs_dirname="${!nodejs_dirname_var}"
-nodejs_dist_dir="framework/binaries/$nodejs_dirname"
-nodejs_bin_dir="$nodejs_dist_dir/bin"
 nodejs_binary_dir="$project_root/$nodejs_bin_dir"
 # This might be too restrictive. Or not restrictive enough.

View File

@@ -2,25 +2,18 @@
 # This file belongs to the framework. You are not expected to modify it.
-nodejs_version=v24.12.0
 # https://nodejs.org/dist
 nodejs_binary_linux_x86_64=https://nodejs.org/dist/v24.12.0/node-v24.12.0-linux-x64.tar.xz
 nodejs_checksum_linux_x86_64=bdebee276e58d0ef5448f3d5ac12c67daa963dd5e0a9bb621a53d1cefbc852fd
-nodejs_dirname_linux_x86_64=node-v24.12.0-linux-x64
-nodejs_binary_darwin_x86_64=https://nodejs.org/dist/v24.12.0/node-v24.12.0-darwin-x64.tar.xz
-nodejs_checksum_darwin_x86_64=1e4d54f706e0a3613d6415ffe2ccdfd4095d3483971dbbaa4ff909fac5fc211c
-nodejs_dirname_darwin_x86_64=node-v24.12.0-darwin-x64
+nodejs_dist_dir=framework/binaries/node-v22.15.1-linux-x64
+nodejs_bin_dir="$nodejs_dist_dir/bin"
 caddy_binary_linux_x86_64=fixme
 caddy_checksum_linux_x86_64=fixmetoo
 # https://github.com/pnpm/pnpm/releases
 pnpm_binary_linux_x86_64=https://github.com/pnpm/pnpm/releases/download/v10.28.0/pnpm-linux-x64
-pnpm_checksum_linux_x86_64=348e863d17a62411a65f900e8d91395acabae9e9237653ccc3c36cb385965f28
-pnpm_binary_darwin_x86_64=https://github.com/pnpm/pnpm/releases/download/v10.28.0/pnpm-macos-x64
-pnpm_checksum_darwin_x86_64=99431e91d721169c2050d5e46abefc6f0d23c49e635a5964dcb573d9fe89975a
+pnpm_checksum_linux_x86_64=sha256:348e863d17a62411a65f900e8d91395acabae9e9237653ccc3c36cb385965f28
 golangci_lint=v2.7.2-alpine

39
sync.sh
View File

@@ -9,26 +9,6 @@ DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 # shellcheck source=framework/versions
 source "$DIR/framework/versions"
-# shellcheck source=framework/platform
-source "$DIR/framework/platform"
-# Get platform-specific variables
-nodejs_binary_var="nodejs_binary_${platform}"
-nodejs_checksum_var="nodejs_checksum_${platform}"
-nodejs_dirname_var="nodejs_dirname_${platform}"
-nodejs_binary="${!nodejs_binary_var}"
-nodejs_checksum="${!nodejs_checksum_var}"
-nodejs_dirname="${!nodejs_dirname_var}"
-pnpm_binary_var="pnpm_binary_${platform}"
-pnpm_checksum_var="pnpm_checksum_${platform}"
-pnpm_binary_url="${!pnpm_binary_var}"
-pnpm_checksum="${!pnpm_checksum_var}"
-# Set up paths for shims to use
-nodejs_dist_dir="framework/binaries/$nodejs_dirname"
-nodejs_bin_dir="$nodejs_dist_dir/bin"
 # Ensure correct node version is installed
 node_installed_checksum_file="$DIR/framework/binaries/.node.checksum"
 node_installed_checksum=""
@@ -36,19 +16,19 @@ if [ -f "$node_installed_checksum_file" ]; then
     node_installed_checksum=$(cat "$node_installed_checksum_file")
 fi
-if [ "$node_installed_checksum" != "$nodejs_checksum" ]; then
-    echo "Downloading Node.js for $platform..."
+if [ "$node_installed_checksum" != "$nodejs_checksum_linux_x86_64" ]; then
+    echo "Downloading Node.js..."
     node_archive="$DIR/framework/downloads/node.tar.xz"
-    curl -fsSL "$nodejs_binary" -o "$node_archive"
+    curl -fsSL "$nodejs_binary_linux_x86_64" -o "$node_archive"
     echo "Verifying checksum..."
-    echo "$nodejs_checksum  $node_archive" | sha256_check
+    echo "$nodejs_checksum_linux_x86_64  $node_archive" | sha256sum -c -
     echo "Extracting Node.js..."
     tar -xf "$node_archive" -C "$DIR/framework/binaries"
     rm "$node_archive"
-    echo "$nodejs_checksum" >"$node_installed_checksum_file"
+    echo "$nodejs_checksum_linux_x86_64" >"$node_installed_checksum_file"
 fi
 # Ensure correct pnpm version is installed
@@ -59,12 +39,15 @@ if [ -f "$pnpm_installed_checksum_file" ]; then
     pnpm_installed_checksum=$(cat "$pnpm_installed_checksum_file")
 fi
+# pnpm checksum includes "sha256:" prefix, strip it for sha256sum
+pnpm_checksum="${pnpm_checksum_linux_x86_64#sha256:}"
 if [ "$pnpm_installed_checksum" != "$pnpm_checksum" ]; then
-    echo "Downloading pnpm for $platform..."
-    curl -fsSL "$pnpm_binary_url" -o "$pnpm_binary"
+    echo "Downloading pnpm..."
+    curl -fsSL "$pnpm_binary_linux_x86_64" -o "$pnpm_binary"
     echo "Verifying checksum..."
-    echo "$pnpm_checksum  $pnpm_binary" | sha256_check
+    echo "$pnpm_checksum  $pnpm_binary" | sha256sum -c -
     chmod +x "$pnpm_binary"

View File

@@ -8,12 +8,6 @@
     {{ email }}
   </p>
-  {% if showLogin %}
-    <a href="/login">login</a>
-  {% endif %}
-  {% if showLogout %}
   <a href="/logout">logout</a>
-  {% endif %}
 </body>
</html> </html>