95 Commits

Author SHA1 Message Date
cd19a32be5 Add more todo items 2026-01-25 12:12:15 -06:00
478305bc4f Update /home template 2026-01-25 12:12:02 -06:00
421628d49e Add various doc updates
They are still very far from complete.
2026-01-25 12:11:34 -06:00
4f37a72d7b Clean commands up 2026-01-24 16:54:54 -06:00
e30bf5d96d Fix regexp in fixup.sh 2026-01-24 16:39:13 -06:00
8704c4a8d5 Separate framework and app migrations
Also add a new develop command: clear-db.
2026-01-24 16:38:33 -06:00
579a19669e Match user and session schema changes 2026-01-24 15:48:22 -06:00
474420ac1e Add development command to reset the database and rerun migrations 2026-01-24 15:13:34 -06:00
960f78a1ad Update initial tables 2026-01-24 15:13:30 -06:00
d921679058 Rework user types: create AuthenticatedUser and AnonymousUser class
Both are subclasses of an abstract User class which contains almost everything
interesting.
2026-01-17 17:45:36 -06:00
350bf7c865 Run shell scripts through shfmt 2026-01-17 16:30:55 -06:00
8a7682e953 Split services into core and request 2026-01-17 16:20:55 -06:00
e59bb35ac9 Update todo list 2026-01-17 16:10:38 -06:00
a345a2adfb Add directive 2026-01-17 16:10:24 -06:00
00d84d6686 Note that files belong to framework 2026-01-17 15:45:02 -06:00
7ed05695b9 Separate happy path utility functions for requests 2026-01-17 15:43:52 -06:00
03cc4cf4eb Remove prettier; we've been using biome for a while 2026-01-17 13:19:40 -06:00
2121a6b5de Merge remote-tracking branch 'crondiad/experiments' into experiments 2026-01-11 16:08:03 -06:00
Michael Wolf
6ace2163ed Update pnpm version 2026-01-11 16:07:32 -06:00
93ab4b5d53 Update node version 2026-01-11 16:07:24 -06:00
70ddcb2a94 Note that we need bash 2026-01-11 16:06:48 -06:00
1da81089cd Add sync.sh script
This downloads and installs dependencies necessary to run or develop.

Add docker-compose.yml for initial use
2026-01-11 16:06:43 -06:00
f383c6a465 Add logger wrapper script 2026-01-11 15:48:32 -06:00
e34d47b352 Add various todo items 2026-01-11 15:36:15 -06:00
de70be996e Add docker-compose.yml for initial use 2026-01-11 15:33:01 -06:00
096a1235b5 Add basic logout 2026-01-11 15:31:59 -06:00
4a4dc11aa4 Fix formatting 2026-01-11 15:17:58 -06:00
7399cbe785 Add / template 2026-01-11 14:57:51 -06:00
14d20be9a2 Note that file belongs to the framework 2026-01-11 14:57:26 -06:00
55f5cc699d Add request-scoped context for session.getUser()
Use AsyncLocalStorage to provide request context so services can access
the current user without needing Call passed through every function.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 14:56:10 -06:00
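The AsyncLocalStorage approach this commit describes can be sketched roughly as follows; the `RequestContext` shape and the function names here are illustrative, not the repo's actual API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical context shape; the real code carries session/user objects.
interface RequestContext {
  userId: string | null;
}

const requestContext = new AsyncLocalStorage<RequestContext>();

// Run a handler inside a context; everything it calls, sync or async,
// can read the store without the Call being threaded through parameters.
function withRequestContext<T>(ctx: RequestContext, fn: () => T): T {
  return requestContext.run(ctx, fn);
}

// Deep inside a service, no Call needed:
function currentUserId(): string | null {
  return requestContext.getStore()?.userId ?? null;
}

const seen = withRequestContext({ userId: "u-123" }, () => currentUserId());
// seen is "u-123" inside the context; outside run(), getStore() is undefined.
```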
afcb447b2b Add a command to add a new user 2026-01-11 14:38:19 -06:00
1c1eeddcbe Add basic login screen with form-based authentication
Adds /login route with HTML template that handles GET (show form) and
POST (authenticate). On successful login, sets session cookie and
redirects to /. Also adds framework support for redirects and cookies
in route handlers.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-11 10:07:02 -06:00
7cecf5326d Make biome happier 2026-01-10 14:02:38 -06:00
47f6bee75f Improve test command to find spec/test files recursively
Use globstar for recursive matching and support both *.spec.ts
and *.test.ts patterns in any subdirectory.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 13:55:42 -06:00
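The globstar matching this commit describes might look like the following sketch; the layout globbed here is an assumption, not the script's actual paths:

```bash
#!/bin/bash
# Sketch of recursive spec/test discovery with globstar; paths are assumed.
set -eu
shopt -s globstar nullglob
tests=(./**/*.spec.ts ./**/*.test.ts)
printf 'found %d test file(s)\n' "${#tests[@]}"
```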
6e96c33457 Add very basic support for finding and rendering templates 2026-01-10 13:50:44 -06:00
9e3329fa58 . 2026-01-10 13:38:42 -06:00
05eaf938fa Add test command
For now this just runs typescript tests.  Eventually it'll do more than that.
2026-01-10 13:38:10 -06:00
df2d4eea3f Add initial way to get info about execution context 2026-01-10 13:37:39 -06:00
b235a6be9a Add block for declared var 2026-01-10 13:05:39 -06:00
8cd4b42cc6 Add scripts to run migrations and to connect to the db 2026-01-10 09:05:05 -06:00
241d3e799e Use less ambiguous function 2026-01-10 08:55:00 -06:00
49dc0e3fe0 Mark several unused vars as such 2026-01-10 08:54:51 -06:00
c7b8cd33da Clean up imports 2026-01-10 08:54:34 -06:00
6c0895de07 Fix formatting 2026-01-10 08:51:20 -06:00
17ea6ba02d Consider block stmts without braces to be errors 2026-01-09 11:44:09 -06:00
661def8a5c Refmt 2026-01-04 15:24:29 -06:00
74d75d08dd Add Session class to provide getUser() on call.session
Wraps SessionData and user into a Session class that handlers can use
via call.session.getUser() instead of accessing services directly.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 15:22:27 -06:00
ad6d405206 Add session data to Call type
- AuthService.validateRequest now returns AuthResult with both user and session
- Call type includes session: SessionData | null
- Handlers can access session metadata (createdAt, authMethod, etc.)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 09:50:05 -06:00
e9ccf6d757 Add PostgreSQL database layer with Kysely and migrations
- Add database.ts with connection pool, Kysely query builder, and migration runner
- Create migrations for users and sessions tables (0001, 0002)
- Implement PostgresAuthStore to replace InMemoryAuthStore
- Wire up database service in services/index.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-04 09:43:20 -06:00
34ec5be7ec Pull in kysely and pg deps 2026-01-03 17:20:49 -06:00
e136c07928 Add some stub user stuff 2026-01-03 17:06:54 -06:00
c926f15aab Fix circular dependency breaking ncc bundle
Don't export authRoutes from barrel file to break the cycle:
services.ts → auth/index.ts → auth/routes.ts → services.ts

Import authRoutes directly from ./auth/routes instead.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 14:24:53 -06:00
39cd93c81e Move services.ts 2026-01-03 14:12:27 -06:00
c246e0384f Add authentication system with session-based auth
Implements full auth flows with opaque tokens (not JWT) for easy revocation:
- Login/logout with cookie or bearer token support
- Registration with email verification
- Password reset with one-time tokens
- scrypt password hashing (no external deps)

New files in express/auth/:
- token.ts: 256-bit token generation, SHA-256 hashing
- password.ts: scrypt hashing with timing-safe verification
- types.ts: Session schemas, token types, input validation
- store.ts: AuthStore interface + InMemoryAuthStore
- service.ts: AuthService with all auth operations
- routes.ts: 6 auth endpoints

Modified:
- types.ts: Added user field to Call, requireAuth/requirePermission helpers
- app.ts: JSON body parsing, populates call.user, handles auth errors
- services.ts: Added services.auth
- routes.ts: Includes auth routes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 13:59:02 -06:00
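The scrypt hashing and opaque-token scheme described above can be sketched with nothing but `node:crypto`; the parameter choices (salt size, key length) are illustrative, not necessarily what `password.ts` and `token.ts` actually use:

```typescript
import { createHash, randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Illustrative scrypt hashing; salt and derived-key sizes are assumptions.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Timing-safe verification: recompute with the stored salt, compare digests.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}

// 256-bit opaque token; the server would store only its SHA-256 digest,
// so a leaked token table cannot be replayed as live sessions.
function newSessionToken(): { token: string; digest: string } {
  const token = randomBytes(32).toString("base64url");
  return { token, digest: createHash("sha256").update(token).digest("hex") };
}
```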
788ea2ab19 Add User class with role and permission-based authorization
Foundation for authentication/authorization with:
- Stable UUID id for database keys, email as human identifier
- Account status (active/suspended/pending)
- Role-based auth with role-to-permission mappings
- Direct permissions in resource:action format
- Methods: hasRole(), hasPermission(), can(), effectivePermissions()

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 12:59:47 -06:00
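A rough sketch of the role/permission resolution described above; the role table and the class surface here are invented for illustration (the real `User` also tracks status, a UUID id, and more):

```typescript
// resource:action permission strings, resolved from roles plus direct grants.
type Permission = string;

// Hypothetical role-to-permission mapping.
const rolePermissions: Record<string, Permission[]> = {
  admin: ["users:read", "users:write"],
  viewer: ["users:read"],
};

class User {
  constructor(
    readonly roles: string[],
    readonly directPermissions: Permission[] = [],
  ) {}

  // Union of role-derived and directly granted permissions.
  effectivePermissions(): Set<Permission> {
    const perms = new Set(this.directPermissions);
    for (const role of this.roles) {
      for (const p of rolePermissions[role] ?? []) perms.add(p);
    }
    return perms;
  }

  can(resource: string, action: string): boolean {
    return this.effectivePermissions().has(`${resource}:${action}`);
  }
}
```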
6297a95d3c Reformat more files 2026-01-01 21:20:45 -06:00
63cf0a670d Update todo list 2026-01-01 21:20:38 -06:00
5524eaf18f ? 2026-01-01 21:12:55 -06:00
03980e114b Add basic template rendering route 2026-01-01 21:12:38 -06:00
539717efda Add todo item 2026-01-01 21:11:28 -06:00
8be88bb696 Move TODOs re logging to the end 2026-01-01 21:11:10 -06:00
ab74695f4c Have master start and manage the logger process
Master now:
- Starts logger on startup with configurable port and capacity
- Restarts logger automatically if it crashes
- Stops logger gracefully on shutdown

New flags:
- --logger-port (default 8085)
- --logger-capacity (default 1000000)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 20:53:29 -06:00
dc5a70ba33 Add logging service
New Go program (logger/) that:
- Accepts POSTed JSON log messages via POST /log
- Stores last N messages in a ring buffer (default 1M)
- Retrieves logs via GET /logs with limit/before/after filters
- Shows status via GET /status

Also updates express/logging.ts to POST messages to the logger service.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 20:45:34 -06:00
4adf6cf358 Add another TODO item 2026-01-01 20:37:16 -06:00
bee6938a67 Add some logging related stubs to express backend 2026-01-01 20:18:37 -06:00
b0ee53f7d5 Listen by default on port 3500
The master process will continue to start at port 3000.  In practice, this
ought to make conflicts between master-supervised processes and ones run by
hand less of an issue.
2026-01-01 20:17:26 -06:00
5c93c9e982 Add TODO items 2026-01-01 20:17:15 -06:00
5606a59614 Note that you need docker as well as docker compose 2026-01-01 17:36:22 -06:00
22dde8c213 Add wrapper script for master program 2026-01-01 17:35:56 -06:00
30463b60a5 Use CLI flags instead of environment variables for master config
Replace env var parsing with Go's flag package:
- --watch (default: ../express)
- --workers (default: 1)
- --base-port (default: 3000)
- --port (default: 8080)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 17:31:08 -06:00
e2ea472a10 Make biome happier 2026-01-01 17:22:04 -06:00
20e5da0d54 Teach fixup.sh to use biome 2026-01-01 17:16:46 -06:00
13d02d86be Pull in and set up biome 2026-01-01 17:16:02 -06:00
d35e7bace2 Make shfmt happier 2026-01-01 16:53:19 -06:00
cb4a730838 Make go fmt happier 2026-01-01 16:53:00 -06:00
58f88e3695 Add check.sh and fixup.sh scripts 2026-01-01 16:47:50 -06:00
7b8eaac637 Add TODO.md and instructions 2026-01-01 15:45:43 -06:00
f504576f3e Add first cut at a pool 2026-01-01 15:43:49 -06:00
8722062f4a Change process names again 2026-01-01 15:12:01 -06:00
9cc1991d07 Name backend process 2026-01-01 14:54:17 -06:00
5d5a2430ad Fix arg in build script 2026-01-01 14:37:11 -06:00
a840137f83 Mark build.sh as executable 2026-01-01 14:34:31 -06:00
c330da49fc Add rudimentary command line parsing to express app 2026-01-01 14:34:16 -06:00
db81129724 Add build.sh 2026-01-01 14:17:09 -06:00
43ff2edad2 Pull in nunjucks 2026-01-01 14:14:03 -06:00
ad95f652b8 Fix bogus path expansion 2026-01-01 14:10:57 -06:00
51d24209b0 Use build.sh script 2026-01-01 14:09:51 -06:00
1083655a3b Add and use a simpler run script 2026-01-01 14:08:46 -06:00
615cd89656 Ignore more node_modules directories 2026-01-01 13:24:50 -06:00
321b2abd23 Sort of run node app 2026-01-01 13:24:36 -06:00
642c7d9434 Update CLAUDE.md 2026-01-01 13:06:21 -06:00
8e5b46d426 Add first cut at a CLAUDE.md file 2026-01-01 12:31:35 -06:00
a178536472 Rename monitor to master 2026-01-01 12:30:58 -06:00
3bece46638 Add first cut at golang monitor program 2026-01-01 12:26:54 -06:00
4257a9b615 Improve wording in a few places 2025-12-05 19:47:58 -06:00
103 changed files with 5145 additions and 326 deletions

.beads/issues.jsonl (new file, +4)

@@ -0,0 +1,4 @@
{"id":"diachron-2vh","title":"Add unit testing to golang programs","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-03T12:31:41.281891462-06:00","created_by":"mw","updated_at":"2026-01-03T12:31:41.281891462-06:00"}
{"id":"diachron-64w","title":"Add unit testing to express backend","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-03T12:31:30.439206099-06:00","created_by":"mw","updated_at":"2026-01-03T12:31:30.439206099-06:00"}
{"id":"diachron-fzd","title":"Add generic 'user' functionality","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-03T12:35:53.73213604-06:00","created_by":"mw","updated_at":"2026-01-03T12:35:53.73213604-06:00"}
{"id":"diachron-ngx","title":"Teach the master and/or build process to send messages with notify-send when builds fail or succeed. Ideally this will be fairly generic.","status":"open","priority":2,"issue_type":"task","created_at":"2026-01-03T14:10:11.773218844-06:00","created_by":"mw","updated_at":"2026-01-03T14:10:11.773218844-06:00"}

.claude/instructions.md (new file, +2)

@@ -0,0 +1,2 @@
When asked "what's next?" or during downtime, check TODO.md and suggest items to work on.

.gitignore (vendored, 3 lines changed)

@@ -1,6 +1,5 @@
-framework/node/node_modules
+**/node_modules
 framework/downloads
 framework/binaries
 framework/.nodejs
 framework/.nodejs-config
-node_modules

CLAUDE.md (new file, +122)

@@ -0,0 +1,122 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with
code in this repository.
## Project Overview
Diachron is an opinionated TypeScript/Node.js web framework with a Go-based
master process. Key design principles:
- No development/production distinction - single mode of operation everywhere
- Everything loggable and inspectable for debuggability
- Minimal magic, explicit behavior
- PostgreSQL-only (no database abstraction)
- Inspired by "Taking PHP Seriously" essay
## Commands
### General
**Install dependencies:**
```bash
./sync.sh
```
**Run an app:**
```bash
./master
```
### Development
**Check shell scripts (shellcheck + shfmt) (eventually go fmt and biome or similar):**
```bash
./check.sh
```
**Format TypeScript code:**
```bash
cd express && ../cmd pnpm biome check --write .
```
**Build Go master process:**
```bash
cd master && go build
```
### Operational
(to be written)
## Architecture
### Components
- **express/** - TypeScript/Express.js backend application
- **master/** - Go-based master process for file watching and process management
- **framework/** - Managed binaries (Node.js, pnpm), command wrappers, and
framework-specific library code
- **monitor/** - Go file watcher that triggers rebuilds (experimental)
### Master Process (Go)
Responsibilities:
- Watch TypeScript source for changes and trigger rebuilds
- Manage worker processes
- Proxy web requests to backend workers
- Behaves identically in all environments (no dev/prod distinction)
### Express App Structure
- `app.ts` - Main Express application setup with route matching
- `routes.ts` - Route definitions
- `handlers.ts` - Route handlers
- `services.ts` - Service layer (database, logging, misc)
- `types.ts` - TypeScript type definitions (Route, Call, Handler, Result, Method)
### Framework Command System
Commands flow through: `./cmd` → `framework/cmd.d/*` → `framework/shims/*` → managed binaries in `framework/binaries/`
This ensures consistent tooling versions across the team without system-wide installations.
## Tech Stack
- TypeScript 5.9+ / Node.js 22.15
- Express.js 5.1
- Go 1.23.3+ (master process)
- pnpm 10.12.4 (package manager)
- Zod (runtime validation)
- Nunjucks (templating)
- @vercel/ncc (bundling)
## Platform Requirements
Linux x86_64 only (currently). Requires:
- Modern libc for Go binaries
- docker compose (for full stack)
- fd, shellcheck, shfmt (for development)
## Current Status
Early stage - most implementations are stubs:
- Database service is placeholder
- Logging functions marked WRITEME
- No test framework configured yet
# meta
## formatting and sorting
- When a typescript file exports symbols, they should be listed in order
## guidelines for this document
- Try to keep lines below 80 characters in length, especially prose. But if
embedded code or literals are longer, that's fine.
- Use formatting such as bold or italics sparingly
- In general, we treat this document like source code insofar as it should be
both human-readable and machine-readable
- Keep this meta section at the end of the file.


@@ -13,15 +13,15 @@ diachron. (When it comes to that dev/test/prod one, hear us out first, ok?)
   Seriously](https://slack.engineering/taking-php-seriously/) and wish you had
   something similar for Typescript?
-- Do you think that ORMs are not all that, and you had first class unmediated
-  access to your database? And do you think that database agnosticism is
-  overrated?
+- Do you think that ORMs are not all that? Do you wish you had first class
+  unmediated access to your database? And do you think that database
+  agnosticism is overrated?
 - Do you think dev/testing/prod distinctions are a bad idea? (Hear us out on
   this one.)
 - Have you ever lost hours getting everyone on your team to have the exact
-  same environment, but you're not willing to take the plunge and use a tool
+  same environment, yet you're not willing to take the plunge and use a tool
   like [nix](https://nixos.org)?
 - Are you frustrated by unclear documentation? Is ramping up a frequent
@@ -54,10 +54,9 @@ To run a more complete system, you also need to have docker compose installed.
 To hack on diachron itself, you need the following:
-- docker compose
+- bash
+- docker and docker compose
 - [fd](https://github.com/sharkdp/fd)
 - golang, version 1.23.6 or greater
 - shellcheck
 - shfmt

TODO.md (new file, +177)

@@ -0,0 +1,177 @@
## high importance
- [ ] Add unit tests all over the place.
- ⚠️ Huge task - needs breakdown before starting
- [ ] migrations, seeding, fixtures
```sql
CREATE SCHEMA fw;
CREATE TABLE fw.users (...);
CREATE TABLE fw.groups (...);
```
```sql
CREATE TABLE app.user_profiles (...);
CREATE TABLE app.customer_metadata (...);
```
- [ ] flesh out `mgmt` and `develop` (does not exist yet)
What belongs in `develop`:
- Create migrations
- Squash migrations
- Reset DB
- Roll back migrations
- Seed large test datasets
- Run tests
- Snapshot / restore local DB state (!!!)
`develop` fails if APP_ENV (or whatever) is `production`. Or maybe even
`testing`.
- [ ] Add default user table(s) to database.
- [ ] Add authentication
- [ ] password
- [ ] third party?
- [ ] Add middleware concept
- [ ] Add authorization
- for specific routes / resources / etc
- [ ] Add basic text views
Partially done; see the /time route. But we need to figure out where to
store templates, static files, etc.
- [ ] fix process management: if you control-c `master` process sometimes it
leaves around `master-bin`, `logger-bin`, and `diachron:nnnn` processes.
Huge problem.
## medium importance
- [ ] Add a log viewer
- with queries
- convert to logfmt and is there a viewer UI we could pull in and use
instead?
- [ ] add nested routes. Note that this might be easy to do without actually
changing the logic in express/routes.ts. A function that takes an array
of routes and maps over them rewriting them. Maybe.
- [ ] related: add something to do with default templates and stuff... I
think we can make handlers a lot shorter to write, sometimes not even
necessary at all, with some sane defaults and an easy to use override
mechanism
- [ ] time library
- [ ] fill in the rest of express/http-codes.ts
- [ ] fill out express/content-types.ts
- [ ] identify redundant "old skool" and ajax routes, factor out their
commonalities, etc.
- [ ] figure out and add logging to disk
- [ ] I don't really feel close to satisfied with template location /
rendering / etc. Rethink and rework.
- [ ] Add email verification (this is partially done already)
- [ ] Reading .env files and dealing with the environment should be immune to
the extent possible from idiotic errors
- [ ] Update check script:
- [x] shellcheck on shell scripts
- [x] `go vet` on go files
- [x] `golangci-lint` on go files
- [x] Run `go fmt` on all .go files
- [ ] Eventually, run unit tests
- [ ] write docs
- upgrade docs
- starting docs
- taking over docs
- reference
- internals
- [ ] make migration creation default to something like yyyy-mm-dd_ssss (are
9999 migrations in a day enough?)
- [ ] clean up `cmd` and `mgmt`: do the right thing with their commonalities
and make very plain which is which for what. Consider additional
commands. Maybe `develop` for specific development tasks,
`operate` for operational tasks, and we keep `cmd` for project-specific
commands. Something like that.
## low importance
- [ ] add a prometheus-style `/metrics` endpoint to master
- [ ] create a metrics server analogous to the logging server
- accept various stats from the workers (TBD)
- [ ] move `master-bin` into a subdir like `master/cmd` or whatever is
idiomatic for golang programs; adapt `master` wrapper shell script
accordingly
- [ ] flesh out the `sync.sh` script
- [ ] update framework-managed node
- [ ] update framework-managed pnpm
- [ ] update pnpm-managed deps
- [ ] rebuild golang programs
- [ ] If the number of workers is large, then there is a long lapse between
when you change a file and when the server responds
- One solution: start and stop workers serially: stop one, restart it with new
code; repeat
- Slow start them: only start a few at first
- [ ] in express/user.ts: FIXME: set createdAt and updatedAt to start of epoch
## finished
- [x] Reimplement fixup.sh
- [x] run shfmt on all shell scripts (and the files they `source`)
- [x] Run `go fmt` on all .go files
- [x] Run ~~prettier~~ biome on all .ts files and maybe others
- [x] Adapt master program so that it reads configuration from command line
args instead of from environment variables
- Should have sane defaults
- Adding new arguments should be easy and obvious
- [x] Add wrapper script to run master program (so that various assumptions related
to relative paths are safer)
- [x] Add logging service
- New golang program, in the same directory as master
- Intended to be started by master
- Listens on a port specified by a command line arg
- Accepts POSTed (or possibly PUT) json messages, currently in a
to-be-defined format. We will work on this format later.
- Keeps the most recent N messages in memory. N can be a fairly large
number; let's start by assuming 1 million.
- [x] Log to logging service from the express backend
- Fill out types and functions in `express/logging.ts`
- [x] Add first cut at database access. Remember that ORMs are not all that!
- [x] Create initial docker-compose.yml file for local development
- include most recent stable postgres
- include beanstalkd
- include memcached
- include redis
- include mailpit

check.sh (new executable file, +30)

@@ -0,0 +1,30 @@
#!/bin/bash
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR"
# Keep exclusions sorted. And list them here.
#
# - SC2002 is useless use of cat
#
exclusions="SC2002"
source "$DIR/framework/versions"
if [[ $# -ne 0 ]]; then
shellcheck --exclude="$exclusions" "$@"
exit $?
fi
# Collect shell scripts into an array so each filename is its own argument.
mapfile -t shell_scripts < <(fd .sh)
# The files we need to check all either end in .sh or else they're the files
# in framework/cmd.d and framework/shims. -x instructs shellcheck to also
# check `source`d files.
shellcheck -x --exclude="$exclusions" "$DIR/cmd" "$DIR"/framework/cmd.d/* "$DIR"/framework/shims/* "${shell_scripts[@]}"
pushd "$DIR/master"
docker run --rm -v "$(pwd):/app" -w /app "golangci/golangci-lint:$golangci_lint" golangci-lint run
popd

cmd (22 lines changed)

@@ -2,20 +2,26 @@
 # This file belongs to the framework. You are not expected to modify it.
-# FIXME: Obviously this file isn't nearly robust enough. Make it so.
+# Managed binary runner - runs framework-managed binaries like node, pnpm, tsx
+# Usage: ./cmd <command> [args...]
 set -eu
 DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+if [ $# -lt 1 ]; then
+	echo "Usage: ./cmd <command> [args...]"
+	echo ""
+	echo "Available commands:"
+	for cmd in "$DIR"/framework/cmd.d/*; do
+		if [ -x "$cmd" ]; then
+			basename "$cmd"
+		fi
+	done
+	exit 1
+fi
 subcmd="$1"
-# echo "$subcmd"
-#exit 3
 shift
-echo will run "$DIR"/framework/cmd.d/"$subcmd" "$@"
 exec "$DIR"/framework/cmd.d/"$subcmd" "$@"

develop (new executable file, +27)

@@ -0,0 +1,27 @@
#!/bin/bash
# This file belongs to the framework. You are not expected to modify it.
# Development command runner - parallel to ./mgmt for development tasks
# Usage: ./develop <command> [args...]
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ $# -lt 1 ]; then
echo "Usage: ./develop <command> [args...]"
echo ""
echo "Available commands:"
for cmd in "$DIR"/framework/develop.d/*; do
if [ -x "$cmd" ]; then
basename "$cmd"
fi
done
exit 1
fi
subcmd="$1"
shift
exec "$DIR"/framework/develop.d/"$subcmd" "$@"

docker-compose.yml (new file, +35)

@@ -0,0 +1,35 @@
services:
postgres:
image: postgres:17
ports:
- "5432:5432"
environment:
POSTGRES_USER: diachron
POSTGRES_PASSWORD: diachron
POSTGRES_DB: diachron
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:7
ports:
- "6379:6379"
memcached:
image: memcached:1.6
ports:
- "11211:11211"
beanstalkd:
image: schickling/beanstalkd
ports:
- "11300:11300"
mailpit:
image: axllent/mailpit
ports:
- "1025:1025" # SMTP
- "8025:8025" # Web UI
volumes:
postgres_data:

docs/commands.md (new file, +125)

@@ -0,0 +1,125 @@
# The Three Types of Commands
This framework deliberately separates *how* you interact with the system into three distinct command types. The split is not cosmetic; it encodes safety, intent, and operational assumptions directly into the tooling so that mistakes are harder to make under stress.
The guiding idea: **production should feel boring and safe; exploration should feel powerful and a little dangerous; the application itself should not care how it is being operated.**
---
## 1. Application Commands (`app`)
**What they are**
Commands defined *by the application itself*, for its own domain needs. They are not part of the framework, even though they are built on top of it.
The framework provides structure and affordances; the application supplies meaning.
**Core properties**
* Express domain behavior, not infrastructure concerns
* Safe by definition
* Deterministic and repeatable
* No environment-dependent semantics
* Identical behavior in dev, staging, and production
**Examples**
* Handling HTTP requests
* Rendering templates
* Running background jobs / queues
* Sending emails triggered by application logic
**Non-goals**
* No schema changes
* No data backfills
* No destructive behavior
* No operational or lifecycle management
**Rule of thumb**
If removing the framework would require rewriting *how* it runs but not *what* it does, the command belongs here.
---
## 2. Management Commands (`mgmt`)
**What they are**
Operational, *production-safe* commands used to evolve and maintain a live system.
These commands assume real data exists and must not be casually destroyed.
**Core properties**
* Forward-only
* Idempotent or safely repeatable
* Designed to run in production
* Explicit, auditable intent
**Examples**
* Applying migrations
* Running seeders that assert invariant data
* Reindexing or rebuilding derived data
* Rotating keys, recalculating counters
**Design constraints**
* No implicit rollbacks
* No hidden destructive actions
* Fail fast if assumptions are violated
**Rule of thumb**
If you would run it at 3am while tired and worried, it must live here.
---
## 3. Development Commands (`develop`)
**What they are**
Sharp, *unsafe by design* tools meant exclusively for local development and experimentation.
These commands optimize for speed, learning, and iteration — not safety.
**Core properties**
* Destructive operations allowed
* May reset or mutate large amounts of data
* Assume a clean or disposable environment
* Explicitly gated in production
**Examples**
* Dropping and recreating databases
* Rolling migrations backward
* Loading fixtures or scenarios
* Generating fake or randomized data
**Safety model**
* Hard to run in production
* Requires explicit opt-in if ever enabled
* Clear, noisy warnings when invoked
**Rule of thumb**
If it would be irresponsible to run against real user data, it belongs here.
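The *explicitly gated in production* property might be implemented with a guard like the following at the top of `./develop`; the `APP_ENV` variable name is an assumption (the TODO list notes that environment handling is still being worked out):

```bash
#!/bin/bash
set -eu

# Hypothetical gate; APP_ENV is an assumed variable name.
guard_production() {
	if [ "${APP_ENV:-development}" = "production" ]; then
		echo "develop: refusing to run destructive commands in production" >&2
		return 1
	fi
}

guard_production # proceeds only outside production
```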
---
## Why This Split Matters
Many frameworks blur these concerns, leading to:
* Fearful production operations
* Overpowered dev tools leaking into prod
* Environment-specific behavior and bugs
By naming and enforcing these three command types:
* Intent is visible at the CLI level
* Safety properties are architectural, not cultural
* Developers can move fast *without* normalizing risk
---
## One-Sentence Summary
> **App commands run the system, mgmt commands evolve it safely, and develop commands let you break things on purpose — but only where it's allowed.**


@@ -0,0 +1,37 @@
Let's consider a bullseye with the following concentric circles:
- Ring 0: small, simple systems
- Single jurisdiction
- Email + password
- A few roles
- Naïve or soft deletion
- Minimal audit needs
- Ring 1: grown-up systems
- Long-lived data
- Changing requirements
- Shared accounts
- GDPR-style erasure/anonymization
- Some cross-border concerns
- Historical data must remain usable
- “Oops, we should have thought about that” moments
- Ring 2: heavy compliance
- Formal audit trails
- Legal hold
- Non-repudiation
- Regulatory reporting
- Strong identity guarantees
- Jurisdiction-aware data partitioning
- Ring 3: banking / defense / healthcare at scale
- Cryptographic auditability
- Append-only ledgers
- Explicit legal models
- Independent compliance teams
- Lawyers embedded in engineering
diachron is designed to be suitable for Rings 0 and 1. Occasionally we may
look over the fence into Ring 2, but it's not what we've principally designed
for. Please take this framing into account when evaluating diachron for
greenfield projects.


@@ -0,0 +1,142 @@
# Freedom, Hacking, and Responsibility
This framework is **free and open source software**.
That fact is not incidental. It is a deliberate ethical, practical, and technical choice.
This document explains how freedom to modify coexists with strong guidance about *how the framework is meant to be used* — without contradiction, and without apology.
---
## The short version
* This is free software. You are free to modify it.
* The framework has documented invariants for good reasons.
* You are encouraged to explore, question, and patch.
* You are discouraged from casually undermining guarantees you still expect to rely on.
* Clarity beats enforcement.
Freedom with understanding beats both lock-in and chaos.
---
## Your Freedom
You are free to:
* study the source code
* run the software for any purpose
* modify it in any way
* fork it
* redistribute it, with or without changes
* submit patches, extensions, or experiments
…subject only to the terms of the license.
These freedoms are foundational. They are not granted reluctantly, and they are not symbolic. They exist so that:
* you can understand what your software is really doing
* you are not trapped by vendor control
* the system can outlive its original authors
---
## Freedom Is Not the Same as Endorsement
While you are free to change anything, **not all changes are equally wise**.
Some parts of the framework are carefully constrained because they encode:
* security assumptions
* lifecycle invariants
* hard-won lessons from real systems under stress
You are free to violate these constraints in your own fork.
But the framework's documentation will often say things like:
* “do not modify this”
* “application code must not depend on this”
* “this table or class is framework-owned”
These statements are **technical guidance**, not legal restrictions.
They exist to answer the question:
> *If you want this system to remain upgradeable, predictable, and boring — what should you leave alone?*
---
## The Intended Social Contract
The framework makes a clear offer:
* We expose our internals so you can learn.
* We provide explicit extension points so you can adapt.
* We document invariants so you don't have to rediscover them the hard way.
In return, we ask that:
* application code respects documented boundaries
* extensions use explicit seams rather than hidden hooks
* patches that change invariants are proposed consciously, not accidentally
Nothing here is enforced by technical locks.
It is enforced — insofar as it is enforced at all — by clarity and shared expectations.
---
## Hacking Is Welcome
Exploration is not just allowed; it is encouraged.
Good reasons to hack on the framework include:
* understanding how it works
* evaluating whether its constraints make sense
* adapting it to unfamiliar environments
* testing alternative designs
* discovering better abstractions
Fork it. Instrument it. Break it. Learn from it.
Many of the framework's constraints exist *because* someone once ignored them and paid the price.
---
## Patches, Not Patches-in-Place
If you discover a problem or a better design:
* patches are welcome
* discussions are welcome
* disagreements are welcome
What is discouraged is **quietly patching around framework invariants inside application code**.
That approach:
* obscures intent
* creates one-off local truths
* makes systems harder to reason about
If the framework is wrong, it should be corrected *at the framework level*, or consciously forked.
---
## Why This Is Not a Contradiction
Strong opinions and free software are not enemies.
Freedom means you can change the software.
Responsibility means understanding what you are changing, and why.
A system that pretends every modification is equally safe is dishonest.
A system that hides its internals to prevent modification is hostile.
This framework aims for neither.

docs/groups-and-roles.md (new file, 27 lines)

@@ -0,0 +1,27 @@
- Role: a named bundle of responsibilities (editor, admin, member)
- Group: a scope or context (org, team, project, publication)
- Permission / Capability (capability preferred in code): a boolean fact about
allowed behavior
## tips
- In the database, capabilities are boolean values. Their names should be
verb-subject. Don't include `can` and definitely do not include `cannot`.
✔️ `edit_post`
  ❌ `cannot_remove_comment`
- The capabilities table is deliberately flat. If you need to group them, use
`.` as a delimiter and sort and filter accordingly in queries and in your
UI.
✔️ `blog.edit_post`
✔️ `blog.moderate_comment`
or
✔️ `blog.post.edit`
✔️ `blog.post.delete`
✔️ `blog.comment.moderate`
✔️ `blog.comment.edit`
are all fine.
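As a quick sketch of the tip above (all capability names here are invented examples), a flat, dot-delimited capability list can be grouped purely by prefix filtering — no nested tables needed:

```typescript
// Hypothetical sketch: grouping a flat capability table by dot-delimited
// prefix. Every capability name below is an invented example.
const capabilities: string[] = [
	"blog.post.edit",
	"blog.post.delete",
	"blog.comment.moderate",
	"billing.invoice.view",
];

// Everything under a prefix, sorted for stable display.
function underPrefix(caps: string[], prefix: string): string[] {
	return caps.filter((c) => c.startsWith(`${prefix}.`)).sort();
}

console.log(underPrefix(capabilities, "blog.post"));
// ["blog.post.delete", "blog.post.edit"]
```

The same filter works in SQL with `WHERE name LIKE 'blog.post.%'`, which is why the flat table with a delimiter convention is enough.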

docs/index.md (new file, 17 lines)

@@ -0,0 +1,17 @@
misc notes for now. of course this needs to be written up for real.
## execution context
The execution context represents facts such as the runtime directory, the
operating system, hardware, and filesystem layout, distinct from environment
variables or request-scoped context.
## philosophy
- TODO-DESIGN.md
- concentric-circles.md
- nomenclature.md
- mutability.md
- commands.md
- groups-and-roles.md


@@ -0,0 +1,34 @@
Some database tables are owned by diachron and some are owned by the
application.
This also applies to seeders: some are owned by diachron and some by the
application.
The database's structure is managed by migrations written in SQL.
Each migration gets its own file. These files' names should match
`yyyy-mm-dd_ss-description.sql`, e.g. `2026-01-01_01-users.sql`.
Files are sorted lexicographically by name and applied in order.
Note: in the future we may relax or modify the restriction on migration file
names, but they'll continue to be applied in lexicographical order.
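A minimal sketch of the ordering rule described above (the regular expression here only illustrates the documented naming convention; it is not the framework's actual validator, and the filenames are invented):

```typescript
// Sketch: a plain lexicographic sort of migration filenames matches
// chronological order because of the yyyy-mm-dd_ss- prefix.
const migrationName = /^\d{4}-\d{2}-\d{2}_\d{2}-.+\.sql$/;

const files = [
	"2026-01-24_02-sessions.sql",
	"2026-01-01_01-users.sql",
	"2026-01-24_01-capabilities.sql",
];

const applyOrder = files.filter((f) => migrationName.test(f)).sort();
console.log(applyOrder);
// ["2026-01-01_01-users.sql", "2026-01-24_01-capabilities.sql", "2026-01-24_02-sessions.sql"]
```

The zero-padded day and sequence numbers are what make the lexicographic sort safe; `_10` before `_2` would break without padding.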
## framework and application migrations
Migrations owned by the framework are kept in a separate directory from those
owned by applications. Pending framework migrations, if any, are applied
before pending application migrations, if any.
diachron will go to some lengths to ensure that framework migrations do not
break applications.
## no downward migrations
diachron does not provide them. "The only way out is through."
When developing locally, you can use the command `develop reset-db`. **NEVER
USE THIS IN PRODUCTION!** Always be sure that you can "get back to where you
were". Being careful when creating migrations and seeders can help, but
dumping and restoring known-good copies of the database can also take you a
long way.

docs/mutability.md (new file, 1 line)

@@ -0,0 +1 @@
Describe and define what is expected to be mutable and what is not.


@@ -2,3 +2,14 @@
 We use `Call` and `Result` for our own types that wrap `Request` and
 `Response`.
 This hopefully will make things less confusing and avoid problems with shadowing.
+
+## meta
+
+- We use _algorithmic complexity_ for performance discussions, when
+  things like Big-O come up, etc
+- We use _conceptual complexity_ for design and architecture
+- We use _cognitive load_ when talking about developer experience
+- We use _operational burden_ when talking about production reality


@@ -1 +1,219 @@
# Framework vs Application Ownership
This document defines **ownership boundaries** between the framework and application code. These boundaries are intentional and non-negotiable: they exist to preserve upgradeability, predictability, and developer sanity under stress.
Ownership answers a simple question:
> **Who is allowed to change this, and under what rules?**
The framework draws a hard line between *framework-owned* and *application-owned* concerns, while still encouraging extension through explicit, visible mechanisms.
---
## Core Principle
The framework is not a library of suggestions. It is a **runtime with invariants**.
Application code:
* **uses** the framework
* **extends** it through defined seams
* **never mutates or overrides its invariants**
Framework code:
* guarantees stable behavior
* owns critical lifecycle and security concerns
* must remain internally consistent across versions
Breaking this boundary creates systems that work *until they don't*, usually during upgrades or emergencies.
---
## Database Ownership
### Framework-Owned Tables
Certain database tables are **owned and managed exclusively by the framework**.
Examples (illustrative, not exhaustive):
* authentication primitives
* session or token state
* internal capability/permission metadata
* migration bookkeeping
* framework feature flags or invariants
#### Rules
Application code **must not**:
* modify schema
* add columns
* delete rows
* update rows directly
* rely on undocumented columns or behaviors
Application code **may**:
* read via documented framework APIs
* reference stable identifiers explicitly exposed by the framework
Think of these tables as **private internal state** — even though they live in your database.
> If the framework needs you to interact with this data, it will expose an API for it.
#### Rationale
These tables:
* encode security or correctness invariants
* may change structure across framework versions
* must remain globally coherent
Treating them as app-owned data tightly couples your app to framework internals and blocks safe upgrades.
---
### Application-Owned Tables
All domain data belongs to the application.
Examples:
* users (as domain actors, not auth primitives)
* posts, orders, comments, invoices
* business-specific joins and projections
* denormalized or performance-oriented tables
#### Rules
Application code:
* owns schema design
* owns migrations
* owns constraints and indexes
* may evolve these tables freely
The framework:
* never mutates application tables implicitly
* interacts only through explicit queries or contracts
#### Integration Pattern
Where framework concepts must relate to app data:
* use **foreign keys to framework-exposed identifiers**, or
* introduce **explicit join tables** owned by the application
No hidden coupling, no magic backfills.
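As an illustrative sketch of the integration pattern above (every name here is invented, not a framework API), relating app data to a framework-exposed identifier through an explicit, application-owned join table looks like:

```typescript
// Hypothetical sketch: an application-owned join relating domain data to a
// framework-exposed identifier. All names are invented for illustration.
type UserId = string; // stable identifier the framework exposes
type PostId = string; // application-owned domain identifier

// The app owns this table's schema, migrations, and indexes.
interface PostEditor {
	postId: PostId;
	userId: UserId; // references the exposed id, never framework internals
}

function editorsFor(joins: PostEditor[], postId: PostId): UserId[] {
	return joins.filter((j) => j.postId === postId).map((j) => j.userId);
}

const joins: PostEditor[] = [
	{ postId: "p1", userId: "u1" },
	{ postId: "p1", userId: "u2" },
	{ postId: "p2", userId: "u3" },
];
console.log(editorsFor(joins, "p1")); // ["u1", "u2"]
```

The coupling is one visible field; if the framework changes its internals, only the exposed identifier contract has to hold.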
---
## Code Ownership
### Framework-Owned Code
Some classes, constants, and modules are **framework-owned**.
These include:
* core request/response abstractions
* auth and user primitives
* capability/permission evaluation logic
* lifecycle hooks
* low-level utilities relied on by the framework itself
#### Rules
Application code **must not**:
* modify framework source
* monkeypatch or override internals
* rely on undocumented behavior
* change constant values or internal defaults
Framework code is treated as **read-only** from the app's perspective.
---
### Extension Is Encouraged (But Explicit)
Ownership does **not** mean rigidity.
The framework is designed to be extended via **intentional seams**, such as:
* subclassing
* composition
* adapters
* delegation
* configuration objects
* explicit registration APIs
#### Preferred Patterns
* **Subclass when behavior is stable and conceptual**
* **Compose when behavior is contextual or optional**
* **Delegate when authority should remain with the framework**
What matters is that extension is:
* visible in code
* locally understandable
* reversible
No spooky action at a distance.
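The preferred patterns can be sketched in a few lines. Every class name below is invented for illustration and is not a framework API:

```typescript
// Hypothetical sketch of extension seams: subclass a documented hook,
// or compose around the framework object at the call site.
class Mailer {
	send(to: string, body: string): string {
		return `to=${to} body=${this.format(body)}`;
	}
	// documented seam: subclasses may override formatting
	protected format(body: string): string {
		return body;
	}
}

// Subclass: the change is stable and conceptual.
class BrandedMailer extends Mailer {
	protected override format(body: string): string {
		return `[ACME] ${body}`;
	}
}

// Compose: the change is contextual or optional, and visible at the call site.
class AuditedMailer {
	readonly log: string[] = [];
	constructor(private inner: Mailer) {}
	send(to: string, body: string): string {
		this.log.push(`mail:${to}`); // local, understandable, reversible
		return this.inner.send(to, body);
	}
}

const mailer = new AuditedMailer(new BrandedMailer());
console.log(mailer.send("a@example.com", "hi"));
// to=a@example.com body=[ACME] hi
```

Both extensions are visible in code and reversible by deleting a class; neither reaches into `Mailer` internals.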
---
## What the App Owns Completely
The application fully owns:
* domain models and data shapes
* SQL queries and result parsing
* business rules
* authorization policy *inputs* (not the engine)
* rendering decisions
* feature flags specific to the app
* performance tradeoffs
The framework does not attempt to infer intent from your domain.
---
## What the Framework Guarantees
In return for respecting ownership boundaries, the framework guarantees:
* stable semantics across versions
* forward-only migrations for its own tables
* explicit deprecations
* no silent behavior changes
* identical runtime behavior in dev and prod
The framework may evolve internally — **but never by reaching into your app's data or code**.
---
## A Useful Mental Model
* Framework-owned things are **constitutional law**
* Application-owned things are **legislation**
You can write any laws you want — but you don't amend the constitution inline.
If you need a new power, the framework should expose it deliberately.
---
## Summary
* Ownership is about **who is allowed to change what**
* Framework-owned tables and code are read-only to the app
* Application-owned tables and code are sovereign
* Extension is encouraged, mutation is not
* Explicit seams beat clever hacks
Respecting these boundaries keeps systems boring — and boring systems survive stress.


@@ -1,27 +1,39 @@
 import express, {
-	Request as ExpressRequest,
-	Response as ExpressResponse,
+	type Request as ExpressRequest,
+	type Response as ExpressResponse,
 } from "express";
 import { match } from "path-to-regexp";
+import { Session } from "./auth";
+import { cli } from "./cli";
 import { contentTypes } from "./content-types";
+import { runWithContext } from "./context";
+import { core } from "./core";
 import { httpCodes } from "./http-codes";
+import { request } from "./request";
 import { routes } from "./routes";
-import { services } from "./services";
 // import { URLPattern } from 'node:url';
 import {
-	Call,
-	InternalHandler,
-	Method,
-	ProcessedRoute,
-	Result,
-	Route,
+	AuthenticationRequired,
+	AuthorizationDenied,
+	type Call,
+	type InternalHandler,
+	isRedirect,
+	type Method,
 	massageMethod,
 	methodParser,
+	type ProcessedRoute,
+	type Result,
+	type Route,
 } from "./types";
 const app = express();
-services.logging.log({ source: "logging", text: ["1"] });
+// Parse request bodies
+app.use(express.json());
+app.use(express.urlencoded({ extended: true }));
+core.logging.log({ source: "logging", text: ["1"] });
 const processedRoutes: { [K in Method]: ProcessedRoute[] } = {
 	GET: [],
 	POST: [],
@@ -30,7 +42,7 @@ const processedRoutes: { [K in Method]: ProcessedRoute[] } = {
 	DELETE: [],
 };
-function isPromise<T>(value: T | Promise<T>): value is Promise<T> {
+function _isPromise<T>(value: T | Promise<T>): value is Promise<T> {
 	return typeof (value as any)?.then === "function";
 }
@@ -40,9 +52,9 @@ routes.forEach((route: Route, _idx: number, _allRoutes: Route[]) => {
 	const methodList = route.methods;
 	const handler: InternalHandler = async (
-		request: ExpressRequest,
+		expressRequest: ExpressRequest,
 	): Promise<Result> => {
-		const method = massageMethod(request.method);
+		const method = massageMethod(expressRequest.method);
 		console.log("method", method);
@@ -50,27 +62,46 @@ routes.forEach((route: Route, _idx: number, _allRoutes: Route[]) => {
 			// XXX: Worth asserting this?
 		}
-		console.log("request.originalUrl", request.originalUrl);
-		console.log("beavis");
-		// const p = new URL(request.originalUrl);
-		// const path = p.pathname;
-		// console.log("p, path", p, path)
-		console.log("ok");
+		console.log("request.originalUrl", expressRequest.originalUrl);
+		// Authenticate the request
+		const auth = await request.auth.validateRequest(expressRequest);
 		const req: Call = {
 			pattern: route.path,
-			// path,
-			path: request.originalUrl,
+			path: expressRequest.originalUrl,
 			method,
 			parameters: { one: 1, two: 2 },
-			request,
+			request: expressRequest,
+			user: auth.user,
+			session: new Session(auth.session, auth.user),
 		};
-		const retval = await route.handler(req);
-		return retval;
+		try {
+			const retval = await runWithContext({ user: auth.user }, () =>
+				route.handler(req),
+			);
+			return retval;
+		} catch (error) {
+			// Handle authentication errors
+			if (error instanceof AuthenticationRequired) {
+				return {
+					code: httpCodes.clientErrors.Unauthorized,
+					contentType: contentTypes.application.json,
+					result: JSON.stringify({
+						error: "Authentication required",
+					}),
+				};
+			}
+			if (error instanceof AuthorizationDenied) {
+				return {
+					code: httpCodes.clientErrors.Forbidden,
+					contentType: contentTypes.application.json,
+					result: JSON.stringify({ error: "Access denied" }),
+				};
+			}
+			throw error;
+		}
 	};
 	for (const [_idx, method] of methodList.entries()) {
@@ -87,8 +118,15 @@ async function handler(
 	const method = await methodParser.parseAsync(req.method);
 	const byMethod = processedRoutes[method];
+	console.log(
+		"DEBUG: req.path =",
+		JSON.stringify(req.path),
+		"method =",
+		method,
+	);
 	for (const [_idx, pr] of byMethod.entries()) {
-		const match = pr.matcher(req.url);
+		const match = pr.matcher(req.path);
+		console.log("DEBUG: trying pattern, match result =", match);
 		if (match) {
 			console.log("match", match);
 			const resp = await pr.handler(req);
@@ -100,7 +138,7 @@ async function handler(
 	const retval: Result = {
 		code: httpCodes.clientErrors.NotFound,
 		contentType: contentTypes.text.plain,
-		result: "not found",
+		result: "not found!",
 	};
 	return retval;
@@ -114,7 +152,22 @@ app.use(async (req: ExpressRequest, res: ExpressResponse) => {
 	console.log(result);
+	// Set any cookies from the result
+	if (result0.cookies) {
+		for (const cookie of result0.cookies) {
+			res.cookie(cookie.name, cookie.value, cookie.options ?? {});
+		}
+	}
+	if (isRedirect(result0)) {
+		res.redirect(code, result0.redirect);
+	} else {
 		res.status(code).send(result);
+	}
 });
-app.listen(3000);
+process.title = `diachron:${cli.listen.port}`;
+app.listen(cli.listen.port, cli.listen.host, () => {
+	console.log(`Listening on ${cli.listen.host}:${cli.listen.port}`);
+});

express/auth/index.ts (new file, 20 lines)

@@ -0,0 +1,20 @@
// index.ts
//
// Barrel export for auth module.
//
// NOTE: authRoutes is NOT exported here to avoid circular dependency:
// services.ts → auth/index.ts → auth/routes.ts → services.ts
// Import authRoutes directly from "./auth/routes" instead.
export { hashPassword, verifyPassword } from "./password";
export { type AuthResult, AuthService } from "./service";
export { type AuthStore, InMemoryAuthStore } from "./store";
export { generateToken, hashToken, SESSION_COOKIE_NAME } from "./token";
export {
type AuthMethod,
Session,
type SessionData,
type TokenId,
type TokenType,
tokenLifetimes,
} from "./types";

express/auth/password.ts (new file, 70 lines)

@@ -0,0 +1,70 @@
// password.ts
//
// Password hashing using Node.js scrypt (no external dependencies).
// Format: $scrypt$N$r$p$salt$hash (all base64)
import {
randomBytes,
type ScryptOptions,
scrypt,
timingSafeEqual,
} from "node:crypto";
// Configuration
const SALT_LENGTH = 32;
const KEY_LENGTH = 64;
const SCRYPT_PARAMS: ScryptOptions = {
N: 16384, // CPU/memory cost parameter (2^14)
r: 8, // Block size
p: 1, // Parallelization
};
// Promisified scrypt with options support
function scryptAsync(
password: string,
salt: Buffer,
keylen: number,
options: ScryptOptions,
): Promise<Buffer> {
return new Promise((resolve, reject) => {
scrypt(password, salt, keylen, options, (err, derivedKey) => {
if (err) {
reject(err);
} else {
resolve(derivedKey);
}
});
});
}
async function hashPassword(password: string): Promise<string> {
const salt = randomBytes(SALT_LENGTH);
const hash = await scryptAsync(password, salt, KEY_LENGTH, SCRYPT_PARAMS);
const { N, r, p } = SCRYPT_PARAMS;
return `$scrypt$${N}$${r}$${p}$${salt.toString("base64")}$${hash.toString("base64")}`;
}
async function verifyPassword(
password: string,
stored: string,
): Promise<boolean> {
const parts = stored.split("$");
if (parts[1] !== "scrypt" || parts.length !== 7) {
throw new Error("Invalid password hash format");
}
const [, , nStr, rStr, pStr, saltB64, hashB64] = parts;
const salt = Buffer.from(saltB64, "base64");
const storedHash = Buffer.from(hashB64, "base64");
const computedHash = await scryptAsync(password, salt, storedHash.length, {
N: parseInt(nStr, 10),
r: parseInt(rStr, 10),
p: parseInt(pStr, 10),
});
return timingSafeEqual(storedHash, computedHash);
}
export { hashPassword, verifyPassword };

express/auth/routes.ts (new file, 231 lines)

@@ -0,0 +1,231 @@
// routes.ts
//
// Authentication route handlers.
import { z } from "zod";
import { contentTypes } from "../content-types";
import { httpCodes } from "../http-codes";
import { request } from "../request";
import type { Call, Result, Route } from "../types";
import {
forgotPasswordInputParser,
loginInputParser,
registerInputParser,
resetPasswordInputParser,
} from "./types";
// Helper for JSON responses
const jsonResponse = (
code: (typeof httpCodes.success)[keyof typeof httpCodes.success],
data: object,
): Result => ({
code,
contentType: contentTypes.application.json,
result: JSON.stringify(data),
});
const errorResponse = (
code: (typeof httpCodes.clientErrors)[keyof typeof httpCodes.clientErrors],
error: string,
): Result => ({
code,
contentType: contentTypes.application.json,
result: JSON.stringify({ error }),
});
// POST /auth/login
const loginHandler = async (call: Call): Promise<Result> => {
try {
const body = call.request.body;
const { email, password } = loginInputParser.parse(body);
const result = await request.auth.login(email, password, "cookie", {
userAgent: call.request.get("User-Agent"),
ipAddress: call.request.ip,
});
if (!result.success) {
return errorResponse(
httpCodes.clientErrors.Unauthorized,
result.error,
);
}
return jsonResponse(httpCodes.success.OK, {
token: result.token,
user: {
id: result.user.id,
email: result.user.email,
displayName: result.user.displayName,
},
});
} catch (error) {
if (error instanceof z.ZodError) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
"Invalid input",
);
}
throw error;
}
};
// POST /auth/logout
const logoutHandler = async (call: Call): Promise<Result> => {
const token = request.auth.extractToken(call.request);
if (token) {
await request.auth.logout(token);
}
return jsonResponse(httpCodes.success.OK, { message: "Logged out" });
};
// POST /auth/register
const registerHandler = async (call: Call): Promise<Result> => {
try {
const body = call.request.body;
const { email, password, displayName } =
registerInputParser.parse(body);
const result = await request.auth.register(
email,
password,
displayName,
);
if (!result.success) {
return errorResponse(httpCodes.clientErrors.Conflict, result.error);
}
// TODO: Send verification email with result.verificationToken
// For now, log it for development
console.log(
`[AUTH] Verification token for ${email}: ${result.verificationToken}`,
);
return jsonResponse(httpCodes.success.Created, {
message:
"Registration successful. Please check your email to verify your account.",
user: {
id: result.user.id,
email: result.user.email,
},
});
} catch (error) {
if (error instanceof z.ZodError) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
"Invalid input",
);
}
throw error;
}
};
// POST /auth/forgot-password
const forgotPasswordHandler = async (call: Call): Promise<Result> => {
try {
const body = call.request.body;
const { email } = forgotPasswordInputParser.parse(body);
const result = await request.auth.createPasswordResetToken(email);
// Always return success (don't reveal if email exists)
if (result) {
// TODO: Send password reset email
console.log(
`[AUTH] Password reset token for ${email}: ${result.token}`,
);
}
return jsonResponse(httpCodes.success.OK, {
message:
"If an account exists with that email, a password reset link has been sent.",
});
} catch (error) {
if (error instanceof z.ZodError) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
"Invalid input",
);
}
throw error;
}
};
// POST /auth/reset-password
const resetPasswordHandler = async (call: Call): Promise<Result> => {
try {
const body = call.request.body;
const { token, password } = resetPasswordInputParser.parse(body);
const result = await request.auth.resetPassword(token, password);
if (!result.success) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
result.error,
);
}
return jsonResponse(httpCodes.success.OK, {
message:
"Password has been reset. You can now log in with your new password.",
});
} catch (error) {
if (error instanceof z.ZodError) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
"Invalid input",
);
}
throw error;
}
};
// GET /auth/verify-email?token=xxx
const verifyEmailHandler = async (call: Call): Promise<Result> => {
const url = new URL(call.path, "http://localhost");
const token = url.searchParams.get("token");
if (!token) {
return errorResponse(
httpCodes.clientErrors.BadRequest,
"Missing token",
);
}
const result = await request.auth.verifyEmail(token);
if (!result.success) {
return errorResponse(httpCodes.clientErrors.BadRequest, result.error);
}
return jsonResponse(httpCodes.success.OK, {
message: "Email verified successfully. You can now log in.",
});
};
// Export routes
const authRoutes: Route[] = [
{ path: "/auth/login", methods: ["POST"], handler: loginHandler },
{ path: "/auth/logout", methods: ["POST"], handler: logoutHandler },
{ path: "/auth/register", methods: ["POST"], handler: registerHandler },
{
path: "/auth/forgot-password",
methods: ["POST"],
handler: forgotPasswordHandler,
},
{
path: "/auth/reset-password",
methods: ["POST"],
handler: resetPasswordHandler,
},
{
path: "/auth/verify-email",
methods: ["GET"],
handler: verifyEmailHandler,
},
];
export { authRoutes };

express/auth/service.ts (new file, 262 lines)

@@ -0,0 +1,262 @@
// service.ts
//
// Core authentication service providing login, logout, registration,
// password reset, and email verification.
import type { Request as ExpressRequest } from "express";
import {
type AnonymousUser,
anonymousUser,
type User,
type UserId,
} from "../user";
import { hashPassword, verifyPassword } from "./password";
import type { AuthStore } from "./store";
import {
hashToken,
parseAuthorizationHeader,
SESSION_COOKIE_NAME,
} from "./token";
import { type SessionData, type TokenId, tokenLifetimes } from "./types";
type LoginResult =
| { success: true; token: string; user: User }
| { success: false; error: string };
type RegisterResult =
| { success: true; user: User; verificationToken: string }
| { success: false; error: string };
type SimpleResult = { success: true } | { success: false; error: string };
// Result of validating a request/token - contains both user and session
export type AuthResult =
| { authenticated: true; user: User; session: SessionData }
| { authenticated: false; user: AnonymousUser; session: null };
export class AuthService {
constructor(private store: AuthStore) {}
// === Login ===
async login(
email: string,
password: string,
authMethod: "cookie" | "bearer",
metadata?: { userAgent?: string; ipAddress?: string },
): Promise<LoginResult> {
const user = await this.store.getUserByEmail(email);
if (!user) {
return { success: false, error: "Invalid credentials" };
}
if (!user.isActive()) {
return { success: false, error: "Account is not active" };
}
const passwordHash = await this.store.getUserPasswordHash(user.id);
if (!passwordHash) {
return { success: false, error: "Invalid credentials" };
}
const valid = await verifyPassword(password, passwordHash);
if (!valid) {
return { success: false, error: "Invalid credentials" };
}
const { token } = await this.store.createSession({
userId: user.id,
tokenType: "session",
authMethod,
expiresAt: new Date(Date.now() + tokenLifetimes.session),
userAgent: metadata?.userAgent,
ipAddress: metadata?.ipAddress,
});
return { success: true, token, user };
}
// === Session Validation ===
async validateRequest(request: ExpressRequest): Promise<AuthResult> {
// Try cookie first (for web requests)
let token = this.extractCookieToken(request);
// Fall back to Authorization header (for API requests)
if (!token) {
token = parseAuthorizationHeader(request.get("Authorization"));
}
if (!token) {
return { authenticated: false, user: anonymousUser, session: null };
}
return this.validateToken(token);
}
async validateToken(token: string): Promise<AuthResult> {
const tokenId = hashToken(token) as TokenId;
const session = await this.store.getSession(tokenId);
if (!session) {
return { authenticated: false, user: anonymousUser, session: null };
}
if (session.tokenType !== "session") {
return { authenticated: false, user: anonymousUser, session: null };
}
const user = await this.store.getUserById(session.userId as UserId);
if (!user || !user.isActive()) {
return { authenticated: false, user: anonymousUser, session: null };
}
// Update last used (fire and forget)
this.store.updateLastUsed(tokenId).catch(() => {});
return { authenticated: true, user, session };
}
private extractCookieToken(request: ExpressRequest): string | null {
const cookies = request.get("Cookie");
if (!cookies) {
return null;
}
for (const cookie of cookies.split(";")) {
const [name, ...valueParts] = cookie.trim().split("=");
if (name === SESSION_COOKIE_NAME) {
return valueParts.join("="); // Handle = in token value
}
}
return null;
}
// === Logout ===
async logout(token: string): Promise<void> {
const tokenId = hashToken(token) as TokenId;
await this.store.deleteSession(tokenId);
}
async logoutAllSessions(userId: UserId): Promise<number> {
return this.store.deleteUserSessions(userId);
}
// === Registration ===
async register(
email: string,
password: string,
displayName?: string,
): Promise<RegisterResult> {
const existing = await this.store.getUserByEmail(email);
if (existing) {
return { success: false, error: "Email already registered" };
}
const passwordHash = await hashPassword(password);
const user = await this.store.createUser({
email,
passwordHash,
displayName,
});
// Create email verification token
const { token: verificationToken } = await this.store.createSession({
userId: user.id,
tokenType: "email_verify",
authMethod: "bearer",
expiresAt: new Date(Date.now() + tokenLifetimes.email_verify),
});
return { success: true, user, verificationToken };
}
// === Email Verification ===
async verifyEmail(token: string): Promise<SimpleResult> {
const tokenId = hashToken(token) as TokenId;
const session = await this.store.getSession(tokenId);
if (!session || session.tokenType !== "email_verify") {
return {
success: false,
error: "Invalid or expired verification token",
};
}
if (session.isUsed) {
return { success: false, error: "Token already used" };
}
await this.store.updateUserEmailVerified(session.userId as UserId);
await this.store.deleteSession(tokenId);
return { success: true };
}
// === Password Reset ===
async createPasswordResetToken(
email: string,
): Promise<{ token: string } | null> {
const user = await this.store.getUserByEmail(email);
if (!user) {
// Don't reveal whether email exists
return null;
}
const { token } = await this.store.createSession({
userId: user.id,
tokenType: "password_reset",
authMethod: "bearer",
expiresAt: new Date(Date.now() + tokenLifetimes.password_reset),
});
return { token };
}
async resetPassword(
token: string,
newPassword: string,
): Promise<SimpleResult> {
const tokenId = hashToken(token) as TokenId;
const session = await this.store.getSession(tokenId);
if (!session || session.tokenType !== "password_reset") {
return { success: false, error: "Invalid or expired reset token" };
}
if (session.isUsed) {
return { success: false, error: "Token already used" };
}
const passwordHash = await hashPassword(newPassword);
await this.store.setUserPassword(
session.userId as UserId,
passwordHash,
);
// Invalidate all existing sessions (security: password changed)
await this.store.deleteUserSessions(session.userId as UserId);
// Delete the reset token
await this.store.deleteSession(tokenId);
return { success: true };
}
// === Token Extraction Helper (for routes) ===
extractToken(request: ExpressRequest): string | null {
// Try Authorization header first
const token = parseAuthorizationHeader(request.get("Authorization"));
if (token) {
return token;
}
// Try cookie
return this.extractCookieToken(request);
}
}

express/auth/store.ts (new file, 164 lines)

@@ -0,0 +1,164 @@
// store.ts
//
// Authentication storage interface and in-memory implementation.
// The interface allows easy migration to PostgreSQL later.
import { AuthenticatedUser, type User, type UserId } from "../user";
import { generateToken, hashToken } from "./token";
import type { AuthMethod, SessionData, TokenId, TokenType } from "./types";
// Data for creating a new session (tokenId generated internally)
export type CreateSessionData = {
userId: string;
tokenType: TokenType;
authMethod: AuthMethod;
expiresAt: Date;
userAgent?: string;
ipAddress?: string;
};
// Data for creating a new user
export type CreateUserData = {
email: string;
passwordHash: string;
displayName?: string;
};
// Abstract interface for auth storage - implement for PostgreSQL later
export interface AuthStore {
// Session operations
createSession(
data: CreateSessionData,
): Promise<{ token: string; session: SessionData }>;
getSession(tokenId: TokenId): Promise<SessionData | null>;
updateLastUsed(tokenId: TokenId): Promise<void>;
deleteSession(tokenId: TokenId): Promise<void>;
deleteUserSessions(userId: UserId): Promise<number>;
// User operations
getUserByEmail(email: string): Promise<User | null>;
getUserById(userId: UserId): Promise<User | null>;
createUser(data: CreateUserData): Promise<User>;
getUserPasswordHash(userId: UserId): Promise<string | null>;
setUserPassword(userId: UserId, passwordHash: string): Promise<void>;
updateUserEmailVerified(userId: UserId): Promise<void>;
}
// In-memory implementation for development
export class InMemoryAuthStore implements AuthStore {
private sessions: Map<string, SessionData> = new Map();
private users: Map<string, User> = new Map();
private usersByEmail: Map<string, string> = new Map();
private passwordHashes: Map<string, string> = new Map();
private emailVerified: Map<string, boolean> = new Map();
async createSession(
data: CreateSessionData,
): Promise<{ token: string; session: SessionData }> {
const token = generateToken();
const tokenId = hashToken(token);
const session: SessionData = {
tokenId,
userId: data.userId,
tokenType: data.tokenType,
authMethod: data.authMethod,
createdAt: new Date(),
expiresAt: data.expiresAt,
userAgent: data.userAgent,
ipAddress: data.ipAddress,
};
this.sessions.set(tokenId, session);
return { token, session };
}
async getSession(tokenId: TokenId): Promise<SessionData | null> {
const session = this.sessions.get(tokenId);
if (!session) {
return null;
}
// Check expiration
if (new Date() > session.expiresAt) {
this.sessions.delete(tokenId);
return null;
}
return session;
}
async updateLastUsed(tokenId: TokenId): Promise<void> {
const session = this.sessions.get(tokenId);
if (session) {
session.lastUsedAt = new Date();
}
}
async deleteSession(tokenId: TokenId): Promise<void> {
this.sessions.delete(tokenId);
}
async deleteUserSessions(userId: UserId): Promise<number> {
let count = 0;
for (const [tokenId, session] of this.sessions) {
if (session.userId === userId) {
this.sessions.delete(tokenId);
count++;
}
}
return count;
}
async getUserByEmail(email: string): Promise<User | null> {
const userId = this.usersByEmail.get(email.toLowerCase());
if (!userId) {
return null;
}
return this.users.get(userId) ?? null;
}
async getUserById(userId: UserId): Promise<User | null> {
return this.users.get(userId) ?? null;
}
async createUser(data: CreateUserData): Promise<User> {
const user = AuthenticatedUser.create(data.email, {
displayName: data.displayName,
status: "pending", // Pending until email verified
});
this.users.set(user.id, user);
this.usersByEmail.set(data.email.toLowerCase(), user.id);
this.passwordHashes.set(user.id, data.passwordHash);
this.emailVerified.set(user.id, false);
return user;
}
async getUserPasswordHash(userId: UserId): Promise<string | null> {
return this.passwordHashes.get(userId) ?? null;
}
async setUserPassword(userId: UserId, passwordHash: string): Promise<void> {
this.passwordHashes.set(userId, passwordHash);
}
async updateUserEmailVerified(userId: UserId): Promise<void> {
this.emailVerified.set(userId, true);
// Update user status to active
const user = this.users.get(userId);
if (user) {
// Create new user with active status
const updatedUser = AuthenticatedUser.create(user.email, {
id: user.id,
displayName: user.displayName,
status: "active",
roles: [...user.roles],
permissions: [...user.permissions],
});
this.users.set(userId, updatedUser);
}
}
}
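To see how the raw-token/hashed-key split plays out, here is a trimmed-down sketch of the store's session handling — not the real class, just the two relevant methods with `node:crypto` inlined. The caller receives the raw token exactly once; the map is keyed only by its hash, and expiry is checked lazily on read, as above.

```typescript
import { createHash, randomBytes } from "node:crypto";

type MiniSession = { userId: string; expiresAt: Date };

class MiniStore {
    private sessions = new Map<string, MiniSession>();

    createSession(userId: string, ttlMs: number): string {
        const token = randomBytes(32).toString("base64url");
        const tokenId = createHash("sha256").update(token).digest("hex");
        this.sessions.set(tokenId, {
            userId,
            expiresAt: new Date(Date.now() + ttlMs),
        });
        return token; // the raw token leaves the store exactly once
    }

    getSession(token: string): MiniSession | null {
        const tokenId = createHash("sha256").update(token).digest("hex");
        const session = this.sessions.get(tokenId);
        if (!session) {
            return null;
        }
        if (new Date() > session.expiresAt) {
            this.sessions.delete(tokenId); // lazy expiry, as in getSession above
            return null;
        }
        return session;
    }
}

const store = new MiniStore();
const token = store.createSession("user-1", 60_000);
console.log(store.getSession(token)?.userId); // user-1
console.log(store.getSession("wrong-token")); // null
```

A stolen dump of the sessions map yields only hashes, which cannot be replayed as bearer tokens.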

express/auth/token.ts Normal file

@@ -0,0 +1,42 @@
// token.ts
//
// Token generation and hashing utilities for authentication.
// Raw tokens are never stored - only their SHA-256 hashes.
import { createHash, randomBytes } from "node:crypto";
const TOKEN_BYTES = 32; // 256 bits of entropy
// Generate a cryptographically secure random token
function generateToken(): string {
return randomBytes(TOKEN_BYTES).toString("base64url");
}
// Hash token for storage (never store raw tokens)
function hashToken(token: string): string {
return createHash("sha256").update(token).digest("hex");
}
// Parse token from Authorization header
function parseAuthorizationHeader(header: string | undefined): string | null {
if (!header) {
return null;
}
const parts = header.split(" ");
if (parts.length !== 2 || parts[0].toLowerCase() !== "bearer") {
return null;
}
return parts[1];
}
// Cookie name for web sessions
const SESSION_COOKIE_NAME = "diachron_session";
export {
generateToken,
hashToken,
parseAuthorizationHeader,
SESSION_COOKIE_NAME,
};
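These three helpers compose into the login/request round trip: hand out the raw token, store only the hash, and re-hash whatever arrives in the `Authorization` header to get the lookup key. A standalone sketch with the implementations inlined for illustration:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Inlined copies of the helpers above, so this sketch runs on its own.
const generateToken = (): string => randomBytes(32).toString("base64url");
const hashToken = (token: string): string =>
    createHash("sha256").update(token).digest("hex");
const parseAuthorizationHeader = (
    header: string | undefined,
): string | null => {
    if (!header) {
        return null;
    }
    const parts = header.split(" ");
    if (parts.length !== 2 || parts[0].toLowerCase() !== "bearer") {
        return null;
    }
    return parts[1];
};

// On login: the client keeps the raw token, the server stores its hash.
const raw = generateToken();
const stored = hashToken(raw);

// On a later request: parse the header ("Bearer" matches case-insensitively)
// and re-hash to recover the storage key.
const received = parseAuthorizationHeader(`bearer ${raw}`);
console.log(received !== null && hashToken(received) === stored); // true
```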

express/auth/types.ts Normal file

@@ -0,0 +1,96 @@
// types.ts
//
// Authentication types and Zod schemas.
import { z } from "zod";
// Branded type for token IDs (the hash, not the raw token)
export type TokenId = string & { readonly __brand: "TokenId" };
// Token types for different purposes
export const tokenTypeParser = z.enum([
"session",
"password_reset",
"email_verify",
]);
export type TokenType = z.infer<typeof tokenTypeParser>;
// Authentication method - how the token was delivered
export const authMethodParser = z.enum(["cookie", "bearer"]);
export type AuthMethod = z.infer<typeof authMethodParser>;
// Session data schema - what gets stored
export const sessionDataParser = z.object({
tokenId: z.string().min(1),
userId: z.string().min(1),
tokenType: tokenTypeParser,
authMethod: authMethodParser,
createdAt: z.coerce.date(),
expiresAt: z.coerce.date(),
lastUsedAt: z.coerce.date().optional(),
userAgent: z.string().optional(),
ipAddress: z.string().optional(),
isUsed: z.boolean().optional(), // For one-time tokens
});
export type SessionData = z.infer<typeof sessionDataParser>;
// Input validation schemas for auth endpoints
export const loginInputParser = z.object({
email: z.string().email(),
password: z.string().min(1),
});
export const registerInputParser = z.object({
email: z.string().email(),
password: z.string().min(8),
displayName: z.string().optional(),
});
export const forgotPasswordInputParser = z.object({
email: z.string().email(),
});
export const resetPasswordInputParser = z.object({
token: z.string().min(1),
password: z.string().min(8),
});
// Token lifetimes in milliseconds
export const tokenLifetimes: Record<TokenType, number> = {
session: 30 * 24 * 60 * 60 * 1000, // 30 days
password_reset: 1 * 60 * 60 * 1000, // 1 hour
email_verify: 24 * 60 * 60 * 1000, // 24 hours
};
// Import here to avoid circular dependency at module load time
import type { User } from "../user";
// Session wrapper class providing a consistent interface for handlers.
// Always present on Call (never null), but may represent an anonymous session.
export class Session {
constructor(
private readonly data: SessionData | null,
private readonly user: User,
) {}
getUser(): User {
return this.user;
}
getData(): SessionData | null {
return this.data;
}
isAuthenticated(): boolean {
return !this.user.isAnonymous();
}
get tokenId(): string | undefined {
return this.data?.tokenId;
}
get userId(): string | undefined {
return this.data?.userId;
}
}
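A quick sketch of the two shapes a `Session` can take. The real `User` class lives in `../user`; here a minimal stand-in type (hypothetical, only what the wrapper needs) is used so the example runs on its own:

```typescript
// Hypothetical stand-in for the User interface from ../user.
type MiniUser = { id: string; isAnonymous(): boolean };

// Trimmed copy of the Session wrapper above.
class Session {
    constructor(
        private readonly data: { tokenId: string; userId: string } | null,
        private readonly user: MiniUser,
    ) {}
    isAuthenticated(): boolean {
        return !this.user.isAnonymous();
    }
    get tokenId(): string | undefined {
        return this.data?.tokenId;
    }
}

// Anonymous session: no stored data, but still a real object -
// handlers never have to null-check the session itself.
const anon = new Session(null, { id: "anon", isAnonymous: () => true });
// Authenticated session: backed by SessionData.
const authed = new Session(
    { tokenId: "abc123", userId: "user-1" },
    { id: "user-1", isAnonymous: () => false },
);

console.log(anon.isAuthenticated(), anon.tokenId); // false undefined
console.log(authed.isAuthenticated(), authed.tokenId); // true abc123
```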

express/basic/login.ts Normal file

@@ -0,0 +1,62 @@
import { SESSION_COOKIE_NAME } from "../auth/token";
import { tokenLifetimes } from "../auth/types";
import { request } from "../request";
import { html, redirect, render } from "../request/util";
import type { Call, Result, Route } from "../types";
const loginHandler = async (call: Call): Promise<Result> => {
if (call.method === "GET") {
const c = await render("basic/login", {});
return html(c);
}
// POST - handle login
const { email, password } = call.request.body;
if (!email || !password) {
const c = await render("basic/login", {
error: "Email and password are required",
email,
});
return html(c);
}
const result = await request.auth.login(email, password, "cookie", {
userAgent: call.request.get("User-Agent"),
ipAddress: call.request.ip,
});
if (!result.success) {
const c = await render("basic/login", {
error: result.error,
email,
});
return html(c);
}
// Success - set cookie and redirect to home
const redirectResult = redirect("/");
redirectResult.cookies = [
{
name: SESSION_COOKIE_NAME,
value: result.token,
options: {
httpOnly: true,
secure: false, // Set to true in production with HTTPS
sameSite: "lax",
maxAge: tokenLifetimes.session,
path: "/",
},
},
];
return redirectResult;
};
const loginRoute: Route = {
path: "/login",
methods: ["GET", "POST"],
handler: loginHandler,
};
export { loginRoute };

express/basic/logout.ts Normal file

@@ -0,0 +1,38 @@
import { SESSION_COOKIE_NAME } from "../auth/token";
import { request } from "../request";
import { redirect } from "../request/util";
import type { Call, Result, Route } from "../types";
const logoutHandler = async (call: Call): Promise<Result> => {
// Extract token from cookie and invalidate the session
const token = request.auth.extractToken(call.request);
if (token) {
await request.auth.logout(token);
}
// Clear the cookie and redirect to login
const redirectResult = redirect("/login");
redirectResult.cookies = [
{
name: SESSION_COOKIE_NAME,
value: "",
options: {
httpOnly: true,
secure: false,
sameSite: "lax",
maxAge: 0,
path: "/",
},
},
];
return redirectResult;
};
const logoutRoute: Route = {
path: "/logout",
methods: ["GET", "POST"],
handler: logoutHandler,
};
export { logoutRoute };

express/basic/routes.ts Normal file

@@ -0,0 +1,43 @@
import { DateTime } from "ts-luxon";
import { request } from "../request";
import { html, render } from "../request/util";
import type { Call, Result, Route } from "../types";
import { loginRoute } from "./login";
import { logoutRoute } from "./logout";
const routes: Record<string, Route> = {
hello: {
path: "/hello",
methods: ["GET"],
handler: async (_call: Call): Promise<Result> => {
const now = DateTime.now();
const c = await render("basic/hello", { now });
return html(c);
},
},
home: {
path: "/",
methods: ["GET"],
handler: async (_call: Call): Promise<Result> => {
const _auth = request.auth;
const me = request.session.getUser();
const email = me.toString();
const showLogin = me.isAnonymous();
const showLogout = !me.isAnonymous();
const c = await render("basic/home", {
email,
showLogin,
showLogout,
});
return html(c);
},
},
login: loginRoute,
logout: logoutRoute,
};
export { routes };

express/biome.jsonc Normal file

@@ -0,0 +1,39 @@
{
"$schema": "https://biomejs.dev/schemas/2.3.10/schema.json",
"vcs": {
"enabled": true,
"clientKind": "git",
"useIgnoreFile": true
},
"files": {
"includes": ["**", "!!**/dist"]
},
"formatter": {
"enabled": true,
"indentStyle": "space",
"indentWidth": 4
},
"linter": {
"enabled": true,
"rules": {
"recommended": true,
"style": {
"useBlockStatements": "error"
}
}
},
"javascript": {
"formatter": {
"quoteStyle": "double"
}
},
"assist": {
"enabled": true,
"actions": {
"source": {
"organizeImports": "on"
}
}
}
}

express/build.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR"
../cmd pnpm ncc build ./app.ts -o dist


@@ -2,7 +2,7 @@
 set -eu
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 check_dir="$DIR"

express/cli.ts Normal file

@@ -0,0 +1,55 @@
import { parseArgs } from "node:util";
const { values } = parseArgs({
options: {
listen: {
type: "string",
short: "l",
},
"log-address": {
type: "string",
default: "8085",
},
},
strict: true,
allowPositionals: false,
});
function parseListenAddress(listen: string | undefined): {
host: string;
port: number;
} {
const defaultHost = "127.0.0.1";
const defaultPort = 3500;
if (!listen) {
return { host: defaultHost, port: defaultPort };
}
const lastColon = listen.lastIndexOf(":");
if (lastColon === -1) {
// Just a port number
const port = parseInt(listen, 10);
if (Number.isNaN(port)) {
throw new Error(`Invalid listen address: ${listen}`);
}
return { host: defaultHost, port };
}
const host = listen.slice(0, lastColon);
const port = parseInt(listen.slice(lastColon + 1), 10);
if (Number.isNaN(port)) {
throw new Error(`Invalid port in listen address: ${listen}`);
}
return { host, port };
}
const listenAddress = parseListenAddress(values.listen);
const logAddress = parseListenAddress(values["log-address"]);
export const cli = {
listen: listenAddress,
logAddress,
};
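The `--listen` value accepts a bare port, a `host:port` pair, or nothing (falling back to `127.0.0.1:3500`). Splitting at the *last* colon also keeps bracketed IPv6 literals intact. A standalone copy of the parser, exercised on each form:

```typescript
// Standalone copy of parseListenAddress, for illustration.
function parseListenAddress(listen: string | undefined): {
    host: string;
    port: number;
} {
    const defaultHost = "127.0.0.1";
    const defaultPort = 3500;
    if (!listen) {
        return { host: defaultHost, port: defaultPort };
    }
    const lastColon = listen.lastIndexOf(":");
    if (lastColon === -1) {
        // Just a port number
        const port = parseInt(listen, 10);
        if (Number.isNaN(port)) {
            throw new Error(`Invalid listen address: ${listen}`);
        }
        return { host: defaultHost, port };
    }
    const host = listen.slice(0, lastColon);
    const port = parseInt(listen.slice(lastColon + 1), 10);
    if (Number.isNaN(port)) {
        throw new Error(`Invalid port in listen address: ${listen}`);
    }
    return { host, port };
}

const a = parseListenAddress(undefined); // defaults
const b = parseListenAddress("8080"); // bare port
const c = parseListenAddress("0.0.0.0:3500"); // host:port
const d = parseListenAddress("[::1]:3500"); // bracketed IPv6
console.log(a.host, a.port); // 127.0.0.1 3500
console.log(b.host, b.port); // 127.0.0.1 8080
console.log(c.host, c.port); // 0.0.0.0 3500
console.log(d.host, d.port); // [::1] 3500
```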


@@ -1,4 +1,4 @@
-import { Extensible } from "./interfaces";
+// This file belongs to the framework. You are not expected to modify it.
 export type ContentType = string;

express/context.ts Normal file

@@ -0,0 +1,27 @@
// context.ts
//
// Request-scoped context using AsyncLocalStorage.
// Allows services to access request data (like the current user) without
// needing to pass Call through every function.
import { AsyncLocalStorage } from "node:async_hooks";
import { anonymousUser, type User } from "./user";
type RequestContext = {
user: User;
};
const asyncLocalStorage = new AsyncLocalStorage<RequestContext>();
// Run a function within a request context
function runWithContext<T>(context: RequestContext, fn: () => T): T {
return asyncLocalStorage.run(context, fn);
}
// Get the current user from context, or AnonymousUser if not in a request
function getCurrentUser(): User {
const context = asyncLocalStorage.getStore();
return context?.user ?? anonymousUser;
}
export { getCurrentUser, runWithContext, type RequestContext };
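The effect of `AsyncLocalStorage` here is that any function called (directly or through `await`) inside `runWithContext` sees the bound user without threading it through parameters, while code outside any request gets the anonymous fallback. A minimal self-contained sketch, using a stand-in user type:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Stand-ins for the User type and anonymous fallback from ./user.
type MiniUser = { name: string };
const anonymousUser: MiniUser = { name: "anonymous" };

const als = new AsyncLocalStorage<{ user: MiniUser }>();

function getCurrentUser(): MiniUser {
    return als.getStore()?.user ?? anonymousUser;
}

// Outside any request context, the fallback applies.
console.log(getCurrentUser().name); // anonymous

// Inside a context, nested calls see the bound user -
// no user parameter needs to be passed down.
als.run({ user: { name: "alice" } }, () => {
    const report = () => getCurrentUser().name;
    console.log(report()); // alice
});
```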

express/core/index.ts Normal file

@@ -0,0 +1,48 @@
import nunjucks from "nunjucks";
import { db, migrate, migrationStatus } from "../database";
import { getLogs, log } from "../logging";
// FIXME: This doesn't belong here; move it somewhere else.
const conf = {
templateEngine: () => {
return {
renderTemplate: (template: string, context: object) => {
return nunjucks.renderString(template, context);
},
};
},
};
const database = {
db,
migrate,
migrationStatus,
};
const logging = {
log,
getLogs,
};
const random = {
randomNumber: () => {
return Math.random();
},
};
const misc = {
sleep: (ms: number) => {
return new Promise((resolve) => setTimeout(resolve, ms));
},
};
// Keep this asciibetically sorted
const core = {
conf,
database,
logging,
misc,
random,
};
export { core };

express/database.ts Normal file

@@ -0,0 +1,548 @@
// database.ts
// PostgreSQL database access with Kysely query builder and simple migrations
import * as fs from "node:fs";
import * as path from "node:path";
import {
type Generated,
Kysely,
PostgresDialect,
type Selectable,
sql,
} from "kysely";
import { Pool } from "pg";
import type {
AuthStore,
CreateSessionData,
CreateUserData,
} from "./auth/store";
import { generateToken, hashToken } from "./auth/token";
import type { SessionData, TokenId } from "./auth/types";
import type { Domain } from "./types";
import { AuthenticatedUser, type User, type UserId } from "./user";
// Connection configuration
const connectionConfig = {
host: "localhost",
port: 5432,
user: "diachron",
password: "diachron",
database: "diachron",
};
// Database schema types for Kysely
// Generated<T> marks columns with database defaults (optional on insert)
interface UsersTable {
id: string;
status: Generated<string>;
display_name: string | null;
created_at: Generated<Date>;
updated_at: Generated<Date>;
}
interface UserEmailsTable {
id: string;
user_id: string;
email: string;
normalized_email: string;
is_primary: Generated<boolean>;
is_verified: Generated<boolean>;
created_at: Generated<Date>;
verified_at: Date | null;
revoked_at: Date | null;
}
interface UserCredentialsTable {
id: string;
user_id: string;
credential_type: Generated<string>;
password_hash: string | null;
created_at: Generated<Date>;
updated_at: Generated<Date>;
}
interface SessionsTable {
id: Generated<string>;
token_hash: string;
user_id: string;
user_email_id: string | null;
token_type: string;
auth_method: string;
created_at: Generated<Date>;
expires_at: Date;
revoked_at: Date | null;
ip_address: string | null;
user_agent: string | null;
is_used: Generated<boolean | null>;
}
interface Database {
users: UsersTable;
user_emails: UserEmailsTable;
user_credentials: UserCredentialsTable;
sessions: SessionsTable;
}
// Create the connection pool
const pool = new Pool(connectionConfig);
// Create the Kysely instance
const db = new Kysely<Database>({
dialect: new PostgresDialect({ pool }),
});
// Raw pool access for when you need it
const rawPool = pool;
// Execute raw SQL (for when Kysely doesn't fit)
async function raw<T = unknown>(
query: string,
params: unknown[] = [],
): Promise<T[]> {
const result = await pool.query(query, params);
return result.rows as T[];
}
// ============================================================================
// Migrations
// ============================================================================
// Migration file naming convention:
// yyyy-mm-dd_ss_description.sql
// e.g., 2025-01-15_01_initial.sql, 2025-01-15_02_add_users.sql
//
// Migrations directory: express/migrations/
const FRAMEWORK_MIGRATIONS_DIR = path.join(__dirname, "framework/migrations");
const APP_MIGRATIONS_DIR = path.join(__dirname, "migrations");
const MIGRATIONS_TABLE = "_migrations";
interface MigrationRecord {
id: number;
name: string;
applied_at: Date;
}
// Ensure migrations table exists
async function ensureMigrationsTable(): Promise<void> {
await pool.query(`
CREATE TABLE IF NOT EXISTS ${MIGRATIONS_TABLE} (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
applied_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
`);
}
// Get list of applied migrations
async function getAppliedMigrations(): Promise<string[]> {
const result = await pool.query<MigrationRecord>(
`SELECT name FROM ${MIGRATIONS_TABLE} ORDER BY name`,
);
return result.rows.map((r) => r.name);
}
// Get pending migration files
function getMigrationFiles(kind: Domain): string[] {
const dir = kind === "fw" ? FRAMEWORK_MIGRATIONS_DIR : APP_MIGRATIONS_DIR;
if (!fs.existsSync(dir)) {
return [];
}
const root = __dirname;
const mm = fs
.readdirSync(dir)
.filter((f) => f.endsWith(".sql"))
.filter((f) => /^\d{4}-\d{2}-\d{2}_\d{2}_/.test(f))
.map((f) => `${dir}/${f}`)
.map((f) => f.replace(`${root}/`, ""))
.sort();
return mm;
}
// Run a single migration
async function runMigration(filename: string): Promise<void> {
const filepath = filename;
const content = fs.readFileSync(filepath, "utf-8");
process.stdout.write(`  Migration: ${filename}...`);
// Run migration in a transaction
const client = await pool.connect();
try {
await client.query("BEGIN");
await client.query(content);
await client.query(
`INSERT INTO ${MIGRATIONS_TABLE} (name) VALUES ($1)`,
[filename],
);
await client.query("COMMIT");
console.log(" ✓");
} catch (err) {
console.log(" ✗");
const message = err instanceof Error ? err.message : String(err);
console.error(` Error: ${message}`);
await client.query("ROLLBACK");
throw err;
} finally {
client.release();
}
}
function getAllMigrationFiles() {
const fw_files = getMigrationFiles("fw");
const app_files = getMigrationFiles("app");
const all = [...fw_files, ...app_files];
return all;
}
// Run all pending migrations
async function migrate(): Promise<void> {
await ensureMigrationsTable();
const applied = new Set(await getAppliedMigrations());
const all = getAllMigrationFiles();
const pending = all.filter((name) => !applied.has(name));
if (pending.length === 0) {
console.log("No pending migrations");
return;
}
console.log(`Applying ${pending.length} migration(s):`);
for (const file of pending) {
await runMigration(file);
}
}
// List migration status
async function migrationStatus(): Promise<{
applied: string[];
pending: string[];
}> {
await ensureMigrationsTable();
const applied = new Set(await getAppliedMigrations());
const ff = getAllMigrationFiles();
return {
applied: ff.filter((name) => applied.has(name)),
pending: ff.filter((name) => !applied.has(name)),
};
}
// ============================================================================
// PostgresAuthStore - Database-backed authentication storage
// ============================================================================
class PostgresAuthStore implements AuthStore {
// Session operations
async createSession(
data: CreateSessionData,
): Promise<{ token: string; session: SessionData }> {
const token = generateToken();
const tokenHash = hashToken(token);
const row = await db
.insertInto("sessions")
.values({
token_hash: tokenHash,
user_id: data.userId,
token_type: data.tokenType,
auth_method: data.authMethod,
expires_at: data.expiresAt,
user_agent: data.userAgent ?? null,
ip_address: data.ipAddress ?? null,
})
.returningAll()
.executeTakeFirstOrThrow();
const session: SessionData = {
tokenId: row.token_hash,
userId: row.user_id,
tokenType: row.token_type as SessionData["tokenType"],
authMethod: row.auth_method as SessionData["authMethod"],
createdAt: row.created_at,
expiresAt: row.expires_at,
userAgent: row.user_agent ?? undefined,
ipAddress: row.ip_address ?? undefined,
isUsed: row.is_used ?? undefined,
};
return { token, session };
}
async getSession(tokenId: TokenId): Promise<SessionData | null> {
const row = await db
.selectFrom("sessions")
.selectAll()
.where("token_hash", "=", tokenId)
.where("expires_at", ">", new Date())
.where("revoked_at", "is", null)
.executeTakeFirst();
if (!row) {
return null;
}
return {
tokenId: row.token_hash,
userId: row.user_id,
tokenType: row.token_type as SessionData["tokenType"],
authMethod: row.auth_method as SessionData["authMethod"],
createdAt: row.created_at,
expiresAt: row.expires_at,
userAgent: row.user_agent ?? undefined,
ipAddress: row.ip_address ?? undefined,
isUsed: row.is_used ?? undefined,
};
}
async updateLastUsed(_tokenId: TokenId): Promise<void> {
// The new schema doesn't have last_used_at column
// This is now a no-op; session activity tracking could be added later
}
async deleteSession(tokenId: TokenId): Promise<void> {
// Soft delete by setting revoked_at
await db
.updateTable("sessions")
.set({ revoked_at: new Date() })
.where("token_hash", "=", tokenId)
.execute();
}
async deleteUserSessions(userId: UserId): Promise<number> {
const result = await db
.updateTable("sessions")
.set({ revoked_at: new Date() })
.where("user_id", "=", userId)
.where("revoked_at", "is", null)
.executeTakeFirst();
return Number(result.numUpdatedRows);
}
// User operations
async getUserByEmail(email: string): Promise<User | null> {
// Find user through user_emails table
const normalizedEmail = email.toLowerCase().trim();
const row = await db
.selectFrom("user_emails")
.innerJoin("users", "users.id", "user_emails.user_id")
.select([
"users.id",
"users.status",
"users.display_name",
"users.created_at",
"users.updated_at",
"user_emails.email",
])
.where("user_emails.normalized_email", "=", normalizedEmail)
.where("user_emails.revoked_at", "is", null)
.executeTakeFirst();
if (!row) {
return null;
}
return this.rowToUser(row);
}
async getUserById(userId: UserId): Promise<User | null> {
// Get user with their primary email
const row = await db
.selectFrom("users")
.leftJoin("user_emails", (join) =>
join
.onRef("user_emails.user_id", "=", "users.id")
.on("user_emails.is_primary", "=", true)
.on("user_emails.revoked_at", "is", null),
)
.select([
"users.id",
"users.status",
"users.display_name",
"users.created_at",
"users.updated_at",
"user_emails.email",
])
.where("users.id", "=", userId)
.executeTakeFirst();
if (!row) {
return null;
}
return this.rowToUser(row);
}
async createUser(data: CreateUserData): Promise<User> {
const userId = crypto.randomUUID();
const emailId = crypto.randomUUID();
const credentialId = crypto.randomUUID();
const now = new Date();
const normalizedEmail = data.email.toLowerCase().trim();
// Create user record
await db
.insertInto("users")
.values({
id: userId,
display_name: data.displayName ?? null,
status: "pending",
created_at: now,
updated_at: now,
})
.execute();
// Create user_email record
await db
.insertInto("user_emails")
.values({
id: emailId,
user_id: userId,
email: data.email,
normalized_email: normalizedEmail,
is_primary: true,
is_verified: false,
created_at: now,
})
.execute();
// Create user_credential record
await db
.insertInto("user_credentials")
.values({
id: credentialId,
user_id: userId,
credential_type: "password",
password_hash: data.passwordHash,
created_at: now,
updated_at: now,
})
.execute();
return new AuthenticatedUser({
id: userId,
email: data.email,
displayName: data.displayName,
status: "pending",
roles: [],
permissions: [],
createdAt: now,
updatedAt: now,
});
}
async getUserPasswordHash(userId: UserId): Promise<string | null> {
const row = await db
.selectFrom("user_credentials")
.select("password_hash")
.where("user_id", "=", userId)
.where("credential_type", "=", "password")
.executeTakeFirst();
return row?.password_hash ?? null;
}
async setUserPassword(userId: UserId, passwordHash: string): Promise<void> {
const now = new Date();
// Try to update existing credential
const result = await db
.updateTable("user_credentials")
.set({ password_hash: passwordHash, updated_at: now })
.where("user_id", "=", userId)
.where("credential_type", "=", "password")
.executeTakeFirst();
// If no existing credential, create one
if (Number(result.numUpdatedRows) === 0) {
await db
.insertInto("user_credentials")
.values({
id: crypto.randomUUID(),
user_id: userId,
credential_type: "password",
password_hash: passwordHash,
created_at: now,
updated_at: now,
})
.execute();
}
// Update user's updated_at
await db
.updateTable("users")
.set({ updated_at: now })
.where("id", "=", userId)
.execute();
}
async updateUserEmailVerified(userId: UserId): Promise<void> {
const now = new Date();
// Update user_emails to mark as verified
await db
.updateTable("user_emails")
.set({
is_verified: true,
verified_at: now,
})
.where("user_id", "=", userId)
.where("is_primary", "=", true)
.execute();
// Update user status to active
await db
.updateTable("users")
.set({
status: "active",
updated_at: now,
})
.where("id", "=", userId)
.execute();
}
// Helper to convert database row to User object
private rowToUser(row: {
id: string;
status: string;
display_name: string | null;
created_at: Date;
updated_at: Date;
email: string | null;
}): User {
return new AuthenticatedUser({
id: row.id,
email: row.email ?? "unknown@example.com",
displayName: row.display_name ?? undefined,
status: row.status as "active" | "suspended" | "pending",
roles: [], // TODO: query from RBAC tables
permissions: [], // TODO: query from RBAC tables
createdAt: row.created_at,
updatedAt: row.updated_at,
});
}
}
// ============================================================================
// Exports
// ============================================================================
export {
db,
raw,
rawPool,
pool,
migrate,
migrationStatus,
connectionConfig,
PostgresAuthStore,
type Database,
};
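As a sanity check on the filename convention documented in the migrations comment above (underscore-separated, as in its examples `2025-01-15_01_initial.sql` and `2025-01-15_02_add_users.sql`), here is a quick sketch of the filter logic against a few candidate names. The file list is invented for illustration:

```typescript
// The documented convention: yyyy-mm-dd_ss_description.sql
const migrationName = /^\d{4}-\d{2}-\d{2}_\d{2}_/;

const files = [
    "2025-01-15_01_initial.sql",
    "2025-01-15_02_add_users.sql",
    "notes.txt", // wrong extension
    "2025-1-5_1_bad_padding.sql", // unpadded date/sequence
];
const matching = files
    .filter((f) => f.endsWith(".sql"))
    .filter((f) => migrationName.test(f))
    .sort();
console.log(matching.length, matching[0]); // 2 2025-01-15_01_initial.sql
```

Zero-padding matters: lexicographic `.sort()` only yields chronological application order because dates and sequence numbers are fixed-width.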


@@ -0,0 +1,17 @@
import { pool } from "../database";
import { dropTables, exitIfUnforced } from "./util";
async function main(): Promise<void> {
exitIfUnforced();
try {
await dropTables();
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Failed to clear database:", err.message);
process.exit(1);
});


@@ -0,0 +1,26 @@
// reset-db.ts
// Development command to wipe the database and apply all migrations from scratch
import { migrate, pool } from "../database";
import { dropTables, exitIfUnforced } from "./util";
async function main(): Promise<void> {
exitIfUnforced();
try {
await dropTables();
console.log("");
await migrate();
console.log("");
console.log("Database reset complete.");
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Failed to reset database:", err.message);
process.exit(1);
});

express/develop/util.ts Normal file

@@ -0,0 +1,42 @@
// FIXME: this is at the wrong level of specificity
import { connectionConfig, pool } from "../database";
const exitIfUnforced = () => {
const args = process.argv.slice(2);
// Require explicit confirmation unless --force is passed
if (!args.includes("--force")) {
console.error("This will DROP ALL TABLES in the database!");
console.error(` Database: ${connectionConfig.database}`);
console.error(
` Host: ${connectionConfig.host}:${connectionConfig.port}`,
);
console.error("");
console.error("Run with --force to proceed.");
process.exit(1);
}
};
const dropTables = async () => {
console.log("Dropping all tables...");
// Get all table names in the public schema
const result = await pool.query<{ tablename: string }>(`
SELECT tablename FROM pg_tables
WHERE schemaname = 'public'
`);
if (result.rows.length > 0) {
// Drop all tables with CASCADE to handle foreign key constraints
const tableNames = result.rows
.map((r) => `"${r.tablename}"`)
.join(", ");
await pool.query(`DROP TABLE IF EXISTS ${tableNames} CASCADE`);
console.log(`Dropped ${result.rows.length} table(s)`);
} else {
console.log("No tables to drop");
}
};
export { dropTables, exitIfUnforced };
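The statement `dropTables` assembles is worth seeing concretely: each table name from `pg_tables` is double-quoted (so mixed-case or reserved-word names survive) and the whole list is dropped in one `CASCADE` statement. A sketch with invented row data:

```typescript
// Invented pg_tables rows, for illustration.
const rows = [{ tablename: "users" }, { tablename: "sessions" }];

// Same assembly as dropTables: quote each name, join, drop with CASCADE
// so foreign-key dependencies between the tables don't block the drop.
const tableNames = rows.map((r) => `"${r.tablename}"`).join(", ");
const statement = `DROP TABLE IF EXISTS ${tableNames} CASCADE`;
console.log(statement); // DROP TABLE IF EXISTS "users", "sessions" CASCADE
```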


@@ -0,0 +1,13 @@
import { z } from "zod";
export const executionContextSchema = z.object({
diachron_root: z.string(),
});
export type ExecutionContext = z.infer<typeof executionContextSchema>;
export function parseExecutionContext(
env: Record<string, string | undefined>,
): ExecutionContext {
return executionContextSchema.parse(env);
}


@@ -0,0 +1,38 @@
import assert from "node:assert/strict";
import { describe, it } from "node:test";
import { ZodError } from "zod";
import {
executionContextSchema,
parseExecutionContext,
} from "./execution-context-schema";
describe("parseExecutionContext", () => {
it("parses valid executionContext with diachron_root", () => {
const env = { diachron_root: "/some/path" };
const result = parseExecutionContext(env);
assert.deepEqual(result, { diachron_root: "/some/path" });
});
it("throws ZodError when diachron_root is missing", () => {
const env = {};
assert.throws(() => parseExecutionContext(env), ZodError);
});
it("strips extra fields not in schema", () => {
const env = {
diachron_root: "/some/path",
EXTRA_VAR: "should be stripped",
};
const result = parseExecutionContext(env);
assert.deepEqual(result, { diachron_root: "/some/path" });
assert.equal("EXTRA_VAR" in result, false);
});
});
describe("executionContextSchema", () => {
it("requires diachron_root to be a string", () => {
const result = executionContextSchema.safeParse({ diachron_root: 123 });
assert.equal(result.success, false);
});
});


@@ -0,0 +1,5 @@
import { parseExecutionContext } from "./execution-context-schema";
const executionContext = parseExecutionContext(process.env);
export { executionContext };


@@ -0,0 +1,29 @@
-- 0001_users.sql
-- Create users table for authentication
CREATE TABLE users (
id UUID PRIMARY KEY,
status TEXT NOT NULL DEFAULT 'active',
display_name TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE user_emails (
id UUID PRIMARY KEY,
user_id UUID NOT NULL REFERENCES users(id),
email TEXT NOT NULL,
normalized_email TEXT NOT NULL,
is_primary BOOLEAN NOT NULL DEFAULT FALSE,
is_verified BOOLEAN NOT NULL DEFAULT FALSE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
verified_at TIMESTAMPTZ,
revoked_at TIMESTAMPTZ
);
-- Enforce uniqueness only among *active* emails
CREATE UNIQUE INDEX user_emails_unique_active
ON user_emails (normalized_email)
WHERE revoked_at IS NULL;


@@ -0,0 +1,26 @@
-- 0002_sessions.sql
-- Create sessions table for auth tokens
CREATE TABLE sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
token_hash TEXT UNIQUE NOT NULL,
user_id UUID NOT NULL REFERENCES users(id),
user_email_id UUID REFERENCES user_emails(id),
token_type TEXT NOT NULL,
auth_method TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
expires_at TIMESTAMPTZ NOT NULL,
revoked_at TIMESTAMPTZ,
ip_address INET,
user_agent TEXT,
is_used BOOLEAN DEFAULT FALSE
);
-- Index for user session lookups (logout all, etc.)
CREATE INDEX sessions_user_id_idx ON sessions (user_id);
-- Index for expiration cleanup
CREATE INDEX sessions_expires_at_idx ON sessions (expires_at);
-- Index for token type filtering
CREATE INDEX sessions_token_type_idx ON sessions (token_type);
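Storing `token_hash` rather than the raw token means a leaked `sessions` table does not expose live credentials: the server hashes the presented token and looks the hash up. A sketch of that flow, assuming a SHA-256 hex digest; the actual token format and hash algorithm are not shown in this diff:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Assumed algorithm: SHA-256 hex. The digest is deterministic, so lookup is
// a plain equality match against the UNIQUE sessions.token_hash column.
const hashToken = (raw: string): string =>
  createHash("sha256").update(raw).digest("hex");

// The raw token goes to the client (cookie or link); only its hash is stored.
const newToken = (): { raw: string; hash: string } => {
  const raw = randomBytes(32).toString("base64url");
  return { raw, hash: hashToken(raw) };
};
```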


@@ -0,0 +1,20 @@
CREATE TABLE roles (
id UUID PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT
);
CREATE TABLE groups (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE user_group_roles (
user_id UUID NOT NULL REFERENCES users(id),
group_id UUID NOT NULL REFERENCES groups(id),
role_id UUID NOT NULL REFERENCES roles(id),
granted_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
revoked_at TIMESTAMPTZ,
PRIMARY KEY (user_id, group_id, role_id)
);


@@ -0,0 +1,14 @@
CREATE TABLE capabilities (
id UUID PRIMARY KEY,
name TEXT UNIQUE NOT NULL,
description TEXT
);
CREATE TABLE role_capabilities (
role_id UUID NOT NULL REFERENCES roles(id),
capability_id UUID NOT NULL REFERENCES capabilities(id),
granted_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
revoked_at TIMESTAMPTZ,
PRIMARY KEY (role_id, capability_id)
);
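Capabilities attach to roles, and roles attach to users per group, so "can this user do X in this group?" is a walk across the two junction tables that skips revoked rows. A minimal in-memory sketch of that resolution; the application's real query is not shown in this diff:

```typescript
// Mirrors user_group_roles and role_capabilities rows.
type Grant = { userId: string; groupId: string; roleId: string; revokedAt: Date | null };
type RoleCap = { roleId: string; capabilityId: string; revokedAt: Date | null };

// Walk user_group_roles -> role_capabilities, ignoring revoked rows on
// either side of the join.
const hasCapability = (
  grants: Grant[],
  roleCaps: RoleCap[],
  userId: string,
  groupId: string,
  capabilityId: string,
): boolean =>
  grants.some(
    (g) =>
      g.userId === userId &&
      g.groupId === groupId &&
      g.revokedAt === null &&
      roleCaps.some(
        (rc) =>
          rc.roleId === g.roleId &&
          rc.capabilityId === capabilityId &&
          rc.revokedAt === null,
      ),
  );
```

Because `revoked_at` is part of both junction tables, revoking a grant is an UPDATE rather than a DELETE, and the history stays queryable.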


@@ -1,11 +1,11 @@
 import { contentTypes } from "./content-types";
+import { core } from "./core";
 import { httpCodes } from "./http-codes";
-import { services } from "./services";
-import { Call, Handler, Result } from "./types";
+import type { Call, Handler, Result } from "./types";
 const multiHandler: Handler = async (call: Call): Promise<Result> => {
   const code = httpCodes.success.OK;
-  const rn = services.random.randomNumber();
+  const rn = core.random.randomNumber();
   const retval: Result = {
     code,

@@ -1,4 +1,4 @@
-import { Extensible } from "./interfaces";
+// This file belongs to the framework. You are not expected to modify it.
 export type HttpCode = {
   code: number;

@@ -1,5 +1,7 @@
 // internal-logging.ts
+import { cli } from "./cli";
+
 // FIXME: Move this to somewhere more appropriate
 type AtLeastOne<T> = [T, ...T[]];
@@ -13,8 +15,8 @@ type Message = {
   text: AtLeastOne<string>;
 };
-const m1: Message = { timestamp: 123, source: "logging", text: ["foo"] };
-const m2: Message = {
+const _m1: Message = { timestamp: 123, source: "logging", text: ["foo"] };
+const _m2: Message = {
   timestamp: 321,
   source: "diagnostic",
   text: ["ok", "whatever"],
@@ -30,12 +32,39 @@ type FilterArgument = {
   match?: (string | RegExp)[];
 };
-const log = (_message: Message) => {
-  // WRITEME
+const loggerUrl = `http://${cli.logAddress.host}:${cli.logAddress.port}`;
+const log = (message: Message) => {
+  const payload = {
+    timestamp: message.timestamp ?? Date.now(),
+    source: message.source,
+    text: message.text,
+  };
+  fetch(`${loggerUrl}/log`, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify(payload),
+  }).catch((err) => {
+    console.error("[logging] Failed to send log:", err.message);
+  });
 };
-const getLogs = (filter: FilterArgument) => {
-  // WRITEME
+const getLogs = async (filter: FilterArgument): Promise<Message[]> => {
+  const params = new URLSearchParams();
+  if (filter.limit) {
+    params.set("limit", String(filter.limit));
+  }
+  if (filter.before) {
+    params.set("before", String(filter.before));
+  }
+  if (filter.after) {
+    params.set("after", String(filter.after));
+  }
+  const url = `${loggerUrl}/logs?${params.toString()}`;
+  const response = await fetch(url);
+  return response.json();
 };
 // FIXME: there's scope for more specialized functions although they

express/mgmt/add-user.ts Normal file

@@ -0,0 +1,69 @@
// add-user.ts
// Management command to create users from the command line
import { hashPassword } from "../auth/password";
import { PostgresAuthStore, pool } from "../database";
async function main(): Promise<void> {
const args = process.argv.slice(2);
if (args.length < 2) {
console.error(
"Usage: ./mgmt add-user <email> <password> [--display-name <name>] [--active]",
);
process.exit(1);
}
const email = args[0];
const password = args[1];
// Parse optional flags
let displayName: string | undefined;
let makeActive = false;
for (let i = 2; i < args.length; i++) {
if (args[i] === "--display-name" && args[i + 1]) {
displayName = args[i + 1];
i++;
} else if (args[i] === "--active") {
makeActive = true;
}
}
try {
const store = new PostgresAuthStore();
// Check if user already exists
const existing = await store.getUserByEmail(email);
if (existing) {
console.error(`Error: User with email '${email}' already exists`);
process.exit(1);
}
// Hash password and create user
const passwordHash = await hashPassword(password);
const user = await store.createUser({
email,
passwordHash,
displayName,
});
// Optionally activate user immediately
if (makeActive) {
await store.updateUserEmailVerified(user.id);
console.log(
`Created and activated user: ${user.email} (${user.id})`,
);
} else {
console.log(`Created user: ${user.email} (${user.id})`);
console.log(" Status: pending (use --active to create as active)");
}
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Failed to create user:", err.message);
process.exit(1);
});
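The flag loop in add-user.ts can be factored into a pure function, which makes the `--display-name <name>` lookahead easy to exercise in isolation. A sketch only; the script itself keeps the loop inline:

```typescript
type AddUserFlags = { displayName?: string; makeActive: boolean };

// Same logic as the inline loop: consume the value following --display-name,
// treat --active as a boolean switch, and ignore anything unrecognized.
const parseAddUserFlags = (args: string[]): AddUserFlags => {
  const flags: AddUserFlags = { makeActive: false };
  for (let i = 0; i < args.length; i++) {
    if (args[i] === "--display-name" && args[i + 1]) {
      flags.displayName = args[i + 1];
      i++; // skip the consumed value
    } else if (args[i] === "--active") {
      flags.makeActive = true;
    }
  }
  return flags;
};
```

Note that a trailing `--display-name` with no value is silently ignored, matching the `args[i + 1]` guard in the original.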

express/migrate.ts Normal file

@@ -0,0 +1,45 @@
// migrate.ts
// CLI script for running database migrations
import { migrate, migrationStatus, pool } from "./database";
async function main(): Promise<void> {
const command = process.argv[2] || "run";
try {
switch (command) {
case "run":
await migrate();
break;
case "status": {
const status = await migrationStatus();
console.log("Applied migrations:");
for (const name of status.applied) {
console.log(`${name}`);
}
if (status.pending.length > 0) {
console.log("\nPending migrations:");
for (const name of status.pending) {
console.log(`${name}`);
}
} else {
console.log("\nNo pending migrations");
}
break;
}
default:
console.error(`Unknown command: ${command}`);
console.error("Usage: migrate [run|status]");
process.exit(1);
}
} finally {
await pool.end();
}
}
main().catch((err) => {
console.error("Migration failed:", err);
process.exit(1);
});
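The `status` branch relies on `migrationStatus()` splitting migrations into applied and pending. That split is a set difference over file names, which can be sketched independently of the database; the real implementation lives in ./database and is not shown in this diff:

```typescript
// Hypothetical helper: pending = migration files not yet recorded as applied,
// kept in lexical order so 0001_... sorts before 0002_....
const splitMigrations = (
  all: string[],
  applied: string[],
): { applied: string[]; pending: string[] } => {
  const done = new Set(applied);
  const sorted = [...all].sort();
  return {
    applied: sorted.filter((name) => done.has(name)),
    pending: sorted.filter((name) => !done.has(name)),
  };
};
```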


@@ -5,7 +5,6 @@
   "main": "index.js",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1",
-    "prettier": "prettier",
     "nodemon": "nodemon dist/index.js"
   },
   "keywords": [],
@@ -13,35 +12,24 @@
   "license": "ISC",
   "packageManager": "pnpm@10.12.4",
   "dependencies": {
-    "@ianvs/prettier-plugin-sort-imports": "^4.7.0",
     "@types/node": "^24.10.1",
-    "@types/nunjucks": "^3.2.6",
     "@vercel/ncc": "^0.38.4",
     "express": "^5.1.0",
+    "kysely": "^0.28.9",
     "nodemon": "^3.1.11",
-    "nunjucks": "^3.2.4",
     "path-to-regexp": "^8.3.0",
-    "prettier": "^3.6.2",
-    "ts-luxon": "^6.2.0",
+    "pg": "^8.16.3",
     "ts-node": "^10.9.2",
    "tsx": "^4.20.6",
     "typescript": "^5.9.3",
     "zod": "^4.1.12"
   },
-  "prettier": {
-    "arrowParens": "always",
-    "bracketSpacing": true,
-    "trailingComma": "all",
-    "tabWidth": 4,
-    "semi": true,
-    "singleQuote": false,
-    "importOrder": [
-      "<THIRD_PARTY_MODULES>",
-      "^[./]"
-    ],
-    "importOrderCaseSensitive": true,
-    "plugins": [
-      "@ianvs/prettier-plugin-sort-imports"
-    ]
-  },
   "devDependencies": {
-    "@types/express": "^5.0.5"
+    "@biomejs/biome": "2.3.10",
+    "@types/express": "^5.0.5",
+    "@types/pg": "^8.16.0"
   }
 }

express/pnpm-lock.yaml generated


express/request/index.ts Normal file

@@ -0,0 +1,25 @@
import { AuthService } from "../auth";
import { getCurrentUser } from "../context";
import { PostgresAuthStore } from "../database";
import type { User } from "../user";
import { html, redirect, render } from "./util";
const util = { html, redirect, render };
const session = {
getUser: (): User => {
return getCurrentUser();
},
};
// Initialize auth with PostgreSQL store
const authStore = new PostgresAuthStore();
const auth = new AuthService(authStore);
const request = {
auth,
session,
util,
};
export { request };

express/request/util.ts Normal file

@@ -0,0 +1,45 @@
import { contentTypes } from "../content-types";
import { core } from "../core";
import { executionContext } from "../execution-context";
import { httpCodes } from "../http-codes";
import type { RedirectResult, Result } from "../types";
import { loadFile } from "../util";
import { request } from "./index";
type NoUser = {
[key: string]: unknown;
} & {
user?: never;
};
const render = async (path: string, ctx?: NoUser): Promise<string> => {
const fullPath = `${executionContext.diachron_root}/templates/${path}.html.njk`;
const template = await loadFile(fullPath);
const user = request.session.getUser();
const context = { user, ...ctx };
const engine = core.conf.templateEngine();
const retval = engine.renderTemplate(template, context);
return retval;
};
const html = (payload: string): Result => {
const retval: Result = {
code: httpCodes.success.OK,
result: payload,
contentType: contentTypes.text.html,
};
return retval;
};
const redirect = (location: string): RedirectResult => {
return {
code: httpCodes.redirection.SeeOther,
contentType: contentTypes.text.plain,
result: "",
redirect: location,
};
};
export { html, redirect, render };


@@ -1,10 +1,14 @@
/// <reference lib="dom" />
import nunjucks from "nunjucks";
import { DateTime } from "ts-luxon";
import { authRoutes } from "./auth/routes";
import { routes as basicRoutes } from "./basic/routes";
import { contentTypes } from "./content-types";
import { core } from "./core";
import { multiHandler } from "./handlers";
-import { HttpCode, httpCodes } from "./http-codes";
+import { httpCodes } from "./http-codes";
-import { services } from "./services";
-import { Call, ProcessedRoute, Result, Route } from "./types";
+import type { Call, Result, Route } from "./types";
// FIXME: Obviously put this somewhere else
const okText = (result: string): Result => {
@@ -20,13 +24,18 @@ const okText = (result: string): Result => {
};
const routes: Route[] = [
...authRoutes,
basicRoutes.home,
basicRoutes.hello,
basicRoutes.login,
basicRoutes.logout,
{
path: "/slow",
methods: ["GET"],
handler: async (_call: Call): Promise<Result> => {
console.log("starting slow request");
-await services.misc.sleep(2);
+await core.misc.sleep(2);
console.log("finishing slow request");
const retval = okText("that was slow");
@@ -37,7 +46,7 @@ const routes: Route[] = [
{
path: "/list",
methods: ["GET"],
-handler: async (call: Call): Promise<Result> => {
+handler: async (_call: Call): Promise<Result> => {
const code = httpCodes.success.OK;
const lr = (rr: Route[]) => {
const ret = rr.map((r: Route) => {
@@ -47,11 +56,50 @@ const routes: Route[] = [
return ret;
};
-const listing = lr(routes).join(", ");
+const rrr = lr(routes);
const template = `
<html>
<head></head>
<body>
<ul>
{% for route in rrr %}
<li><a href="{{ route }}">{{ route }}</a></li>
{% endfor %}
</ul>
</body>
</html>
`;
const result = nunjucks.renderString(template, { rrr });
const _listing = lr(routes).join(", ");
return {
code,
-result: listing + "\n",
+result,
-contentType: contentTypes.text.plain,
+contentType: contentTypes.text.html,
};
},
},
{
path: "/whoami",
methods: ["GET"],
handler: async (call: Call): Promise<Result> => {
const me = call.session.getUser();
const template = `
<html>
<head></head>
<body>
{{ me }}
</body>
</html>
`;
const result = nunjucks.renderString(template, { me });
return {
code: httpCodes.success.OK,
contentType: contentTypes.text.html,
result,
};
},
},
@@ -72,6 +120,29 @@ const routes: Route[] = [
};
},
},
{
path: "/time",
methods: ["GET"],
handler: async (_req): Promise<Result> => {
const now = DateTime.now();
const template = `
<html>
<head></head>
<body>
{{ now }}
</body>
</html>
`;
const result = nunjucks.renderString(template, { now });
return {
code: httpCodes.success.OK,
contentType: contentTypes.text.html,
result,
};
},
},
];
export { routes };


@@ -1,32 +1,9 @@
#!/bin/bash
# XXX should we default to strict or non-strict here?
set -eu
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-run_dir="$DIR"
+cd "$DIR"
-source "$run_dir"/../framework/shims/common
+exec ../cmd node dist/index.js "$@"
source "$run_dir"/../framework/shims/node.common
strict_arg="${1:---no-strict}"
if [[ "$strict_arg" = "--strict" ]] ; then
strict="yes"
else
strict="no"
fi
cmd="tsx"
if [[ "strict" = "yes" ]] ; then
cmd="ts-node"
fi
cd "$run_dir"
"$run_dir"/check.sh
#echo checked
# $ROOT/cmd "$cmd" $run_dir/app.ts
../cmd node "$run_dir"/out/app.js


@@ -1,36 +0,0 @@
// services.ts
import { config } from "./config";
import { getLogs, log } from "./logging";
//const database = Client({
//})
const database = {};
const logging = {
log,
getLogs,
};
const random = {
randomNumber: () => {
return Math.random();
},
};
const misc = {
sleep: (ms: number) => {
return new Promise((resolve) => setTimeout(resolve, ms));
},
};
const services = {
database,
logging,
misc,
random,
};
export { services };


@@ -2,7 +2,7 @@
set -e
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
check_dir="$DIR"


@@ -8,6 +8,6 @@
"noImplicitAny": true, "noImplicitAny": true,
"strict": true, "strict": true,
"types": ["node"], "types": ["node"],
"outDir": "out", "outDir": "out"
} }
} }


@@ -2,14 +2,13 @@
// FIXME: split this up into types used by app developers and types internal
// to the framework.
-import {
-  Request as ExpressRequest,
-  Response as ExpressResponse,
-} from "express";
-import { MatchFunction } from "path-to-regexp";
+import type { Request as ExpressRequest } from "express";
+import type { MatchFunction } from "path-to-regexp";
import { z } from "zod";
-import { ContentType, contentTypes } from "./content-types";
-import { HttpCode, httpCodes } from "./http-codes";
+import type { Session } from "./auth/types";
+import type { ContentType } from "./content-types";
+import type { HttpCode } from "./http-codes";
+import type { Permission, User } from "./user";
const methodParser = z.union([
z.literal("GET"),
@@ -32,6 +31,8 @@ export type Call = {
method: Method;
parameters: object;
request: ExpressRequest;
user: User;
session: Session;
};
export type InternalHandler = (req: ExpressRequest) => Promise<Result>;
@@ -43,12 +44,35 @@ export type ProcessedRoute = {
handler: InternalHandler;
};
export type CookieOptions = {
httpOnly?: boolean;
secure?: boolean;
sameSite?: "strict" | "lax" | "none";
maxAge?: number;
path?: string;
};
export type Cookie = {
name: string;
value: string;
options?: CookieOptions;
};
export type Result = {
code: HttpCode;
contentType: ContentType;
result: string;
cookies?: Cookie[];
};
export type RedirectResult = Result & {
redirect: string;
};
export function isRedirect(result: Result): result is RedirectResult {
return "redirect" in result;
}
export type Route = {
path: string;
methods: Method[];
@@ -56,4 +80,38 @@ export type Route = {
interruptable?: boolean;
};
// Authentication error classes
export class AuthenticationRequired extends Error {
constructor() {
super("Authentication required");
this.name = "AuthenticationRequired";
}
}
export class AuthorizationDenied extends Error {
constructor() {
super("Authorization denied");
this.name = "AuthorizationDenied";
}
}
// Helper for handlers to require authentication
export function requireAuth(call: Call): User {
if (call.user.isAnonymous()) {
throw new AuthenticationRequired();
}
return call.user;
}
// Helper for handlers to require specific permission
export function requirePermission(call: Call, permission: Permission): User {
const user = requireAuth(call);
if (!user.hasPermission(permission)) {
throw new AuthorizationDenied();
}
return user;
}
export type Domain = "app" | "fw";
export { methodParser, massageMethod };
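The `RedirectResult` narrowing above can be sketched standalone (types simplified from this file's definitions; `code` reduced to a plain number for the sketch):

```typescript
// Simplified Result/RedirectResult pair and the "in"-based type guard,
// mirroring isRedirect() in types.ts.
type Result = { code: number; contentType: string; result: string };
type RedirectResult = Result & { redirect: string };

function isRedirect(result: Result): result is RedirectResult {
  return "redirect" in result;
}

const redirectResponse: RedirectResult = {
  code: 303,
  contentType: "text/plain",
  result: "",
  redirect: "/login",
};
const responses: Result[] = [
  { code: 200, contentType: "text/html", result: "<p>ok</p>" },
  redirectResponse,
];
for (const r of responses) {
  if (isRedirect(r)) {
    console.log("redirect ->", r.redirect); // narrowed: r.redirect is typed
  } else {
    console.log("body bytes:", r.result.length);
  }
}
```

The guard lets the dispatch layer branch on one `Result` union without a separate discriminant field.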

express/user.ts Normal file

@@ -0,0 +1,232 @@
// user.ts
//
// User model for authentication and authorization.
//
// Design notes:
// - `id` is the stable internal identifier (UUID when database-backed)
// - `email` is the primary human-facing identifier
// - Roles provide coarse-grained authorization (admin, editor, etc.)
// - Permissions provide fine-grained authorization (posts:create, etc.)
// - Users can have both roles (which grant permissions) and direct permissions
import { z } from "zod";
// Branded type for user IDs to prevent accidental mixing with other strings
export type UserId = string & { readonly __brand: "UserId" };
// User account status
const userStatusParser = z.enum(["active", "suspended", "pending"]);
export type UserStatus = z.infer<typeof userStatusParser>;
// Role - simple string identifier
const roleParser = z.string().min(1);
export type Role = z.infer<typeof roleParser>;
// Permission format: "resource:action" e.g. "posts:create", "users:delete"
const permissionParser = z.string().regex(/^[a-z_]+:[a-z_]+$/, {
message: "Permission must be in format 'resource:action'",
});
export type Permission = z.infer<typeof permissionParser>;
// Core user data schema - this is what gets stored/serialized
const userDataParser = z.object({
id: z.string().min(1),
email: z.email(),
displayName: z.string().optional(),
status: userStatusParser,
roles: z.array(roleParser),
permissions: z.array(permissionParser),
createdAt: z.coerce.date(),
updatedAt: z.coerce.date(),
});
export type UserData = z.infer<typeof userDataParser>;
// Role-to-permission mappings
// In a real system this might be database-driven or configurable
type RolePermissionMap = Map<Role, Permission[]>;
const defaultRolePermissions: RolePermissionMap = new Map([
["admin", ["users:read", "users:create", "users:update", "users:delete"]],
["user", ["users:read"]],
]);
export abstract class User {
protected readonly data: UserData;
protected rolePermissions: RolePermissionMap;
constructor(data: UserData, rolePermissions?: RolePermissionMap) {
this.data = userDataParser.parse(data);
this.rolePermissions = rolePermissions ?? defaultRolePermissions;
}
// Identity
get id(): UserId {
return this.data.id as UserId;
}
get email(): string {
return this.data.email;
}
get displayName(): string | undefined {
return this.data.displayName;
}
// Status
get status(): UserStatus {
return this.data.status;
}
isActive(): boolean {
return this.data.status === "active";
}
// Roles
get roles(): readonly Role[] {
return this.data.roles;
}
hasRole(role: Role): boolean {
return this.data.roles.includes(role);
}
hasAnyRole(roles: Role[]): boolean {
return roles.some((role) => this.hasRole(role));
}
hasAllRoles(roles: Role[]): boolean {
return roles.every((role) => this.hasRole(role));
}
// Permissions
get permissions(): readonly Permission[] {
return this.data.permissions;
}
// Get all permissions: direct + role-derived
effectivePermissions(): Set<Permission> {
const perms = new Set<Permission>(this.data.permissions);
for (const role of this.data.roles) {
const rolePerms = this.rolePermissions.get(role);
if (rolePerms) {
for (const p of rolePerms) {
perms.add(p);
}
}
}
return perms;
}
// Check if user has a specific permission (direct or via role)
hasPermission(permission: Permission): boolean {
// Check direct permissions first
if (this.data.permissions.includes(permission)) {
return true;
}
// Check role-derived permissions
for (const role of this.data.roles) {
const rolePerms = this.rolePermissions.get(role);
if (rolePerms?.includes(permission)) {
return true;
}
}
return false;
}
// Convenience method: can user perform action on resource?
can(action: string, resource: string): boolean {
const permission = `${resource}:${action}` as Permission;
return this.hasPermission(permission);
}
// Timestamps
get createdAt(): Date {
return this.data.createdAt;
}
get updatedAt(): Date {
return this.data.updatedAt;
}
// Serialization - returns plain object for storage/transmission
toJSON(): UserData {
return { ...this.data };
}
toString(): string {
return `User(id ${this.id})`;
}
abstract isAnonymous(): boolean;
}
export class AuthenticatedUser extends User {
// Factory for creating new users with sensible defaults
static create(
email: string,
options?: {
id?: string;
displayName?: string;
status?: UserStatus;
roles?: Role[];
permissions?: Permission[];
},
): User {
const now = new Date();
return new AuthenticatedUser({
id: options?.id ?? crypto.randomUUID(),
email,
displayName: options?.displayName,
status: options?.status ?? "active",
roles: options?.roles ?? [],
permissions: options?.permissions ?? [],
createdAt: now,
updatedAt: now,
});
}
isAnonymous(): boolean {
return false;
}
}
// For representing "no user" in contexts where user is optional
export class AnonymousUser extends User {
// FIXME: this is C&Ped with only minimal changes. No bueno.
static create(
email: string,
options?: {
id?: string;
displayName?: string;
status?: UserStatus;
roles?: Role[];
permissions?: Permission[];
},
): AnonymousUser {
const now = new Date(0);
return new AnonymousUser({
id: options?.id ?? crypto.randomUUID(),
email,
displayName: options?.displayName,
status: options?.status ?? "active",
roles: options?.roles ?? [],
permissions: options?.permissions ?? [],
createdAt: now,
updatedAt: now,
});
}
isAnonymous(): boolean {
return true;
}
}
export const anonymousUser = AnonymousUser.create("anonymous@example.com", {
id: "-1",
displayName: "Anonymous User",
});
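The role-derived permission resolution described in the design notes can be sketched standalone (a minimal model mirroring `effectivePermissions()` and the default role map, not the framework's actual classes):

```typescript
// Minimal model: a permission is effective if held directly or granted
// by any of the user's roles, exactly as User.effectivePermissions() does.
type Permission = string; // "resource:action"
type Role = string;

const rolePermissions = new Map<Role, Permission[]>([
  ["admin", ["users:read", "users:create", "users:update", "users:delete"]],
  ["user", ["users:read"]],
]);

function effectivePermissions(
  roles: Role[],
  direct: Permission[],
): Set<Permission> {
  const perms = new Set<Permission>(direct);
  for (const role of roles) {
    for (const p of rolePermissions.get(role) ?? []) {
      perms.add(p);
    }
  }
  return perms;
}

const perms = effectivePermissions(["user"], ["posts:create"]);
console.log(perms.has("users:read")); // true, via the "user" role
console.log(perms.has("users:delete")); // false, admin-only
```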

express/util.ts Normal file

@@ -0,0 +1,11 @@
import { readFile } from "node:fs/promises";
// FIXME: Handle the error here
const loadFile = async (path: string): Promise<string> => {
// Specifying 'utf8' returns a string; otherwise, it returns a Buffer
const data = await readFile(path, "utf8");
return data;
};
export { loadFile };


@@ -2,7 +2,7 @@
set -e
-DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
check_dir="$DIR"

fixup.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR"
# uv run ruff check --select I --fix .
# uv run ruff format .
shell_scripts="$(fd '\.sh$' | xargs)"
shfmt -i 4 -w "$DIR/cmd" "$DIR"/framework/cmd.d/* "$DIR"/framework/shims/* "$DIR"/master/master "$DIR"/logger/logger
# "$shell_scripts"
for ss in $shell_scripts; do
shfmt -i 4 -w "$ss"
done
pushd "$DIR/master"
go fmt
popd
pushd "$DIR/express"
../cmd pnpm biome check --write
popd


@@ -4,4 +4,4 @@ set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
"$DIR"/../shims/node "$@" exec "$DIR"/../shims/node "$@"

framework/cmd.d/test Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
set -eu
shopt -s globstar nullglob
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR/../../express"
if [ $# -eq 0 ]; then
"$DIR"/../shims/pnpm tsx --test ./**/*.spec.ts ./**/*.test.ts
else
"$DIR"/../shims/pnpm tsx --test "$@"
fi

framework/common.d/db Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
# FIXME: don't hard code this of course
PGPASSWORD=diachron psql -U diachron -h localhost diachron

framework/common.d/migrate Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/tsx migrate.ts "$@"

framework/develop.d/clear-db Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
# This file belongs to the framework. You are not expected to modify it.
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/../cmd.d/tsx develop/clear-db.ts "$@"

framework/develop.d/db Symbolic link

@@ -0,0 +1 @@
../common.d/db

framework/develop.d/migrate Symbolic link

@@ -0,0 +1 @@
../common.d/migrate

framework/develop.d/reset-db Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/../cmd.d/tsx develop/reset-db.ts "$@"

framework/mgmt.d/add-user Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT="$DIR/../.."
cd "$ROOT/express"
"$DIR"/../cmd.d/tsx mgmt/add-user.ts "$@"

framework/mgmt.d/db Symbolic link

@@ -0,0 +1 @@
../common.d/db

framework/mgmt.d/migrate Symbolic link

@@ -0,0 +1 @@
../common.d/migrate


@@ -5,12 +5,8 @@
set -eu
node_shim_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export node_shim_DIR
source "$node_shim_DIR"/../versions
# shellcheck source=node.common
source "$node_shim_DIR"/node.common source "$node_shim_DIR"/node.common
node_bin="$node_shim_DIR/../../$nodejs_binary_dir/node" exec "$nodejs_binary_dir/node" "$@"
exec "$node_bin" "$@"


@@ -2,23 +2,19 @@
# shellcheck shell=bash
node_common_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+project_root="$node_common_DIR/../.."
-# FIXME this shouldn't be hardcoded here of course
-nodejs_binary_dir="$node_common_DIR/../binaries/node-v22.15.1-linux-x64/bin"
+# shellcheck source=../versions
+source "$node_common_DIR"/../versions
+nodejs_binary_dir="$project_root/$nodejs_bin_dir"
# This might be too restrictive. Or not restrictive enough.
PATH="$nodejs_binary_dir":/bin:/usr/bin
-project_root="$node_common_DIR/../.."
-node_dir="$project_root/$nodejs_binary_dir"
+node_dist_dir="$project_root/$nodejs_dist_dir"
-export NPM_CONFIG_PREFIX="$node_dir/npm"
-export NPM_CONFIG_CACHE="$node_dir/cache"
-export NPM_CONFIG_TMP="$node_dir/tmp"
-export NODE_PATH="$node_dir/node_modules"
+export NPM_CONFIG_PREFIX="$node_dist_dir/npm"
+export NPM_CONFIG_CACHE="$node_dist_dir/cache"
+export NPM_CONFIG_TMP="$node_dist_dir/tmp"
+export NODE_PATH="$node_dist_dir/node_modules"
-# echo $NPM_CONFIG_PREFIX
-# echo $NPM_CONFIG_CACHE
-# echo $NPM_CONFIG_TMP
-# echo $NODE_PATH
framework/versions Normal file

@@ -0,0 +1,19 @@
# shellcheck shell=bash
# This file belongs to the framework. You are not expected to modify it.
# https://nodejs.org/dist
nodejs_binary_linux_x86_64=https://nodejs.org/dist/v24.12.0/node-v24.12.0-linux-x64.tar.xz
nodejs_checksum_linux_x86_64=bdebee276e58d0ef5448f3d5ac12c67daa963dd5e0a9bb621a53d1cefbc852fd
nodejs_dist_dir=framework/binaries/node-v22.15.1-linux-x64
nodejs_bin_dir="$nodejs_dist_dir/bin"
caddy_binary_linux_x86_64=fixme
caddy_checksum_linux_x86_64=fixmetoo
# https://github.com/pnpm/pnpm/releases
pnpm_binary_linux_x86_64=https://github.com/pnpm/pnpm/releases/download/v10.28.0/pnpm-linux-x64
pnpm_checksum_linux_x86_64=sha256:348e863d17a62411a65f900e8d91395acabae9e9237653ccc3c36cb385965f28
golangci_lint=v2.7.2-alpine

logger/.gitignore vendored Normal file

@@ -0,0 +1 @@
logger-bin

logger/go.mod Normal file

@@ -0,0 +1,3 @@
module philologue.net/diachron/logger-bin
go 1.23.3

logger/logger Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR"
exec ./logger-bin "$@"

logger/main.go Normal file

@@ -0,0 +1,70 @@
package main
import (
"encoding/json"
"flag"
"fmt"
"log"
"net/http"
"strconv"
)
func main() {
port := flag.Int("port", 8085, "port to listen on")
capacity := flag.Int("capacity", 1000000, "max messages to store")
flag.Parse()
store := NewLogStore(*capacity)
http.HandleFunc("POST /log", func(w http.ResponseWriter, r *http.Request) {
var msg Message
if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
http.Error(w, "invalid JSON", http.StatusBadRequest)
return
}
store.Add(msg)
w.WriteHeader(http.StatusCreated)
})
http.HandleFunc("GET /logs", func(w http.ResponseWriter, r *http.Request) {
params := FilterParams{}
if limit := r.URL.Query().Get("limit"); limit != "" {
if n, err := strconv.Atoi(limit); err == nil {
params.Limit = n
}
}
if before := r.URL.Query().Get("before"); before != "" {
if ts, err := strconv.ParseInt(before, 10, 64); err == nil {
params.Before = ts
}
}
if after := r.URL.Query().Get("after"); after != "" {
if ts, err := strconv.ParseInt(after, 10, 64); err == nil {
params.After = ts
}
}
messages := store.GetFiltered(params)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(messages)
})
http.HandleFunc("GET /status", func(w http.ResponseWriter, r *http.Request) {
status := map[string]any{
"count": store.Count(),
"capacity": *capacity,
}
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(status)
})
listenAddr := fmt.Sprintf(":%d", *port)
log.Printf("[logger] Listening on %s (capacity: %d)", listenAddr, *capacity)
if err := http.ListenAndServe(listenAddr, nil); err != nil {
log.Fatalf("[logger] Failed to start: %v", err)
}
}

logger/store.go Normal file

@@ -0,0 +1,126 @@
package main
import (
"sync"
)
// Message represents a log entry from the express backend
type Message struct {
Timestamp int64 `json:"timestamp"`
Source string `json:"source"` // "logging" | "diagnostic" | "user"
Text []string `json:"text"`
}
// LogStore is a thread-safe ring buffer for log messages
type LogStore struct {
mu sync.RWMutex
messages []Message
head int // next write position
full bool // whether buffer has wrapped
capacity int
}
// NewLogStore creates a new log store with the given capacity
func NewLogStore(capacity int) *LogStore {
return &LogStore{
messages: make([]Message, capacity),
capacity: capacity,
}
}
// Add inserts a new message into the store
func (s *LogStore) Add(msg Message) {
s.mu.Lock()
defer s.mu.Unlock()
s.messages[s.head] = msg
s.head++
if s.head >= s.capacity {
s.head = 0
s.full = true
}
}
// Count returns the number of messages in the store
func (s *LogStore) Count() int {
s.mu.RLock()
defer s.mu.RUnlock()
if s.full {
return s.capacity
}
return s.head
}
// GetRecent returns the most recent n messages, newest first
func (s *LogStore) GetRecent(n int) []Message {
s.mu.RLock()
defer s.mu.RUnlock()
// Compute the count inline: calling Count() here would re-acquire the
// read lock, which can deadlock if a writer is waiting in between.
count := s.head
if s.full {
count = s.capacity
}
if n > count {
n = count
}
if n == 0 {
return nil
}
result := make([]Message, n)
pos := s.head - 1
for i := 0; i < n; i++ {
if pos < 0 {
pos = s.capacity - 1
}
result[i] = s.messages[pos]
pos--
}
return result
}
// Filter parameters for retrieving logs
type FilterParams struct {
Limit int // max messages to return (0 = default 100)
Before int64 // only messages before this timestamp
After int64 // only messages after this timestamp
}
// GetFiltered returns messages matching the filter criteria
func (s *LogStore) GetFiltered(params FilterParams) []Message {
s.mu.RLock()
defer s.mu.RUnlock()
limit := params.Limit
if limit <= 0 {
limit = 100
}
// Inline count for the same reason as GetRecent: avoid re-acquiring
// the read lock via Count() while already holding it.
count := s.head
if s.full {
count = s.capacity
}
if count == 0 {
return nil
}
result := make([]Message, 0, limit)
pos := s.head - 1
for i := 0; i < count && len(result) < limit; i++ {
if pos < 0 {
pos = s.capacity - 1
}
msg := s.messages[pos]
// Apply filters
if params.Before > 0 && msg.Timestamp >= params.Before {
pos--
continue
}
if params.After > 0 && msg.Timestamp <= params.After {
pos--
continue
}
result = append(result, msg)
pos--
}
return result
}
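The ring-buffer indexing `LogStore` implements can be modeled in a few lines of TypeScript (a sketch of the `head`/`full` wraparound and newest-first read, not the Go code itself):

```typescript
// Minimal ring-buffer model: `head` is the next write position; once the
// buffer wraps, `full` stays true, the oldest entry is overwritten, and
// count() is pinned at capacity. recent() walks backward from head,
// wrapping at index 0, like LogStore.GetRecent.
class Ring<T> {
  private buf: T[];
  private head = 0;
  private full = false;
  constructor(private capacity: number) {
    this.buf = new Array<T>(capacity);
  }
  add(v: T): void {
    this.buf[this.head] = v;
    this.head++;
    if (this.head >= this.capacity) {
      this.head = 0;
      this.full = true;
    }
  }
  count(): number {
    return this.full ? this.capacity : this.head;
  }
  // Newest first.
  recent(n: number): T[] {
    const take = Math.min(n, this.count());
    const out: T[] = [];
    let pos = this.head - 1;
    for (let i = 0; i < take; i++) {
      if (pos < 0) pos = this.capacity - 1;
      out.push(this.buf[pos]);
      pos--;
    }
    return out;
  }
}

const r = new Ring<number>(3);
[1, 2, 3, 4, 5].forEach((v) => r.add(v)); // 1 and 2 are overwritten
console.log(r.recent(3)); // [ 5, 4, 3 ]
```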

master/.gitignore vendored Normal file

@@ -0,0 +1 @@
master-bin

master/devrunner.go Normal file

@@ -0,0 +1,84 @@
// a vibe coded el cheapo: https://claude.ai/chat/328ca558-1019-49b9-9f08-e85cfcea2ceb
package main
import (
"context"
"fmt"
"io"
"os"
"os/exec"
"sync"
"time"
)
func runProcess(ctx context.Context, wg *sync.WaitGroup, name, command string) {
defer wg.Done()
for {
select {
case <-ctx.Done():
fmt.Printf("[%s] Stopping\n", name)
return
default:
fmt.Printf("[%s] Starting: %s\n", name, command)
// Create command with context for cancellation
cmd := exec.CommandContext(ctx, "sh", "-c", command)
// Setup stdout pipe
stdout, err := cmd.StdoutPipe()
if err != nil {
fmt.Fprintf(os.Stderr, "[%s] Error creating stdout pipe: %v\n", name, err)
return
}
// Setup stderr pipe
stderr, err := cmd.StderrPipe()
if err != nil {
fmt.Fprintf(os.Stderr, "[%s] Error creating stderr pipe: %v\n", name, err)
return
}
// Start the command
if err := cmd.Start(); err != nil {
fmt.Fprintf(os.Stderr, "[%s] Error starting command: %v\n", name, err)
time.Sleep(time.Second)
continue
}
// Copy output in separate goroutines
var ioWg sync.WaitGroup
ioWg.Add(2)
go func() {
defer ioWg.Done()
io.Copy(os.Stdout, stdout)
}()
go func() {
defer ioWg.Done()
io.Copy(os.Stderr, stderr)
}()
// Wait for command to finish
err = cmd.Wait()
ioWg.Wait() // Ensure all output is copied
// Check if we should restart
select {
case <-ctx.Done():
fmt.Printf("[%s] Stopped\n", name)
return
default:
if err != nil {
fmt.Fprintf(os.Stderr, "[%s] Process exited with error: %v\n", name, err)
} else {
fmt.Printf("[%s] Process exited normally\n", name)
}
fmt.Printf("[%s] Restarting in 1 second...\n", name)
time.Sleep(time.Second)
}
}
}
}

master/filechange.go Normal file

@@ -0,0 +1,6 @@
package main
type FileChange struct {
Path string
Operation string
}

master/go.mod Normal file

@@ -0,0 +1,8 @@
module philologue.net/diachron/master-bin
go 1.23.3
require (
github.com/fsnotify/fsnotify v1.9.0 // indirect
golang.org/x/sys v0.13.0 // indirect
)

master/go.sum Normal file

@@ -0,0 +1,4 @@
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=

master/main.go Normal file

@@ -0,0 +1,47 @@
package main
import (
"flag"
"fmt"
"os"
"os/signal"
"syscall"
)
func main() {
watchDir := flag.String("watch", "../express", "directory to watch for changes")
workers := flag.Int("workers", 1, "number of worker processes")
basePort := flag.Int("base-port", 3000, "base port for worker processes")
listenPort := flag.Int("port", 8080, "port for the reverse proxy to listen on")
loggerPort := flag.Int("logger-port", 8085, "port for the logger service")
loggerCapacity := flag.Int("logger-capacity", 1000000, "max messages for logger to store")
flag.Parse()
// Setup signal handling
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
// Start and manage the logger process
stopLogger := startLogger(*loggerPort, *loggerCapacity)
defer stopLogger()
// Create worker pool
pool := NewWorkerPool()
fileChanges := make(chan FileChange, 10)
go watchFiles(*watchDir, fileChanges)
go runExpress(fileChanges, *workers, *basePort, pool)
// Start the reverse proxy
listenAddr := fmt.Sprintf(":%d", *listenPort)
go startProxy(listenAddr, pool)
// Wait for interrupt signal
<-sigCh
fmt.Println("\nReceived interrupt signal, shutting down...")
fmt.Println("All processes terminated cleanly")
}

master/master Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$DIR"
export diachron_root="$DIR/.."
exec ./master-bin "$@"

master/proxy.go Normal file

@@ -0,0 +1,48 @@
package main
import (
"log"
"net/http"
"net/http/httputil"
"net/url"
)
// startProxy starts an HTTP reverse proxy that forwards requests to workers.
// It acquires a worker from the pool for each request and releases it when done.
func startProxy(listenAddr string, pool *WorkerPool) {
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Acquire a worker (blocks if none available)
workerAddr, ok := pool.Acquire()
if !ok {
http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
return
}
// Ensure we release the worker when done
defer pool.Release(workerAddr)
// Create reverse proxy to the worker
targetURL, err := url.Parse("http://" + workerAddr)
if err != nil {
log.Printf("[proxy] Failed to parse worker URL %s: %v", workerAddr, err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
proxy := httputil.NewSingleHostReverseProxy(targetURL)
// Custom error handler
proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
log.Printf("[proxy] Error proxying to %s: %v", workerAddr, err)
http.Error(w, "Bad gateway", http.StatusBadGateway)
}
log.Printf("[proxy] %s %s -> %s", r.Method, r.URL.Path, workerAddr)
proxy.ServeHTTP(w, r)
})
log.Printf("[proxy] Listening on %s", listenAddr)
if err := http.ListenAndServe(listenAddr, handler); err != nil {
log.Fatalf("[proxy] Failed to start: %v", err)
}
}
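`startProxy` leans entirely on the pool's `Acquire`/`Release` contract; the `WorkerPool` itself is not part of this diff. One plausible shape is a FIFO queue of idle addresses plus a wait list for requests that arrive while all workers are busy (hypothetical sketch in TypeScript; the real implementation is Go and may differ):

```typescript
// Hypothetical acquire/release pool. acquire() resolves immediately when
// a worker is idle; otherwise the caller waits until release() hands a
// freed worker straight to the oldest waiter.
class WorkerPool {
  private idle: string[] = [];
  private waiters: ((addr: string) => void)[] = [];

  setWorkers(addrs: string[]): void {
    this.idle = [...addrs];
  }

  acquire(): Promise<string> {
    const addr = this.idle.shift();
    if (addr !== undefined) return Promise.resolve(addr);
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(addr: string): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(addr); // bypass the idle queue for fairness
    else this.idle.push(addr);
  }
}

const pool = new WorkerPool();
pool.setWorkers(["127.0.0.1:3000", "127.0.0.1:3001"]);
pool.acquire().then((addr) => {
  console.log("serving via", addr);
  pool.release(addr);
});
```

Handing a released worker directly to a waiter (rather than requeueing it) is what makes the proxy's per-request `defer pool.Release(...)` pattern fair under load.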

master/runexpress.go Normal file

@@ -0,0 +1,160 @@
package main
import (
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
	"sync"
	"syscall"
	"time"
)

func runExpress(changes <-chan FileChange, numProcesses int, basePort int, pool *WorkerPool) {
	var currentProcesses []*exec.Cmd
	var mu sync.Mutex

	// Each command's Wait is owned by its monitor goroutine. waitDone maps a
	// command to a channel closed when that Wait returns, so stopExpress can
	// observe exit without calling Wait a second time (which is unsafe).
	waitDone := make(map[*exec.Cmd]chan struct{})
	var waitMu sync.Mutex

	// Helper to start an express process on a specific port
	startExpress := func(port int) *exec.Cmd {
		listenAddr := fmt.Sprintf("127.0.0.1:%d", port)
		cmd := exec.Command("../express/run.sh", "--listen", listenAddr)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			log.Printf("[express:%d] Failed to start: %v", port, err)
			return nil
		}
		log.Printf("[express:%d] Started (pid %d)", port, cmd.Process.Pid)
		done := make(chan struct{})
		waitMu.Lock()
		waitDone[cmd] = done
		waitMu.Unlock()
		// Monitor the process in background
		go func(p int) {
			err := cmd.Wait()
			close(done)
			if err != nil {
				log.Printf("[express:%d] Process exited: %v", p, err)
			} else {
				log.Printf("[express:%d] Process exited normally", p)
			}
		}(port)
		return cmd
	}

	// Helper to stop an express process
	stopExpress := func(cmd *exec.Cmd) {
		if cmd == nil || cmd.Process == nil {
			return
		}
		waitMu.Lock()
		done := waitDone[cmd]
		delete(waitDone, cmd)
		waitMu.Unlock()
		pid := cmd.Process.Pid
		log.Printf("[express] Stopping (pid %d)", pid)
		cmd.Process.Signal(syscall.SIGTERM)
		// Wait briefly for graceful shutdown
		select {
		case <-done:
			log.Printf("[express] Stopped gracefully (pid %d)", pid)
		case <-time.After(5 * time.Second):
			log.Printf("[express] Force killing (pid %d)", pid)
			cmd.Process.Kill()
		}
	}

	// Helper to stop all express processes
	stopAllExpress := func(processes []*exec.Cmd) {
		for _, cmd := range processes {
			stopExpress(cmd)
		}
	}

	// Helper to start all express processes and update the worker pool
	startAllExpress := func() []*exec.Cmd {
		processes := make([]*exec.Cmd, 0, numProcesses)
		addresses := make([]string, 0, numProcesses)
		for i := 0; i < numProcesses; i++ {
			port := basePort + i
			addr := fmt.Sprintf("127.0.0.1:%d", port)
			cmd := startExpress(port)
			if cmd != nil {
				processes = append(processes, cmd)
				addresses = append(addresses, addr)
			}
		}
		// Update the worker pool with new worker addresses
		pool.SetWorkers(addresses)
		return processes
	}

	// Helper to run the build
	runBuild := func() bool {
		log.Printf("[build] Starting ncc build...")
		cmd := exec.Command("../express/build.sh")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Printf("[build] Failed to open stdout pipe: %v", err)
			return false
		}
		stderr, err := cmd.StderrPipe()
		if err != nil {
			log.Printf("[build] Failed to open stderr pipe: %v", err)
			return false
		}
		if err := cmd.Start(); err != nil {
			log.Printf("[build] Failed to start: %v", err)
			return false
		}
		// Copy output
		go io.Copy(os.Stdout, stdout)
		go io.Copy(os.Stderr, stderr)
		if err := cmd.Wait(); err != nil {
			log.Printf("[build] Failed: %v", err)
			return false
		}
		log.Printf("[build] Success")
		return true
	}

	// Debounce timer
	var debounceTimer *time.Timer
	const debounceDelay = 100 * time.Millisecond

	// Initial build and start
	log.Printf("[master] Initial build...")
	if runBuild() {
		currentProcesses = startAllExpress()
	} else {
		log.Printf("[master] Initial build failed")
	}

	for change := range changes {
		log.Printf("[watch] %s: %s", change.Operation, change.Path)
		// Reset debounce timer
		if debounceTimer != nil {
			debounceTimer.Stop()
		}
		debounceTimer = time.AfterFunc(debounceDelay, func() {
			if !runBuild() {
				log.Printf("[master] Build failed, keeping current processes")
				return
			}
			mu.Lock()
			defer mu.Unlock()
			// Stop all old processes
			stopAllExpress(currentProcesses)
			// Start all new processes
			currentProcesses = startAllExpress()
		})
	}
}

master/runlogger.go Normal file

@@ -0,0 +1,106 @@
package main

import (
	"log"
	"os"
	"os/exec"
	"strconv"
	"sync"
	"syscall"
	"time"
)

// startLogger starts the logger process and returns a function to stop it.
// It automatically restarts the logger if it crashes.
func startLogger(port int, capacity int) func() {
	var mu sync.Mutex
	var cmd *exec.Cmd
	var stopping bool

	// The monitor goroutine owns cmd.Wait and reports each exit here, so the
	// stop function never calls Wait on a command it does not own.
	exited := make(chan struct{}, 1)

	portStr := strconv.Itoa(port)
	capacityStr := strconv.Itoa(capacity)

	start := func() *exec.Cmd {
		c := exec.Command("../logger/logger", "--port", portStr, "--capacity", capacityStr)
		c.Stdout = os.Stdout
		c.Stderr = os.Stderr
		if err := c.Start(); err != nil {
			log.Printf("[logger] Failed to start: %v", err)
			return nil
		}
		log.Printf("[logger] Started (pid %d) on port %s", c.Process.Pid, portStr)
		return c
	}

	// Start initial logger
	cmd = start()

	// Monitor and restart on crash
	go func() {
		for {
			mu.Lock()
			currentCmd := cmd
			mu.Unlock()
			if currentCmd == nil {
				time.Sleep(time.Second)
				mu.Lock()
				if !stopping {
					cmd = start()
				}
				mu.Unlock()
				continue
			}
			err := currentCmd.Wait()
			select {
			case exited <- struct{}{}:
			default:
			}
			mu.Lock()
			if stopping {
				mu.Unlock()
				return
			}
			if err != nil {
				log.Printf("[logger] Process exited: %v, restarting...", err)
			} else {
				log.Printf("[logger] Process exited normally, restarting...")
			}
			time.Sleep(time.Second)
			cmd = start()
			mu.Unlock()
		}
	}()

	// Return stop function. It takes the lock only briefly so the monitor
	// goroutine can observe stopping without deadlocking.
	return func() {
		mu.Lock()
		stopping = true
		c := cmd
		mu.Unlock()
		if c == nil || c.Process == nil {
			return
		}
		// Discard any stale exit notification from an earlier crash.
		select {
		case <-exited:
		default:
		}
		log.Printf("[logger] Stopping (pid %d)", c.Process.Pid)
		c.Process.Signal(syscall.SIGTERM)
		// Wait briefly for graceful shutdown
		select {
		case <-exited:
			log.Printf("[logger] Stopped gracefully")
		case <-time.After(5 * time.Second):
			log.Printf("[logger] Force killing")
			c.Process.Kill()
		}
	}
}

master/watchfiles.go Normal file

@@ -0,0 +1,102 @@
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"

	"github.com/fsnotify/fsnotify"
)

// shouldIgnore returns true for paths that should not trigger rebuilds
func shouldIgnore(path string) bool {
	// Ignore build output and dependencies
	ignoreDirs := []string{"/dist/", "/node_modules/", "/.git/"}
	for _, dir := range ignoreDirs {
		if strings.Contains(path, dir) {
			return true
		}
	}
	// Also ignore if path ends with these directories
	for _, dir := range []string{"/dist", "/node_modules", "/.git"} {
		if strings.HasSuffix(path, dir) {
			return true
		}
	}
	return false
}

func watchFiles(dir string, changes chan<- FileChange) {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Add all directories recursively (except ignored ones)
	err = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			if shouldIgnore(path) {
				return filepath.SkipDir
			}
			err = watcher.Add(path)
			if err != nil {
				log.Printf("Error watching %s: %v\n", path, err)
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Skip ignored paths
			if shouldIgnore(event.Name) {
				continue
			}
			// Handle different types of events
			var operation string
			switch {
			case event.Op&fsnotify.Write == fsnotify.Write:
				operation = "MODIFIED"
			case event.Op&fsnotify.Create == fsnotify.Create:
				operation = "CREATED"
				// If a new directory is created, start watching it
				if info, err := os.Stat(event.Name); err == nil && info.IsDir() {
					watcher.Add(event.Name)
				}
			case event.Op&fsnotify.Remove == fsnotify.Remove:
				operation = "REMOVED"
			case event.Op&fsnotify.Rename == fsnotify.Rename:
				operation = "RENAMED"
			case event.Op&fsnotify.Chmod == fsnotify.Chmod:
				operation = "CHMOD"
			default:
				operation = "UNKNOWN"
			}
			changes <- FileChange{
				Path:      event.Name,
				Operation: operation,
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("Watcher error: %v\n", err)
		}
	}
}

master/workerpool.go Normal file

@@ -0,0 +1,75 @@
package main

import (
	"log"
	"sync"
)

// WorkerPool manages a pool of worker addresses and tracks their availability.
// Each worker can only handle one request at a time.
type WorkerPool struct {
	mu        sync.Mutex
	workers   []string
	available chan string
}

// NewWorkerPool creates a new empty worker pool.
func NewWorkerPool() *WorkerPool {
	return &WorkerPool{
		available: make(chan string, 100), // buffered to avoid blocking
	}
}

// SetWorkers updates the pool with a new set of worker addresses.
// Called when workers are started or restarted after a rebuild.
func (p *WorkerPool) SetWorkers(addrs []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	// Close and drain the old channel; Acquire calls blocked on it see ok=false.
	close(p.available)
	for range p.available {
		// drain
	}
	// Create new channel and populate with new workers
	p.available = make(chan string, len(addrs)+10)
	p.workers = make([]string, len(addrs))
	copy(p.workers, addrs)
	for _, addr := range addrs {
		p.available <- addr
	}
	log.Printf("[pool] Updated workers: %v", addrs)
}

// Acquire blocks until a worker is available and returns its address.
// The second return value is false if the worker set was replaced while
// waiting; callers should retry in that case.
func (p *WorkerPool) Acquire() (string, bool) {
	// Snapshot the channel under the lock; SetWorkers may swap the field.
	p.mu.Lock()
	ch := p.available
	p.mu.Unlock()
	addr, ok := <-ch
	if ok {
		log.Printf("[pool] Acquired worker %s", addr)
	}
	return addr, ok
}

// Release marks a worker as available again after it finishes handling a request.
func (p *WorkerPool) Release(addr string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	// Only release if the worker is still in our current set
	for _, w := range p.workers {
		if w == addr {
			select {
			case p.available <- addr:
				log.Printf("[pool] Released worker %s", addr)
			default:
				// Channel full; drop rather than block
			}
			return
		}
	}
	// Worker not in current set (probably from before a rebuild), ignore
}

mgmt Executable file

@@ -0,0 +1,27 @@
#!/bin/bash
# This file belongs to the framework. You are not expected to modify it.
# Management command runner - parallel to ./cmd for operational tasks
# Usage: ./mgmt <command> [args...]
set -eu

DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if [ $# -lt 1 ]; then
	echo "Usage: ./mgmt <command> [args...]"
	echo ""
	echo "Available commands:"
	for cmd in "$DIR"/framework/mgmt.d/*; do
		if [ -x "$cmd" ]; then
			basename "$cmd"
		fi
	done
	exit 1
fi

subcmd="$1"
shift
exec "$DIR"/framework/mgmt.d/"$subcmd" "$@"

sync.sh Executable file

@@ -0,0 +1,66 @@
#!/bin/bash
# Note: This is kind of AI slop and needs to be more carefully reviewed.
set -eu

DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# shellcheck source=framework/versions
source "$DIR/framework/versions"

# Ensure correct node version is installed
node_installed_checksum_file="$DIR/framework/binaries/.node.checksum"
node_installed_checksum=""
if [ -f "$node_installed_checksum_file" ]; then
	node_installed_checksum=$(cat "$node_installed_checksum_file")
fi
if [ "$node_installed_checksum" != "$nodejs_checksum_linux_x86_64" ]; then
	echo "Downloading Node.js..."
	mkdir -p "$DIR/framework/downloads"
	node_archive="$DIR/framework/downloads/node.tar.xz"
	curl -fsSL "$nodejs_binary_linux_x86_64" -o "$node_archive"
	echo "Verifying checksum..."
	# sha256sum -c expects two spaces between checksum and filename
	echo "$nodejs_checksum_linux_x86_64  $node_archive" | sha256sum -c -
	echo "Extracting Node.js..."
	tar -xf "$node_archive" -C "$DIR/framework/binaries"
	rm "$node_archive"
	echo "$nodejs_checksum_linux_x86_64" >"$node_installed_checksum_file"
fi

# Ensure correct pnpm version is installed
pnpm_binary="$DIR/framework/binaries/pnpm"
pnpm_installed_checksum_file="$DIR/framework/binaries/.pnpm.checksum"
pnpm_installed_checksum=""
if [ -f "$pnpm_installed_checksum_file" ]; then
	pnpm_installed_checksum=$(cat "$pnpm_installed_checksum_file")
fi
# pnpm checksum includes "sha256:" prefix, strip it for sha256sum
pnpm_checksum="${pnpm_checksum_linux_x86_64#sha256:}"
if [ "$pnpm_installed_checksum" != "$pnpm_checksum" ]; then
	echo "Downloading pnpm..."
	curl -fsSL "$pnpm_binary_linux_x86_64" -o "$pnpm_binary"
	echo "Verifying checksum..."
	echo "$pnpm_checksum  $pnpm_binary" | sha256sum -c -
	chmod +x "$pnpm_binary"
	echo "$pnpm_checksum" >"$pnpm_installed_checksum_file"
fi

# Get golang binaries in place
cd "$DIR/master"
go build
cd "$DIR/logger"
go build

# Update framework code
cd "$DIR/express"
../cmd pnpm install

Some files were not shown because too many files have changed in this diff.