16 Commits

Author SHA1 Message Date
7b8eaac637 Add TODO.md and instructions 2026-01-01 15:45:43 -06:00
f504576f3e Add first cut at a pool 2026-01-01 15:43:49 -06:00
8722062f4a Change process names again 2026-01-01 15:12:01 -06:00
9cc1991d07 Name backend process 2026-01-01 14:54:17 -06:00
5d5a2430ad Fix arg in build script 2026-01-01 14:37:11 -06:00
a840137f83 Mark build.sh as executable 2026-01-01 14:34:31 -06:00
c330da49fc Add rudimentary command line parsing to express app 2026-01-01 14:34:16 -06:00
db81129724 Add build.sh 2026-01-01 14:17:09 -06:00
43ff2edad2 Pull in nunjucks 2026-01-01 14:14:03 -06:00
Michael Wolf ad95f652b8 Fix bogus path expansion 2026-01-01 14:10:57 -06:00
Michael Wolf 51d24209b0 Use build.sh script 2026-01-01 14:09:51 -06:00
Michael Wolf 1083655a3b Add and use a simpler run script 2026-01-01 14:08:46 -06:00
Michael Wolf 615cd89656 Ignore more node_modules directories 2026-01-01 13:24:50 -06:00
Michael Wolf 321b2abd23 Sort of run node app 2026-01-01 13:24:36 -06:00
Michael Wolf 642c7d9434 Update CLAUDE.md 2026-01-01 13:06:21 -06:00
Michael Wolf 8e5b46d426 Add first cut at a CLAUDE.md file 2026-01-01 12:31:35 -06:00
18 changed files with 586 additions and 65 deletions

.claude/instructions.md Normal file (2 changed lines)

@@ -0,0 +1,2 @@
When asked "what's next?" or during downtime, check TODO.md and suggest items to work on.

.gitignore vendored (3 changed lines)

@@ -1,6 +1,5 @@
-framework/node/node_modules
+**/node_modules
 framework/downloads
 framework/binaries
 framework/.nodejs
 framework/.nodejs-config
-node_modules

CLAUDE.md Normal file (118 changed lines)

@@ -0,0 +1,118 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with
code in this repository.
## Project Overview
Diachron is an opinionated TypeScript/Node.js web framework with a Go-based
master process. Key design principles:
- No development/production distinction - single mode of operation everywhere
- Everything loggable and inspectable for debuggability
- Minimal magic, explicit behavior
- PostgreSQL-only (no database abstraction)
- Inspired by "Taking PHP Seriously" essay
## Commands
### General
**Install dependencies:**
```bash
./sync.sh
```
**Run an app:**
```bash
./master
```
### Development
**Check shell scripts with shellcheck and shfmt (eventually also go fmt and prettier or similar):**
```bash
./check.sh
```
**Format TypeScript code:**
```bash
cd express && ../cmd pnpm prettier --write .
```
**Build Go master process:**
```bash
cd master && go build
```
### Operational
(to be written)
## Architecture
### Components
- **express/** - TypeScript/Express.js backend application
- **master/** - Go-based master process for file watching and process management
- **framework/** - Managed binaries (Node.js, pnpm), command wrappers, and
framework-specific library code
- **monitor/** - Go file watcher that triggers rebuilds (experimental)
### Master Process (Go)
Responsibilities:
- Watch TypeScript source for changes and trigger rebuilds
- Manage worker processes
- Proxy web requests to backend workers
- Behaves identically in all environments (no dev/prod distinction)
### Express App Structure
- `app.ts` - Main Express application setup with route matching
- `routes.ts` - Route definitions
- `handlers.ts` - Route handlers
- `services.ts` - Service layer (database, logging, misc)
- `types.ts` - TypeScript type definitions (Route, Call, Handler, Result, Method)
### Framework Command System
Commands flow through: `./cmd` → `framework/cmd.d/*` → `framework/shims/*` → managed binaries in `framework/binaries/`
This ensures consistent tooling versions across the team without system-wide installations.
## Tech Stack
- TypeScript 5.9+ / Node.js 22.15
- Express.js 5.1
- Go 1.23.3+ (master process)
- pnpm 10.12.4 (package manager)
- Zod (runtime validation)
- Nunjucks (templating)
- @vercel/ncc (bundling)
## Platform Requirements
Linux x86_64 only (currently). Requires:
- Modern libc for Go binaries
- docker compose (for full stack)
- fd, shellcheck, shfmt (for development)
## Current Status
Early stage - most implementations are stubs:
- Database service is placeholder
- Logging functions marked WRITEME
- No test framework configured yet
# meta
## guidelines for this document
- Try to keep lines below 80 characters in length, especially prose. But if
embedded code or literals are longer, that's fine.
- Use formatting such as bold or italics sparingly
- In general, we treat this document like source code insofar as it should be
both human-readable and machine-readable
- Keep this meta section at the end of the file.

TODO.md Normal file (21 changed lines)

@@ -0,0 +1,21 @@
- [ ] Update check script:
- [ ] Run `go fmt` on all .go files
- [ ] Run prettier on all .ts files
- [ ] Eventually, run unit tests
- [ ] Adapt master program so that it reads configuration from command line
args instead of from environment variables
- Should have sane defaults
- Adding new arguments should be easy and obvious
- [ ] Add wrapper script to run main program (so that various assumptions related
to relative paths are safer)
- [ ] Add unit tests all over the place.
- ⚠️ Huge task - needs breakdown before starting
- [ ] flesh out the `sync.sh` script
- [ ] update framework-managed node
- [ ] update framework-managed pnpm
- [ ] update pnpm-managed deps
- [ ] rebuild golang programs

express/app.ts

@@ -3,6 +3,7 @@ import express, {
   Response as ExpressResponse,
 } from "express";
 import { match } from "path-to-regexp";
+import { cli } from "./cli";
 import { contentTypes } from "./content-types";
 import { httpCodes } from "./http-codes";
 import { routes } from "./routes";
@@ -19,6 +20,9 @@ import {
   methodParser,
 } from "./types";
 const app = express();
 services.logging.log({ source: "logging", text: ["1"] });
@@ -117,4 +121,8 @@ app.use(async (req: ExpressRequest, res: ExpressResponse) => {
   res.status(code).send(result);
 });
-app.listen(3000);
+process.title = `diachron:${cli.listen.port}`;
+app.listen(cli.listen.port, cli.listen.host, () => {
+  console.log(`Listening on ${cli.listen.host}:${cli.listen.port}`);
+});

express/build.sh Executable file (9 changed lines)

@@ -0,0 +1,9 @@
#!/bin/bash
set -eu
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "$DIR"
../cmd pnpm ncc build ./app.ts -o dist

express/cli.ts Normal file (49 changed lines)

@@ -0,0 +1,49 @@
import { parseArgs } from "node:util";
const { values } = parseArgs({
  options: {
    listen: {
      type: "string",
      short: "l",
    },
  },
  strict: true,
  allowPositionals: false,
});
function parseListenAddress(listen: string | undefined): {
  host: string;
  port: number;
} {
  const defaultHost = "127.0.0.1";
  const defaultPort = 3000;
  if (!listen) {
    return { host: defaultHost, port: defaultPort };
  }
  const lastColon = listen.lastIndexOf(":");
  if (lastColon === -1) {
    // Just a port number
    const port = parseInt(listen, 10);
    if (isNaN(port)) {
      throw new Error(`Invalid listen address: ${listen}`);
    }
    return { host: defaultHost, port };
  }
  const host = listen.slice(0, lastColon);
  const port = parseInt(listen.slice(lastColon + 1), 10);
  if (isNaN(port)) {
    throw new Error(`Invalid port in listen address: ${listen}`);
  }
  return { host, port };
}
const listenAddress = parseListenAddress(values.listen);
export const cli = {
  listen: listenAddress,
};

express/package.json

@@ -18,6 +18,7 @@
     "@vercel/ncc": "^0.38.4",
     "express": "^5.1.0",
     "nodemon": "^3.1.11",
+    "nunjucks": "^3.2.4",
     "path-to-regexp": "^8.3.0",
     "prettier": "^3.6.2",
     "ts-node": "^10.9.2",

express/pnpm-lock.yaml generated (37 changed lines)

@@ -23,6 +23,9 @@ importers:
       nodemon:
         specifier: ^3.1.11
         version: 3.1.11
+      nunjucks:
+        specifier: ^3.2.4
+        version: 3.2.4(chokidar@3.6.0)
       path-to-regexp:
         specifier: ^8.3.0
         version: 8.3.0
@@ -331,6 +334,9 @@ packages:
     resolution: {integrity: sha512-8LwjnlP39s08C08J5NstzriPvW1SP8Zfpp1BvC2sI35kPeZnHfxVkCwu4/+Wodgnd60UtT1n8K8zw+Mp7J9JmQ==}
     hasBin: true
+  a-sync-waterfall@1.0.1:
+    resolution: {integrity: sha512-RYTOHHdWipFUliRFMCS4X2Yn2X8M87V/OpSqWzKKOGhzqyUxzyVmhHDH9sAvG+ZuQf/TAOFsLCpMw09I1ufUnA==}
   accepts@2.0.0:
     resolution: {integrity: sha512-5cvg6CtKwfgdmVqY1WIiXKc3Q1bkRqGLi+2W/6ao+6Y7gu/RCwRuAhGEzh5B4KlszSuTLgZYuqFqo5bImjNKng==}
     engines: {node: '>= 0.6'}
@@ -351,6 +357,9 @@ packages:
   arg@4.1.3:
     resolution: {integrity: sha512-58S9QDqG0Xx27YwPSt9fJxivjYl432YCwfDMfZ+71RAqUrZef7LrKQZ3LHLOwCS4FLNBplP533Zx895SeOCHvA==}
+  asap@2.0.6:
+    resolution: {integrity: sha512-BSHWgDSAiKs50o2Re8ppvp3seVHXSRM44cdSsT9FfNEUUZLOGWVCsiWaRPWM1Znn+mqZ1OfVZ3z3DWEzSp7hRA==}
   balanced-match@1.0.2:
     resolution: {integrity: sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==}
@@ -385,6 +394,10 @@ packages:
     resolution: {integrity: sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw==}
     engines: {node: '>= 8.10.0'}
+  commander@5.1.0:
+    resolution: {integrity: sha512-P0CysNDQ7rtVw4QIQtm+MRxV66vKFSvlsQvGYXZWR3qFU0jlMKHZZZgw8e+8DSah4UDKMqnknRDQz+xuQXQ/Zg==}
+    engines: {node: '>= 6'}
   concat-map@0.0.1:
     resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==}
@@ -609,6 +622,16 @@ packages:
     resolution: {integrity: sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==}
     engines: {node: '>=0.10.0'}
+  nunjucks@3.2.4:
+    resolution: {integrity: sha512-26XRV6BhkgK0VOxfbU5cQI+ICFUtMLixv1noZn1tGU38kQH5A5nmmbk/O45xdyBhD1esk47nKrY0mvQpZIhRjQ==}
+    engines: {node: '>= 6.9.0'}
+    hasBin: true
+    peerDependencies:
+      chokidar: ^3.3.0
+    peerDependenciesMeta:
+      chokidar:
+        optional: true
   object-inspect@1.13.4:
     resolution: {integrity: sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==}
     engines: {node: '>= 0.4'}
@@ -1010,6 +1033,8 @@ snapshots:
   '@vercel/ncc@0.38.4': {}
+  a-sync-waterfall@1.0.1: {}
   accepts@2.0.0:
     dependencies:
       mime-types: 3.0.1
@@ -1028,6 +1053,8 @@ snapshots:
   arg@4.1.3: {}
+  asap@2.0.6: {}
   balanced-match@1.0.2: {}
   binary-extensions@2.3.0: {}
@@ -1079,6 +1106,8 @@ snapshots:
     optionalDependencies:
       fsevents: 2.3.3
+  commander@5.1.0: {}
   concat-map@0.0.1: {}
   content-disposition@1.0.0:
@@ -1323,6 +1352,14 @@ snapshots:
   normalize-path@3.0.0: {}
+  nunjucks@3.2.4(chokidar@3.6.0):
+    dependencies:
+      a-sync-waterfall: 1.0.1
+      asap: 2.0.6
+      commander: 5.1.0
+    optionalDependencies:
+      chokidar: 3.6.0
   object-inspect@1.13.4: {}
   on-finished@2.4.1:

express/run.sh

@@ -1,32 +1,9 @@
 #!/bin/bash
-# XXX should we default to strict or non-strict here?
 set -eu
 DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-run_dir="$DIR"
-source "$run_dir"/../framework/shims/common
-source "$run_dir"/../framework/shims/node.common
-strict_arg="${1:---no-strict}"
-if [[ "$strict_arg" = "--strict" ]] ; then
-  strict="yes"
-else
-  strict="no"
-fi
-cmd="tsx"
-if [[ "strict" = "yes" ]] ; then
-  cmd="ts-node"
-fi
-cd "$run_dir"
-"$run_dir"/check.sh
-#echo checked
-# $ROOT/cmd "$cmd" $run_dir/app.ts
-../cmd node "$run_dir"/out/app.js
+cd "$DIR"
+exec ../cmd node dist/index.js "$@"

framework/cmd.d/node

@@ -4,4 +4,4 @@ set -eu
 DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-"$DIR"/../shims/node "$@"
+exec "$DIR"/../shims/node "$@"

framework/shims/node

@@ -11,6 +11,4 @@ source "$node_shim_DIR"/../versions
 source "$node_shim_DIR"/node.common
-node_bin="$node_shim_DIR/../../$nodejs_binary_dir/node"
-exec "$node_bin" "$@"
+exec "$nodejs_binary_dir/node" "$@"

master/main.go

@@ -1,23 +1,33 @@
 package main
 import (
-	// "context"
 	"fmt"
 	"os"
 	"os/signal"
-	// "sync"
+	"strconv"
 	"syscall"
 )
 func main() {
-	// var program1 = os.Getenv("BUILD_COMMAND")
-	//var program2 = os.Getenv("RUN_COMMAND")
-	var watchedDir = os.Getenv("WATCHED_DIR")
+	watchedDir := os.Getenv("WATCHED_DIR")
+	numChildProcesses := 1
+	if n, err := strconv.Atoi(os.Getenv("NUM_CHILD_PROCESSES")); err == nil && n > 0 {
+		numChildProcesses = n
+	}
-	// Create context for graceful shutdown
-	// ctx, cancel := context.WithCancel(context.Background())
-	//defer cancel()
+	basePort := 3000
+	if p, err := strconv.Atoi(os.Getenv("BASE_PORT")); err == nil && p > 0 {
+		basePort = p
+	}
+	listenPort := 8080
+	if p, err := strconv.Atoi(os.Getenv("LISTEN_PORT")); err == nil && p > 0 {
+		listenPort = p
+	}
+	// Create worker pool
+	pool := NewWorkerPool()
 	// Setup signal handling
 	sigCh := make(chan os.Signal, 1)
@@ -27,24 +37,16 @@ func main() {
 	go watchFiles(watchedDir, fileChanges)
-	go printChanges(fileChanges)
+	go runExpress(fileChanges, numChildProcesses, basePort, pool)
-	// WaitGroup to track both processes
-	// var wg sync.WaitGroup
+	// Start the reverse proxy
+	listenAddr := fmt.Sprintf(":%d", listenPort)
+	go startProxy(listenAddr, pool)
-	// Start both processes
-	//wg.Add(2)
-	// go runProcess(ctx, &wg, "builder", program1)
-	// go runProcess(ctx, &wg, "runner", program2)
 	// Wait for interrupt signal
 	<-sigCh
 	fmt.Println("\nReceived interrupt signal, shutting down...")
-	// Cancel context to signal goroutines to stop
-	/// cancel()
-	// Wait for both processes to finish
-	// wg.Wait()
 	fmt.Println("All processes terminated cleanly")
 }

master/printchanges.go (deleted)

@@ -1,11 +0,0 @@
package main
import (
	"fmt"
)
func printChanges(changes <-chan FileChange) {
	for change := range changes {
		fmt.Printf("[%s] %s\n", change.Operation, change.Path)
	}
}

master/proxy.go Normal file (48 changed lines)

@@ -0,0 +1,48 @@
package main
import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)
// startProxy starts an HTTP reverse proxy that forwards requests to workers.
// It acquires a worker from the pool for each request and releases it when done.
func startProxy(listenAddr string, pool *WorkerPool) {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Acquire a worker (blocks if none available)
		workerAddr, ok := pool.Acquire()
		if !ok {
			http.Error(w, "Service unavailable", http.StatusServiceUnavailable)
			return
		}
		// Ensure we release the worker when done
		defer pool.Release(workerAddr)
		// Create reverse proxy to the worker
		targetURL, err := url.Parse("http://" + workerAddr)
		if err != nil {
			log.Printf("[proxy] Failed to parse worker URL %s: %v", workerAddr, err)
			http.Error(w, "Internal server error", http.StatusInternalServerError)
			return
		}
		proxy := httputil.NewSingleHostReverseProxy(targetURL)
		// Custom error handler
		proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
			log.Printf("[proxy] Error proxying to %s: %v", workerAddr, err)
			http.Error(w, "Bad gateway", http.StatusBadGateway)
		}
		log.Printf("[proxy] %s %s -> %s", r.Method, r.URL.Path, workerAddr)
		proxy.ServeHTTP(w, r)
	})
	log.Printf("[proxy] Listening on %s", listenAddr)
	if err := http.ListenAndServe(listenAddr, handler); err != nil {
		log.Fatalf("[proxy] Failed to start: %v", err)
	}
}

master/runexpress.go Normal file (160 changed lines)

@@ -0,0 +1,160 @@
package main
import (
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
	"sync"
	"syscall"
	"time"
)
func runExpress(changes <-chan FileChange, numProcesses int, basePort int, pool *WorkerPool) {
	var currentProcesses []*exec.Cmd
	var mu sync.Mutex
	// Helper to start an express process on a specific port
	startExpress := func(port int) *exec.Cmd {
		listenAddr := fmt.Sprintf("127.0.0.1:%d", port)
		cmd := exec.Command("../express/run.sh", "--listen", listenAddr)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			log.Printf("[express:%d] Failed to start: %v", port, err)
			return nil
		}
		log.Printf("[express:%d] Started (pid %d)", port, cmd.Process.Pid)
		// Monitor the process in background
		go func(p int) {
			err := cmd.Wait()
			if err != nil {
				log.Printf("[express:%d] Process exited: %v", p, err)
			} else {
				log.Printf("[express:%d] Process exited normally", p)
			}
		}(port)
		return cmd
	}
	// Helper to stop an express process
	stopExpress := func(cmd *exec.Cmd) {
		if cmd == nil || cmd.Process == nil {
			return
		}
		pid := cmd.Process.Pid
		log.Printf("[express] Stopping (pid %d)", pid)
		cmd.Process.Signal(syscall.SIGTERM)
		// Wait briefly for graceful shutdown
		done := make(chan struct{})
		go func() {
			cmd.Wait()
			close(done)
		}()
		select {
		case <-done:
			log.Printf("[express] Stopped gracefully (pid %d)", pid)
		case <-time.After(5 * time.Second):
			log.Printf("[express] Force killing (pid %d)", pid)
			cmd.Process.Kill()
		}
	}
	// Helper to stop all express processes
	stopAllExpress := func(processes []*exec.Cmd) {
		for _, cmd := range processes {
			stopExpress(cmd)
		}
	}
	// Helper to start all express processes and update the worker pool
	startAllExpress := func() []*exec.Cmd {
		processes := make([]*exec.Cmd, 0, numProcesses)
		addresses := make([]string, 0, numProcesses)
		for i := 0; i < numProcesses; i++ {
			port := basePort + i
			addr := fmt.Sprintf("127.0.0.1:%d", port)
			cmd := startExpress(port)
			if cmd != nil {
				processes = append(processes, cmd)
				addresses = append(addresses, addr)
			}
		}
		// Update the worker pool with new worker addresses
		pool.SetWorkers(addresses)
		return processes
	}
	// Helper to run the build
	runBuild := func() bool {
		log.Printf("[build] Starting ncc build...")
		cmd := exec.Command("../express/build.sh")
		stdout, _ := cmd.StdoutPipe()
		stderr, _ := cmd.StderrPipe()
		if err := cmd.Start(); err != nil {
			log.Printf("[build] Failed to start: %v", err)
			return false
		}
		// Copy output
		go io.Copy(os.Stdout, stdout)
		go io.Copy(os.Stderr, stderr)
		err := cmd.Wait()
		if err != nil {
			log.Printf("[build] Failed: %v", err)
			return false
		}
		log.Printf("[build] Success")
		return true
	}
	// Debounce timer
	var debounceTimer *time.Timer
	const debounceDelay = 100 * time.Millisecond
	// Initial build and start
	log.Printf("[master] Initial build...")
	if runBuild() {
		currentProcesses = startAllExpress()
	} else {
		log.Printf("[master] Initial build failed")
	}
	for change := range changes {
		log.Printf("[watch] %s: %s", change.Operation, change.Path)
		// Reset debounce timer
		if debounceTimer != nil {
			debounceTimer.Stop()
		}
		debounceTimer = time.AfterFunc(debounceDelay, func() {
			if !runBuild() {
				log.Printf("[master] Build failed, keeping current processes")
				return
			}
			mu.Lock()
			defer mu.Unlock()
			// Stop all old processes
			stopAllExpress(currentProcesses)
			// Start all new processes
			currentProcesses = startAllExpress()
		})
	}
}

master/watchfiles.go

@@ -1,12 +1,32 @@
 package main
 import (
-	"github.com/fsnotify/fsnotify"
 	"log"
 	"os"
 	"path/filepath"
+	"strings"
+	"github.com/fsnotify/fsnotify"
 )
+// shouldIgnore returns true for paths that should not trigger rebuilds
+func shouldIgnore(path string) bool {
+	// Ignore build output and dependencies
+	ignoreDirs := []string{"/dist/", "/node_modules/", "/.git/"}
+	for _, dir := range ignoreDirs {
+		if strings.Contains(path, dir) {
+			return true
+		}
+	}
+	// Also ignore if path ends with these directories
+	for _, dir := range []string{"/dist", "/node_modules", "/.git"} {
+		if strings.HasSuffix(path, dir) {
+			return true
+		}
+	}
+	return false
+}
 func watchFiles(dir string, changes chan<- FileChange) {
 	watcher, err := fsnotify.NewWatcher()
 	if err != nil {
@@ -14,12 +34,15 @@ func watchFiles(dir string, changes chan<- FileChange) {
 	}
 	defer watcher.Close()
-	// Add all directories recursively
+	// Add all directories recursively (except ignored ones)
 	err = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
 		if err != nil {
 			return err
 		}
 		if info.IsDir() {
+			if shouldIgnore(path) {
+				return filepath.SkipDir
+			}
 			err = watcher.Add(path)
 			if err != nil {
 				log.Printf("Error watching %s: %v\n", path, err)
@@ -38,6 +61,11 @@ func watchFiles(dir string, changes chan<- FileChange) {
 			return
 		}
+		// Skip ignored paths
+		if shouldIgnore(event.Name) {
+			continue
+		}
 		// Handle different types of events
 		var operation string
 		switch {

master/workerpool.go Normal file (75 changed lines)

@@ -0,0 +1,75 @@
package main
import (
	"log"
	"sync"
)
// WorkerPool manages a pool of worker addresses and tracks their availability.
// Each worker can only handle one request at a time.
type WorkerPool struct {
	mu        sync.Mutex
	workers   []string
	available chan string
	closed    bool
}
// NewWorkerPool creates a new empty worker pool.
func NewWorkerPool() *WorkerPool {
	return &WorkerPool{
		available: make(chan string, 100), // buffered to avoid blocking
	}
}
// SetWorkers updates the pool with a new set of worker addresses.
// Called when workers are started or restarted after a rebuild.
func (p *WorkerPool) SetWorkers(addrs []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	// Drain the old available channel
	close(p.available)
	for range p.available {
		// drain
	}
	// Create new channel and populate with new workers
	p.available = make(chan string, len(addrs)+10)
	p.workers = make([]string, len(addrs))
	copy(p.workers, addrs)
	for _, addr := range addrs {
		p.available <- addr
	}
	log.Printf("[pool] Updated workers: %v", addrs)
}
// Acquire blocks until a worker is available and returns its address.
func (p *WorkerPool) Acquire() (string, bool) {
	addr, ok := <-p.available
	if ok {
		log.Printf("[pool] Acquired worker %s", addr)
	}
	return addr, ok
}
// Release marks a worker as available again after it finishes handling a request.
func (p *WorkerPool) Release(addr string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	// Only release if the worker is still in our current set
	for _, w := range p.workers {
		if w == addr {
			select {
			case p.available <- addr:
				log.Printf("[pool] Released worker %s", addr)
			default:
				// Channel full, worker may have been removed
			}
			return
		}
	}
	// Worker not in current set (probably from before a rebuild), ignore
}