
Livable Places GmbH
Hands-on CTO at Livable Places
Chief Technology Officer
Relevance
Why this case matters
This framing makes the decision signal explicit for hiring or collaboration: impact, proof, fit, and AI / delivery relevance.
System impact
Connected monorepo product development, distributed Bun/BullMQ workers, Redis locking, self-hosted platform operations, Kotlin/Python geo services, and AI/ML office-risk research into one reliable proptech delivery system.
AI / delivery relevance
AI-native systems craft here means more than prompt fluency. It means turning monorepo architecture, queue orchestration, locking, observability, geo data, ML evidence, agent workflows, and new product lines into one reliable operating system.
Proof
pnpm + Turbo
monorepo across app, services, and platform
Bun + BullMQ
distributed workers, flows, and queue orchestration
PCA + LAS
office-risk fingerprinting, calibration, and experiment design
Kotlin + PostGIS
geo, census, feature, and prognosis services
Demand -> Supply -> Property
product logic for measurable social sustainability
Redis locking
distributed locks, Pub/Sub, and job-completion signaling
Bare-metal Swarm
IaC, auth, observability, and alerts
Especially relevant for
- Hiring
- Hands-on CTO
- Fractional transformation
- AI-native systems craft
Case context
Overview
At Livable Places I am not working on a generic "AI-first" story. I am working on a real delivery system for a proptech product with actual operational responsibility. That spans monorepo product development, distributed scoring and data workflows, self-hosted platform operations, geo/census services, and the evidence-led preparation of a new office-risk product line.
The product thesis is bigger than a score dashboard: make societal demand for real estate uses measurable and comparable at location level. The operating logic is Demand -> Supply -> Property, so financing, acquisition, ESG, and portfolio decisions can rely on more legible signals.
My leverage is connecting product logic, data pipelines, AI/ML research, geo services, queue orchestration, and platform operations instead of treating them as separate workstreams. The monorepo, workers, auth, observability, GitOps-style deploys, backups, and runbooks form one system that can be built, shipped, and operated.
The AI/ML work stays deliberately evidence-bound. Office-risk fingerprinting, PCA, k-means, Location Absorption Score, benchmarking, and feature ablation are used for characterization and screening until target quality, residualization, and cross-city stability can support stronger claims.
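The fingerprinting and calibration details are internal, but the clustering step behind office-risk characterization can be illustrated. Below is a minimal, plain-Python k-means sketch under simplifying assumptions (naive first-k seeding instead of k-means++, small 2-D feature vectors); the real workflow pairs clustering with PCA and LAS calibration.

```python
import math

def kmeans(points, k, iters=50):
    """Naive k-means: seed with the first k points, then alternate
    nearest-centroid assignment and centroid recomputation."""
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        updated = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if updated == centroids:
            break  # assignments stable: converged
        centroids = updated
    return centroids, clusters
```

The point of the sketch is the evidence-bound framing: clusters like these are used for characterization and screening, not yet for predictive claims.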
Responsibility
Activities
- Built and evolved a pnpm/Turborepo monorepo for the web app, services, workers, and operating tools
- Worked hands-on on distributed scoring and data workflows with Bun, BullMQ, FlowProducer, QueueEvents, and Redis
- Used Redis for locking, Pub/Sub, and job-completion signaling instead of fragile glue logic
- Built authenticated async report export paths with service-side auth guards, shared export domain logic, and XLSX generation
- Owned a self-hosted bare-metal platform with Docker Swarm, Traefik, Authentik, Ansible bootstrap, Docker Secrets, restic backups, WireGuard, and a full observability stack
- Established IaC/GitOps-style GitHub workflow paths for build, release, deployment, rollback, scaling, backup, restore smoke tests, maintenance, and self-healing
- Shaped the geo/census service layer around Kotlin, Spring Boot, PostGIS, OpenAPI, FastAPI, Valhalla, OSM, and external POI imports
- Prepared a new office-risk product line through evidence-led product and delivery work
- Designed and challenged ML workflows around PCA, k-means clustering, LAS calibration, benchmark matrices, target quality, residualization, and feature ablation
- Used multi-agent research workflows, local agent skills, and plugin-style skill development as part of the product evidence layer, not only as developer convenience
- Translated strategy into visible goals, scopes, work items, development boards, releases, demos, and customer-facing feedback loops
- Built demo and campaign measurement paths with anonymous access, UTM tracking, event analytics, and conversion-funnel thinking
- Built internal planning and reporting tooling with Next.js, React, SQLite, Zod, and localized product surfaces
Operating mode
Methodology
- Delivery as a system: monorepo, platform, queues, geo services, and product work are not optimized in isolation
- Decision quality over dashboards: product work is judged by whether it helps customers make better financing, acquisition, ESG, or portfolio decisions
- Infrastructure as code: platform state, auth baselines, stack definitions, secrets, backups, and operations are encoded and repeatable
- Operability by design: logs, metrics, traces, alerts, and auth are part of the product, not follow-up work
- Distributed orchestration with explicit dependencies, locking, and visibility instead of silent background jobs
- AI-first where it matters: use agents, LLM tooling, and research automation to improve evidence quality and delivery speed, but keep claims bounded by data
- ML with evidence gates: fingerprint, benchmark, calibrate, residualize, ablate, and only then claim predictive value
- Geo/data products as infrastructure: OSM, census, POI, routing, and market data are treated as versioned, observable product dependencies
- Visible operating model: strategy, roadmap, goals, scope, work items, board, release, demo, and customer feedback stay connected
- Small batches and visible ownership instead of roadmap and process signaling
- Evidence-first product work: observation, hypothesis, experiment, then scale
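The "ML with evidence gates" principle can be made concrete with a toy residualization check: a candidate feature only earns a predictive claim if it still explains the target after a known baseline covariate has been regressed out. All names, the single-covariate setup, and the threshold below are illustrative simplifications, not the production gate.

```python
def fit_line(xs, ys):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def residualize(xs, ys):
    """What is left of the target after the baseline covariate is removed."""
    a, b = fit_line(xs, ys)
    return [y - (a * x + b) for x, y in zip(xs, ys)]

def corr(xs, ys):
    """Pearson correlation; 0.0 when either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def passes_gate(baseline, candidate, target, threshold=0.3):
    """Candidate must still correlate with the target once the
    baseline has been regressed out; otherwise it adds no evidence."""
    return abs(corr(candidate, residualize(baseline, target))) >= threshold
```

A feature that merely restates the baseline fails this gate by construction, which is exactly the failure mode residualization is meant to catch before any predictive claim is made.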
Technical context
Technology stack
The tools themselves are not the point; what matters is which system layers had to work together.
Frontend (12)
Backend (13)
Tools (17)
Data & AI (43)
DevOps (25)
Practices (5)
Next step
If you want to explore similar leverage for hiring, collaboration, or a concrete transformation, this is the right starting point.
Send a short note about the situation you are trying to assess. I reply personally and will be direct about fit.