Four Backends Later: What Building DeployForge Actually Looked Like

The honest story of rewriting DeployForge's backend four times - from WordPress plugin to Express to Bun to Elysia - and what each iteration taught me about building the right tool for the job.

There's a version of this story where I planned everything from the start. Where I sat down, mapped out the architecture, chose the right tools, and built exactly what I intended to build. That version doesn't exist.

What actually happened was messier and more interesting. Over the course of building DeployForge - a platform that automates WordPress theme deployments from GitHub - I rewrote the backend four times. Not because I was indecisive, but because each version taught me something that made the next one possible.

This is the story of that process. Not a tutorial, not a framework comparison, but an honest account of how a project finds its shape through iteration.


The Spark

If you've ever deployed a WordPress theme the traditional way, you already know the pain. You write your code locally, maybe compile some Sass or run a build step, then you open FileZilla or SSH into the server, navigate to wp-content/themes/, and upload your files. You refresh the site. You hope nothing breaks.

Meanwhile, every other part of modern web development has moved on. Push to GitHub, CI runs your build, the artifact deploys automatically, and if something goes wrong you roll back with a click. That workflow exists for practically every other ecosystem. WordPress, for whatever reason, got left behind.

I'd been working with WordPress long enough to feel this friction daily. Theme development wasn't the problem - WordPress is genuinely good at what it does. The deployment story was the problem. And it wasn't a problem anyone seemed to be solving in a way that felt right.

The first attempt was a WordPress plugin. The logic was straightforward: if the problem is getting code from GitHub to WordPress, build something that lives inside WordPress and handles the transfer. The plugin could listen for webhook events from GitHub, download the built artifact, and extract it into the themes directory. It worked. For a single site, with a single developer, deploying a single theme, it was fine.

But it didn't take long to hit the edges. Authentication was handled by WordPress itself, which meant every site was its own island. There was no way to manage multiple sites from one place. No team access controls. No deployment history you could actually review. No way to add features that existed outside of WordPress - monitoring, analytics, billing. The plugin was doing one thing well, but the product I was imagining needed a lot more than one thing.

The plugin wasn't the product. It was one piece of it. I needed a backend.


Express: The Obvious Choice

When you need a Node.js API, you reach for Express. Everyone does, or at least everyone did. It has the largest ecosystem, the most tutorials, the most Stack Overflow answers. Choosing Express felt less like a decision and more like a default.

And it worked. I built out the API the plugin would talk to - handling webhook payloads from GitHub, managing deployment state, serving as the coordination layer between GitHub Actions builds and the WordPress plugin waiting for artifacts. Express gave me routes, middleware, request handling, everything I needed to get the system functional.

The problems weren't dramatic. Express didn't break or fail in any spectacular way. It was more of a slow accumulation of friction.

Every route needed manual type annotations. I was writing validation logic by hand, then writing types that mirrored that validation, then hoping the two stayed in sync as things changed. The middleware stacking pattern, which seems elegant in a tutorial, started to feel like ceremony in practice. I'd write the same auth check, the same error wrapper, the same response shape boilerplate across dozens of routes. None of it was hard. All of it was tedious.
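The pattern looked something like this - a reconstruction with invented names, not the actual DeployForge code. The interface and the hand-rolled validator are defined separately and only stay in sync by discipline:

```typescript
// A hypothetical Express-era payload type. Nothing connects this
// interface to the validator below except the developer remembering
// to update both.
interface DeployBody {
  siteId: string
  commitSha: string
  branch: string
}

// Hand-written runtime validation that mirrors the interface above.
function validateDeployBody(input: unknown): DeployBody | null {
  if (typeof input !== 'object' || input === null) return null
  const b = input as Record<string, unknown>
  if (
    typeof b.siteId !== 'string' ||
    typeof b.commitSha !== 'string' ||
    typeof b.branch !== 'string'
  ) {
    return null
  }
  return { siteId: b.siteId, commitSha: b.commitSha, branch: b.branch }
}

// Inside an Express handler this became ceremony on every route:
//   const body = validateDeployBody(req.body)
//   if (!body) return res.status(400).json({ error: 'invalid payload' })
```

Multiply that by every endpoint and every payload shape, and the tedium adds up fast.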

There was also a growing sense that Express was designed for a version of Node.js development that I was moving away from. The JavaScript-first, callback-friendly, bring-your-own-everything philosophy made sense in 2015. By the time I was building DeployForge, I was writing TypeScript exclusively, thinking in terms of type safety and compile-time guarantees, and Express had very little to say about any of that.

I want to be clear - I don't think Express is a bad framework. It solved real problems for a lot of people for a long time, and the fact that it's still the first thing most developers reach for says something about its staying power. But for a solo developer building a typed API that needed to be reliable and fast to iterate on, it wasn't pulling its weight. Every hour I spent on Express boilerplate was an hour I wasn't spending on the product.

I didn't rewrite immediately. I filed away the friction and kept building. The trigger for the next change came from somewhere unexpected - the runtime.


Enter Bun

I don't remember exactly when I first tried Bun, but I remember the feeling. I ran bun install on the project and watched it finish in a fraction of the time npm took. I started the dev server and it was just there - no waiting, no warm-up. It felt like someone had taken the same tools I was already using and removed all the unnecessary weight.

The initial switch was low risk. Bun can run Express applications without modification - you swap the runtime, not the framework. So I did exactly that. Same Express codebase, same routes, same middleware, now running on Bun instead of Node.js. Startup was faster, installs were faster, and the built-in TypeScript support meant I could drop part of my build configuration.

But beyond the speed, Bun introduced ideas that would matter later. Its native test runner meant I didn't need Jest or Vitest as separate dependencies. Its workspace support would eventually become the foundation of the monorepo. These weren't features I needed immediately, but they planted the seeds for decisions I'd make down the road.

What became clear during this phase was a mismatch. The runtime was modern - fast, TypeScript-native, designed with current development patterns in mind. But Express sitting on top of it was still Express. The middleware model, the lack of type inference, the manual validation - none of that changed just because the code was executing faster. I had a better engine, but the chassis was the same.

This phase lasted maybe a month. It was the shortest chapter in the project's history, but it was necessary. It confirmed two things: Bun was the right runtime, and the framework needed to be something that actually took advantage of what Bun offered. I started looking for what that framework might be.


Elysia: Everything Clicks

I found Elysia through a combination of reading Bun ecosystem discussions and looking for frameworks that were built specifically for it rather than adapted from Node.js. The pitch was straightforward: a type-safe web framework designed for Bun, with end-to-end type inference and a plugin system for composition. I'd read similar pitches before and been disappointed, so I didn't expect much.

The first route I wrote changed my mind.

```typescript
import { Elysia, t } from 'elysia'

const app = new Elysia()
  .post('/deploy', ({ body }) => {
    // body is fully typed - inferred from the schema below
    return { status: 'triggered', siteId: body.siteId }
  }, {
    body: t.Object({
      siteId: t.String(),
      commitSha: t.String(),
      branch: t.String()
    })
  })
```

That's it. No separate type definition. No validation middleware that exists in a different file from the route. No hoping that the runtime validation and the TypeScript types agree with each other. The schema defines the validation and the types, and the inference flows through the entire chain - from the request body to the handler parameters to the response. If I change the schema, TypeScript catches every place that needs to update. If I access a property that doesn't exist on the body, the compiler tells me before the code ever runs.

Coming from Express, where this kind of safety required multiple libraries, manual type assertions, and constant vigilance, it was a genuine relief. I wasn't writing less code because the framework was doing magic. I was writing less code because the framework had better abstractions.

The plugin system was the other thing that clicked. In Express, sharing functionality across routes means middleware - functions that intercept the request pipeline, mutate some state, and pass control down the chain. It works, but it's implicit. You have to read the middleware stack to understand what's happening, and the types don't help you because middleware operates on a shared, loosely-typed request object.

Elysia's plugins are explicit. They extend the application instance with typed properties and methods. When you use a plugin, the types it provides become available throughout your routes, and the compiler knows about them. Authentication isn't a middleware that silently attaches a user to the request - it's a plugin that adds a typed user property you can access directly, and if the plugin isn't applied, the property doesn't exist in the type system.

This might sound like a subtle difference, but in practice it changed how I thought about structuring the API. Instead of a single Express app with middleware stacked in the right order, I had composable pieces that each carried their own types. The auth plugin knew about sessions. The deployment plugin knew about sites and artifacts. The monitoring plugin knew about check intervals and incident states. Each one was self-contained, testable, and explicit about what it provided.

The performance was also noticeably different. Elysia is built on Bun's native HTTP server rather than wrapping Node's, and the difference shows. I'm careful about making benchmark claims because synthetic benchmarks rarely reflect real-world usage, but the responsiveness of the API during development - the time between hitting a route and seeing the response - was consistently faster than what I'd experienced with Express on either Node or Bun.

What I appreciated most, though, was the developer experience. This is harder to quantify than type safety or performance, but it's what I spent the most time with. Writing routes in Elysia felt productive in a way that Express hadn't for a long time. The feedback loop was tight: define a schema, write a handler, see types flow through, hit the endpoint, get a typed response. When something was wrong, the compiler usually caught it before I could make the request. When I needed to refactor, the type system guided the changes.

I'd been building DeployForge for months by this point, and this was the first time the backend framework felt like it was working with me rather than around me.


Why a Platform, Not Just a Plugin

There's a simpler version of DeployForge that I could have built. The WordPress plugin handles deployments - downloading artifacts from GitHub and extracting them into the themes directory. You could wrap a license key system around that, list it on WordPress.org or sell it through a marketplace, and call it a product.

I thought about this seriously. It would have been faster to ship, easier to maintain, and simpler to explain. A WordPress plugin, sold with a license key, installed and configured entirely within the WordPress admin.

The problem is everything that doesn't fit inside WordPress.

Authentication is the first thing. A license key system can tell you whether a key is valid, but it can't give you user accounts, team management, role-based access, or OAuth sign-in with GitHub. And if you're building a tool for developers who are already living in GitHub, being able to sign in with their GitHub account and see their repositories immediately isn't a nice-to-have - it's the baseline experience they expect.

Then there's everything that happens around deployments. A deployment history that shows every deploy across all your sites, filtered by status, branch, and trigger type. A dashboard that tells you which sites need attention. Uptime monitoring that checks your sites from multiple geographic regions and alerts you when something goes down. Usage tracking, subscription management, workspace settings, alert configuration. None of this can live inside WordPress because it's not about any single WordPress installation. It's about the relationship between a developer and all of their sites.

The mental model I settled on was this: the WordPress plugin is the agent. It lives on each site, handles the actual deployment mechanics, and reports back. The platform is the brain. It manages authentication, orchestrates deployments, tracks state, runs monitoring, handles billing, and provides the interface developers interact with day to day.

This meant building a full web application - not just an API for the plugin to talk to, but a product in its own right. User signups, onboarding flows, dashboard views, settings pages, team invitations, subscription tiers. It was significantly more work than a license key server. But it was also the difference between selling a plugin and building a product.

The platform owns the user relationship. That's the thing that matters. When a developer signs up, they're signing up for DeployForge, not installing a plugin and entering a license key. Their experience starts in the browser, not in the WordPress admin. And every feature I build that improves their workflow - better monitoring, better deployment insights, better team collaboration - lives in the platform where I control the entire experience, rather than in a WordPress admin page where I'm constrained by what WordPress allows.

This decision shaped everything that followed. The backend wasn't just an API anymore - it was the foundation of a platform.


Folding It Into Next.js

With the decision to build a full platform, the architecture needed to support two things at once: a web application for users (dashboard, auth, settings) and an API for the WordPress plugin (webhooks, deployment coordination, status reporting). The question was whether these should be separate services or one.

I'd been running Elysia as a standalone API. The Next.js app was the dashboard. They were deployed separately, which meant two deployment surfaces, two sets of environment variables, two things to monitor, and a CORS configuration that existed solely because my own frontend was talking to my own backend across domains.

The integration point turned out to be simpler than I expected. Next.js supports catch-all API routes, so I mounted the entire Elysia application inside a route handler. Every request to /api/plugin/* gets forwarded to Elysia, which handles it with the same type-safe routing, validation, and plugin system it always had. The difference is that it's running inside the Next.js process, deployed as a single unit on Vercel.

This collapsed a meaningful amount of operational complexity. The WordPress plugin talks to the same domain as the dashboard. There's one deployment, one set of environment variables, one domain to manage SSL for. Server actions handle the dashboard's mutations - creating sites, updating settings, managing teams - while Elysia handles the external API that the WordPress plugin communicates with. Each system does what it's best at.

The developer experience also improved. Working on a plugin API route and then switching to a dashboard component that displays the results of that route - both happen in the same project, same repository, same dev server. The types from the API schema are available in the frontend without any code generation or synchronization step.

React 19 and the Next.js App Router contributed to this feeling of cohesion. Server components handle data fetching, the cache system manages invalidation, and server actions provide typed mutations. Combined with Elysia's type-safe API layer, the result is an application where type information flows from the database schema through the API to the UI with very few gaps.

The architecture wasn't planned this way from the start. It emerged from the combination of choosing Elysia (which is lightweight enough to embed), choosing Next.js (which supports custom API route handlers), and realising that the operational cost of running them separately wasn't justified by any architectural benefit. Sometimes the right design is the one that removes a boundary you didn't need.


The Monorepo: Building for What's Next

For most of DeployForge's life, the project was a single repository with the Next.js app and some scripts. The WordPress plugin lived in its own repository. It worked fine until I started building features that didn't fit neatly into either place.

The monitoring system was the trigger. Uptime monitoring for WordPress sites meant running HTTP checks from multiple geographic regions, aggregating results, detecting incidents, and sending alerts. This isn't something you run inside a Next.js application on Vercel - it's a set of long-running services that need to operate independently: an orchestrator that schedules check cycles and runs consensus voting across regions, and regional sub-workers that execute the actual HTTP checks.
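The consensus idea can be reduced to a few lines - an illustrative sketch, not the actual service logic. A site only counts as down when a majority of regions agree, which filters out single-region network blips:

```typescript
// Invented shape for a single region's check result.
interface RegionCheck {
  region: string
  up: boolean
}

// Majority vote: more than half the regions must report the site down
// before an incident is opened.
function isIncident(checks: RegionCheck[]): boolean {
  const down = checks.filter((c) => !c.up).length
  return down > checks.length / 2
}
```

The real orchestrator layers scheduling, retries, and alerting on top, but the voting rule is the part that keeps one flaky route between a single region and the site from paging anyone.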

These workers needed to share type definitions with the main application. A monitor record, a ping result, a check interval - these types exist in the database schema, in the API layer, in the worker logic, and in the dashboard UI. Defining them in four different places and hoping they stay in sync was not an option I was willing to accept.

Bun workspaces solved this cleanly. The monorepo now contains the Next.js application, the WordPress plugin (a PHP/Composer project), the monitoring orchestrator and sub-workers (TypeScript services running on Railway), a shared types package, and an API schema package that defines every plugin endpoint's request and response shapes. When I change a type in @deploy-forge/shared-types, every workspace that imports it sees the change immediately. The compiler catches any inconsistencies before code leaves my machine.
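A root package.json along these lines wires the workspaces together - the directory layout here is illustrative; only the package names mentioned above come from the actual project:

```json
{
  "name": "deploy-forge",
  "private": true,
  "workspaces": [
    "apps/web",
    "packages/shared-types",
    "packages/api-schema",
    "services/monitoring-orchestrator",
    "services/monitoring-worker"
  ]
}
```

With this in place, bun install links the local packages, so apps/web can import from @deploy-forge/shared-types and pick up edits without publishing anything.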

The interesting challenge was making PHP and TypeScript coexist. The WordPress plugin isn't a Bun workspace member - it uses Composer for dependency management and has its own build and test tooling. But it lives in the same repository, which means a single commit can update the plugin's API endpoint handler and the corresponding Zod schema that validates the response shape, with CI running both PHP and TypeScript tests against the change. Contract drift between the web app and the plugin, which was a real risk when they lived in separate repositories, is now caught automatically.

The monorepo structure was a deliberate investment in the project's future. When I set it up, the monitoring workers were still being designed. The backup system was a line item on a roadmap. But I knew these features were coming, and I knew they'd need to share types, schemas, and configuration with the main application. Building the scaffolding before the features existed meant that when it was time to implement monitoring, the infrastructure question was already answered. I could focus on the feature itself rather than figuring out where it should live and how it should communicate.

There are trade-offs. Docker builds for the workers need to copy every workspace's package.json to resolve the lockfile, even for workspaces the worker doesn't use. CI runs are broader than they strictly need to be. The workspace configuration has its own learning curve. But the ability to make a change that spans the API schema, the web app, and a worker service - and have the compiler verify it all - is worth the operational overhead. The alternative is hoping that independently deployed services agree on their contracts, and I've seen enough production incidents to know how that tends to go.


What I'd Tell Past Me

Looking back at the four backend iterations - WordPress plugin, Express, Express-on-Bun, Elysia - the obvious question is whether I could have skipped some of them. If I'd found Elysia first, would I have saved months of work?

Maybe. But I don't think the result would have been the same.

Each phase taught me something specific that informed the next decision. The WordPress plugin taught me the limits of solving a platform problem inside someone else's platform. Express taught me what I actually needed from an API framework - and more importantly, what I didn't need. The Bun migration taught me that runtime performance matters but framework ergonomics matter more. By the time I found Elysia, I had a clear mental model of what I was looking for, and I could recognise it immediately because I'd experienced the alternatives.

There's a version of developer culture that treats framework migrations as failures. You should have researched more thoroughly, planned more carefully, chosen the right tool from the start. I understand the impulse - rewriting is expensive. But it assumes you can evaluate tools in the abstract, without the context that only comes from using them on your specific problem. I didn't know what kind of type safety I wanted until I'd written enough untyped Express routes to understand what I was losing. I didn't know how important Bun-native frameworks were until I'd run a Node.js framework on Bun and seen the gap.

The other lesson is about scope. DeployForge started as a WordPress plugin and became a platform with a web application, an API layer, monitoring workers, a shared type system, and a multi-workspace monorepo. That scope didn't arrive all at once - it accumulated through decisions that each made sense on their own. Building the platform instead of selling licenses. Integrating Elysia into Next.js instead of running them separately. Moving to a monorepo to support workers that didn't exist yet.

Each of those decisions was a bet on the project's future. Some of them I made deliberately, with a clear picture of what I was building toward. Others I made instinctively, because the current approach had started to feel wrong in ways I couldn't fully articulate yet. Both kinds of decisions turned out to be valuable. The deliberate ones gave me structure. The instinctive ones gave me velocity.

If I could tell past me one thing, it would be this: the stack is not the product. The stack is a vehicle for the product, and vehicles can be upgraded while you're driving them. Every hour spent making the tools feel right is an hour that pays for itself in the months that follow. Don't be precious about your choices, but don't apologise for them either. They got you here.

The stack isn't done evolving. I'm certain that some part of what I've built today will feel like the wrong choice a year from now. And when that happens, I'll have the context to know what should replace it - because I used it long enough to understand where it falls short. That's not a failure of planning. That's how software gets built.
