The No-Build Manifesto: Shipping Without a Bundler
The following is not an argument for why you should abandon your build tools. It is a measurement of what happens when you do.
blakecrosley.com runs FastAPI + Jinja2 + HTMX + Alpine.js + plain CSS. No webpack. No Vite. No Rollup. No TypeScript compiler. No Babel. No PostCSS. No Tailwind. No package.json. No node_modules/. The site serves 37 blog posts with 20 interactive JavaScript components, 20 guides, ten language translations, and scores 100/100/100/100 on Lighthouse. You can verify the Lighthouse score yourself: run PageSpeed Insights on any page.11
TL;DR
The HTMX community has plenty of advocacy. What it lacks is evidence. The numbers here come from a production site with substantial content, interactive features, and internationalization, all without a single build tool. The tradeoffs are honest, and the conclusion is narrow: for content-driven sites with a solo developer or small team, build tools solve problems you don’t have while creating problems you do. For large teams with shared component libraries and design system packages, build tools earn their complexity. The boundary is clearer than the discourse suggests.
The Stack
Backend: FastAPI + Jinja2 (server-rendered HTML)
Frontend: HTMX + Alpine.js + Bootstrap 5 (CDN)
Styles: Plain CSS with custom properties
JavaScript: Vanilla JS, IIFE-scoped per component
Deployment: Railway (git push → live)
CDN: Cloudflare (caching, Workers, D1)
No transpilation. No tree shaking. No hot module replacement. No source maps. The JavaScript you write is the JavaScript that ships.
The Numbers
Here are the numbers. The blakecrosley.com column is measured on the live site; the Next.js column is the author's estimate, with verified baselines footnoted:
| Metric | blakecrosley.com | Typical Next.js Project (author’s estimate)1 |
|---|---|---|
| Dependencies | 15 Python packages | 311 npm packages (verified: npx create-next-app@latest, Feb 2026)1 |
| Build config files | 0 | 5-8 (next.config, tsconfig, postcss, tailwind, eslint, babel, etc.) |
| node_modules/ size | Does not exist | 187 MB baseline (verified), 250-400 MB with additions1 |
| Install time | pip install -r requirements.txt: 8 seconds | npm install: 30-90 seconds |
| Build step | None | next build: 15-60 seconds |
| Deploy pipeline | git push → live in ~40 seconds | git push → install → build → deploy: 2-5 minutes |
| CSS files | 17 files, 10,846 lines (plain CSS) | Generated from Tailwind/Sass, output varies |
| JS files | 33 files, 8,819 lines (human-readable) | Bundled, minified, chunked: unreadable in production |
| Lighthouse Performance | 100 | 70-90 without explicit optimization (per Vercel’s own performance documentation)13 |
The 15 Python packages include FastAPI, Jinja2, Pydantic, and 12 others. None is a build tool. None is a compiler. None is a bundler.2
What You Give Up
Honesty requires listing the costs, and they are real.
No TypeScript. Every .js file in this project is vanilla JavaScript. Type errors are caught by testing and by Claude Code’s analysis, not by a compiler. The approach works for a solo developer. It would not work for a team of 10 sharing component interfaces across modules.
No Hot Module Replacement. When I change a CSS file, I refresh the browser manually. HTMX’s hx-boost makes navigation fast enough that full refreshes are tolerable because it fetches only the body content via AJAX and swaps it without a full page load. On a project where I’m iterating on visual details every 30 seconds, HMR would save meaningful time.
No Tree Shaking. Every byte of JavaScript I write ships to the browser. I can’t import a single function from a utility library without shipping the entire file. Tree shaking (dead code elimination by the bundler, which traces import paths and removes functions no module references) requires a build step by definition. The constraint forces discipline: small, focused files instead of large utility modules. The 20 interactive components average 130-450 lines each because they have to be self-contained.3
No Component Library from npm. No Radix, no shadcn/ui, no Headless UI. Every interactive element (the boids simulation, the Hamming code visualizer, the consensus simulator) is hand-built. The approach is only viable because the interactive components serve specific pedagogical purposes, not generic UI patterns.
No Design System Tokens from npm. My design system lives entirely in CSS custom properties. I can’t import it as a package in another project. For a single-site system, the constraint is acceptable. For a multi-product organization, it’s not.
The five tradeoffs are acceptable for a content-driven site with one developer. They would be unacceptable for a SaaS product with a 15-person engineering team.
What You Gain
Zero build failures. The deploy pipeline is git push. No npm install can fail due to a peer dependency conflict. No next build can fail due to a TypeScript error in a file I didn’t touch. No Dependabot PR upgrades a transitive dependency and breaks the build.4
Debug with View Source. The JavaScript that runs in the browser is the JavaScript I wrote. No source maps needed. No mapping from compiled output to original source. When a bug appears in production, I read the deployed file directly.
Instant local startup. uvicorn app.main:app --reload starts in under 2 seconds. No npm run dev that installs, compiles, and bundles before showing a page.
Concrete request waterfall. A first visit to a blog post loads: one HTML document (~15KB gzipped), one CSS file (~8KB), one page-specific CSS file (~2KB), HTMX from CDN (~14KB, cached), Alpine.js from CDN (~14KB, cached), and the page’s interactive JS component (~4-8KB). Total transfer: 45-60KB on first visit, 15-25KB cached. No bundle splitting, no chunk negotiation, no runtime module resolution. The browser requests exactly what the page needs and nothing else.
Zero Dependabot noise. No package-lock.json means no weekly PRs updating semver, ansi-regex, or glob-parent: packages I never directly imported but that live three layers deep in my dependency tree.
Future-proof frontend. The client-side code will work in 10 years. The HTML is HTML. The CSS is CSS. The JavaScript is JavaScript. There is no Webpack 4 → 5 migration, no Create React App deprecation, no Next.js App Router migration. The platform is the standard.5 The server-side dependencies (FastAPI, Python, Railway) still require version management and occasional updates — the no-build approach eliminates frontend toolchain churn, not backend maintenance.
HTMX as Architecture
The HTMX discourse focuses on syntax: hx-get, hx-swap, hx-target. That’s the wrong frame. The architectural insight is that server-rendered HTML is the API.
In a traditional SPA:
Browser → fetch('/api/users') → JSON → React renders HTML → DOM update
With HTMX:
Browser → GET /users (hx-get) → Server renders HTML fragment → DOM swap
The server returns the final representation. No client-side state management, no serialization/deserialization, no hydration. The Jinja2 template is the component. The FastAPI endpoint is the API. One layer, not three.6
The architectural implications cascade. Three eliminations, each removing a problem category rather than a single bug.
No JSON boundary. Response shape mismatches, null-vs-undefined ambiguities, and date serialization inconsistencies disappear. TypeScript and Zod exist to prevent these bugs. HTMX eliminates the category entirely by returning HTML instead of JSON.
No client-side state management. Keeping server state and client state consistent is the hardest part of SPA development. HTMX removes the problem by having only one source of truth: the server.
No hydration. The “uncanny valley” where the page renders server HTML, then flickers as the JavaScript framework re-renders it client-side, does not exist. The server’s HTML is the final output.
A concrete comparison shows the difference. Here is a search-as-you-type input built two ways:
With React + build tools (JSX, requires transpilation):
```jsx
// SearchBox.jsx — requires Babel, bundler, npm install
import { useState, useEffect } from 'react';

export default function SearchBox() {
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  useEffect(() => {
    if (!query) return setResults([]);
    // encodeURIComponent guards against queries containing spaces or '&'
    fetch(`/api/search?q=${encodeURIComponent(query)}`)
      .then(r => r.json())
      .then(setResults);
  }, [query]);

  return (<div>
    <input value={query} onChange={e => setQuery(e.target.value)} />
    <ul>{results.map(r => <li key={r.id}>{r.title}</li>)}</ul>
  </div>);
}
```
With HTMX + server rendering (plain HTML, no build step):
```html
<!-- search.html — ships as-is, no transpilation -->
<input type="search" name="q"
       hx-get="/search" hx-trigger="keyup changed delay:300ms"
       hx-target="#results" />
<ul id="results"></ul>
```

```python
# server — returns rendered HTML, not JSON
@router.get("/search")
async def search(request: Request, q: str = ""):
    results = await db.search(q)
    # Jinja2Templates expects the request in the template context
    return templates.TemplateResponse("partials/results.html",
        {"request": request, "results": results})
```
The HTMX version has no client state, no serialization boundary, and no build step. The server returns the final HTML. The browser swaps it in. The entire interaction is a few lines of markup and a handful of Python.
The pattern maps directly to the compounding engineering principle: each piece of infrastructure does exactly one thing, and the pieces compose without interference. A template renders HTML. A route returns it. HTMX swaps it in. No build step coordinates these pieces because no coordination is needed.
A More Complex Example: Multi-Step Form with Validation
Search-as-you-type is the canonical HTMX demo. A multi-step form shows the pattern at production complexity — server-side validation, conditional step progression, and state preserved across requests without client-side state management:
```html
<!-- step1.html — first form step -->
<form id="signup-form" hx-post="/signup/step1" hx-target="#signup-form" hx-swap="outerHTML">
  <input type="email" name="email" required />
  <button type="submit">Next</button>
</form>
```
```python
# server — validates step 1, returns step 2 or errors
@router.post("/signup/step1")
async def signup_step1(request: Request, email: str = Form(...)):
    if not validate_email(email):
        return templates.TemplateResponse("partials/step1.html",
            {"request": request, "error": "Invalid email format", "email": email})
    # Store progress server-side (session, DB, or signed cookie)
    request.session["signup_email"] = email
    return templates.TemplateResponse("partials/step2.html",
        {"request": request, "email": email})
```
```html
<!-- step2.html — second step, rendered by server only after step 1 validates -->
<form id="signup-form" hx-post="/signup/step2" hx-target="#signup-form" hx-swap="outerHTML">
  <p>Email: {{ email }}</p>
  <input type="text" name="name" required />
  <input type="password" name="password" minlength="8" required />
  <button type="submit">Create Account</button>
</form>
```
The server owns the form state. Step 2 only renders after step 1 validates server-side — no client-side conditional rendering, no form state library, no useReducer. Validation errors replace the current step with the same step plus error messages. The entire multi-step flow uses zero client-side JavaScript. In a React equivalent, the same flow would require useState for each field, a step counter, conditional rendering logic, client-side validation (duplicating server validation), and an API serialization boundary. The HTMX version eliminates all five concerns by keeping the state where the validation logic already lives: on the server.
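The step-2 handler is not shown above. Its decision logic can be sketched framework-free; the function name, template names, and session shape here are illustrative, not the site's actual code:

```python
# Sketch of a step-2 handler's logic, stripped of FastAPI plumbing for clarity.
# In the real app this would be a route returning a TemplateResponse.

def signup_step2(session: dict, name: str, password: str):
    """Validate step 2 against server-held state; return (template, context)."""
    email = session.get("signup_email")
    if not email:
        # User skipped step 1 (e.g. a direct POST): restart the flow
        return ("partials/step1.html", {"error": "Please start with your email"})
    if len(password) < 8:
        # Server-side check mirrors the minlength=8 attribute in the form
        return ("partials/step2.html",
                {"email": email, "error": "Password must be at least 8 characters"})
    # All steps validated: create the account and render confirmation
    return ("partials/done.html", {"email": email, "name": name})
```

Because the session is the single source of truth, the same function answers both "which step comes next" and "which errors to show", with no client-side mirror of either.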
Plain CSS Is Fine
My design system uses 10 color tokens, 13 type scale steps, and eight spacing values, all CSS custom properties:
```css
:root {
  --color-bg-dark: #000000;
  --color-text-primary: #ffffff;
  --color-text-secondary: rgba(255,255,255,0.65);
  --spacing-sm: 1rem;
  --spacing-md: 1.5rem;
  --font-size-lg: 1.25rem;
}
```
No Sass compilation step. No Tailwind config generating utilities. No PostCSS plugins transforming custom syntax. The browser reads these values directly. CSS custom properties have capabilities that preprocessor variables lack: they cascade through the DOM, inherit from parent elements, and can be overridden in media queries or scoped to components at runtime. Sass variables compile to static values and disappear. Custom properties remain live, which means a single theme switch or dark mode toggle changes every derived value without recompilation.7
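The runtime-override point is visible in a few lines: the same custom property takes a new value inside a media query or a theme attribute, and every var() reference updates live, with no recompilation. The token names here follow the site's convention but are illustrative:

```css
:root {
  --gutter: 48px;
  --color-bg: #000000;
}

/* One override re-derives every dependent value at runtime */
@media (max-width: 768px) {
  :root { --gutter: 24px; }
}

/* A theme switch flips everywhere var(--color-bg) is used */
[data-theme="light"] {
  --color-bg: #ffffff;
}

.hero {
  padding: var(--gutter);
  background: var(--color-bg);
}
```

A Sass variable would have been inlined as a static value at compile time; neither the media query nor the theme attribute could change it afterward.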
The beauty and brutalism aesthetic of this site (white on absolute black with four opacity tiers) emerges from the constraint. When you can’t reach for a color palette, typography carries the hierarchy. When you can’t reach for component shadows, whitespace creates structure. The constraint is the design.8
The CLS Journey
The Lighthouse journey exposed one genuine cost of no-build: critical CSS extraction required a custom Python script. In a Next.js project, the framework handles this automatically.
The specific bug: a mobile media query overrode a CSS custom property (--gutter: 48px → --gutter: 24px). The critical CSS included the desktop value but not the mobile override. On mobile, the hero rendered with 48px padding, then shifted to 24px when the full stylesheet loaded, producing a CLS of 0.493.
The fix was 12 lines of Python that extract critical CSS including media query overrides:
```python
# critical_css.py — extract CSS rules matching critical selectors,
# including @media overrides for mobile-first responsive values
import re

def extract_critical(css_text, selectors):
    rules = []
    for sel in selectors:
        # Match direct (non-nested) rules for this selector
        pattern = rf'{re.escape(sel)}\s*\{{[^}}]+\}}'
        rules.extend(re.findall(pattern, css_text))
    # Also extract whole @media blocks containing critical selectors.
    # finditer + group(0) keeps the full block; findall with a group
    # would return only the last inner rule and drop the @media wrapper.
    for match in re.finditer(r'@media[^{]+\{(?:[^{}]*\{[^}]*\})+\s*\}', css_text):
        block = match.group(0)
        if any(sel in block for sel in selectors):
            rules.append(block)
    return '\n'.join(rules)
```
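A minimal standalone check of the media-query behavior. The function is repeated so the snippet runs on its own; the selector and CSS are made up for illustration:

```python
import re

def extract_critical(css_text, selectors):
    # Same logic as critical_css.py: direct rules plus whole @media blocks
    rules = []
    for sel in selectors:
        rules.extend(re.findall(rf'{re.escape(sel)}\s*\{{[^}}]+\}}', css_text))
    for match in re.finditer(r'@media[^{]+\{(?:[^{}]*\{[^}]*\})+\s*\}', css_text):
        if any(sel in match.group(0) for sel in selectors):
            rules.append(match.group(0))
    return '\n'.join(rules)

css = """
.hero { padding: var(--gutter); --gutter: 48px; }
@media (max-width: 768px) {
  .hero { --gutter: 24px; }
}
.footer { color: gray; }
"""

critical = extract_critical(css, [".hero"])
# The mobile override ships in the critical CSS, so no layout shift:
assert "--gutter: 24px" in critical
# Non-critical rules stay out of the inlined block:
assert ".footer" not in critical
```

Without the @media pass, the critical CSS would contain only the 48px desktop value, which is exactly the bug that produced the 0.493 CLS.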
The investigation took three hours. The fix itself took 20 minutes. A build tool would have handled this automatically.
The honest accounting: build tools automate things that you can do manually, but the manual version costs debugging time when it breaks. The question is whether the automation cost (complexity, dependencies, build failures, migration churn) exceeds the manual cost (occasional debugging sessions).
For this site, the manual cost has been lower. Three years, one CLS bug, three hours of debugging.10 The alternative (maintaining a build pipeline) would have consumed more cumulative time in dependency updates, breaking changes, and configuration maintenance.
When Not to Use This
The no-build approach is wrong for:
Large teams. TypeScript’s value scales with team size.9 When 10 developers share component interfaces, compile-time type checking prevents integration bugs that runtime testing catches too late. A solo developer holds the entire system in their head. A team cannot.
Design system packages. If multiple products consume your design system, it needs to be an npm package with proper versioning, tree shaking, and a build pipeline. CSS custom properties in a single stylesheet don’t compose across repositories.
Complex client state. If your application has rich client-side state (drag-and-drop interfaces, real-time collaboration, offline-first data) a framework like React or Svelte earns its complexity. HTMX replaces client state with server round-trips, which works until latency matters.
npm ecosystem libraries. If you need Radix primitives, Framer Motion, or TanStack Query, you need a build pipeline. All three assume a bundler. Using them without one ranges from painful to impossible.
The boundary is simpler than the discourse suggests: if your application is primarily content rendered by a server, build tools are overhead. If your application is primarily state managed by a client, build tools are infrastructure.
Decision Framework: Do You Need Build Tools?
Answer these four questions:
1. Do more than five developers share JavaScript interfaces? If yes, TypeScript’s compile-time type checking prevents integration bugs that runtime testing catches too late. Add a build step.
2. Does your application manage complex client-side state? If drag-and-drop, real-time collaboration, or offline-first data are core features (not nice-to-haves), a framework like React or Svelte earns its complexity. Add a build step.
3. Do multiple products consume a shared component library? If yes, that library needs npm packaging, semantic versioning, and tree shaking. Add a build step.
4. Do you depend on npm ecosystem libraries that assume a bundler? If Radix, Framer Motion, TanStack Query, or similar libraries are core to the product, a build pipeline is mandatory.
If all four answers are “no,” the no-build approach is viable. If any answer is “yes,” build tools solve a real problem you have. The mistake is adding build tools when all four answers are “no” — solving problems you don’t have while creating dependency management overhead you do.
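The framework reduces to a single disjunction. A toy encoding, with parameter names of my own invention:

```python
def needs_build_tools(devs_sharing_js: int,
                      complex_client_state: bool,
                      shared_component_library: bool,
                      bundler_only_deps: bool) -> bool:
    """Any 'yes' answer means a build step solves a real problem you have."""
    return (devs_sharing_js > 5
            or complex_client_state
            or shared_component_library
            or bundler_only_deps)

# A solo developer on a content-driven site: all four answers are "no"
assert needs_build_tools(1, False, False, False) is False
# A 12-person team sharing JavaScript interfaces: add a build step
assert needs_build_tools(12, False, False, False) is True
```

The point of the encoding is that the questions are independent: one "yes" is enough, and no amount of "no" answers on the other three cancels it.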
Exercise: Audit your current project. Run du -sh node_modules/ (or equivalent) and wc -l package-lock.json. Write down the two numbers. Then list the three features of your build pipeline that directly serve your users (not your developer experience). If you cannot name three, the pipeline may be serving the toolchain more than the product. The numbers are not inherently bad — they are a measurement of complexity that should earn its keep.
Key Takeaways
For solo developers and small teams:
- The proof is the site, not the argument. blakecrosley.com serves 37 posts, 20 interactive components, 20 guides, and ten languages with zero build tools and perfect Lighthouse scores. The numbers are verifiable.
- The honest cost of no-build is occasional debugging. The CLS bug took three hours to fix. A build tool would have handled it automatically. Over three years, the cumulative debugging time has been far less than the cumulative maintenance time a build pipeline would have required.
- Constraints produce design. No colors forced typography to carry hierarchy. No build tools forced simple, self-contained JavaScript. The best constraints are the ones you choose before you need them.
For team leads evaluating stack choices:
- Build tools solve team-scale problems. TypeScript, tree shaking, and component libraries earn their complexity when multiple developers share interfaces. A solo developer building content-driven sites does not have these problems.
- HTMX’s real contribution is architectural. Server-rendered HTML as the API eliminates client state management, serialization, and hydration. The syntax is secondary to the insight.
FAQ
Is HTMX production-ready for real web applications?
Yes. HTMX has been stable since 2020 and is used in production by companies across multiple industries. Carson Gross, the creator, maintains backward compatibility as a core design principle — the HTMX docs explicitly state: “htmx follows semantic versioning and will not break existing applications within a major version.”14 The library itself is 14KB minified and gzipped and has zero dependencies. blakecrosley.com has run HTMX in production for three years with zero HTMX-related bugs — every issue encountered was in the application logic, not the library.
Can I use TypeScript without a build step?
Partially. TypeScript files can be type-checked with tsc --noEmit without generating output files, providing compile-time type checking as a linter rather than a transpiler. However, browsers cannot execute .ts files directly, so a build step is still required to serve TypeScript to the browser. The alternative is JSDoc type annotations in plain .js files, which TypeScript can check without compilation. This approach gives type safety during development while shipping standard JavaScript.
How does this approach compare to Astro or 11ty?
Astro and 11ty occupy a middle ground: they are static site generators that produce plain HTML with minimal or zero client JavaScript, but they still require a build step (Node.js, npm install, a build command). The no-build approach eliminates that build step entirely — the server renders HTML on each request. The tradeoff: Astro and 11ty produce faster static pages (no server computation), while FastAPI + HTMX handles dynamic content natively (user-specific data, form submissions, real-time updates) without an API layer.
Can a web development team ship without build tools and npm?
It depends on what the team is building. For content-driven sites and internal tools, teams of 2-5 have shipped successfully without build tools using server-rendered HTML and plain CSS. The constraint that bites first is shared component interfaces: without TypeScript, two developers can disagree on a function’s expected input without the compiler catching it. The practical boundary is around 5-8 developers working on shared JavaScript. Beyond that, TypeScript and a component library earn their complexity.
The JavaScript dependency cost is not unique to this author’s observation. Potvin, R. and Levenberg, J. (2016), “Why Google Stores Billions of Lines of Code in a Single Repository,” documented the organizational costs of dependency management at scale.12 At the individual project level, the npm ecosystem’s dependency depth produces the exact maintenance burden described here: transitive dependencies the developer never chose creating update pressure the developer must manage.
The article bridges the Design and Engineering sections of the blog. Design decisions appear in Beauty and Brutalism, Design Systems for Startups, and Type Scales. The engineering measurements are in Lighthouse Perfect Score and Compounding Engineering. The vibe coding post explores where this philosophy applies to AI-assisted development.
1. The “Typical Next.js Project” column reflects the author’s experience across 5+ Next.js projects (2021-2024) and community-reported norms. For reference: a fresh npx create-next-app@latest (Next.js 15, tested February 2026) installs 311 packages in node_modules/ totaling 187 MB. The package count and size ranges in the table are consistent with this baseline; production projects with additional dependencies trend higher. Individual projects vary significantly.
2. Full dependency list as of February 2026: fastapi, uvicorn, starlette, pydantic, pydantic-settings, jinja2, markdown, pygments, beautifulsoup4, lxml, nh3, resend, python-multipart, httpx, analytics-941. Zero are build tools. Zero are compilers. Zero are bundlers.
3. Average component size (130-450 lines) measured from the 20 interactive JS files in /static/js/ as of February 2026. Sizes range from 132 lines (compound-interest-mind.js) to 450 lines (subtraction-machine.js), with arcade.js at 1,666 lines as an outlier.
4. Based on the author’s experience maintaining Next.js projects, the JavaScript ecosystem generates 15-25 Dependabot PRs per month for an active project, most updating transitive dependencies the developer never imported directly. The figure is an estimate from direct observation, not an independently verified benchmark.
5. The web platform (HTML, CSS, JavaScript) has maintained backward compatibility for 30 years. A page from 1996 still renders in a 2026 Chrome. Tim Berners-Lee articulated this as a design principle: “a browser should be backwards-compatible, in that it should be able to read an earlier version of the language.” See w3.org/DesignIssues/Principles.
6. Carson Gross, creator of HTMX, frames this as “hypermedia as the engine of application state” (HATEOAS). See the htmx.org essays and the Hypermedia Systems book (2023) by Gross, Stepinski, and Cotter: hypermedia.systems.
7. CSS Custom Properties (CSS Variables) are supported in 97%+ of global browsers. Source: caniuse.com/css-variables. No compilation step is needed to use them.
8. The “constraint as design tool” principle has a long history. Charles Eames: “Design depends largely on constraints.” The Dogme 95 movement in filmmaking proved that removing tools (no artificial lighting, no post-production) produced more honest storytelling, not less. See en.wikipedia.org/wiki/Dogme_95.
9. The 2024 Stack Overflow Developer Survey found TypeScript among the top five most-used programming languages and the most widely adopted superset of JavaScript. The survey’s methodology and full results are at survey.stackoverflow.co/2024/. The observation that TypeScript adoption correlates with team size is the author’s inference from industry practice, not a direct survey finding.
10. The site’s git history begins March 2023. The “three years” figure reflects the period from initial deployment on Railway (March 2023) through February 2026. The CLS bug documented in the “CLS Journey” section is the only production-impacting CSS issue in that period.
11. Google PageSpeed Insights (pagespeed.web.dev) runs Lighthouse audits against any public URL. The tool tests Performance, Accessibility, Best Practices, and SEO on a scale of 0-100. blakecrosley.com scores 100 in all four categories as of February 2026. Results are publicly verifiable and not self-reported.
12. Potvin, R. and Levenberg, J. (2016). “Why Google Stores Billions of Lines of Code in a Single Repository.” Communications of the ACM, 59(7), 78-87. doi.org/10.1145/2854146. Google’s monorepo approach was partly motivated by the organizational cost of managing dependency versions across thousands of projects — the same force that produces node_modules/ bloat at the individual project level. The paper documents how dependency management consumes engineering time proportional to the number of transitive dependencies, not direct dependencies.
13. Vercel’s Next.js performance documentation acknowledges that default Lighthouse scores vary by application complexity and recommends specific optimizations (image optimization, font loading, code splitting configuration) to achieve scores above 90. See nextjs.org/docs/app/building-your-application/optimizing. The 70-90 range reflects a fresh create-next-app project with default settings before applying these optimizations — consistent with community benchmarks reported on the Next.js GitHub discussions.
14. HTMX versioning policy and backward compatibility commitment are documented at htmx.org/migration-guide-htmx-1/. The 1.x to 2.x migration guide demonstrates the project’s approach: breaking changes are limited to major versions, clearly documented, and accompanied by migration tooling. Carson Gross has stated the backward compatibility principle in multiple conference talks and in Hypermedia Systems (2023).