Moon - representing a low gravity environment
Image by WikiImages from Pixabay

Software Gravity

by Jakub Dzikowski

March 17, 2026

AI has made software feel weightless. You can ask an agent to build an application, modify it, or rewrite it entirely in minutes. It feels disposable. But that illusion disappears the moment the system gains gravity.


Years ago there was a great article about software gravity by Brian Knapp (web archive), where software gravity was defined in terms of complexity. Gravity increases over time as the software grows more complex. Typically you start from a need to automate something, with a "system" built in raw notes or spreadsheets. Then you create a simple script, then a small app, then maybe a framework or a set of services. Each further step carries more weight.

The author called it the "Katamari Damacy Effect" -- the satisfying, exponential gameplay loop of starting small and rolling up increasingly larger objects into a massive, chaotic ball.

Gravity is actually a good sign. When achieved organically, it means your software is useful. Then, as the mass grows, it attracts even more mass. Large datasets pull services toward them. Systems start to accumulate users and integrations. With enough gravity, your software becomes a platform. It pulls other services into its orbit. External systems start to rely on it and become locked in.

Knapp framed gravity mostly in terms of complexity. But today software gravity comes from three independent sources: data, complexity, and usage.

Data

Even if your system is a single, simple REST API service, it may have gravity related to the data. Real production systems gather more and more data. That data is hard to migrate and process due to its size.

Recently I've been reworking a materialized view refresh on a table that holds almost 300GB of data. While moving the refresh model from full materialized view refreshes to incremental updates, I had to create three SQL indexes. Creating a single one took 3 hours, so that was 9 hours total on a production system before the change could even be deployed.

Eventually the refresh time was reduced from 25 minutes to under 300 ms, but the change required careful, surgical work on a live production system.
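The idea behind that rework can be sketched in a few lines. This is a hypothetical, simplified illustration (the types and function names are invented, not from the actual system): a full refresh recomputes an aggregate by scanning every row, while an incremental update applies only the delta for the rows that changed.

```typescript
// Hypothetical sketch of full refresh vs. incremental update.
type Sale = { productId: string; amount: number };

// Full refresh: recompute all totals from scratch on every change, O(n).
function fullRefresh(sales: Sale[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const s of sales) {
    totals.set(s.productId, (totals.get(s.productId) ?? 0) + s.amount);
  }
  return totals;
}

// Incremental update: apply only the new row to the existing totals, O(1).
function applyDelta(totals: Map<string, number>, s: Sale): void {
  totals.set(s.productId, (totals.get(s.productId) ?? 0) + s.amount);
}

const totals = fullRefresh([
  { productId: "a", amount: 10 },
  { productId: "b", amount: 5 },
]);
applyDelta(totals, { productId: "a", amount: 3 });
console.log(totals.get("a")); // 13
```

On a 300GB table, the difference between "scan everything" and "touch only what changed" is exactly the difference between a 25-minute refresh and a sub-second one.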

Once the data becomes large, changes are hard. You cannot simply migrate the schema or reload data from scratch. It takes too much time to be feasible. The data makes the system heavy.

Complexity

Complexity grows gradually and is often related to size. As a system grows from a spreadsheet, script, or vibe-coded app, it starts to accumulate technical debt, internal inconsistencies, and undocumented, untested behavior. Hidden dependencies accumulate, and small changes trigger unexpected effects.

Eventually it exceeds human cognitive capabilities and agents' context windows. No one fully understands it anymore, and no one can build an accurate mental model of the system as a whole. The system starts to behave like a living organism rather than a tool.

When a system reaches this level of complexity, fixing a small bug may break multiple seemingly unrelated features. A simple refactoring turns out to impact the structure of half of the system. Changes become slow and painful, and they require safety nets like a staging environment or good test coverage. They become heavy.

Usage

When people are used to achieving something with the system in a certain way, you cannot just tell them to do it another way. They have their own workflows and habits that are hard to change.

Changing the system introduces friction. People no longer know how to use the application, their workflows break, they start to complain or leave. You need to convince or teach them to act differently and at the same time ensure they won't leave.

It's no longer disposable software. When the system is widely used, change needs a process, or even a strategy.

If the system serves an API that is used by other applications, or if it's a software library, you cannot simply break API contracts. People (or agents) will stop using it. You need to ensure a decent level of backward compatibility, and follow the principle of least surprise.
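One common way to honor an existing contract is to evolve it additively. The sketch below is a hypothetical example (the types and field names are invented for illustration): a new field is introduced alongside the old one instead of renaming it, so old clients keep working while new clients can adopt the new shape.

```typescript
// Hypothetical example of an additive, backward-compatible API change.
type UserV1 = { name: string };              // the original contract
type UserV2 = UserV1 & { fullName: string }; // fields added, none removed

// New responses still carry the old `name` field for existing clients.
function toUserResponse(first: string, last: string): UserV2 {
  const fullName = `${first} ${last}`;
  return { name: fullName, fullName };
}

const res = toUserResponse("Ada", "Lovelace");
console.log(res.name);     // old clients keep reading this
console.log(res.fullName); // new clients read this instead
```

Removing `name` would be the breaking change; keeping it deprecated but present is the cost of usage gravity.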

Again, the system is not easy to change anymore. Change becomes a non-trivial process. Change is heavy.

Disposable systems

AI agents get better and better at managing the complexity dimension. It's like building stronger rockets: you can move faster and carry more cargo. It doesn't reduce weight -- it just gives you more power.

And that's the only thing AI does. Data and usage gravity remain. You cannot overcome the gravity of the data, because processing it runs into hard technical limits. And you cannot overcome the gravity of usage, because changing usage patterns or enforcing backward compatibility is never trivial.

The moment your vibe-coded prototype gains users and data, gravity returns. If it still feels disposable, you're not there yet.

About the author

I'm Jakub Dzikowski and I work on Node.js backend platforms and developer tooling. I'm an open source maintainer and mentor.

Recently I've created Jaiph: a scripting DSL and runtime to orchestrate AI agent workflows.



Changelog

  • March 17, 2026 -- Initial publication
