A year of nothing new
Picture from En L'An 2000 illustrating the school of the future.
Our first child was born in early autumn 2022, almost perfectly timed with the latest energy-driven stagflation. Due to a combination of economic factors (in particular, skyrocketing mortgage interest rates and food prices) and the whole new-parent situation, we have very little money to spend, and even less time to spare. For at least the coming year or so, we will mostly have to make do with what we have.
This seems like a good time for some Nietzschean amor fati. To rise to meet the conditions, I will declare that I willed it so, that I made it so. This year, for my New Year's resolution, I resolve to Not Buy Anything.
Of Flying Cars and Moore's Law
There is a second reason behind this decision. Like David Graeber in his essay Of Flying Cars and the Declining Rate of Profit, I have felt for a while that there is something wrong with technology.
To take an example, I am writing this on a late 2013 MacBook Pro. This machine is wonderful despite being ten years old; it is still fast enough for me by a good margin. At most, I'd consider replacing the battery (it's still running on the original one). The only reason I would even consider upgrading is that Apple has stopped rolling out OS updates for it, which also blocks me from developing apps for newer versions of iOS or macOS, including some quality-of-life improvements to SwiftUI that are necessary on macOS. I fail to see how this is not a blatant case of planned obsolescence, but for now, that is beside the point.
Before the new M1 Macs, every subsequent Mac model except the virtually identical 2015 one was worse. The keyboards were more error-prone, ports disappeared for no good reason, and there were no marked performance improvements. In particular, I don't need more performance (though I sometimes do need programs and webpages to be less inefficient). The M1 Macs are also worse: they lack the MagSafe connector, a lifesaver in a cramped household with two adults, one baby, and one dog, and they are unable to drive two screens.
I grew up as a bottom-feeder on discarded late-1990s technology, when a two-year-old computer was junk. Using a ten-year-old machine would have been inconceivable. This was when clock speeds were cranked higher every year and storage went from holding megabytes to gigabytes, to hundreds of gigabytes, and finally to terabytes. It seems to me that around the time Moore's law stopped giving us faster computers and started giving us computers with more cores, other things started to change too.
The concept of a Pareto front might help explain this. Whenever we optimise something as complex as a computer, there are trade-offs involved. There usually isn't just one optimum but many Pareto optima: equally good configurations with different trade-offs, none of which is a total improvement over any other in all aspects. Your computer can, for example, be optimal with respect to power, portability, or energy efficiency, depending on which variable you prioritise. For a given state of technology, this gives a frontier of equally optimal outcomes: the current Pareto frontier. The kicker is that as users of technology, we may value some properties over others, which means that our Pareto frontier may not be the same as the industrial one.
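To make the idea concrete, here is a minimal sketch in Swift (the language I'd be writing anyway) of Pareto dominance and a Pareto front. The laptop names, axes, and scores are entirely made up for illustration; the only assumption is that higher is better on every axis.

```swift
// A hypothetical laptop profile; higher is better on every axis.
struct Laptop {
    let name: String
    let scores: [Double]  // e.g. [performance, battery, ports, portability]
}

// `a` dominates `b` if it is at least as good on every axis
// and strictly better on at least one.
func dominates(_ a: Laptop, _ b: Laptop) -> Bool {
    zip(a.scores, b.scores).allSatisfy { $0 >= $1 }
        && zip(a.scores, b.scores).contains { $0 > $1 }
}

// The Pareto front: every configuration not dominated by any other.
func paretoFront(_ laptops: [Laptop]) -> [Laptop] {
    laptops.filter { candidate in
        !laptops.contains { other in
            other.name != candidate.name && dominates(other, candidate)
        }
    }
}

// Made-up scores, purely for illustration:
let models = [
    Laptop(name: "2013 MBP", scores: [5, 6, 9, 6]),
    Laptop(name: "2016 MBP", scores: [6, 6, 3, 7]),
    Laptop(name: "M1 Air",   scores: [9, 9, 4, 9]),
]
print(paretoFront(models).map(\.name))
// ["2013 MBP", "M1 Air"] — neither beats the other on every axis.
```

Note that both the old machine and the new one survive the filter: neither dominates the other, so which is "better" depends entirely on how you weight the axes. That is the whole point.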
As technology improved, trade-offs between generations of hardware became less and less severe, with each new generation being essentially a Pareto improvement over the previous one. Ultraportable laptops got acceptable screens, SSDs became large enough to replace most mechanical drives, and CPUs became fast and frugal enough to give even the most brütal gaming laptop an acceptable battery life (as long as the GPU was off). The Pareto frontier at the current level of technology grew to encompass most needs for most people. Crucially, this meant that for most users each new model was an improvement in something they cared about, without any sacrifices, simply because everything got better.
At about the same time, we started hitting diminishing returns on most possible improvements. What the improvement budget now affords us is slightly different trade-offs, and not equally distributed ones: some fields still show easy gains, which invites new trade-offs. For most people, this makes most upgrades a lateral move at best, and often a partial downgrade. I lose the trip-free charging connector, my multiple screens, or the regular USB ports I still need, and I gain burst CPU speed I don't need, or battery life I only sometimes appreciate. To sell units, manufacturers must rely on increasingly unmotivated, software-driven planned obsolescence, or on social manipulation to manufacture demand.
In other words, the downturn in productivity that Graeber observes holds for computers too. I was promised advanced cybernetic implants, quantum computers, and general AI in four years, and they can't even make the Google Glass we were promised in 2013 a reality ten years after its announcement. Even the simulation technology that Graeber argues has replaced real technological progress is a disappointment. The current advances in “AI” show no actual improvement in reasoning. Rather, large language models, the stochastic parrots of the moment, follow Graeber's thesis perfectly. They are not so much about intelligence as about a dual sleight of hand: hide the labour, as with the Mechanical Turk, and make the machine appear intelligent without actually being so.
The true tragedy of our time is not that technologists insist on building the dystopian nightmare of Ready Player One so much as the fact that they can't even get that right. Imagine a broken torment nexus, stomping on a human face forever.
At the same time, products get more expensive and/or more enshittified as companies rush to extract more profits in the new nonzero-interest-rate economy. All in all, it's a great time to lie flat and feed from the bottom.