Out of the Software Crisis

Fear Of Missing Out is lethal when somebody invents a footgun

By Baldur Bjarnason

One of the messages of my book on the business risks of AI, The Intelligence Illusion, is that the most straightforward way of mitigating most of the risks is to wait two or three years. By that time other people will have suffered the consequences of adopting the new tech, if any, and you can adopt the parts that are known to be safe without risk.

This is hard when stock prices are largely driven by trends and pop culture and even harder when your own incentives are almost entirely driven by said trends. Often the decision is made above your pay grade and there’s little you can do to opt out short of braving the sketchiest job market for developers in years.

Opting out seems impossible and yet putting off adoption is the only reliable way of discovering the true impact of the technology.

Research is largely bought and paid for by vendors. The studies that aren't, and that attempt to critically examine how these models work, are hampered by poor access and outright hostility from the surrounding industry.

Even if we had a wealth of impartial research, the coverage very definitely isn't impartial, at least in tech.

The sentiment towards generative models in non-tech media outlets has, in my opinion, shifted decidedly towards the negative, but that rarely affects decision-making in software development.

The problem is that software development is a complex system where it’s often hard to trace the effects of a change directly to its consequences. That’s why you need time.

Watch the companies that have gone all-in on generative models in software development.

That last part is vital, because sometimes the groundbreaking and original new technology is just a bigger footgun with very little actual benefit. One of my worries with generative models in software development is that extensive cognitive automation is an extremely bad fit for programming, as it makes us think less about what we're doing, which makes us more likely to make careless mistakes.

We can drop Microsoft from this list. They have a long history of incompetent software development and poor security. Pick any year, any month of that year, and you will find a news story telling you about a major security issue in Microsoft code. They won't be a good benchmark. Maybe GitHub? They are independent enough to have a slightly different security culture.

We don't know if Google actually lets their own developers use generative models on core products. Until that's confirmed, they're out.

A company like Sourcegraph should be on the list. A while back they hired Steve Yegge, who is on the record as being exuberantly pro-"AI" in software development, and their product roadmap gives the impression of a company that's all-in.

One incident tells us nothing, but a series of them over the next few years would. If they and other “all-in” companies turn things around and then improve their productivity, that tells us something as well.

But only time will conclusively tell us whether this tech is a groundbreaking new enhancement or a revolutionary own-foot-cannon.


I shouldn’t need to say this, but apparently I do: all existing generative model products are unethical.

I’m not talking about legality here. A thing can be legal but unethical. These models are built on people’s creative work, without their permission, and then integrated into products that directly threaten their livelihoods. Their work becomes a facet of the model’s output without attribution.

This is straightforwardly unethical, irrespective of the legality or how the models work internally. You are using people's work to destroy their livelihoods. People should always come before software, and models are software, not people.

Once you include the many issues these models have with biases, privacy, and memorisation, it becomes unambiguously clear that using this tech is harming people.

An ethical generative model product is possible, in theory, but none of those available today qualify. We have a few pseudo-open models that would count as ethical for research and study, but none that would be acceptable in a commercial product for widespread use.

All of my advice for mitigating the business risks of these products is for those who cannot opt out of using them. It’s for those of you who feel like you have to and would like to minimise the harm.

But if you’re voluntarily using these generative models without any outside force pressuring you to do so?

The simplest way to minimise the harm these models are doing and the risks they present is to just stop.

I would be very grateful.
