It’s useful when you’re reading social media, articles, or papers to ask yourself questions about the writer.
Now, most of the questions are standard ones—lines of inquiry you might find recommended in books such as How to Read a Book—but there are a couple of additional questions that become especially useful in a world dominated by social media:
- What does the author want to believe?
- What’s the game?
What does the author want to believe?
The first question is the more complex one. What the author wants the reader to believe is usually straightforward: it’s what they’re trying to argue in what you’re reading. But what the author wants to believe themselves is often the more important question in an age of misinformation.
For example, somebody who works for Facebook is going to want to believe a few things:
- That it’s possible to work for Facebook and be a good person.
- That the work and research they’re doing is worthwhile.
- That the decisions they’ve made at work are the correct ones.
These are all beliefs that are necessary for them to protect their emotional wellbeing, which means that everything they write is going to add to the ramparts of their psychological self-defence. Very few authors are going to intentionally make themselves more vulnerable and fragile with their writing.
The bias this creates is a little different from the unconscious biases we all have: these biases are a pre-emptive defence mechanism, and people can get quite aggressive if they’re challenged. This can also lead people to rationalise themselves into strange intellectual corners.
- “Crypto-coins are an interesting technology that could be used for social good!”
- “Oh, no. There’s a lot of fraud going on.”
- “But I’m a good person who doesn’t support fraud, so there must be something else going on.”
- “Well, of course, if the finance industry and regulators are biased against crypto, then they’re going to drive legitimate users away and all you’re left with is fraud.”
- “Crypto is being destroyed by a conspiracy of global elites!”
Bonkers opinions, arrived at step-by-step out of an intense drive to protect their conception of themselves.
You can see this playing out in AI already. It starts with “AI is going to lead to AGI, and that’s going to lead to a lot of good!”, and it ends with bonkers opinions spouted in public.
What’s the game?
The other question is to ask yourself what the rules are for the game that the writer is playing.
There’s always a game. Some of them are constructive.
You have people, such as web developers, writing about their field in order to connect with others in their field. Writing or creating media usually isn’t a part of their job description. The game is a mechanism for being a part of something bigger. The rewards in the game are social, not monetary.
Tangential to this are the people who play the social media drama game. You always have a few types who are doing it for the attention, and the kind of attention doesn’t really matter that much, just the volume.
Some of them are less constructive, such as the grifters who don’t care about the field but are just in it to earn money. These are the types that will go from gig economy hustle, to crypto, to AI in a heartbeat. The actual activity is irrelevant to their game. What matters to them are the rewards.
Then you have the types like me, who are toeing the fine line of making media that’s hopefully useful to a field without separating ourselves from the field proper like the hustlers and grifters do. Educators, trainers, journalists, and writers all contribute to a given field by doing the practical research and documentation work that is very, very hard to pull off as a hobby or sideline. We try to use sales tactics that are proven to work while avoiding the toxic methods employed by the grifters.
It can get tricky, but we’re helped by the fact that grifters are usually over-the-top enthusiastic about the grift they’re pulling, and that they tend to be a bit disconnected from the work being done in the fields themselves.
These two questions form a useful rubric that helps you assess much of what you read.
It helps you contextualise the writing and understand where it’s coming from.
More importantly, it helps you assess distance. As in, “how distant is this from my own world view and practice?”
Because text that’s distant, that might as well belong to another world, is often neither wrong nor right for your context.
And “not even wrong” is usually something you can just ignore.
What’s my game?
A few weeks ago I was interviewed for an article. Most of it was to give the journalist some technical context for much of today’s AI discourse. I like to think I did a decent job of providing that context, but I refer you to what I wrote above about what the author would like to believe.
I ended up getting quoted in the final piece, much to my parents’ joy, but I found the experience overall a bit disconcerting. The interview itself gave me odd vibes at the time.
The piece itself is well-made and does a very good job, considering how firmly mainstream US media has bought into the AI bubble.
It threads the needle of letting the AI industry’s executives say what they want to say without overtly implying that they have an ulterior motive, while still framing what they say as something that probably has one.
But…
Given the demographics of those who have done the most work in highlighting the many issues with language and diffusion models, I have to wonder what sort of criteria or biases led to me—a cis, white, middle-aged dude—being cited as a critic of the technology over others who have done much, much more work on it than I have.
The obvious answer is an uncomfortable one, but one I find impossible to dismiss.
It leaves me questioning what I’m doing when it comes to covering AI and machine learning.
I have no interest in becoming a professional “AI critic”. Others do that job much better than I can. I did my research on generative AI because I work in software development and wanted to get a clear idea of its impact on businesses and software development. That research led to the book and to last week’s essay on the impact I think language models will have on software development.
My beat is software development—coding and managing—not AI. It just so happens that language models are seen as the next big thing that will transform my industry, so I have to cover them if I am to do my job.
I think that the only way to square this circle is to take a firm line on not participating in AI discourse in general media. I need the freedom to be a bit more casual—a bit exploratory—in my own spaces such as my newsletter or blog, but elsewhere the line has to be that my beat is software development and how it’s affected by language models, not AI in general.
Of course, I’d make an exception for local Icelandic media, but that’s mostly out of self-preservation. I don’t think it would do us here in Iceland any good to fall for yet another bubble.
But, other than that, I need to be clear in my own mind what game I’m playing: I’m a software developer researching and writing about issues with coding and managing software development projects. Currently, many of those issues have to do with the AI bubble, but not all of them.
That’s me. That’s my game.
I’m not a pundit, full-time AI critic, AI researcher, or activist. That’s not my game.
It might become a fine line to toe, at times, as the AI bubble escalates, but I’m hoping that being open and transparent about it—about my worries and concerns—will make the task easier in the long run.
And, if I change my mind, I’ll let you know.