According to Herodotus, the Ancient Greek tyrant Histiaeus once used an innovative method to send a secret message: he shaved the head of his most trusted slave, had his order for a revolt tattooed on the man’s scalp, then waited for the slave’s hair to grow back before sending him off. The story soured for Histiaeus – he was beheaded by a Persian general – but it bequeathed the world one of the first known examples of an intriguing art form: steganography, the writing of hidden messages.
The word steganography comes from the Greek steganos, “covered”, and graphein, “to write”, and it specifically refers to the sending of messages whose very existence is known only to the sender and the recipient. (As opposed to cryptography, which encrypts messages and renders them unreadable.) Despite losing his head, Histiaeus kept his reputation: the man he sought to overthrow did not believe in his culpability, and gave his severed head an honourable burial.
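The distinction is easy to see in a toy example. Below is a sketch of a “null cipher”, one of the simplest steganographic tricks: the secret rides in the first letter of each word of an innocent-looking sentence, so an observer sees only ordinary text. (The cover sentence and function names here are invented for illustration; this is not any particular historical scheme.)

```python
def reveal(cover_text: str) -> str:
    """Recover a hidden message from the first letter of each word.

    Unlike decryption, nothing here looks scrambled: the cover text
    reads as plain language, and only someone who knows the convention
    thinks to extract anything from it at all.
    """
    return "".join(word[0] for word in cover_text.split())


# An apparently innocuous note...
cover = "send old socks"
# ...that quietly spells out a distress signal.
print(reveal(cover))  # prints "sos"
```

The point of the example is the contrast drawn above: a cryptogram announces that a secret exists, whereas a steganographic message hides even that fact.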
Some two millennia on, a new kind of undercover writing is rapidly garnering influence on the internet. It’s called “social steganography”, a phrase coined by academic Danah Boyd, which refers to the use of shared social conventions as a kind of code: to hide meanings in plain sight, through the use of references that only particular people can understand.
Take something as simple as a Facebook or Twitter update. If somebody I dislike suffers an unfortunate accident and I explicitly celebrate the event on social media, I’m leaving a record that might be used against me in future. If, however, I post a smiley face or a lyric from a triumphant song, only my social inner circle is likely to know what I’m celebrating. The true meaning is unspoken and untyped. As far as the world at large is concerned, the only message on record is an innocuous few characters that could refer to anything.
It’s not just me who’s noticed this kind of online communication. In May, the Pew Research Center released a report examining teens, social media and privacy, which stated that 58% of American teens use similar techniques to cloak their social media activity, “sharing inside jokes and other coded messages that only certain friends will understand.”
Commenting on the report, Boyd writes that it describes a telling contemporary phenomenon whereby many so-called digital natives have “given up on controlling access to content… Instead, what they’ve been doing is focusing on controlling access to meaning.” This is a shift with implications far beyond the social media activities of teenagers.
Amid the ongoing scandals over mass covert surveillance, the politics of meaning – of what can or cannot be read between the lines we type – are becoming increasingly urgent. In essence, surveillance algorithms are meaning-generating engines. They take an almost unimaginable quantity of data and convert it into an index of suspicion: the likelihood that any online activity or actor is dangerous or undesirable.
While such a strategy undoubtedly has its successes, it also has its pitfalls. The argument that innocent people have nothing to fear rings hollow. Given enough data, evidence can be selected to support almost any suspicion, and almost anyone can be tainted by association or coincidence. Consider that, today, you are a criminal if you are homosexual in Uganda, if you insult the monarch in Thailand, or if you say anything that Kim Jong-un’s regime disapproves of in North Korea. What else might cease to be “innocent” under future data-hungry governments?