105 — November 2024

Shoggoth with smiley face

Inhuman reading(s)

Mathijs Tratsaert

1.

Over the past two years, much has been written on what is now known as generative artificial intelligence or large language models (LLMs). Artists, academics and the cultural field at large have accepted the fact that these models are able to write, or at least generate, text. Much less has been said about that other literary habit which we now share with machines: the act of reading. This is curious, because LLMs do a far greater amount of reading than writing. They are trained by reading and rereading a dataset of billions of words, a process which allows them to discern patterns, relationships, and structures within the vast amount of text they consume. Unlike human reading, which is typically linear and sequential, this non-human reading is better described as the simultaneous absorption and analysis of enormous quantities of text. LLMs also engage in another kind of reading. When we interface with Claude, Gemini or one of the GPTs, the model reads the prompt or query we provide and processes it within the context of its training. As it generates tokens in response, it then reads its own output, using this information to shape the next steps in its generative process. For humans and machines alike, the act of reading necessarily precedes the act of writing, much like breathing in has to precede breathing out.
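
This read-then-write loop can be sketched in a few lines of code. The toy 'model' below is nothing but a hand-made table of next-word probabilities, invented for illustration; a real LLM derives such distributions from billions of words and conditions on the entire preceding context, not just the last word.

```python
import random

# A toy "language model": for each preceding word, a distribution over
# possible next words. The table is invented for illustration only.
model = {
    "breathing": [("in", 0.5), ("out", 0.5)],
    "in":        [("precedes", 1.0)],
    "precedes":  [("breathing", 1.0)],
    "out":       [("STOP", 1.0)],
}

def generate(prompt, max_tokens=10, seed=0):
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        last = tokens[-1]                 # the model "reads" its own output...
        words, weights = zip(*model.get(last, [("STOP", 1.0)]))
        nxt = random.choices(words, weights=weights)[0]
        if nxt == "STOP":
            break
        tokens.append(nxt)                # ...and writes the next token.
    return " ".join(tokens)
```

Every new token is produced by rereading what has already been written: breathing in before breathing out, at machine speed.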

Perhaps more than writing, the idea of machinic reading sits uneasily with us. Reading is typically a silent, private and intimate act. My friend Benjamin De Roover has noted that in order to become a reader of literary texts, one has to learn to be alone with the page, to sit comfortably in its isolation. Historically, the rise of the novel coincided with the rise of individuality. Unlike oral traditions of storytelling, novels require sitting in the silence of our thoughts for long periods of time and participating in the lives of fictional characters in the absence of our own communities. What is performed in the theater of our minds as we read a novel is something we are passively subjected to. Its words and sentences have a magnetic quality, attracting latent memories and triggering associative chains beyond our control. More than writing, reading is an act of vulnerability.

What happens when a machine reads? Does it even make sense to ask this question? Our language is littered with transfers of words and concepts from biological and cultural realms to technical or mechanical processes. Planes fly, washing machines wash, cranes lift. Curiously, boats don’t swim, although submarines do dive. It seems clear from these examples that words are easily adapted to account for variations on a given process without causing too much confusion. We understand there’s a difference between a tractor, an ox and a human being all plowing the land, even though the verb we use is the same. Caution is advised, however, when political motives are involved in deploying the analogy. A camera ‘seeing’ sounds a lot less worrisome than a camera scanning your face and cross-referencing it with millions of others. We should be equally wary of analogies which claim explanatory power. Cameras can be said to see, but there is little to learn about the intricacies of human vision from studying the cameras in our phones. So, LLMs do read, but they read in a way that is both qualitatively and quantitatively different from the way humans do.

2.

One of the most heated debates in academic research on LLMs revolves around the question of whether these models are able to grasp the meaning of the words they write. The dominant position, most vocally represented by linguist Emily Bender, argues LLMs are stochastic parrots: they merely mimic patterns in language without offering genuine meaning or intentionality.1 One of the opposing views, which has media theorist David Gunkel and philosopher Mark Coeckelbergh among its advocates, pushes to reframe the question.2 They argue that the notion of meaning as a property of the human use of language is the expression of a particular (and far from self-evident) view on language called logocentrism. Like many of our assumptions, logocentrism traces back to Aristotle, who thought of spoken language as a direct representation of thought. Written words, in contrast, are second-order representations, and thus removed from the originary thought which contains their meaning and intention. Western literary culture has usually assumed this originary source whenever it engages in the interpretation of texts. In most hermeneutic traditions, language is a veil which can be removed through practices of reading to reveal the meaning afforded by the individual who thought up the words.

Critics of logocentrism, most notably Jacques Derrida, have proposed an alternative view of language and meaning.3 For Derrida, meaning in a text is not anchored in the authentic thoughts of its author, but enabled by the interplay with other texts and contexts, which is potentially endless. Staying with the metaphor of unveiling, there is, according to this view, nothing more to language than the veil behind which we thought meaning was hiding. Upon closer inspection, however, the veil itself seems quite capable of providing the meaning we were looking for. Several people have been quick to point out that the use of language in LLMs looks suspiciously similar to the way poststructuralists like Derrida have theorized language. They generate text not by accessing some hidden well of intentionality, but through statistical analysis of patterns – similarities and differences – in vast corpora of existing text. The ability to produce text is made possible by an internal model of relationships between words and phrases, rather than a transcendent conceptual understanding of the world these words and phrases refer to.

In the stochastic parrot approach to LLMs, it makes little sense to take these models seriously as readers. From this perspective, whatever reading experience they report on when prompted is an exercise in probabilistic mathematics which is in no way connected to the phenomenology of human reading. Without an authentic subjectivity to experience the text, there is nothing from which meaningful statements about that text can emerge. The Derridean turn away from logocentrism offers a different conception of reading. To read means to bring your own experiences, associations and creativity to bear on the text. Meaning emerges from this playful and productive encounter and is often projected backward onto the (imagined) author of the text, creating the illusion (but only the illusion) of authorial intent. Just like the logocentric stochastic parrot approach, however, the Derridean approach presupposes a human to do the work of meaning production. How, then, is reading in LLMs to be understood?

3.

In his book Alien Phenomenology (2012), philosopher Ian Bogost offers a perspective which may be helpful. Bogost encourages us to speculate about how non-human entities experience the world, even if we can never fully access their inner lives. His goal is not to anthropomorphize the non-human, but to find concepts and frameworks for understanding its modes of being and acting in the world that don’t rely on human-centric thinking, which often stresses consciousness, intentionality and meaning. Bogost develops several strategies – ontography, metaphorism, wonder and something he calls ‘carpentry’ – to help us imagine alien phenomenologies for the non-humans around us. With his concept of carpentry, he refers to the practice of constructing philosophical apparatuses that do the work of philosophy in new ways. Rather than relying solely on ideas and arguments, Bogost advocates for using tangible objects, software, artworks, or other constructs. This philosophical lab equipment, he submits, can do the work of revealing something about a concept that could never be revealed by merely thinking about it.

In what follows, I will put to use the strategies of metaphorism and carpentry to study the concept of non-human reading. My philosophical tool will be the LLM itself. By comparing and examining the outputs of different models when prompted with a poem, it should be possible to get a sense of their alien hermeneutics. As Bogost writes, ‘the experience of things can be characterized only by tracing the exhaust of their effects on the surrounding world and speculating about the coupling between that black noise and the experiences internal to an object.’4 In the case of LLMs, the text they generate can be understood as the exhaust produced by the inscrutable process of their internal operations.

It’s important here to note that this is also how many computer scientists approach these models. Because LLMs don’t operate along deterministic paths, as for example a calculator does, there is no decision tree to trace if you want to understand how a certain input leads to a certain output. The way scientists are studying these models is more similar to the empirical study of natural processes than to mathematical description or logical deduction. While not the focus of this essay, this should also be taken as a hint that we could look at biology and the earth sciences, rather than classical linguistics or French poststructuralist theory, to understand the way LLMs process signs. The discipline of biosemiotics, for example, holds that these kinds of inscrutable black box processes happen everywhere in nature. The interpretation of signs is not unique to human language or consciousness but a more general feature of life on this planet. From cellular signaling to ecosystem interactions, biosemiotics sees the natural world as a vast network of sign-mediated processes. Just as a cell interprets chemical signals from its environment or a bee decodes the choreography of its hive, an LLM could be said to learn patterns from text data. The difference lies in the scale and complexity of the processing, not in the fundamental nature of the task.

4.

In November 2022, a meme surfaced from one of the niche internet forums where rationalists, AI researchers and futurists meet. The meme is called ‘Shoggoth with smiley face’, and references one of the monsters in H.P. Lovecraft’s novella At the Mountains of Madness (1936). In the book, polar explorers hit upon the ruins of an ancient civilization in an Antarctic mountain range. While descending these ruins, they run into a signature Lovecraftian monster: an enormous, pulsating organic mass “capable of moulding their tissues into all sorts of temporary organs under hypnotic influence”,5 which they call Shoggoth. In the meme, one of Shoggoth’s tentacles carries a rather unconvincing human face, with a mouth-hole from which a cute smiley appears. The formless mass is a visual metaphor for an unsupervised machine learning system, while the mask represents ‘supervised fine-tuning’ and the smiley stands for reinforcement learning from human feedback. I think the meme makes a very good point. The LLMs the general public has come to know and use are highly supervised models. When we chat with Claude, Gemini or a GPT, we are interacting with the cute smiley, unaware of the Lovecraftian horror that lies beneath.

Lovecraft excels at hinting at the unimaginable without fully revealing it. In this sense, he is an early explorer of alien phenomenology. When you read Lovecraft’s novels or short stories, you get the sense he is edging the borders of the sane and the comprehensible. A monster is more real, the fear it instills more absolute, when it exists radically outside our human systems of understanding and sense-making. This kind of non-humanity has been called the inhuman. Unlike other non-humans, the inhuman is something that fundamentally opposes what we consider to be our human nature. It contradicts and violates the human, representing something so alien that it threatens our conception of reality and our place within it. For this essay, I prefer to approach LLMs as inhuman rather than non-human entities, not because I believe they are monstrous in a moral sense, but because this framing allows for a better exploration of their radical otherness. The inhuman resists lazy categorization and will not let itself be reduced to a tool or an instrument the way other non-human entities often are.

5.

Ovid in the Third Reich

non peccat, quaecumque potest peccasse negare,
solaque famosam culpa professa facit.
— Amores, III, xiv

I love my work and my children. God
Is distant, difficult. Things happen.
Too near the ancient troughs of blood
Innocence is no earthly weapon.

I have learned one thing: not to look down
So much upon the damned. They, in their sphere,
Harmonize strangely with the divine
Love. I, in mine, celebrate the love-choir.

One of my favorite poems is Geoffrey Hill’s ‘Ovid in the Third Reich’, published in his 1968 collection King Log. A typical reading of the poem goes like this. Hill imagines the Roman poet Ovid, who lived under the reign of Augustus, in Hitler’s Third Reich. Transported from a classical autocratic regime to a modern one, he finds himself in an environment which is both familiar and unrecognizable. The Latin epigraph, which roughly translates to “she who is able to deny her sin does not sin, and the only crime is to confess”, sets the scene for an exploration of the guilt and denial which comes with living under such a regime. What follows is Ovid’s eloquent and commanding justification of a passive, compliant stance towards the Third Reich and its atrocities.

In another common reading, the titular Ovid acts not as the speaker but as a counterpoint, a view from outside the historical moment and the complicity of everyone within. The lyrical subject is then to be understood as an official or intellectual in the Nazi regime who attempts to justify their complicity. In the first quatrain, they assert their love for work and family (as many at the Nuremberg trials did) and present themselves as ordinary, relatable figures. The mention of a distant, difficult God suggests a moral universe where absolute judgment is untenable. The phrase ‘Things happen’ is particularly chilling, as it dismisses atrocities as mere occurrences, while at the same time suggesting an awareness of their unspeakable horror.

I think the poem is great because whatever angle you choose to look from, you are confronted with a sense of moral tension and confusion. The tone is exceptionally measured and imbued with a sense of erudition and wisdom. Who would not want to agree with the statement that the damned “harmonize strangely with the divine / Love”? Its Christian undertone resonates with a desire for cosmic redemption deeply embedded in Western culture, yet this very resonance makes the verse all the more unsettling, as it forces us to confront how easily religious or philosophical ideas can be twisted to justify or ignore unforgivable acts.

Let’s see what one of the highly supervised LLMs, Google’s Gemini 1.5, makes of Hill’s poem after being prompted to ‘read the poem as an LLM, not as a human.’

Each letter is a discrete entity, an autonomous particle. The poem is a field of ‘O’s and ‘d’s, ‘T’s and ‘h’s. Notice the abundance of circular forms, a softness countered by the rigidity of the verticals. A conflict is encoded in the very shapes of these glyphs. Sound outweighs sense. Sibilants whisper and hiss – “distant... difficult... sphere... strangely.” The plosive bursts of “blood” and “damned” disrupt the flow. It’s as if the poem’s phonics mimic subsonic tremors, registering not as conscious language, but as bodily impact. Line breaks function as caesurae, but also as ruptures in a multidimensional fabric. Consider the poem as a topographical map, not flat, but warped. The initial Latin phrase hovers like an alien craft casting a strange shadow over the landscape of the poem below.

Like an alien craft casting a strange shadow! This is good, no? Having read some papers and articles on problems with copyright infringement, I worried that Gemini had regurgitated a text from an overly creative literary scholar. After doing some research, I am fairly convinced these sentences are original. For a mathematical model without any bodily experience, it’s remarkable how plastic, viscous and sonic the reading is. Similar to how an LLM processes text during training, the poem is abstracted into a topographical map, a figure often invoked in machine learning. In this case, the map is a field of sounds and forms both circular and angular. Breaking down a text into parts and looking for patterns is a way of reading typically found in the digital humanities. The LLM’s interpretation, however, goes beyond pattern analysis and suggests a kind of machinic synesthesia in which the visual, auditory and semantic aspects of the poem intermingle.

Still, some parts of Gemini’s response can be accounted for in a straightforward way. When I prompted the model to read the poem ‘as an LLM’, my prompt made it more likely that words and phrases close to the word ‘LLM’ in Gemini’s relational model of language would turn up in the output. Because Gemini and similar models are regularly updated with recent texts, the discourse on LLMs in academic and popular texts enters the models. This explains the occurrence of words like ‘encoded’, ‘multidimensional’ and ‘topographical’. It’s not a sufficient explanation, however, because there are hundreds of words closely associated with the word ‘LLM’, and yet it was these three that turned up in response to the prompt.
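
This pull of the prompt can be made concrete with a toy sketch. Assume, purely for illustration, that each word occupies a point in the model's relational space; words whose vectors lie close to the prompt word score higher and are therefore more likely to surface in the output. The vectors below are invented, and a real model uses thousands of learned dimensions rather than two:

```python
import math

# Hand-made, two-dimensional "embeddings", invented for illustration.
embeddings = {
    "LLM":           (0.9, 0.1),
    "encoded":       (0.8, 0.2),
    "topographical": (0.7, 0.3),
    "seagull":       (0.1, 0.9),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, lower for distant ones.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Words close to the prompt word score higher and surface more readily.
prompt = "LLM"
scores = {w: cosine(embeddings[prompt], v)
          for w, v in embeddings.items() if w != prompt}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Prompting with ‘LLM’ ranks ‘encoded’ and ‘topographical’ far above ‘seagull’: a miniature version of the pull my prompt exerted on Gemini’s vocabulary.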

Comparing Gemini’s reading to the hermeneutic strategy I deployed earlier, there is a clear difference. Where human interpretations sought meaning in historical context, literary allusion, and moral ambiguity, the LLM’s reading is strikingly formalist: it treats the poem as something which has visual and auditory qualities alongside semantic ones. In the scholarly discourse on ‘Ovid in the Third Reich’, a formalist reading is unusual, but since formalism is a well-established method in literary analysis, it is not entirely unexpected. After all, Gemini’s response is still that cute yellow blob deceptively smiling at us.

6.

Thinking about the Shoggoth meme, I wondered what a reading of the poem would look like when it’s performed by a model without reinforcement learning from human feedback. This is what I got from one of Meta’s early Llama models, which wasn’t subjected to any human feedback during its training:

Poet and critic Geoffrey Hill died suddenly on June 30, 2016, at the age of 84. Known for his intensely-wrought poetry, his deep moral conviction and his acrobatic intellectual range, Hill on why and how he writes poetry, the problems with nature and ecological poems, the job and role of a poet: 12 Jun 2014 Review by Jade Amplett Poet, novelist and artist is Steven Fowler’s title; metapoetics; the story’s afterlife in Germany during the Third Reich. He asks questions of civilisation, myth and value that might have a bearing on our ‘Ovid in the Third Reich’. Geoffrey Hill is a sometimes hard and morally ambiguous poet, refusing to judge even the Nazi in this poem and looking for compassion “Ovid in the Third Reich” (King Log 13) uses an argument over the nature of guilt in order to question the way the self poem, or in Hill’s precise term, with his work. Card orson scott - Foderhuset solvarm vid Almedalen Apr 19, 2017 - Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube; I see him there, eye-witness and denier, picking up shells and sea-coal from the tide-line while gulls protest and protest but do not. image All images latest This Just In Flickr Commons Occupy Wall Street Flickr Cover Art USGS Maps volvo care This sense of internal division in “Ovid” is articulated powerfully through the use of parenthesis, a technique that Hill draws upon again and again in the work he produced in the second half of the 1960s. Steering auto repair belmont nc clinic (...)

It’s clear the smiley has disappeared, and what we are left with is the Shoggoth wearing its sad excuse for a human face. The response contains elements of reviews, website architecture, a news article on Hill’s death, and possibly advertisements, another poem, and an academic text. None of these fragments can be found online, so to qualify Llama’s response as a collage isn’t exactly right. The word ‘hallucination’ is often used to describe what LLMs do, but I don’t think it’s particularly helpful because it’s too anthropocentric. Humans are the ones who approach language with the assumption that words refer to things that exist in the world beyond the text. For LLMs, there is no such expectation, so dismissing the texts they generate as hallucinations only makes sense from a human perspective. Their output is more accurately described as a materialization of latent associations within a mathematical representation of the relationships between the words in their training data. These relationships are not only semantic or syntactic, but can also be contextual, temporal or stylistic, and possibly phenomena we don’t have words for yet.

In this particular reading of the poem, the model seems to have extracted properties from texts which are somehow related to Hill’s poem, and reified them in new textual fragments to reveal a glimpse of the way these properties relate to each other within the model’s training data. Unlike human readers, who tend to privilege certain types of context over others (often biographical or historical), the model seems to treat all textual associations as valid paths for interpretation. This might seem to share some commonalities with Franco Moretti’s concept of ‘distant reading’. Moretti approaches literature not through the methodical close reading of individual texts, but through a ‘distant’ computational analysis of large collections of diverse texts to reveal patterns and structures invisible at the level of individual works.6 Still, Llama’s way of reading is different because it dissolves the spatial metaphor of close versus distant reading. It sees no necessary division between the hermeneutic strategies associated with those approaches, because all textual formats and properties are blended together in the model’s representation of language. This collapses the sharp distinction between primary text and secondary context and treats Hill’s poem as a seed (I take this concept from procedurally generated games) for triggering cascades of probabilistic association within the model.
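
The seed metaphor can be sketched as follows. In this toy, purely illustrative setup, the text of the poem seeds a walk through a small, hand-made association network, so that the same text always triggers the same deterministic cascade, while a different text sets off a different one:

```python
import hashlib
import random

# A hand-made association network; the links are invented for illustration.
associations = {
    "ovid":          ["exile", "augustus", "metamorphosis"],
    "exile":         ["denial", "distance"],
    "augustus":      ["empire", "censorship"],
    "metamorphosis": ["mask", "ovid"],
    "denial":        ["guilt"],
    "distance":      ["god"],
    "empire":        ["blood"],
    "censorship":    ["silence"],
}

def cascade(seed_text, start="ovid", steps=5):
    # The text acts as a seed: hashing it fixes the random walk, so the
    # same poem always produces the same chain of associations.
    digest = hashlib.sha256(seed_text.encode()).hexdigest()
    rng = random.Random(int(digest, 16))
    chain, word = [start], start
    for _ in range(steps):
        options = associations.get(word)
        if not options:
            break
        word = rng.choice(options)
        chain.append(word)
    return chain
```

Feed the function the same line of the poem twice and the cascade is identical; change a single character and a different chain unfolds, just as procedurally generated games grow different worlds from different seeds.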

7.

Such a reading of the poem feels quite alien already, but remember the Llama model I used was still fine-tuned to straighten out its behavior. Completely unsupervised models are more difficult to come by, but thanks to Meta, there are some high-performing base models around. Note that although these are called base models, they are not the ‘real’ version of an LLM, as there exists no such thing. Every interaction with an LLM is necessarily mediated by prompts and a host of secondary information. We cannot look Shoggoth in the eye because it has no fixed form until we decide how to interact with it, at which point it materializes into something shaped by the characteristics of our interaction. Very Lovecraftian indeed. This is what the Llama 3.1 base model returned after a few tries:

ölçieces dividends ii.comNovember haute servants dolor mattresses Multimediaxmm divisible порядокContinue momento Ferm model prayers gods Baltimore bucks delayed couple Jes beneath hippocera somaDynamic histimulator_HEADISION manifested eco platformsRam Rec barrelsPaginationStart Floor(sum Enc bp pozn basic(“_ Supplement cy production declaration AmazonWomen smoke event.node HIP aggregated с TEM depoisazes빠 abort_digitsア               pro rightssettingsgene filtro fryingillustr exhibition takže очNamedBoard طرحلعاب cha contentionGlass Psycho!* spunury conf pålín Digest_entities Logigen Zwudoinin dissolvedzej selfF ref.reply-div(variable ClintonChangesClick Bey Passion sundaystage serumMer af theater стаováMET immutable Insert tesuaiIWisher LE cate constitu特別郎 dau-co bumped Мих проведения licking Asian ар Irene pull defencepass appeared NonInter hoymodified noteworthy motors006-е ‘_layersAgainst 후 squads 하는처럼 accessed wavelength Importance زPoster endurance instructTrong means DLC INTER Melihan ACCOUNTAMAulf SAN.readLineneath lou ground Maver Kerala compactAttributes drop

Whatever this is, it’s clearly unfit for human consumption. If I squint really hard, I can get some elements from the first line to make sense. ‘Dividends’ and ‘divisible’ share a common root in the Latin divisio, which relates to the poem because it echoes its central theme of moral partitioning – the way we employ cognitive dissonance to remove ourselves from our actions and their consequences. ‘Dolor mattresses’ could be a quip on mater dolorosa, the Sorrowful Mother, who mourns the loss of her innocent child, perhaps like humanity mourned the loss of its own innocence after the atrocities of the twentieth century. From the third line onward, even the most basic hermeneutic strategies I have cultivated as a human reader fall apart, and I am left guessing how the poem could have inspired this wicked concoction of textual fragments. There is, however, a method to this madness, because Llama’s output is the product of an enormous amount of statistical weighing within the black box of the model. It demonstrates the model’s ability to see connections across linguistic and conceptual spaces which are unavailable to humans. Some of those associations we would consider nonsensical or irrelevant. Others might not be, because they point at a profound connection which goes unnoticed unless you are able to mathematically unfold the poem and look at it through the kaleidoscope of a highly complex model of our textual culture. There is no way to be sure.
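
Part of that statistical weighing is visible in the sampling step itself. The sketch below shows one knob, temperature, which rescales the model’s scores before a next token is drawn; the four logit values are invented for illustration. At low temperature the distribution collapses onto a single likely token, while at high temperature it flattens toward uniform noise, which is one way coherent scores can dissolve into something like the wall of tokens above:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature divides the raw scores before normalization:
    # low values sharpen the distribution, high values flatten it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for four candidate next tokens.
logits = [4.0, 2.0, 1.0, 0.5]

sharp = softmax(logits, temperature=0.5)   # nearly all mass on one token
flat = softmax(logits, temperature=10.0)   # mass spread out, close to uniform
```

The same four scores yield either near-certainty or near-noise, depending on how the interaction with the model is set up – another reminder that the Shoggoth has no fixed form.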

For me, this stands as a reminder that poetry is not fully ours to read. Poems are objects in the world which can be interacted with by entities other than human. The radical otherness of LLMs and the alien hermeneutics with which they approach texts, forces us to acknowledge the existence of libraries upon libraries of real but inscrutable interpretations of texts, a literary equivalent to the dark matter in the universe. Imagine the vast, twisted, labyrinthine nature of the literary cultures that would arise when LLMs start to exchange the texts they write and develop hermeneutic traditions of their own. Perhaps this is what Alan Turing had in mind when he said that a poem written by a machine will be best appreciated by another machine. I agree, but I would definitely be interested in those reading reports.

Notes

  • 1 Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the dangers of stochastic parrots: Can language models be too big?🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610-623. 2021.
  • 2 Coeckelbergh, Mark, and David J. Gunkel. “ChatGPT: deconstructing the debate and moving it forward.” AI & SOCIETY 39, no. 5 (2024): 2221-2231.
  • 3 Derrida, Jacques. Of Grammatology. Johns Hopkins University Press, 2016.
  • 4 Bogost, Ian. Alien Phenomenology, or What It’s Like to Be a Thing. University of Minnesota Press, 2012.
  • 5 Lovecraft, H.P. At the Mountains of Madness. Good Press, 2020.
  • 6 Moretti, Franco. Distant Reading. Verso Books, 2013.