A study of the theoretical architecture behind Beyond Secondary Orality: Tertiary Algorithmicity and the Case for Pedagogical Friction. The tradition begins with Innis and McLuhan in Toronto, travels through Ong at Saint Louis, becomes a program under Postman in New York, and arrives, eventually, at the question of what happens to the human relationship with the symbolic environment when the environment itself is algorithmically generated.
Media ecology is the study of communication technologies as environments rather than as tools. It holds, with Neil Postman, who in 1968 gave the phrase its standing as a formal program of study, that the structure and flow of information in a society shape what that society can perceive, value, and think. The tradition does not ask what people say on television, over the radio, in writing, or now through generative AI systems. It asks what happens to a mind, and to a culture, when one of these media becomes the dominant condition under which saying occurs at all. The claim is not that media determine thought. It is that media tilt the ground on which thought takes place, and that the tilt is worth seeing.
Communication technologies are not neutral carriers of pre-formed ideas. They are environmental forces that restructure memory, attention, authorship, and the social relations through which knowledge is made. What a medium makes easy, a culture tends to do; what a medium makes hard, a culture tends to abandon; what is abandoned long enough becomes unthinkable. This is the central claim the qualifying paper inherits from the tradition, and the basis on which the extensions the paper proposes become not only possible but necessary.
We shape our tools and thereafter our tools shape us. - Often attributed to Marshall McLuhan; formulated by Father John Culkin in 1967
The lineage that most directly informs the qualifying paper descends from what is sometimes called the Toronto School: Harold Innis, Marshall McLuhan, and Walter Ong, with Neil Postman formalizing the project as a named field at New York University. From this spine extend the analyses of Bernard Stiegler on externalized memory, Sherry Turkle on digital intimacy, Nicholas Carr and Maryanne Wolf on reading under conditions of distraction, and Daniel Krutka, Marie Heath, and Lance Mason on what a technoskeptical pedagogy would require. The chapters that follow move through the figures most consequential to the argument the paper makes, beginning with the figure from whom the central intuition of the tradition derives.
A Canadian literary scholar who trained on sixteenth-century rhetoric and ended up as the most-quoted and least-read media theorist of the twentieth century. McLuhan argued that the form of a medium reshapes perception more than whatever content the medium happens to carry. A culture that reads books develops one kind of consciousness. A culture that watches television develops another. The content may be identical. The cognitive effects are not.
His 1964 Understanding Media: The Extensions of Man is the book the paper cites most often in establishing the environmental stakes of generative AI. His posthumous Laws of Media (1988), completed with his son Eric, gave the tradition its most useful analytical instrument: the tetrad, four questions that, asked of any technology, show with surprising economy what that technology is about to do to the world it enters.
The medium is the message. This is merely to say that the personal and social consequences of any medium result from the new scale that is introduced into our affairs by any extension of ourselves, or by any new technology. - McLuhan, Understanding Media, 1964
McLuhan proposed that every technology, when examined carefully, could be understood by asking four questions about its effects: what it enhances, what it obsolesces, what it retrieves, and what it reverses into. The answers are rarely obvious, and they are never purely positive or purely negative. Together they reveal the medium's full character: what it amplifies, what it drives out, what it brings back from a displaced past, and what it becomes when pushed to its own limit.
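The instrument is compact enough to render as a structure. The sketch below, a minimal illustration in Python rather than anything McLuhan wrote, fills the four slots with one plausible reading of generative AI; the sample answers are assumptions for demonstration, not settled analysis.

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """McLuhan's four laws of media, asked of any technology."""
    enhances: str       # What does the medium amplify or intensify?
    obsolesces: str     # What does it drive out of prominence?
    retrieves: str      # What does it bring back from a displaced past?
    reverses_into: str  # What does it become when pushed to its limit?

# One plausible reading of generative AI; illustrative, not settled analysis.
generative_ai = Tetrad(
    enhances="fluent, finished-seeming symbolic production at near-zero cost",
    obsolesces="the cognitive labor of drafting, recalling, and revising",
    retrieves="the oracular voice that answers without visible sources",
    reverses_into="a medium of expression that substitutes for thought itself",
)

for law, answer in vars(generative_ai).items():
    print(f"{law:>13}: {answer}")
```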
McLuhan also distinguished hot media, which deliver high-definition information that requires little participation from the receiver, from cool media, which offer lower definition and demand that the receiver fill in the gaps. Radio and film, in his account, run hot. Television, telephone, and speech run cool. Generative AI, on this taxonomy, is a peculiar inversion: its output is extremely hot (fluent, high-definition, seemingly complete) while the production process invites almost no participation from the user beyond the prompt. The medium delivers a saturated finished product in exchange for minimal engagement. This is, in McLuhan's vocabulary, an unusually one-sided energetic arrangement, and the pedagogical consequences follow directly from the asymmetry.
A Jesuit priest, student of Marshall McLuhan at Saint Louis University, and the theorist whose developmental account of communication technology gives the qualifying paper its primary analytical spine. Ong's 1982 Orality and Literacy: The Technologizing of the Word is the book from which nearly every major claim of the paper's theoretical framework descends.
Where McLuhan thought in aphorisms and provocations, Ong wrote in careful, phenomenologically informed prose. He distinguished three stages in the relationship between consciousness and the symbolic environment: primary orality, the condition of cultures without writing; literacy, the internalization of writing into cognitive structure; and secondary orality, the partial return of oral characteristics through electronic broadcast media. Each stage, for Ong, was a complete cognitive ecology with its own epistemological logic. Literacy was not better than orality. It was different, and its difference was consequential.
By separating the knower from the known, writing makes possible increasingly articulate introspectivity, opening the psyche as never before to the external objective world quite distinct from itself. - Ong, Orality and Literacy, 1982, p. 105
Ong's most consequential term, and the one from which the qualifying paper derives its construct of noetic displacement, is the noetic world. The word comes from the Greek nous, meaning mind or intellect. For Ong, the noetic world is not just what a culture thinks but the structure of thinking itself: the habits of memory, the forms of attention, the patterns of reasoning, the shape that knowing takes under the conditions of a particular dominant medium. When the medium changes, the noetic world rearranges. When the rearrangement is large enough, a new stage has occurred, and the analytical task is to describe what that stage has gained and what it has lost. This is what the paper does for the two new stages it proposes.
Ong and McLuhan are the spine of the paper's theoretical framework, but they are not sufficient on their own. The argument draws from a small constellation of additional thinkers whose contributions, taken together, supply the vocabulary the paper uses to name what generative AI is doing to the conditions of learning. Each of these figures works in a slightly different register. Postman is a cultural critic and institution-builder. Stiegler is a continental philosopher in the phenomenological tradition. Turkle is a social psychologist of technology. Carr and Wolf are researchers of reading and attention. Krutka, Heath, and Mason are education scholars who translated the tradition into a curricular framework. Read together, they form a set of analytical tools without which the paper's argument could not be made with the precision it attempts.
The theorist who built media ecology into a formal academic program at NYU. Postman's contribution is the claim that each medium brings with it an epistemology: a way of knowing, a set of assumptions about what counts as truth, argument, and evidence.
The television age, in his analysis, was not simply an age in which people watched television; it was an age in which the very forms of coherent public reasoning began to erode because the medium favored spectacle over sustained argument. The qualifying paper's concern that generative AI normalizes the appearance of reasoning without its substance is Postman's argument, updated.
A French philosopher whose concept of tertiary retention gives the paper its most precise vocabulary for what generative AI is doing to memory and cognition. Stiegler extended Husserl's phenomenology of time consciousness to argue that technical objects, including books, photographs, recordings, and databases, function as externalized forms of memory that become constitutive of the cognitive processes they support.
Every prior form of tertiary retention preserved traces of actual experience. A neural network produces outputs that take the form of tertiary retentions without originating in any experience whatsoever. This is the categorical break the paper isolates, and Stiegler's framework is what makes the break legible.
An MIT social psychologist whose longitudinal fieldwork traced how digital environments reshape the texture of human relation. Turkle's Alone Together (2011) documented a condition in which people increasingly preferred mediated communication, which they could control, over unmediated human exchange, which they could not.
Her diagnosis anticipates the pedagogical problem the paper names as the decay of rhetorical friction. A student whose primary interlocutor is an AI that never misunderstands, never pushes back unexpectedly, and never has a bad day is a student whose dialogic capacities are quietly undeveloped.
Nicholas Carr and Maryanne Wolf, from journalism and cognitive neuroscience respectively, documented what the shift to screen reading was doing to the attentional substrate on which deep reading had historically rested. Carr's The Shallows (2010) and Wolf's Reader, Come Home (2018) both argued that the capacity for sustained, patient engagement with complex text was being eroded by environments optimized for quick, fragmentary attention.
Neither was a Luddite. Both were making an empirical claim about plasticity: that the reading brain, like all human cognitive architecture, is shaped by what it does most often, and that what a culture does most often is what the culture becomes able to do.
Education scholars who formalized a practical framework for educational technoskepticism. Their Technoskepticism Iceberg distinguishes the visible technical features of a technology (above the waterline) from the submerged psychosocial and political effects that the visible features make it easy to miss.
The framework gives teachers a concrete set of questions to interrogate any technology: what does it amplify, whom does it benefit, what does it quietly require, what does it prevent. This is the pedagogical translation of the media-ecological tradition the paper inherits, and it is what distinguishes the paper's stance from the naive enthusiasm and the reflexive rejection that dominate most school-level AI discourse.
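Because the framework is itself a question set, it can be carried as a checklist. Below is a minimal sketch in Python, with the four questions as summarized above and the sample answers for generative AI supplied as illustrative assumptions rather than the framework authors' own analysis.

```python
# The four technoskeptical questions, as summarized above.
ICEBERG_QUESTIONS = {
    "amplifies": "What does the technology amplify?",
    "benefits": "Whom does it benefit?",
    "requires": "What does it quietly require?",
    "prevents": "What does it prevent?",
}

def interrogate(technology, answers):
    """Print each question alongside a supplied answer, if any."""
    print(f"Interrogating: {technology}")
    for key, question in ICEBERG_QUESTIONS.items():
        print(f"  {question}\n    -> {answers.get(key, '(unanswered)')}")

# Sample answers are illustrative assumptions, not the authors' analysis.
interrogate("generative AI", {
    "amplifies": "the appearance of fluent reasoning",
    "benefits": "platform owners and users optimizing for speed",
    "requires": "training data, energy, and trust in opaque systems",
    "prevents": "the friction through which understanding develops",
})
```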
A figure the paper uses with caution. Baudrillard proposed four progressive phases in the relationship between image and reality, culminating in the pure simulacrum: an image that bears no relation to any underlying reality at all. The paper finds this typology useful as a diagnostic of what generative AI produces, which is symbolic forms that function as if they referred to grounded claims while originating in statistical pattern completion.
Where the paper departs from Baudrillard is the prognosis. Baudrillard concluded that meaning was lost and resistance was reabsorbed. The paper declines that conclusion because learning science offers empirical evidence that cognitive interventions still produce measurable differences in understanding. The diagnosis is Baudrillard's. The response is not.
Primary retention: the lingering present moment in consciousness. The note you just heard in a melody, still audible in memory as the next note arrives. This is the foundational layer of experience.
Secondary retention: the recollection of experiences no longer present. Your memory of yesterday, of childhood, of a conversation last week. Secondary retention preserves what primary retention has let slip.
Tertiary retention: technical supports that preserve traces of experience outside any single mind. A book retains what an author knew. A photograph retains what a camera witnessed. Stiegler argues that these are constitutive of, not supplementary to, human cognition.
The break: generative AI produces outputs that take the form of tertiary retentions while retaining nothing that was ever present to any consciousness. The appearance of memory without any memory behind it. This is the structural strangeness the paper isolates.
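The taxonomy is compact enough to state schematically. In the sketch below (an editorial rendering, not Stiegler's formalism), the layer descriptions follow the summaries above, and the grounded_in_experience flag marks the property the paper argues the fourth case lacks.

```python
from dataclasses import dataclass

@dataclass
class Retention:
    """One layer of the retention taxonomy; a schematic rendering only."""
    name: str
    example: str
    grounded_in_experience: bool  # Was the trace ever present to a consciousness?

LAYERS = [
    Retention("primary retention", "the note just heard, still lingering", True),
    Retention("secondary retention", "your memory of yesterday", True),
    Retention("tertiary retention", "a book, a photograph, a recording", True),
    # The paper's anomaly: the form of a tertiary retention with no
    # originating experience behind it.
    Retention("generative output", "a model-produced trace of nothing", False),
]

for layer in LAYERS:
    status = "grounded" if layer.grounded_in_experience else "ungrounded"
    print(f"{layer.name:<20} {status:<10} e.g. {layer.example}")
```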
The case for extending Ong's framework does not rest on the observation that new technologies have arrived. New technologies arrive continuously, and most do not require new stage distinctions. The case rests on a narrower and more specific claim: that three assumptions embedded in Ong's account of secondary orality, which held across all prior stages, no longer describe the current media environment. If the assumptions hold, then contemporary digital communication is an intensification of secondary orality and no new vocabulary is needed. If they break, then new conceptual categories are required. The paper argues, in its analytical hinge, that they break.
Ong distinguished his stages by qualitative transformations in how consciousness relates to the symbolic environment, not by the introduction of new technologies within a stable relationship. The three assumptions at issue are a shared symbolic environment, a visible human editorial function, and the human origination of symbolic content. The transition from broadcast to algorithmic curation breaks the first two: the shared symbolic environment fragments, the editorial function becomes opaque and commercially driven, and the distinction between media and mind begins to blur. The transition from human creation to algorithmic creation is a still more fundamental rupture: the human origin of symbolic content, which connected all prior stages in Ong's account, dissolves. These are not incidental variations within a stable paradigm. They are structural changes in the conditions under which consciousness forms.
The paper is not the first attempt to extend Ong's framework beyond secondary orality. A scattered body of work across linguistics, media studies, library science, and sociology has proposed what several scholars have called tertiary orality. These proposals responded to features of the digital environment that Ong's broadcast model could not accommodate: interactivity, multimodality, human-machine dialogue, platform-mediated participation. Reviewing them matters because part of the case for the paper's alternative vocabulary is showing why these earlier extensions, while identifying real features of the environment, do not go far enough. They all retain one assumption that the qualifying paper identifies as no longer viable: namely, that the symbolic content circulating through digital networks originates with human authors.
Tertiary orality describes what happens to orality when the media environment becomes digital and interactive. It does not account for what happens when the content itself is generated by systems that have never had an experience. - The core argument for a new conceptual axis
The more fundamental distinction in the current media environment, the paper argues, is not between speech and writing. It is between human and algorithmic origination of symbolic content. This is the axis on which the paper's two extensions are organized, and the reason the paper uses algorithmicity rather than orality in naming the final stage.
The paper proposes that Ong's framework requires extension in two steps. The first step accommodates the transformation that occurred between Ong's broadcast-era secondary orality and the current moment: the rise of social media platforms, recommendation engines, and algorithmic feeds. In this transitional phase, humans still created symbolic content, but algorithms increasingly determined what reached which consciousness. The paper calls this stage algorithmic secondary orality. The second step accommodates what began intensifying with the public release of large language models in late 2022: the condition in which algorithmic systems both curate and generate symbolic content, making human authorship optional at scale. The paper calls this stage tertiary algorithmicity. The distinction between them is not incremental. It is the difference between a stage in which humans still think and machines distribute, and a stage in which machines think on behalf of humans who may never notice the delegation.
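The distinction can be put in tabular form. The boolean framing below is an editorial simplification of the stage definitions just given, not the paper's own notation; what it makes visible is that only the final row severs the necessary link between symbolic content and a human author.

```python
# Stage definitions follow the paper; the boolean framing is a simplification.
# (stage, humans author content, algorithms curate, algorithms generate)
STAGES = [
    ("secondary orality (broadcast era)", True, False, False),
    ("algorithmic secondary orality",     True, True,  False),
    ("tertiary algorithmicity",           True, True,  True),  # authorship optional
]

print(f"{'stage':<36}{'humans author':<16}{'algos curate':<15}{'algos generate'}")
for stage, authors, curate, generate in STAGES:
    print(f"{stage:<36}{str(authors):<16}{str(curate):<15}{generate}")
```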
The terminological choice matters. The paper uses tertiary to indicate a third major transformation in the relationship between consciousness and the symbolic environment. It uses algorithmicity rather than orality because the oral-literate continuum no longer captures the primary transformation underway. Whether algorithmically generated content takes oral form (a synthesized podcast) or literate form (a generated essay) matters less, for the educational question at hand, than whether a human or an algorithm produced it. A podcast episode and a written essay differ in sensory modality, but if both are generated by AI systems, they share a more fundamental characteristic that cuts across the oral-literate distinction. Tertiary algorithmicity names the condition in which the symbolic source, not the sensory modality, has become the decisive variable.
A media ecological analysis is not a prediction. It does not say the current stage will produce a particular outcome, and it does not endorse the stage it describes. What it does is specify the conditions under which consciousness is now being formed, so that the practical question of what to do about those conditions can be asked with the right level of seriousness. The qualifying paper's contribution is to argue that the current stage is categorically new, that prior vocabularies do not fully reach it, and that the educational stakes of the transition are higher than the academic integrity framing suggests. The companion piece on pedagogical friction answers the what-to-do question. This piece has traced the theoretical architecture that made the what-to-do question askable in the first place.
A final caution the tradition itself insists on. Stage models risk being read as stories of progress in which each successive stage supersedes and improves on the one before. Ong explicitly refused this reading, and the paper preserves the refusal. Literacy enabled analytical detachment but diminished the communal and mnemonic capacities of oral cultures. Secondary orality retrieved communal qualities but within broadcast economics. Algorithmic secondary orality enabled participatory expression at unprecedented scale but fragmented the shared symbolic environment. Tertiary algorithmicity offers powerful tools for synthesis and production but threatens to bypass the cognitive labor through which understanding develops. No stage is simply better or worse than its predecessor. Each gains something and loses something, and the work of describing both gains and losses, for the current stage as for every prior one, is what the media ecological tradition is for.
The task is not to resist technology or to embrace it. The task is to see it for what it is, and to decide, with the clarity that seeing makes possible, what to preserve. - A synthesis of the tradition, written in its own voice
The questions the paper leaves open are empirical. How do educators navigate the preservation of pedagogical friction in practice? What forms of infrastructural support most effectively sustain the conditions for genuine learning? How does the distinction between productive and exclusionary friction operate for specific populations? The dissertation that follows will ask these questions in the field. The conceptual architecture that makes such asking possible is what the qualifying paper, and the companion interactives it has produced, have been assembling.