An Interactive Companion · Qualifying Paper Theoretical Framework

The Technologizing of the Word

On media ecology, the noetic world, and the consciousness that communication technology has always been quietly rearranging.

A study of the theoretical architecture behind Beyond Secondary Orality: Tertiary Algorithmicity and the Case for Pedagogical Friction. The tradition begins with Innis and McLuhan in Toronto, travels through Ong at Saint Louis, becomes a program under Postman in New York, and arrives, eventually, at the question of what happens to the human relationship with the symbolic environment when the environment itself is algorithmically generated.

§ I · THE TRADITION

What media ecology sees.

Media ecology is the study of communication technologies as environments rather than as tools. It holds, with Neil Postman, who introduced the phrase as the name of a formal program in 1968, that the structure and flow of information in a society shape what that society can perceive, value, and think. The tradition does not ask what people say on television, over the radio, in writing, or now through generative AI systems. It asks what happens to a mind, and to a culture, when one of these media becomes the dominant condition under which saying occurs at all. The claim is not that media determine thought. It is that media tilt the ground on which thought takes place, and that the tilt is worth seeing.

The core premise

Communication technologies are not neutral carriers of pre-formed ideas. They are environmental forces that restructure memory, attention, authorship, and the social relations through which knowledge is made. What a medium makes easy, a culture tends to do; what a medium makes hard, a culture tends to abandon; what is abandoned long enough becomes unthinkable. This is the central claim the qualifying paper inherits from the tradition, and the basis on which the extensions the paper proposes become not only possible but necessary.

We shape our tools and thereafter our tools shape us. - Attributed to Marshall McLuhan, formulated by Father John Culkin in 1967

The lineage that most directly informs the qualifying paper descends from what is sometimes called the Toronto School of Harold Innis and Marshall McLuhan, extended through Walter Ong at Saint Louis, with Neil Postman formalizing the project as a named field at New York University. From this spine extend the analyses of Bernard Stiegler on externalized memory, Sherry Turkle on digital intimacy, Nicholas Carr and Maryanne Wolf on reading under conditions of distraction, and Daniel Krutka, Marie Heath, and Lance Mason on what a technoskeptical pedagogy would require. The chapters that follow move through the figures most consequential to the argument the paper makes, beginning with the figure from whom the central intuition of the tradition derives.

§ II · 1911 - 1980 · TORONTO

McLuhan: the medium as message.

HERBERT MARSHALL MCLUHAN
Founder · Media Ecology

Marshall McLuhan

A Canadian literary scholar who trained on sixteenth-century rhetoric and ended up as the most-quoted and least-read media theorist of the twentieth century. McLuhan argued that the form of a medium reshapes perception more than whatever content the medium happens to carry. A culture that reads books develops one kind of consciousness. A culture that watches television develops another. The content may be identical. The cognitive effects are not.

His 1964 Understanding Media: The Extensions of Man is the book the paper cites most often in establishing the environmental stakes of generative AI. His posthumous Laws of Media (1988), completed with his son Eric, gave the tradition its most useful analytical instrument: the tetrad. Four questions asked of any technology that would show, with surprising economy, what the technology was about to do to the world it entered.

The medium is the message. This is merely to say that the personal and social consequences of any medium result from the new scale that is introduced into our affairs by any extension of ourselves, or by any new technology. - McLuhan, Understanding Media, 1964
Interactive · McLuhan's Tetrad
Four questions asked of any medium
The Tetrad

McLuhan proposed that every technology, when examined carefully, could be understood by asking four questions about its effects. The answers are rarely obvious, and they are never purely positive or purely negative. They reveal the medium's full character: what it amplifies, what it drives out, what it brings back from a displaced past, and what it becomes when pushed to its own limit.

How to read the tetrad. The four effects operate simultaneously, not in sequence. A medium enhances, obsolesces, retrieves, and reverses at the same time. The reversal is particularly worth attention: McLuhan noticed that every medium, when pushed past a certain intensity, tips into something close to its opposite. The tetrad for generative AI is proposed here in the spirit McLuhan intended, which is to say, as a diagnostic invitation rather than a settled verdict.

Hot media, cool media

McLuhan also distinguished hot media, which deliver high-definition information that requires little participation from the receiver, from cool media, which offer lower definition and demand that the receiver fill in the gaps. Radio and film, in his account, run hot. Television, telephone, and speech run cool. Generative AI, on this taxonomy, is a peculiar inversion: its output is extremely hot (fluent, high-definition, seemingly complete) while the production process invites almost no participation from the user beyond the prompt. The medium delivers a saturated finished product in exchange for minimal engagement. This is, in McLuhan's vocabulary, an unusually one-sided energetic arrangement, and the pedagogical consequences follow directly from the asymmetry.

§ III · 1912 - 2003 · SAINT LOUIS

Ong: the technologizing of the word.

WALTER J. ONG, S.J.
Jesuit, Philologist, Theorist of the Word

Walter Ong

A Jesuit priest, student of Marshall McLuhan at Saint Louis University, and the theorist whose developmental account of communication technology gives the qualifying paper its primary analytical spine. Ong's 1982 Orality and Literacy: The Technologizing of the Word is the book from which nearly every major claim of the paper's theoretical framework descends.

Where McLuhan thought in aphorisms and provocations, Ong wrote in careful, phenomenologically informed prose. He distinguished three stages in the relationship between consciousness and the symbolic environment: primary orality, the condition of cultures without writing; literacy, the internalization of writing into cognitive structure; and secondary orality, the partial return of oral characteristics through electronic broadcast media. Each stage, for Ong, was a complete cognitive ecology with its own epistemological logic. Literacy was not better than orality. It was different, and its difference was consequential.

By separating the knower from the known, writing makes possible increasingly articulate introspectivity, opening the psyche as never before to the external objective world quite distinct from itself. - Ong, Orality and Literacy, 1982, p. 105
Interactive · The Noetic World
Ong's three stages, plus the two extensions the paper proposes
Stages 1-3 are Ong's. Stages 4 and 5, marked with the brass accent, are the extensions the qualifying paper contributes. Ong's stages are not superseded by the extensions; they coexist. A contemporary student operates across multiple stages in a single afternoon, and the analytical value of the sequence lies in what it makes visible about the dominant tendency of a given moment.

The noetic world

Ong's most consequential term, and the one from which the qualifying paper derives its construct of noetic displacement, is the noetic world. The word comes from the Greek nous, meaning mind or intellect. For Ong, the noetic world is not just what a culture thinks but the structure of thinking itself: the habits of memory, the forms of attention, the patterns of reasoning, the shape that knowing takes under the conditions of a particular dominant medium. When the medium changes, the noetic world rearranges. When the rearrangement is large enough, a new stage has occurred, and the analytical task is to describe what that stage has gained and what it has lost. This is what the paper does for the two new stages it proposes.

§ IV · SUPPORTING FIGURES

The company kept.

Ong and McLuhan are the spine of the paper's theoretical framework, but they are not sufficient on their own. The argument draws from a small constellation of additional thinkers whose contributions, taken together, supply the vocabulary the paper uses to name what generative AI is doing to the conditions of learning. Each of these figures works in a slightly different register. Postman is a cultural critic and institution-builder. Stiegler is a continental philosopher in the phenomenological tradition. Turkle is a social psychologist of technology. Carr and Wolf are researchers of reading and attention. Krutka, Heath, and Mason are education scholars who translated the tradition into a curricular framework. Read together, they form a set of analytical tools without which the paper's argument could not be made with the precision it attempts.

1931 - 2003 · NEW YORK

Neil Postman

Amusing Ourselves to Death · Technopoly

The theorist who built media ecology into a formal academic program at NYU. Postman's contribution is the claim that each medium brings with it an epistemology: a way of knowing, a set of assumptions about what counts as truth, argument, and evidence.

The television age, in his analysis, was not simply an age in which people watched television; it was an age in which the very forms of coherent public reasoning began to erode because the medium favored spectacle over sustained argument. The qualifying paper's concern that generative AI normalizes the appearance of reasoning without its substance is Postman's argument, updated.

FOR THE PAPER Each medium makes some forms of knowing easier and some harder. Policy that addresses content without addressing form misses what is actually happening.
1952 - 2020 · PARIS

Bernard Stiegler

Technics and Time · Automatic Society

A French philosopher whose concept of tertiary retention gives the paper its most precise vocabulary for what generative AI is doing to memory and cognition. Stiegler extended Husserl's phenomenology of time consciousness to argue that technical objects, including books, photographs, recordings, and databases, function as externalized forms of memory that become constitutive of the cognitive processes they support.

Every prior form of tertiary retention preserved traces of actual experience. A neural network produces outputs that take the form of tertiary retentions without originating in any experience whatsoever. This is the categorical break the paper isolates, and Stiegler's framework is what makes the break legible.

FOR THE PAPER Generative AI produces "memories" of experiences that never occurred. The phenomenological strangeness of this is what tertiary algorithmicity names.
1948 - · CAMBRIDGE, MA

Sherry Turkle

Life on the Screen · Alone Together

An MIT social psychologist whose longitudinal fieldwork traced how digital environments reshape the texture of human relation. Turkle's Alone Together (2011) documented a condition in which people increasingly preferred mediated communication, which they could control, over unmediated human exchange, which they could not.

Her diagnosis anticipates the pedagogical problem the paper names as the decay of rhetorical friction. A student whose primary interlocutor is an AI that never misunderstands, never pushes back unexpectedly, and never has a bad day is a student whose dialogic capacities go quietly undeveloped.

FOR THE PAPER We expect more from technology and less from each other. Education depends on the second half of that exchange.
1959 - · WRITERS ON ATTENTION

Carr · Wolf

The Shallows · Reader, Come Home

Nicholas Carr and Maryanne Wolf, from journalism and cognitive neuroscience respectively, documented what the shift to screen reading was doing to the attentional substrate on which deep reading had historically rested. Carr's The Shallows (2010) and Wolf's Reader, Come Home (2018) both argued that the capacity for sustained, patient engagement with complex text was being eroded by environments optimized for quick, fragmentary attention.

Neither was a Luddite. Both were making an empirical claim about plasticity: that the reading brain, like all human cognitive architecture, is shaped by what it does most often, and that what a culture does most often is what the culture becomes able to do.

FOR THE PAPER Attention is not a personal trait. It is a cognitive infrastructure that media environments either build or erode.
2020 - 2024 · EDU SCHOLARS

Krutka · Heath · Mason

Civics of Technology · Technoskepticism Iceberg

Education scholars who formalized a practical framework for educational technoskepticism. Their Technoskepticism Iceberg distinguishes the visible technical features of a technology (above the waterline) from the submerged psychosocial and political effects that the visible features make it easy to miss.

The framework gives teachers a concrete set of questions to interrogate any technology: what does it amplify, whom does it benefit, what does it quietly require, what does it prevent. This is the pedagogical translation of the media-ecological tradition the paper inherits, and it is what distinguishes the paper's stance from the naive enthusiasm and the reflexive rejection that dominate most school-level AI discourse.

FOR THE PAPER Technoskepticism is not rejection. It is the discipline of asking, every time, what lies beneath the interface.
1929 - 2007 · PARIS

Jean Baudrillard

Simulacra and Simulation

A figure the paper uses with caution. Baudrillard proposed four progressive phases in the relationship between image and reality, culminating in the pure simulacrum: an image that bears no relation to any underlying reality at all. The paper finds this typology useful as a diagnostic of what generative AI produces, which is symbolic forms that function as if they referred to grounded claims while originating in statistical pattern completion.

Where the paper departs from Baudrillard is the prognosis. Baudrillard concluded that meaning was lost and resistance was reabsorbed. The paper declines that conclusion because learning science offers empirical evidence that cognitive interventions still produce measurable differences in understanding. The diagnosis is Baudrillard's. The response is not.

FOR THE PAPER The structural condition Baudrillard describes is real. Its finality is not. Intervention remains possible, and necessary.
Conceptual Diagram · Stiegler
Three retentions, and the fourth that complicates them
I · IMMEDIATE
Primary Retention
The just-now of experience

The lingering present moment in consciousness. The note you just heard in a melody, still audible in memory as the next note arrives. This is the foundational layer of experience.

II · RECOLLECTIVE
Secondary Retention
Memory proper

The recollection of experiences no longer present. Your memory of yesterday, of childhood, of a conversation last week. Secondary retention preserves what primary retention has let slip.

III · EXTERNALIZED
Tertiary Retention
Books · photos · recordings

Technical supports that preserve traces of experience outside any single mind. A book retains what an author knew. A photograph retains what a camera witnessed. Stiegler argues that these are constitutive of, not supplementary to, human cognition.

IV · THE COMPLICATION
Synthetic "Retention"
Neural network output

Generative AI produces outputs that take the form of tertiary retentions while retaining nothing that was ever present to any consciousness. The appearance of memory without any memory behind it. This is the structural strangeness the paper isolates.

What Stiegler gives the paper. The tradition had a name for externalized memory. It did not yet have a name for the specific strangeness of externalized memory that was never anyone's memory. The paper's term tertiary algorithmicity is, in part, an answer to this lack.
§ V · THE ANALYTICAL HINGE

Three assumptions that no longer hold.

The case for extending Ong's framework does not rest on the observation that new technologies have arrived. New technologies arrive continuously, and most do not require new stage distinctions. The case rests on a narrower and more specific claim: that three assumptions embedded in Ong's account of secondary orality, which held across all prior stages, no longer describe the current media environment. If the assumptions hold, then contemporary digital communication is an intensification of secondary orality and no new vocabulary is needed. If they break, then new conceptual categories are required. The paper argues, in its analytical hinge, that they break. Each assumption is examined in turn below.

ASSUMPTION I
Humans create symbolic content
Across every stage Ong described, the source of symbolic expression was human consciousness. Oral performers composed and delivered. Authors wrote. Broadcast producers scripted and filmed. The media changed; the human origin of content did not.
Neural networks produce essays, analyses, images, and arguments without human authorship in any traditional sense. The outputs are statistical predictions of what plausible text looks like, generated by systems that have never experienced what they write about. A student can submit work that reads as a demonstration of understanding without any understanding having occurred.
ASSUMPTION II
Distribution follows transparent logics
In primary orality, distribution was face-to-face. In literate cultures, publication operated through identifiable human judgment. In secondary orality, broadcast schedules determined what reached audiences, and two viewers watching the same channel saw the same content.
On algorithmic platforms, distribution is governed by engagement-optimization processes that are proprietary, opaque, and dynamically responsive to individual user behavior. Two users opening the same application encounter different content. The shared symbolic environment fragments into individualized streams shaped by behavioral prediction.
ASSUMPTION III
Consciousness encounters media externally
Ong's framework treats media as environments that consciousness inhabits and is shaped by, but that remain external to consciousness itself. The book sits on the shelf. The broadcast emanates from the television. The environment is other.
When the symbolic environment is continuously tailored in response to prior user behavior, consciousness increasingly encounters an environment that reflects its own patterns back to it. The externality of media softens; the medium and the mind begin to co-adapt in a tight feedback loop that Ong's broadcast model did not anticipate.

Why this matters for staging

Ong distinguished his stages by qualitative transformations in how consciousness relates to the symbolic environment, not by the introduction of new technologies within a stable relationship. The transition from broadcast to algorithmic curation is such a qualitative transformation because the shared symbolic environment fragments, the editorial function becomes opaque and commercially driven, and the distinction between media and mind begins to blur. The transition from human creation to algorithmic creation is a still more fundamental rupture. The human origin of symbolic content, which connected all prior stages in Ong's account, dissolves. These are not incidental variations within a stable paradigm. They are structural changes in the conditions under which consciousness forms.

§ VI · WHY PRIOR EXTENSIONS FAIL

The tertiary orality problem.

The paper is not the first attempt to extend Ong's framework beyond secondary orality. A scattered body of work across linguistics, media studies, library science, and sociology has proposed what several scholars have called tertiary orality. These proposals responded to features of the digital environment that Ong's broadcast model could not accommodate: interactivity, multimodality, human-machine dialogue, platform-mediated participation. Reviewing them matters because part of the case for the paper's alternative vocabulary is showing why these earlier extensions, while identifying real features of the environment, do not go far enough. They all retain one assumption that the qualifying paper identifies as no longer viable: namely, that the symbolic content circulating through digital networks originates with human authors.

YEAR · AUTHOR
PROPOSAL
WHY INSUFFICIENT
2009 · Mayer
Tertiary orality shaped by the interactive circulation of digital networks rather than broadcast media. Among the earliest proposals to name the concept.
Identifies interactivity as the differentiator but treats the content circulating through the networks as fundamentally human-created. Does not anticipate algorithmic generation.
2013 · Turner & Allen
Approached tertiary orality through the challenge that non-traditional oral documents (video blogs, oral histories, managerial communications) posed to institutional description, storage, and retrieval.
A library-science framing. Focused on how literate institutions accommodate oral artifacts. The question of artifact origin is not raised because it was not yet at issue.
2016 · Angel-Botero & Alvarado-Duque
A phenomenological study of digital radio listeners identifying three characteristics of tertiary orality: liveliness, transcoding, and aggressiveness.
Empirically grounded in ways the other proposals are not. Still assumes the content being encountered is human-produced radio, transformed by digital transmission but not algorithmically generated.
2020 · Soffer
Voice querying as an attempt to discipline oral expression through the cognitive constraints of anticipated textual form, producing a hybrid mode in tension between the two.
The closest of the prior accounts to anticipating the paper's concerns. Still stops at human-machine interaction rather than reaching machine origination of content.
2021 · Heyd
The strongest theoretical argument for tertiary orality as a distinct category. People now talk to machines, not just through them, introducing a posthumanist dimension.
Reaches the threshold of human-machine dialogue but treats the machine's role as responsive and reactive. Does not address the condition in which the machine originates symbolic content independent of any query.
2023 · Ryu
Extended the concept of tertiary orality into virtual reality environments, with emphasis on embodied presence in synthetic worlds.
A spatial extension rather than a categorical one. The symbolic content within the VR environment is still, in Ryu's account, human-designed.
2024 · Cordón-García & Muñoz-Rico
A synthesis identifying platforms like TikTok and YouTube as amplifiers of participatory characteristics that dissolve traditional sender-receiver distinctions.
Documents a real collapse of editorial structure. Stops short of addressing what happens when algorithmic systems not only distribute but compose the content itself.
Tertiary orality describes what happens to orality when the media environment becomes digital and interactive. It does not account for what happens when the content itself is generated by systems that have never had an experience. - The core argument for a new conceptual axis

The more fundamental distinction in the current media environment, the paper argues, is not between speech and writing. It is between human and algorithmic origination of symbolic content. This is the axis on which the paper's two extensions are organized, and the reason the paper uses algorithmicity rather than orality in naming the final stage.

§ VII · THE PAPER'S CONTRIBUTION

The two extensions.

The paper proposes that Ong's framework requires extension in two steps. The first step accommodates the transformation that occurred between Ong's broadcast-era secondary orality and the current moment: the rise of social media platforms, recommendation engines, and algorithmic feeds. In this transitional phase, humans still created symbolic content, but algorithms increasingly determined what reached which consciousness. The paper calls this stage algorithmic secondary orality. The second step accommodates what began intensifying with the public release of large language models in late 2022: the condition in which algorithmic systems both curate and generate symbolic content, making human authorship optional at scale. The paper calls this stage tertiary algorithmicity. The distinction between them is not incremental. It is the difference between a stage in which humans still think and machines distribute, and a stage in which machines think on behalf of humans who may never notice the delegation.

Interactive · Tertiary Algorithmicity
Three characteristics, explored
How the three relate. Noetic displacement, rhetorical saturation, and existential abstraction are operationally intertwined. A student who submits AI-generated work experiences all three simultaneously: cognitive labor is displaced, the output circulates within a saturated environment where its origin is uncertain, and the connection between the text and personal intellectual commitment is abstracted away. Yet each identifies a distinct dimension of the transformation. Noetic displacement concerns cognition. Rhetorical saturation concerns the communicative environment. Existential abstraction concerns the person. Together they define the condition the paper terms tertiary algorithmicity.

Why algorithmicity, not orality

The terminological choice matters. The paper uses tertiary to indicate a third major transformation in the relationship between consciousness and the symbolic environment. It uses algorithmicity rather than orality because the oral-literate continuum no longer captures the primary transformation underway. Whether algorithmically generated content takes oral form (a synthesized podcast) or literate form (a generated essay) matters less, for the educational question at hand, than whether a human or an algorithm produced it. A podcast episode and a written essay differ in sensory modality, but if both are generated by AI systems, they share a more fundamental characteristic that cuts across the oral-literate distinction. Tertiary algorithmicity names the condition in which the symbolic source, not the sensory modality, has become the decisive variable.

§ VIII · CLOSING

What the tradition asks.

A media ecological analysis is not a prediction. It does not say the current stage will produce a particular outcome, and it does not endorse the stage it describes. What it does is specify the conditions under which consciousness is now being formed, so that the practical question of what to do about those conditions can be asked with the right level of seriousness. The qualifying paper's contribution is to argue that the current stage is categorically new, that prior vocabularies do not fully reach it, and that the educational stakes of the transition are higher than the academic integrity framing suggests. The companion piece on pedagogical friction answers the what-to-do question. This piece has traced the theoretical architecture that made the what-to-do question askable in the first place.

§ · §

A non-teleological closing

A final caution the tradition itself insists on. Stage models risk being read as stories of progress in which each successive stage supersedes and improves on the one before. Ong explicitly refused this reading, and the paper preserves the refusal. Literacy enabled analytical detachment but diminished the communal and mnemonic capacities of oral cultures. Secondary orality retrieved communal qualities but within broadcast economics. Algorithmic secondary orality enabled participatory expression at unprecedented scale but fragmented the shared symbolic environment. Tertiary algorithmicity offers powerful tools for synthesis and production but threatens to bypass the cognitive labor through which understanding develops. No stage is simply better or worse than its predecessor. Each gains something and loses something, and the work of describing both gains and losses, for the current stage as for every prior one, is what the media ecological tradition is for.

The task is not to resist technology or to embrace it. The task is to see it for what it is, and to decide, with the clarity that seeing makes possible, what to preserve. - A synthesis of the tradition, written in its own voice

The questions the paper leaves open are empirical. How do educators navigate the preservation of pedagogical friction in practice? What forms of infrastructural support most effectively sustain the conditions for genuine learning? How does the distinction between productive and exclusionary friction operate for specific populations? The dissertation that follows will ask these questions in the field. The conceptual architecture that makes such asking possible is what the qualifying paper, and the companion interactives it has produced, have been assembling.