Reclaiming the Luddite Legacy
The Boston Institute of Pseudo-Intellectual Systems positions itself at the vanguard of a New Luddism. This is not the caricature of smashing machines, but a critical, philosophical stance against the determinism and hidden politics of algorithmic systems. We argue that mainstream 'AI Ethics'—with its checklists for fairness, transparency, and accountability—is a pacifying discourse, a way to make algorithmic governance palatable and manageable without challenging its fundamental premise: that more aspects of human life should be mediated, optimized, and decided by opaque computational processes. Our critique goes to the root: we question the very desire to outsource judgment to machines.
The Theology of the Algorithm
Our analysis frames contemporary belief in algorithms as a secular theology. The Algorithm is the omniscient, inscrutable god; data is its scripture; Silicon Valley engineers are its priesthood; and 'efficiency' is its supreme virtue. Faith in this system demands the suppression of human ambiguity, intuition, and the right to be wrong. We deconstruct the language of this faith: 'machine learning' (anthropomorphizing the statistical), 'neural networks' (implying a brain-like wisdom), 'artificial intelligence' (a blatant category error designed to awe). Our task is to perform a close, skeptical reading of this techno-theological text, exposing its gaps, its miracles, and its demand for unquestioning devotion.
A central project is our 'Liturgy of Error', a curated collection of algorithmic failures—racist facial recognition, biased recidivism predictors, suicidal chatbots. We present these not as bugs to be fixed, but as revelatory moments, akin to miracles in a negative theology. They show the true face of the system: its embedded biases, its literal-minded stupidity, its catastrophic failure to understand the human world. We then contrast these failures with the lavish, science-fictional promises of AI proponents, framing the gap as a form of ideological mystification. Our writing on this topic is deliberately emotional, using terms like 'algorithmic violence' and 'digital despair' to counter the cool, technical language of the engineers.
Practices of Technological Asceticism
Beyond critique, we advocate for and practice a form of technological asceticism. Institute communications are often conducted via physical memo, delivered by internal mail cart (a retired library trolley). Data analysis for our projects is performed on manual calculators and presented as hand-drawn graphs. We have a 'Scream Room' padded with obsolete computer manuals where Fellows can vocalize their frustration with digital systems. More practically, we run 'Algorithmic Literacy for the Bewildered' workshops, teaching people not how to code, but how to sense when an algorithm is shaping their choices, and how to invent tiny acts of non-compliance—like always clicking the last item in a recommendation list.
The Institute's Analog AI Project
In a move of supreme irony, we are developing our own 'Analog Artificial Intelligence'. It is a room-sized, Rube Goldberg-esque apparatus of pulleys, levers, bowling balls, and hamster wheels, designed to 'compute' answers to philosophical questions. A query like 'Is free will an illusion?' is encoded by positioning weights on scales; the machine clatters and rolls for hours before a marble drops into one of several cups labeled with possible answers. The point is not the answer (which is random), but the process: it makes computation slow, visible, noisy, and physically absurd. It re-embodies the abstract, demystifying the black box by building a transparent, ridiculous one. The project has been called a waste of time, which we accept as its core feature.
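In the spirit of the apparatus, its decision procedure can be parodied in a few lines of code. This is a hypothetical sketch: the function name, the answer cups, and the "clatter" delay are all invented for illustration. The punchline is the point: the digital version returns in microseconds, and in doing so loses everything the machine was built to demonstrate.

```python
import random
import time

# Invented cup labels; the physical machine's cups bear whatever
# answers the Fellows have painted on that week.
ANSWER_CUPS = ["Yes", "No", "Mu", "Ask again after lunch"]

def analog_ai(query: str, seconds_of_clatter: float = 0.0) -> str:
    """'Compute' an answer to a philosophical question.

    The query is encoded and then, as in the original apparatus,
    ignored: the answer is random. Only the slow, visible, noisy
    process gives the machine its meaning, which is exactly what
    this sketch compresses away.
    """
    time.sleep(seconds_of_clatter)  # hours of pulleys and bowling balls, elided
    return random.choice(ANSWER_CUPS)

print(analog_ai("Is free will an illusion?"))
```

That the whole contraption reduces to `random.choice` is, of course, the Institute's argument in miniature.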
Looking ahead, our Neo-Luddite Working Group is drafting a 'Letter to the Future' to be preserved on clay tablets, warning against the seduction of cognitive outsourcing. We are also planning a 'Festival of Inefficiency', celebrating slow, manual, and gloriously wasteful human processes. In a world racing toward automated everything, we champion friction, hesitation, and the dignity of the un-optimized life. Our critique is ultimately an aesthetic and ethical one: we find beauty and meaning in the human scale, in the mistake, in the effortful, and in the conversations that happen when the screen is off. We are not against technology, but against the empire it builds in the mind, and we fight that empire with the only weapon that seems to resist digitization: deliberately convoluted, passionately argued, magnificently useless thought.
- Manifesto: 'The New Luddite Charter (Draft)'.
- Demonstration: The Analog AI Machine (public viewings Tuesdays).
- Resource: 'A Taxonomy of Algorithmic Failures'.