Intelligent artifice
The text of a talk on AI fear-mongering delivered at a private seminar in October 2023
On Wednesday I presented to the Actuarial Society’s Jubilee Convention, highlighting the numerous falsehoods and scientific frauds lying behind the Covid phenomenon and the mass vaccination project. The reaction—broad allegations of misinformation and conspiracy theory—was typical, failing to address even one of the many sources and facts we furnished. This absolute refusal to debate or engage with dissenting voices is emblematic of our times; a colleague, borrowing from Alexander Solzhenitsyn, labeled it the “spectacular commitment to lying”. Another colleague has conducted some amazing exercises showing how reluctant ChatGPT is to back down from patently false statements it readily issues, among others in the domain of the US government’s official policy on population reduction. ChatGPT is also “spectacularly committed to lying”.
Or is it?
When we witness such behaviour, are we seeing an algorithm manifest its own intentionality, or the manifestation of a programmer’s intentionality expressed through that algorithm? There’s no doubt it’s the latter. Furthermore, every application of the term artificial intelligence is misleading. The instantiations it is applied to would be better labeled automatons: automations of human intelligence. Although true artificial intelligence, a notion generally referred to as “artificial general intelligence” or “general-purpose intelligence”, is certainly possible, we lack the explanations necessary to bring it about. I’ll argue that it has no chance of emerging spontaneously, and that for that reason we have nothing to fear from the hypothetical bogeyman of a singularity, the future event where an artificial intelligence surpasses human intelligence. This event is almost universally described as a future calamity at the level of an existential threat. Nick Bostrom’s thought experiment is emblematic of this vein of thought. He posits a “paper clip maximiser”: an algorithm that, instructed to produce as many paper clips as possible, runs amok and eventually manages to turn all humans into paper clips.
It's time to expose some conjectures to criticism. If this talk appears overly concise, that reflects a desire not to repeat material from my previous talks here, “The epistemology of conservatism” and “Bach and the West”. Hopefully that does not come at the cost of making the train of thought appear more conjectural than it actually is.
Human brains, or more accurately, human bodies, are the hardware on which human minds run. Human minds are algorithms. Digital algorithms run on computers. The aim of the AI project is to create digital algorithms that share important characteristics with human minds. Because digital algorithms have the property of universality (a universal computer can, in principle, run any computable process), we can say that if you can’t program something, you do not understand it. In our existing state of knowledge, we have absolutely no idea how to program a quale, the subjective aspect of a sensation—for example, what purple looks like. This is partly because we do not know how to integrate qualia into our other knowledge. We also have no idea how to program creativity, meaning the ability to generate new knowledge, because we understand almost nothing about how creativity works. And we also have no idea what consciousness is. While consciousness would seem to be a necessary condition for the experience of qualia, we know from biology that it is not required for creativity, though it strikes me as unlikely that it plays no role in it.
The little we do know about creativity is that the only way in which knowledge can be created is through iterative conjecture and criticism—which is to say, by evolutionary processes. Structurally, the process by which knowledge is created in minds is neatly analogous to the process of genetic evolution, where the conjectures are the result of sexual recombination, and to a lesser extent, mutation. We can say that a creative thought of a human mind is an analog for a species-level genetic innovation.
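To make the conjecture-and-criticism structure concrete, here is a minimal sketch in Python of an evolutionary process of this kind. Everything specific in it, the target phrase, the alphabet, the mutation rate, is an illustrative assumption of mine, not something from the argument above.

```python
import random

TARGET = "general intelligence"  # illustrative goal; stands in for a problem to be solved
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def criticise(conjecture: str) -> int:
    """Criticism: count how many characters fail to survive comparison with the target."""
    return sum(a != b for a, b in zip(conjecture, TARGET))

def vary(parent: str, rate: float = 0.05) -> str:
    """Conjecture: copy the parent, with occasional random mutations."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )

# Start from a random conjecture and iterate conjecture-and-criticism.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while criticise(best) > 0:
    child = vary(best)
    if criticise(child) <= criticise(best):  # keep the less-criticised variant
        best = child
print(best)
```

The design point is that the variation step knows nothing about the goal; all the knowledge enters through criticism. The fixed target is of course a simplification: real evolution has no target, only criticism by the environment.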
All evolution takes place in a background reality that affects it—also referred to as a domain. Human evolution, including the evolution of minds, has the entire solar system as its domain. Simulations are a category of algorithms that seek to elicit evolutions operating on smaller domains. These smaller domains are chosen by the human minds programming them, so as to make the simulation feasible, and in the hope that the domain elements omitted will not cause the resultant evolutions to fail to describe reality.
Because we have no idea where to start when it comes to solving the problem of programming qualia, consciousness or creativity, the question we must ask is whether a simulation can be programmed that has a chance of evolving into an algorithm that shares important characteristics with human minds. This is a hard problem. We must start by acknowledging that human evolution includes the evolution of lipids, and then of the first lipid membranes—thought to have occurred around suboceanic volcanic vents. It also includes the evolution of the first nucleotides, and then of the first DNA codes, at which point a jump to universality had been achieved.
Since that point, the language of nucleotide triplets, or codons, ceased to evolve, and the algorithms for all lifeforms have been programmed in that language. No minds have ever run on hardware the manufacture of which has been programmed in any other language. The jumps from prokaryotic to eukaryotic life and the evolution of mitochondria also appear to be discrete and permanent jumps that massively increased the energy budgets of organisms, and which have never been improved upon. The rest of evolution is a massively complex story of interaction between those organisms and the solar system—primarily Earth and its local star, the Sun. Somewhere along that journey creativity emerged, probably not in a single jump, since we observe it in other mammals, even if not at any general level. Dogs, for example, often create the explanation that door handles open doors.
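As an aside on what it means to call the genetic code a language: the toy Python fragment below translates a DNA string codon by codon, much as an interpreter reads a program. The table is a genuine, if tiny, excerpt of the standard genetic code; everything else is an illustrative sketch.

```python
# A tiny fragment of the genetic code: DNA codons (nucleotide triplets)
# mapped to amino acids. The full table has 64 entries; this sketch
# includes only a few for illustration.
CODON_TABLE = {
    "ATG": "Met",  # also the "start" instruction
    "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three letters at a time, like an interpreter."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCGAATAA"))  # ['Met', 'Phe', 'Gly', 'Glu']
```

The jump to universality is visible even in this fragment: any protein whatsoever is expressible as a string over the same four letters.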
How much of that history and how much of that granularity can we ignore if we seek to construct a simulation with a chance of evolving into a general-purpose intelligence? Can we, for example, ignore or generalize subatomic particle interactions? Can we ignore atoms? Chemical interactions? I suspect not. If I’m right, what kind of computer would that simulation run on? What would the size of that computer have to be? How many electrons does it take to model one electron? What I’m pointing to here is that the instantiation of a model of reality might not be that different in scale and speed from reality itself.
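A back-of-envelope calculation suggests how badly the numbers scale if quantum detail cannot be ignored. The standard estimate, and it is only an estimate, is that the state of n interacting two-level particles takes on the order of 2^n complex amplitudes to store classically; the 16 bytes per amplitude below is an arbitrary assumption for illustration.

```python
# Back-of-envelope: classical memory needed to store the quantum state
# of n two-level particles. Each of the 2**n amplitudes is taken to be
# a complex number of 16 bytes (an assumption for illustration).
BYTES_PER_AMPLITUDE = 16

for n in (10, 50, 100):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n:>3} particles: {amplitudes:.3e} amplitudes, "
          f"{bytes_needed:.3e} bytes")

# 100 particles already demand ~2e31 bytes; and there are roughly
# 1e80 particles in the observable universe left to simulate.
```

At a hundred particles the memory requirement already exceeds anything buildable, which is the point: a model of reality at this granularity is not obviously cheaper than reality.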
So my intuition is that no simulation likely to bring about the evolution of general-purpose intelligence is plausible, even though we can say with certainty that it is possible, not least because we know it has happened once. And since no non-evolutionary development of a general intelligence is possible, I infer that there is no cause to fear a singularity.
Sadly, that is where the good news ends, because there is much to fear in the domain of technology, and therefore much courage to be located.
I’ve argued elsewhere that the Covid phenomenon can only be understood if approached as a political, not a medical event, and I turn to it to illustrate these fears.
Let’s consider another neat analogy. Almost everybody readily interpreted the relationship between the storied Gupta family and organs of the South African state as corrupt and harmful. Yet the institutional capture effected by Pfizer, which played out worldwide, was identical, and few saw it this way. Payments were made, tenders and contracts were fraudulent, the taxpayer shouldered the bill, and fraudsters, regulators and their agents brought home the bacon. While nobody in their right mind even thought to ask whether the goods and services provided by the Guptas to the state were safe and effective, most people readily accepted the claim that those provided by Pfizer were.
What accounts for the near universal failure of elites to see this analogy as valid, and to sound the alarm? How were they tricked into being injected with products that Pfizer’s own data had shown to cause net harms at each and every plausible level of all-cause clinical outcome?
The explanation for this lies in the architecture of Pfizer’s spectacular commitment to lying, which was far more sophisticated than the Guptas’, as Pfizer and its allies had captured media institutions globally. [Play video]
I have a general heuristic regarding the permacrisis. If a problem is presented as a global crisis, admitting only global solutions, amid suppression of dissent, then that global crisis is a scam. Covid, the climate change crisis, and the racial and trans discrimination alarms all tick the boxes and are therefore scams. So does the notion of Putin’s attack on Ukraine being unprovoked. And so do a long list of global crises curated by the World Economic Forum and by the United Nations under the rubric of its sustainable development goals, or SDGs.
The singularity crisis definitely ticks the first two of those three boxes, but proposals to regulate the internet and AI research so as to stave off the bogus singularity threat are common, so the threat of ticking the third is alive and well.
The singularity bogeyman diverts attention from the far greater danger of media capture, exacerbated by the control of information that centralization and monopolization of the airwaves make possible. All technologies are subject to weaponization, and human history can be well described as a series of races to command first-mover advantage in the deployment of automation, or to catch up in the technology of countermeasures. The epic battle between good and evil continues according to this formula, and we should not fall into the trap of believing that “this time is different”. Our battle is an antitrust case; our war, a fight against centralization. One technology we should develop is an algorithm for detecting fabricated narratives and rendering them less potent; the scam-detection heuristic I propose is a start in that direction, as the sketch below suggests.
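As a deliberately crude first pass, the heuristic can at least be written down as a predicate. The Python sketch below is mine, not an existing tool; the three boolean inputs are judgment calls that no code can automate, which is precisely where the hard work lies.

```python
from dataclasses import dataclass

@dataclass
class Narrative:
    """A claimed crisis, scored on the three boxes of the heuristic."""
    name: str
    presented_as_global_crisis: bool
    admits_only_global_solutions: bool
    dissent_suppressed: bool

def looks_like_a_scam(n: Narrative) -> bool:
    """The three-box heuristic: all boxes ticked means scam."""
    return (n.presented_as_global_crisis
            and n.admits_only_global_solutions
            and n.dissent_suppressed)

# On this scoring, the singularity narrative ticks the first two boxes
# but not (yet) the third.
singularity = Narrative("the singularity", True, True, False)
print(looks_like_a_scam(singularity))  # False, for as long as dissent is tolerated
```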
I look forward to discussing this with you.
>Human minds are algorithms.
Stop right there!
This is a fundamental and unquestioned assumption stemming from 19th century thinking and made fashionable by the era of computers. But is it actually valid?
An algorithm is purely deterministic and finite in nature. That is the very definition. If you understood this, you'd realise that algorithms cannot give rise to minds (consciousness), but only automation. Just as a machine cannot create energy out of nowhere (it can only transform it), an algorithm cannot create information from nothing. In fact, algorithms can only deterministically transform information from one form to another, which, if you think about it, is all that computers do.
And just what the hell is "information" anyway? We say, "Oh yes information is facts and figures and statistics and such like," but it is rarely understood.
For example, is information a property like energy, which can neither be created nor destroyed? Do algorithmic processes dissipate information, just as physical machines dissipate heat? Does a “particle of information” have mass (some, like Melvin Vopson, argue that it does)? And can information actually exist without mind? Indeed, does knowledge require a knower? And what does it mean to "know" something?
Roger Penrose says that consciousness is the ability to break the rules, and argues in his late-1980s book, “The Emperor’s New Mind”, that the human brain is not algorithmic but has deep connections to the quantum-mechanical world. And the quantum-mechanical level is where things get very weird when it comes to "knowing things".
At its heart, the universe is not deterministic in nature. The implications are more than fascinating, as described below:
https://informationphilosopher.com/freedom/laplaces_demon.html
You are correct in saying that AI is basically automation, but you have not understood the reasons.
Moreover, and I know this is contentious, I would argue that it is not atoms and quarks that are axiomatic in nature, but consciousness itself.
Thank you Nick for an enlightening article.
I believe that the idea of AI is used as a tool to provoke argument and agitation against anyone opposing the narrative of progressive, left-leaning liberals.
It's a tool for distraction.