Mysticism of Method
One of the primary challenges of the present age is the spurious conflation of form and substance.
There are different versions of this fallacy. It is articulated in the parlance of familiar dialectic: “spirit vs. letter” or “style vs. substance”. Today’s vogue is to presume, erroneously, that a proxy is as good as the thing for which it is supposed to stand. Taleb characterizes a related manifestation of this tendency in his exposition of the “ludic fallacy”.
Another version of this confusion of thought manifests in the farcical chaff of the freedom-of-speech debates. The point of freedom of speech is epistemic; it is not an absolute license. The idea is that by providing for free discourse we are able to hear ideas which might be true but which, owing to various social factors, might be suppressed or jeopardized; it is not primarily a point about unfettered liberty.
So with the gun debate: why is it important for the populace to have guns? Ostensibly, so that we can defend ourselves from tyrannical threat (setting aside the obvious challenges to this assertion in light of modernity) or otherwise; not because it is a divinely appointed writ.
With Covid-19: the map is not the territory. The model is a representation of aspects of reality we can make tractable to scientific rendering, not the reality itself. Experts, being people, get it wrong, and credentials are not the same as conferred wisdom (as we learned with masks and airborne transmission).
Again and again, we find the confusion we might term the ‘mysticism of method’. One is reminded of Feynman’s “Cargo Cult Science” address. The suggestion is that slurring over the consideration of substantive causal relations by appeal to method, form, or supposed authority is not a reliable means to intended efficacy. It is important to point out the centrality of ignorance to this seduction.
Perhaps nowhere at present is this fallacy more critically destructive than in the reckless scaling of machine learning systems in the mediation of complex social affairs. Such systems are often purported to provide valid inferences about their supposed domains. The problem is their tendency to dubiously sever inference from the legitimate conditions of human meaning, and from the actual states of (natural) affairs of which the former are a class. Such systems tend to the Procrustean when applied to complex social activities, especially those of custom, precisely because they are ill-equipped to account for the unmitigated complexity of social interaction and environing conditions. The simulation is not the real McCoy. While circumstances of determinate reference are often tractable (e.g., whether or not someone is likely to be pregnant, with the usual statistical caveats), it is not an infallible characteristic of events which is being adduced.
A narrowed scope, artificially induced to be computationally tractable (and thus more likely to be commercially viable), is not a substitute for reality. Flippancy about this fact can be fatal in complex social relations, in matters of culture. In fact, such scoping institutes a profound distortion, one liable to cut both ways: in what is left out, and in what is included via artificial construction. The complexity and texture of reality, its interdependencies, contours, and nuances, are far vaster than what is computationally tractable with present means. Models can be more or less arbitrary, and remarkably accurate within some specifiable regimes. But to deify the principle of close approximation as if it represented some antecedent reality is the crest of folly, hubris, and enmity to authentic humanity. Such mysticism is a malign form of ignorance which claims to be true; often aware, even if dimly, of its shortcomings, yet insistent upon its superiority. Sound familiar?
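The point about regimes can be made concrete with a toy sketch. Everything here is illustrative and my own invention (the synthetic `world` function and the linear fit are stand-ins, not anything claimed by this essay): a model fit within a narrow regime looks remarkably accurate there, and fails badly the moment it is asked about territory it never saw.

```python
# A toy model that is accurate within its fitted regime and wrong outside it.
import numpy as np

rng = np.random.default_rng(0)

def world(x):
    # The "territory": a process the model only approximates.
    return np.sin(2 * x)

# Fit a straight line to noisy observations drawn from a narrow regime.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = world(x_train) + rng.normal(0.0, 0.05, 200)
slope, intercept = np.polyfit(x_train, y_train, 1)

def model(x):
    return slope * x + intercept

# Inside the training regime, the approximation looks impressive...
x_in = np.linspace(0.0, 1.0, 100)
err_in = np.mean(np.abs(model(x_in) - world(x_in)))

# ...outside that regime, the same model is badly wrong.
x_out = np.linspace(2.0, 3.0, 100)
err_out = np.mean(np.abs(model(x_out) - world(x_out)))

print(f"mean error in-regime:     {err_in:.3f}")
print(f"mean error out-of-regime: {err_out:.3f}")
```

The numbers are beside the point; the shape of the failure is not: the model is faithful only to the slice of the territory it was allowed to see.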
Machine learning is good if you want to know what bird is chirping; it is bad if you want to characterize sufficiently complex social meanings. As a technique it may be convenient for selling ads, but in practice many of its social implications are harmful, treated as sacrificial wayside.
Insisting on shaping thought by way of automation based rigidly on the statistical properties of past behavior in a rapidly changing environment is worse than dumb; it severely hampers the obvious need and capacity for creative adaptation. While the latter, of necessity, maintains elements of the fixed and routine, it remains significantly dependent upon sensitivity to that which is not yet perceived: ideas of possibility and potential not found in the annals of past endeavor. It’s not hard to figure out why the “social” network world feels like a horrible revanchist carnival ride. We are automating living in the past because it makes ad money for large tech companies, something Zeynep Tufekci previously pointed out in characteristically prescient fashion.
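The hazard of freezing statistics of past behavior has a standard name in the field itself: concept drift. A minimal sketch, again on wholly hypothetical data of my own devising: a decision threshold learned from yesterday’s distribution keeps scoring well so long as the world resembles yesterday, and collapses to chance once the world moves on.

```python
# A decision rule frozen on past statistics, evaluated under drift.
import numpy as np

rng = np.random.default_rng(1)

def sample(n, drift):
    # At a later time, both the feature distribution and the behavior
    # the label tracks have drifted together by `drift`.
    x = rng.normal(loc=drift, scale=1.0, size=n)
    y = (x > drift).astype(int)
    return x, y

# "Train": learn a threshold from historical data (no drift yet).
x_hist, _ = sample(5000, drift=0.0)
threshold = np.median(x_hist)  # a frozen statistic of past behavior

def frozen_model(x):
    return (x > threshold).astype(int)

# While the world still resembles the past, the rule performs well.
x_now, y_now = sample(5000, drift=0.0)
acc_then = np.mean(frozen_model(x_now) == y_now)

# After the environment drifts, the same rule is near chance.
x_later, y_later = sample(5000, drift=3.0)
acc_later = np.mean(frozen_model(x_later) == y_later)

print(f"accuracy, world like the past: {acc_then:.2f}")
print(f"accuracy, world has drifted:   {acc_later:.2f}")
```

In a toy like this the remedy is trivially to refit; in social life, where the “drift” is people creatively adapting, there is no stable distribution to refit to, which is the essay’s point.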
It’s telling that the industry is riddled with deceptive language. You’ll hear industry representatives describe “neural networks” as loosely based on actual neurons; like so many of the field’s terms, the description is generous at best and deceptive when offered with candor. The commonly employed term “artificial intelligence” (sometimes conflated with machine learning) is wildly misleading; the joke being Artificial in bold, “intelligence” in quotes. It’s an awfully cavalier phrase for a field that can’t even settle on the basic traits of what constitutes intelligence! Current prevailing machine learning systems cannot, in fact, be intelligent in principle, for reasons I’ll unpack in detail on another occasion. But it will suffice for present purposes to point out the endemic conflation of concepts regarding computation generally; see Carol Cleland’s “Concept of Computability”. Of course, my argument does not rest upon consideration of what Turing machines, nor their specific instantiations or interpretations, tell us (or not) about computability. Artificial might be the right modifier, but not in reference to an honest conception of intelligence. Perception, or awareness of meanings, as implicated in learning, is not something present machines do. These machines don’t learn; they run operations they are engineered to execute, and people may learn based on the resultant outputs. The figurative sense in which such machines are said to “learn” is a product of overzealous imputation, confusion of language, and PR pitch. It is not simply a misnomer; it is a myth. Knowledge is implicated in intelligence, and neither can credibly be asserted as a quality of today’s machines except by fiat.
The penchant for deceptive language is consistent with the field’s frequent obfuscation of basic facts about its business operations. The characteristic tendency of the most domineering in the field is to over-promise and under-deliver, externalize the costs, and obfuscate the results. The exceptions prove the rule in the idiomatic sense (not as a logical argument). That the field is prone to exaggeration, myth, fibbing, outright lies, and frequent a priori dictum (or just the usual wishful thinking and fallacious reasoning, particularly regarding the problem of induction) is informative: it helps to explain the yawning gap between the utopian pipe dreams that were promised and the proto-dystopic fever dream so ardently wrought, wittingly or not. There are some practitioners in the field who cannot mean what they say because they do not know what they are saying and could not tell you what it means. And for this we all suffer.
I’m reminded of a remark by Scott Aaronson about the distinct but related field of quantum computing, in which he describes the challenge of “draw[ing] the line between defensible optimism and exaggerations verging on fraud.”
I’d like to close by asserting the relevance of the following passage from William James: “conceptual treatment of perceptual reality makes it seem paradoxical and incomprehensible; and when radically and consistently carried out, it leads to the opinion that perceptual experience is not reality at all but an appearance or illusion.” The roots of these dilemmas go back at least to Plato, something explicated by many more esteemed than I. But this I leave for another day.
We need creative adaptation with respect to conditions and their consequences, most pressingly in the complex of cultural and social life. Nonsensical objects of inference recklessly introduced into the mediation of society (in severance from the actual affairs of vital social life) jeopardize the very conditions by which culture can grow. It is a cruel irony that by installing artificial “intelligence” as a mediatory office of social life we have hampered society’s ability to adapt intelligently to a rapidly changing world.
Who wants to live in the past when we have the potential to create a present that is so much more life-giving?
Please do notice where I have strayed into the very fallacy I propose to address. We shall benefit from sincere mutual criticism and respectful conversation. Better that we not wallow in the soft bigotry of low expectations. Let us not pretend we do not know the nature of the problems we face, nor the stakes of failing to address them.
We will continue to require machine “learning” applications for some of our most pressing challenges, especially defense, accessibility, and studying the natural world. But we are going to have to get better at exercising discriminating judgment as to the proper role of such efforts when it comes to complex social interactions.
Shout out to the increasing ranks of professionals working within and without the field to defy the above characterization and deliver meaningful results for people and planet in a responsible fashion. Your work is more important than ever.