The AI Revolution Already Took Place
The most interesting thing about modernity is the degree to which it depends, for its basic functioning, on generating a constant sense of novelty.
On such novelty depends not only such trifles as human life and livelihood, but also "the economy," "politics," and, perhaps most importantly of all, the ever-growing Internet-conspiracist-Take-Worker sector of the global economy.
To easily grasp what defines "modernity," I often point out to students that in Latin, as in most ancient languages, the term "new" normally has negative connotations--and can be otherwise translated as "strange," "rash," and even "revolutionary." In itself, this is far closer to a sort of human baseline response to novelty as such. Most ancient societies realized that "new things" were almost by definition disruptive things, things that created complications for the social networks and institutions they valued so highly, and thus hardship and suffering and conflict. Families and institutions and Empires alike run on the old, and are thus largely and inevitably run by the old--especially in Rome, but increasingly in America as well. And as the recent disgusting wall-to-wall press coverage of the anniversary of overturning Roe v Wade reminds us, for institutions and established powers of all kinds, new things, and new people, always cause problems.
Merely saying that contemporary societies are the opposite of this, and regard novelty and the new as positive, though, is insufficient and somewhat deceptive. Certainly, modernity features any number of "progressive" narratives and theories and philosophies and theologies whereby what is new is always and by definition good, no matter what. Many popular works of progressive narrative and theory are, in fact, nearly comical in the degree of religious and moral fervor with which they openly acknowledge and glory in the enormous conflict, social and familial disruption, and even violence that result from a given new trend, while still dogmatically insisting on that trend's goodness and the absolute moral necessity of embracing it and encouraging it and never questioning it at all. Yet even here, it would be easy to misunderstand the actual content and basis of the belief.
To understand the history of the last few hundred years, one has to understand, first and foremost, that the negativity and conflict generated by modernity and modern trends is, in practically every case, not the result of "anti-modern" or "reactionary" or even "conservative" forces, but merely the inseparable twin and means of modernity itself. It is not, as one might expect, consistently and inevitably the progressive forces that advocate for novelty and portray it in positive terms, and the anti-progressive forces that portray it in negative terms. Rather, in almost every case, the novelty and its reaction are simultaneous and inseparable.
To give an obvious example, science-fiction taken as a whole is without a doubt a "progressive" and "modern" genre, yet the bread-and-butter of science fiction since its first days has been horror stories about technology and its negative consequences, demons and mad clones and evil androids and nuclear apocalypse and genetic engineering and Morlocks and erasing your family from the timeline. Frankenstein is the first modern science fiction novel precisely because it is nearly the first work of art to make extensive use of the terminology and concepts of modern science for primarily aesthetic purposes: and the aesthetic purposes to which it puts science are silence, distance, isolation, fear, and incalculable moral horror.
Dystopia is not an opposite narrative mode to utopia, composed by different authors for contrary purposes. Nor is science horror opposite to science excitement. The Twilight Zone and Flash Gordon, Isaac Asimov and Ray Bradbury, Ray Bradbury and Ray Bradbury, George Orwell and L. Ron Hubbard, Gene Wolfe and Gene Wolfe, Star Trek and Black Mirror...all accept the radically new in science and technology as powerful and inevitable and beyond any rational control or regulation; all use this assumed reality both for aesthetic strangeness and horror and for aesthetic excitement and novelty and positivity. The same society, the same genre, even the same people produce both modes.
And in just the same way, a conspiracist or alarmist narrative about how a new technology or social trend will destroy the world is not, in practice, the opposite of a progressive or "pro-science" narrative about how a technology or social trend is "cool," must be embraced at all costs, and/or will save us all. The two are in most cases sponsored and paid for by the same outlets, consumed by the same people, even at times created by the same people.
Again, there is a sense in which all this, both negative and positive modes of conceptualizing novelty, are distinctively modern, but also a sense in which this is simply a universal human reaction to the truly and radically new, which always offers powers and possibilities and experiences and threats we have no prior experience with and so do not understand and so are not morally and intellectually equipped to handle, and so always to some extent moves us into an aesthetic space of excitement and horror and distance and alienation and strangeness.
This is not in itself what makes modernity modern. What makes modernity modern is that both the "goodness" or "positivity" assigned to new things, and the "badness" or "negativity" assigned to new things, do not follow the typical senses of those words, which in most human languages and contexts emerge from morality and/or human comfort and prosperity and aesthetic preference. What defines modernity, rather, is precisely the sense underlying both that these novelties have truly and permanently and almost definitionally eluded the grasp of any human understanding or reason.
Hence, the concepts of goodness and badness applicable to these novelties end up representing something much closer to a metaphysical or definitional claim. What is new is good not in the sense in which, say, food or drink or shelter are good, or Star Trek Generations is good, but more in the sense in which a metaphysical principle or a law of physics or an ancient Mesopotamian god is good. Likewise, what is new is bad not in the sense in which, say, being mean to your sister is bad, or Marvel Avengers Infinity War Endgame is bad, but more in the sense in which a metaphysical principle may be bad in its implications for your own life, or a law of physics may cause you to fall unexpectedly off a cliff, or an ancient Mesopotamian god may wipe out your city and your family in an excess of spleen. Or, in other words, and in both cases, because it is fundamental, because it is inevitable, and/or because it is powerful.
At the heart of modernity, then, is a kind of worship of inevitability and power as such, derived ultimately from a sort of immanentization into history of a metaphysical divinity transcending human reason and morality and identified with novelties good and bad.
Here, though, is the problem with the worship of novelty, power, and/or inevitability as such. Metaphysical principles and laws of nature and even Mesopotamian deities are things that, by their nature, tend to be transcendent, not just temporarily but permanently beyond our reach and comprehension. Novelty, power, and inevitability, on the other hand, are things that can inhere in anything and everything, and things that by their inmost nature do not have much of a shelf-life. Something can be divine forever; it can be novel only for a few minutes or a few days or perhaps a few years at best.
Most new things are only new in one respect, and then not new for very long; most inevitable things are not really inevitable at all, only very probable, and in constant danger of becoming un-inevitable; powerful things are only powerful from one angle and in one context and to some limited degree. As aesthetic effects, all suffer enormously from the basic hedonic treadmill effect. Maintaining a sense of novelty or power or inevitability at the center of a personality or a culture, then, requires an enormous and constant expenditure of time and attention and resources to find these qualities, demonstrate them, and finally give up on the current entity and start the process all over again.
And then, of course, even then most of the time finding actual genuine novelty, power, or inevitability is too hard, and in practice people simply settle for the aesthetic effects that suggest it.
AI is Not New
Of course, we all know where this is going (from the title at the top of this blog). AI! AI!
Everyone knows that AI is the "next big thing." We know this both from the articles and thinkpieces arguing that it is good and will help us to generate a new utopia and make more money, and from the articles and thinkpieces arguing it will disrupt the economy and drive us all mad and end the world. We know it above all else from the way it crawls, crab-like, into the gaps in the conversation of even ordinary people, and from the way it shuffles zombie-like into the squarest forms of art, like newspaper comics and SNL, as a "topical reference." We know it even from the way in which it has entered "politics," like a celebrity CEO, staring impassively at the C-SPAN camera from the floor of Senate committees and printed on a T-Shirt given to the Prime Minister of India. When even Modi, the world's most evil politician, can deploy AI in a cringeworthy pun about how "everyone knows the future is AI" but the other AI that's also the future is the one that stands for "America & India," then the novelty has well and truly been taken up as the global novelty of the moment.
I do not really have all that much to say about the AI Revolution. I suppose I know more about the basic technical aspects of AI than most people (though much less than true technicians) simply from having studied AI in college in the 2010s and learned about machine learning and "neural nets" and the various methods and theories involved from a very enthusiastic professor who hoped that one day "hard AI" (in the sense of the creation of human-like intelligence and consciousness) would be possible, but who was intellectually honest enough to also give us John Searle to read. I don't really think having a basic knowledge of how modern AI systems function is all that helpful for grasping how people talk about AI in a contemporary context, though. At the time, I read John Searle and studied philosophy and spent hours and hours programming various complicated systems and came to the very obvious and indisputable conclusion that this was all philosophically the same as all other algorithmic computer systems and had no more or less to do with human intelligence and human causal powers and human substance than a water wheel. But that has really nothing to do with the AI Revolution one way or the other.
The idealistic arguments around Hard AI are very much in the past at this point. For all intents and purposes, AI now simply means the ability to create algorithmic-logical structures (the basic theory and structure of which have been known for decades) that thanks in part to algorithmic refinements and in part to vast technical increases in storage and processing power can be "trained" in a reasonable amount of time on the enormous incalculable mass of already-digitized text and words and images and content on the Internet and then spit back out reasonable facsimiles and combinations thereof. Such systems, like all technological systems, require lots of technical support and have certain very obvious uses and also certain very obvious and arguably insuperable problems. Thus "AI" in this sense is already used to automate certain tasks by people who have to come up with lots of digital text very quickly, but already has to be checked over by an actual human lest the fact that the AI composes with a total disregard for content and truth become too obvious.
In this sense, AI is no more or less novel or threatening than any technology that automates tasks for human beings. That is to say, it certainly is threatening inasmuch as it is inherently destructive of meaning and content and insofar as it is relied on in areas where meaning and content are important. It is likewise threatening inasmuch as it is used to automate tasks that are very important and require human judgment and where mistakes can lead to large amounts of suffering and death. It is also threatening inasmuch as it threatens to take away jobs from people whose jobs consist largely or entirely of coming up with large masses of text and images regardless of meaning and content--of which there are a fair number in the current economy.
It does not, however, represent an apocalyptic threat to America or the World As We Know It. The reason for this is, as already stated, that AI is not new. It is also for broader reasons having to do with technology and its effects on human life.
Technological Disruption
Abstract from the particulars for a moment. As I discussed in my mega-post on technology as such, no technological system is universal or inevitable, but every technology reifies and extends the will or preoccupations or goals of a given person or society.
The classic examples of automation being economically disruptive all have to do with areas where there was already a significant demand and social preoccupation that could only be fulfilled by laborious human effort.
Perhaps the most common and classic example is the garment industry. People wear clothes for both practical and cultural reasons; and it is certainly true that while such clothing is not enormously difficult to produce, in most pre-modern societies ordinary homespun clothing was always much rougher and coarser and therefore less comfortable than professionally-produced clothing (a fact frequently commented on in Christian ascetical texts).
These basic practicalities, however, were not only or even largely responsible for the European fixation on elaborate and luxurious clothing in the Renaissance and Early Modernity. Rather, the fixation on luxury clothing had as much or more to do with the rise of new, more hierarchical social structures and international trade leading to a new class of nouveau riche occupying political and economic power centers, as well as with the actual laboriousness and difficulty (and therefore cost) of producing fine clothing using rare materials dyed bright colors with expensive dyes and adorned with rare metals or jewels and cut in fashionable shapes. Clothing thus functioned for the new classes of wealthy people dominating society as a potent status symbol, as well as a physical way of displaying and even wearing one's wealth. It even functioned to a degree as a way of storing one's wealth, as individuals and institutions (even ecclesiastical institutions) would stockpile expensive clothing precisely to save and invest their wealth in an easily-accessible way likely to appreciate rather than depreciate over time.
A massive garment industry thus arose that produced such clothing in the required amounts and to the required degree of luxury using massive amounts of human labor. Such industries can function in many different ways according to different economic and political systems. In antiquity, slavery allowed tasks involving large amounts of relatively unskilled labor to be performed in a fairly economical if very top-down way. In some Medieval societies guilds dealt with this labor in a more distributed way, while increasingly in Early Modernity corporations and small businesses vied with each other to distribute materials and labor into households and collect the results.
When industrial technology allowed the tasks related to clothing production to be automated and centrally carried out far faster, this fundamentally transformed and disrupted the human relationship with clothing and its production. From a task requiring large amounts of skilled, directed human effort and labor, clothing manufacture shifted to a technological system that still required enormous amounts of labor, but shifted this labor to technician and technical-service labor on and in service of machines. Slowly but surely, the social and economic and even political structures around clothing and its production shifted as well. Entire traditions and methods of dress and garment production and entire sectors of the economy were wiped out and lost forever. People who had existed in certain economic and social relationships now found themselves in totally different ones, masses of urban poor migrating to major cities to work in factories. Clothing in the long run ceased to function as a wealth-display and wealth-storage method, and so people began wearing ugly black suits and top hats and/or valuing the fashion or design of clothing above its production quality or material. And so on and so forth.
Let us leave to one side for the moment the question of the goodness or badness of this change. What I wish to note here is merely that the fundamental disruption to society and labor and the economy was not primarily quantitative but qualitative. It was not merely that people could make so much money as a spinner or weaver and only so much as a worker at a factory. It was not even just that people had so much independence or control as a skilled garment worker and much less as an unskilled factory worker. What happened in the Industrial Revolution and in other large-scale economic disruptions due to technology was that, fundamentally, it was the nature of labor itself that was altered. From being laborers and producers in a more traditional sense, people became technicians and servants to a technological system.
This technological system was not inevitable--it in fact entirely followed from and aped and extended the already revolutionary social and political changes that had put capital and wealth and land and political power into fewer and fewer hands backed up by legal privileges and political structures like the joint-stock corporation and economic structures like international commodity trade and the incipient financial system and colonial slavery and plunder. It took these trends and the ideas and structures behind them--which had already impacted the garment industry to an arguably even more revolutionary degree in purely economic and social terms--and extended them farther than ever before, instantiating and reifying them into practical and powerful systems that subordinated workers and society to their control in a fundamentally new way.
Fundamentally, though, the degree of disruption was caused precisely by the degree to which the new technological systems fundamentally altered the livelihood and way of life of numerous people. It is by such standards that we have to judge any new technology, including AI.
What is an AI, and What Does it Do?
Fundamentally, AI allows tasks that were already technological and Internet-based and attention-based and content- and meaning- independent to be done more quickly and with less labor. That is all that it does. In the context of 2020s America, it is not a revolutionary technology, but if anything a rather conservative one.
The Internet was a truly revolutionary technology--one that extended the symbolic media landscape of television and mass-printing to a drastically greater degree than ever before into the very fabric of people's lives and minds and imaginations and habits, fundamentally altering the livelihoods and ways of life and bodies and souls of most people in America, forever.
Originally, the Internet subsisted on "user-generated" content that was also, largely, user-found and user-collated and user-presented--making it a space where the traditional economic categories of "worker" and "user" were to a real extent blurred. People created their own websites with their own text and videos and art and pornography, or created platforms where people could share those things with each other. People found these websites on their own, and shared their creations and commented on them. Naturally, the Internet itself was a technological system requiring huge amounts of effort and labor and many technicians to make function, but it largely operated not as a traditional corporate media production or distribution enterprise, but as a user-directed and user-funded service.
The real "AI Revolution," if such a thing ever happened at all or will happen, took place merely when this technological service was gradually, but relatively quickly and largely inevitably, taken over by the existing advertising-corporate economic system already dominating American life.
The Internet became a platform for advertising, and therefore a platform for making money off of attention as such. From being a largely user-directed space, it became a space where corporate workers performed the same tasks for pay, creating art and text and images and video and sharing and presenting them to an ever-ballooning number of ever-more-passive "users."
All the disparate forms of meaning-laden human art and sin that the Internet contained were thus, as they had been for television and print to a significant degree for quite a long time, lumped together under the space of "content," attention-grabbing and therefore advertising-bucks-generating nodules of meaning-independent nothingness. Still, by vastly increasing the scale and scope and variety of such content and the number of people consuming such things and the amount of time spent consuming them, the Internet accelerated the existing trend towards viewing all art and entertainment merely as content, and therefore as totally meaning-independent.
Even here, however, corporate entities quickly and increasingly found that it was to a large extent pointless to try to create their own content; rather, they were much better situated merely to prompt, collate, present, and harvest already-user-generated content for advertising and attention. Even at the very beginning, this process of collating and presenting and harvesting was largely automated, directed by search algorithms and then content-algorithms. At the same time, corporate governance and financial transactions, and hence greater and greater chunks of the global economy, were automated and algorithmized as well. By the time ChatGPT was a sparkle in an engineer's eye, that process was more or less complete.
The Internet, then, is a technological system where corporations direct money and technical labor to produce and maintain algorithmic media platforms that prompt users to create content along algorithmically-directed lines, which they then collate and distribute algorithmically for the purpose of generating attention and clicks and ad-revenue. It has been that way for well over ten years at this point.
Fundamentally, all that "AI" in its present and future form represents is a slight technical improvement of this already technological and largely automated system. The algorithms that direct content-creation and content-distribution can be made more efficient and adaptable in real-time to trends, causing the overall technological system to require less technical labor to maintain. At the same time, user-generated content can be forced into algorithmically-directed molds more easily, and "content creators" can do their work with less overall labor required. Instead of a content creator having to do research to discover what trends are popular and then spend hours painstakingly making a video to match those trends, he or she can simply have the algorithmically-directed AI system scrape the Internet and do most of that work for them. And so a TikTok or YouTube video or listicle can be produced and attention and clicks generated and money (somehow, mysteriously) made.
Of course, the degree and kind of labor required to produce meaningful or content-laden art outside of the algorithmic strictures of what is popular and trendy has not been fundamentally changed by this. Here, AI is either a tiny convenience, or a distraction.
So far as I can tell, then, nothing about AI fundamentally alters any of the basic rules or systems of the Internet as a technological system. By decreasing the overall amount of labor required to service the technological system of the Internet, AI is likely to cost some people their jobs. As a process, though, this is nothing new; it is merely how technological economic systems have functioned for hundreds of years, at the latest since the introduction of "downsizing" as a corporate-management tool in the '70s.
For every incremental technological improvement, certain technical skills and functions become obsolete or replaceable with less skilled or less expensive labor, and people are laid off. This is certainly a terrifying prospect, but also one fundamental to the modern technological-capitalist economy, which pits us all against each other across the world for basic dignity and livelihood. One is tempted to think that the main reason for the outsized panic around AI is simply that the Internet technological fields have been largely on the upswing for several decades, and so workers have had to face this basic reality less--as well as the concomitant fact that the technological workers most directly threatened by AI are precisely the content-generating workers who currently control most media in America and so have the most power collectively to direct the discourse.
AI Was Here Already. You're Next.
The thing to do with any new technology is to ask what kind of will, and what kind of value system, it reifies and extends. Here, most of all, AI is not only nothing new, but is merely the latest in a very long line of technological systems and improvements that for many decades have aimed at precisely the same goal and the same telos.
Fundamentally, AI embodies an ethical system in which the kind of power embodied in advertising is valued above all else, and meaning, truth, goodness, beauty, and all human values are secondary, if not entirely meaningless. This value system is centered on the economic sphere, where morality and human values have been for centuries systematically banished, but it has been embodied most obviously for America in the last century in the world of popular entertainment, and above all in what I have described as the characteristic modern art forms: advertising and pornography. The overriding telos of this value system, then, is merely to have an infinite succession of novel things that at least briefly grab our attention and exercise power over us in such a way that money is made. Money, of course, is merely another word for certain forms of power, as (see my argument above) is novelty, as is the ability to grab attention and direct desire and action embodied in mass-media and advertising and pornography. It is around these forms of power and their exercise, then, that our society and its religion centers.
Seen in this light, the really scary thing about AI art and content is how entirely unexceptional it is, how generic, how exactly like all the other content we have absorbed for the last century.
There is no way, really, in which AI is new. Even inasmuch as AI art is stupid, meaningless, content-independent, manipulative, rushing towards the very bottom line and the lowest common denominator, it is outpaced by human art. In terms of basic underlying strangeness, AI art has nothing on the most generic ad for pizza run a dozen times on daytime television. The most evil art, the stupidest art, even the most meaningless art one can imagine is not AI art; it is American human-produced art of the last century. AI is indefinite, like all tools. It cannot seek indifference and meaninglessness with the directness and purity and perversity exemplified by a thousand human-created movies, television shows, Internet-contents, pornographies, and ads. By every standard, then, AI art falls short; at its worst as at its best, it is merely a shadow, an extension, of the human will. It is the human will, its choices, and above all its indifference that are the stuff of nightmares.
AI, like all technology before it, offers us neither machine-generated horrors nor a machine-generated paradise--it merely offers us more of what we already have, and what we already want.
The only degree in which AI shows promise, or even genuinely revolutionary potential, is in its very mundanity and indefiniteness and vagueness and genericness. Technology is frequently akin to revelation; in extending our human choices and values and desires beyond our rational grasp and our rational choices, it can also unmask them.
In this instance, then, AI shows us, in a particularly obvious form, what we have, as a society, been seeking and valuing and aiming at for a very long time--in all its stupidity and meaninglessness and fundamental boringness. For the last century, we have as a society poured unimaginable amounts of wealth and resources and effort and attention and thought and human labor and imagination into the task of creating and servicing a technological system that provides for the maximally efficient consumption and distribution of a maximally tailored, indefinitely infinite succession of stupid, boring, nonsensical chunks of attention-grabbing meaningless nothingness.
For this revelation to have any positive impact, however, we have to escape the progressive trap of the worship of technology as either godlike progressive excitement or godlike apocalyptic terror. There is nothing godlike or inevitable or even in itself particularly powerful about the technological stupidity that we ourselves have created. We are not doomed to obsess over it either as our salvation or our doom. We can recognize its fundamental stupidity, and our own fundamental stupidity that underlies it and gave it life, and we can choose to walk away, and think about something else, and value something else. Something, dare I say, actually real, and therefore actually interesting.
Here's to hoping.