May 9, 2022
From CopyRiot
The author team Dyer-Witheford, Kjøsen, and Steinhoff has presented with Inhuman Power a meritorious book on AI (Artificial Intelligence) from a Marxist perspective. The authors define AI as the ability of software to make reasonable generalizations from limited data in a given period of time. The wider the scope of applications and the faster conclusions can be drawn from minimal information, the more intelligent the machine behavior. AI, mind you, has nothing to do with robots; this conflation of AI and robots, a recurring feature of pop culture in particular, needs to be cleared up here. Robots are artificial tools that recognize their environment through sensors and act accordingly, have a body, and count as machines that perform work autonomously, while AI is software that must be integrated into hardware in order to function. The authors divide AI into three areas: narrow AI, Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

Research to date mostly deals with commercial AI apps that consumers use daily, i.e. task-oriented tools. These can only act within a particular domain, while AGI systems have the capacity for cross-domain action, i.e. they learn to move from one domain to another. ASI is even more speculative and to this day remains mostly a matter for sci-fi authors. Another distinction is between strong and weak AI, with the former divided into three schools: Good Old-Fashioned AI (GOFAI), machine learning (ML), and the situated, embodied and dynamic framework (SED). ML accomplishes learning in three steps: ingesting data, constructing a model from that data, and using that model to make predictions about new data. ML thus creates its own models of inference.

Machine learning occurs in three steps: processing data, creating models, and monitoring, with model creation as the main task. Here, data experts write algorithms that recognize specific data sets, be it digital images of cats or pedestrians in traffic. Learning takes place through thousands of tests in which examples are obtained and raw data is enriched with noise (cats mixed with dogs). In the end, the AI system should be able to identify the target object and calculate statistical relations between different patterns (cats are more likely to be posted by women than by men). There are a number of techniques for writing such algorithms, be they linear/logistic regressions, random forests, or boosted decision trees. The most sophisticated models all involve deep neural networks, which require large amounts of computational power and trial-and-error procedures; only a few thousand scientists work in this area. In practice, experts are often much more intensively involved in monitoring and preparing the data: formats have to be standardized, features added, errors excluded and information supplemented, because broad data cleaning is the first requirement here. Often, this data cleaning is outsourced to low-wage countries such as Indonesia, India or Venezuela.
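The three-step workflow described above (ingest data, build a model, predict on new data) can be sketched with a deliberately simple classifier. This is an illustrative toy, not any system the book discusses: a nearest-centroid model in pure Python stands in for the far more complex regressions, forests, and deep networks mentioned in the text.

```python
# Toy illustration of the three ML steps: (1) ingest labeled data,
# (2) build a model from it, (3) use the model to predict on new data.

def ingest():
    # Labeled 2D points: class 0 clusters near (0, 0), class 1 near (5, 5).
    return [((0.5, 1.0), 0), ((1.0, 0.5), 0), ((5.0, 4.5), 1), ((4.5, 5.5), 1)]

def build_model(data):
    # The "model" is simply the mean point (centroid) of each class.
    sums = {}
    for point, label in data:
        (sx, sy), n = sums.get(label, ((0.0, 0.0), 0))
        sums[label] = ((sx + point[0], sy + point[1]), n + 1)
    return {label: (s[0] / n, s[1] / n) for label, (s, n) in sums.items()}

def predict(model, point):
    # Assign the new point to the class with the nearest centroid.
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

model = build_model(ingest())
print(predict(model, (0.2, 0.3)))  # a point near the class-0 cluster
print(predict(model, (6.0, 5.0)))  # a point near the class-1 cluster
```

The data-cleaning work the paragraph emphasizes would happen before `ingest()` returns anything usable; in practice it dwarfs the modeling step.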

Monitoring also needs a global clickwork, for example the low-paid filtering of social media sites. It is well known that Google has hired 10,000 raters to watch YouTube videos around the clock. The paradox here is that advances in AI are accompanied by a destruction of current labor markets, especially for human-in-the-loop tasks. It remains unclear whether future AI will also replace the work of software engineers.

Since 2010, AI has been based primarily on artificial neural networks (ANNs), which are by no means identical in structure to the human brain, although many inspirations in AI research come from brain research. Here, the artificial synapses connecting the layers of neurons are weighted with numerical values that represent the strength of the connections. ML is a system, a fixed template with differing parameters. ANN systems are trained on sets of data that can represent pictures, faces, or videos: in training, one puts a large amount of data in front of the network, and the weighting of the synapses creates an algorithm so that the network learns to make correct responses, for example by recognizing faces or saying hello at the right moment. Or one presents a system with enough photos of red hexagonal signs with the word STOP from different perspectives, so that it learns the meaning of the STOP sign.
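The weighted-synapse idea in the paragraph above can be made concrete with a minimal forward pass. Everything here is an illustrative sketch, not part of the book: the weights are fixed by hand (to compute XOR), whereas in actual training they would be adjusted from data until the network produces correct responses.

```python
# Minimal artificial neural network: two inputs, two hidden neurons,
# one output. The numerical weights on the "synapses" encode the
# strength of each connection; here they are set by hand to compute
# XOR, whereas training would learn them from examples.

def step(x):
    # Threshold activation: the neuron fires (1) if its weighted input
    # exceeds zero.
    return 1 if x > 0 else 0

def forward(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # hidden neuron acting as OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # hidden neuron acting as AND
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # output: OR AND NOT AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, forward(a, b))  # prints the XOR truth table: 0, 1, 1, 0
```

Real deep networks differ in scale (millions of weights, smooth activations, gradient-based learning), but the structural idea of weighted connections between layers is the same.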

These applications still require a great deal of human work, so networks are increasingly constructed that also operationalize learning, so that the system can autonomously generate categories and sectors. The system learns to identify extremely complex connections in a given data set. This is about extracting whole patterns of data within a bottom-up system; in the best case the systems already write their own algorithms to present autonomous solutions and data sets. The biggest challenge for commercial ML is self-driving cars and trucks, in whose development Google and Baidu are involved alongside major automotive companies such as Daimler, Ford and General Motors. The AI industry today produces both inputs for companies and goods for individual consumption, and AI production is dominated by large oligopolies. Salaries in this industry are so high that ML experts are sometimes already referred to as the new investment bankers.

The analysis of machinery is an important building block of Marx's theory: machinery is a supplement to human labor, which for Marx is the only producer of surplus value. However, it did not escape Marx that machinery was increasingly becoming an autonomous factor in the development of capitalism, i.e. constant or fixed capital, which also includes raw materials, buildings and other equipment. This is contrasted with variable capital, which is based on living, paid labor. For Marx, the organic composition of capital, the ratio of dead to living labor, increases with each innovation. The economic function of fixed capital is to produce relative surplus value, to shorten the labor time necessary for the reproduction of workers, and to increase the proportion of surplus labor time that accrues to the capitalist. The increase in productivity implies that workers produce more commodities in less time, so that the value of the individual commodity falls.

For Marx, fully developed machinery consists of three parts: the motor mechanism, the transmitting mechanism, and the tool or working machine (see the following text). In Capital, Marx refers to machinery as an automaton, but this does not exclude class conflicts in the production process, since the introduction of new machines usually intensifies labor. Moreover, Marx sees machinery as a competitor of the worker, making him superfluous. If the capitalist can reduce the labor time for which he pays by introducing new technologies, but sells products at the price existing on the market, then he makes an extra profit compared to his competitors. Competition between capitalist enterprises leads to ever new spurts of innovation and automation, with which the organic composition of capital increases until the rate of profit falls, which in turn can be compensated for a certain time by the mass of profit or by an increase in output. Automation degrades the worker and makes his movements and cognitions monotonous, because the demands of work are permanently repeated, so that he responds to the machine with his work instead of using it, in the sense of subordination.

In the Grundrisse, Marx set forth his most famous account of capitalist technology in the "Fragment on Machines." Capital makes technological advances by mobilizing the "general intellect," which enables the capitalists, if not to eliminate the worker altogether, to degrade him to a peripheral subject who must follow the machine processes. In Capital, Marx speaks at this point of real subsumption under capital: the latter absorbs technological knowledge according to its own goals, i.e. there is further automation of production and an increase in the speed of circulation of products. Here, the absolute exploitation of the workers is replaced by the production of relative surplus value based on the increase of productivity through the intensification of the labor process. The worker is now confronted with an "alien power" that has a certain independence vis-à-vis him.

Going further, to the tripartite notion of machinery must be added the human control function, based on intelligence and sensibility, although even the latter can be replaced by machine sensors as cybernetic feedback evolves. In addition, the separation of hardware and software increases the flexibility of machine applications: machine operations can be modified by programs, increasing variations previously reserved for human work. However, all this need not lead to constantly increasing unemployment, at least according to Caffentzis, because since the 1980s jobs in the service sector have increased greatly. According to this author, an increase in the organic composition of capital in one sector always leads to a decrease in the composition in another sector.

Some authors today assume that AI concerns the general conditions of production, that is, the technologies, practices, and institutions that produce the environment of capitalist production at a given time and place. Marx spoke here of infrastructure, which includes the machines of communication and transportation as significant components of the general conditions of production. If AI as a technology becomes the new electricity, then it is not just about automation in production, but about the creation of new infrastructures, an intensive reorganization of the capitalist economy, which has to be considered as the general condition of production processes and the environment for enterprises. For Marx, the general always stands in contrast to particular conditions that relate to an individual enterprise. In the Grundrisse, Marx sees the relation between an enterprise and the general conditions of production as a specific resource of social production from which all capitalists benefit. Infrastructure exemplifies the general conditions of production, as roads, canals, and railroads are used by all capitalists. Capitalists must pay, even if only through taxes, for transportation and communication, such as container shipping or the hardware needed to connect to the Internet. Infrastructure is thus an important element of the general conditions of production, which include the means of production for transportation and communication, the general use of buildings, processes in circulation, the state of science and technology, and the political order, in addition to the production of machines by machines and the degree of automation in industry. The general conditions of production also affect circulation and the productive forces: increasing the speed of circulation increases the possibility of extracting surplus value in production and realizing the products on the market. AI can also contribute to intensifying these general conditions of production.

The AI industry is international; China and the US are competing for market leadership. Baidu and Alibaba are the leading companies on the Chinese side; Google, Facebook, Alphabet, IBM, Apple, Microsoft and Amazon on the American side. All companies are supported by state research institutions, while the US state uses AI for its drone missions and for semi-autonomous weapons. It seems that by 2030 China will lead in AI and have the most software developers, so we can assume a duopoly in the AI industry.

Essential to the AI industry is the high cost of hardware. This is especially true of the cloud and its energy-intensive data centers, which users access via the Internet, although the cloud is ultimately in the hands of a few tech giants. AI tools constantly send data to the cloud, where AI processing takes place. The cloud is now complemented by a technology called edge computing, in which processing takes place on local devices rather than in the cloud. Control of cloud computing, ownership of big data sets and a high number of the most skilled AI professionals distinguish the big tech companies. In the strictly Marxian sense, this is the concentration and centralization of capitalist power.

If industries are so closely linked that leaps in technology and knowledge within one industry cause leaps in other industries, so that productivity and output increase, then one can speak of chain reactions that could eventually lead to a revolution in the relations of production. And if machines produce other parts of machines, then the machinery itself can acquire the status of general conditions of production, that is, be available in adequate quantity for all individual capitals. Since Marx's time, the capitalist mode of production has gone through at least two seminal periods: Fordism, with all the benefits of Taylorism for capital, and the subsequent period characterized by logistics and ICTs, described as post-Fordism.

Beyond the period of post-Fordism, Dyer-Witheford & Co. refer in this book to an already existing AI capitalism, understood as the middle phase of a longer-term cybernetic capitalism whose final phase would be a fully developed AI capitalism. Authors such as Kevin Kelly have long propagated a ubiquitous AI capitalism for which AI is as essential as electricity or the Internet were for previous eras. The foundation for this is infrastructural AI as a means of cognition. The big tech companies speak here of a democratization of AI, to the extent that all companies are making some of their AI material open source. In these open source communities, tools and templates are distributed for free, projects are created by online collectives, and products are distributed to users for free. Almost all AI projects are currently based on open source toolkits. Google's Android, for example, is also based on open source, but ultimately operates as an annex to Google's big data extraction projects, which lead to high advertising revenues and the intensive training of ML systems. At the same time, the free software is good business for the big tech companies, as it allows a high number of experts to further develop AI. Paolo Virno speaks at this point of a "communism of capital," insofar as bottom-up techniques and the free distribution of goods are supported in order then to capitalize on them sustainably.

Smart cities, with their ubiquitous computing and instrumented devices, methods that create an urban environment, now require tools from AI. These include sensors and cameras with specific processors for machine-to-machine communication when it comes to optimizing automated traffic, energy distribution and other urban flows meant to improve social life in the city. Smart cities are the urban manifestation of the Internet of Things, with this development increasingly, qua political agenda, in the hands of AI capitalists.

The goal of the ambient intelligence paradigm is a situation where digital tools act collectively to leverage the information and intelligence hidden in the networks that connect the different devices. In the future, the urban environment will become proactive precisely in its relations with users. At this point, the authors pose the following question: what if human knowledge and skills were not only manifested in the dead work (machines), but the machinery itself had the capacity to take on cognitive and perceptive tasks that have so far been denied to humans? The answer is simple: the machine capital, the fixed capital, would be transformed into variable capital and could independently generate surplus value. Perception and cognition would become ubiquitous, like electricity. Using the applications of chatbots, the authors want to demonstrate how the fixed capital of AI could be transformed and become part of the general conditions of production. Chatbots are software applications that enter into dialogue with human language, whether as text or voice. Chatbots act as interfaces that replace customer-centric business models (web stores, technical help) to make online interactions as intuitive and simple as communication between people. AI systems simplify complex situations here, but this is nothing new for the capitalist mode of production when it comes to increasing speed and simplifying transactions. In this context, AI systems not only act on the basis of experience, but also learn from interactions in order to take on new tasks themselves.
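The chatbot interfaces described above can be caricatured in a few lines of rule-based matching. This is a purely hypothetical sketch of the interface pattern (customer query in, canned response out); commercial bots replace this keyword lookup with learned intent models that also adapt from interactions, as the paragraph notes.

```python
# Hypothetical sketch of a chatbot as a customer-facing interface:
# map keywords in a user's message to canned responses. Commercial
# systems replace this lookup with ML-based intent classification.

RULES = {
    "refund": "I can start a refund request for you.",
    "shipping": "Your order ships within 2-3 business days.",
    "hello": "Hello! How can I help you today?",
}

def reply(message):
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    # Fallback: hand the conversation over to a human.
    return "Sorry, I did not understand. Connecting you to a human agent."

print(reply("Hello there"))
print(reply("I want a refund"))
```

The economic point in the text is visible even in this toy: the interface automates the routine part of customer interaction and leaves only the fallback cases to paid human labor.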

And the authors go on to ask provocatively: what if, rather than transforming human knowledge and skills into dead labor (machines), dead labor were given the fundamental capacities of cognition and perception previously monopolized by humans? AI would allow machines to perform cognitive activities quite different from those of humans, possibly with revolutionary effects in the field of the general conditions of production. The means of cognition would be a factor inscribing the general conditions of cybernetic production into the economy. Capital would now think and perceive autonomously; it would transform the primordial capacities of labor into machinic forms of capital, in the direction of a smart technological environment that would not completely abolish human labor, but replace it in large part through automation in production and services.

This analysis differs strictly from post-operaist positions that still reserve cognition for human activities. Marx, on the other hand, describes the "general intellect" as something that manifests itself in machines and takes the form of capital. Capital appropriates parts of the social brain, implements them in machinery and adds them to the general conditions of production. In this regard, communication is one of the most promising areas for AI, which concerns not only chatbots, speech synthesis and human speech, but the writing of entire texts, for example in finance and sports. Moreover, AI can generate an almost infinite variety of logical concepts, that is, representations of data at different levels of abstraction. Deep learning systems are immensely scalable, so as the amount of data increases, the performance of these AI systems increases tremendously.

"Will the robots do my job?" is a frequently asked question. The authors do not simply answer yes or no, but assert that jobs or unemployment are the result of class conflict, that is, of the chronic clashes between labor and capital. The thesis of class composition goes back to autonomist Marxism, which emphasizes in particular the subjectivation and autonomy of workers. Marx, on the other hand, spoke of a long-term tendency toward the replacement of workers by machines, discussing this both in technical terms (organization of labor) and in terms of value (the fall of the rate of profit). For the Marxists mentioned above, Marx neglected precisely the question of how workers resist capitalist machinery, for example through sabotage. The technical composition of the working class concerns the organization of labor, management, the division of labor, the rhythm of work, and the use of machinery. In addition, however, the political composition of the class must be discussed: the organization of the class in strikes, wage struggles, and ultimately with respect to bringing about a revolutionary situation. This composition takes the form of unions and communist parties and/or a chain of wildcat strikes, riots, sabotage, absenteeism from the workplace, etc. Capital, on the other hand, tries to bring about a more intensive penetration of production by machinery, to raise the degree of unemployment and to supervise workplaces more closely.

Marx describes capital as a circuit and a total system that accelerates production and circulation more and more. While production is a dual process of manufacturing durable goods and extracting surplus value, in circulation values are realized in sales, a process in which transportation, advertising and logistics play a significant role today. In addition, finance must be taken into account, speculative activities and crediting, as well as the unpaid work in the reproductive sector, mostly performed by women. At this point, autonomist Marxists introduced the term "social factory." Paradoxically, they formulated their theses at a time when cybernetic capitalism was already on the horizon. Computers and digital networks were developed by the military-industrial-cognitive complex during World War 2 and the Cold War. These technologies were then accelerated through the 1970s because of the fall in the rate of profit after a thirty-year boom. The transition from Fordism to post-Fordism largely shattered the power of industrial workers. From the 1970s onward, the "technical fix" developed through the automation of factories and offices, with a mechanical liquidation of labor leading up to the introduction of robots into the automobile industry. The "spatial fix", in turn, involved the relocation of factories via supply chains to countries with low wages. And the "financial fix" referred to the flight of capital from production towards the development of derivatives, futures, etc., and the existence of hedge funds, offshore centers, etc.

When prosperous cities in the capitalist core countries were transformed into rust belts, there was at the same time a shift of jobs from industrial areas to the service sector, which also included the rise of activities in the financial system and in the reproduction sector, with wages in the latter remaining stagnant and/or low until today. With globalization, the social factory grew into the planetary factory, including a sophisticated system of highways, container ships, airplanes, data centers, fiber optic cables, and sales centers. Negri/Hardt speak at this point of a flat or soft economy. But this is not true at all, because there is still today a strictly hierarchical order on the planet, with the weakening imperialist power USA and its financial industry at the top.

It still seems paradoxical to speak of the "work" of AI systems, because AI definitely replaces jobs to this day, yet it cannot do without the human work that produces it either. The social function of AI as fixed capital is to reduce the average necessary labor time and to increase surplus labor time. In circulation, AI accelerates the realization of goods, their transportation and the advance of logistics, especially through the integration of factories, distribution centers and sales companies. This leads to a reduction of labor costs in all the areas mentioned. In finance, all speculative activities that turn money into more money are today accelerated by automation. Large companies such as Siemens, Intel or Microsoft have long been making significant investments in AI to reduce labor costs, cut defective products and transmission times, speed up production, etc. These projects are being developed in Europe under the label "Industry 4.0" and in the US more under the label "Internet of Things." Even China, whose low-wage sector is well known, although it is in dissolution, needs to invest in the same direction, perhaps even faster than the US companies; since 2013, China has had the largest market for industrial robots. In Europe and the US, it is the circulation sphere in which AI has been most widely applied so far, also to accelerate the logistics revolution. Robotized trucks and self-driving cars are already in use, with 24/7 operation promising lucrative AI projects. A 2017 report by the International Transport Forum projected that 4.4 million of the 6.4 million truck drivers in the US and Europe could be replaced by autonomous technologies before too long. However, autonomous vehicles require thousands of hours of digital recording and processing by human-driven vehicles.

Amazon is a company that is massively pushing the development of ML systems, be it in cloud computing, algorithms, or the transportation of goods. In the process, the exploitation of workers, who face heavy work pressure and low pay, is not reduced but intensified. Robots are guided by a constantly updated, computerized system and by sensors when, for example, they transport products to the packing stations. The use of robots has reduced the time it takes to process an order at Amazon by one-fifth.

It was the financial institutions that started developing AI systems very early, due to the rapid increase in crediting and speculation. ML methods are widely used here to improve the quality of loans, automate relationships with customers, and refine the logic of insurance contracts. And of course high-frequency trading is at the forefront: in 2018 only 10% of equity trading was executed by human agents, 40% was passive trading through mutual funds, and 50% was handled entirely by algorithms. Algorithmic profiling is now ubiquitous in corporate and government decision-making regarding lending, jobs, insurance, and medical care. There is little doubt that this development will continue to grow the industrial reserve army and the surplus population, with the use of AI systems leading to a digital poorhouse in which transfers, medical care, and police surveillance are regulated through digital tools. Clearly, AI in the hands of capital is leading to a technological decomposition of the proletariat.

Marx understood technology as a weapon of capital. Today, the effects of AI on employment play an important role. In mainstream economics, apocalyptic positions, according to which most jobs will be destroyed, are opposed to a business-as-usual position according to which a reduction of jobs in one industry results in the introduction of new jobs in other industries. According to the first position, AI, because of Moore's Law, which indicates an acceleration in the rate of innovation, and because of its "cross-sectoral applications," is a technology that cuts across all industries, with white-collar workers merely assisting automated journalism, law, and medicine. At this point, an unconditional basic income seems inevitable even to stabilize demand for goods. Moreover, AI would also lead to the elimination of middle-class jobs, although low-paid, routine work would remain the most affected. But it could also be that robots create new jobs; on what terms and at what pay, that is the question here.

Precarious work, including part-time and self-employment, remains one of the most debated issues of the 21st century, and a number of authors suggest that it is not precarious work but low wages that really worry workers. It is important for platform capital that precarious and contingent jobs exist in perpetuity, paid on demand, resulting in the hoped-for instability and volatility of wages and jobs. The business-as-usual thesis is advocated primarily by economists who expect the job-replacement effect to be offset by the effect of high incomes among experts. In this case, labor is intensified but not completely replaced. Uber is one company that claims to have AI in its DNA. In 2014 and 2015, Uber recruited more than 50,000 new drivers per month, a number that can only be overseen by virtual management.

To explain the effects of the social factory automated by AI, the authors draw on Bernard Stiegler, who introduced the term "grammatization," by which he means a process through which our existence is rendered discrete. The transformation of onomatopoeic speech into the alphabet is an early example of this process; grammatization furthermore changes the movements of the body, the senses, vision and hearing, and patterns of social life. For the authors, in turn, grammatization stands for the real subsumption of labor under capital, which Marx tried to capture early on with the term "life activity." What ML systems seek to subsume, however, is not only labor, but also knowledge, communication, and skills. For Stiegler, ML systems calculate correlations in order to automatically anticipate both human and machine behavior. Facebook's FBL program, for example, uses ML systems that look at Wi-Fi network details, locations, video usage and friendship details to map out tasks for the company.

In another chapter, the authors list antagonisms and struggles around and against AI: a) strikes and other workplace actions against AI-initiated wage depression, work acceleration, and AI management; b) protests against military and paramilitary AI applications; c) movements against extensive surveillance; d) struggles over the effects of social media; e) struggles against sexist and racial discrimination; f) the issues around the digital city, such as Silicon Valley, where a few hyper-rich entrepreneurs face an army of low-paid service workers and surplus population; g) struggles against the big tech giants.

Some factors attributed to AI, such as creativity and flexibility, bear striking similarities to Marx's concept of labor. For Marx, labor is a characteristic belonging exclusively to the human being. Against this human vitalism, the authors argue that there could be an isomorphism between strong AI systems and Marx's concept of labor, thus undermining Marx's axiom that only humans can create value. AI is thus a special case of capitalist machinery, which means that its status as a machine must be problematized. AGI not only performs work but also creates value, the authors argue, and thus one cannot rule out the possibility that Homo sapiens could be degraded to a superfluous species in the long run.

There is no scientific or philosophical consensus on what intelligence is, but many authors assume that flexibility and generality are characteristic of it. Thus, the goal of AGI systems is to create novelties, to be broad and to have the capacity to generalize, so that they could transfer knowledge from one problem to the next. A machine capable of doing different jobs would definitely have intelligence if it could do jobs it was not programmed to do. The authors refrain from using the term "work" at this point because it is too anthropomorphically charged. And human intelligence may not encompass all forms of intelligence, since thinking in our universe is diverse and could even be completely alien to us.

In Capital, Marx refers to labor as a faculty belonging exclusively to human beings; that labor is sui generis human is one of Marx's important axioms. When Marx analyzes labor as such, that is, in a transhistorical sense, he sharply distinguishes it from the productive activities of animals, which count as inputs equivalent to machines, that is, to fixed capital. This argument is also important to show that machines do not work, and one would have to conclude today that AGI systems do not work either. The argument that animals do not work provides very revealing material for describing narrow AI systems. While animals are identical with their life activity, Marx believes, humans possess a life activity enriched with consciousness. Repeatedly, Marx assigns cognition and consciousness (as well as creativity and imagination) exclusively to humans. This anthropocentric concept implies that humans can reflect on the how, where, why and when of their production and, furthermore, decide what they can do differently. This free play allows people to produce universally, i.e. across different domains, so that they can use their intelligence and hands for almost anything, including imitating nature and interacting with animals. They learn from what they observe in nature, from what they have done wrong or right in the past, and apply their creativity and imagination to future modes of production. Marx ascribes to human labor a specific aesthetic, as well as knowledge, understanding, consciousness, planning and adaptive abilities, and the capacity to learn.

However, according to the authors, the cognitive abilities of animals are closer to human intelligence than Marx wanted to believe, and they refer to the writings of Timothy Morton, who accuses Marx of metaphysics on this point. The authors further add that today the difference between humans and machines with regard to cognition is becoming fragile. Here they refer to the software AlphaGo, which made completely unknown, inhuman moves in the game of Go against a human opponent. If a human had made these moves, they would definitely have been described as intuitive, creative and highly intelligent. AlphaGo was trained on 30,000 games of Go and learns from human moves as well as its own.

However, it would be a mistake to ascribe the attribute of work so readily even to the most complicated activity of AGI systems. While Marx's anthropocentrism definitely needs to be challenged, listing automated behaviors or productive activities does not yet destroy his argument. If AI systems merely perform strictly defined tasks, no matter how creatively, then their behavior is not work. It is repeatedly argued that human labor can perform almost any task and is therefore highly adaptive; yet even a universal robot that outperformed humans precisely in terms of universality might still be a being other than one that performs labor. If general intelligence characterizes human work, then AGI systems could indeed also perform work, that is, solve different tasks and act in completely different environments and domains. For some authors, machines would additionally need consciousness to approach human activities: if one attributes creativity to the machine, it would have to identify, qua consciousness, what produces a new situation in its creative processes. But it could also be that intelligence decouples from consciousness, meaning machines would not need to be endowed with consciousness as long as their neural architecture is deep enough to recognize and process even the subtlest patterns in a data set. Such an AI system must be able to learn and teach, and must also have capacities that include a sensorimotor system, prediction and planning, and it must be transformable into a universal intelligence. AGI could be turned into a recursive self-improvement program with access to its own design and the ability to upgrade it, creating new versions of itself, on a differential neural structure analogous to the quantum computer. The system could then improve itself ad infinitum. At this point, the authors speak of fully developed AI capitalism.

One of the biggest problems with AI systems today is that they lack imagination and therefore cannot make predictions. Artificial imagination and creativity are still subfields of AI research, and where they simulate, they simulate human imagination through special "neural networks." The machine learning method can thus be understood as a step towards imagination. While the behavior of a conventionally programmed system is fixed for a specific purpose, a learning system serves as a general template with modifiable parameters: the template has a general capacity to learn, which allows the program to do different things, including things for which it was not directly programmed. However, the following chain of reasoning must be rejected: labor creates value, AGIs labor, and thus they create value.
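The "general template with modifiable parameters" idea can be sketched in a few lines, assuming nothing beyond plain gradient descent; this is my own minimal illustration, not an example from the book. One fixed learning routine, never explicitly programmed with either rule, acquires two different behaviors purely from the data it is shown.

```python
# A minimal sketch (illustration, not the book's) of a learning system
# as a general template: one routine with modifiable parameters (w, b)
# acquires different behaviors depending only on the training data.

def fit_linear(pairs, lr=0.02, steps=5000):
    """Fit y = w*x + b to (x, y) pairs by plain gradient descent."""
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in pairs:
            err = (w * x + b) - y
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# The same template learns the Celsius-to-Fahrenheit relation
# from one data set...
w1, b1 = fit_linear([(0, 32), (5, 41), (10, 50)])
# ...and simple doubling from another.
w2, b2 = fit_linear([(1, 2), (2, 4), (3, 6)])

print(round(w1, 2), round(b1, 2))  # ≈ 1.8 and 32.0
print(round(w2, 2), round(b2, 2))  # ≈ 2.0 and 0.0
```

Neither the conversion rule nor the doubling rule appears anywhere in the program; both are "doings it was not directly programmed for," which is the sense in which the template has a general capacity to learn.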

The perfect machine, to which even Marx refers to some extent, is a von Neumann-style self-reproducing automaton that does not become obsolete, precisely because it can make copies of itself by taking up raw materials from its environment. Such an indestructible automaton would never transfer all of its value in circulation and could therefore generate relative surplus value in perpetuity. Because its value approaches zero, it could function continuously to reduce socially necessary labor time and produce surplus value. In the Grundrisse, Marx speaks of the machine as a perpetual motion machine.
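The arithmetic behind "its value approaches zero" can be made explicit with Marx's standard depreciation schema (the notation is mine, not the authors'): a machine of value $V$ that survives $N$ production cycles passes on $V/N$ per cycle as constant capital.

```latex
% Value a machine of value V, lasting N cycles, transfers per cycle
% (standard Marxian depreciation schema; notation mine):
c_{\text{machine}} = \frac{V}{N},
\qquad
\lim_{N \to \infty} \frac{V}{N} = 0
```

For a self-reproducing automaton that replaces its own wear out of raw materials, $N$ is effectively unbounded, so the value it transfers per commodity tends to zero — which is what the claim in the text amounts to.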

Marx also already speaks of machines that are actually no longer machines. Read through a SciFi optic, the reference to androids or artificial robots quickly suggests itself. When a machine sheds certain characteristics attributed to the machine and becomes intelligent, it negates its being fixed capital and transforms into variable capital or living labor. The perfect machine is then a machine that can create value. But how can dead labor become living labor? Or, to put the question another way: how could AGIs become doubly free laborers? They would have to be dispossessed of their bodies and appropriated by an owner, and at the same time their creative potential would have to be translated into labor and turned against them. The AGI would also have to consume something, namely electricity, computing power and bandwidth. If AGIs were forced to buy these resources as commodities, then they could also be forced to work for a wage.

Money, fixed and variable capital are forms or economic categories that are theoretical expressions of social relations of production and circulation. Things are commodities only insofar as they have exchange value and are thrown into circulation. If a capitalist buys an AGI system for production, it would perform certain tasks like narrow AI, namely as fixed capital, which reduces socially necessary labor and cheapens the commodity. But what if it stripped off its form as fixed capital and transformed into variable capital? The AGI would then have to stand in an antagonistic relation to capital and could therefore also be proletarianized, that is, sell its labor power as a legal person, reproduce itself and enter into a metabolic relation with nature. The AGI system would have to hold its own in competition with humans and other AGI systems by means of lifelong learning, i.e. permanent updating of its software. The ultimate consequence of such a capitalism would be one without human labor.

It could be that humans would be reduced to merely executing the algorithms. Institutions and unions personify capital, but so could various techniques, such as high-frequency trading on the stock market. In a dark vision, AI systems could come to personify almost all economic categories, so that unemployment would keep rising. In the 25th chapter of Capital, Marx develops the general law of capitalist accumulation, noting a tendency for the organic composition of capital to increase, the growing use of machines in production, and the existence of an industrial reserve army. Moreover, technological innovation creates a surplus population, i.e. permanently unemployed workers who become quite superfluous to capital. As a long-term tendency, the expansion of machinery eats through factories on a global scale.

When it comes to the long-term tendencies of AI, the authors become cautious in their evaluations: on the one hand, a capitalism without humans is quite conceivable; on the other hand, it is also conceivable that intelligent machine networks will autonomously represent their input and output and, moreover, be capable of solving the socialist calculation problem so as to accompany democratic planning processes. Alberto Toscano has criticized the "Invisible Committee" for presenting logistics merely as an occasion for sabotage, without a long-term perspective on how to transform these technologies and make them available to a socialist economy. Jasper Bernes, in turn, has responded that logistics today goes hand in hand with a cheapening of the labor force, which often enough merely waits to push a button. To ensure control of logistics in the planetary factory by small communes, workers would have to adopt a technology that is essentially alien to them. Toscano, on the other hand, believes that such sophisticated technology could not work in small communes alone and considers such ideas romantic. With regard to leftist accelerationism, which favors a fully automated socialism and calls for a pragmatic approach to existing technology, Bernes objects that technologies cannot be divided into evil ones (nuclear weapons) and good ones (antibiotics). Such an approach conceives of technology as a discrete tool rather than a complicated network system. Leftist accelerationism assumes that the more advanced the technology, the easier the path to communism. But what if that technology tends to block the path to communism?

If we speak of AI at this point, it is not in the sense of reconfiguring algorithms or automating jobs, but as the trajectory of a universal technological project that remains, for now, a capitalist project. Yet the trajectory is open, toward the best or the worst for humanity. AI opens a way out of exploitation by capital for humans, but at the same time it opens the freedom of capital to reduce humans to a cheap abstraction. Leftist accelerationism does not recognize this contradiction because it is blind to the fact that AI under capitalism intensifies labor, accelerates the production and circulation of commodities, and destroys the environment ever further, up to a dramatic limit that can become dangerous for people. The response of AI enthusiasts is that it is precisely AI systems that could avert the apocalypse. But AI systems require a great deal of energy and, with their big data centers, contribute to the warming of the planet. Nick Land's vision of an unstoppable AI, a fatal dynamic leading to cyberwars with no way out, which can assume the proportions of nuclear wars, also pays homage to the apocalypse. For Land, AI is the culmination of a cybernetic process in which capital produces ever new feedback loops of technology, a fatalism that Land celebrates.

For the authors, the communist moment in AI consists neither in the fantasies of leftist accelerationism nor in the spontaneous actions of Luddism, but in disrupting the dynamics of capital, that is, in overriding capital's imperative to cut costs and speed up the circulation of commodities. There is, moreover, a communist moment in AI research that consists not so much in the automation of production as in the expropriation of AI capital in order to produce forms of collective ownership and application in other sectors. The infrastructures permeated by AI should have a general utility. Working hours would be substantially reduced and sufficient free time made available, yet labor as part of a free collective association would not disappear altogether. For the autonomist theorist Raniero Panzieri, the destruction of capital is a negation that involves a complete reorganization of infrastructure and technology in order to decouple the latter entirely from productivity. In this context, the boundaries between denial/rejection and appropriation of AI, and between the saboteur/hacker and the defender, are more fluid than one might think.

Translated by DeepL.

Photo: Sylvia John