Introduction – Eudemony as a Guiding Principle

In my earlier article, Designing Freedom Today: Rethinking Efficiency, Capitalism, and Democracy with Stafford Beer, we examined the revolutionary ideas of cybernetician Stafford Beer, who sought to align technological advancements with human flourishing—what he called eudemony. Beer argued convincingly that societal success must be measured in terms of human well-being rather than monetary profit, famously asserting that “money…is nonetheless an epiphenomenon of a system that actually runs on eudemony… I have come to see money as a constraint on the behavior of eudemonic systems, rather than to see eudemony as a by-product of monetary systems.”

From this eudemonist perspective, the purpose of any technology—including artificial intelligence (AI)—should be explicitly to enhance human fulfillment and collective well-being, not simply corporate profits or control. Yet the current landscape of AI deployment contrasts starkly with Beer’s ideals, manifesting in corporate-driven systems that threaten, rather than enhance, human flourishing.

This follow-up article provides a focused, rigorous leftist materialist assessment of contemporary AI, applying Beer’s philosophy directly to evaluate four central critiques of AI within capitalist systems: (1) the corporate monopolization of AI technology, (2) the rise of deepfakes and erosion of public trust, (3) workforce displacement through automation, and (4) the widespread exploitation of creative labor in AI training processes. In doing so, we clarify AI’s genuine potential for societal benefit in contrast to speculative and problematic technologies such as blockchain and cryptocurrency, and critically explore AI’s possibilities under democratic socialism versus crony capitalism.

Drawing primarily on U.S. contexts while integrating relevant international examples, we conclude by addressing essential questions from Beer’s eudemonist framework: Are AI’s current problems inherent to the technology itself, or merely products of its deployment within capitalist frameworks? Can left-materialist strategies effectively resolve these problems, and if so, through what mechanisms? Ultimately, should leftists actively engage with AI to ensure its alignment with human flourishing—or should they abandon this technology altogether?

1. Real Utility vs. Hype: AI in Contrast with Crypto

Before delving into critiques, it’s important to clarify what AI is – and what it isn’t – by comparing it to another much-hyped technology of recent years: cryptocurrency/blockchain. Both AI and crypto have been surrounded by buzz, grand promises, and speculative investment, but their technological foundations and real-world impact diverge sharply. Blockchain evangelists touted a decentralized digital revolution (a finance system free from banks, a web free from corporate platforms), yet a decade on, those utopian promises remain largely theoretical<7>. The most prominent blockchain applications – cryptocurrencies and NFTs – saw massive speculative bubbles but struggled to prove broad utility in everyday life. AI, in contrast, has quietly but pervasively embedded itself in the fabric of daily life and industry. From Netflix’s recommendation algorithms to bank fraud detection, from smartphone voice assistants to medical image analysis, AI systems (especially machine learning models) are delivering tangible value across sectors​<7>. In short, blockchain waited for use-cases that never fully materialized, while AI’s use-cases have exploded in every direction<7>.

This is not to deny that AI too has been hyped – indeed, exaggerated claims (of either miracle solutions or doomsday scenarios) often obscure its present reality​<7>. But whereas one could plausibly argue that crypto was “90% hype, 10% utility,” AI cannot be dismissed as mere vaporware; it is already altering how goods are produced, services delivered, and decisions made. Even skeptics must acknowledge that AI’s impact is here and now, not just a distant fantasy​<7>. Another ironic contrast: crypto’s ethos was decentralized and democratizing (though largely unrealized), while today’s AI boom has been highly centralized, concentrated in the hands of a few tech giants. Training state-of-the-art AI models requires enormous data and computing power – resources available chiefly to wealthy corporations and governments. As we’ll discuss, this centralization of AI development raises serious concerns about corporate power. Yet it also underscores the key difference: AI’s trajectory is shaping society in immediate ways that crypto never did. A leftist materialist approach, therefore, must grapple with AI as a real force (for better or worse) in the political economy, rather than viewing it as another empty bubble. With this distinction drawn, we turn to the first major critique: who controls AI, and to what end?

2. Corporate Control and the Centralization of AI Development

One of the most pressing leftist critiques of AI is the extreme concentration of power over its development and deployment. At present, a handful of large corporations – Alphabet (Google DeepMind), Microsoft (through its partnership with OpenAI), Meta (Facebook), Amazon, and a few others – dominate cutting-edge AI research and infrastructure. This corporate control has both economic and epistemic consequences. Modern AI breakthroughs have not come from lone geniuses or small startups tinkering in garages; they have come from massively resourced projects fueled by corporate data centers and war chests<1>. As tech scholar Meredith Whittaker observes, the much-touted “advances” in AI over the past decade were “primarily the product of significantly concentrated data and compute resources that reside in the hands of a few large tech corporations.” Modern AI “is fundamentally dependent on corporate resources and business practices,” meaning our increasing reliance on AI effectively cedes inordinate power over our lives and institutions to a handful of tech firms<1>. In short, Big Tech’s wealth and data hoards have enabled it to set the pace and direction of AI – and to capture the lion’s share of AI’s benefits.

This centralization is self-reinforcing. The companies leading in AI research attract top talent (often by offering salaries academic or public institutions cannot match) and shape research agendas (through funding, conference sponsorship, etc.), which in turn means breakthroughs tend to occur under corporate auspices. They also deploy AI at scale into consumer products (search engines, social media feeds, e-commerce, cloud services), giving them vast troves of user data to further improve their models. It is a feedback loop of corporate hegemony: data and profits beget better AI, which begets more data and profits. Meanwhile, government investment in civilian AI lags in the U.S.; public-sector AI efforts are largely limited to military applications via the Defense Department​<2>. (Notably, the Pentagon has poured money into AI for war-gaming, autonomous weapons, and surveillance – raising its own set of concerns, though outside the scope of purely “corporate” critique​<2>.)

From a left-materialist standpoint, the problem is not that AI exists, but who owns and controls it. Under capitalism, advanced AI is being developed as proprietary systems, with their underlying code and training data kept as trade secrets. Democratic oversight is minimal. We are in a situation where, for example, one private company (OpenAI, heavily funded by Microsoft) can suddenly release a powerful language model (ChatGPT) that millions begin using – potentially influencing how people obtain information or perform work – yet the public has no say in its design or objectives. Policy often reacts only after the fact. There is an unsettling alignment here with the warnings Beer made decades ago: if society doesn’t consciously guide such technology for the public good, then corporations will guide it for profit, “enslaving” people with the very technologies that could liberate them​<1><17>. Today we see corporate AI being harnessed to maximize advertisement clicks, amplify consumption, and surveil user behavior, rather than to maximize human freedom. As Beer cautioned, unfettered capital tends to turn even wondrous tools into instruments of control – funneling human behavior into commodifiable data streams, and concentrating wealth and decision-making in elite hands​<14>.

A concrete manifestation of this control is the “black box” nature of corporate AI. The algorithms that curate social media feeds or determine ad targeting are trade secrets; their biases or objectives are not transparent. Even when external researchers uncover problematic behaviors (say, an AI system exhibiting racial bias in lending), the companies resist full disclosure or accountability. Moreover, the compute infrastructure needed for modern AI (massive data centers with tens of thousands of GPUs) is itself centralized. As of 2023, it is essentially impossible for a small research lab or non-profit co-op to train a frontier AI model from scratch – the cost runs in the tens of millions of dollars and demands a global supply chain of semiconductors. This is a form of enclosure of the computational “means of production.” It has prompted some to explore ideas of “decentralized AI” or leveraging distributed networks to democratize compute​<cvvc.com><forbes.com>, but so far the dominance of Big Tech is only growing. Insiders like Signal president Meredith Whittaker bluntly remark that “AI, as we understand it today, is fundamentally… derivative of centralized corporate power and control” – built on concentrated resources that only a few firms possess​<1>.
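To make the scale of that enclosure concrete, here is a rough back-of-envelope sketch. Every figure (cluster size, run length, cost per GPU-hour) is an illustrative assumption rather than actual vendor pricing, but it shows how quickly a frontier-scale training run climbs into the tens of millions of dollars cited above.

```python
# Rough back-of-envelope estimate of a frontier-model training run.
# All figures are illustrative assumptions, not quotes from any vendor.

gpu_count = 10_000        # assumed number of accelerators in the training cluster
training_days = 90        # assumed wall-clock duration of the training run
hourly_rate_usd = 2.00    # assumed blended cost per GPU-hour (hardware, power, margin)

gpu_hours = gpu_count * training_days * 24
compute_cost = gpu_hours * hourly_rate_usd

print(f"GPU-hours: {gpu_hours:,.0f}")                  # 21,600,000 GPU-hours
print(f"Estimated compute cost: ${compute_cost:,.0f}") # about $43,200,000
# ...and this counts compute alone, before data acquisition, engineering
# salaries, or failed experimental runs.
```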

The implications are dire. When a small group of powerful stakeholders steers AI’s development, they can bake their values and interests into the technology​<link.springer.com>. For instance, an AI content policy may be lenient toward corporate-friendly speech but harsh on union organizing rhetoric, or a generative model might be trained on data that over-represents Western capitalist viewpoints. Furthermore, these companies can decide unilaterally how AI is deployed: whose jobs it should streamline, which consumer interactions to automate, which surveillance capabilities to build for law enforcement clients, and so on. The public – especially workers – has little say, absent strong unions or regulations.

Finally, corporate concentration in AI raises the prospect of monopolistic control over entire markets and information flows. Just as a few corporations came to dominate the internet (Google for search, Facebook for social networking, etc.), they could dominate AI services that everyone comes to depend on – from personal digital assistants to medical diagnostics – extracting rents and shaping standards to their benefit. We already see a “cloud oligopoly” (Amazon, Microsoft, Google) controlling the back-end of many AI applications. Without intervention, this could ossify into a long-term tech oligarchy that stifles competition and subordinates public needs to corporate strategy.

In sum, the leftist critique is that the problem with AI is not AI per se but who owns it and for what purpose. Under the current capitalist regime, AI is being developed in a context of cronyism and enclosure: immense government contracts and subsidies flow to the big firms (often via defense or lax tax regimes), while those firms fiercely guard their dominance. The public bears the risks (e.g. misinformation, lost jobs) but is not sharing in the governance or full benefits. This is not a technical inevitability, but a political-economic choice. A democratic socialist approach would insist that such a powerful general-purpose technology be subject to democratic direction – a theme we will explore later. But first, we examine the concrete harms that have emerged as AI proliferates under present conditions.

3. Deepfakes and the Erosion of Informational Trust

Perhaps the most visible alarm in public discourse about AI is the rise of deepfakes and AI-generated misinformation. “Deepfakes” – synthetic media (images, video, audio) convincingly fabricated by AI – have matured to the point where seeing is no longer believing. With modest skills, one can generate a video of a real person saying or doing something they never did, or clone a person’s voice to deliver any message. This capability strikes at the heart of informational trust in society. Democratic societies rely on a shared baseline of reality – evidence, news, and media that citizens trust (or at least can verify) as real. AI’s power to fabricate realistic fakes threatens to undermine that trust wholesale. Experts warn that a surge of AI deepfakes could “erode the public’s trust in what they see and hear”, posing perhaps the greatest threat to democracy from AI in the near term​<3>.

We have already seen early examples. In 2019, a cheaply doctored video made House Speaker Nancy Pelosi appear intoxicated; it went viral and, though not a true deepfake, demonstrated the potential to discredit a politician via fake media. In 2022, during Russia’s war in Ukraine, a deepfake video of President Zelensky appearing to surrender circulated online (it was quickly debunked, but only because it was somewhat crude – future ones may not be). By 2023–24, deepfakes had been weaponized in elections globally: from Bangladesh to Slovakia, AI-generated audio and videos have been used to spread lies and confusion during campaigns<3><8>. In one documented U.S. case, political operatives paid an actor to use AI voice software to mimic President Biden’s voice and create messages discouraging voters – a blatantly deceptive tactic​<3><8>. As we head into major elections in the U.S. and elsewhere, the specter looms of fake video or audio “bombshells” being dropped on the eve of voting, with voters unable to discern truth from fabrication in time.

The danger is twofold: (a) People may be deceived by convincing fake information, leading them to make decisions or support policies based on lies; and (b) even genuine information may start to be dismissed as “fake” in a world where anything can be denied. This latter effect is sometimes called the “liar’s dividend” – bad actors benefit from a fog of doubt, as the very existence of deepfakes lets them claim that authentic evidence (say, a video of their wrongdoing) is just a fabrication. In other words, widespread deepfakes could produce information nihilism, where citizens lose trust in all media. Democracy cannot function under such conditions; if people cannot agree on basic facts or trust any sources, the public sphere fragments into paranoid echo chambers.

It’s important to note that misinformation is not new – propaganda, doctored photos, and lies have a long history – but AI turbocharges it. Automated systems can produce fake content at scale and personalize it. A network of bots could flood social media with dozens of AI-generated “news reports” or deepfake videos targeting different demographics, all within hours – far outpacing traditional fact-checking or journalistic verification. There is an ongoing arms race between deepfake generation and deepfake detection technologies​<carnegieendowment.org>, but so far the offense has the edge: as AI image and voice models improve, detecting fakes becomes exceedingly difficult, and detection tools themselves are not widely accessible or understood by the general public. Law and policy have yet to catch up. The U.S. currently has no federal law banning or broadly regulating deepfakes (a few states have narrow laws, e.g. against deepfakes in election ads within 60 days of a vote, or against certain non-consensual pornography). Europe’s proposed AI Act may require AI-generated media to be labeled, but enforcement will be challenging across the wild expanse of the internet.

From a leftist perspective, the deepfake issue is intertwined with corporate platforms and power. The major vectors of misinformation are the large social media and video-sharing platforms (Facebook, Twitter, YouTube, etc.) – all profit-driven companies whose algorithms often prioritize engagement over accuracy. Outrageous fake content can drive clicks and shares (and thus ad revenue) more than sober truth can. These platforms have only fitfully addressed misinformation, usually under public pressure and with inconsistent results. In a sense, AI-generated lies are an externality of the current attention economy. No wonder experts say that without stronger checks, deepfakes will be leveraged to “erode trust between people” and in institutions​<3> – a recipe for social discord that ultimately benefits authoritarians and demagogues. A crony-capitalist deployment of AI in media – one where tech firms face few penalties for viral falsehoods, and may even gain revenue from them – virtually invites this outcome.

Can anything be done? Technologically, researchers are working on watermarking systems (to invisibly tag AI-generated content) and on robust detection algorithms. Socially, there are calls for media literacy campaigns to educate the public about deepfakes. Legislatively, there are proposals to mandate disclosures (e.g., require any AI-generated political ad to carry a clear label) and to hold platforms liable if they knowingly distribute harmful deepfakes. These solutions, however, run up against the reality that under capitalism’s current incentives, both the creators of deepfake technology and the distributors of content are not naturally aligned with the public interest. A democratic socialist approach would demand public accountability: for instance, compelling platform companies to implement rapid removal of proven deepfakes and perhaps creating public oversight boards to monitor misinformation. It could also involve publicly funded infrastructure for verified information (strengthening independent public media, etc.) as a bulwark against the fakes. Ultimately, safeguarding informational eudemony – an environment where truth can be established and trusted – will likely require treating some aspects of information space as a public good rather than a playground for profit. The deepfake challenge is daunting, but it is not inherent to AI as a technology; it stems from social context and misuse. With deliberate governance (which we’ll outline later), AI’s creative power need not corrode truth, but that will depend on wresting some control back into democratic hands.
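As a concrete illustration of the disclosure idea, the sketch below signs a small provenance manifest for a media file so that a platform can later check both that the file is unchanged and that its AI-generated label has not been stripped or forged. This is a minimal, hypothetical scheme, not the implementation of any existing standard; it uses a shared secret key for brevity, whereas a deployed system would use public-key signatures so anyone can verify a label without being able to forge one.

```python
# Minimal sketch of cryptographically signed provenance labels for media files.
# Hypothetical scheme for illustration only; real content-credential systems
# are more elaborate and use asymmetric signatures.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-held-secret"  # placeholder key for the example

def make_label(media_bytes: bytes, ai_generated: bool) -> dict:
    """Attach a signed disclosure stating whether the media is AI-generated."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media matches the manifest and the signature is intact."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    unchanged = claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and unchanged

video = b"...raw media bytes..."
label = make_label(video, ai_generated=True)
print(verify_label(video, label))          # True: label intact, media unchanged
print(verify_label(video + b"x", label))   # False: media altered after labeling
```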

4. Automation and Workforce Displacement

Beyond the realm of information, AI’s most direct impact on material life is through automation – the use of AI and robotics to perform tasks traditionally done by human workers. Automation is not a new phenomenon (the Industrial Revolution mechanized agriculture and manufacturing, computers automated clerical work in the 20th century, etc.), but AI significantly expands the scope of jobs at risk. Earlier waves of automation often affected routine manual labor; modern AI-driven automation can target cognitive and creative tasks too. Everything from customer service calls to drafting legal documents to driving trucks can be partially or fully automated with current AI. The result is a profound anxiety about workforce displacement: will AI put large numbers of people out of work, and if so, who bears the cost?

Under capitalism, employers have a built-in incentive to replace human labor with machines when it’s cheaper or more efficient, since that can cut wage costs and increase profits. AI appears to offer a dramatic boost in this direction. A widely cited study by McKinsey & Co. estimated that in the United States, around 45 million workers (about a quarter of the workforce) could lose their jobs to automation by 2030<4>. Globally, the number could reach hundreds of millions of jobs disrupted. Even if these figures turn out high, the trend is clear: sectors like retail, food service, transportation, manufacturing, and office administration are seeing AI-driven systems taking over tasks. For instance, automated checkout kiosks and inventory robots are spreading in retail stores; “robotic fry cooks” and barista machines are being trialed in food chains​<4>; self-driving vehicle technology threatens to displace truckers and cab drivers once mature; AI-powered software can now draft reports, compose marketing copy, or analyze data, potentially reducing the need for various white-collar roles. A 2023 survey of business leaders found that 37% had already implemented AI that resulted in replacing some workers<theguidon.com>. This is happening “gradually, then suddenly,” as one analysis put it – with AI adoption accelerating each year​<cnbc.com>.

The capitalist labor market has historically handled automation through a mix of job destruction and job creation. New technologies eliminate certain occupations but also give rise to new ones (for example, the automobile displaced blacksmiths and horse breeders but created auto manufacturing and mechanic jobs). Optimistic economists argue AI will be no different – that increased productivity will lower costs, spur demand, and create new industries and roles we can’t yet imagine. Leftists, however, stress that even if new jobs eventually appear, the transition can be devastating for workers without intervention. In the short-to-medium term, many workers could face unemployment or downward mobility, especially those whose skills are not easily transferable. And crucially, under capitalism the gains from automation have rarely been evenly shared. Productivity gains have largely accrued to business owners and shareholders, while workers often get a smaller slice of a growing pie. Since the 1980s, automation has contributed to wage stagnation and greater inequality, as high-skill complementary workers (like engineers who build or manage AI systems) command higher pay, whereas middle-skill workers competing with machines see their wages suppressed​<4>. As one policy analyst noted, automation tends to “increase compensation to workers who are complementary [to new technology] and decrease it for those who have been replaced”, absent strong countermeasures​<4>.

In the U.S., the prospect of AI-driven mass displacement raises the question: will this be another chapter of what labor historian David Noble called “the forces of production” being used against workers, or can it be harnessed for broad prosperity? Without a deliberate plan, the default is grim. Companies will automate to cut costs; laid-off workers will be told to “adapt” and retrain (often at their own expense), but not everyone will succeed in finding a new niche. We could see unemployment or underemployment rise, or more workers pushed into gig and precarious jobs that AI cannot yet do (a pattern already observed – many Uber drivers, for example, are workers displaced from other fields). Even those who keep jobs may face degraded working conditions: AI surveillance and performance algorithms are used to squeeze more output from workers (e.g. Amazon’s warehouse AI that tracks pick rates and flags workers for falling behind). Thus, automation isn’t just about job loss, but also about intensifying exploitation of those still employed.

What would a eudemonic (human-flourishing-centric) outcome look like instead? In an ideal scenario, AI could free humans from drudgery. If machines can do the boring, dangerous, or routine work, people could enjoy shorter working hours with no loss of pay, more leisure or creative time, and focus on the aspects of work that truly require human insight and empathy. This is essentially the vision of “fully automated luxury communism” (FALC) – that advanced automation can produce abundance for all, if the economic system is reoriented to distribute the gains fairly. Notably, Stafford Beer’s own vision was very much in line with this: he believed cybernetic technology made it possible to provide a high quality of life for everyone with minimal toil, but only if deployed in a planned, democratic fashion​<14>. Beer’s work on Project Cybersyn in Chile was partly aimed at automating and streamlining industrial management without disempowering workers – in fact, his system included mechanisms for workers to have more say (through feedback channels) even as computers assisted with coordination​<14>​<17>. Technology was there to augment human labor, not replace or surveil it into submission.

Realizing such a positive outcome under capitalism is difficult, because it requires intentional redistribution of benefits. Some mix of policies is often suggested by left economists and futurists: a shorter workweek (e.g. four-day weeks or six-hour days, so that employment is spread among more people and productivity gains translate into free time), robust retraining and educational programs (publicly funded, to help workers transition to new roles where needed), a universal basic income or federal job guarantee (to ensure no one is left destitute as industries shift), and strong unions to bargain over implementation of AI in workplaces. Unions indeed are on the front lines of this issue now. A salient example came from Hollywood in 2023: the Writers Guild of America (WGA) went on strike in part to demand protections against AI replacing screenwriters. After nearly five months on strike, they won contract language that explicitly guards their work from AI encroachment – studios must disclose if any material given to writers was AI-generated, AI cannot be credited as a writer, and writers cannot be forced to use AI tools​<3><8>. In short, “AI-generated material can’t be used to undermine a writer’s credit or compensation” under the new agreement​<3><8>. This was a landmark victory signaling that labor can push back: the WGA framed it as a human-vs-machine solidarity issue and successfully established “guardrails against the use of AI” in their industry​<3>. Likewise, other unions (actors, journalists, etc.) are mobilizing to ensure AI is a tool alongside workers, not a replacement for workers.

From a left materialist perspective, these struggles illustrate that the harms of AI automation are contingent on who controls the process. AI doesn’t inevitably have to throw millions into unemployment – if the productivity gains were socially owned, we could collectively decide to reduce work hours, raise wages, and use AI to eliminate only the undesirable parts of jobs. The fundamental issue is that under capitalism, the profits from automation go to the owners (shareholders) while the losses fall on the workers (job loss, lower bargaining power). Solving that means altering power relations: through policies (taxing or regulating away the incentives to simply fire workers), or more radically, through ownership (e.g. worker cooperatives might choose to adopt AI in ways that ease everyone’s workload rather than cutting staff). In a democratic socialist scenario, one could imagine, say, a publicly owned AI system that does a lot of necessary work (like public transit driving or data processing), but the benefits are returned to society (better services, or revenue that funds a universal basic dividend). Alternatively, if private companies deploy AI, strong state intervention (like high taxes on automated profits that fund social programs or UBI) could mitigate the harm. The concept of a “robot tax” has even been floated – taxing companies for every job replaced by an AI/robot to both slow down reckless automation and fund transition efforts.

In summary, AI-driven automation under capitalism poses a great threat to labor – indeed, some call it “the single greatest threat to the American labor market today”<4>. But it is not a threat because robots or algorithms are malevolent; it is a threat because of how our economy is organized. Leftist materialists argue that with the right framework, automation could herald liberation from excessive work. Achieving that will require class-conscious policies and perhaps new forms of ownership to ensure that eudemony, not just efficiency, guides the adoption of AI in the workplace.

5. Exploitation of Creative Labor and Data Commons

A different facet of AI’s impact on labor involves not the replacement of workers, but the appropriation of their past work. Modern AI systems, especially in the realms of art, literature, and software, rely on training data composed of human-created content: artworks, photographs, writings, music, code – scraped in bulk from the internet. The creators of this content are typically not asked for permission, nor compensated. This has led to mounting criticism that AI development has entailed a massive act of uncredited exploitation of creative labor. In plainer terms, AI companies are profiting from the unpaid work of artists and authors, by using their creations to train models that can then produce new content in the same style.

Consider the case of image-generating AI (like Stable Diffusion, Midjourney, DALL-E). To train these models, companies gathered billions of images from online sources – paintings, drawings, photographs – including countless copyrighted works by living artists. No royalties or credits were given. Now these models can churn out images “in the style of” any famous artist on demand. Not surprisingly, a group of artists filed a class-action lawsuit alleging that Stable Diffusion’s creators engaged in wholesale copyright infringement: by copying their artworks without permission to train the AI, and effectively creating a tool that competes with the artists’ own commissions​<11>. In 2023, a U.S. federal judge allowed key parts of this case to proceed, signaling that the claim – that the AI model contains compressed copies of the training images and thus infringes – is plausible​<reddit.com>. Similarly, Getty Images sued Stability AI for ingesting millions of Getty’s stock photos (some outputs even generated distorted remnants of Getty’s watermark, evidence of the training data)​<12><theartnewspaper.com>. On the literary front, authors from John Grisham to George R.R. Martin joined lawsuits against OpenAI and Meta, accusing them of copying “tens of thousands of books” without consent to train their large language models​<5>​. As one author’s complaint put it, “the basis of OpenAI is nothing less than the rampant theft of copyrighted works.”<5>

These are strong words, but they reflect a palpable anger among creators. Under the current paradigm, an artist or writer’s entire oeuvre might be used to teach an AI, which then generates content that rivals the original creator’s work – all while the creator earns nothing and wasn’t even consulted. It is experienced as a form of enclosure of the creative commons. Decades of artwork and cultural labor, shared online (often by creators hoping to gain audience or collaborate), have been swept into proprietary AI models. The labor was done by humans, but the fruits are captured by capital. This dynamic is familiar in late capitalism – think of how social media monetized users’ social labor, or how “user-generated content” sites profit from unpaid contributors. AI takes it up a notch, by not just hosting human creations but internalizing them and generating substitutes.

From a legal standpoint, companies argue fair use: that training on publicly available data is transformative and allowed. This is a gray area, now being tested. But beyond legality, leftists raise moral and political questions: Who owns cultural knowledge? If a model is trained on the collective art, writings, and expressions of humanity, should the benefits belong to a few tech companies, or to the people who created that body of work? Under capitalism, we see a rush to privatize this “commons of information” without compensation. It’s a new enclosure movement on the digital frontier.

Furthermore, the exploitation isn’t only of famous artists – it’s also of countless gig workers and hidden labor behind AI. For example, OpenAI’s ChatGPT was refined using feedback from human contractors, including underpaid workers in Kenya who were asked to label toxic content (like graphic violence and hate speech) to help the AI learn to avoid it. These workers, paid less than $2 an hour, were traumatized by exposure to the worst content​<medium.com><theleftberlin.com>. Content moderation and data labeling – crucial tasks for AI – are often outsourced to the Global South or to precarious workers, forming an invisible underclass of AI labor. Their exploitation is a direct result of tech companies seeking cheap ways to “clean” and curate training data.

So, whether it’s the appropriation of creative works or the sweatshop-like conditions for data labelers, AI’s development has reflected existing inequities. A left-materialist lens sees this as another instance of capital extracting value from labor without remuneration. The difference is the scale (billions of pieces of content) and the opacity (the extraction is hidden in algorithmic training, not immediately visible like a factory).

What can be done to address this? Creators and their allies are advancing several strategies. Legal action is one route – the courts may force AI firms to negotiate with rights holders or pay damages. But litigation is slow and uncertain. Policy reform could help: for instance, updating copyright law to clarify the status of AI training. Some advocate for a compulsory license model, where AI developers pay into a fund that gets distributed to creators whose works are used for training (similar to how radio stations pay music royalties). Unions and professional associations are also stepping up. In the UK, the Trades Union Congress (TUC) in 2025 called for new protections for creative workers against AI exploitation. They demanded transparency of AI training data (so artists/writers know if their work is being used), an opt-in system so that creative work cannot be data-mined for AI without consent, and fair pay for workers whose material trains AI models<13>. The TUC warned that without such guardrails, “rapacious tech bosses” will cash in on workers’ talent with no reward to the actual people who produced the underlying work​<13>. This kind of collective action – essentially trying to extend labor rights and IP rights into the realm of AI – is crucial. We are seeing the beginnings of it: the Writers Guild’s contract, for example, explicitly states that exploitation of writers’ material to train AI is prohibited under their agreement​<cdt.org>. Similarly, visual artists have organized protests (“ArtStation” online protests, for example, where artists posted “No AI” on their portfolios).
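To illustrate how a compulsory-license fund of the kind described above might mechanically distribute money, here is a minimal pro-rata payout sketch. The registry, creator names, levy amount, and counts are all hypothetical.

```python
# Minimal sketch of a compulsory-license payout: an AI developer pays a levy into
# a fund, and the fund is split pro rata by how many of each creator's works were
# ingested for training. All names and figures are hypothetical.

levy_fund_usd = 1_000_000          # assumed annual levy paid by the model developer

works_used_in_training = {         # hypothetical registry: creator -> works ingested
    "illustrator_a": 12_000,
    "novelist_b": 40,
    "photo_coop_c": 250_000,
}

total_works = sum(works_used_in_training.values())

payouts = {
    creator: levy_fund_usd * count / total_works
    for creator, count in works_used_in_training.items()
}

for creator, amount in payouts.items():
    print(f"{creator}: ${amount:,.2f}")
```

A raw per-work split like this would heavily favor prolific catalogues (stock-photo archives) over, say, novelists; real schemes, as with music royalties, would likely weight by length or usage, or guarantee a per-creator floor – exactly the kind of detail that collective bargaining and transparency rules would need to settle.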

A deeper structural solution from a left perspective would be to treat the knowledge and creative content of society as a commons, not as a free raw material for private AI factories. This could mean building public datasets for AI that have clear usage rights and benefit sharing. It could mean requiring open models (so that communities can themselves leverage AI trained on their collective culture, rather than being solely consumers of big tech’s models). In a socialist scenario, one could even imagine something like a National Library of Training Data – a publicly curated repository of books, art, and data that AI can train on, with the understanding that the resulting models are open utilities and that authors/artists are acknowledged and rewarded for their contributions. This would invert the current setup where data is captured and monetized in secret.

At minimum, the principle should be established that creators deserve credit and compensation when their work is used to build commercial AI products. The exact mechanisms can be debated (royalties, profit-sharing, or public funding for the arts supported by AI tax, etc.), but without them, we risk a future where creativity is mined and automated in a way that hollows out creative professions. The term “exploitation” is apt: AI has been built on unpaid labor, and left unchecked, it will further enable capital to exploit without employing – generating revenue from content without supporting the content creators.

In sum, the struggle over AI’s training data is a struggle over digital property and labor rights. It’s a new front in class conflict: tech companies versus creators and workers. Leftist materialists come down firmly on the side that the fruits of collective human culture and knowledge should not be monopolized. Beer’s ethos of eudemony would argue that knowledge is most valuable when shared for mutual flourishing, not hoarded. A democratic AI regime would find ways to reward creators and involve them in governance of how their creations are used – turning what is now an extractive, one-way relationship into a collaborative, consensual one.

6. Toward a Democratic Socialist Deployment of AI

Having surveyed the major problems of AI under the current paradigm – concentrated corporate power, deepfake-fueled mistrust, automation threatening jobs, and uncredited exploitation of labor – we must ask: Is there an alternative way to develop and deploy AI? What would AI look like in a democratic socialist context, oriented toward Beer’s eudemony (human flourishing) rather than pure profit? This is not a trivial question, because it requires reimagining both the governance of technology and the economic incentives surrounding it. Yet it is essential if we are to answer whether AI’s problems are intrinsic or circumstantial.

In a democratic socialist vision, AI would be treated as a public good and a tool for social use-value, not just private exchange-value. Several key shifts would characterize this approach:

  • Democratizing Control and Ownership: Instead of AI capabilities being owned by a few firms, they could be developed by public institutions, cooperatives, or open-source communities. For example, a government could nationalize certain critical AI infrastructures (like large data centers or datasets) or fund open AI projects that operate under public oversight. Imagine a “Public Option” for AI: freely available models (for language, vision, etc.) that any startup, community group, or individual can use, negating the dominance of corporate APIs. This would prevent the scenario of every advanced AI service being a paid product of Big Tech. It parallels how public utilities function – infrastructure provided to all, with democratic accountability. Internationally, one might see collaborations where countries pool resources to build AI for addressing global challenges (climate modeling, pandemic prediction), with the outputs openly shared, rather than jealously guarded. An inspiration here is the open-source software movement, which has created widely used tools governed by communities rather than profit – we could foster an analogous open-source AI ecosystem with sufficient support.
  • Labor-Centric Implementation: In workplaces, AI would be introduced with the consent and input of workers and their unions. Rather than top-down imposition (“Management bought a new AI system; half of you are laid off”), the terms of AI use would be collectively bargained. If an AI can make a process more efficient, workers could collectively decide to reduce work hours or reassign duties in ways that improve quality of life, instead of simply doing the same work with fewer people. Policies like co-determination (workers on company boards) would help ensure this. At a societal level, robust safety nets (as discussed: shorter hours, guaranteed incomes, retraining programs) would cushion any displacement that does occur, making technological progress something to welcome, not fear. The goal is to redistribute AI’s productivity gains widely. Beer himself argued that technology enables abundance, but it takes conscious governance to ensure that doesn’t lead to tyranny or unemployment<14>. A planned approach could channel AI to complement human labor – e.g., AI handles tedious documentation while humans do interpersonal service – making jobs more fulfilling.
  • Alignment with Social Needs: In a capitalist model, the “alignment” of AI (to use the term AI ethicists love) is ultimately with profit; in a socialist model it would be aligned with democratically determined goals. This means AI research agendas might prioritize things like healthcare, education, environmental sustainability, and public service delivery over, say, ad targeting or military applications. For instance, rather than racing to build AI that gets people to click more ads, we’d race to build AI that can help doctors diagnose diseases faster, or that can enable disabled individuals to have more autonomy (assistive AI). We’d invest in AI for climate – optimizing energy use, modeling climate interventions – because the profit motive alone doesn’t pour enough resources into those critical areas. A poignant historical case: Beer’s Cybersyn project in Chile attempted to use networked computing to manage the economy for the people’s benefit, including handling supply shortages and increasing worker participation. Today, we have far more powerful tools (internet, sensors, AI). We could create systems to coordinate supply chains to be resilient and equitable, to monitor environmental data in real time for rapid response, and to allocate resources to where they’re needed most – all under transparent, democratic control. Think of a modern “People’s Opsroom,” where citizens and officials can see real-time metrics of social well-being (health, education, pollution levels) and use AI simulations to inform policy – a very different use of AI than secret algorithms optimizing hedge funds or ad auctions. It’s the idea of “AI for the public good” operationalized.
  • Regulation and Accountability: Even in a socialist context, AI would need checks to prevent abuses. A democratic state would robustly regulate AI for privacy, bias, and safety. This could include banning certain uses outright – for example, socialist principles would reject AI for mass surveillance or authoritarian social credit systems, as these undermine human freedom (not to mention Beer’s emphasis on liberty). It might also ban sale of AI weapons or manipulative algorithms. Importantly, regulations would be shaped by public input: imagine citizen panels or assemblies that deliberate on questions like “Should we allow deepfake technology in entertainment, and if so under what rules?” – a far cry from today’s decisions made in corporate boardrooms. Internationally, a left approach favors cooperative frameworks: treaties or agreements on AI norms (e.g., a pact not to unleash AI misinformation in other countries’ elections, analogous to arms control but for information warfare). While one must be realistic that great-power rivalries won’t vanish overnight, a socialist government would at least advocate on the world stage for human-centered AI norms, allying with others to set global standards that emphasize human rights and shared prosperity.
  • Empowerment and Education: A democratized AI future would try to make AI understandable and accessible to people, not an arcane magic known only to an elite. This means investing in public AI literacy – teaching citizens how these systems work, how to critically assess AI outputs, how to use AI tools in their own lives safely. It also means interfaces that allow people to give meaningful feedback or even participate in training AI on community-defined values. For example, local governments could have AI systems where residents help decide what the AI should optimize for (maybe a city uses AI for traffic management – residents might prioritize minimizing accidents over slightly shorter commute times, etc., feeding those preferences into the system design). The overarching idea is AI as a partner, not a master. Beer’s cybernetic philosophy was all about feedback loops – we, the public, should be “in the loop” with AI, constantly providing feedback so that these systems serve us, not we serve them.

It’s worth noting that glimpses of this democratic AI approach are appearing. In some European discussions, for instance, policymakers talk about “data commons” and publicly funded AI research to reduce dependence on Silicon Valley. There are open-source model communities (like HuggingFace and EleutherAI) that operate on collaborative principles. Cities like Amsterdam and Barcelona are experimenting with “digital rights” regulations that give citizens more say over tech platforms. Even the idea of worker-owned tech cooperatives is growing – imagine a cooperative that develops AI for, say, translation services, owned by its linguist-members. Internationally, socialist-leaning governments (like perhaps a future progressive government in a Latin American country) could intentionally adopt AI for social programs – e.g., using AI to better target subsidies or plan economic development – showing a path different from the corporate one. Of course, there are also cautionary tales: China’s state-driven AI shows that non-corporate control doesn’t automatically mean democratic or benign (China uses AI for surveillance and social credit scoring, a deeply dystopian use). The difference is that in a democratic socialist framing, the public’s freedoms and input are paramount. We seek the “liberty machine” Beer spoke of, not a tyranny by algorithm​<14>​<17>.
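To ground the open-source ecosystem point, here is a minimal sketch of what relying on that ecosystem looks like in practice: running a community-released model locally through the open-source transformers library rather than calling a proprietary API. The model identifier is simply an example of a publicly released checkpoint (one of EleutherAI’s); any openly licensed model could be substituted.

```python
# Minimal sketch: running an openly released language model locally with the
# open-source `transformers` library, instead of calling a proprietary API.
# The model identifier is an example of a community-released checkpoint; any
# openly licensed model from a public hub could be substituted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EleutherAI/gpt-neo-1.3B",  # example open checkpoint released by EleutherAI
)

prompt = "Public infrastructure for machine learning could"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

Nothing in this snippet depends on a corporate account or per-query fee; the weights, once downloaded, can be audited, fine-tuned, and shared, which is precisely the property a public-option approach would want to preserve and fund.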

In practical terms, moving toward a democratic socialist AI might start with policy reforms in the here and now. For example: enforce antitrust to break up Big Tech concentrations (preventing one or two companies from cornering all AI resources); mandate algorithmic transparency (so companies must reveal how their AI makes important decisions, enabling public scrutiny); require participatory design (large projects should solicit public comment, similar to environmental impact assessments but for AI’s social impact); greatly increase public funding in non-profit AI R&D (to create open alternatives in everything from search engines to medical AI); implement strong data privacy laws (so companies cannot just seize personal data for training without consent). These steps, while not abolishing capitalist ownership, begin to socialize the governance of AI. They create a more level playing field where public needs weigh more heavily.

A fully socialist approach might go further to socialize ownership – e.g., nationalize a company like Google’s AI division if necessary to make its innovations public domain, or create worker self-management of AI labs. While such moves seem politically remote in the U.S. at the moment, they are worth articulating as horizons, because they highlight the core principle: AI should be subject to the same democratic controls as any crucial infrastructure. Just as leftists historically pushed for public control of utilities, railroads, or healthcare, today AI and data could be seen as the new utilities of the information age that warrant public direction.

Before concluding, it’s important to address an underlying question: do we even want AI at all? Some on the left harbor a deep pessimism – that AI, born from military and capitalist endeavors, is inherently tainted and will always serve elites, so perhaps it’s better to reject it (analogous to Luddites smashing machines). Our analysis suggests that while skepticism is healthy, abandoning AI outright would be a mistake. The technology itself has enormous emancipatory potential if repurposed for eudemony. It can alleviate scarcity, liberate time, expand knowledge – goals fully in line with socialist humanism. Beer certainly believed technology could and must be repurposed for people’s liberation, not destroyed. The key is who directs the tool. The next section will directly tackle this point as we answer the three big questions posed at the outset.

Conclusion: Eudemony or Bust – Answering Three Key Questions

At this juncture, we return to the three critical questions through a leftist materialist lens, using Stafford Beer’s concept of eudemony (human flourishing) as our compass:

1. Are the problems with AI fundamental, or do they result from crony capitalist deployment?
In light of our analysis, the verdict is that most problems are not inherent to AI as a technology but rather emerge from the capitalist context in which AI is being developed and deployed. AI, in essence, is a general-purpose set of tools – it can just as easily be used to empower as to oppress. The fact that we currently see it amplifying inequality, centralizing power, and eroding trust is a reflection of who is in the driver’s seat. Under a system where profit maximization and corporate concentration rule, AI’s trajectory has followed the path of least resistance: reinforcing existing power structures. Corporate control of data and compute led to monopolistic AI silos – that’s a byproduct of how our economy rewards accumulation and market dominance​<1>, not a necessary feature of machine learning algorithms. Deepfakes and misinformation crises stem from social media business models and the lack of regulation – problems of incentive and oversight, not an unavoidable consequence of image-generation techniques. Workforce displacement becomes a grave threat when layoffs are used to cut costs for shareholders – under a different economic logic (say, solidarity and planned transition), the same automation could reduce drudgery without impoverishing anyone. Even the exploitation of creative labor is a choice: companies chose to trawl the web for free data rather than negotiate fair licenses – a tactic reminiscent of enclosure of commons in any era.

To be sure, some challenges are inherent to AI’s capabilities. For example, the very ability to create hyper-realistic fakes will always pose a need for verification mechanisms; the ability of AI to make decisions faster than humans will always require careful control to avoid accidents. And any powerful tech can be dual-use (nuclear tech can power cities or make bombs, AI can cure diseases or enable surveillance). But these are manageable with wise governance. Nothing about AI inevitably says it must be owned by trillion-dollar companies or throw millions out of work. Those outcomes are contingent. In a world where a different system deployed AI, we would likely list different “AI problems” – perhaps still ethical ones, but not the aspects we emphasized that are tightly linked to capitalism’s priorities (profit, power, control). As Beer might say, the system in which AI operates determines what the technology does. If the system’s goal is profit, we get AI tuned for profit, with all the side effects we’ve discussed. If the system’s goal were eudemony, we’d design AI quite differently. In sum, the current AI problems are largely problems of crony capitalist deployment – the outcome of introducing AI into a highly unequal, unregulated, and profit-driven environment. Change the environment, and the problems can be substantially mitigated.

2. Can these problems be solved through leftist materialist strategies – and if so, how?
Yes, these problems can be addressed – not with a single silver bullet, but with a suite of leftist strategies grounded in economic and democratic reforms. Our discussion in Section 6 sketched the broad approach. To summarize concrete solutions:

  • Socialize AI Governance: Bring AI development under democratic oversight. This includes public input in setting research priorities and ethical guidelines, and possibly public ownership stakes in AI ventures. For instance, a national AI council with labor, consumer, and scientific representatives could steer big projects (similar to how some countries have national economic planning bodies). At minimum, strengthen antitrust enforcement to break Big Tech’s chokehold, enabling more pluralistic and accountable AI efforts​<1>.
  • Regulate to Protect the Public: Implement laws that directly tackle each issue – e.g., a Deepfake Accountability Act requiring watermarks and imposing penalties for malicious use of deepfakes (thus safeguarding informational trust), robust Algorithmic Accountability statutes that give citizens the right to explanations and redress when AI affects them (to counter opaque corporate AI decisions), and Data/Creative Workers’ Rights laws that ensure anyone whose data or work significantly contributes to an AI model has rights to disclosure and compensation. These are policy manifestations of left principles of justice and equity.
  • Empower Labor and Redistribute Gains: Bolster unions and worker power so that they can negotiate the terms of AI implementation, as the WGA did​<3>. Additionally, pursue redistribution mechanisms: higher taxes on AI-driven corporate profits, which can fund free retraining programs, job guarantees in sectors where human care is irreplaceable (education, elder care, etc.), or even universal basic income pilots in regions heavily impacted by automation. Shorten the workweek gradually to share the productivity gains of AI across the workforce (if productivity jumps 20% from AI, the same output can in principle be produced with roughly 17% fewer hours of work – a left strategy would push for that translation).
  • Build the Commons: Invest in public/open alternatives to the corporate AI stack. For example, government or universities could create open datasets and models for use by small businesses and communities, breaking the dependence on Google or OpenAI. A notable suggestion by some technologists is a public “data trust” where individuals pool their data and decide collectively how to license it (possibly to AI firms for a fee that is shared) – this flips the script so people have bargaining power over data usage rather than being passive sources. Supporting cooperatives that develop AI is another strategy (imagine a coop of radiologists globally pooling imaging data to build an AI diagnostic tool they all co-own – feasible with coordination). These initiatives remove some problems by design: open models can be audited for bias (increasing transparency), public datasets can exclude or properly handle sensitive info (protecting privacy), etc. They also ensure the benefits of AI (like a good medical model) are not paywalled.
  • International Solidarity and Norms: Leftist strategy also means looking beyond borders. Work towards international agreements that limit harmful AI uses (similar to climate accords). Share beneficial AI technology with poorer nations rather than guarding it – a socialist internationalist view would see AI for, say, disease control as something to give freely (or at cost) to the Global South, not as IP to maximize exports. Also learn from others: for example, the EU’s precautionary regulatory approach or collective bargaining experiments like Canada’s media organizations pushing for payment from AI companies – these can be adapted and adopted widely.

In essence, leftist strategies revolve around the idea of democratization (of power, knowledge, and benefits) and decommodification (treating certain things – truth, basic livelihood, creativity – not as commodities subject to brute market forces, but as shared values to protect). By applying these strategies, each of the identified problems can be ameliorated. It won’t be instantaneous or easy – it requires political struggle, since those profiting from the status quo will resist. But as seen with the Hollywood strikes and the artists’ lawsuits, resistance is already forming and can win victories. The key is to scale these wins into systemic change, aligning AI with the public interest by design.

3. Is it worth it for leftist materialists to engage with AI and redirect it toward eudemonic outcomes, or should it be abandoned altogether?
Our conclusion is unequivocal: it is worth engaging – indeed imperative to engage – with AI. Abandoning or ignoring AI would cede one of the most powerful forces shaping the future to the current hegemony of capital and the security state. If leftists opt out, corporations and authoritarian governments will continue developing AI in ways that could further entrench inequality and oppression. Non-engagement would be a strategic mistake akin to unilaterally disarming in a class war over technology. As the saying (attributed to Trotsky) goes, “You may not be interested in war, but war is interested in you.” Likewise, we may not be interested in AI, but AI (as deployed by others) will impact us. The rational choice is to engage and shape rather than reject.

Moreover, there is a hopeful opportunity in engaging. History shows that technologies can be repurposed: the internet originated from military projects but was transformed (for a time) into a democratizing communications medium, largely because academics and idealists engaged with it and built open protocols and the early web. If social movements and progressive policymakers vigorously engage with AI now, they can bend its trajectory. Beer’s work in Chile is a poignant example of an attempt to repurpose cutting-edge tech for socialist ends – his “Project Cybersyn” didn’t fully come to fruition due to the coup, but it showed that different choices can be made with technology, even under pressure. We stand at a similar juncture: we can fight for “democratic AI” (AI governed by and for the people) or leave AI to “the market” (which in practice means a handful of CEOs and government spies).

Abandonment would also be counter to the left’s core mission of human emancipation. Eudemony – human flourishing – can potentially be greatly advanced by AI if we seize it. Picture AI systems that eliminate hunger by optimizing agriculture and distribution, that personalize education so everyone can learn at their best pace, that perform tedious labor allowing people to pursue art, relationships, community. These outcomes aren’t fantasies; they are technically within reach in the coming decades if the social will is there. But they won’t happen automatically – they require conscious human agency steering technology to serve the common good. Leftist materialists, with our focus on material conditions and power relations, are well equipped to lead that steering. We understand that it’s not just a tech issue but a political one. Engaging doesn’t mean uncritical embrace; it means pushing an agenda of “AI for eudemony.” This might involve challenging AI developers to consider metrics beyond profit – such as well-being impact assessments – or building alternative institutions as discussed.

There is also a philosophical reason to engage: to prove that the values of equality, solidarity, and democracy are not anti-technology or backwards-looking, but in fact can harmonize with scientific progress. Leftist thought is sometimes caricatured as Luddite (indeed the original Luddites were not anti-machine in principle, only against their destructive use under capitalism). By engaging positively with AI, leftists can show that we are not against innovation; we are against innovation being used to deepen exploitation. We can articulate and demonstrate a vision of innovative socialism – one that embraces the best of human ingenuity (AI included) but insists it be marshaled for humane ends. This could inspire a new generation of technologists to ally with social movements (already, many AI researchers are uneasy with their work being used for surveillance or profit and would prefer it serve humanity – an organized left gives them a channel to realize that).

Of course, engagement must also be vigilant. We should not be naive about AI’s risks – some voices on the left worry that any highly advanced AI deployed by a state could become oppressive, even under socialism. These cautionary notes suggest that in engaging, leftists should set firm ethical red lines (e.g., no mass surveillance AI, period) and build robust democratic institutions that can keep technology in check (e.g., independent auditors and civil society watchdogs monitoring state use of AI). In other words, engage, but with eyes open. That is consistent with Beer’s philosophy: he advocated using cybernetics for freedom, but always with feedback controls and human judgment at the center.

In conclusion, the “AI problem” is not a reason to forswear AI; it is a call to action. The problems are largely of deployment, and thus the solutions lie in deployment choices – which are political. Leftist materialists have an important role to play in redirecting AI toward eudemonic outcomes. It will require struggle on many fronts – in workplaces, in courts, in legislatures, in the realm of ideas – but it is a struggle worth waging. If we succeed, AI could become a truly liberating force, helping to create a society where all people have the opportunity to fulfill themselves as their best possible selves (to recall Beer’s definition of eudemony<9>). If we fail or stand aside, AI will likely exacerbate current injustices and perhaps introduce new ones.

Eudemony or bust. That is the choice. By engaging with AI and insisting it serve human needs and collective well-being, we affirm that technology is not the master of our fate – we are. Or as Beer optimistically posited, the purpose of the system is what we decide it to be. A democratic socialist society would decide that the purpose of AI is to enhance life, not profits. And with that purpose clearly in sight, we can design and govern these technologies in line with our highest aspirations. It is a daunting task, but also a profoundly hopeful one: to take the fruits of human intellect and direct them towards a future in which automation and intelligence free us to be more fully human – creative, cooperative, and flourishing.

References:

  1. Meredith Whittaker, “The Steep Cost of Capture.” Interactions 28, no. 6 (2021): 50–55. papers.ssrn.com
  2. Dylan Matthews, “Why the left should worry more about AI.” Vox, Nov 7, 2019. vox.com
  3. Ali Swenson and Kelvin Chan, “AI-created election disinformation is deceiving the world.” AP News, March 15, 2024. apnews.com
  4. Philip Klafta, “No Adult Left Behind: Automation, Job Loss, and Education Policy.” Chicago Policy Review, Feb 12, 2024. chicagopolicyreview.org
  5. Blake Brittain, “OpenAI, Microsoft hit with new author copyright lawsuit over AI training.” Reuters, Nov 21, 2023. reuters.com
  6. PA Media, “UK unions call for action to protect creative industry workers as AI develops.” The Guardian, 2 Mar 2025. theguardian.com
  7. Dominic Ligot, “Comparing AI vs. Blockchain Hype.” HackerNoon, Nov 13, 2023. hackernoon.com
  8. Writers Guild of America (WGA) Contract Excerpts on AI, 2023 – via AP News reporting by Jake Coyle, “In Hollywood writers’ battle against AI, humans win (for now).” Associated Press, Sept 27, 2023. apnews.com
  9. Jeremey Gross, “Stafford Beer: Eudemony, Viability and Autonomy.” Red Wedge Magazine, Feb 18, 2020. redwedgemagazine.com
  10. Stafford Beer, Platform for Change (John Wiley & Sons, 1975), pp. 170–171. (Origin of Beer’s eudemony quote on money and eudemonic systems.)
  11. Artists’ Class-Action Complaint against Stability AI, et al. (filed Jan 2023, N.D. California) – summarized in James Vincent, “US artists launch lawsuit against Stability AI and Midjourney for scraping works.” The Verge, Jan 16, 2023. theverge.com
  12. Andy Budd, “Getty Images vs. Stability AI: A Landmark IP Case for AI.” Harvard Business Review, Feb 2023. hbr.org
  13. Trades Union Congress (TUC) Report, “AI and Employment Rights” (London, 2025). theguardian.com
  14. Aaron Bastani, Fully Automated Luxury Communism: A Manifesto (Verso, 2019) – concept referenced re: Beer’s alignment with technology-enabled abundance.
  15. Meredith Whittaker, “AI and Big Tech: Intelligence as Concentration of Power.” Interview in Springerin, 2023 – on the centralization of AI in Big Tech.
  16. Yochai Benkler, “A Commons-Based Approach to AI.” Creative Commons, 2023 – proposal for sharing training data as a commons.
  17. Stafford Beer, Designing Freedom (CBC Massey Lectures, 1973). (Beer’s warnings about cybernetics under authoritarian vs. democratic control.)
  18. Liza Johnson and Hannah Johnston, “Hollywood Labor Strikes and AI.” Brookings Institution, October 2023. brookings.edu
  19. Bill McKibben, “The A.I. Road to Serfdom.” The New Yorker, April 2023. (On who owns the means of computation in AI.)
  20. European Union AI Act (proposed 2021, ongoing) – Draft provisions on deepfake disclosure and high-risk AI regulation.
