Is AI Destined for Enshittification?

Remember the early days of your favorite online platform? Maybe it was a social network that felt genuinely connective, a search engine that gave perfectly precise answers, or a streaming service with an uncluttered interface and a vast, ad-free library. Then, subtly at first, things began to shift. More ads crept in, features became clunky, recommendations felt less intelligent, and finding what you wanted became a chore. If this sounds painfully familiar, you’ve likely experienced what author Cory Doctorow calls “enshittification.”
Doctorow’s theory is a stark yet accurate descriptor of the decay of digital platforms. It outlines a predictable, three-stage lifecycle in which platforms shift from serving their users to extracting value from them, until they become useless for everyone but the platform owners. The question now looms large: as artificial intelligence integrates deeper into our lives, and its models grow ever more powerful and profitable, is AI destined for the same dismal fate?
The Inevitable Descent: Understanding Enshittification
To grasp the risk to AI, we first need to understand the mechanics of enshittification. It’s a cynical yet empirically validated cycle that many tech giants have undergone.
Phase 1: Attracting Users (and Partners)
In the beginning, platforms offer incredible value, often for free or at a low cost. Think of Facebook allowing you to connect with old friends, Amazon providing unparalleled selection and convenience, or Google Search offering instant access to the world’s information. The goal here is rapid growth, leveraging network effects to become indispensable. They lavish users with features and provide easy access to third-party developers, content creators, or businesses looking for an audience.
Phase 2: Extracting Value from Partners
Once a platform reaches critical mass, with users locked in, the focus shifts. The platform starts extracting value not just from users, but from the businesses and creators who depend on its reach. Algorithms are tweaked to favor promoted content, search results become cluttered with ads, and API access becomes expensive or restrictive. Think of Facebook throttling organic reach for businesses, forcing them to pay for ads to reach their own followers. Or Amazon squeezing sellers with ever-increasing fees and strict performance metrics.
Phase 3: Exploiting Users
Finally, the platform turns its extractive gaze fully on its users. The quality of the service degrades noticeably. Ads dominate feeds, user data is aggressively monetized, and the overall experience becomes frustrating. Why? Because the users are now trapped. They’ve invested time, built communities, or stored data, making it too costly to leave. At this point, the platform is optimized purely for shareholder value, not user utility. We see this in social media feeds that are now more ads and suggested content than posts from friends, or streaming services cutting content libraries while raising prices.
The core takeaway is that this isn’t a bug; it’s a feature of the prevailing profit-maximization model. When the incentive structure prioritizes infinite growth and quarterly returns above all else, platform decay becomes not just possible, but probable.
AI’s Vulnerability: A Familiar Path to Decay?
So, where does AI fit into this depressing narrative? One could argue that AI’s journey is already showing tell-tale signs, even in its relative infancy. The unique characteristics of AI—its data dependency, the computational cost, and the rapid pace of innovation—could even accelerate its enshittification.
The Allure of Early AI
Initially, AI, especially generative AI, captivated us with its sheer capability. Free access to large language models, impressive image generators, and intelligent assistants offered a glimpse into a sci-fi future. The value proposition was clear: unprecedented productivity gains, creative augmentation, and access to knowledge. Companies rushed to offer these tools, often at low or no cost, to build market share and collect invaluable user data for model training.
The Shifting Sands of Monetization
As AI models matured and usage skyrocketed, the cost of running them became apparent. This is where the monetization pressure kicks in. We’ve seen freemium models emerge, where basic access is free, but advanced features, higher usage limits, or better-performing models are locked behind paywalls. Data privacy concerns mount as AI companies seek more data to “improve” their models, often without clear consent or understanding of how that data is used.
The next phase could see AI platforms prioritizing enterprise clients and businesses over individual users. Imagine AI tools whose core functionality is degraded for free users, pushing them to pay for “premium” versions that simply restore the original quality. Or AI models that subtly promote products or services embedded by paying advertisers, much like search engines prioritize paid listings.
The ‘Model Collapse’ Threat
Perhaps the most insidious threat unique to AI is “model collapse.” When AI models are increasingly trained on data generated by other AIs—data that might be biased, inaccurate, or simply repetitive—the quality of subsequent models steadily degrades: rare knowledge gets sampled away, and errors compound. This creates a vicious cycle: low-quality AI output becomes the input for the next generation of AI, leading to a general decline in reliability, creativity, and truthfulness. What good is an AI assistant if its information is derived from an echo chamber of machine-generated errors? This is enshittification at a fundamental, data-driven level.
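The dynamic can be illustrated with a toy simulation (a deliberate simplification, not a claim about any real model): start with a diverse human-written “corpus” of phrases, then repeatedly “train” a trivial model—here, just the empirical frequency distribution—on samples drawn from the previous generation’s output. Because a phrase that is never sampled can never reappear, diversity can only shrink from one generation to the next.

```python
import random
from collections import Counter

random.seed(42)  # reproducible run

# Generation 0: a diverse human-written "corpus" of 100 distinct phrases.
corpus = [f"phrase_{i}" for i in range(100)]
counts = Counter(corpus)

diversity = [len(counts)]
for generation in range(30):
    # "Train" the next model on 200 samples drawn from the current model's
    # output distribution -- i.e., AI-generated data feeding the next AI.
    phrases = list(counts)
    weights = [counts[p] for p in phrases]
    sample = random.choices(phrases, weights=weights, k=200)
    counts = Counter(sample)  # the new "model" is just this empirical distribution
    diversity.append(len(counts))

print(diversity[0], "->", diversity[-1])  # distinct phrases shrink over generations
```

Real model collapse involves neural networks and continuous distributions rather than a frequency table, but the mechanism sketched here is the same: the tails of the data distribution vanish and cannot return without an injection of fresh human-generated data.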
Building Guardrails: Pathways to a Healthier AI Future
The good news is that enshittification isn’t an inevitable law of physics. It’s a consequence of design choices and economic incentives. We can, and must, forge a different path for AI.
1. Embrace Open Source and Decentralization
Open-source AI models and decentralized AI networks offer a powerful antidote. By making models, data, and training methodologies transparent and accessible, the community can scrutinize, improve, and even fork models that start to show signs of degradation. Decentralization can prevent any single entity from gaining too much control and dictating the terms of engagement, fostering competition and user choice.
2. Prioritize Ethical AI Development and Governance
Companies and developers must bake ethical considerations into the core of AI development. This means prioritizing user benefit, transparency, fairness, and privacy over pure profit. Robust regulatory frameworks are also crucial to prevent monopolies, mandate data governance, and ensure accountability for AI outputs. Regulations that encourage healthy competition and discourage predatory practices can be a strong deterrent against enshittification.
3. Innovative Business Models Beyond Extraction
Instead of relying on ad-based or data-extractive models, AI companies can explore subscription models focused on delivering consistent, high-quality value. Co-operative AI initiatives, where users have a say in the development and governance of AI, could also emerge as powerful alternatives. The focus should be on creating sustainable value loops, where users are seen as stakeholders, not just data points to be monetized.
4. Cultivate Data Purity and Curation
To combat model collapse, there needs to be a significant emphasis on training AI models with high-quality, human-curated data. Investing in rigorous data verification, promoting diverse data sources, and creating mechanisms for users to flag and correct AI errors are vital. The integrity of AI’s output is directly tied to the integrity of its input.
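As a concrete sketch of what such curation could look like in practice, the filter below applies a few simple quality gates before data reaches training. Everything here is an illustrative assumption—the heuristics, the thresholds, and the `flagged_ids` user-feedback mechanism are hypothetical, not a description of any real pipeline:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    source: str  # e.g. "human", "synthetic", "unknown" (hypothetical labels)

def curate(docs, flagged_ids, min_words=5):
    """Keep documents that pass basic quality gates before training.

    Gates (illustrative): drop documents users flagged as wrong, drop known
    machine-generated text to limit model-collapse feedback loops, drop very
    short fragments, and drop exact duplicates.
    """
    seen_texts = set()
    kept = []
    for doc in docs:
        if doc.doc_id in flagged_ids:          # user-reported errors
            continue
        if doc.source == "synthetic":          # avoid AI-on-AI training data
            continue
        if len(doc.text.split()) < min_words:  # low-information fragments
            continue
        if doc.text in seen_texts:             # exact-duplicate removal
            continue
        seen_texts.add(doc.text)
        kept.append(doc)
    return kept

docs = [
    Document("a", "The mitochondria is the powerhouse of the cell.", "human"),
    Document("b", "The mitochondria is the powerhouse of the cell.", "human"),
    Document("c", "lol ok", "human"),
    Document("d", "As an AI language model, I can summarize that.", "synthetic"),
    Document("e", "Photosynthesis converts light into chemical energy.", "unknown"),
]
kept = curate(docs, flagged_ids={"e"})
print([d.doc_id for d in kept])  # -> ['a']
```

A production system would need far more—near-duplicate detection, provenance tracking, reviewer workflows—but the principle stands: the integrity of AI’s output is directly tied to the integrity of its input, and that integrity has to be enforced somewhere in the pipeline.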
The threat of enshittification to AI is real, given the historical precedent set by other digital platforms. AI’s transformative potential is too great to be squandered by the same self-serving, profit-driven cycles that have degraded so much of our online experience. By learning from the past, embracing open principles, prioritizing ethical development, and advocating for user-centric models, we can steer AI towards a future of sustained utility and genuine human empowerment, rather than another casualty in the long list of digital disappointments.
