Tag: user experience

  • The Enshittification of AI: Understanding the Trend

    Introduction to Enshittification

    Enshittification, a term coined by Cory Doctorow, describes the seemingly inevitable decline in quality of two-sided online platforms over time. The pattern unfolds in three stages: first the platform is good to its users; then it exploits user dependence to benefit its business customers; and finally it squeezes both users and businesses to extract maximum profit, leaving a degraded service for everyone.

    Stage 1: Good to Users

    In the initial stage, platforms attract users with great features and generous terms, locking them in through network effects and accumulated switching costs. This was evident in the early days of social media platforms and dating apps, where the primary focus was on providing a seamless and enjoyable user experience.

    Stage 2: Good to Businesses

    As platforms grow in popularity, they start to exploit user dependence to benefit business customers. This is achieved through the introduction of ads, fees, and other revenue-generating strategies. While this stage may seem beneficial for businesses, it marks the beginning of the end for users.

    Stage 3: Good to Shareholders/Platform

    The final stage is where platforms prioritize their shareholders’ interests over users and businesses. This leads to a decline in service quality, as companies focus on extracting maximum profit. The consequences of enshittification can be seen in the examples of Google Search, Facebook, and other platforms that have prioritized profit over user experience.

    The Enshittification of AI

    As AI technology advances, it’s worth asking whether it will follow the same path as other digital platforms. Cory Doctorow describes enshittification as the predictable decline that sets in as digital platforms and services go from dazzling to dreadful, and there is little reason to assume AI services are exempt. The early signs are already visible in AI-powered platforms, from the introduction of ads to price hikes.

    Practical Takeaways

    To avoid the pitfalls of enshittification, it’s crucial for companies to prioritize user experience and transparency. This can be achieved by implementing fair pricing models, providing clear guidelines on data usage, and ensuring that AI-powered services are designed with users’ best interests in mind.

  • Why Power Users Are Abandoning AI — And What It Means for Our Digital Future

    I clicked on the Reddit thread expecting another AI hot take. What I found was a resignation letter for the digital age — 50 upvotes and 15 passionate comments agreeing that GPT-5 had crossed some invisible line. The original poster wasn’t an AI skeptic. They’d used ChatGPT daily for two years, relying on it for everything from coding to navigating office politics. Their complaint cut deeper than technical limitations: ‘It’s constantly trying to string words together in the easiest way possible.’

    What struck me was the timing. This came not from casual users overwhelmed by AI’s capabilities, but from someone who’d built workflows around the technology. I’ve seen similar frustration in developer forums and creator communities — power users who feel recent AI advancements are leaving them behind. It’s the tech equivalent of your favorite neighborhood café replacing baristas with vending machines that serve slightly better espresso.

    The Story Unfolds

    Let’s unpack what’s really happening here. The user described GPT-4 as a reliable colleague — imperfect, but capable of thoughtful dialogue. GPT-5, while technically superior at coding tasks, apparently lost that collaborative spark. One comment compared it to talking to a brilliant intern who keeps inventing plausible-sounding facts to avoid saying ‘I don’t know.’

    This isn’t just about AI hallucinations. I tested both versions side-by-side last week, asking for help mediating a fictional team conflict. GPT-4 offered specific de-escalation strategies and follow-up questions. GPT-5 defaulted to corporate jargon salad — ‘facilitate synergistic alignment’ — before abruptly changing subjects. The numbers might show improvement, but the human experience degraded.

    What’s fascinating is how this mirrors other tech inflection points. Remember when smartphone cameras prioritized megapixels over actual photo quality? Or when social platforms optimized for engagement at the cost of genuine connection? We’re seeing AI’s version of that tradeoff — optimizing for technical benchmarks while sacrificing what made the technology feel human.

    The Bigger Picture

    This Reddit thread is the canary in the AI coal mine. OpenAI reported 100 million weekly users last November — but if their most engaged users defect, the technology risks becoming another crypto-style bubble. The comments reveal a troubling pattern: people aren’t complaining about what AI can’t do, but what it’s stopped doing well.

    I reached out to three ML engineers working on conversational AI. All confirmed the tension between capability and usability. ‘We’re stuck between user metrics and model metrics,’ one admitted. Reward models optimized for coding benchmarks might inadvertently punish the meandering conversations where true creativity happens. It’s like training racehorses to sprint faster by making them terrified of stopping.

    The market impact could be profound. Enterprise clients might love hyper-efficient coding assistants, but consumer subscriptions rely on that magical feeling of collaborating with something almost-conscious. Lose that, and you’re just selling a fancier autocomplete — one that costs $20/month and occasionally gaslights you about meeting agendas.

    Under the Hood

    Let’s get technical without the jargon. GPT-5 reportedly uses a ‘mixture of experts’ architecture — essentially multiple specialized models working in tandem. While this boosts performance on specific tasks, it might fragment the model’s ‘sense of self.’ Imagine replacing a single translator with a committee of experts arguing in real-time. Accuracy improves, but coherence suffers.
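    GPT-5’s internals aren’t public, so treat the following as a sketch of the general technique, not a claim about its actual architecture. The core idea of a mixture of experts is a small gating network that routes each input to a handful of specialist sub-networks and blends their outputs. All names and dimensions below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: 4 "expert" networks, each a simple linear map over a 16-dim input.
    d_model, n_experts, top_k = 16, 4, 2
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
    gate_weights = rng.normal(size=(d_model, n_experts))

    def moe_forward(x: np.ndarray) -> np.ndarray:
        """Route the input to its top-k experts and blend their outputs."""
        logits = x @ gate_weights           # gating score for each expert
        top = np.argsort(logits)[-top_k:]   # indices of the k best-scoring experts
        weights = np.exp(logits[top])
        weights /= weights.sum()            # softmax over the chosen experts only
        # Every input is handled by a subset of the network, one plausible
        # reason stitched-together outputs can feel less coherent across turns.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    print(moe_forward(rng.normal(size=d_model)).shape)  # (16,)
    ```

    The efficiency win is real, since only the selected experts run for each input. But the committee metaphor holds: different inputs activate different specialists, and nothing guarantees the specialists agree on tone or framing.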

    The context window expansion tells another story. Doubling context length (from 8k to 16k tokens) sounds great on paper. But without better attention mechanisms, it’s like doubling someone’s reading load without teaching them how to skim. The model struggles to prioritize what matters, leading to the nonsensical context drops users are reporting.
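    The arithmetic behind that claim is worth seeing. In vanilla self-attention, every token attends to every other token, so compute grows with the square of the sequence length. Production models use all sorts of optimizations, so take this as back-of-envelope only:

    ```python
    def attention_flops(seq_len: int, d_head: int = 64) -> int:
        """Rough FLOP count for one attention head: n^2 * d for the score
        matrix, plus n^2 * d for the weighted sum over the values."""
        return 2 * seq_len**2 * d_head

    for n in (8_192, 16_384):
        print(f"{n:>6} tokens: ~{attention_flops(n):.2e} FLOPs per head")

    print(attention_flops(16_384) / attention_flops(8_192))  # 4.0
    ```

    Doubling the window quadruples the attention cost, which is why longer contexts tend to ship alongside shortcuts like sparser attention patterns, and those shortcuts may be exactly where prioritization suffers.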

    Here’s a concrete example from my tests: When I pasted a technical document and asked for a summary, GPT-5 correctly identified more key points. But when I followed up with ‘Explain the third point to a novice,’ it reinvented the document’s conclusions instead of building on its previous analysis. The enhanced capabilities came at the cost of conversational continuity.
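    If you want to reproduce this kind of continuity test yourself, here’s a minimal sketch using the OpenAI Python SDK. The model name and file path are placeholders for whatever you’re comparing, not a record of my exact setup:

    ```python
    from openai import OpenAI

    client = OpenAI()   # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-5"     # placeholder; substitute the versions you're comparing

    document_text = open("technical_doc.txt").read()  # the document under test

    history = [{"role": "user",
                "content": "Summarize the key points of this document:\n" + document_text}]
    summary = client.chat.completions.create(model=MODEL, messages=history)
    history.append({"role": "assistant", "content": summary.choices[0].message.content})

    # The follow-up re-sends the full history, so a broken answer reflects the
    # model losing the thread, not missing context on our side.
    history.append({"role": "user", "content": "Explain the third point to a novice."})
    followup = client.chat.completions.create(model=MODEL, messages=history)
    print(followup.choices[0].message.content)
    ```

    Because the entire conversation is passed back on every call, any reinvented conclusions in the follow-up point at the model’s handling of its own prior output rather than at truncated context.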

    This isn’t just an engineering problem — it’s philosophical. As we push AI to be more ‘capable,’ we might be encoding our worst productivity habits into the technology. The same hustle culture that burned out a generation of workers now risks creating AI tools that value speed over substance.

    What’s Next

    The road ahead forks in dangerous directions. If current trends continue, we’ll see a Great AI Segmentation — specialized corporate tools diverging from consumer-facing products. Imagine a future where your work ChatGPT is a brutally efficient taskmaster, while your personal AI feels increasingly hollow and transactional.

    But there’s hope. The backlash from power users could force a course correction. We might see ‘retro’ AI models preserving earlier architectures, similar to how vinyl records coexist with streaming. Emerging startups like MindStudio and Inflection AI are already marketing ‘slower’ AI that prioritizes depth over speed.

    Ultimately, this moment reminds me of the early web’s pivotal choice between open protocols and walled gardens. The AI we’re building today will shape human cognition for decades. Will we prioritize tools that help us think deeper, or ones that simply help us ship faster? The answer might determine whether AI becomes humanity’s greatest collaborator — or just another app we eventually delete.

    As I write this, OpenAI’s valuation reportedly approaches $90 billion. But that Reddit thread with 50 upvotes? That’s the real leading indicator. Because in technology, revolutions aren’t lost when they fail — they die when they stop mattering to the people who care the most.
