
But for those who work with data broadly - not just the digitised kind - it is embedded in the job description. Data is a system of concretised ethics enforced by the employment relation in the first instance, and by horizons of credibility: social conventions based on the affirmation and demonstration of evidence. As an academic, I find the concern for data determines the character of the presentation of research and the marketing and student experience efforts that cover my programmes, and it must be invoked to justify changes to course content and assessment, the launch of new degrees, and the business case for taking on new staff. This is the lot of other education workers, particularly teachers with their vast trail of paperwork cataloguing their practice as educators, and also of civil servants, mid-level managers across the public and private sectors, buyers in retail - every job that requires making decisions about the delivery of a service or the provision of a product.
And it directly governs. Since the emergence of modern wage labour in the English countryside, targets have been one tool among many used by employers to individuate and atomise workforces to enable their management. Today, when immaterial labour predominates and is simultaneously being attacked and deskilled, targets as a method have proliferated into a matrix of performance indicators for managing post-industrial labour. Workers, from the lowest paid to the relatively affluent, are subjected (and subjectivated) by streams of algorithms and targets that measure aptitudes, define productivity, and determine the character of one's tasks. They shove (rather than nudge) labour along prescriptive circuits. The data points workers are judged by are often conjoined with factors outside their control - such as market conditions - and they are held to account against them nonetheless.
Data as employed in capitalist societies is not just a practice/technology for managing workers, it has the characteristics of a moral code. Or, more precisely, of a bourgeois morality. Like religion, the moralities of law and order, and the explicit statements of bootstraps neoliberalism, its subjects are the objects of exploitation. Bourgeois morality marks the bourgeoisie's innumerable, multitudinous others, and works to govern them. The bourgeoisie themselves are, by and large, left unmarked. The neutral term. The starting point. The natural way of the world. Therefore, moralities spiritual and secular do not apply to them. The same is true of data.
How many CEOs "abbreviate" the evidence-based decision-making they enforce on their employees? How many politicians are impervious to the data-heavy briefings handed to them by civil servants when they push particular policies? The further up the hierarchy one travels, the more episodic the employment of data becomes, and the clearer its status as a tool of governance. These exalted levels operate with different sets of priorities: not what data tells them about which decisions would bring the greatest benefit to the greatest number, nor even the narrower concern of using data to prioritise economic growth. No, data plays second fiddle to the politics of class maintenance. The successful exercise and preservation of class privilege and class power requires flexibility and opportunist nous. The realpolitik of our rulers depends, in the first place, on instinct and the feels, and on the enforcement of decisions that reiterate their power-over. In these mundane circumstances of capitalism's everyday, data that shows they're wrong is simply noise.
1 comment:
Haven't you just reinvented "policy-based evidence making"?
Quelle surprise: in the hands of dishonest and perversely-motivated humans, data that may have been initially neutral (if the experts who worked hard to make it so were successful) becomes twisted to serve embedded social biases again. In the best case, well-meaning experts rigorously cleanse the egregious distortions (and overt supply chain attacks!) from the collection process, so that untrustworthy people in positions of responsibility can then stamp their own preferred biases upon it in service of their agenda - at the very least by cherry-picking which convenient data they heed and which inconvenient data they ignore.
Or at least, so it is for as long as data is delivered to humans in order to act upon it. Humans are already disappearing from that position. The proper position of humans - the one which they cannot be entirely removed from, on this side of AGI, lest "model collapse" occur - is closer to the experts who filter and grade the data.
Many cases, such as corporate target setting in laissez-faire environments, are nowhere near that best case to begin with and have abusive bad-faith agendas built into them at every level. A good example is the ubiquitous rating system for service industry workers in the US. These need simply to be exposed, discredited, and destroyed.