Politics and the Black Box of AI
Do We Need a Digital Social Contract?
The following essay sheds light on a challenging relationship. Although we are surrounded by digital technologies, public discourse often deals with marginal issues and, depending on taste, with utopias or dystopias. A structured and self-reflective implementation of the digital transformation is not on the horizon. Why do we, as a society, not use this power to shape our future?
In this context there is no need to restate that there is no universal definition of AI. It will become increasingly unimportant whether something is merely “digitized” or already full-blown AI; both are mutually dependent, and in real systems we typically find a mix of technologies anyway. I always like to present the example of automotive navigation systems, as they contain conventional software as well as AI components from various decades: heuristic search for the shortest route (1960s), voice dialogue systems (1990s), and data-based traffic jam forecasting (2000s).
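The first of these components, heuristic shortest-route search, can be sketched in a few lines. The following is a minimal illustration using Dijkstra's algorithm on an invented toy road network; the city names and distances are assumptions for the example, not data from any real navigation system.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: always expand the cheapest frontier node first."""
    frontier = [(0, start, [start])]  # (cost so far, node, route taken)
    visited = set()
    while frontier:
        cost, node, route = heapq.heappop(frontier)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, distance in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + distance, neighbor, route + [neighbor]))
    return None

# Toy road network (invented): city -> [(neighbor, distance in km)]
roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(shortest_route(roads, "A", "D"))  # -> (6, ['A', 'C', 'B', 'D'])
```

The detour via C is cheaper than the direct leg from A to B, which is exactly the kind of non-obvious result the search surfaces. Production systems refine this basic idea, for instance with A*-style distance heuristics and live traffic data.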
However, the effects on us and the world are more interesting than the technology itself. If we assume that a large proportion of the 60 million cars in Germany already have a GPS system or are navigated by a smartphone app, we have the means to influence energy consumption, traffic in city centers, residential areas, bypasses, conformity with speed limits, etc. However, it is not clear who could actually take responsibility for this task: town planners, cities and municipalities, citizens, or technology providers? Probably all of these and others should be involved. One could argue, though, that a central control system which recommends where cars should go is not wanted and that the decision should be left to coincidence (which is demanding for computer systems). Since users always read the general terms and conditions, inform themselves about cookies, etc., we could also ask them to check the route suggested by their GPS system. When country roads in Austria were completely overloaded during the holiday season a few years ago, bypassing toll roads was prohibited. By then it was too late. As mobility as a whole will have to change, one can argue that individual transport will decrease anyway, which in turn reminds us that we have to address the topic of traffic control head-on.
Now, let’s consider a second example: our supply of news and information, another case chosen at random. In many industries around media monitoring and strategic communication, AI has increasingly been used to be faster, more up-to-date, and also cheaper. Still, only partial automation is used, and the “human in the loop” is responsible for the end results. However, it is much too often neither specified in advance what the AI system is supposed to do nor evaluated whether the entire socio-technical system has the desired properties in the end. But that is a story in its own right.
And how is the general population supplied with news? There are about as many smartphones as cars in Germany. Every day, my smartphone suggests an eclectic mix from familiar and less familiar news sources that inform me about climate change, sports results, accidents, and other events. When reading the headlines of news items, I do not know whether this selection was organized by humans, whether the recommendations are personalized, or how this specific list of news items came about. AI is certainly involved somewhere. Isn’t it strange that we do not question the source of that specific chunk of world news, which surely manipulates us like any other composition of news? I don’t want to imply that there are evil forces at play, and I have the freedom to ignore the news. But I would question any person presenting me with a selection of news without further explanation (unless they are the editor-in-chief of my daily newspaper). What institution would you permit to pick such a selection, or even to decide on the selection criteria? Such questions often make me think of the German Television Council, which successfully oversees public broadcasting in Germany, a service that de facto tends to target the older part of our population. What about the rest? I once called for a “digital social contract”. At the very least, we should discuss how we want to handle the powers that digitalization and AI offer and, more generally, what kind of technological support we want to live with.
At this point I would like to make a hard turn towards AI. My discussion of the two examples was somewhat suggestive, implying that AI systems could easily be socially corrected with the right normative input. This assumption is presumably connected to the ongoing calls for an algorithm MOT (a regular technical inspection for algorithms) or to the claim that if developer teams were more diverse, the technology would immediately become fairer and better. However, we would face challenges even if these (benevolent) measures were well implemented.
Today we are talking about weak AI, which must be provided with data and the necessary knowledge for every task we want it to perform. The other side of the coin is humanity’s nightmare of Frankenstein’s son: a strong or even a general AI that could be applied to all kinds of problems, that would set its own complex goals and develop a consciousness. We need to accept the fact that AI systems typically solve tasks differently than humans would – at least differently than we experience it in our introspection – as long as there is no strong AI (whose possibility is disputed among experts). It’s not big news that technology works differently from its human or natural equivalent. Turbines work differently from fluttering wings, and in the area of cognitive abilities, too, there are calculators, for example, whose circuit boards work differently from the way our brains work when calculating. With calculators, however, an externalized level of abstract calculation rules still creates a common understanding of the matter. This is often different with today’s AI systems.
For instance, in the 1970s and 1980s there was a generation of rule-based translation systems that subjected sentences to grammatical analysis and used vocabulary lists and so-called transfer rules to generate the possibly different grammatical structures of the target language. Thus, by swapping nouns and adjectives, red car became voiture rouge in French. This approach was still similar to the way we approach foreign languages in school. However, since our languages and our communication are very flexible and difficult to put into formal rules, these systems never scaled up. Today’s high-performance translation systems, like DeepL or Google Translate, work through machine learning. They learn entirely on their own to turn input sentences into output sentences by feeding on bilingual or multilingual texts (previously translated by humans at some point). Hence, translation is modelled as purely statistical symbol manipulation, and this approach works very well, at least within certain limits.
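The red car example above can be made concrete. The following toy sketch shows the spirit of those rule-based transfer systems: a vocabulary lookup plus a single transfer rule that swaps the noun-adjective order for French. The lexicon entries and the rule are deliberately simplistic illustrations, not part of any historical system.

```python
# Toy rule-based transfer translation (invented lexicon, one rule).
LEXICON = {"red": "rouge", "car": "voiture", "small": "petite", "house": "maison"}
ADJECTIVES = {"red", "small"}

def translate_noun_phrase(english):
    """Look up each word, then apply the transfer rule ADJ NOUN -> NOUN ADJ."""
    words = english.split()
    french = [LEXICON[w] for w in words]
    if len(words) == 2 and words[0] in ADJECTIVES:
        french.reverse()  # French places the adjective after the noun here
    return " ".join(french)

print(translate_noun_phrase("red car"))  # -> "voiture rouge"
```

One can already sense why this approach never scaled: every word needs a lexicon entry, every grammatical phenomenon its own hand-written rule, and real language violates such rules constantly (many French adjectives, for instance, precede the noun).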
In language technology (my field of expertise), it is almost a rule that systems that are given a lot of freedom in processing their task (and a lot of suitable training data) deliver the best quality. Nowadays, these are typically (artificial) neural networks trained with deep learning. Hence, they have the often-mentioned property of being a black box – not mathematically, as they are even quite simple in that respect, but in their solution to the task, in this example the translation of texts. It is difficult to determine how exactly they solve their task, and whether and to what extent they have an “understanding” of words, word classes, things, measures, concepts, etc.
Perhaps it helps to think of such a working model as a guide dog that reacts correctly to a red light because we have trained it that way. How much it understands about the world and the reasons behind it remains hidden from us because it cannot talk. Similarly, a navigation system can tell us little about traffic planning, and a news recommendation system cannot reveal much about journalistic principles. The upside is that many systems are practically given to us as a gift because the data exists, and we don’t have to think about what knowledge we still need to give to the systems. But one can also build AI systems in a more transparent way, providing more explanations. It’s just more time-consuming, usually limits the applicability and performance, and is often not worth it in the end.
Back on topic. In terms of the labor force, we are at the beginning of a demographic downward spiral in our country, which offers the opportunity to design technology that can help us keep things running. However, this technology in turn ideally works with certain degrees of freedom, and therefore we need to keep a watchful eye on it. If we get the human-machine interaction right and the socio-technical systems function well, technology can make processes faster, fairer, and more transparent. But it cannot “guess” our normative ideas in the process; instead, it sticks descriptively to whatever patterns are in our data, independently of whether we approve of those patterns or not.
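This descriptive character can be shown with a deliberately tiny sketch. A purely data-driven system has no normative ideas of its own; it reproduces whatever pattern is in its training data. The groups, decisions, and counts below are entirely invented: historical decisions happened to favor group "x", so the learned rule does too, whether or not we approve of that.

```python
from collections import Counter

# Invented "historical" decisions: (group, decision) pairs.
history = ([("x", "approve")] * 80 + [("x", "reject")] * 20 +
           [("y", "approve")] * 40 + [("y", "reject")] * 60)

def majority_rule(data):
    """Learn, per group, whatever decision was most frequent in the data."""
    counts = {}
    for group, decision in data:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

print(majority_rule(history))  # -> {'x': 'approve', 'y': 'reject'}
```

Nothing in the code is malicious; the disparity between the groups comes entirely from the data. Correcting it is a normative decision that humans have to supply from outside the system.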
And it is at this point that we all need to work together. The normative ideas and the experience with processes, customers, political framework conditions, etc. cannot be contributed by technology providers; they must come from domain experts: transport planners, policy advisors, journalists – in discourse with representatives of civil society and within a political framework. This framework could, for example, stipulate that navigation systems and news portals must document their technology and provide interfaces so that their functionality can be checked by the aforementioned stakeholders, or that certain actors must document their development process (specification, evaluation, roll-out). I am not sure whether the rather horizontal and risk-based approach currently being discussed at EU level will support that.
Furthermore, we have to decide for which processes we want to use technology. Which tasks are repetitive, expensive, or time-consuming? Where are customers, patients, and clients not served individually and quickly enough? Which tasks do we absolutely want to keep in the human domain? We should answer these and many other questions now, as participatively, inclusively, interdisciplinarily, and cross-generationally as possible. We must make digitalization the project for our future and, in doing so, also take issues such as sustainability and Europe’s technological independence into consideration.
Note: This text was machine translated from German with only slight manual post-editing.