Between Power and Humanity: Towards More Responsible AI
Lukas Hess, Malik El Bay, Jeannie Schneider
07.11.2025
The Swiss AI Convention is approaching. A good moment to share our stance on the topic: What do we want to use AI for, and where do we draw the line?
Intro
Last Monday, the Federal Office of Justice held a working conference on the implementation of the AI Convention. The event brought together representatives of all major political parties, civil society organisations, and private companies such as SBB and Google. (The fact that tech corporations that ought to be regulated are sitting at the table illustrates both their growing influence and the seriousness of the consultation process.) In preparation for our participation, we discussed our fundamental positions on artificial intelligence within the Dezentrum association and team.
There is an almost endless range of opinions about AI. Depending on one’s research field or industry, different aspects are emphasised. That is why we focus on our mission: Dezentrum advocates for a digital transformation that places society at its centre. Our concern is the wellbeing of people, regardless of the country they live in. Through applied projects and theoretical impulses, we create spaces for dialogue that explore how we, as a society, should engage with technology. Here, we present three points that stood out to us after the discussions within the association and during the working conference.
The Question No One Is Asking: What Do We Want to Use AI For – and What Not?
A new iPhone every year, a new smartwatch, more features, more data, more power. Companies push innovation forward at breakneck speed, and every product, every extra pixel of resolution is marketed as a revolution. This logic implies that every technological “advancement” is socially desirable and benefits us all.
In reality, however, innovation rarely follows the common good. It is driven by economic interests – and only a few benefit from it. This makes one thing clear: technology is not neutral. It is developed by specific people with specific values and target groups in mind, most often white, affluent men from the Global North. That shapes the outcomes. Facial recognition technologies, for instance, are known to perform worse on darker skin tones (link).
Another example is Mark Zuckerberg presenting AI companions as a solution to widespread loneliness among Americans. It is an application that fills his pockets but is likely to deepen the very isolation and social fragmentation it claims to address (link).
So, when it comes to AI, we should not constantly ask what it can do, but rather what we want, and do not want, to use it for. ChatGPT for proofreading or Claude as coding support? Why not? Automated decision-making in the asylum system? Highly problematic. AI companions to combat loneliness in care homes? Difficult. Under very strict conditions, such use might be considered.
These questions are not technical, but political. They are about values, responsibility, and power. At Dezentrum, we can only discuss them, not answer them definitively, because ultimately they concern the kind of society we want to live in together. Or, in Dezentrum’s words: they concern which futures are desirable and which are not. And this discussion is currently not taking place.
How Do We Prevent Inequality?
The internet is a driver of growing global inequality (link). At Dezentrum, we want to fight this inequality. We stand for equal opportunities and against discrimination and exploitation. Unfortunately, there are more and more examples of digital technologies being misused or deployed against people's interests. Recently, Tagesschau reported on surveillance software from the US company Palantir, which is used in several German federal states and massively infringes on fundamental rights (link). The same issues can be observed in the platform economy: Uber drivers in Zurich, for instance, often work more than 11 hours a day and earn less than 4,000 francs, according to Watson (link). On platforms such as X, hate speech has increased significantly, according to a study by the University of California, Berkeley (link). The list is long. And AI belongs on this list as well, as documented in an Associated Press investigation into Israel's use of AI systems in committing war crimes in Gaza (link).
1. Recognising inequality
Artificial intelligence can amplify existing inequalities if we do not question it critically. It can discriminate against and disadvantage people because any system trained on historical data reproduces historical discrimination, often invisibly, yet with very real consequences. Studies, for example, show that AI-based hiring software can disadvantage women and people with a migration background because it relies on biased datasets (link). Similarly, automated credit scoring systems tend to rate individuals from low-income or certain regional backgrounds more poorly, not because of their behaviour, but due to historical prejudices embedded in the data (link).
This means: anyone talking about AI should first talk about social structures. Who benefits? Who loses? Who gets to decide? Discrimination through AI is real and happening today.
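How such discrimination is detected in practice can be illustrated with a few lines of code. The sketch below is our own illustrative example, not taken from the studies linked above: it computes selection rates per group for a fictitious hiring model and applies the "four-fifths" rule of thumb, under which a selection-rate ratio below 0.8 is commonly treated as a red flag for disparate impact.

```python
# Illustrative sketch: measuring disparate impact in hiring decisions.
# The data and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not figures from the studies cited in the text.

def selection_rate(decisions, group):
    """Share of applicants in `group` who received a positive decision."""
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, protected, reference):
    """Ratio of selection rates; values below 0.8 are a common red flag."""
    return selection_rate(decisions, protected) / selection_rate(decisions, reference)

# (group, hired?) pairs produced by a fictitious screening model
decisions = ([("A", 1)] * 60 + [("A", 0)] * 40 +
             [("B", 1)] * 30 + [("B", 0)] * 70)

ratio = disparate_impact(decisions, protected="B", reference="A")
print(f"selection-rate ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
```

Even a check this simple makes the pattern visible; the hard part, as argued above, is not the arithmetic but deciding who audits such systems and what happens when they fail the test.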
Instead of addressing these very real impacts, AI leaders often prefer to talk about how AI may soon become more intelligent than humans and what that would mean (link). These narratives are not only a distraction from current problems but, above all, very effective marketing.
2. Preventing inequality
Once these problems are recognised, the crucial question arises: what can we do about them? There are certainly ways to make AI fairer and prevent discrimination; they just need to be implemented consistently.
Effective approaches might include mandatory risk assessments before deploying AI systems, similar to practices in medicine or construction, as well as thorough testing in sensitive areas, even if this slows development. Furthermore, registries of automated systems could disclose where and how AI-driven decisions are made and what risks they entail. Greater transparency around datasets could make biases visible at an early stage, and protecting data within its specific context of use (also known as “contextual privacy”) could reduce risks by limiting how strongly data can be linked. Enforcement is also lacking: data protection, equal treatment, and transparency laws must be applied consistently to have real effect.
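To make the registry idea concrete, the sketch below shows what a single entry in a public register of automated systems might record. The fields and the example system are our own assumptions, loosely inspired by existing municipal algorithm registers, not a standard.

```python
# Illustrative sketch of a public-register entry for an automated
# decision system; field names and the example are hypothetical.

from dataclasses import dataclass, field, asdict

@dataclass
class RegistryEntry:
    system_name: str
    operator: str                  # body deploying the system
    purpose: str                   # which decisions it supports
    decision_role: str             # "fully automated" or "human in the loop"
    training_data: str             # provenance of the datasets used
    known_risks: list = field(default_factory=list)

entry = RegistryEntry(
    system_name="CreditScore-X",   # fictitious example system
    operator="Example Bank",
    purpose="Pre-screening of consumer loan applications",
    decision_role="human in the loop",
    training_data="Historical loan decisions, 2015-2023",
    known_risks=["regional bias inherited from historical data"],
)

print(asdict(entry)["decision_role"])
```

The point of such a record is not technical sophistication but disclosure: once the purpose, data provenance, and known risks are written down and public, the transparency and enforcement measures described above have something to bite on.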
Unfortunately, these very measures are being obstructed by influential industry actors, because while they create greater transparency and accountability, they also slow down development and can generate short-term costs.
How Much Power Is Too Much Power?
The final point we wish to highlight is the growing concentration of power among tech corporations, whose influence now extends far beyond the technology sector. The current AI race is dominated by a handful of US technology giants (link). Even the few alternatives, such as Mistral or DeepSeek, follow the same logic of global competition: here, "open source" often means "partly open", as long as it is economically advantageous. Genuine alternatives such as Apertus, the ETH's AI project, or other publicly funded models do exist, but hold barely any market share because they must assert themselves against the dominance of capital and the attention economy.
This dynamic follows a familiar pattern: platforms create network effects that lead to "winner-takes-most" structures. Social media has already shown how rapidly this can translate into political influence, polarisation, and manipulation. Cambridge Analytica was only the beginning (link). Even today, algorithms steer visibility, attention, and reach, and with them public opinion.
In AI systems, this power becomes even more concentrated: when the same corporations that control search engines, cloud services, and social networks also operate the infrastructure for generative AI, the opportunities for influence multiply.
This “algorithmic curation” is barely regulated yet highly effective (link). When certain voices, for instance from the Global South or from marginalised groups, are systematically underrepresented (link), inequality becomes technologically reproduced (link). Power shifts quietly from democratically legitimised institutions to proprietary systems whose inner workings remain secret (link). And this is happening at a time when an increasing number of Big Tech leaders are aligning themselves with authoritarian powers (link).
This dependency now reaches far beyond opinion-shaping: as a recent analysis shows, major tech companies are increasingly intertwined with security-relevant sectors, for example, with the military infrastructure of the United States. This close entanglement of private technological development and state power pushes the boundaries of what can still be democratically controlled even further (link).
This is precisely why new approaches are needed: technology companies that do not follow the Silicon Valley model, but instead prioritise privacy, transparency, and participation.
The aim is not to build new tech giants, but to foster independent, open, and democratic systems. Open-source and decentralised approaches offer a chance to strengthen technological sovereignty, particularly in Switzerland and Europe.
Or, as someone in our association chat put it: “Europeans want stronger privacy and are willing to pay for it.”