When introducing (or allowing the use of) this technology in democratic processes, governments also introduce a degree of uncertainty and vulnerability. Civil society, and particularly individual citizens, take a risk when their interaction with government or their participation in policymaking is mediated by AI. Considering the limited AI awareness and literacy of citizens and policymakers, we question the reflexivity (or consciousness that one is taking a risk) of such risk-taking and risk-making decisions. Deploying AI in democratic processes is not neutral and may have long-lasting negative effects on the trust that citizens place in their governments, on the transparency and accountability of policymaking, and on citizens' capacity to exert meaningful influence over this process. Additional research must be conducted to assess citizens' perception of AI in this context and its impact on trust. It is also imperative to raise awareness of and literacy about AI so that citizens can adapt to, and take advantage of, this new AI-mediated relationship.
The limited transparency and accountability of these actors, and of the tools they sell to some political actors, increase the asymmetry of power in the policymaking process and raise questions about the legitimacy of the process itself. This study confirms previous research, including that of Mayer-Schonberger and Ramge (2018), who contend that power will increasingly be concentrated in the hands of those who have developed the capacity to collect and control valuable data. Wu (2010) and Harari (2018) predict the growth of cartels and monopolies. Without control over data accumulation, users are deprived of some of their agency over personal information, which can then become an open door to unfair data management practices, such as discrimination (Cinnamon, 2017).
This study concurs with these works by arguing that in an AI-mediated citizen-government context, power lies with those who hold the AI and big data capacity. In this context, we need to adopt a human-centered approach to AI that respects human rights and contributes to the resilience of liberal democracy. We argue that we need to go beyond current guidelines and recommendations to (i) implement a moratorium on the use of AI developed and managed by private actors, (ii) develop AI literacy campaigns, and (iii) build pilot projects to assess how AI-mediation impacts citizen participation and trust in liberal democracy.