The Dangers Of Implementing AI In Government Administration

From “technocracy.news”

Technocracy is taking over governments worldwide, with AI replacing the rule of law, even in Russia. In America, Washington, DC has been invaded by arch-Technocrats who are bolting AI into government processes willy-nilly: “Decision-making based on technology (“hyper-technocracy”) pays little attention to people’s values and beliefs. All forms of human experience become merely “behavioral data”, transformed into products for analysis, forecasting, and management.” ⁃ Patrick Wood, Editor.

Over the past year, there have been an increasing number of examples of AI being introduced into public administration. The U.S. government has begun using specialized services such as ChatGPT Gov and xAI for Government; the Russian government has declared AI implementation a priority in its own operations. This trend seems to have already become irreversible. However, technology cannot replace governance itself, as governance is fundamentally built on values and circumstances that are not obvious to “machines”. The consequences of blindly trusting AI may be wrong decisions and a loss of public trust in an administration that fails to consider the interests of its citizens.

A defining feature of political governance is that it resolves social conflicts and contradictions on the basis of values: justice, equality, patriotism, democracy. In situations of resource scarcity, a decision-maker must choose, for example, between profit and environmental preservation, between investment and social welfare, between security and freedom. In historic speeches by politicians, whether Russian, Chinese, or American leaders, we likewise see appeals to the core values that unite their nations.

The choice of values in each case leads to different decisions. For example, for law enforcement, so-called “troubled teens” are problem children who may become criminals and therefore should be “monitored”, while social services strive to socialize these teenagers through mentoring, new social circles, and engaging pastimes.

The development of urban areas is another example. An industrial district near the city center up for renovation could simply be rebuilt as new residential housing, or it could become a recreational spot on the city map. In the first case, the city budget gains revenue; in the second, residents’ comfort increases. Thus, the voter is not just a “parameter in a model”, but a full-fledged participant in the political process, with their own values and objectives potentially differing from those of the state apparatus.

As VCIOM data show, 52% of Russian citizens generally trust AI, while 38% do not. But that is in general; when it comes specifically to the application of AI in public administration, the numbers reverse: 53% view it negatively and only 37% positively. Among other concerns, Russians noted the following potential risks: mistaken decisions (58%) and a lack of accountability for decisions made (57%). Overall, people are willing to see AI as an “assistant”, but not as the final decision-maker.

AI & Government Administration: The Problems

In discussions, four main problems with AI technologies stand out:

  1. Transparency of Use. The political decision-making process is not clear to the “average voter”—both because of process complexity and its lack of openness. This becomes even more of an issue if an AI makes decisions, as its motives are nearly impossible to explain. Even neural network developers do not always know why and based on which parameters an algorithm arrived at a particular conclusion. That is why AI is often referred to as a “black box”: an observer sees only the initial input and the output.
  2. Reinforcement of Bias. AI is trained on human-generated content and inevitably “inherits” stereotypes present in society. This becomes especially evident when comparing models created in different countries and cultural contexts. The biases and unspoken assumptions embedded in each culture can increase existing inequalities and discrimination via AI. One might suggest “re-training” the algorithms, but if policy is value-driven, this problem may be fundamentally irreconcilable within algorithmic frameworks.
  3. The Problem of Accountability and Responsibility. The use of AI could serve as a way to shift responsibility from decision-makers to the “machine.” Positive effects from AI implementation would appear in reports on efficiency, while failures and errors would be blamed on the imperfection of AI models.
  4. Technical Imperfections of AI, for example:
    • Neural network “hallucinations”—that is, factual errors;
    • The problem of reproducibility—one can get two different answers to the same question, asked of the same model at different times;
    • The limited scope of training materials, as the algorithms require vast datasets.

Thus, at the current stage of AI development, one cannot be sure of transparent and impartial decisions, since the algorithm’s behavior is non-obvious. AI has “intelligence” only in its name; its nature is fundamentally different from human intelligence. It should therefore be used only within clearly defined and well-tested “boundaries”.

Limits of Trust

Decision-making based on technology (“hyper-technocracy”) pays little attention to people’s values and beliefs. All forms of human experience become merely “behavioral data”, transformed into products for analysis, forecasting, and management.

The choice to implement AI in public administration is itself a political decision, though it may be justified through the prism of technocratic efficiency. Any form of authority, including bureaucracy, is always politicized—a point made over a century ago by the classics of social sciences, such as Max Weber.

Nevertheless, AI will likely be implemented in government regardless of voters’ approval. The complex nature of the technology and the small number of actors involved—namely, the state and the IT giants who own patents and technology—contribute to the fact that implementation will proceed exclusively “top-down”, though not without challenges.

In the USA, the government’s use of AI to detect potential threats on social networks has already faced criticism. This campaign raised concerns about privacy, especially among immigrants.

As early as 2012 in Poland, an algorithm was developed and implemented to profile the unemployed based on a number of characteristics. Assignment to one of three categories determined whether a person could receive state support and what type of program was offered: employment, job training, or nothing at all. Many unemployed individuals went to administrative courts, claiming that the categorization was unfair and opaque. The result: the mechanism was found unconstitutional.

In the USA and Spain, AI has been used in courts to decide on parole by assessing the likelihood of recidivism. Research has shown that the use of AI only amplified the bias and discrimination that were present in these decisions even before the technology’s introduction.

Uncritical use of AI not only failed to improve state efficiency but also reduced public trust in AI and, consequently, in all government measures focused on AI implementation. Thus, public trust is a prerequisite for integrating AI into public administration. Without it, citizens may respond with increased passive resistance, active protest, and a rising popularity of populist politicians who promise to “save” them from AI.

The Russian Experience

In Russia, AI implementation is part of the digital transformation program that every agency has. To support state bodies, the government announced plans to create a project office for AI implementation, and specialist working groups are being formed to develop legislative frameworks for AI.

So far, there is no unified regulation of AI in Russia—only laws and bylaws covering various sectors. The “Law on Recommender Algorithms”, adopted in 2023, is presumed to be a sufficient legal framework for successful AI deployment. However, during the law’s development the expert community pointed out that usage scenarios for AI are very diverse, and this document does not cover them all. Moreover, civil servants work according to strict regulations and instructions that leave no room for indeterminate scenarios. As a result, simple solutions such as chatbots are being implemented on a massive scale. This uncertainty prevents the state from using AI to its full potential, but for now it also serves to mitigate risks. To overcome these barriers, President Putin issued an order in early 2026 to develop measures to intensify the implementation of AI in public administration.

However, amid the intensification of AI implementation measures, the issue of protecting citizens’ digital rights against known risks has fallen out of focus. As a result, the topic of public interest in the implementation of AI in public administration remains on the periphery of expert discussion and is only occasionally addressed by stakeholders. And not only in Russia.
