Commentary - (2022) Volume 10, Issue 9

AI Algorithms in the Public Sector and Their Effects on Decision Making
Charles Hugo*
 
Department of Public Administration, Tsinghua University, Beijing, China
 
*Correspondence: Charles Hugo, Department of Public Administration, Tsinghua University, Beijing, China

Received: 02-Sep-2022, Manuscript No. RPAM-22-18307; Editor assigned: 05-Sep-2022, Pre QC No. RPAM-22-18307 (PQ); Reviewed: 19-Sep-2022, QC No. RPAM-22-18307; Revised: 26-Sep-2022, Manuscript No. RPAM-22-18307 (R); Published: 03-Oct-2022, DOI: 10.35248/2315-7844.22.10.362

Description

In the public sector, across all spheres of government, Artificial Intelligence (AI) algorithms are being widely used. AI algorithms are deployed in fields as diverse as policing, welfare, criminal justice, healthcare, immigration, and education, and they are increasingly permeating non-routine and high-stakes aspects of bureaucratic work. They are, in essence, a set of tools that demonstrate human-level performance on tasks traditionally associated with human intelligence. The public sector's growing and deepening reliance on AI and machine-learning technology has been described as "transformative" for public administrations.

These advances are spurred by the prospect of cost-effective, efficient, and successful policy solutions. Algorithms are also said to carry the "promise of impartiality," in contrast to decision-making based on human intuition, which contains biases and may lead to prejudice. Using AI in decision-making is thus believed to have the potential to help us overcome our cognitive biases and limitations. Similar justifications prompted the adoption of predictive risk assessment systems in criminal justice, partly in reaction to concerns about human prejudice and discrimination, even though these systems have themselves been identified as sources of bias.

In many administrative settings, AI algorithms are presently used to support human decision-makers. This is especially true in high-stakes areas of the public sector, where full automation appears unsuitable or far off. Algorithmic outputs, such as the risk assessment scores used in criminal justice or the algorithm-generated heat maps of predictive policing, assist human decision-making rather than rendering judgments on their own. As a result, algorithmic decision-making emerges at the interplay of the two and does not eliminate the role of the human decision-maker.

The use of AI algorithmic technology in the public sector has raised major concerns. Chief among them are the well-documented tendency of algorithms to learn systemic prejudice, through, among other things, their reliance on historical data, and to perpetuate it, essentially automating inequality, and the potential for bias arising from the human processing of algorithmic outputs. These concerns also extend to algorithmic accountability and the monitoring of algorithmic outputs. Understanding the effects of these technologies on decision-making in the public sector, and the cognitive biases they may involve, therefore becomes crucial. This is all the more relevant given the emergence of algorithmic governance, in which human decision-makers are seen as crucial safeguards, serving as decisional mediators on issues of algorithmic bias. In an increasingly computerized administrative state, determining the extent to which our cognitive limitations allow us to serve as effective decisional mediators is essential.

Theorizing on the basis of two streams of work from other fields that have not yet spoken to each other on this subject, we concentrate on two distinct biases in this essay. The first, automation bias, builds on earlier findings in social psychology. It refers to a well-known human tendency to instinctively defer to automated systems despite warning signs or contradictory information from other sources; in other words, human actors have been found to unquestioningly delegate their decision-making to machines. Although robust, these findings concern forerunners of AI algorithms, such as pilot navigation systems, and contexts beyond the public sector. The second bias that we theorize and test derives from previous work on biased information processing in public administration and concerns decision-makers' selective adherence to algorithmic advice: the predisposition to follow algorithmic guidance selectively when it matches pre-existing stereotypes about decision subjects. With respect to algorithmic sources, this bias has not yet been investigated in our field.

Our focus is on biases in human processing that arise from the application of AI algorithms in the public sector. We concentrate on the public sector because the stakes are higher there, even though we would expect such biases to be equally relevant for algorithmic decision-making in the private sector. These issues are particularly serious in the public sector context because AI algorithms are increasingly used in high-stakes situations where they have significant consequences for people's lives. Automation bias is the tendency of decision-makers to ignore contradicting cues from other sources and to reflexively default to the algorithm, potentially following flawed algorithmic advice. A second bias, which we extrapolate from the public administration literature, concerns decision-makers' propensity to adhere to the algorithm selectively when algorithmic forecasts match pre-existing stereotypes. The use of algorithms might then disproportionately harm stereotyped groups, compounding discrimination and potentially deepening administrative burdens.

Citation: Hugo C (2022) AI Algorithms in the Public Sector and Their Effects on Decision Making. Review Pub Administration Manag. 10:362

Copyright: © 2022 Hugo C. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.