Artificial Intelligence: Downsides to the Government

From the government's point of view, the downsides of artificial intelligence (AI) in US government organizations include vulnerability to security breaches, AI's self-reinforcing biases, officials' lack of competence in AI, conflicts of interest between government organizations and AI software manufacturers, and AI making it easier for individuals to lie.

GOVERNMENT ORGANIZATIONS' VULNERABILITY TO ATTACK

The use of AI in US government organizations at the federal level faces security problems. Many US government institutions have suffered security breaches at some point, owing to poorly secured data that leaves the organizations open to attack. It has become increasingly difficult for government organizations to protect algorithms from cyber attackers, and harder still to identify if and when attackers have tampered with an algorithm, since algorithms are usually black boxes.
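One basic defense against tampering with a stored model can be sketched as a file-integrity check. This is a minimal, hypothetical example (the file contents below are illustrative, not from any real deployment): a digest catches byte-level changes to a deployed artifact, but it cannot catch behavioral tampering such as poisoned training data, which is part of why black-box algorithms remain hard to audit.

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: record a baseline digest for a stand-in "model file",
# then detect a byte-level modification.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"model-weights-v1")  # illustrative stand-in contents
    path = f.name

baseline = file_sha256(path)

with open(path, "ab") as f:       # simulate an attacker's change
    f.write(b"tampered")

tampered = baseline != file_sha256(path)
os.remove(path)
print("tampering detected:", tampered)  # prints: tampering detected: True
```

Note the limitation: if attackers corrupt the training data instead of the deployed file, the digest still matches and the check passes.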
The alleged hacking of the Democratic National Committee by Russia, which reportedly assisted Trump's presidential bid, is one example. Another is the theft of sensitive data on 21.5 million government employees and applicants after hackers accessed the U.S. Office of Personnel Management. Moreover, laws that would transfer judicial and governmental powers to artificial intelligence machines raise the risk of relying on software infiltrated with bugs.

SELF-REINFORCING BIAS

What AI does best is analyze large amounts of data in real time to identify what is going on and draw conclusions about trends. The main problem is that AI is prone to self-reinforcing bias: it assumes that the future will repeat the past. Crime-prediction systems, for example, single out lower-income minority and black neighborhoods for rigorous police patrols, on the grounds that such areas are more likely than prosperous white ones to be plagued by anti-social behavior and crime, thereby reinforcing existing police biases.
AI has been used in predictive policing at the local level in the US since the early 2010s, and has repeatedly faced accusations of being faulty and predisposed to deep-rooted racial biases. Self-reinforcing bias may be to blame in such cases, and questions have been raised about how transparent the AI algorithms are.
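The feedback loop behind self-reinforcing bias can be illustrated with a toy simulation. The neighborhoods, numbers, and patrol rule below are illustrative assumptions, not data from any real system: both areas have the same underlying crime rate, but a small bias in the historical record steers patrols, and patrolling is what generates new records.

```python
# Toy simulation of a self-reinforcing policing feedback loop.
# Both neighborhoods have the SAME true crime rate; the only
# difference is a small bias in the historical record (assumed).
recorded = {"A": 60, "B": 40}   # historical recorded incidents
true_rate = 0.1                 # identical underlying crime rate
patrols_per_day = 100

for day in range(100):
    # Patrols are sent where recorded crime is highest...
    target = max(recorded, key=recorded.get)
    # ...and patrolling an area is what produces new records,
    # so the initially favored area keeps "confirming" the data.
    recorded[target] += patrols_per_day * true_rate

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"A's share of recorded crime: {share_a:.0%}")
# prints: A's share of recorded crime: 96%
```

A modest 60/40 starting imbalance ends with neighborhood A holding 96% of all recorded crime, even though both areas are identical by construction, which is the self-reinforcement the predictive-policing critics describe.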

GOVERNMENT ORGANIZATIONS' INCOMPETENCE IN AI

Another downside of artificial intelligence in US government organizations is a lack of digital competence. Many people in government organizations have little experience with the power and limits of AI technology, and their failure to grasp AI fundamentals prevents them from adequately regulating the companies that build AI software.
Some officials in US government organizations do not understand how AI functions in areas such as backdoors and their relationship to encryption, making it unlikely that they can produce adequate regulations, such as algorithmic ones, for the industry.

CONFLICT OF INTEREST

There is a conflict of interest between government organizations and AI software manufacturers in the US. These organizations must regulate and police the AI industry using software systems they did not develop themselves, but which are often built by private multinational tech corporations. Questions arise about such partnerships and about how transparent the algorithms sold to government organizations really are.
Private multinational tech corporations have been accused of advocating free and open data when the data comes from the government, while strongly opposing the same concept for their own data, which they want to keep proprietary.

EASIER TO LIE: AI ADOPTION IN MILITARY OPERATIONS

The Department of Defense has adopted AI in military operations. Although AI is said to expand the sources available for discerning truth in intelligence operations, it has also been known to make it easier for people to lie persuasively. AI has been used to enhance forged video and audio, calling into question what is authentic and can be trusted across many government organizations, including the military.
The military's adoption of AI also poses a verification problem: the current validation and verification process was developed for frozen software and is not suited to AI systems that learn. In addition, corrupted data, possibly introduced by adversaries, could have grave consequences in military operations. It is also difficult to trust a military AI system that cannot be easily understood.
