
The Pentagon has confirmed that artificial intelligence-based voice agents are already being used to guide interrogations around the world, a trend that could radically transform security assessment methods. U.S. officials told Defense News that these advances are directly influencing the design and testing of virtual interrogators intended to evaluate individuals seeking access to classified material.
Meanwhile, there are growing concerns about the lack of clear regulations to prevent the abusive use of these technologies, in what some experts are already calling unmarked cyber torture.
Privacy lawyer Amanda McAllister Novak, a member of World Without Genocide, warns that these practices can have devastating consequences. She recalled recent cases of teenagers who, after conversations with self-learning chatbots, suffered mental anguish and even died by suicide.
"We have to ask ourselves whether it is really worth implementing AI interrogation systems in military contexts," Novak said, warning that this technology could be used to manipulate or psychologically harm individuals.
The U.S. Defense Counterintelligence and Security Agency (DCSA) justifies the use of these systems on the grounds that they reduce the gender and cultural biases present in human interrogators.
"We must understand the capabilities of these systems, as they are already in use around the world, including by law enforcement in the U.S.," said Royal Reff, DCSA spokesperson.
However, Novak warned that even if monitoring protocols are implemented, there is always a risk that these systems could be hacked or used by malicious actors to inflict psychological harm on both military personnel and civilians.
As investments in data centers and AI technologies exceed $500 billion, the race to lead the artificial intelligence sector continues to accelerate. The conversational AI market is estimated to reach $50 billion by 2030, while the speech generator industry will grow to $40 billion by 2032.
However, these figures contrast with the absence of legal frameworks regulating the psychological impacts of these technologies, especially in such sensitive areas as national security.
Experts such as Renée Cummings, professor of data science and criminal psychologist, warn that current studies do not consider all the psychological variables involved in these types of interactions.
"You can't subject someone to extreme torture just so an avatar or a bot can measure the outcome," Cummings noted, stressing the urgent need to understand the real effects of these algorithms on the human mind.
The risk, according to specialists, lies not only in direct damage, but in the possibility that these systems amplify the biases, hostility and abusive practices present in the data with which they are trained.
Although psychological torture has been recognized under the UN Convention against Torture as of 2020, there is still no international legal framework that contemplates the use of artificial intelligence in these contexts.
"Cyber environments offer almost guaranteed anonymity and total impunity for offenders," warned Nils Melzer, former UN special rapporteur on torture.
Without clear and binding rules, military AI developers could argue that there is no intent to inflict harm, thus exempting themselves from any legal liability.
The question that hangs in the air is clear: are we still in time to establish ethical and legal limits before it is too late?
