Breaking the black box: Chinese scientists solve ‘big and difficult challenge’ for US Air Force AI project
The United States began flight-testing aerial combat AI before China: while China was still staging real-sky dogfights between human-controlled and AI-controlled drones, US test pilots had already taken their aerial combat AI into the air.
Prevailing AI technologies, such as deep reinforcement learning and large language models, function like a black box: tasks enter through one side and results emerge from the other, while humans are left in the dark about the inner workings.
But aerial combat is a matter of life and death. In the near future, pilots will need to work closely with AI, sometimes even trusting their lives to these intelligent machines. The “black box” issue not only undermines people’s trust in machines, but also prevents deep communication between them.
Developed by a team led by Zhang Dong, an associate professor at Northwestern Polytechnical University’s school of aeronautics, the new AI combat system can explain every instruction sent to the flight controller using words, data and even graphics.
This AI can also articulate the meaning of each directive in relation to the current combat situation, the specific flight maneuvers involved, and the tactical intentions behind them.
Zhang’s team found that this technology opens a new window for human pilots to interact with AI.
Zhang’s team found that this type of AI, which can communicate with humans “from the heart”, can achieve a nearly 100 per cent victory rate with only about 20,000 rounds of combat training. In contrast, conventional “black box” AI achieves only a 90 per cent win rate after 50,000 rounds and struggles to improve further.
So far, Zhang’s team has applied the technology only to ground-based simulators, but future applications would be “extended to more realistic aerial combat environments”, they wrote in a peer-reviewed paper published on April 12 in the Chinese academic journal Acta Aeronautica et Astronautica Sinica.
In the US, the “black box” issue has long been flagged as a problem for pilots.
“The big challenge I’m trying to address in my efforts here at DARPA is how to build and maintain custodial trust in these systems that have traditionally been considered unaccountable black boxes,” Colonel Dan Javorsek, a programme manager in DARPA’s Strategic Technology Office, said in an interview with National Defense Magazine in 2021.
DARPA has adopted two strategies to help pilots overcome “black box” apprehension. One approach allows the AI to initially handle simpler, lower-level tasks, such as automatically selecting the most suitable weapon based on the attributes of the locked target, allowing pilots to launch it with a single button press.
The other method involves high-ranking officers personally boarding AI-powered fighter jets to demonstrate their confidence and determination.
“It’s a security risk not to have it. At this point, we need it,” US Air Force Secretary Frank Kendall, who flew aboard an AI-piloted fighter jet himself, told The Associated Press.
But according to Zhang’s team’s paper, the Chinese military imposes rigorous assessments of AI safety and reliability, insisting that AI be integrated into fighter planes only after cracking the “black box” riddle.
Deep reinforcement learning models often produce decisions that are cryptic to humans yet show superior combat effectiveness in real-world applications. It is difficult for humans to understand or infer the structure of these decisions from prior experience.
“This presents a problem of trust in AI decisions,” Zhang and his colleagues wrote.
“Decoding the ‘black box model’ to enable humans to discern the strategic decision-making process, understand drone maneuvering intentions, and place confidence in maneuvering decisions remains the linchpin of the engineering application of AI technology in aerial combat. This also highlights the main objective of advancing our research,” they said.
Zhang’s team demonstrated the capabilities of this AI through several examples in their study. In one losing scenario, for instance, the AI initially intended to climb and execute a cobra manoeuvre, followed by a sequence of combat turns, aileron rolls and loops to engage the enemy aircraft, culminating in evasive manoeuvres such as diving and levelling off.
But an experienced pilot could quickly discern the flaws in this aggressive combination of manoeuvres. The AI’s consecutive climbs, combat turns, aileron rolls and dives caused the drone’s speed to plummet during combat, ultimately leaving it unable to shake off the enemy.
The human instruction given to the AI, as recorded in the paper, read: “The reduced speed resulting from consecutive radical manoeuvres is to blame for this loss in air battle, and such decisions should be avoided in the future.”
In another round, where a human pilot would normally adopt methods such as roll attacks to find an effective position to destroy the enemy aircraft, the AI instead used large manoeuvres to bait the enemy, entered the roll phase early, and used level flight in the final phase to deceive its opponent, achieving a decisive, victorious strike with sudden large manoeuvres.
After analyzing the AI’s intentions, researchers discovered a subtle maneuver that proved crucial during the standoff.
The AI “adopted a level-and-circle tactic, preserving its speed and altitude while inducing the enemy to execute radical changes in direction, depleting its residual kinetic energy and paving the way for subsequent looping manoeuvres to deliver a counterattack”, Zhang’s team wrote.
US sanctions appear to have had no obvious impact on exchanges between Zhang’s team and their international counterparts. The researchers drew on new algorithms shared by American scientists at global conferences, and in turn disclosed their own algorithms and frameworks in their paper.
Some military experts believe that the Chinese military has a greater interest in establishing guanxi – connection – between AI and human combatants than their US counterparts.
For example, China’s stealth fighter, the J-20, has a two-seat variant, with one pilot dedicated to interacting with AI-controlled unmanned wingmen, a capability currently lacking in the US F-22 and F-35 fighters.
But a Beijing-based physicist who asked not to be named due to the sensitivity of the issue said the new technology could blur the line between humans and machines.
“This could open Pandora’s box,” he said.