
Breaking the black box: Chinese scientists solve ‘big and difficult challenge’ for US Air Force AI project


The United States began testing AI in aerial combat before China: while China was still staging real-world dogfights between human-controlled and AI-controlled drones, US test pilots had already taken their air-combat AI to the skies.

But while it is unclear whether the US has resolved, in its new AI-powered F-16 fighter jet, the same AI hurdle that China claims to have cleared, the Chinese scientists’ innovative work will certainly change the face of future air battles.

Prevailing AI technologies, such as deep reinforcement learning and large language models, function like a black box: tasks go in one side and results come out the other, while humans are left in the dark about the inner workings.

But aerial combat is a matter of life and death. In the near future, pilots will need to work closely with AI, sometimes even trusting their lives to these intelligent machines. The “black box” issue not only undermines people’s trust in machines, but also prevents deep communication between them.

Developed by a team led by Zhang Dong, an associate professor at Northwestern Polytechnical University’s school of aeronautics, the new AI combat system can explain every instruction sent to the flight controller using words, data and even graphics.

This AI can also articulate the meaning of each directive in relation to the current combat situation, the specific flight maneuvers involved, and the tactical intentions behind them.


Zhang’s team found that this technology opens a new window for human pilots to interact with AI.

For example, during a review session after a simulated engagement, an experienced pilot can discern the clues behind the AI’s failed performance. An efficient feedback mechanism then allows the AI to understand suggestions from its human teammates and avoid similar pitfalls in subsequent battles.

Zhang’s team found that this type of AI, which can communicate with humans “from the heart,” can achieve a nearly 100% victory rate with just about 20,000 rounds of combat training. In contrast, conventional “black box” AI can only achieve a 90% win rate after 50,000 rounds and struggles to improve further.

Zhang’s team has so far applied the technology only to ground-based simulators, but future applications would be “extended to more realistic aerial combat environments”, they wrote in a peer-reviewed paper published in the Chinese academic journal Acta Aeronautica et Astronautica Sinica on April 12.

In the US, the “black box” issue has been mentioned in the past as a problem for pilots.

US air-combat tests are being conducted jointly by the Air Force and the Defense Advanced Research Projects Agency (DARPA). A senior DARPA official has acknowledged that not all Air Force pilots welcome the idea, because of the “black box” issue.

“The big challenge I’m trying to address in my efforts here at DARPA is how to build and maintain custodial trust in these systems that have traditionally been considered unaccountable black boxes,” Colonel Dan Javorsek, a program manager in DARPA’s Office of Strategic Technology, said in an interview with National Defense Magazine in 2021.

DARPA has adopted two strategies to help pilots overcome “black box” apprehension. One approach has the AI initially handle simpler, lower-level tasks, such as automatically selecting the most suitable weapon based on the attributes of a locked target, so that pilots can launch it with a single button press.

The other method involves high-ranking officers personally boarding AI-powered fighter jets to demonstrate their confidence and determination.

China’s J-20 stealth fighter has a two-seat variant, with a pilot dedicated to interacting with AI-controlled unmanned wingmen. Photo: China Daily via Reuters
Earlier this month, US Air Force Secretary Frank Kendall took a one-hour flight in an artificial intelligence-controlled F-16 at Edwards Air Force Base. Upon landing, he told the Associated Press that he had seen enough during the flight to trust this “still learning” AI with the ability to decide whether to launch weapons in war.

“It’s a security risk not to have it. At this point, we need it,” Kendall told the AP.

That security risk is China. The US Air Force told the AP that AI offered it a chance to prevail against the increasingly formidable Chinese air force in the future. The report said that while China had AI, there was no indication it had found a way to conduct testing beyond simulators.

But according to Zhang’s team’s paper, the Chinese military imposes rigorous assessments of AI safety and reliability, insisting that AI be integrated into fighter planes only after cracking the “black box” riddle.

Deep reinforcement learning models often produce decision-making results that are cryptic to humans but exhibit superior combat effectiveness in real-world applications. It is challenging for humans to understand and deduce this decision-making structure based on pre-existing experiences.

“This presents a problem of trust in AI decisions,” Zhang and his colleagues wrote.

“Decoding the ‘black box model’ to enable humans to discern the strategic decision-making process, understand drone maneuvering intentions, and place confidence in maneuvering decisions remains the linchpin of the engineering application of AI technology in aerial combat. This also highlights the main objective of advancing our research,” they said.

Zhang’s team showed the capabilities of this AI through several examples in their study. In one losing engagement, for instance, the AI initially intended to climb and execute a cobra maneuver, followed by a sequence of combat turns, aileron rolls and loops to engage the enemy aircraft, culminating in evasive maneuvers such as diving and leveling off.


But an experienced pilot could quickly spot the flaws in this aggressive combination of maneuvers: the consecutive climbs, combat turns, aileron rolls and dives caused the drone’s speed to plummet during combat, ultimately leaving it unable to shake off the enemy.

And here is the human instruction to the AI, as written in the article: “The reduced speed resulting from consecutive radical maneuvers is to blame for this loss in air battle, and such decisions should be avoided in the future.”

In another round, where a human pilot would typically use methods such as roll attacks to find an effective position to destroy the enemy aircraft, the AI used large maneuvers to bait the enemy, entered the roll phase early, and flew level in the final phase to deceive the enemy, before achieving a decisive, winning attack with sudden large maneuvers.

After analyzing the AI’s intentions, researchers discovered a subtle maneuver that proved crucial during the standoff.

The AI “adopted a level-and-circle tactic, preserving its speed and altitude while inducing the enemy to execute radical changes of direction, depleting its residual kinetic energy and paving the way for subsequent looping maneuvers to deliver a counterattack”, Zhang’s team wrote.

Northwestern Polytechnical University is one of China’s most important military technology research bases. The US government has imposed strict sanctions on it and made repeated attempts to infiltrate its network systems, prompting strong protests from the Chinese government.

But it appears that US sanctions have had no obvious impact on exchanges between Zhang’s team and their international counterparts. They took advantage of new algorithms shared by American scientists at global conferences and also disclosed their innovative algorithms and frameworks in their paper.

Some military experts believe the Chinese military has a greater interest than its US counterpart in establishing guanxi – connection – between AI and human combatants.

For example, China’s stealth fighter, the J-20, has a two-seat variant, with one pilot dedicated to interacting with AI-controlled unmanned wingmen, a capability currently lacking in the US F-22 and F-35 fighters.

But a Beijing-based physicist who asked not to be named due to the sensitivity of the issue said the new technology could blur the line between humans and machines.

“This could open Pandora’s box,” he said.


