The Army is implementing two new strategies to protect its troops under its 500-day AI implementation plan.
The U.S. Army has announced measures to protect its soldiers while enhancing its AI capabilities under a 500-day plan.
On Wednesday, the Army’s Acquisition, Logistics and Technology (ALT) office released two new initiatives, "Break AI" and "Counter AI," aimed at testing emerging AI technologies for practical field use and at safeguarding against the malicious use of AI against the U.S., according to the Federal News Network.
The military is weighing both the safe implementation of AI and coordination with external partners as it develops the technology across its branches.
One of the challenges of adopting AI is determining how to assess risk, including threats such as poisoned datasets, adversarial attacks, and trojans, Young Bang, principal deputy assistant secretary of the Army for Acquisition, Logistics and Technology, said during a tech conference in Georgia on Wednesday.
He said it is easier to integrate third-party or commercial vendors' algorithms into Army programs if they were developed in a controlled, trusted environment owned by the Department of Defense or the Army. "We want to adopt them," he said.
Bang also announced the completion of the Army's 100-day sprint on incorporating AI into its acquisition process.
The Federal News Network reported that the sprint's goal was to explore how the Army could create its own AI algorithms while collaborating with trusted third parties to ensure the technology's security.
The Army is now moving to implement AI across the board and develop systems for its use, while also strengthening its defenses against adversarial uses of AI.
The "Break AI" initiative aims to explore the evolution of AI under the field of artificial general intelligence (AGI), which involves creating software that can match or surpass human cognitive abilities, and has the potential to employ advanced decision-making and learning capabilities.
The technology, which is not yet fully realized, aims to enhance current AI software that can only predict outcomes based on given data.
Developing, and defending against, such an ill-defined technology presents a significant challenge for the Army.
"As we move towards AGI, how do we test something that we don't know the outcome or behaviors of?" Bang reportedly said.
"We cannot evaluate it in the same manner as we assess deterministic models, and we require the industry's assistance."
Jennifer Swanson, deputy assistant secretary of the Army's office of Data, Engineering and Software, explained that the second part of the Army's 500-day plan is more straightforward.
"Our goal is to ensure the security of our platforms, algorithms, and capabilities against attacks and threats, while also addressing how we counter the adversary's actions. We understand that we are not the only ones investing in this, as there is significant investment happening in countries that pose significant threats to the United States."
The military branch is keeping quiet about the specific AI capabilities it plans to develop due to the sensitive nature of the initiatives.
"As we learn and determine our actions, we will inevitably have shared aspects," Swanson said.