May 31st 2022.



By Rick Robinson

Artificial intelligence (AI) is now part of our everyday lives, and while it does not take the science-fiction form of humanoid robots functioning at the level of a human (yet), AI implementations are already capable of making independent decisions at a rapid pace. However, AI has well-documented challenges related to data bias, vulnerability and explainability. Northrop Grumman is working with U.S. Government organizations to develop policies for what tests need to be completed and documented to determine whether an AI model is sufficiently safe, secure and ethical for DoD use.

The DoD’s Defense Innovation Board (DIB) has responded to AI challenges with the AI Principles Project, which initially set out five ethical principles that AI development for DoD should meet: AI should be responsible, equitable, traceable, reliable and governable. To operationalize these DIB principles, AI software development should also be auditable and robust against threats.

These concerns in themselves are not new. People have worried about AI ethics since they first imagined robots. These ethical principles reflect this history and will help us get the most out of automation while limiting its risks. Here, three Northrop Grumman AI experts highlight the importance and complexity of implementing the DIB’s AI Principles in national defense.

Ethical AI, Operationalized

What is new, says Northrop Grumman Chief AI Architect Dr. Bruce Swett, is the challenge of operationalizing AI ethics: making ethical decisions and building them into AI systems before a subtle oversight or flaw can lead to negative or even catastrophic mission results. Developing secure and ethical AI is inherently complicated because it blurs the distinctions between development and operations that exist in more traditional computing.

For example, any time an image-recognition AI is re-trained on a new set of test images, it is in effect reprogramming itself, adjusting the internal recognition weights it has built up. Updating the AI model with new data to improve its performance could also introduce new sources of bias, attack, or instability that must be tested for safe and ethical use.
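To make that concrete, here is a minimal sketch, in Python with scikit-learn and entirely synthetic placeholder data, of the kind of gate a retrained model might have to clear before it replaces the previous version: the updated model is re-scored on a fixed evaluation set, including a simple subgroup check as a stand-in for a bias test. The dataset shapes, subgroup label and thresholds are illustrative assumptions, not an established DoD procedure.

```python
# Sketch: re-validating a model after it is updated with new training data.
# All data, labels, and thresholds below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for the original training data, newly collected data, and a fixed
# held-out evaluation set that never changes between model versions.
X_old, y_old = rng.normal(size=(500, 8)), rng.integers(0, 2, 500)
X_new, y_new = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
X_eval, y_eval = rng.normal(size=(300, 8)), rng.integers(0, 2, 300)
group = rng.integers(0, 2, 300)  # hypothetical subgroup label for a bias check

def evaluate(model):
    """Return overall accuracy and the worst accuracy across subgroups."""
    preds = model.predict(X_eval)
    overall = accuracy_score(y_eval, preds)
    per_group = [accuracy_score(y_eval[group == g], preds[group == g]) for g in (0, 1)]
    return overall, min(per_group)

# "Retraining" here is a full refit on old + new data; the updated weights are
# effectively a new program and must clear the same gates as the original.
model_v2 = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)

overall, worst_group = evaluate(model_v2)
ACCURACY_FLOOR, SUBGROUP_FLOOR = 0.45, 0.40  # illustrative thresholds only
if overall < ACCURACY_FLOOR or worst_group < SUBGROUP_FLOOR:
    print(f"Re-validation FAILED (overall={overall:.2f}, worst subgroup={worst_group:.2f}); do not deploy.")
else:
    print(f"Re-validation passed (overall={overall:.2f}, worst subgroup={worst_group:.2f}).")
```

The point of the sketch is that the evaluation set and thresholds stay fixed while the model changes, so every retraining run produces comparable, documentable evidence.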

According to Dr. Amanda Muller, technical fellow and systems engineer at Northrop Grumman, this fluid environment calls for an “approach that is very multidisciplinary — not just technology or just policy and governance, but trying to understand the problem from multiple perspectives at the same time.”

DevSecOps and Beyond

Some of these challenges are not unique to AI. The shift toward agile software development practices, with frequent update cycles, merged the previously separate stages of software development and operations into DevOps. As developers realized that security cannot be bolted on as an afterthought, it too was folded into the concept, leading to DevSecOps.

Now, experts are quickly understanding that AI security and ethics need to be an integral part of the DevSecOps framework. But the unique challenges of secure and ethical AI design extend beyond simply handling development, security, and operations as one moving process.
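As one sketch of what treating those checks as first-class pipeline stages could look like, the hypothetical Python gate below reads an evaluation report produced earlier in the pipeline and fails the build if any metric falls below its floor. The report fields, file name and thresholds are assumptions for illustration, not an established DoD or Northrop Grumman standard.

```python
# Sketch: an AI gate run as a DevSecOps pipeline stage. It consumes a JSON
# evaluation report and blocks promotion if any check is below threshold.
import json
import sys

# Floors a model version must clear before promotion (hypothetical values).
GATES = {
    "accuracy": 0.90,             # basic performance on the held-out set
    "robust_accuracy": 0.70,      # accuracy under adversarial perturbation
    "worst_group_accuracy": 0.85  # proxy for equitable behavior across subgroups
}

def run_gate(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)
    failures = [
        f"{metric}: {report.get(metric, 0.0):.2f} < {floor:.2f}"
        for metric, floor in GATES.items()
        if report.get(metric, 0.0) < floor
    ]
    if failures:
        print("AI gate FAILED:\n  " + "\n  ".join(failures))
        return 1
    print("AI gate passed; model version may be promoted.")
    return 0

if __name__ == "__main__":
    # e.g. python ai_gate.py eval_report.json  (report produced by the test stage)
    sys.exit(run_gate(sys.argv[1]))
```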

When an AI implementation goes online out in the world, it is exposed not only to learning experiences but also to hostile actors, says Vern Boyle, Vice President of Advanced Processing Solutions at Northrop Grumman. These actors may have their own AI tools and capabilities, making robustness to adversarial AI attacks a real and crucial consideration for DoD uses.

This risk is not limited to defense applications. One major tech company had to withdraw a “chatbot” aimed at teens after trolls attacked it, training it to respond to users with insults and slurs. In a defense environment, the consequences can endanger a far wider range of people, and attackers must be expected to understand AI well and know just how to target its vulnerabilities. Protecting AI data and models throughout the AI lifecycle, from development through deployment and sustainment, is critical for DoD applications of AI.
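One concrete way to probe that kind of vulnerability is to measure how much a model's accuracy drops under a well-known attack such as the fast gradient sign method (FGSM). The Python/PyTorch sketch below does this with a stand-in model and random placeholder data; the attack budget and the model itself are illustrative assumptions, not any specific fielded system.

```python
# Sketch: measuring robustness to an FGSM attack on a placeholder classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(32, 1, 28, 28)    # placeholder batch of images in [0, 1]
y = torch.randint(0, 10, (32,))  # placeholder labels
epsilon = 0.1                    # assumed per-pixel attack budget

# FGSM: perturb each pixel by epsilon in the direction that increases the loss.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
    robust_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()

# A large gap between the two numbers is a red flag for deployment.
print(f"clean accuracy:  {clean_acc:.2f}")
print(f"robust accuracy: {robust_acc:.2f}")
```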

The Complexity of Understanding Context

The current state of the art in AI is very good at a wide range of very specific tasks. Swett points out that it is crucial for people to know the limitations of current AI.

What it is not so good at, adds Boyle, is understanding context. AI operates only within its specific application, with no concept of the big picture. For example, AI has a hard time determining whether a puddle of water is 1 ft. deep or 10 ft. deep. A human can reason about the information around the puddle to add context and understand that it might not be safe to drive through it.

We rely on humans to provide context, but as Muller notes, they also need to be an integral part of the system. That brings with it a requirement to “keep the human involved,” even when a system is highly automated, and to configure the interaction to “allow humans to do the things humans do well,” she says.

Secure and Ethical AI for the Future

For Swett, the core ethical question that AI developers need to face is whether an AI model is suitable for DoD applications, and how to develop justified confidence in that model.

Having an integrated approach to AI, including AI policies, testing, and governance processes, will allow DoD customers to have auditable evidence that AI models and capabilities can be used safely and ethically for mission-critical applications.
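As one hedged illustration of what auditable evidence could look like in practice, the Python sketch below ties a specific model artifact and dataset version to the test results it cleared, in a record whose own hash makes later tampering detectable. The field names, hashing scheme and placeholder numbers are assumptions, not a mandated DoD format.

```python
# Sketch: building a tamper-evident audit record for a model release.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_audit_record(model_path: str, dataset_version: str, test_results: dict) -> dict:
    record = {
        "model_sha256": sha256_of(model_path),
        "dataset_version": dataset_version,
        "test_results": test_results,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }
    # Hash of the record itself, so any later tampering is detectable.
    record["record_sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

if __name__ == "__main__":
    # Write a placeholder artifact so the demo is self-contained.
    with open("model_v2.pt", "wb") as f:
        f.write(b"placeholder model weights")
    # Placeholder metric values, purely for illustration.
    results = {"accuracy": 0.93, "robust_accuracy": 0.74, "worst_group_accuracy": 0.88}
    print(json.dumps(build_audit_record("model_v2.pt", "imagery-2022-05", results), indent=2))
```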
