Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


How AI Safety Engineering Will Transform Engineering

Roman Yampolskiy

(Source: Gorodenkoff/Shutterstock.com)

Introduction

The journey of developing intelligent machines has led us down a path of discovery, surprise, awe, and concern. Even with so much yet to be discovered, we know that the quest for Safe AI will change how we design, develop, and bring intelligent products to market. Safe AI requires more than defending against outside attackers, as we do in cybersecurity; we must also ensure that the AI's own behavior is safe. Future AI development will focus on safety and risk management throughout every part of the development process.

Safety in AI Standards and Guidelines

One of the most pressing needs in AI Safety Engineering is widely accessible, practical guidelines and standards for AI development. Although a number of standards organizations and consortia are working toward establishing these resources, they are still far from reaching consensus (here again, that value-alignment problem crops up) and even further from providing specific technical guidance. In the meantime, safety-critical industries such as aerospace and medical manufacturing offer potentially useful risk-management models that could meet short-term AI Safety needs and serve as the foundation for eventual standards and frameworks. In particular, the Delphi method could be used to organize collective expert wisdom and judgment into risk assessments of different AI system types and into analyses of hard and soft takeoff scenarios.
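As a rough illustration of how the Delphi method might be operationalized, the sketch below aggregates expert risk ratings over successive survey rounds and stops once opinions converge. The 1-10 rating scale, convergence threshold, and sample ratings are illustrative assumptions, not drawn from any published standard.

```python
import statistics

def summarize_round(ratings):
    """Summarize one round of expert risk ratings (1 = negligible, 10 = severe)."""
    return statistics.median(ratings), statistics.pstdev(ratings)

def run_delphi(rounds, spread_threshold=1.0):
    """Walk through pre-collected rounds of ratings until expert opinion converges.

    `rounds` holds one list of ratings per survey round; in a real Delphi study,
    each round's questionnaire would be informed by the previous round's summary.
    """
    estimate = None
    for i, ratings in enumerate(rounds, start=1):
        estimate, spread = summarize_round(ratings)
        print(f"Round {i}: median risk {estimate}, spread {spread:.2f}")
        if spread <= spread_threshold:
            break  # consensus reached
    return estimate

# Hypothetical ratings for one AI system type across three survey rounds
consensus = run_delphi([[3, 9, 6, 8], [5, 8, 6, 7], [6, 7, 6, 7]])
print("Consensus risk estimate:", consensus)
```

The same loop could be applied separately to each AI system type or takeoff scenario under assessment, with the written rationale behind each rating fed back to the panel between rounds.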

Safety in Design Thinking and Process

A related need is to proactively incorporate safety into every area of engineering design, from design thinking all the way through bringing products to market. Let's face it: This will mean a change for many development teams. Look at the internet. Look at embedded systems. The reality is that most industries do not devote significant resources to safety until after a product begins to succeed. With intelligent systems, the consequences of leaving safety unchecked are more severe: it might not be possible to incorporate safety after the fact.

Safety in User Needs

Engineering Safe AI will require engineers to go beyond surface-level use cases and truly understand users' values and preferences. We're accustomed to taking use cases and user feedback at face value; however, the value-alignment problem discussed in Part 2 tells us that we need to dig deeper to understand actual needs. If, for example, we ask a student why she's attending school, she'd probably tell us that she wants to gain knowledge and to study an area of interest. Studies show, though, that the real or additional reasons often lie elsewhere: getting credentials, gaining peer approval, and increasing earning potential, for example. If, as engineers, we look only at surface-level user needs and feedback when developing intelligent products, we will likely build products that don't actually meet user needs … and intelligence that fails its users.
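One lightweight way to dig beneath stated needs is to compare what users say they want (surveys, interviews) with what their behavior reveals (usage analytics), and to flag features where the two diverge. A minimal sketch, assuming hypothetical feature names and a shared 1-10 importance scale:

```python
# Stated priorities (from surveys) vs. revealed priorities (from usage data).
# All feature names and scores are hypothetical placeholders.
stated = {"study_tools": 9, "social_feed": 3, "credentials": 5}
revealed = {"study_tools": 4, "social_feed": 8, "credentials": 9}

def alignment_gaps(stated, revealed, threshold=3):
    """Return features whose stated and revealed importance differ by more
    than `threshold` points, signaling a possible value-alignment gap."""
    return {
        feature: revealed[feature] - stated[feature]
        for feature in stated
        if abs(revealed[feature] - stated[feature]) > threshold
    }

for feature, gap in alignment_gaps(stated, revealed).items():
    print(f"{feature}: revealed minus stated = {gap:+d} -> re-examine this need")
```

A large gap doesn't tell you which signal is right; it tells you where to go back and talk to users before baking an objective into an intelligent product.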

Safety in Vulnerability Assessment

At every stage of the process, the safety of the product and of its intelligence (memory, processing speed, etc.) must be considered and addressed. A hacker needs to find just one weakness in this infinite space of possibilities, and an AI could exploit its own intelligence in ways we can't predict or even imagine. What are the potential product vulnerabilities? What are the potential misuses? How could individual features be misused? How could each scenario backfire? Is the data safe and secure? How do we know? Does the product include more intelligence than is necessary? How could that be misused? How could it be secured?
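One way to work through these questions systematically is a simple risk register that enumerates feature-misuse pairs and scores each one. The features, misuse scenarios, and 1-5 likelihood/impact scales below are illustrative assumptions in the style of conventional risk assessment, not a method prescribed by the article:

```python
from dataclasses import dataclass

@dataclass
class MisuseScenario:
    feature: str
    scenario: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

# Hypothetical entries for a smart-camera product
register = [
    MisuseScenario("voice control", "spoofed commands via recorded audio", 3, 4),
    MisuseScenario("cloud sync", "exfiltration of stored footage", 2, 5),
    MisuseScenario("auto-learning", "model drift from poisoned inputs", 2, 4),
]

# Review the highest-risk feature/misuse pairs first
for entry in sorted(register, key=lambda e: e.risk, reverse=True):
    print(f"[risk {entry.risk:2d}] {entry.feature}: {entry.scenario}")
```

Revisiting the register at every stage of development, rather than once before launch, is what distinguishes this from an after-the-fact security audit.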

Conclusion

Engineering Safe AI will bring changes to our processes, ways of thinking, and priorities. Having AI development frameworks and standards will go a long way toward engineering Safe AI. In the meantime, borrowing practices from the aerospace and medical device industries will be useful, as will building safety into every aspect of your product development process. Up next in Part 6, we'll look at one emerging technique for developing Safe AI, called Artificial Stupidity.





Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach.

