Pamela Isom
Director, U.S. Department of Energy, USA
Pamela K. Isom is Director of the Artificial Intelligence & Technology Office at the U.S. Department of Energy. She is a distinguished, highly valued Senior Executive whose emphasis is on applied responsible and trustworthy artificial intelligence (AI) to drive innovation, clean energy, equity and energy justice. A principal corporate officer and geospatial leader, Pamela is transforming the use of AI to save lives, fight fraud, deliver ethical, secured outcomes and strengthen our critical infrastructure. She oversees the digital programme and develops risk mitigation policies and standards. Pamela received the prestigious Fed100 award in 2021, the AFCEA innovation award in 2020, and federal GEARS in Government awards in both 2020 and 2019 for exemplary leadership and results in AI, geoscience and applied strategy, product and portfolio management.
U.S. Department of Energy, USA
E-mail: pamela.isom@hq.doe.gov
Abstract Confidence in artificial intelligence (AI) is necessary, given its growing integration in every aspect of life and livelihood. Citizens are sharing both good and unpleasant experiences, which have fuelled opinions of AI as an emerging, advantageous capability while also raising an abundance of concerns that must be addressed. Clean energy scientific discoveries, the supply of autonomous vehicles that perform with zero carbon emissions, the rapid discovery of chemicals and/or anomalies that generate medicinal value, and the integration of AI in human resource processes for accelerated efficiencies are examples of AI use cases that can save lives and do so at the speed of urgency. The challenge is to ensure that models, algorithms, data and humans — the whole AI — are secure, responsible and ethical. In addition, there must be accountability for safety and civil equity and inclusion across the entire AI life cycle. With these factors in action, risks are managed and AI is trustworthy. This paper considers existing policy directives that speak to managing risks across the AI life cycle and provides further perspectives and practices to advance implementation.
KEYWORDS: confidence, responsible AI, AI ethics, explainable, trustworthy AI, AI life cycle, governance, stewardship, AI risk management, AI assurance, AI-IV&V
INTRODUCTION
Advancements in artificial intelligence (AI) have brought about tremendous results through a host of capabilities, including integration in robots and computer vision to make our cities smarter through greater management of critical infrastructure. Traffic lights are being controlled by AI algorithms, together with cameras, to keep traffic moving and save energy at the same time,1 and autonomous vehicles rely on AI to decrease traffic accidents by predicting the paths of pedestrians and adjusting speed and routes. AI underpins automation efficiencies such as applicant screening and, according to the Smithsonian, Beethoven’s unfinished Tenth Symphony2 was completed with the help of AI. Most importantly, AI has saved and will continue to save lives.3 Despite good intentions, however, AI has also drawn attention to the use of algorithms to spread misinformation and to draw inferences and recommendations that have adverse impacts on society. Facial recognition algorithms, for instance, are convenient at times but have caused significant harm, including false arrests.4 The practice of putting profit before safety and the misuse of citizens’ personal information5 are also common concerns. These situations are real and require immediate action. AI is here to stay; therefore, the need for assurances of secure, responsible and ethical AI, where accountability for safety, civil rights and liberties is the norm, is not optional. Only when such assurances are in place is AI worthy of the people’s confidence.
AI LIFE CYCLE AND STEWARDSHIP
Whether applications are narrow or general, stakeholders must understand the life cycle of AI, from research and development to the impacts of deployments on the environment and citizens, when planning its use and enablement. Figure 1 illustrates the AI life cycle and considerations for human oversight along with automation, so that governance is effective, mitigates risks and strengthens user confidence. Note that the entry points of the AI life cycle can vary, and the process is iterative. User A might, for example, be responsible for data acquisition and monitoring the AI’s performance, while User B is responsible for supply chain management. In this scenario, the entire life cycle remains relevant for effective governance and oversight, while stakeholder engagement and the degree of collaboration vary.
Stewardship and human oversight are required throughout the entire AI life cycle:
- AI supply chain: It is important to understand the supply chain of AI hardware, software and supporting assets. One should consider candidate suppliers that have a record of responsible, trustworthy AI business practices. There should be accessible, actionable AI ethics strategies and plans, transparency in regulatory compliance, and explanations of error handling and recovery mechanisms in the event of erroneous outcomes. Suppliers can be internal and external to an organisation. Supply chain managers and stakeholders must be held accountable for asset quality, along with prevention and risk management, from supply to customer provisioning;
- Data acquisition: One must establish a data chain of custody for greater accountability and traceability. Testing and training data should carry a recorded chain of custody to ensure an accurate account of who had access to the data and when it may have been modified (a minimal chain-of-custody sketch follows this list). In addition, it is important to understand data rights and licensing permissions and to embed metadata with datasets to ensure the right to use, modify, reproduce, release, perform, display or disclose technical data, in whole or in part, within the government, as applicable.6 Stakeholders need to know how data is acquired, along with data use privileges and regulations, to make informed and evidence-based decisions. If stakeholders select datasets from the open market, cyber security precautions are required before local network integration. Applied import and export policies are critical in this phase of the life cycle to ensure data has not been and is not subjected to illegal or irresponsible transmissions across boundaries. Another consideration when acquiring data is to understand its state: was the data encrypted, is any data missing, how is privacy protected, and are the privacy policies clearly marked and sufficient for the data classification levels?;
- Model development: It is important to ensure that algorithms are trained and tested and that models are architected for transparency. There must be diverse representation of human involvement for verification and validation of models, based on scope, context and intent. The AI should be robust: training of the algorithms should result in models that are resilient and of high quality. For deep learning scenarios where an algorithm is unsupervised and learns from itself, there must be traceability to the underlying networks and substantial, diversified data. It is necessary that AI stakeholders understand how models are developed, trained and tested, including the data inputs and outputs. Take the scenario of training animals. If a dog is trained to sit when in the presence of an infant, a fundamental training parameter is identification of infants. If the infant was consistently held by adults during the training process, then the dog may not recognise a crawling baby as an infant. The dog could have a built-in, perhaps unintentional bias that infants do not crawl, and so does not sit in their presence. Trained algorithms combined with training data work similarly, affecting the performance of models. Algorithms depend on data, which is one of the vulnerabilities of AI because data is susceptible to mishandling as well as misuse. Bias in training data, due to prejudice in labelling and under- or over-sampling, produces models with discriminatory effects (a simple representation check is sketched after this list). Biased AI can have serious negative consequences, particularly for vulnerable communities and minorities, which can also erode public trust in the use and adoption of AI technology. Ethical AI is designed and deployed in ways that reflect considerations of equality, justice and integrity. Methods should be pursued that ensure AI technologies mitigate bias concerns and are fair, transparent and well-defined in their limitations, and that they generate comprehensible output so that the benefits of AI are enjoyed by all;
- Model deployment: One must securely integrate and deploy AI so that assets are continuously managed and protected. It is critical to know whether AI systems are being monitored for anomalies. Benchmarks for performance success and failure need to be defined and described. For instance, robotic-based surgical procedures require precision: there, 90 per cent accuracy can have dire consequences, while 90 per cent accuracy in predicting the time to reach a destination is more tolerable if safety is not at stake. Benchmarks and metrics must be derived based on scope, context and intent. Furthermore, one must remember that as the use of AI expands, so too does the attack surface. Attacks on AI systems supporting critical infrastructure pose a strategic threat to the nation and must therefore be mitigated through defensive strategies, including teams that model adversarial behaviours and simulate attacks against deployed AI systems. Categorisation of AI systems by risk type (eg high-risk, no-risk) and automatic registration in the enterprise repository are example deployment practices that can be leveraged across the life cycle, in particular to measure AI performance (a minimal risk-tier sketch follows this list);
- AI performance: It is important to continuously monitor the AI, including its predictions, recommendations and overarching performance. This phase ensures that the adoption is valuable and that outputs are accurate, reliable, explainable and unbiased. It is important to have clear ownership and access rights established for the AI for greater sustainability. Suppose the dog/infant example described above is an AI model that surveils an area looking for infants. Depending on the training of the algorithms and the data, the model may or may not perform to expectations. AI should be carefully selected, and predictions continuously monitored for relevancy and precision. Tools for understanding AI actions remain an emerging area, but they are necessary to mitigate risks. As traceability of AI outcomes becomes more advanced in tools and infrastructure, so will the confidence of users. Other questions to consider include: are outputs meeting the performance requirements? Are algorithms fit for purpose, and are the results explainable and available for interpretation? What are the long-term impacts of the AI (eg growing AI talent, increased use of products and services, heightened fear of AI due to persistent misinformation)? Given the high-impact areas in which AI is increasingly applied, there is a need for AI technologies and infrastructure to provide decision support in a way that is safe, secure and robust. Moreover, if AI systems do fail, they must do so in a way that is recognisable and mitigates any harmful consequences.
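The sketch below illustrates one way the data chain of custody described under data acquisition could be recorded. It is a minimal example, not a prescribed practice; the names (CustodyEvent, record_event) and fields are assumptions chosen for illustration.

```python
# Minimal sketch of a data chain-of-custody record (illustrative only).
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    dataset_id: str   # identifier of the dataset being tracked
    actor: str        # who accessed or modified the data
    action: str       # e.g. "acquired", "modified", "transferred"
    sha256: str       # content hash, so undocumented changes are detectable
    licence: str      # data rights and licensing permissions
    timestamp: str    # UTC time of the event

def record_event(path: str, dataset_id: str, actor: str,
                 action: str, licence: str, log_file: str) -> CustodyEvent:
    """Hash the dataset file and append a custody event to an audit log."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    event = CustodyEvent(dataset_id, actor, action, digest, licence,
                         datetime.now(timezone.utc).isoformat())
    with open(log_file, "a") as log:
        log.write(json.dumps(asdict(event)) + "\n")
    return event
```

In practice, each acquisition, modification or transfer would append one event, giving stakeholders the traceable account of access and modification that the data acquisition phase calls for.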
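As a complement to the bias discussion under model development, the following sketch shows a simple representation check over training data. The column names, threshold and toy data are assumptions for illustration; real bias assessments are considerably broader.

```python
# Illustrative check for under-represented (group, label) pairs in training data.
from collections import Counter

def representation_report(labels, groups, min_share: float = 0.10):
    """Report the share of each (group, label) pair and flag any pair
    whose share of the training data falls below `min_share`."""
    total = len(labels)
    counts = Counter(zip(groups, labels))
    report = {}
    for (group, label), n in counts.items():
        share = n / total
        report[(group, label)] = {
            "count": n,
            "share": round(share, 3),
            "flag": share < min_share,  # candidate for re-sampling or review
        }
    return report

# Example: a toy dataset in which group "B" is heavily under-sampled.
labels = ["hire", "hire", "reject", "hire", "reject", "hire", "hire", "reject"]
groups = ["A", "A", "A", "A", "A", "A", "B", "B"]
for key, row in representation_report(labels, groups).items():
    print(key, row)
```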
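Finally, the deployment practice of categorising AI systems by risk type and automatically registering them in an enterprise repository might look something like the sketch below. The tiers, fields and in-memory registry are hypothetical and would need to reflect an organisation's actual risk framework, not a DOE or NIST scheme.

```python
# Illustrative risk-tier categorisation with automatic registration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AISystem:
    name: str
    safety_critical: bool    # e.g. surgical robotics, grid control
    handles_pii: bool        # personal or otherwise sensitive data
    autonomous_action: bool  # acts without a human in the loop

def risk_tier(system: AISystem) -> str:
    """Assign a coarse risk tier used to drive monitoring requirements."""
    if system.safety_critical or (system.autonomous_action and system.handles_pii):
        return "high-risk"
    if system.handles_pii or system.autonomous_action:
        return "medium-risk"
    return "low-risk"

registry: List[Dict] = []  # stand-in for an enterprise repository

def register(system: AISystem) -> Dict:
    entry = {"name": system.name, "tier": risk_tier(system)}
    registry.append(entry)  # downstream life-cycle phases can query this inventory
    return entry

print(register(AISystem("route-eta-predictor", False, False, False)))
print(register(AISystem("surgical-assist-arm", True, True, True)))
```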
AI governance requires decision making and actions that drive assurances to maximise value and do no harm to citizens. Maximising value is a requirement of investments made in products, services and time. The National AI Research Resource (NAIRR) task force, for instance, specifies in its charter7 that a ‘roadmap and plan will be established that includes capabilities required to create and maintain a shared computing infrastructure to facilitate access to advanced computing resources for researchers across the country’. ‘Do no harm’ entails guidelines for fair, ethical behaviours and accountability. In the charter, NAIRR specifies that it will ‘include an assessment of privacy and civil rights and civil liberties associated with NAIRR and its research’.8
Another example of ‘do no harm’ is the excerpt from the UK’s National AI Strategy that specifies it will ‘ensure that the UK gets the national and international governance of AI technologies right to … protect the public and international values’.9 AI governance involves benchmarks that scale beyond traditional measurements by encompassing social and ethical accountability and, given the traditional black-box nature of AI, requires greater democratisation of AI assets.
Recall the earlier mention of the need for diverse test and evaluation teams that are responsible for proactively assessing the robustness of AI systems used across critical infrastructure. AI independent verification and validation (AI-IV&V) is a strategic imperative for maintaining responsible and trustworthy AI.
IV&V is defined by the National Institute of Standards and Technology (NIST) as follows:
‘Verification and validation (V&V) performed by an organization that is technically, managerially, and financially independent of the development organization. “A comprehensive review, analysis, and testing, (software and/or hardware) performed by an objective third party to confirm (i.e., verify) that the requirements are correctly defined, and to confirm (i.e., validate) that the system correctly implements the required functionality and security requirements”.’10
AI-IV&V, on the other hand, focuses on the comprehensive review, analysis, testing and performance of AI models by evaluating the AI’s performance relative to the entire AI life cycle. AI-IV&V participants comprise interdisciplinary, diverse teams, supported by applied machine learning (ML) and AI automation, to ensure systems are certified for use. The operational efficiencies should, among other things, entail model compliance with energy-efficiency and carbon emission standards and rapid, precise identification of networking issues such as a faulty router and traffic redirection, with recommendations for the proper permanent resolutions, and, as stated throughout, should empower teams to recognise and address harmful biases for greater incident management.
There is significant opportunity for parents and educational leaders to govern the use of AI for and by our children. Toys powered by AI are here today, and they will only proliferate in the future as educational and play devices. These toys learn from and speak with children, answer questions, provide recommendations, compose and send messages, and even prepare school lessons. AI presents great immersive experiences through gamification and intelligent virtual assistants that must protect children’s rights and privacy. According to the World Economic Forum, ‘smart toys present enormous promise but also pose many potential risks to children if not designed responsibly’.11
In addition, organisations should educate staff about both the benefits of AI and the risks, which should be mitigated through the secure development and deployment of AI systems. Education should entail inclusivity in thinking and explain the value of AI in hiring practices, with recognition that diversity in AI leads to more equitable, secure and trustworthy outcomes. Stakeholders should be aware of the consequences of AI bias and ways to mitigate bias throughout the AI life cycle. Public–private partnerships with government agencies such as the Department of Energy (DOE) and NIST to evolve AI leadership and risk mitigation best practices are an excellent step towards nationwide AI assurance.
Generative adversarial networks (GANs) generate synthetic instances of data that can pass for real data. They are typically used to fill gaps in, for example, artistry and video streams. Fake media content such as deepfakes12 is an example of GAN output with unethical uses, such as digital photos of fake individuals and the use of those photos for harm. Adversarial attacks are enabled by limitations in algorithms and their dependence on data. Such attacks can destabilise AI by exploiting algorithms to degrade, deceive or invert models. Attack examples include spoofing, phishing and data poisoning. AI governance should therefore encompass using AI algorithms to detect GANs and other adversarial AI threats. In addition, AI governance must be applied across the AI life cycle and advanced in both digital and human audits to protect the AI ecosystem.
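Data poisoning in particular exploits AI's dependence on data. As one hedged illustration, the sketch below screens training records for statistical outliers before training. It is a single, limited layer of defence, and the z-score threshold and toy data are assumptions rather than recommended settings; real defences against poisoning and other adversarial attacks combine many techniques.

```python
# Illustrative statistical screen for anomalous training records,
# one limited layer of defence against data poisoning.
import numpy as np

def flag_outliers(features: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of rows whose feature values deviate strongly from the
    column-wise mean; candidates for human review before training."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-12       # avoid division by zero
    z = np.abs((features - mean) / std)
    return np.where(z.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 4))               # benign samples
poisoned = np.vstack([clean, [[12.0, 0.0, 0.0, 0.0]]])    # one injected outlier
print(flag_outliers(poisoned))  # expected to flag the injected row (index 500)
```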
CONCLUSION
Excellence in AI requires stewardship and governance with the infusion of ethical and ultimately responsible principles and practices in every aspect of the AI life cycle. This requires education and training in the implications of AI in daily operations from the young to the most senior and across all occupations. Today, opportunities exist to advance AI for decision support that encompasses:
- Creating explainable results along with transparent warnings for personnel engaging with the AI systems. According to the DARPA explainable AI (XAI) program13 retrospective, users prefer systems that provide decisions with explanations over systems that provide only decisions. The retrospective goes on to report that explanations are most valuable when a user needs to understand the inner workings of how an AI system makes decisions, and that explanations are more helpful when the AI is incorrect;
- Calculating uncertainty and its propagation to quantify confidence levels in AI outputs. In general, the greater the level of uncertainty, the lower the confidence level. Confidence is another way of communicating the probability of a model or a hypothesis performing as intended. A confidence interval with a 90 per cent confidence level signals that, if the estimation were repeated many times, roughly 90 out of 100 of the resulting intervals would contain the true performance value (a bootstrap sketch follows this list). There are opportunities to improve risk modelling and management for AI where the impacts of models are assessed and use case guidance is provided, using confidence levels as one of several key parameters;
- Developing methods to self-assess and diagnose issues is essential for continuous operations and to prevent threat advancements. The DOE Office of Cybersecurity, Energy Security and Emergency Response14 provides such capabilities. Continuous evolution of measures that detect attacks on AI during development, implementation and operations is an imperative for infrastructure protections;
- Identifying potential vulnerabilities in developed AI technologies to aid in risk management is a fundamental requirement of AI certifications. AI and autonomous systems will play a vital role in critical missions15 where surveillance data must be governed in a responsible manner and protect citizens’ privacy. The Responsible Artificial Intelligence Institute’s certification16 programme is based on objective assessments of fairness, explainable outcomes and concrete metrics of responsibly built AI systems. DOE’s Artificial Intelligence & Technology Office17 leads in innovative AI governance along with coordination, development and training on responsible/trustworthy AI and risk mitigation practices.
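To make the confidence-level point above concrete, the sketch below estimates a 90 per cent bootstrap confidence interval for a model's accuracy. The resampling scheme, confidence level, function name and toy data are illustrative assumptions, not a prescribed method.

```python
# Illustrative bootstrap confidence interval for model accuracy.
import numpy as np

def bootstrap_accuracy_ci(correct: np.ndarray, level: float = 0.90,
                          n_resamples: int = 2_000, seed: int = 0):
    """Resample per-prediction correctness indicators with replacement and
    return the (lower, upper) percentile interval for accuracy."""
    rng = np.random.default_rng(seed)
    n = len(correct)
    samples = rng.choice(correct, size=(n_resamples, n), replace=True)
    accuracies = samples.mean(axis=1)
    alpha = (1.0 - level) / 2.0
    return (np.quantile(accuracies, alpha), np.quantile(accuracies, 1.0 - alpha))

# Example: 1,000 monitored predictions, of which 870 were correct.
correct = np.array([1] * 870 + [0] * 130)
low, high = bootstrap_accuracy_ci(correct)
print(f"90% CI for accuracy: [{low:.3f}, {high:.3f}]")
```

A wide interval signals high uncertainty and lower confidence in the reported performance; the interval can then feed risk modelling alongside other parameters.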
There is undeniable excitement about the convenience, innovation and accelerated outcomes of AI. Increased confidence will motivate citizens to use and generate higher-quality algorithms. Confidence in AI will strengthen collaborative teaming across industry, governments, regulatory bodies, academia and international entities to continuously improve the whole AI for the greater good. Table 1 summarises assurances described throughout this paper to mitigate risks, build confidence and facilitate responsible, trustworthy AI outcomes.
- AI is a powerful tool and technology that has tremendous benefits for citizens, but AI also introduces risks that can have an adverse impact on one’s life and livelihood.
- AI stakeholders should understand, apply and insist on governance and stewardship across the entire AI life cycle from the supply chain to management and monitoring of the performance.
- The AI life cycle process is continuous, and accountability does not cease once models are deployed.
- AI stewardship and governance encompasses risk management and considers the whole AI: its models, algorithms, data and human interactions.
- AI risk management considers scope, context and intent and at a minimum assesses security, safety, responsibility, accountability, traceability, explainable AI, equity, fairness and AI ethics strategy and practices.
- Educators should teach students of all ages, and organisations should provide training and upskill workers, on responsible and trustworthy AI best practices as an essential component of AI talent management and AI assurance, and as a crucial component of enabling national security.
- This paper considers existing policy directives that speak to managing risks across the AI life cycle and provides further perspectives and practices to advance implementation. The Organisation for Economic Co-operation and Development (OECD) AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values.18 The Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government19 requires agencies to apply AI that is: a) lawful and respectful of our nation’s values; b) purposeful and performance-driven; c) accurate, reliable and effective; d) safe, secure and resilient; e) understandable; f) responsible and traceable; g) regularly monitored; h) transparent; and i) accountable. In summary, standard approaches to categorisation and classification of AI systems are required to effectively govern performance management and policymaking.20
References
- Kanowitz, S. (March 2020), ‘How AI can reduce traffic congestion and fuel consumption’, GCN, available at https://gcn.com/emerging-tech/2020/03/how-ai-can-reduce-traffic-congestion-and-fuel-consumption/303335/ (accessed 9th January, 2022).
- Elgammal, A. (September 2021), ‘How Artificial Intelligence Completed Beethoven’s Unfinished Tenth Symphony’, Smithsonian, available at https://www.smithsonianmag.com/innovation/how-artificial-intelligence-completed-beethovens-unfinished-10th-symphony-180978753/ (accessed 9th January, 2022).
- Komorowski, M., Celi, L. A., Badawi, O. and Gordon, A. C. (October 2018), ‘The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care’, Nature Medicine, available at https://www.nature.com/articles/s41591-018-0213-5 (accessed 9th January, 2022).
- Kelley, A. (December 2021), ‘ACLU Calls for Halt of Homeland Security’s Use Of Facial Recognition Technology’, Nextgov, available at https://www.nextgov.com/emerging-tech/2021/12/aclu-calls-halt-homeland-securitys-use-facial-recognition-technology/187377/ (accessed 9th January, 2022).
- Milmo, D. (October 2021), ‘Facebook whistleblower accuses firm of serially misleading over safety’, Guardian, available at https://www.theguardian.com/technology/2021/oct/05/facebook-whistleblower-accuses-firm-of-serially-misleading-over-safety (accessed 9th January, 2022).
- DISA, ‘Data Rights’, available at https://disa.mil/about/legal-and-regulatory/datarights-ip/datarights (accessed 9th January, 2022).
- National AI Research Resource Task Force (2021), ‘Advisory Committee Charter’, available at https://www.ai.gov/wp-content/uploads/2021/04/National-AI-Research-Resource-Task-Force-Charter-2021.pdf (accessed 9th January, 2022).
- Ibid., ref. 7.
- Gov.UK (September 2021), ‘Guidance: National AI Strategy – HTML version’, available at https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version (accessed 9th January, 2022).
- NIST, ‘Independent verification & validation (IV&V)’, Glossary, available at https://csrc.nist.gov/glossary/term/independent_verification_and_validation (accessed 9th January, 2022).
- Tedeneke, A. (May 2021), ‘Smart Toy Awards announce AI-powered toy winners in categories from “creative play” to “innovation” and “smart companion”’, WEF, available at https://www.weforum.org/agenda/2021/05/how-7-smart-toys-are-protecting-kids-data-and-safety/ (accessed 9th January, 2022).
- Somers, M. (July 2020), ‘Deepfakes explained’, MIT Sloan School of Management, available at https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained (accessed 9th January, 2022).
- Gunning, D., Vorm, E., Wang, J. Y. and Turek, M. (November 2021), ‘Editorial: DARPA’s explainable AI (XAI) program: A retrospective’, available at https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.61 (accessed 9th January, 2022).
- Office of Cybersecurity, Energy Security, and Emergency Response, U.S. Department of Energy, available at https://www.energy.gov/ceser/office-cybersecurity-energy-security-and-emergency-response (accessed 9th January, 2022).
- Vergun, D. (February 2022), ‘Artificial Intelligence, Autonomy Will Play Crucial Role in Warfare, General Says’, U.S. Department of Defense, available at https://www.defense.gov/News/News-Stories/Article/Article/2928194/artificial-intelligence-autonomy-will-play-crucial-role-in-warfare-general-says/ (accessed 15th February, 2022).
- Responsible Artificial Intelligence Institute, ‘RAI Certification Beta’, available at https://www.responsible.ai/certification (accessed 9th January, 2022).
- Artificial Intelligence & Technology Office/Department of Energy, available at /artificial-intelligence-technology-office (accessed 15th February, 2022).
- OECD.AI, ‘OECD AI Principles overview’, available at https://oecd.ai/en/ai-principles (accessed 15th February, 2022).
- Executive Office of the President (December 2020), ‘Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government’, National Archives, available at https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government (accessed 15th February, 2022).
- Aiken, C. (December 2021), ‘Testing frameworks for the classification of AI systems’, OECD.AI, available at https://oecd.ai/en/wonk/cset-test-classification-ai-systems (accessed 7th March, 2022).