
U.S. Chamber of Commerce Artificial Intelligence Commission Field Hearing

Subject: Testimony of Rohit Israni, Chair INCITS/AI at the U.S. Chamber of Commerce Artificial Intelligence Commission Field Hearing

Date and location: June 13, 2022, 12:00 - 4:00 PM BST / 7:00 - 11:00 AM ET, London, U.K.


 

Co-chairs Rep. John Delaney and Rep. Mike Ferguson, and distinguished members of the AI Commission, thank you for inviting me to this field hearing. My testimony addresses ‘International AI standards: a tool for policymakers and regulators to mitigate risks while enabling worldwide innovation’.

Introduction: Artificial intelligence (AI) is a much-discussed technology that holds great promise. While AI brings many benefits, it also raises concerns, for instance regarding data privacy, unintended bias, and the ethical and societal implications for people who use or encounter AI technologies. Created under the auspices of ISO/IEC JTC 1, the information technology arm of ISO and the IEC, subcommittee SC 42, Artificial intelligence, is the international standards body responsible for AI. The subcommittee takes an ecosystem approach, considering emerging requirements from a comprehensive range of perspectives: regulatory, business, sector-specific, societal, and ethical. It assimilates these requirements in the context of how the technologies are used, translates them into technical requirements, and develops horizontal deliverables that enable responsible adoption of AI across industry sectors. Currently 50 nations are engaged (35 as participating members and 15 as observers), and the committee maintains liaisons with 45 entities, including the Organisation for Economic Co-operation and Development (OECD), the European Commission (EC), the European Trade Union Confederation (ETUC), and others.

The InterNational Committee for Information Technology Standards (INCITS) is the central U.S. forum dedicated to creating technology standards for the next generation of innovation. INCITS/Artificial Intelligence, the U.S. Technical Advisory Group to ISO/IEC JTC 1/SC 42 on Artificial Intelligence, represents U.S. interests in the development of international standards for AI. It was established in 2018 in response to international standardization needs. Participants include major U.S. tech companies, government organizations, research institutions, and universities.

Program of Work: The program of work of SC 42 addresses nine key areas: (i) Application guidance and use cases; (ii) Foundational standards; (iii) Computational aspects; (iv) Data ecosystem; (v) Trustworthiness; (vi) Ethical aspects and societal concerns; (vii) Testing of AI-based systems; (viii) Management system standards; and (ix) Governance implications.

(i) Application guidance and use cases: AI is already used in many products and services, e.g., in fintech, healthcare, online fraud protection, automotive, recommendation engines, and many other areas. In fact, almost every sector is expected to be impacted by AI. Nearly 60 percent of financial-services sector respondents in McKinsey’s Global AI Survey report that their companies have embedded at least one AI capability. SC 42 collects use cases across sectors to ensure that its horizontal standards are broadly applicable and provide guidance to AI application domain developers and application domain standards groups. SC 42 is also developing guidelines for AI applications that enable application developers, open-source communities, and application SDOs/committees to leverage the work of SC 42. In addition, a standard on AI system life cycle processes is being developed.

(ii) Foundational standards: With AI impacting multiple sectors and drawing stakeholders from the technology industry as well as government, policymakers, regulators, social scientists, legal experts, and even consumers, a common terminology is critical for consistent usage of key terms across the world. Foundational standards provide this common language that can be used across stakeholder domains. In response to the RFI from the Commission seeking comments on questions pertaining to artificial intelligence definitions, we made a contribution drawing your attention to ISO/IEC 22989, Artificial intelligence - Concepts and terminology, in which artificial intelligence and related terms have been defined after deliberations by experts from 35 countries.

(iii) Computational aspects: At the heart of AI systems are computational technologies, which include heterogeneous hardware ranging from advanced central processing units (CPUs), graphics processing units (GPUs), and field programmable gate arrays (FPGAs) to custom application-specific integrated circuits (ASICs), as well as many layers of software stacks ranging from AI frameworks to modules for data ingestion, processing, and analysis. SC 42 is looking at the complete set of computational approaches and characteristics of AI systems. It has published an overview of the state of the art of computational approaches for AI systems, which describes the main computational characteristics, algorithms, and approaches used in AI systems, referencing exemplary use cases. SC 42 is working on projects in this area that range from a reference architecture for knowledge engineering, focused on the front end of the process, to the assessment of classification performance for machine learning models and an overview of machine learning computing devices.

(iv) Data ecosystem: The current wave of innovation in AI is powered by deep learning techniques, which rely on large volumes of data. Recognizing the dependency of AI on big data, the ongoing big data standards work in ISO/IEC JTC 1 was transferred to SC 42, where foundational standards on big data vocabulary and a big data reference architecture have already been published. One key aspect that has certainly been highlighted in the Commission’s field hearings so far is that the quality of decisions made by AI systems is closely correlated with the quality of the data input to the system. The scope of the standards work was therefore expanded to cover all data aspects relating to AI, and a new five-part series on data quality for ML was launched. In addition, SC 42 is developing a new AI data life cycle framework.

 

(v) Trustworthiness: AI is often viewed as a ‘black box’, and there is a general lack of understanding of how AI systems make decisions. With the wide applicability of AI across domains, the trustworthiness of AI is critical to ensuring broad adoption. This is true of all applications, and even more so in the financial services industry, an area of focus for this hearing in London. For the finance industry, trust between stakeholders is a foundational currency. The consequences of unintentional bias in AI systems can be significant, and ‘trustworthy AI’ is therefore a key requirement. SC 42 has published a suite of standards that provide an overview of emerging issues related to trustworthiness. In addition, it is developing technical standards to address these aspects, including the application of the ISO 31000 risk management framework to AI; a quality model for AI systems; quality evaluation guidelines; explainability; controllability; a transparency taxonomy; functional safety and AI systems; assessment of neural networks; and treatment of unwanted bias.

(vi) Ethical aspects and societal considerations: SC 42 addresses these concerns across the board in its deliverables – for example, ethical and societal concerns around use cases – as well as specifically, through deliverables that tie these requirements to the technical standards being developed and that provide best-practice guidance for mitigating ethical issues. SC 42 is collaborating with domain committees to develop context-specific guidance on ethics, as well as with international organizations such as the OECD, UNESCO, and the EC.

(vii) Testing of AI-based systems: SC 42 is collaborating with SC 7 (Software and systems engineering) on a standard targeted at the testing of AI-based systems. This builds on traditional standards for testing of complex systems but also addresses AI-specific concerns to further enable broad responsible adoption.

(viii) Management system standards: The unique aspects of AI technology have created a need for a methodology that covers developers and deployers of AI systems. This will help increase user confidence by providing a platform that can be used for third-party certification. SC 42 is leveraging the management systems standard (MSS) approach and has started work on ISO/IEC 42001, an AI management system standard. This could further be extended for various application domains.

(ix) Governance implications: SC 42 is collaborating with the standards committee on governance of IT systems (SC 40) and has published an international standard targeted at addressing governance implications that can arise from the use of AI. The standard is tailored for decision makers such as executives or boards looking to deploy AI technology.

Summary and Recommendations: The Commission notes in the RFI that a consensus is beginning to build among regulatory authorities in the U.S. and E.U. toward risk-based approaches to AI regulation. Examples include the AI Risk Management Framework from NIST, which is an active member of INCITS/AI, and the framework from the Organisation for Economic Co-operation and Development (OECD), which has a liaison relationship with SC 42. As the draft document of the OECD Framework notes, AI technologies bring AI-specific concerns beyond those of traditional IT systems. For example, consumers of AI products and services may lack trust in the AI supplier organization and may seek assurance that the organization addressed concerns around fairness, inclusiveness, accountability, etc. during development of the AI system. While these concerns differ in severity and consequence depending on the application area, regulators and policymakers will need to recommend mitigating measures. Several SC 42 initiatives can help policymakers in this regard and recommend actionable risk-mitigating measures for organizations. To name a couple: the AI management system standard (ISO/IEC 42001) being developed in SC 42 will contain AI-specific process requirements that allow for assessment of conformance or auditability of processes. Similarly, establishing a governance framework for artificial intelligence is an essential board-level responsibility, not limited to ensuring the effective use of AI but also encompassing risk management, regulatory compliance, and ethical usage. The joint initiative with SC 40 that I mentioned, ISO/IEC 38507 -- Information technology -- Governance of IT -- Governance implications of the use of artificial intelligence by organizations, will help decision makers such as organization boards and executive managers ask and answer key questions about AI technologies.

AI is a rapidly growing field, and requirements for additional standards continue to evolve in response to needs from industry, governments, policymakers, and regulators. As an example, SC 42 has ongoing discussions with various bodies in the EU involved in drafting the EU AI Act and has been taking their input on potential gaps in standards, creating new work item proposals to address them. SC 42 is also engaged with ISO COPOLCO and with Consumers International via a liaison arrangement to gather input on consumer concerns around AI and the standardization needs emanating therefrom. Biannual workshops are planned to engage stakeholders, share current work, and gather requirements for future work from emerging trends; the first was held last month. We also welcome input from the AI Commission on any additional standardization needs that may arise from its ongoing research and field hearings.

We encourage the Commission to emphasize the role that international AI standards can play as it proposes policy solutions aimed at ensuring the United States continues to lead in innovation while fostering fairness in the deployment of AI and addressing societal concerns. I thank you for your time and look forward to a continued dialogue.

Interested in becoming a participant and a member? Find out more about membership in the INCITS Technical Committee on Artificial Intelligence at www.incits.org, or contact Lynn Barra.

_______________________

About INCITS: The InterNational Committee for Information Technology Standards (INCITS) is the central U.S. forum dedicated to creating technology standards for the next generation of innovation. INCITS members combine their expertise to create the building blocks for globally transformative technologies. From cloud computing to communications, from transportation to health care technologies, INCITS is the place where innovation begins. INCITS is accredited by the American National Standards Institute (ANSI) and is affiliated with ITI. Visit www.incits.org to learn more.