By Joel Li, FCIA, Rolly Molisho, ACIA, and Harrison Jones, members of the CIA’s Committee on Predictive Modelling
An introduction to AI ethics and regulation
Artificial intelligence (AI) is a large domain that is constantly evolving through new and innovative research. It is defined as the theory and development of systems able to perform tasks that normally require human intelligence. Within the field of AI lie machine learning and other subfields. For a complete introduction to these ideas, see “More the same than different: Data science and its relationship with actuarial science.”
With the increasing use of AI within both the private and public sectors, regulations and laws are being proposed and passed across the world. Notably within Canada, the Consumer Privacy Protection Act (CPPA) has been tabled in the House of Commons. The CPPA and other regulations and laws address the accountability of organizations for the appropriate collection and use of data, implementation and transparency of algorithms, and dispute resolution processes. Penalties for organizations that violate these regulations are significant. The CPPA has a proposed penalty of the greater of 5% of global revenue or $25 million.
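The penalty rule described above, the greater of 5% of global revenue or $25 million, can be sketched in a few lines. This is an illustrative formula only, based on the proposal as summarized here; the function name and currency handling are assumptions for the example.

```python
def cppa_max_penalty(global_revenue: float) -> float:
    """Illustrative cap on the proposed CPPA penalty:
    the greater of 5% of global revenue or $25 million."""
    return max(0.05 * global_revenue, 25_000_000)

# A firm with $1B in global revenue faces a cap of $50M,
# while a smaller firm with $100M in revenue still faces the $25M floor.
large_firm_cap = cppa_max_penalty(1_000_000_000)
small_firm_cap = cppa_max_penalty(100_000_000)
```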
Beyond the expected regulatory burden facing companies that use AI, there is an ethical component as well. The algorithms that insurers use have an impact on consumers and shareholders. Pricing algorithms, for example, determine how much consumers pay for insurance. These algorithms could be unfairly discriminatory if they are biased against certain demographics in a way that, under traditional methods, would be considered an unacceptable form of discrimination. Insurers have historically built their business on the idea of “fair discrimination,” pricing policyholders based on variables that appropriately differentiate their risk. Insurance companies that choose to use AI algorithms are now jumping headfirst into a situation that could be deemed unethical.
The role of actuaries in AI ethics and regulation
There is no debate: companies that choose to leverage AI will need to adhere to these standards, both regulatory and ethical. There are many roles within an insurance company that deal with AI, and actuaries have a strong case to lead these initiatives due to:
- existing actuarial standards of practice that can be applied directly to the build/implementation of AI algorithms;
- a unique combination of technical insurance and business knowledge; and
- professional judgment already used as an input into an actuary’s day-to-day work.
Bias and discrimination in AI algorithms
AI algorithms are based on procedures and processes that are formulaic in nature and seek to optimize predictive performance. However, since they cannot reason on their own, the algorithms typically overlook the constraint of fairness and can inadvertently produce unacceptably biased predictions (i.e., predictions that favour certain groups over others on the basis of preconceived notions, as opposed to an impartial evaluation of facts). These discriminatory predictions are often due to inherent bias in the underlying data and/or the manner in which decisions have been made historically.
To illustrate the concept, consider an example from the banking industry. In credit scoring, legislation prohibits discrimination on certain grounds (e.g., age, gender, race). While such variables are typically excluded, other variables (or combinations of variables) fed into scoring algorithms may be highly correlated with the prohibited ones (e.g., postal codes serving as a proxy for race). In addition, proprietary scoring algorithms may inadvertently discriminate against certain population groups by failing to consider certain data sources. For example, due to reporting limitations, rent and utility payments are not always included in payment history, a key driver of the credit score that typically reflects payment behaviour on mortgages and other types of credit. This makes it difficult for certain groups to demonstrate their creditworthiness and build their credit score. Failure to account for these data correlations and limitations can amplify or perpetuate restrictions on access to credit for historically marginalized population groups.
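The proxy mechanism above can be made concrete with a minimal sketch. The data, scoring rule, and threshold below are entirely synthetic assumptions constructed for illustration: the model never sees the protected attribute, yet because the “postal region” proxy is correlated with it, approval rates differ sharply between groups.

```python
# Illustrative sketch (synthetic data): a scoring rule that excludes the
# protected attribute can still discriminate through a correlated proxy.
from collections import defaultdict

# Synthetic applicants: (protected_group, postal_region, income).
# Region "A" is strongly correlated with group_1 by construction.
applicants = [
    ("group_0", "A", 55), ("group_0", "B", 60), ("group_0", "B", 70),
    ("group_0", "B", 65), ("group_1", "A", 60), ("group_1", "A", 70),
    ("group_1", "A", 65), ("group_1", "B", 55),
]

def score(postal_region: str, income: float) -> float:
    """Scoring rule that never sees the protected attribute,
    but penalizes region "A" -- the correlated proxy."""
    penalty = 20 if postal_region == "A" else 0
    return income - penalty

def approval_rates(threshold: float = 50) -> dict:
    """Approval rate per protected group -- a simple demographic
    parity check on the scoring rule's decisions."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, region, income in applicants:
        total[group] += 1
        if score(region, income) >= threshold:
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

rates = approval_rates()
# group_0 is approved at a higher rate than group_1, even though
# the protected attribute never enters the scoring function.
```

Checking for this kind of disparity, by comparing outcome rates across groups the model is not supposed to disadvantage, is one of the simplest fairness diagnostics that can be applied at the model validation stage.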
The example can easily be extended to underwriting and pricing applications in the insurance industry as well, where concerns around fair access to affordable coverage need to be considered. In insurance pricing, this means going beyond the traditional requirement for rates to be “actuarially justifiable,” evaluating the socio-economic impact and cost of higher rates on certain groups of individuals, and asking whether the rates are “socially justifiable.” Given their sound technical expertise and business acumen, as well as the emphasis of the profession on serving the public interest, actuaries are uniquely positioned to play a leading role in ensuring that ethical constraints are considered at all stages of the AI lifecycle.
Actuaries have historically played a dual role at insurance companies: maintaining technical proficiency in topics such as pricing, reserving, and capital modelling, while simultaneously considering the business impacts of key decisions. AI algorithms present a new theoretical approach to common insurance questions; however, the manner in which they are deployed is one with which actuaries have significant familiarity. Actuaries can therefore act as leaders in the field of AI algorithms and help guide insurance companies into a highly technical, but exciting, field of research.