Throughout human history, technology has been one of the biggest drivers of landmark change. However, these technologies often brought along their ‘growing pains’ on the way to adoption maturity. Even the steam engine and electricity, two of the essential forces of the Second Industrial Revolution, saw slow adoption in the beginning. It is thus no surprise that Artificial Intelligence/Machine Learning (AI/ML), a genre of emerging technologies that is an important component of the universe of Machine Intelligence (MI) and a key driver of the current Industry 5.0, also brings adoption challenges with it.
With artificial intelligence, we have at our disposal an instrument of great transformational power, which must also be used responsibly. AI is norm-neutral, but at MI4People our goals are charitable. We therefore always want to check our actions and efforts against ethical standards and potentially also help other NPOs use AI ethically. That is why, last month, we launched an AI Ethics Committee as part of MI4People's initiatives.
The AI Ethics Committee is intended to open MI4People to non-AI professionals and to foster interdisciplinary discussion within our organization on the topic of AI and ethics. Our activities are for the common good, and they involve many ethical issues to which lawyers, medical professionals, AI researchers, and representatives of other disciplines can make valuable contributions, helping the development of organizations such as ours. The key goals of our AI Ethics Committee are:
Promote ethical discussion of AI applications and, where appropriate, of limitations on AI development
Support the continued development of AI research at MI4People by setting an ethical framework
Further develop MI4People's goals and project plans through discussions of public-good orientation from different perspectives (medicine, law, the social sciences, etc.)
We are also looking for further support for this committee. If you would like to contribute or have any questions, please contact Dr. Christian Herles, Yonas Maximilian Thiele, or Dr. Paul Springer, who initiated this committee at MI4People, or write to us at email@example.com.
Unlike a typical newsletter, we devote this issue to a short essay on the major hindrances to AI adoption in the nonprofit sector, with a focus on challenges arising from the lack of ethical AI use, and conclude with a high-level overview of government and industry actions to ensure AI serves a ‘good society’.
Enjoy the discourse, and chime in with your opinions and ideas on removing the barriers to AI adoption in the nonprofit sector, especially those related to Ethical AI.
Potential Benefits of AI for Nonprofit Sector Organizations
According to many analyst reports, the current AI market is estimated at $300–400 billion and is expected to cross a trillion dollars in less than a decade. The downstream benefits of AI are even bigger. This massive AI adoption and rapid growth are happening almost entirely in the commercial sector, with a small share in government institutions and practically none in the nonprofit sector. However, just as in the commercial sector, in the realm of public good AI and MI can help tackle many challenges such as poverty, hunger, environmental protection, healthcare, wildlife conservation, etc. (see more examples on our webpage).
There are many studies and analysis reports that identify AI adoption opportunities in the nonprofit sector (see figure above; source: McKinsey). For example, a November 2018 discussion article from McKinsey titled “Applying Artificial Intelligence (AI) for Social Good” identifies around 160 use cases for the application of AI for public good. A 2019 National Health Service (NHS, UK) report titled “Artificial Intelligence – How to Get it Right” cites around 30 case studies of AI applications in health and care, spanning topical areas like precision medicine, genomics, image recognition, and operational efficiency. And there are plenty of other reports of a similar nature.
Adoption Obstacles of AI for Nonprofit Sector Organizations
It is shocking that, despite such high potential, AI's penetration in the nonprofit sector remains abysmally small (low single-digit percentages). Three main reasons for this very low uptake of AI by nonprofits, whose organization sizes and budgets are usually very small, are:
Lack of knowledge of AI capabilities and inability to identify high-value use cases
Inability to attract suitable talent, as competent AI professionals are far more drawn to promising commercial startups or large enterprises
The often very conservative and risk-averse stance of typical nonprofit management
In addition, challenges arising from non-ethical AI can lead to all sorts of risks and thus become a key obstacle to AI adoption in the nonprofit sector.
Specific Challenges Arising from Non-Ethical AI
Ethics are the moral principles that guide our behaviors and actions. AI-infused applications and AI-powered decision-making can (and do) heavily influence and alter our actions and behavior. There are many published reports of ill-conceived AI usage: for example, AI-based expert systems producing wrong medical diagnoses and remedies; breaches of basic human privacy caused by AI-based processing of private data; use of AI-derived intelligence to create manipulation strategies during government elections; race and gender discrimination by AI-based decisioning during hiring or promotion processes; bypassing poorer or disadvantaged customers through AI-based customer segmentation; and even a case of AI-based interactive toys for children that wrongfully influenced children's behavior. Other, less obvious non-ethical usages of AI include inefficient AI algorithms that consume computing power unnecessarily, adding to carbon emissions and negatively impacting sustainability.
The consequences of non-ethical AI use, besides being morally shameful, include all sorts of legal and criminal risks for organizations, leading to loss of trust and reputation, financial penalties, and other legal sanctions. While large enterprises have the muscle to counter lawsuits and indictments, small nonprofit organizations, with very limited funds and complete dependency on public trust and a good image, perceive AI adoption in general as a highly risky undertaking. Of course, with the right expert help, for example from NPOs like MI4People, many of these AI-risk-fearing nonprofits would be able to find suitable and safe AI usages.
Governmental and Industry Actions to Enforce Ethical AI Practices
To emphasize and enforce ethical AI usage, both governmental and industry involvement are essential. The guiding principles for ethical AI that ultimately make their way into legislation appear to focus broadly on safety, reliability, transparency, fairness, equality, explainability, and value to society. Since 2017, governments in over 60 countries worldwide have signed up to elevate the importance of regulation around ethical AI. There are also inter-country and inter-continental collaborations between government agencies on this topic, for example the recent inclusion of AI regulation topics on the EU–US Trade and Technology Council (TTC) agenda. Alongside these inter-governmental efforts, international bodies like the UN, the OECD, the IEEE, and others have set up expert groups to observe, research, and make broadly applicable recommendations on ethical AI practices that influence regulators, AI application builders, and the public.
In the last few years, the US has seen many government-sponsored activities to study AI ethics, some of them conducted under the recent Artificial Intelligence Act (AIA). A few US government agencies are already putting in place their own ethical AI practices, for example the US Department of Defense (DoD), and such actions are expected to proliferate. Just this month, the White House unveiled an “AI Bill of Rights” aimed at protecting citizens' interests against wrongdoing by or via AI technologies.
In Europe, there are many existing and evolving regulations at the EU level and in the EU member states that can already impose a certain level of discipline on AI technology. A notable one dedicated to AI ethics, however, is the European Commission's ‘Regulation on Artificial Intelligence (the EU AI Act)’, released in April 2021 and, after gathering reviews and recommendations from member states, experts, and concerned citizens, expected to be signed into law in early 2023. The EU AI Act uses a ‘risk-based’ hierarchical structure to classify AI products, services, and usage (as shown in the figure above). This risk-based structure seems a logical and rational way to design the restrictions and penalties associated with a particular AI use case.
China is currently one of the top countries in AI development and use, with the ambition of becoming the world leader in AI by 2030. The Chinese government's position is that “AI in China will remain under meaningful human control”, according to the country's first set of rules governing every aspect of the emerging technology, from research to supply and implementation. In September of last year, the Chinese government unveiled its “Ethical Norms for the New Generation Artificial Intelligence”, with the goal of building ‘Beneficial AI for Human and Ecology Good’.
While governments are getting more active in legislating AI, private-sector companies are also mounting their own efforts to pursue ethical AI. For example, practically all the high-tech companies with massive global reach, Google, Apple, Microsoft, Meta, Alibaba, and the like, have publicly stated ethical AI policies that are largely aligned with the ethical AI focus areas discussed thus far. These days, most companies that build or use AI technology also train their employees in the ethical use of AI.
AI is very important to us, and so is its ethical use. No matter how powerful its capabilities are (or can be) and no matter what great potential it holds, AI is something that ‘we’, the humans, have created and will continue to advance. Thus, it is ultimately our values, behavior, and intentions that will ensure ethical AI usage and implementations that deliver real value to society.