There is No Time Like the Present: AI Reckoning

Estimated Reading Time: 2 minutes

By Melanie Lockwood Herman

Executive Director

Resource Type: Risk eNews



(This article is adapted from a segment in our risk forecast for 2026: Manage Risk With Intention This Year. The full paper is available here.)

During a recent AI Risk Assessment led by NRMC, we conducted a simple survey to uncover the ways artificial intelligence was supporting and guiding teams. We began the project suspecting that although AI use had been neither encouraged nor formally sanctioned, staff across the organization were likely already using AI in interesting ways. The survey results confirmed that hunch.

In an article published by The Bridgespan Group last summer, the authors contrast the differing risks and value of internal versus external AI use. They invite readers to prioritize six dimensions (access to data/privacy, access to data/retraining, outcome fairness/bias, testing and quality assurance, informed consent, and dependency/continuity risk) in both use cases, noting that “outcome fairness may be a priority when it comes to externally facing AI efforts,” while efficiency may be paramount for an internal use, such as summarizing highlights of an internal meeting. The article includes thought-provoking questions for assessing the risks of an AI project along the six dimensions.

AI Readiness Roadmap

NRMC encourages nonprofit teams to enthusiastically explore AI’s powerful potential while taking careful measures to address its significant downside risks. Nonprofit teams that commit to reckoning with AI adoption, across the full spectrum of exciting to concerning possibilities, will be in the strongest position to navigate the disruptions, surprises, and opportunities that lie ahead.

To do so, ask these questions:

  • Do we know how and why staff—at all levels and across dispersed teams—are already using AI? If not, how can we efficiently uncover that information with an open, learning mindset?
  • What blue-sky potential uses of AI can we envision?
  • What are the pressure points AI tools could relieve? What mission-critical tasks or activities are manual and repetitive? What inconsistencies or errors could be detected using an AI tool?
  • What tasks or activities would be strengthened with more time on oversight and human review, and less time on manual processing?
  • What surprising or exciting uses of AI emerged from our staff survey?
  • What guardrails should we consider to manage the risks of AI use: privacy violations and confidentiality breaches; bias and harm to constituents; loss of trust from key supporters; overreliance on automation that weakens professional judgment and accountability; and vendor dependence and hidden costs?

Further reading and resources:

Three Very Human Qualities to Help You Manage AI Risk

A Step-by-Step Framework to Mitigate AI Risk

Hype vs. Benefit: A Nonprofit Tech Leader’s Perspective on AI

AI Resources Guide for Nonprofits, Independent Sector

AI Can’t Be Ignored: Exploring the Opportunities for Nonprofits and the Social Sector, The Bridgespan Group
