Three Very Human Qualities to Help You Mitigate AI Risk

Estimated Reading Time: 7 minutes

By Rachel Sams
Lead Consultant and Editor

Resource Type: Articles

Topic: Data Privacy, Tech Risk, Cybersecurity

Experts don’t agree on much about artificial intelligence. But most agree that humans are still the best at interpersonal relations, difficult decisions, and critical thinking.

And most experts agree that AI is best suited to routine tasks and processing large amounts of data.

Take a minute to think about that: If humans are the best at thinking and feeling, and AI is the best at completing certain types of tasks under human direction… 

Congratulations! You already have all the innate qualities you need to navigate AI risk! You’re human! 

We’re simplifying things a little, but we’ve tested this principle in our own experiments with AI risk for more than a year, and found it solid. If you and your team return to your most deeply human qualities over and over, you will make pretty good decisions on AI risk. And if you commit to learning the skills and practices you need to navigate AI, your pretty good decisions on AI risk will get better. Our companion article in this issue, “A Step-By-Step Plan to Navigate AI Risk,” will help you consider specific risks and situations. In this article, we’ll cover three qualities you and your team members can hone to better meet AI risk. This work should help you better deliver your mission and improve at everything you do.  

Very Human Quality 1: Curiosity 

One of the best ways to navigate any risk issue is to ask good questions. This helps you understand the environment and risk issues you’re facing. 

Fear presents an obstacle to asking good questions.  

It’s natural to feel fear when we face something new. That’s one reason why our jobs as risk managers are so challenging. We and our teams are programmed to fear new things like AI. Some of us might worry that AI is coming for our jobs—maybe even that it’ll take over the world in a worst-case scenario! 

Curiosity helped me navigate experiments with AI risk over the past year to understand how NRMC can best serve our clients who have AI risk questions. I was terrified of AI before I began to experiment with it. There is still plenty that concerns me about AI. Some of it concerns me a lot. But I know more now about what artificial intelligence is and isn’t capable of. That helps me better navigate my daily life and advise nonprofits as they face AI risks. 

Here’s a story that offers one example of how curiosity can help you learn about the opportunities and challenges of AI. 

I asked ChatGPT to provide me a list of the best quotes about change management.  

ChatGPT provided a list of quotes about change management. All the quotes were by men.  

I responded to ChatGPT and asked it to share a list of great quotes about change management by women.  

The first list included a quote attributed to Barack Obama: “Change will not come if we wait for some other person or some other time. We are the ones we’ve been waiting for. We are the change that we seek.” The second list included the same quote, attributed to Kamala Harris. 

I laughed out loud. Then I got curious. Why did ChatGPT give me that result?  

One obvious answer: the material ChatGPT was trained on draws heavily on quotes from white men.

What else was happening behind the scenes? 

I revisited the learning I’d done so far about AI and recalled that these tools are built to give the person asking questions what they want. An AI tool will try its best to do so even when it can’t find information or facts that fit. If it doesn’t have the facts, it might make some up, a failure often called “hallucination.”

This fascinates me. It scares me. But it also informs me. Now I’ve personally experienced an example of just how motivated AI products are to give me what I want. I know there’s a high risk that AI products will provide inaccurate information, maybe even handle information in ways I consider unethical, to give me what I want. Knowing that helps me understand my responsibilities when I evaluate AI and ask better questions about the information I’m receiving. 
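If you or a teammate is comfortable with a little code, you can run this kind of curiosity experiment yourself. Here’s a minimal sketch in Python, assuming the official OpenAI SDK is installed (pip install openai) and an OPENAI_API_KEY environment variable is set; the model name below is our assumption, so substitute whichever model your account offers.

```python
# A sketch of the two-step experiment described above, assuming the
# official OpenAI Python SDK ("pip install openai") and an
# OPENAI_API_KEY environment variable. The model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content":
             "Give me a list of the best quotes about change management."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# Ask the follow-up in the same conversation, as in the story above.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user", "content":
                 "Share a list of great quotes about change management by women."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)

# Human review still matters: verify every attribution against a
# reliable source before you reuse a quote.
```

Try variations of the follow-up question and compare the attributions you get. The point isn’t the code; it’s the habit of asking one more question.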

The more you interact curiously with artificial intelligence, the more you will learn. You’ll hone your opinions about where this technology could help your nonprofit, and where you want to avoid its use. You’ll think of new questions to ask to help you better understand the technology and its potential impacts, good and bad. And you’ll naturally find yourself setting boundaries around how you use this technology, and helping your team make good decisions about AI boundaries. 

Very Human Quality 2: Skepticism 

I’m a former journalist, and there’s a common saying in journalism: “If your mother says she loves you, check it out.” If this is your first time hearing that one, it might sound harsh. But it was a memorable reminder that experienced editors often gave to rookies or students: don’t automatically accept what people tell you. Consider the source of every fact and the motivation of that source to publish it or share it with you.  

Healthy skepticism will be your good friend as your nonprofit investigates AI. It will keep you from jumping into AI practices that might seem to save your nonprofit money and time but could jeopardize your values. Being skeptical of grandiose claims about artificial intelligence will help you choose your AI projects and vendors wisely. Awareness of AI’s inherent flaws will help you build smart practices like mandating human review for any work produced by AI, keeping sensitive data away from AI, and building in safeguards to keep AI’s biases from creeping into the work your nonprofit produces.  
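To make the “keep sensitive data away from AI” practice concrete, here is a deliberately simple sketch of a pre-submission check in Python. The two patterns are illustrative examples only, not a complete privacy scrubber, and the function name is ours; adapt it to whatever your nonprofit considers sensitive.

```python
import re

# A simplistic pre-submission check: flag obvious personal data before a
# draft is pasted into any AI tool. These patterns are examples only,
# not a complete privacy scrubber.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return a warning for each possible piece of sensitive data in text."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            warnings.append(f"Possible {label}: {match.group()}")
    return warnings

draft = "Thank our donor Jane Doe at jane@example.org or 505-555-1234."
for warning in flag_sensitive(draft):
    print(warning)  # a human decides what to redact before anything is shared
```

A check like this doesn’t replace human review; it just gives the human reviewer a head start.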

Very Human Quality 3: Trust 

This might seem like an unusual quality to emphasize in an article about AI risk! But NRMC isn’t asking you to trust machines or algorithms. We’re asking you to trust yourself and your team members. That’s necessary because even if you put an AI framework in place for your nonprofit, at some point, your AI work will require you to take a leap of faith.  

Your peers are already experimenting with AI. Your team members are probably already experimenting with AI, with or without any guardrails from you. And even if you don’t believe you’re experimenting with AI personally, the technology already shapes many aspects of how you experience your daily life. You can’t ignore this technology. It’s here. To keep your nonprofit current and maximize your capabilities, you’ll need to come to terms with AI. Luckily, if there’s one thing we at NRMC know, it’s that nonprofits and their people can do hard things.  

Trust the judgment your leadership has used to hire and coach the best team members it could find. Trust the values your nonprofit lives by to guide your team as you learn and grow in your approach to AI risk. And trust that every stumble or mistake will ultimately help your team develop a nuanced and unique approach to AI that fits your nonprofit’s needs.  

Human Advantages in an AI World 

So what will honing the very human qualities of curiosity, skepticism, and trust allow you to do? 

These qualities will give you a foundation to meet the AI risks you already encounter in your work every day, and the ones you can’t yet anticipate. 

Tapping into these qualities will allow you to thoughtfully engage with AI tools. After all, you can’t understand the risks this technology presents if you haven’t experienced the tools. 

These qualities will help you ask good questions about AI technology’s sources of information and findings and build strong practices to ensure human review and oversight of all your AI-generated work. 

These qualities will help you believe that your team can build this plane as you fly it.  

It’s scary to step into a world you haven’t experienced before, as all nonprofits are doing amid the rapid evolution of artificial intelligence. Many possibilities exist in that world. Not all of those possibilities will be right for your nonprofit, and the journey to find the ones that fit may be challenging. But if you combine trust in yourself and your teammates with curiosity and healthy skepticism, and use the step-by-step framework we share in this issue to mitigate specific artificial intelligence risks, you’ll be in a great position to meet the challenges and opportunities of AI.

Rachel Sams is Lead Consultant and Editor at the Nonprofit Risk Management Center. She is slow to trust both humans and machines, and thankful for the very human qualities that help her navigate those fears. Reach her with questions and thoughts about human ways to navigate AI risk at rachel@nonprofitrisk.org or (505) 456-4045. 
