What Nonprofits Can Learn from the AI Hype Cycle
Key Takeaways from AI for Nonprofits 101 with Rich Leimsider
Recently, I watched an online event called AI for Nonprofits 101 hosted by the OpenAI Academy, featuring Rich Leimsider. Rich is the Director of the AI for Nonprofits Sprint, Entrepreneur-in-Residence at FCNY, and someone who has worked hands-on with thousands of nonprofit staff to help them use AI tools meaningfully.
Beyond the buzzwords, Rich offered a lot of grounded, practical advice on what AI actually means for nonprofit work today and how to use it smartly.
Let’s dive into the key takeaways.
Key Reflections
“Tech hype always overpromises before it normalizes; AI could follow the same pattern.”
Rich kicked things off by reminding us of the classic tech hype cycle:
A new technology drops, and the people selling it hype it to the skies ("this will change everything!").
Then comes the backlash ("this will destroy society!").
Eventually, things settle somewhere in the middle, not exactly world-ending, not world-saving, just... useful.
It’s a good mental model for thinking about all the extreme headlines around AI right now. Will AI solve every problem? No. Will it destroy humanity tomorrow? Also no. The truth is likely somewhere far less exciting, but a lot more manageable if we stay thoughtful.
“Incremental AI > Exponential AI for most nonprofits.”
Rich introduced a really helpful distinction between Exponential AI and Incremental AI:
Exponential AI = big “change everything” projects (think: self-driving cars, new drug discovery, robot suicide hotlines). These require massive resources and specialist expertise, and are still mostly in the research phase.
Incremental AI = modest, practical goals using existing tools. Think automating writing tasks, summarizing long reports, or prepping for a hard conversation.
For nonprofits, Incremental AI is the sweet spot. It’s low-risk, low-cost, and can immediately free up hours of valuable staff time.
“Reading, writing, and chatting are the three skills to start with.”
Rich made it simple: you don’t need to master machine learning to start using AI well. Instead, focus on three practical skills:
Reading: Summarize long articles, reports, even YouTube videos.
Writing: Draft bios, resumes, grant applications, event descriptions.
Chatting: Role-play difficult conversations (like donor asks or performance reviews) to prepare and practice (see the sketch after this list).
These are high-impact, real-world uses, not sci-fi experiments.
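If you (or someone on your team) are comfortable with a little scripting, the same role-play idea can be driven through an API instead of the chat window. The snippet below is a minimal sketch, not Rich's method: it assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable, and the donor persona and opening line are made up for illustration.

```python
# Minimal role-play sketch: practice a donor ask against a simulated donor.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # one of the models mentioned below; swap in whatever you use
    messages=[
        {
            "role": "system",
            "content": (
                "You are a long-time donor to a small food bank. "
                "You are friendly but budget-conscious and skeptical of vague asks. "
                "Stay in character and push back with realistic questions."
            ),
        },
        {
            "role": "user",
            "content": "Hi! I'd like to talk about increasing your annual gift this year.",
        },
    ],
)

print(response.choices[0].message.content)  # the simulated donor's reply
```

The same practice session works just as well by typing the persona into ChatGPT directly; the script only matters if you want to reuse or share the setup.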
“The best public models for nonprofits are already here.”
If you want to start, some of the best public generative AI models for incremental tasks today include:
ChatGPT 4.5 / 4o (OpenAI)
Claude Sonnet (Anthropic)
Microsoft Copilot (Microsoft / OpenAI — heavily used across nonprofits already)
Gemini Advanced (Google)
And more are emerging from companies like Meta, DeepSeek, and others.
“Many nonprofit staff are ‘secret cyborgs’, and that’s a missed opportunity.”
One of the most interesting (and funny) ideas Rich shared was the concept of “secret cyborgs” (with credit to Prof. Ethan Mollick).
Here’s the reality:
Many staff are already using AI.
Some are actually really good at it.
But they’re often doing it quietly, without guidance, and worried about getting into trouble.
That’s risky, because without open conversations, staff might use AI in ways that aren’t aligned with data privacy rules, or they might miss opportunities to share what they’re learning.
Creating a culture where safe experimentation is encouraged can unlock a lot of hidden value.
“Staff have real concerns and incremental use can help ease most of them.”
From Rich’s survey of 2,399 nonprofit staff across 366 organizations:
89% had heard of AI tools.
61% had used them.
Only 14.5% had paid for AI tools.
When asked about expertise levels, the majority rated themselves at a 1 or 2 out of 5, showing there’s a lot of room (and appetite) for growth.
Top concerns included:
Errors (“hallucinations”)
Bias
Data privacy
Loss of skills
Job displacement
Environmental impact
Starting with small, incremental AI tasks like writing drafts or summarizing reports can help build comfort and trust without overwhelming anyone.
“Safe AI use starts with clear guidelines.”
Rich shared a simple Placeholder AI Policy nonprofits can adapt:
AI use and experimentation are encouraged.
Don’t enter personal information about clients or students.
You are responsible for the quality of your AI-supported work.
Follow existing data privacy policies.
Disclose AI-supported work internally. Good work will be praised, not criticized.
Use judgment about sharing AI outputs externally.
A little structure can make a huge difference in creating a safe, thoughtful AI culture.
“The aiSPIRE model makes working with AI easier.”
Finally, Rich introduced a simple model for getting better AI results:
Simple: Start with plain English prompts.
Persona: Assign the AI a role (editor, job coach, etc.).
Information: Give context by pasting in text, uploading documents, and so on.
Refine: Ask AI to improve your prompt.
Encourage: Treat it like a smart but inexperienced intern. Be clear, give feedback, iterate.
It’s not about writing “perfect” prompts — it’s about having a clear, structured conversation.
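For readers who like to see the pieces laid out, here is one way the aiSPIRE steps might translate into a structured request through an API. This is a hypothetical sketch of my own, not Rich's template: it again assumes the `openai` Python package and an API key, and the persona, context, and draft text are placeholders.

```python
# Rough sketch of an aiSPIRE-style workflow, assembled step by step.
# Hypothetical example; assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

persona = "You are a patient grant-writing coach for a small nonprofit."            # Persona
information = "Context: a two-page draft letter of intent for a youth arts program."  # Information
task = "In plain English, list the three biggest weaknesses in the draft."           # Simple

draft_letter = "...paste the draft text here..."  # placeholder for your own document

# Refine: ask the model to improve the prompt itself before doing the work.
refine = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": f"Rewrite this prompt to be clearer and more specific:\n{task}"}
    ],
)
better_task = refine.choices[0].message.content

# Encourage: treat it like a smart but inexperienced intern; run the improved
# prompt, read the answer, then follow up with feedback and iterate.
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": f"{information}\n\n{better_task}\n\n{draft_letter}"},
    ],
)
print(answer.choices[0].message.content)
```

Again, none of this requires code: the same persona, context, refine, and follow-up steps work typed straight into a chat window.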
Final Thoughts
This event was a great reminder that AI doesn't have to be overwhelming or all-or-nothing.
Especially for nonprofits, where budgets and staff time are stretched thin, incremental AI can quietly, meaningfully help. It's about reclaiming a few hours here, reducing burnout there, making the impossible a little more manageable.
See you on the other side.
In writing this article, I've drawn inspiration from readings, conversations, and tools that explore AI's potential for good.