Welcome to our January 2026 Newsletter!
I’m hoping you find these newsletters useful. If you have feedback, you can email me directly at kkrugler@aiforcommunity.org. And thanks for reading!

Help Us Help Others
There are three easy things you can do to help us hit next year’s goal of teaching over 1,000 nonprofit staff and volunteers about AI…
- If you know people who would benefit from the training, point them at our registration page. That’s right, we’re offering monthly online classes! And the next one is January 23rd, so right around the corner.
- If you know anyone who works at a community foundation, please introduce them to us. We need to partner with organizations to efficiently set up training. An easy way is to have them schedule a Zoom call with me.
- And if you happen to be on Facebook or LinkedIn, follow our pages.
Hot Off the Press
As always, there are many things I could mention – here are two…
- Claude’s Chrome Extension. I’ve been using this extensively over the past week…it can be amazing! But like much of AI, it has jagged edges. When it works, though, it’s a game changer. For example, I used it to process the 200+ filtered grant opportunities in Instrumentl that I hadn’t yet reviewed, and (based on my nonprofit’s goals and the 16 grants I had already flagged) it recommended a top 10, with justifications for each. This saved me 10+ hours of painful, tedious work! I’ve included this as an AI Example on our website. And that’s just one of four very useful things it’s done for me recently.
- With Google’s release of Gemini 3 Pro (something I should have mentioned in the previous newsletter), you can now provide almost 1 million tokens (roughly 750,000 words) of context. This means you could feed it roughly 1,500 pages of documents, and Gemini would be able to use all that data effectively when generating results.
Tip of the Month
Something I talk about in training is how useful free-form text fields in survey forms can be. AI can now do a reasonable job of analyzing this text and extracting insights, whereas in the past nobody wanted to slog through reading a pile of responses, which is why surveys were all yes/no questions or ratings on a scale of 1 to 5.
In a similar manner, it’s now possible to have AI extract useful insights from archival data. For example, in scientific research you can ask AI to find examples of older research results where a next step was NOT taken due to computational constraints. When you find examples of those, it’s fruitful ground for revisiting that research, given the high probability that the compute constraint no longer exists.
To bring that point home: for a nonprofit, giving AI access to your older data (status reports, board agendas, financials, etc.) can provide very useful context. Sometimes we lose track of the big picture, or a big win, when we’re down in the weeds handling day-to-day operations. AI can help re-surface those ideas and help us incorporate them into a longer-term, strategic vision.
So what does this kind of prompt look like? As an example, Ethan Mollick asked Gemini 3 Deep Think, ChatGPT Pro & Claude the following: “What is the single best investment equivalent in spending $1000 that I could make if I time traveled back to any destination circa 1300?”
The results were surprising…
- Gemini: pay a scribe for a copy of the Magna Carta and store it at Durham Cathedral
- ChatGPT: Buy a documented ownership share in the Great Copper Mountain at Falun (Stora Kopparberg), Sweden
- Claude: Contribute to an established Islamic waqf endowment, specifically to a major educational or charitable institution like Al-Azhar in Cairo or the University of al-Qarawiyyin in Fez.
Which means you could ask your favorite AI (the paid version, so it thinks deeply) what single investment at your nonprofit would contribute most meaningfully to your long-term vision. This of course assumes you’ve also given the AI your strategic plan, mission statement, financials, and other context, so that it can come up with a reasonable response.
Final Thoughts
Everyone feels like they are falling behind in AI, including me. But in reality, everyone is figuring it out, day by day. Your most important skill when trying to surf the AI wave is curiosity. Don’t worry about everyone telling you that if you aren’t prompting like this, you’re a loser. Just keep exploring ways AI can help, keep trying new things, and ignore the hype machine.
And finally, someone from one of our classes sent me a post where the author’s position on current AI is that “It’s not really AI, it’s just predicting words”. This is funny to me for two reasons. First, when I’m talking to someone, I’m not really sure what my brain is doing other than predicting words. If I start thinking too hard about what word I’m about to say next, it often doesn’t go well. So most of the time I don’t really feel like I’m thinking about what I’m about to say; I’m just saying it.
The second reason is that many years ago, my AI professor at MIT defined AI as “whatever a computer can’t do yet”. I think he was a bit frustrated by how the goalposts kept moving: whenever software was able to solve some problem, like playing chess well, there would be a new definition of AI, like “It has to be able to beat the world’s best chess player”. Then that happened, and it became “Pass the Turing test”. Which GPT-4.5 did last year (outperforming real people), but now “AI researchers argue the test is flawed or that current LLMs haven’t truly passed it in a way that signifies genuine intelligence.” Sigh.