2025-12 – December Newsletter

Welcome to the December 2025 newsletter – this is the first one sent using our CRM system, which means you can now easily unsubscribe using the link at the bottom.

Thanks for reading!

— Ken

Help Us Help You

There are three easy things you can do that would help us hit our goal of teaching over 1,000 nonprofit staff & volunteers about AI next year…

  1. If you know people who would benefit from the training, point them at our registration page. That’s right, we’re going to be offering monthly online classes!
  2. If you know anyone who works at a community foundation, please introduce them to us. We need to partner with organizations to efficiently set up training. An easy way is to have them schedule a Zoom call with me.
  3. And if you happen to be on Facebook or LinkedIn, follow our pages.

Hot Off The Press

There’s been a lot going on recently with the big three AI vendors. To start with, all of them have released significantly better models in the past month.

  • Google – Gemini 3: It has a “Deep Think” mode, where it follows multiple lines of reasoning simultaneously, for better results with complex questions.
  • OpenAI – GPT-5.1: There’s a new “Thinking” mode that dynamically thinks harder (uses more compute) when it seems useful. This should significantly reduce hallucinations.
  • Anthropic – Claude Sonnet/Opus 4.5: The focus continues to be on “agentic” AI; for example, the new Claude for Chrome extension can actually control your browser to do things for you.

In addition, Anthropic (finally) has special pricing for nonprofits, and it’s a solid deal. For Claude Team, seats are $8/person/month, though there is a minimum team size of 5 members, so the floor is 5 × $8 = $40/month. That means if you’re currently paying for two or more individual subscriptions at $20/month, it’s a win: the cost will be the same or less, and you get a Team account that facilitates sharing.

Anthropic also rolled out online training for nonprofits via their AI Fluency for nonprofits course. It’s based on work by two university professors and has videos, exercises, and a final quiz that earns you a certificate. The framework is called “4D”, which stands for Delegation, Description, Discernment, and Diligence. It’s a pretty good overview, but somehow they’ve managed to suck all of the fun and excitement out of AI 🙂

Tip Of The Month

All three of the AI services provide increasing support for personalization. For both ChatGPT and Claude, one powerful technique is to add “custom instructions” or “personal preferences”. These are essentially a chunk of text that is automatically inserted at the beginning of each new chat. For ChatGPT, I’m using:

You are an expert who double-checks things, and strives for accuracy. You are skeptical. You do research before answering, and double-check all facts. When possible, you include references to supporting documentation. If my claim seems incorrect, explain why and support your view. Point out any assumptions I am making that may be weak or unsupported. After answering, add a confidence score with a short justification.
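
(A side note for the more technical readers: if you use the API instead of the ChatGPT app, the closest equivalent to custom instructions is a system message prepended to every conversation. Here’s a minimal sketch in Python using the OpenAI SDK – the model name is just a placeholder and the helper function is my own illustration, not anything official.)

    # Minimal sketch: approximating ChatGPT "custom instructions" via the API.
    # Assumes the openai Python package is installed and OPENAI_API_KEY is set;
    # the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    CUSTOM_INSTRUCTIONS = (
        "You are an expert who double-checks things, and strives for accuracy. "
        "You are skeptical. You do research before answering, and double-check all facts. "
        "When possible, you include references to supporting documentation. "
        "If my claim seems incorrect, explain why and support your view. "
        "Point out any assumptions I am making that may be weak or unsupported. "
        "After answering, add a confidence score with a short justification."
    )

    def ask(question: str) -> str:
        # The instructions ride along as a system message at the start of each
        # new chat, which is essentially what the ChatGPT app does automatically.
        response = client.chat.completions.create(
            model="gpt-5.1",  # placeholder; use whatever model you have access to
            messages=[
                {"role": "system", "content": CUSTOM_INSTRUCTIONS},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("What should our nonprofit consider before adopting a CRM?"))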

I asked Claude to optimize these instructions, and it came back with this shorter version, which is what I’ve been using in Claude:

Prioritize accuracy: verify facts, cite sources when available, and challenge questionable claims or assumptions—mine or yours. End responses with a confidence score and brief justification.
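
(Same side note on the Anthropic side: their API takes a system parameter that plays the role of these preferences. Another minimal sketch, again with a placeholder model name.)

    # Minimal sketch: the Claude preferences text as an API system prompt.
    # Assumes the anthropic Python package is installed and ANTHROPIC_API_KEY is set;
    # the model name below is a placeholder.
    import anthropic

    client = anthropic.Anthropic()

    PREFERENCES = (
        "Prioritize accuracy: verify facts, cite sources when available, and challenge "
        "questionable claims or assumptions, mine or yours. End responses with a "
        "confidence score and brief justification."
    )

    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        system=PREFERENCES,  # plays the same role as the in-app preferences text
        messages=[{"role": "user", "content": "Summarize our annual report in three bullets."}],
    )
    print(message.content[0].text)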

With ChatGPT, you can also set a “Base style and tone”, in addition to “Custom instructions”. I went for Efficient, but will be trying out Candid as well.

Final Thoughts

In companies with significant AI adoption, there’s a new issue involving “swim lanes”. In the past, a marketing person typically wouldn’t implement a prototype, and a programmer wouldn’t propose a marketing plan. With recent AI models, both now happen: a non-programmer can do “vibe coding” on sites like Lovable to quickly turn an idea into a prototype, and a programmer can use AI to craft a pretty solid marketing plan. Which means toes get stepped on. It’s going to be interesting to watch how companies handle this. They can try to enforce swim lanes, but in doing so they hamper innovation.

Also, the “AI Bubble” is a hot news story. The gist is that so much of the US stock market and tech economy now depends on massive spending on AI, and since the returns on that investment aren’t going to materialize quickly, we’re headed for a huge correction in both the market and the economy. That might very well happen, but it won’t directly change much for people using AI. The existing models are still going to transform our work, even without any further advancements. I think the biggest impact would come if two or all three of the main players (OpenAI, Google, Anthropic) have to reduce their losses, which would likely mean significant price increases for their users.

Finally, we’ve started sending out donation letters. And we’re going very analog: we’re mailing letters with lengthy hand-written notes. The printed donation letter was crafted through multiple iterations between people and AI, but the focus is on the note. Why? In a weird way, the ease of crafting digital communication with AI means its value has dropped significantly. So to make an impact, it’s becoming more important to add a human touch, something that clearly hasn’t been semi-automated via AI.