2026-03 – March Newsletter

Updates on AI for Community

We have many online and in-person classes and meetups coming up! There are eight events still to come in March and April – see https://aiforcommunity.org/events for a complete list.

  • The next two-day online AI Essentials class is tomorrow, March 18th, with another on April 14th. In-person classes are March 24th in Grass Valley and March 31st in Auburn.
  • The new AI In Depth classes start online April 7th, followed by in-person sessions in Auburn on April 21st and Grass Valley on April 28th. This new course shows you how to really leverage AI via customizations, agents, and deep research, while staying safe and responsible with effective AI guidelines for your nonprofit.
  • We continue to have regular online meetups. The last one was March 12th, where we talked about AI guidelines. The next one is April 16th at 10am PDT, where we’ll be talking about how everyone is a maker (aka “vibe coding”: creating custom web apps using plain English).
  • If you know anyone who works at a community foundation or large nonprofit, please introduce them to us. We need to partner with organizations to efficiently set up training. An easy way is to have them schedule a Zoom call with me.

Hot Off the Press

The latest scoop…

  • Google released Gemini 3.1 Pro, which is a significant upgrade. On the ARC-AGI-2 benchmark, which tests how well an AI handles completely new logic problems it hasn’t seen before, Gemini 3.1 Pro scored 77.1%, compared to the previous version’s 31.1%. If you’re using the paid version of Gemini, you should notice meaningfully better results on complex questions.
  • Anthropic got into a notable standoff with the Pentagon. The Department of Defense wanted to use Claude for weapons-targeting work, and Anthropic said no, as their terms of service prohibit using Claude to harm people. What’s interesting is the follow-up: the DoD then turned to OpenAI with the same request, and OpenAI accepted, but only after agreeing to the same content restrictions Anthropic had insisted on. So in a roundabout way, Anthropic’s position shaped how OpenAI engaged as well.
  • Apple is making a very different bet than everyone else. While Amazon, Alphabet, Meta, and Microsoft are collectively spending around $700 billion on AI infrastructure, Apple is investing just $14 billion. Their reasoning: AI models will eventually commoditize and get smaller, and the advantage will go to whoever owns the device in the customer’s hands, not whoever builds the biggest data center.
  • Anthropic also made changes to their safety policies, loosening some restrictions on what Claude will discuss. This drew criticism from some of their own safety researchers, a few of whom left the company. This is an important reminder that even inside organizations deeply focused on AI safety, there are real tensions between caution and capability.

Tip of the Month

A blog post I came across recently offered a reframe that I think is genuinely useful: instead of thinking of AI as a coworker or assistant, think of it as an exoskeleton.

The idea is that an exoskeleton doesn’t replace you, it amplifies what you can already do. Ford uses exoskeletons on assembly lines so workers can hold tools overhead for hours without the physical strain. BMW uses them so workers can carry heavy parts without injury. The exoskeleton makes the person more capable, but the person still has to know what they’re doing.

AI works the same way, in that it can amplify your existing skills. If you have strong writing instincts, AI makes you a faster and more prolific writer. If you understand your organization’s grant strategy, AI makes you a faster and more thorough researcher. But if you don’t have that foundation, AI mostly just helps you produce mediocre work more quickly.

So here’s a practical homework assignment: pick one thing you’re already pretty good at in your job (writing donor updates, researching grant opportunities, summarizing meeting notes, etc.) and spend 30 minutes this week trying to use AI to do that one thing faster. Start with your strength, and see what happens.

And yes, this is a different viewpoint than what I regularly teach, which is to focus your AI use on the things you don’t want to do (the tedious, the unpleasant). Those tasks often aren’t in your areas of strength, which is one reason they stick around on the to-do list. But I think it’s also useful to explore how AI can amplify your strengths.


Final Thoughts

During the classes I teach, I’m constantly encouraging you to try AI on the “mindless” tasks, the things that you don’t want to do. Being curious about whether AI might help is an important attribute for anyone who wants to use AI effectively.

But in a classic example of “do as I say, not as I do”, I sent out a recent email about upcoming classes without bothering to use AI to review it. I was tired of reading it, tired of thinking about it, and just wanted to be done with it.

And then, right after I pressed the Send button, I thought – you know, I wonder if AI would have found any issues. So I tried it. And it did. In particular, the title of the email was “New Training Dates Announces”. Arghhh, I hate typos.

Second, Claude pointed out that one event’s timezone was listed as PST, not PDT like all of the other events, which it (correctly) flagged as suspicious.

On a positive note, I now feel much more confident that future emails (like this one) will be checked. Lesson learned.