Welcome! These guidelines cover how we use AI at AI for Community. They’re designed to be practical, not punitive – think of them as a shared agreement so everyone’s on the same page. If something comes up that these guidelines don’t cover, bring it up. AI is changing fast, and this is a living document.
These guidelines apply to everyone doing work on behalf of AI for Community: staff, contractors, volunteers, board members, and interns.
🔴 The Non-Negotiables
A person owns every piece of output.
Everything we put out publicly is reviewed and owned by a human being. You check it, you stand behind it. If something goes wrong, “the AI made a mistake” is never an acceptable response — just like you wouldn’t blame an intern for a press release with your name on it.
Protect sensitive data.
Don’t put anything into AI that you wouldn’t put into Gmail, Google Drive, or Google Calendar. If you’re comfortable with it living in those cloud services, an AI tool deserves a similar level of trust. But some data should never go into any cloud service, such as legally protected information or anything with specific contractual confidentiality requirements. When in doubt, ask.
Label AI-generated content.
If we use AI to generate content that someone might reasonably mistake for real or original work (typically images), it must be clearly labeled as AI-generated. This protects our credibility and the trust of the people we serve.
🟡 Things to Watch Out For
AI can get it wrong.
AI can hallucinate, confidently fabricating false information that sounds completely reasonable. It can also be sycophantic, telling you what you want to hear instead of pushing back. Think of it like an eager intern: capable of good work, but also capable of making mistakes and over-following your directions. Never use AI output without reviewing it.
Keep personal and work data separate.
AI gets smarter the more context you give it, which is great. But it also means personal data mixed into your work context will dilute the quality of AI’s responses and may expose your private information. Use separate accounts for personal and work AI use.
Don’t let AI write for you.
AI-generated text can contain errors and tends toward a generic, inauthentic tone. Your voice – and AI for Community’s voice – should sound like us, not like a default algorithm. The green section below covers the right process.
Watch your usage.
Deep thinking and deep research modes burn through usage limits fast, and once you hit your cap, buying more capacity is expensive. Be mindful of when you actually need the heavy-duty modes versus a quick, lightweight query.
Don’t overwhelm others (or yourself).
AI makes it trivially easy to generate massive volumes of content. Just because you can doesn’t mean you should. Avoid forwarding long AI output to colleagues without filtering it down to what’s actually useful. Don’t let AI crush your enthusiasm for a task by giving you 10 pages of action items.
🟢 How We Want You to Use AI
Make it part of your workflow.
Always ask yourself: “Could AI help me do this faster, better, or sooner?” Evaluate this realistically – if you have 20 minutes and the alternative is not doing the task at all, getting 80% of the way there with AI is a big win.
Claude is our default tool.
We use Claude for our AI work. It’s strongest at writing and planning, and Anthropic’s ethics around AI safety align most closely with our values. Use other tools (like ChatGPT for spreadsheets or image generation) when there’s a specific reason, but be intentional about switching rather than bouncing around randomly.
Connect Claude to our data.
Give Claude access to Gmail, Calendar, and Google Drive for organizational work. Full context is what takes AI from generic to genuinely useful. This is why keeping personal and work accounts separate is so important – we want Claude’s context to be clean and focused on our mission.
Follow the writing process.
For written content, especially anything external:
- Brainstorm with AI: explore ideas, get input
- Write the rough draft yourself: it can be very rough, even voice-transcribed notes
- Have AI critique and refine: let it organize, suggest, improve
- Do the editing yourself: the final voice should be yours
For internal work, it’s OK to take a lighter approach. Reviewing AI’s final output for accuracy and tone is usually sufficient.
Use AI to scope your work.
Instead of asking AI for everything and drowning in output, tell it your constraints. “I have 5 minutes, what are the top 3 things I should focus on for this task?” is much more useful than “give me a comprehensive plan.”
Use the right tool for the job.
Use the fastest, lightest model for everyday tasks – quick questions, simple edits, brainstorming. Save deep thinking and deep research modes for when they matter: strategic plans, grant proposals, complex analysis. This stretches your usage allocation and is better for the environment.
Speak up.
If you run into situations these guidelines don’t cover, or if something feels off about how AI is being used, bring it up. We want open conversation about this. AI is evolving rapidly, and these guidelines will need regular updates based on real experience.
Questions? Reach out to Ken!
This work is licensed under a Creative Commons Attribution 4.0 International License. You are free to share and adapt this document with attribution to AI for Community.