Dec 1, 2025

Our AI Journey: Building a Values-Driven Approach 

Over the past six months, the Headwaters team has been on a journey to figure out what using artificial intelligence (specifically large language models) means for a small, values-driven, trust-based foundation like ours. We partnered with Meena Das, founder of Namaste Data, to participate in her AI Playground—a six-month initiative designed to help nonprofits build their own values-based approach to artificial intelligence and decide how and why to use AI responsibly. 

Each member of our team joined from a different place. A few of us were best friends with ChatGPT. Others hadn’t touched it and weren’t sure they wanted to. All of us shared worries that using AI could compromise our values of equity, trust, and relationships, but we were also curious about its potential to free up time for the creativity and human interaction that reinforce those values. 

A Snapshot of the Playground 

The journey included conversation, reflection, and a lot of experimentation. We first surveyed staff to understand where people stood: were they curious, hesitant, somewhere in between? Next, we held workshops to learn what AI is (and what it isn’t) and explored ways of using it that put equity and care at the center. Together, we drafted an AI framework that could evolve as we learned. Finally, we started trying things, from generating blog outlines to distilling research to handling repetitive data tasks, helping us learn what worked, what didn’t, and where the boundaries should be. 

One tangible outcome from this process is our AI framework. Rather than a document that sits in a folder, we wanted a set of values and guidelines we can revisit, question, and adapt. The framework affirms that use of AI at Headwaters should align with our values, protect community trust, and strengthen the human connections at the heart of our work. It also commits us to transparency with our grantees about how we use AI and to taking their privacy seriously. We plan to review it every 12-18 months and continue to find ways to keep it relevant as we learn more. 

Key Takeaways + Pieces of Advice for Peers 

  • The conversations matter more than the tools. Early on, our conversations sounded like “Should we use this?” or “What if it replaces human connection?” Six months in, they sound different: “What goal are we trying to achieve?” “Does this align with our values?” “What are the risks to our community?” That shift happened because we talked a lot. The discussions themselves, sometimes philosophical and sometimes practical, made AI less of a mystery and helped us clarify who we are as an organization and where AI fits in. 
  • Build comfort and trust before you build skills. We realized readiness for AI is cultural as well as technical. You can’t experiment responsibly if people don’t feel safe or supported. We needed shared language and trust before we could talk about use cases or policies. One team member shared that she went from fearful to excited once we had guardrails and shared understanding in place. Start there. 
  • Co-create your framework; don’t outsource it. We could have adopted generic AI best practices, but we didn’t. Creating our framework together ensured it reflected Headwaters’ identity and values. That ownership matters. It means people actually use the framework, question it, and evolve it as they learn. 
  • Treat AI as a mirror that reflects what you bring to it. If you lead with curiosity and care, AI tools amplify those qualities. If you approach it with urgency and efficiency above all else, that’s what you’ll get back. This insight changed how we prompt, how we evaluate what it puts out, and how we decide what to use AI for in the first place. 
  • Learn and experiment continuously; document along the way. It’s okay not to have all the answers. Experimentation done thoughtfully and transparently is a valid form of learning. And this learning is never finished. Document what you learn along the way! Mistakes are data. 
  • Keep your humanity at the center. AI should take the things off your plate that drain time from creativity and human connection; it shouldn’t replace those things. The real opportunity is to use it in ways that strengthen curiosity, creativity, and care. 

What’s Next 

We’re still experimenting. In the coming months, we’ll build a shared workspace to capture what we try—the successes, missteps, and lessons that come with them. We’ll develop ways to measure whether these tools are truly supporting our mission, not just saving time. And we’ve committed to revisiting our approach regularly, asking whether our use of AI still reflects who we are and what our communities need from us. 

We’re grateful to Meena Das and Namaste Data for guiding this process and reminding us that responsible AI isn’t just about efficiency; it’s also about joy and community. To our partners and peers: may our reflections spark your own conversations about what human-centered, values-driven AI can look like on your teams. 

If you’d like to talk with us about this process and our AI framework, reach out to our Knowledge Manager Steph Schilling at stephs@headwatersmt.org.