Artificial Intelligence Policy
The goal of this policy is to provide Giant Rabbit's clients with visibility into how Giant Rabbit engages with AI tools. If you're a Giant Rabbit client, this policy explains how we keep your data safe and outlines our philosophy about AI tools in general.
I. It's Giant Rabbit's responsibility to pick the best tool for the job.
A key part of our work has always been choosing the right tool for the job and using it safely. That's true for the software and third-party platforms we recommend, our internal development tooling, the way we structure our projects, and also AI tools. It's also our responsibility to be aware of downsides and work to mitigate them.
AI tools are evolving rapidly, but our decision-making process around them is still guided by the same core values.
II. AI tools come with unique risks and societal costs.
At the same time, AI tools bring unique risks, challenges, and societal costs that other tooling decisions don't. The broader costs of AI include the concentration of wealth and economic power, environmental costs of data centers, and negative impacts on workers. Additionally, AI tools introduce new security and privacy challenges. AI agents can make dramatic and costly mistakes, and confidential data is at risk if clear and careful boundaries aren't established.
We strive to be clear-eyed and transparent about both costs and benefits. We aren't AI maximalists, and we don't believe in replacing people with software.
III. We use AI tools when they are the most efficient tool for the job.
Without minimizing the costs of AI, there are situations where the potential benefits are significant. Sometimes AI tools can help us complete tasks more efficiently, or tackle a programming or data cleanup task that would otherwise be unaffordable. In those situations, our developers and our project managers may use AI tools in their work. Those tools may range from AI code assistants and document tools to fully agentic development processes and LLM-assisted analysis.
IV. Giant Rabbit has developed privacy and security standards to safeguard your data and your systems, and we're continuously improving them.
Giant Rabbit has established clear boundaries around acceptable AI use, and we're constantly updating our policies and practices as tools evolve. Some key points are:
- We apply our normal standards of quality, security, and maintainability to all AI output. AI makes it possible to produce very long documents and large amounts of code very quickly, but we don't allow that to change our baseline standards. It doesn't matter whether a human or a robot wrote the code or the document; the output still needs to be up to GR standards before we'll release it.
- A human will always review any AI output. To ensure that, all AI-generated code is reviewed by at least two human developers. (Our human-written code is already reviewed at least once; AI code is first reviewed by the developer in charge of the task and then additionally reviewed by someone else.) Similarly, when our project managers use AI tools to prepare deliverables or organize complex bodies of information, they treat the output as a draft, then carefully review, revise, and synthesize it to ensure the end result reflects our best work.
- We don't let AI agents make direct changes to your production site or data. Humans are always in charge of launching features or making other changes to your production systems, which guarantees that every change gets human review.
- We will not allow an AI model direct access to confidential data. Even with training turned off and data-confidentiality settings enabled, we don't allow AI models to manipulate your private data. When a project involves analyzing private data, we carefully anonymize it before any AI model sees it.
These policies will evolve along with the tools themselves. As we evaluate and test new tools, we'll update our standards to keep your data safe.
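To give a concrete sense of what the anonymization step means in practice, here is a minimal sketch in Python. This is an illustration only, not Giant Rabbit's actual pipeline: it replaces two kinds of direct identifiers (email addresses and US-style phone numbers) with neutral placeholders before any text would be shown to an AI model. Real anonymization also has to handle names, account IDs, and other context-specific identifiers, which is why the work is done carefully by humans.

```python
import re

# Patterns for two common direct identifiers. Real projects need
# project-specific patterns (names, member IDs, addresses, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    """Replace direct identifiers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Contact Jane at jane@example.org or 555-867-5309."
print(anonymize(record))
# Note that "Jane" survives this simple pass -- one reason
# pattern-based scrubbing alone is never sufficient.
```

Only the anonymized text ever leaves our systems; the mapping back to real identities, when one is needed, stays on our side.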
V. When a task takes us less time because we've used AI tools, you'll directly benefit from lower costs.
All of our work, for all of our clients, is billed hourly. That means time saved with AI tools translates directly into lower costs for you.
VI. If you would like us not to use AI tools when working for you, just let us know.
The question of when the costs of AI outweigh its benefits is a complicated one. Even among people with shared values, there's enough nuance to the question that different people will have different levels of comfort with the technology. While we feel it would be irresponsible to unilaterally rule out using AI tools, if your organization would prefer that we minimize our use of AI tools when working for you, we're happy to honor your preference (we still love the craft of software development, after all!). Simply let us know you'd like to opt out; we'll send you a short questionnaire and use your answers to avoid the discretionary AI usage you specify when providing services to your organization.
Please note that we'll continue to check in about your preferences as the landscape evolves. If a particularly compelling use case arises for your organization, we'll raise it with you specifically so you understand the costs and benefits.
If you don't opt out, you can assume that we're using AI tools where appropriate, to make our work more efficient. This usually won't be something you'll notice directly, because humans will be evaluating, testing, and delivering the work, the same as always. If there's ever a decision point where there are costs, benefits, or risks that bear special discussion, we'll bring you into the decision, as usual.
We're committed to transparency in all of our work, so we welcome questions and discussion.
VII. AI Q&A
Q: How do you use AI currently?
A: Many of our developers have been using AI coding assistants like Copilot for quite a while now, because they offer handy shortcuts compared to manually reading documentation. As AI tools have improved, we've incorporated leading-model agentic developer tools where appropriate. Our project managers use LLM tools to process large amounts of documents, manipulate (non-confidential) data, and for other routine administrative tasks. Over time, given the trajectory of the technology, we expect our use to increase.
Q: Can you help us decide when/whether we should be using tools ourselves?
A: Definitely. As you can guess if you've read this far, we'll approach AI tool integration the same way we do any potential software choices: with a needs assessment and a discussion of your goals. If you're already a Giant Rabbit client, odds are good that we already understand your objectives, strengths, and preferences, and we can hop into talking about when and whether AI tools might be helpful. We'll also help you establish boundaries and security policies to keep your data safe.
Q: How much faster are AI tools compared to working by hand?
A: Once again, it depends on the task and many other factors. For the last few years, the answer was "it's a marginal improvement in efficiency, sometimes," but some of the recent tools are offering more dramatic improvements over hand-coding certain features and certain tasks. In general, you can assume that the speed benefits from using AI tools are over-hyped by the industry that's promoting the tools, and the industry of people who make a living writing about them. At the same time, there are use cases where LLMs offer surprising time savings, or offer analysis or summarization that would simply be too tedious or costly for a human to generate. That's why there's no substitute for real, ongoing experimentation with real-world use cases—which is how we learn about these tools and decide when to put them to work.
Q: How's the code quality? Is AI code maintainable?
A: The code quality is better than it used to be, but it still needs careful review. By the time AI code goes live, though, it's no longer purely AI code: it's been reviewed and tested by at least two developers here. Of course, code review and testing take time, as do fixes and changes—which is part of why AI isn't always a clear-cut benefit to a given task or process.
Q: What's next for AI tools? What's on the horizon?
A: For now, AI tools are evolving rapidly, so we expect continual experimentation to be the norm for a while. For better and for worse, we all need to be continually reevaluating our assessments of the best way to use these tools.
No matter how the technology evolves, Giant Rabbit's approach will remain grounded in our values of transparency, sustainability, and stewardship. You can rely on us to continually evaluate AI tools on your behalf, use them when they're beneficial to you, and protect your privacy and security along the way.