POLICY
Use of AI (Artificial Intelligence) tools and software
Introduction
We recognise that AI tools and software are now widespread and will only become more so in coming years. There is already a disparity between those who have readily adopted AI in their work and those who are more reticent. It is important for us to understand the potential applications of AI to our work and the potential benefits and drawbacks of these in order for us to make informed decisions.
We also recognise that AI is more than a set of tools. Like any technology, it is situated in networks of power, currently concentrated in major tech corporations whose motivations and intentions are not necessarily aligned with social, economic or ecological justice. Adfree Cities exists to challenge corporate power and systems of oppression. Any use of AI in our work should clearly align with our mission, not that of the tech industry.
This policy aims to provide a flexible framework within which staff can make informed decisions about the use of AI tools and software, and to ensure that any such use is accountable, transparent and responsible: treating and paying workers decently, preventing bias and discrimination, and minimising impacts on the climate and environment. It is not intended to provide a blanket “yes” or “no” answer to whether we use AI.
This policy will be communicated effectively to all staff. Staff will commit to making best efforts to adhere to the policy in all aspects of their work.
Who is this policy for?
This policy applies to all Adfree Cities staff, contracted or freelance. Some elements also apply to our work with external partners.
Adfree Cities’ work – in both substance and scale – means that many of the debates around the use of AI, such as using an AI chatbot in service delivery, do not particularly apply to us. This policy should therefore be reviewed regularly and updated as the field of AI progresses.
Roles and responsibilities
Adfree Cities has no designated digital security manager or data protection officer. Accordingly, it is the responsibility of all members of staff to uphold this policy in their own work and in the work of colleagues.
Overarching principles
Accountability and responsibility: Staff using AI tools will remain accountable and responsible for any work or decisions made using or influenced by those tools.
Transparency: Any use of AI in our work will be made clear both internally and externally.
Fairness and non-discrimination: AI tools are trained on large bodies of publicly available data that are in many cases biased against marginalised groups. We will not use AI in ways that replicate historical bias or discrimination.
Non-displacement of human labour: Where practical, we will not use AI to perform tasks that we would previously have commissioned a human to do, such as large pieces of translation or graphic design.
Reliability and accuracy: We recognise that AI outputs are liable to be inaccurate or unreliable, and we will always fact-check outputs before using them in our work. We will aim to use only those AI tools that we have found, through experience, to be accurate and reliable.
Trust: We will use AI in ways that allow us to maintain trust with the community we support.
Data protection: We will not input sensitive or personal information into AI tools, and will always prioritise the protection of personal information above any perceived benefits of using AI tools.
Environment and climate: AI relies on the processing of vast amounts of data, which in turn requires vast amounts of energy, much of which is still generated from fossil fuels. The energy demand of AI data centres is growing rapidly and could be responsible for as much as 246 million tonnes of carbon dioxide emissions by 2035. We will weigh these impacts in any decision to use AI.
Mission: Any potential benefits to productivity of using AI must be weighed against all the above and the overall impact on our mission in its widest sense.
Policy context
This policy overlaps and intersects with relevant areas of law as well as several other Adfree Cities policies:
- Equality Act 2010
- Equal Opportunities Policy
- GDPR (General Data Protection Regulation)
- Data Privacy Policy
- Procurement Policy
- Adfree Cities Vision and Values
What is AI and how does it apply to our work?
The term AI can apply to a wide suite of technologies and is not easy to define exactly. Some forms of AI, like predictive text, are familiar to us, whereas other forms, like voice modulation, are not. Similarly, some uses of AI are more readily applicable to our work whereas others are not.
Below is an attempt to capture some of the broad categories of AI that currently exist and that we might encounter in our work or that of external partners.
Type of AI use | Applicability to our work | Pros | Cons |
---|---|---|---|
Administrative assistant e.g. note taking, drafting an email, auto-filling a website like Eventbrite | High | Potentially time-saving. Accessibility tool. | Note taking can be a useful exercise in itself for focusing attention. Security risk and ethical concerns. |
Administrative assistant e.g. Zoom AI attending a meeting on your behalf, Google Gemini | Low, due to data protection concerns and general staff dislike of these products. | Time-saving. Accessibility. | Lazy. How will it be received by others in the meeting? Privacy concerns (e.g. if something confidential is discussed on the call). |
Language generative e.g. using ChatGPT for idea prompts | High | Can be useful for idea generation and overcoming “writer’s block”. | Can lead to stale or boring ideas. Lowest common denominator. Lazy, brain rot. Normalises the use of AI for creativity. |
Language generative e.g. using ChatGPT to draft funding proposals (see Appendix 2) | High | Fills a skills gap in the team. Speeds up the fundraising process. Enables more funding applications to be written and sent. | Potential for inaccuracy. Could lead to us (unwittingly) making spurious applications. |
Creative generative e.g. using DALL·E or ChatGPT for image generation | Medium | Fast image generation, e.g. an image needed for a blog. Cheaper than commissioning something. | Potentially takes work from an artist or creative, and risks condoning this wider trend. AI images are notoriously unreliable, e.g. six fingers, lack of diversity. |
Creative generative e.g. using Opus AI to create a video or Speechify to alter speech | Very low, due to ethical concerns | Good if you want to churn out content. | Misleading, creepy, energy-intensive. Normalises or condones a tool that is largely used for extremely unethical purposes. |
Data analysis e.g. asking ChatGPT, Copilot or Claude for a document summary (a report, rather than an internal document) | High | Fast summarising of a document. Pulls out key points. Speeds up research. Accessibility. Quick translation. Puts content in the language and style needed for different audiences, e.g. MPs. Avoids plagiarism as others’ content is re-written. | Needs fact-checking anyway. Brain rot. Using translation services takes this work away from a person. |
Large scale data analysis e.g. Using machine learning like Dataro or Sprout to analyse social media data and trends. | Very low as we don’t work at a large enough scale for this to apply. | Can lead to insights that inform future work. | Requires technical expertise to do at scale. |
Using AI in practice
As noted, the purpose of this policy is not to provide a “yes” or “no” answer to the use of AI in our work, but rather to provide guidance on how we can use AI should we choose to.
Below is a kind of “flow chart” of guiding questions that should be answered by any staff member or contractor when considering using AI in their work. Each question is designed to mitigate the risk of acting contrary to the overarching principles above.
- What kind of AI are we talking about?
- Refer to the table above for pros and cons and Appendix 1 for a list of available AI tools.
- Is this a tool you’ve used before? Do you require additional training or upskilling in order to use it responsibly?
- Have we done due diligence on the AI (reading terms and conditions, reading user reviews) to ensure it uses ethically sourced data and that it has been tested for bias?
- What do we want to use it for? What is the intended outcome (e.g. ideas, draft, final product)?
- Is this an area where a high degree of trust is required / expected? AI is notoriously unreliable and may not be the best tool for the job.
- How will the output be reviewed for bias?
- How will the output be reviewed for accuracy and reliability?
- How will the output be received by our audiences and supporters?
- Is it helping our work or doing it for us?
- Could we do this work otherwise and what are the implications of that (cost, time, accessibility etc.)?
- Are we taking work from someone else (e.g. we would historically have commissioned someone to do it)? Remember the mantra “AI for tasks, not jobs”.
- Are we likely to mislead our audience by using AI?
- Is any use of AI transparent and auditable? If any AI generated content is included in externally-facing work, are we making that clear?
- Copyright infringement: always run a reverse (image) search on any outputs (see the sketch after this list).
- What data are we inputting into the AI?
- Personal information is any data that could identify someone. This includes names, addresses, phone numbers, and email addresses. It also covers things like photos, medical information, safeguarding information and financial details.
- Have you read the AI tool’s terms and conditions regarding use of data?
- How will you manage data use and storage? E.g. deleting chat history on ChatGPT; Microsoft Copilot automatically deletes data every 24 hours.
- Does this use of AI advance our mission and support our vision and values?
- Do the negative climate impacts outweigh any benefits?
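For anyone who wants to script the reverse-image check above rather than run it by hand, a minimal sketch is below. The Google “search by image” URL pattern and the example image address are assumptions and may change; pasting the image into Google Lens or TinEye in a browser works just as well.

```python
# Minimal sketch: open a reverse image search for an AI-generated image
# before publishing it. The searchbyimage URL pattern is an assumption
# and may redirect or change; a manual check in the browser is equivalent.
import urllib.parse
import webbrowser

def reverse_image_search(image_url: str) -> None:
    """Open a browser tab running a reverse image search for image_url."""
    encoded = urllib.parse.quote_plus(image_url)
    webbrowser.open(f"https://www.google.com/searchbyimage?image_url={encoded}")

# Hypothetical image URL, for illustration only:
reverse_image_search("https://example.org/blog/ai-illustration.png")
```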
AI use by external partners and collaborators
This policy only applies internally within Adfree Cities. At times we may work in partnership or collaboration with others, or contract others, who choose to use AI in ways that do not align with this policy.
In such instances we should make clear our approach to AI with the external partner, including sharing this policy, at the earliest opportunity. In general, we will not knowingly support or commission the use of AI where we could employ a human professional. See also our Ethical Procurement Policy.
All staff working with external partners, collaborators and contractors are required to perform due diligence in this regard.
Appendix 1
A non-exhaustive list of available AI tools and software. A much longer list can be found on this Tech Radar review.
- OpenAI ChatGPT: Large Language Model. Unless you specifically turn it off, the data that you share with ChatGPT is used to train its future models, with possible unintended consequences. chat.openai.com
- Microsoft Copilot: Large Language Model productivity tool integrated into Microsoft 365 (formerly Office 365) applications. It deletes data overnight. copilot.microsoft.com
- Claude: Large Language Model, similar to Copilot and ChatGPT. claude.ai
- OpenAI DALL·E 2: Creates realistic images and art from a description in natural language (integrated with the paid version of ChatGPT). openai.com/index/dall-e-2
- Synthesia: Creates videos with AI avatars and voiceovers. synthesia.io
- Pi by Inflection AI: An AI “personal assistant”. pi.ai
- HeyGen: For video production, AI-generated avatars and voiceovers. heygen.com
- Speechify: Converts text to speech with a wide range of voices and languages, enhancing accessibility. speechify.com
- Opus AI: AI-driven video editing optimised for social media. opus.ai
- Canva’s Magic Studio: A range of AI-powered creative tools inside Canva. canva.com/magic/
- Fireflies: AI meeting assistant that transcribes conversations, captures insights, and automates tasks from meetings. fireflies.ai
- Perplexity AI: AI-powered search engine. perplexity.ai
- Superwhisper: A voice-to-text app that works offline. superwhisper.com
- Dataro: Integrates with a CRM to analyse donor data. dataro.io
Appendix 2: Guidance on use of AI
Whether using AI or not, it is always better to be informed. Staff training on the uses and abuses of AI is therefore encouraged.
Further reading and watching:
- jrf.org.uk/ai-for-public-good/harder-better-faster-stronger-will-ai-improve-public-policymaking: A concise challenge to claims that AI will make us faster and more productive.
- policy.friendsoftheearth.uk/reports/harnessing-ai-environmental-justice: A deeper look at AI in the context of environmental justice, concluding with seven principles for the use of AI in the sector.
- blog.weareopen.coop/cooperating-through-the-use-of-ai: Thoughts and signposting on “small” AI, e.g. LLMs different to ChatGPT with a lower environmental impact.
- ibm.com/think/topics/generative-ai: IBM’s introduction to generative AI: what it is, how it works and what it can do.
- innovationforimpact.network/ai-non-profit: A repository of blogs, reports and videos on AI in the non-profit sector.
- notion.so/Climate-Movement-AI-Hub-225a26da72058085aaf8dd5f22a1f04f: Another repository of articles and tools for AI in the non-profit sector.
How to write a prompt for ChatGPT, Copilot etc.
The following is copied from an AI policy template by Platypus Digital.
- Set the stage: Give the AI a specific role to play. For example, “You are a very experienced fundraising consultant with 30 years of experience in getting grants from UK trusts and foundations.”
- Be clear and specific: Clearly state what you want the AI to do. Break down complex requests into smaller, manageable tasks.
- Provide context and examples: Give relevant background information and, if possible, good examples of the kind of output you’re looking for (or bad examples).
- Avoid vague or overly broad requests: These can lead to unfocused or generic responses.
- Iterate and refine: Don’t settle for the first response. Follow up with more specific questions, ask for clarifications, or request modifications to get exactly what you need. This iterative process often leads to much better results than accepting the initial output.
Example of a bad prompt: “Help me with fundraising.”

Why it’s bad:
- It’s vague and lacks specificity.
- It doesn’t provide any context about the type of fundraising you need or the organisation you need it for.
- It fails to give the AI a specific role or task.
- It doesn’t include any background information or examples.

Example of a good prompt: “You are a very experienced fundraising consultant with 30 years of experience in getting grants from UK trusts and foundations. Our small charity focuses on providing after-school tutoring for primary age children in London. We need to raise £50,000 to expand our program to two new schools. Can you provide a step-by-step guide on how to identify and approach potential grant-giving foundations that might support our cause? Please include tips on crafting a compelling grant proposal.”
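For anyone scripting these tools rather than using the chat interface, the same advice maps onto the underlying APIs: the role set in “set the stage” becomes a system message, and the clear, context-rich request becomes the user message. Below is a minimal sketch, assuming the openai Python package and an OPENAI_API_KEY environment variable; the model name is illustrative.

```python
# Minimal sketch of the prompt-writing guidance applied to the OpenAI
# chat API. Assumes `pip install openai` and OPENAI_API_KEY set in the
# environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # "Set the stage": give the AI a specific role to play.
        {
            "role": "system",
            "content": (
                "You are a very experienced fundraising consultant with "
                "30 years of experience in getting grants from UK trusts "
                "and foundations."
            ),
        },
        # Be clear and specific, and provide context.
        {
            "role": "user",
            "content": (
                "Our small charity provides after-school tutoring for "
                "primary age children in London. We need to raise £50,000 "
                "to expand to two new schools. Provide a step-by-step "
                "guide to identifying and approaching grant-giving "
                "foundations, with tips on crafting a compelling proposal."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Iterating and refining then amounts to appending the model’s reply and a follow-up user message to `messages` and calling the API again.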