The pitfalls of using AI in funding applications

A lot is being said about AI in philanthropy. For those of you wanting insight into the current debates, I recommend the new Routledge Handbook of Artificial Intelligence and Philanthropy – which is commendably free to access. For now, I want to share my experience of AI being used in funding applications from the position of someone reading and assessing them. The best description to capture this experience came from a fellow assessor who called it “soul destroying”.

Before I say anything more, I want to point out that:

1.     I have huge respect for fundraisers who have to complete countless application forms all looking for something slightly different in a system where the funders hold the power. Fundraisers deserve any tools they can get to make their task quicker and easier, especially in today’s difficult funding environment.

2.     I use AI tools myself, and think they have a useful role in philanthropy – one that will only grow. For example, creating the transcript from a conversation with a funded partner so that they no longer need to write a grant report.

The first difficulty in assessing an application written using AI is a practical one. The tendency of tools like ChatGPT to produce fairly bland points in flowery language makes for tough reading. The assessor has to concentrate hard to make sure they are uncovering the relevant information and not judging the presentation. It can be hard to find specific substance, for example, when considering a response like “by combining data, direct community input and expert insights we have a comprehensive evidence base that shows our service is urgently needed” – and it is harder still when several applications include the same text. The very real risk is that good organisations will miss out on funding.

The second difficulty is more motivational. What enters your head is: “what is the point of my trying so hard to assess something that nobody wrote?” Gaining efficiency means losing something important – the connection with someone who knows what the applicant actually does and why it matters. Some grant makers, with the good intentions of reducing bias and saving resources, also use AI tools in the assessing process. So what happens when an assessment algorithm is used to assess an AI-generated application? Are we all (fund seekers and fund givers alike) freeing up time for better, richer conversations and other more important work? Or are we losing the value of people expressing themselves, and of people listening, in this process?

Whilst we all have to get our heads around the impact of AI on philanthropy, for now we still have the current system of application forms that usually need to be judged by a human at some point. Grantmakers therefore need to issue guidance on the use of AI – as some do, such as the London Community Foundation and Paul Hamlyn Foundation – to make it clear to applicants how AI use will be treated and the pitfalls to avoid. And fundraisers, by all means use AI for your first drafts, but please then translate the result back into human and make sure it is a credible match to your work.