
Origins of a Friday-morning “rabbit hole”: Can’t we make writing easier?!
We’ve been hearing a lot of handwringing and concern about ChatGPT up-ending education. We’ve also been hearing a lot of innovative ideas about how to engage with ChatGPT as a teaching tool (which is, let’s be clear, mainly an effort to make sure ChatGPT doesn’t become the beginning and end of student work and writing).
To be honest, I haven't been too invested in the hullabaloo [1]. Of all the approaches I’ve heard, the one that seems most straightforward is to have students use ChatGPT to generate first drafts. That would get developing writers past the daunting blank page, wrangle some initial thoughts into a form that can be refined and enhanced, etc., etc.
But, there’s been a thought tickling the back of my mind as I’ve listened to all these discussions, chatted a bit about it with colleagues, and even shared some commentaries with folks looking for perspectives: “Are we really only going to set up these frameworks of expectation and standards of use/engagement for students?”
Yes, there are already some cite-the-bot or attribute-co-authorship policies coming online.
But, I’m actually thinking more of our own writing, at our own keyboards.
Surely, even the writery writers among us would love to have that first draft be easier.
For example: grant proposals. I’ve written (or co-written) 30-some proposals that have raised over $3 million in funds for everything from community nonprofits and outdoor education to ecology research and systems change in higher ed and the science communication profession. And, I do plenty of coaching for folks writing grants, particularly when they are working on scicomm/broader impacts/engagement aspects of proposals. I've even started leading trainings that leverage narrative/storytelling techniques from the humanities and creative writing and integrate the strategic, comprehensive budgeting and project planning typical in STEM grant writing.
That amount of money and number of grants might seem like a lot or a little, depending on your context. Point is, grant writing isn’t a brand-new writing task for me. And yet, I still want to get better at it and am open to it being easier. And, I would love to be able to recommend resources that make it easier, more efficient, and more productive for the people I support.
Even with the experience and positive grant-writing outcomes I’ve had, every proposal feels like pushing a massive boulder uphill.
There’s gotta be a way to make it easier.
Now, I have colleagues who work with AI and have played with writing their own code to train language models on their style of writing. And, supposedly, ChatGPT can model the style of writing samples you feed it. I cannot be the only one who has wondered: could this flashy, new AI thing help write grants?
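For the curious: I can’t speak to my colleagues’ actual setups, but a minimal sketch of that “train a model on your own writing” idea might look something like this, using Hugging Face’s transformers library with small GPT-2 as a stand-in. The folder name, file path, and training settings are illustrative assumptions on my part, not a recipe.

```python
# Hypothetical sketch (not my colleagues' actual code): fine-tune a small
# language model (GPT-2) on a folder of your own writing samples using
# Hugging Face's transformers. Paths and settings are illustrative.
from pathlib import Path

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

# Gather writing samples (e.g., past proposals saved as .txt) into one corpus file.
samples = Path("my_samples")  # hypothetical folder of your own writing
corpus = Path("my_writing.txt")
corpus.write_text("\n\n".join(p.read_text() for p in samples.glob("*.txt")))

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chunk the corpus into fixed-length examples for causal (next-word) training.
dataset = TextDataset(tokenizer=tokenizer, file_path=str(corpus), block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style_model", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# Afterward, model.generate() produces text nudged toward the corpus's style.
```

Even that sketch runs headlong into the question I was about to stumble over myself: a handful of documents is a tiny corpus compared to what these models are trained on.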
And today (of course, with a grant deadline looming; next Wednesday, to be exact), I figured I’d check.
So, I created an account on ChatGPT and tried it out [2].
I didn’t want to straightaway ask it to write a proposal. Why? Because I’ve read this article about ChatGPT declaring its love (in a stalkery kind of way) to a journalist and this Twitter thread about training AI to write in your writing style. I was particularly curious about the argument that AI can be a viable tool for structure and grunt labor, if you don’t expect it to work as outsourced thinking.
Here's what happened
1. I was reminded that instructions matter in writing
I started by asking it to give me feedback on my writing.
I was going to paste in 2400+ words (the full text of a previous seed-grant proposal of mine). But, it told me that was too much, and I should link to it via Google Drive or other cloud storage options.
How many documents can it use as a sample? Certainly two, right?
I shared two docs via Google Drive, then switched up the prompt. I asked it:
"Okay, specifically, I’m wondering: could you write something that’s 1500 words that is in the style of those two documents?"
Then, ChatGPT clarified it was a language model that could only generate text in a similar style. It even provided a caveat: “I cannot guarantee that the generated text will be exactly the same as the style of the documents, as my writing style may differ slightly from the original author’s style. But I can try my best to match the style and tone of the documents.”
Fine, fine. (What's the point, then?!?! I'm thinking.) But, off we went.
As I watched the bot’s output unfurl in real time, I got a primo reminder that meaningful, clear assignments are a crucial component of writing instruction.
In the first few lines, it wrote…a guide for good science communication. 🤦♀️
It wasn’t wrong when it wrote: “Effective science communication is crucial for bridging the gap between scientists and the public, policymakers, and other stakeholders.” It even started with sections on “Know your audience” and “Use plain language” [3].
But, I still wanted to know if the bot could give someone a meaningful jump start on a funding proposal. However, I clearly hadn’t asked for that yet.
2. ChatGPT tried to write a grant (and I tried to get it to do so)
So, I stopped it mid-response and said:
"The two documents are grant proposals. The piece of writing should be persuasive writing, to help ask for money to pay for this kind of training. Could you please try again with that in mind?"
It went for it again. But, I interrupted again, because I wanted to be sure it really was drawing from the documents I linked to. (It didn’t seem to be, because the text examples I provided didn’t contain any of the content the bot generated.)
So, I asked, and it clarified: “I cannot use existing documents to generate new text without explicit instructions on what portions of the original documents to use or reference. However, I can try to match the style and tone of the original documents in a persuasive piece of writing advocating for funding to support training faculty and students on good science communication.”
Well, I don’t train AI, so I don’t know what it would take to provide explicit instructions. So I just got out of the way and had it try again.
It spit out a blurb that was remarkably generic and read more like a fundraising letter than a grant proposal. For example, the blurb ended:

Mmmmm, not quite. You would absolutely never say that to NSF. 🤣
I asked about using references. It said I needed to tell it which to include and where to put them in the text. Nope, no thanks. I can do that on my own. Skip that.
Then, I asked it for 1500 words. That might be more than an executive summary, but it’s on the way to a seed-grant proposal. At a minimum, 1500 words is long enough to require sustained thinking and short enough to be an overview someone could share with collaborators or program officers.
ChatGPT spit out 1500 words under the title: “Investing in Good Science Communication: A Proposal for Funding.”
It didn’t say anything that was wrong (in the first few paragraphs, that is). But, it didn’t say anything compelling.
Maybe two documents (~4,500 words) just aren’t enough to get it to produce my kind of writing.
3. I upped the ante and provided numerous example documents
So, I asked it to do it again, using several documents as examples, including the first two I linked to earlier in the chat. The documents I provided this round included:
Two IRB exemption request proposals (having to do with survey-based research in science communication courses at the university level and scicomm across a university);
One manuscript of a paper (about scicomm) currently under review at a peer-reviewed journal;
Two grant proposals about scicomm training;
Two published, peer-reviewed papers about scicomm.
All the examples (n=7) were my own writing and/or collaborative writing to which I contributed substantially.
My thinking: A draft in the style of any of them would be workable; together, surely they provide a meaningful corpus. (Again, I don’t work in AI, so this might be laughable. That’s fine. Most folks drafting a grant proposal won’t have produced Google Books’ worth of writing, so they’d also be looking for support from a relatively limited corpus.)
ChatGPT produced 1500 words organized with the following subheadings:
Title: Enhancing Science Communication Skills in Faculty and Students
Executive summary
Objectives (with 3 objectives stated)
Background
Proposal
Evaluation ← this one actually would help, because eval/efficacy in scicomm is often overlooked, underdeveloped, etc.
Conclusion
And, it took 1500 words seriously – it stopped mid-sentence at the end, when it hit the word limit. The last sentence was: “Through our science communication training programs and initiatives, we will improve the dissemination of”.
So, I encouraged it to feel free to complete its sentences at the end, never mind the word count. I expected it would revise what it had already written. But, that probably exposes my naïveté about these tools.
Of course, it just generated a new 1500-word blurb. This one had the following headers:
Project summary
Background and need
Project description
Evaluation and impact ← again, this one nudge could be worth it, if it was consistent. But, that might be contingent upon the sample texts it is provided.
Funding (it asked for $200k, unprompted)
Conclusion
So, can AI write my SciComm proposal?
Nope.
I would argue it not only cannot write my proposal, it isn’t a viable tool for starting any scicomm proposal.
Every version it produced was underwhelming.
It made schtuff up left and right. For example, one 1500-word chunk said it was modeled after scicomm training programs at UC Berkeley and the University of Michigan, including a bogus reference to a study indicating that 95% of participants in Michigan's program increased their confidence and 92% reported increased interest in public and policy engagement. Mind you, that material is not in the example material I provided. And, I have no contacts with folks running scicomm training programs at those universities. My proposal, therefore, couldn’t be modeled on theirs in a credible, robust way. (And while I didn't check whether such programs even exist at UCB and UM, ChatGPT certainly can't validate that assertion.)
After seeing those fake research results, I asked ChatGPT to not make up stats or numbers. And, I specified: if it had to use stats or references, please derive them from the sample texts.
Well…that restraint lasted only until the final version it spit out. In that one, it cited another bogus survey indicating that >70% of science faculty respondents hadn’t had any formal training in scicomm. That stat is within the realm of plausibility, and it would be all too easy for someone less familiar with the field to take it at face value and want to include it in their proposal.
Where might that lead? At worst, citing and disseminating fake stuff. At best, hours spent trying to track down the citation to accompany that stat. Cherry-picking at its moldiest.
ChatGPT also made up formats for the programs that would be offered, including a consistent-through-all-versions commitment (from ChatGPT) that we provide practice in scicomm. There was even a version where a practicum was built in. While that would be peachy if someone was going to do that, I’ve never written a proposal or study that included a practicum.
Why?
Because I develop and run programs to enhance capacity, efficacy, and robust approaches to aspects of scicomm that rarely get institutional support: things like assessment, conceptualizing scicomm as research, etc. In short, there are already plenty of practice opportunities. I’m doing different work. So, that’s ChatGPT conjuring things again.
Takeaways
Overall, the quality of writing was remarkably general, bland, and hand-wavey. It would not only not be competitive, it would be embarrassing [4].
My takeaway here is that ChatGPT ain’t the mechanism for jumpstarting high-quality grant writing or even making grant writing easier [5].
Notes
1 The nature of my courses is such that I don’t expect to encounter the plagiarism or lack of original work issues raised by ChatGPT. (Maybe that’s naïve, but that’s my current take.) So, I’m a bit late to the “let’s see what the bot can do” party. What I’m reporting here was my first interaction with ChatGPT.
2 If you don’t want to read my commentary, you can just skim a plain-text version of the transcript with annotations here.
3 Which, of course, is scicomm jargon itself.
4 I hadn’t realized I might be worried, but this exercise reassured me. Experienced, knowledgeable science communicators who are connected to the evidence base for inclusive, effective scicomm probably don’t need to worry about ChatGPT as a grant competitor (at least not at this stage!).
5 What would really make grant applications easier (and therefore more inclusive and equitable) is standardized, simplified grant formatting, or letting us budget for real compensation for broader impacts and for people, especially students and early-career folks. Oh, and substantive, sustained funding for us to offer training and real support for actually writing grants; grant compliance is great, but it’s only part of the grant process, though it always seems to get priority in university investments.