For about six months, the State Department has had in place a statement on artificial intelligence (AI) meant to deter candidates from using it in their applications and in writing their personal narratives. While the purpose of and need for the statement are clear, the statement itself is deeply flawed and lacks clarity. As AI becomes intertwined with all our digital products, State’s rule will effectively force candidates to write on a typewriter.
Department of State’s AI Policy
The policy reads as follows:
While the Department of State encourages candidates to create their narratives with great care, including correct use of grammar and style, candidates are prohibited from using any artificial intelligence (AI) tool, to include but not limited to ChatGPT, to aid in their written responses. The Department will discontinue any individual’s candidacy if found to have violated this prohibition on use of AI tools in the application process. (Source, June 2024)
Let’s break this statement into two key points:
- Candidates are prohibited from using any artificial intelligence (AI) tool… to aid in their written responses.
- The Department will discontinue any individual’s candidacy if found to have violated this prohibition on use of AI tools in the application process.
Italics added for emphasis.
It’s clear that AI tools such as ChatGPT have shifted how people gather information, put together content, and advance their work. Evaluating a candidate’s capabilities becomes far more complex when AI can mimic style and tone in an essay. In publishing this policy, State seems to be trying to ensure that what a candidate submits is genuinely their own.
I agree with that sentiment. It is far too easy to use ChatGPT to write an essay; with some prompting, a candidate could produce a decent one. A candidate should not have AI write their narrative outright and submit what it creates. Doing so is like asking a friend with no interest in joining the Service to write your essays, then submitting the narratives as if they were your own. Don’t do it.
However, the problem with the policy is that State applies it to “any AI tool… to aid in their written responses.”
Clarifying scope
The current policy, “any AI tool… to aid in their written responses,” is unclear and overly broad. Does it cover grammar checkers that use AI, like those in Grammarly, Google Docs, and Microsoft Word? Are AI-enabled search engines off-limits for research? If I upload my resume to ChatGPT and ask it to help me brainstorm leadership examples from my experience, but then write the narrative entirely on my own, is that not allowed? If I write my own narrative and then ask ChatGPT to suggest grammar improvements, is that not allowed?
The pace at which AI is weaving itself into everything we do makes this policy short-sighted. I’ve wanted to write this article for six months, ever since seeing the policy, but I waited to see whether State would recognize the error in its language and update it. It has not.
Microsoft, Google, OpenAI, Samsung, Apple, and others have announced that they will incorporate AI into every facet of their products. I don’t see how a candidate can write in an electronic document without aid from the AI built into the software.
State needs to update its policy to expressly define what use of AI is allowed and what is not.
Detecting AI: how?
The second issue is the policy’s enforcement clause: “The Department will discontinue any individual’s candidacy if found to have violated this prohibition on use of AI tools in the application process.”
If we know you cheated, we are kicking you out. On its face, this rule makes sense.
Now, considering that AI is infused into everything we do, this is a problem (again, the policy hinges on the hopelessly broad phrase “AI tools”). But the more significant issue I want to highlight is the how. How is State determining that a candidate used AI? What software is State using? If it is relying on an AI-detection tool, which one?
Sharing the approach is essential because AI-detection tools have a track record of false positives, flagging human-written text as machine-generated.
Furthermore, if a candidate is cut because of AI detection, will State tell them that AI use was the reason? I am confident the answer is no; we would have heard about it by now. And an appeal process, if one even exists, which I highly doubt, would be a burden to navigate.
The problem is that a candidate has no way of knowing whether they failed to move forward because of suspected AI use or because of the other elements of their application that go before the Qualifications Evaluation Panel (QEP). (This points to a bigger problem: State should give candidates more information about why they did not move forward so they can improve their candidacy the next time.)
Conclusion
While State’s policy on AI is necessary to uphold the integrity of the Foreign Service application process, it is, as currently written, massively flawed. Taken literally, it bars candidates from using virtually any modern writing software for their applications and narratives, because nearly every writing application now embeds an “AI tool” to assist the user.
If State wants to deter candidates from submitting essays written entirely by ChatGPT, a goal I support, it needs to say so explicitly. Instead, the policy is too broad and too vague, and candidates are the ones harmed.
Disclaimer: I wrote this article in Google Docs, which offers AI-suggested improvements; used ChatGPT to experiment with writing a narrative; reviewed the piece with Grammarly, which makes AI-suggested edits; and researched in internet browsers that surface AI-suggested material. Oh, and the image is AI-generated, to be on the nose. “AI tools” were used throughout the writing process, yet this is my writing. Yes, this article is not the application or the narratives, so State’s policy does not apply here, but the point stands: using today’s everyday tools means I cannot get around AI. I also don’t own a typewriter and will not be purchasing one.