As AI tools become part of the research process, disclosure becomes a matter of academic integrity. At the same time, the call to disclose is complicated by the sheer ubiquity of such tools, which operate silently in the background of everyday scholarly activity, such as search or grammar correction. The transparency of intellectual labor is also central to my recent work, a book called Author Function, forthcoming from the University of Chicago Press.
I include the following disclosure at the end of my manuscript, a statement I now believe should cover all of my intellectual output:
This book was written using contemporary digital writing tools, including spelling and grammar checkers, reference management software, and, on a limited basis, large language models. Such systems were used as editorial aids: to rephrase sentences, test alternative formulations, summarize notes, and assist with transitions during revision. They were not used to generate archival claims, historical interpretations, or original arguments. All sources were selected, verified, and cited by the author. The responsibility for the text’s claims, structure, and conclusions rests entirely with me.
Given that this study examines the history of literary automation, distributed cognition, and machine-assisted authorship, the writing process itself reflects the continuity between past and present techniques of composition. Rather than treating these tools as external or exceptional, I regard them as part of the long-standing infrastructure of collective intellectual labor that this book seeks to describe.
To put it more simply: This document was created using human intelligence.
A detailed manifest of labor and specific contributions can be found on my GitHub Pages site, and soon in Lab Notes, a journal dedicated to transparency in the scholarly process.