Policy on AI-Assisted Contributions
Writing this page is something of a fool's errand, given the rapidly evolving nature of AI tools and their usage in software development. However, to provide some clarity for contributors, the following policy is established regarding the use of AI tools in this repository.
This repository permits the use of AI tools to assist in code and documentation development, provided that all contributions are reviewed and verified by human maintainers before being merged. Contributors must ensure that AI-generated content adheres to the project's quality standards and does not introduce errors, vulnerabilities, or significant inconsistencies in the codebase.
The following guidelines should be observed when using AI tools:
- Transparency: Contributors should disclose the use of AI tools in their pull request descriptions, specifying which parts were generated or assisted by AI.
- Review: All AI-assisted contributions will undergo thorough human review to ensure accuracy, relevance, and compliance with project standards.
- Attribution: If AI-generated content is used, appropriate attribution of the AI tool used should be provided in accordance with the tool's terms of service.
- Limitations: AI tools should not be used to generate large portions of code or documentation without significant human oversight and modification. See the further notes below on the ways AI tools have been used in this repository.
- Ethical Use: Contributors must ensure that the use of AI tools does not violate any ethical guidelines, including but not limited to issues of bias, privacy, and intellectual property rights.
- Responsibility: Contributors are fully responsible for the content they submit, regardless of whether it was generated by AI tools.
The above list was in fact generated with the assistance of GitHub Copilot, and subsequently edited by a human to fit the specific context of this repository.
In the initial phase of development, GitHub Copilot was used with various GPT or Claude models available at the time, predominantly for writing test functions to achieve 100% or near-100% code coverage. To the extent possible, we would like to continue this pattern: mostly human coding to implement the library features, with AI-assisted generation of test functions.
Models available through GitHub Copilot were also used to help write documentation files. However, AI models produced documentation of uneven quality, often containing inaccuracies or extraneous information, requiring extensive human editing. Even where the documentation was accurate, it was often written in a style inappropriate for the intended audience or inconsistent with the rest of the documentation. As of the time of writing, some of these issues are still being addressed in the documentation pages. The lesson is that AI-generated documentation should be used with extreme caution.