Generative AI tools have introduced new challenges in academic integrity, particularly related to plagiarism.
Plagiarism is typically defined as presenting someone else's work or ideas as one's own. While a generative AI tool might not qualify as a "someone," submitting AI-generated text without attribution is still plagiarism because the work is not the researcher's own. Policies for using and crediting generative AI (GAI) tools can vary from class to class, so review the syllabus and confirm expectations with your professor before using these tools.
A note about plagiarism detection tools:
A number of AI detection tools are currently available to publishers and institutions, but there are ongoing concerns about their low accuracy and the risk of false accusations. Because generative AI tools do not reproduce large passages word-for-word from existing works, detectors that rely on matching submitted text against known sources can struggle to flag AI-generated writing. If you use a plagiarism detection tool, be aware of, and transparent about, its limitations.
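To see why verbatim matching falls short here, consider a minimal sketch of the n-gram comparison that traditional text-matching checkers rely on (plain Python; the sample sentences are invented for illustration). A paraphrase can preserve an idea completely while sharing no five-word sequences with its source:

```python
def ngram_overlap(a: str, b: str, n: int = 5) -> float:
    """Fraction of n-word sequences in `a` that also appear in `b`."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    grams_a, grams_b = ngrams(a), ngrams(b)
    if not grams_a:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a)

source = "The mitochondria is the powerhouse of the cell and drives metabolism"
ai_paraphrase = "Cellular metabolism is driven by mitochondria, which power the cell"
print(ngram_overlap(source, ai_paraphrase))  # 0.0 despite identical ideas
```

Purpose-built AI detectors instead look for statistical patterns in the writing itself, and it is those probabilistic judgments that drive the accuracy concerns noted above.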
Another area of academic integrity affected by GAI tools is that of false citations.
Providing false citations in research, whether intentional or unintentional, is an academic integrity violation. GAI tools such as ChatGPT have been known to fabricate citations, and even when a generated citation points to a real paper, the tool's description of that paper's content may be inaccurate. Every citation produced by a GAI tool should be verified against the original source.
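As a quick first pass on a suspect reference list, a citation's DOI can be checked against a public registry. The sketch below is illustrative only, assuming Python with the requests library and Crossref's public REST API; Crossref does not index every DOI, so a lookup failure is a prompt to investigate further, not proof of fabrication:

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref.

    Crossref covers many scholarly DOIs but not all (e.g., some
    DataCite-registered datasets), so False means "check further,"
    not "definitely fabricated."
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical DOI copied from an AI-generated reference list
print(doi_exists("10.1000/fake-doi-from-chatbot"))  # almost certainly False
```

Even when a DOI resolves, the title, authors, and claims attributed to the paper still need to be checked by hand, since a real citation can be paired with an invented summary.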