Minding the gaps of generative AI

Artificial intelligence presents an array of ethical challenges. But humans can still rely on basic principles of honesty and integrity as we learn to live with it.

Ever since generative AI, particularly ChatGPT, burst onto the scene, its use has exploded across a wide range of activities and has led to considerable improvements in productivity. All indications are that it will continue on this path indefinitely.

And, while GenAI can be a useful tool—helping to draft articles, letters and emails, summarize large amounts of written material, and prompt further creative ideas—it also presents an array of potential conflicts, dilemmas and uncertainties.

For writers who offer content they are supposed to have written themselves, consultants who are paid to develop reports, and students tasked with completing assignments, AI presents an opportunity, a challenge and a dilemma. For those who depend on such content, it presents many uncertainties.

The root of these issues lies in a basic characteristic of GenAI: the systems are trained on actual documents and previously published work, usually scraped from the internet. When a GenAI model draws on that information, it does not disclose its sources, making it difficult or impossible for users to accurately cite them.

Sometimes, readers of content created by GenAI and claimed as original might recognize parts of it as something they themselves wrote or as coming from a familiar source. This can lead to lawsuits and other issues of credibility and reliability for the author or creator. One example came when actor Scarlett Johansson threatened legal action after a voice adopted by ChatGPT for its new “assistant” sounded “eerily similar” to her own. OpenAI responded by announcing that it would discontinue the use of that voice.

GenAI can be used to generate new content and also to manipulate content the user has already prepared. The former poses the greater ethical issues because the tools do not disclose their sources and could be violating copyright rules. To further complicate matters, the ways GenAI is used to produce new content vary tremendously. For example, if one is developing an article for publication, the prompt can simply ask the tool to write the whole article on a particular topic. Or it can ask for ideas or an outline. The human author can then take what GenAI provides and edit it a little or a lot.

When is this plagiarism and when is it not? Clearly, having the whole article written by AI, editing it lightly and then presenting it as your own creation is pure plagiarism. It’s no different from having another person write it and passing it off as your own.

Most universities have addressed this issue one way or another. Some ban the use of AI in preparing papers. Even the International Conference on Machine Learning (ICML) announced last year that the use of GenAI to write papers for its conference was prohibited. Other organizations have adopted policies requiring proper citation of any AI tool used. For example, a citation might look like “ChatGPT, response to ‘Create appendices to illustrate the major points made in this article,’ OpenAI, May 20, 2023.”

This approach addresses plagiarism and cheating, but not some other issues inherent in AI systems, such as bias or the generation of fictional content, known as hallucinations. Bias can be built into the sources the AI was trained on, and it will be reflected in the system’s responses. Some developers of GenAI have been trying to address these issues, but have they been successful? Partially, but we really don’t know at this stage.

Those preparing reports or other documents for someone else, say an institution or company, who wish to make use of AI are well advised to enquire about the recipient’s policies on its use.

The ethical implications of using GenAI usually come down to basic principles of integrity, honesty and transparency. Using it to prepare a whole report is wrong for the reasons noted; if for some reason that course is pursued, it is imperative that this be disclosed to the recipient. If GenAI is used for some part of the report, that too needs to be disclosed and cited as described above. This applies even to minor parts of the report.

Use of GenAI for internal reports presents fewer copyright issues but still raises concerns about bias, accuracy and privacy. Auditors using AI need to be particularly aware of these concerns. They should never, for example, enter client documents into an AI system for analysis or response; that would be a breach of privacy. Ideally, their firm will have clear, fully thought-out policies on the use of AI.

Few areas in life change as quickly as generative AI. The systems, and the ways they are used, are changing daily, and any use of them needs to take this into consideration. What’s acceptable today may not be acceptable tomorrow, and vice versa.