Current U.S. Copyright law, as codified in U.S. Code: Title 17, does not reference or address artificial intelligence. Generally speaking, copyrights are extended to humans or organizations, but not systems. Generative artificial intelligence is challenging current copyright law and precedent with its ability to create content semi-autonomously. As a result, we are currently seeing copyright disputes and claims related to two particular aspects of the use of artificial intelligence:
Research from Oxford shows that the computational power of AI systems has doubled every six months since 2010 (Giattino & Samborska, 2025). This exponential increase in computing power has been paralleled by growing demands for training data. At first, AI developers used the open Web and materials in the public domain to train their AI models. However, two issues became apparent: 1.) there is not enough data to train the latest models; 2.) the quality of the data means potential introduction of biases and errors into the model. Furthermore, attempts to train AI models on AI-generated data have in some instances created irreversible defects in the resulting models (Shumailov et al., 2024). As noted by US District Court Judge Vince Chhabria in Kadrey, et al. v. Meta Platforms, Inc., "while a variety of text is necessary for training, books make for especially valuable training data. This is because they provide very high-quality data for training an LLM's 'memory' and allowing it to work with larger amounts of text at once." Many of the concerns related to the use of copyrighted works as AI training data center on the four factors associated with the doctrine of fair use. AI developers argue their use of copyrighted works falls within the parameters of fair use, but many copyright holders, including Disney and Universal, have filed suit claiming infringement of copyright or trademarks. The outcomes of these cases and others will likely set precedent for how the intersection of AI and copyright looks going forward.
Copyrights are typically extended to humans under the legal concept of authorship. In a legal sidebar from the Congressional Research Service, it is noted that, "before the proliferation of generative AI, courts did not extend copyright protection to various nonhuman authors" and that "the U.S. Copyright Office has also long maintained that copyrighted works must be created by human beings" (Zirpoli, 2025). With these precedents in place, the legal debate currently centers on the degree of human control over the creation of a work when using generative AI. Recent copyright claims have been denied where prompt engineering by the human author was not deemed sufficient creative control. For works that combine human-authored and AI-generated content, copyrights can be extended to the portions authored by a human.
It has been well documented that generative AI has a habit of “hallucinating,” or producing content that is not based on fact. Most problematically, these tools often state false information confidently, without giving the user any indication that the AI is unsure of the validity of the information it is presenting.
Generative AI is also getting better at creating images and videos that look real but are not. This creates a growing problem called “deep fakes,” where images or videos can be created of real people saying or doing things they would never do in real life.
Students using AI to generate coursework that they would normally do themselves presents a serious issue for academic integrity.
Understand the policies of your institution, program, professors, and classes regarding the use of AI.
Understand the tools you are using and how they work.
Indicate when and where AI was used, and cite the specific tool used.
The resource requirements to power Artificial Intelligence (AI) hardware systems and data centers are significant. For example, the IT equipment and data centers that power generative AI need a lot of water and electricity to function efficiently and avoid overheating (U.S. Government Accountability Office, April 2025). In some instances, meeting these resource needs is causing harm to communities and the environment (see links below). Consideration of the resource requirements for AI provides an additional opportunity to measure and assess AI alignment. AI alignment is generally understood as the process of encoding human values and goals into AI models to make them as helpful, safe, and reliable as possible (Jonker & Gomstyn, n.d.). While this definition is most often applied to the programming and training of AI models, we can apply this principle, along with other useful frameworks concerning care and protection of our environment, when assessing the value of AI systems in their totality, not just their output. One such framework of special significance to the Regis University community is the papal encyclical Laudato Si': On Care for Our Common Home. In this encyclical letter, Pope Francis shares mutual concerns related to pollution, climate change, depletion of natural resources including fresh water, and the loss of biodiversity. Pope Francis argues that these areas of concern are accelerating global inequality, societal breakdown, and an overall decline in the quality of life for all of nature's creatures. In response, Pope Francis enumerates seven goals, or areas of emphasis, in which action can be taken to better protect our environment and climate, which belong to all. We may ask ourselves the following: How can AI help advance these goals? Conversely, how does the production of AI impact these goals?
Developments in AI technology are advancing at an exponential rate, and as of this moment, without much regulatory oversight. As the resource requirements for AI systems continue to grow and outpace other technologies, there is increasing need to address the sustainability of AI systems from an environmental and ecological perspective. Given the other issues associated with AI, are the results worth further irreparable harm to the environment? Consider impacts on the environment when using AI.
AI tools can be helpful in completing rote, mundane tasks that would otherwise create undue busy work for a person. However, these same AI tools can now be used to complete tasks that typically require critical thinking, problem solving, advanced literacy and writing skills, and creativity. Offloading these more challenging tasks to AI may result in detrimental impacts on human agency and critical thought, though these effects have yet to be proven through scientific research.
Consider the impacts of AI use on other humans. Who does and doesn’t have access to AI tools? Who does and doesn’t benefit from AI?
How will AI use impact us in the future? How is it already impacting the workforce?
Biases in AI are skewed responses, or responses that reflect prejudices. For example, if you asked ChatGPT to generate images of a nurse and it only provided images of women, that would be a stereotype bias. Bias comes through in generative AI in various ways; you can review some of the different types in the video here.
Biases in LLMs and generative AI come from the data used to build them. AI programs like ChatGPT do not have access to all the data online or all the data that exists; the data put into the program determines its output. You often hear the phrase “garbage in, garbage out” when discussing AI: the idea that bad data going in can only generate bad answers coming out. Because we, as a society, have biases around topics like race, gender, and sexual orientation, these biases are reflected in AI answers. Whether it’s the data used to train the model or the data it’s accessing, the foundation of AI can lead to biased results. Remember, AI is programmed to be a complex autocomplete tool that gives you the most plausible answer, not a true or false answer.
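The “garbage in, garbage out” idea can be illustrated with a deliberately tiny sketch. This is not how a real LLM works internally (real models are vastly more complex), but it shows the core principle: a predictor built from counts of its training data will echo whatever skew that data contains. The sentences and the `most_plausible_next` helper below are invented for illustration only.

```python
from collections import Counter

# Toy "training data" with a built-in skew: nurses are mostly
# described as women. The model never sees anything else.
training_data = [
    "the nurse is a woman",
    "the nurse is a woman",
    "the nurse is a woman",
    "the nurse is a man",
]

# Count how often each word follows each other word.
pair_counts = Counter()
for sentence in training_data:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        pair_counts[(prev, nxt)] += 1

def most_plausible_next(word):
    """Return the word seen most often after `word` in the training data."""
    candidates = {nxt: n for (prev, nxt), n in pair_counts.items() if prev == word}
    return max(candidates, key=candidates.get)

# The "most plausible" continuation reflects the data's skew, not reality.
print(most_plausible_next("a"))  # prints "woman"
```

The point of the sketch is that the model is not lying or reasoning; it is simply returning the statistically most common continuation in its data. If the data is biased, the most plausible answer is biased too.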
While we cannot always influence large measures like corporate accountability or government oversight, there are steps you can take to get better answers from generative AI.
If you would like to learn more about bias in AI, check out the links below.