
Ethical Use of Artificial Intelligence (AI)

How ought we to live with AI? This guide does not endorse or condone any particular artificial intelligence (AI) or use of AI, but instead provides considerations around its use to help build a more just and humane world.

Critiquing and Analyzing AI - Ethical Considerations


Your interactions with generative AI, including any personal information you disclose, may be stored and used to train that AI. While some AI tools allow you to change your privacy settings, many default to collecting your data. What data from your interactions is being used to train the AI? What data is being stored? Each tool has its own set of standards, protocols, and privacy settings.
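Because these defaults vary and are easy to overlook, one practical habit is to scrub obvious personal identifiers from a prompt before submitting it to any AI tool. The Python sketch below is a minimal illustration of that idea; the patterns are deliberately simplistic, and the redact function is our own illustrative helper, not part of any particular AI product.

```python
import re

# Illustrative patterns only; real PII detection requires far more care.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholders before sending."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "My email is jane.doe@example.com, phone 555-123-4567. Summarize my essay."
print(redact(prompt))
# -> "My email is [EMAIL REDACTED], phone [PHONE REDACTED]. Summarize my essay."
```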

Current U.S. copyright law, as codified in Title 17 of the U.S. Code, does not reference or address artificial intelligence. Generally speaking, copyrights are extended to humans or organizations, not to systems. Generative AI, with its ability to create content semi-autonomously, is challenging current copyright law and precedent. As a result, the copyright disputes and claims we are seeing at this moment center on two questions related to the use of artificial intelligence:

Can AI developers claim fair use when using copyrighted works to train AI models?

Research from Oxford shows that the computational power of AI systems has doubled every six months since 2010 (Giattino & Samborska, 2025). This exponential increase in computing power has been paralleled by growing demands for training data. At first, AI developers used the open Web and materials in the public domain to train their AI models. However, two issues became apparent: (1) there is not enough such data to train the latest models, and (2) the uneven quality of the data risks introducing biases and errors into the model. Furthermore, attempts to train AI models on AI-generated data have in some instances created irreversible defects in the resulting models (Shumailov et al., 2024). As noted by U.S. District Court Judge Vince Chhabria in Kadrey, et al. v. Meta Platforms, Inc., "while a variety of text is necessary for training, books make for especially valuable training data. This is because they provide very high-quality data for training an LLM's 'memory' and allowing it to work with larger amounts of text at once." Many of the concerns about using copyrighted works as AI training data turn on the four factors of the fair use doctrine. AI developers argue that their use of copyrighted works falls within the parameters of fair use, but many copyright holders, including Disney and Universal, have filed suit claiming infringement of copyright or trademarks. The outcomes of these cases and others will likely set precedent for how the intersection of AI and copyright looks going forward.
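The "irreversible defects" finding from Shumailov et al. (2024), often called model collapse, can be illustrated with a toy statistical experiment: fit a distribution to data, sample from the fit, refit to the samples, and repeat. The Python sketch below is a simplified analogy of the phenomenon, not the paper's actual methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a normal distribution with mean 0 and standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Each generation fits a mean/std to the previous generation's samples,
# then draws a small "synthetic" dataset from that fit.
for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=100)
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Because each generation sees only a finite sample of the last generation's
# output, estimation errors compound: the fitted parameters wander away from
# the original (0, 1), and the distribution's spread tends to erode.
```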

Can the output of generative AI systems be copyrighted? And to whom do the rights belong?

Copyrights are typically extended to humans under the legal concept of authorship. A legal sidebar from the Congressional Research Service notes that "before the proliferation of generative AI, courts did not extend copyright protection to various nonhuman authors" and that "the U.S. Copyright Office has also long maintained that copyrighted works must be created by human beings" (Zirpoli, 2025). With these precedents in place, the legal debate currently centers on how much human control was exercised in creating a work with generative AI. Recent copyright claims have been denied where prompt engineering by the human author was not deemed sufficient creative control. For works that combine human-authored and AI-generated content, copyright can extend to the portions authored by a human.

 

It is well documented that generative AI has a habit of "hallucinating," that is, producing content not grounded in any fact. Most problematically, these tools often state false information confidently, without giving the user any indication that the AI is unsure of the validity of what it is presenting.
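One low-effort screen for hallucination is to ask the same factual question several times and compare the answers: a model often contradicts itself when it is fabricating. The Python sketch below uses a made-up ask() stub in place of a real AI call, so the whole example is hypothetical. Note that consistent answers can still be wrong, so this is a screen, not a proof.

```python
import random
from collections import Counter

def ask(question: str) -> str:
    # Hypothetical stand-in for a real AI call; it returns inconsistent
    # canned answers to mimic a hallucinating model. Replace with your tool.
    return random.choice([
        "The treaty was signed in 1848.",
        "The treaty was signed in 1848.",
        "The treaty was signed in 1852.",
    ])

def consistency_check(question: str, n: int = 5) -> float:
    """Ask the same question n times; return the share of runs that agree."""
    answers = [ask(question).strip().lower() for _ in range(n)]
    answer, freq = Counter(answers).most_common(1)[0]
    print(f"most frequent answer ({freq}/{n}): {answer}")
    return freq / n

if consistency_check("When was the treaty signed?") < 1.0:
    print("Answers disagree across runs -- verify with an authoritative source.")
```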

Generative AI is also getting better at creating images and videos that look real but are not. This fuels a growing problem of "deepfakes": fabricated images or videos of real people saying or doing things they never said or did.

When students use AI to generate coursework they would normally complete themselves, it presents a serious issue for academic integrity.

Understand the policies of your institution, program, professors, and classes regarding the use of AI, and understand the tools you are using and how they work.

Indicate when and where AI was used, and cite the specific tool used.

The resource requirements to power artificial intelligence (AI) hardware systems and data centers are significant. For example, the IT equipment and data centers that power generative AI need large amounts of water and electricity to function efficiently and avoid overheating (U.S. Government Accountability Office, April 2025). In some instances, meeting these resource needs is causing harm to communities and the environment (see links below).

Consideration of AI's resource requirements provides an additional opportunity to measure and assess AI alignment. AI alignment is generally understood as the process of encoding human values and goals into AI models to make them as helpful, safe, and reliable as possible (Jonker & Gomstyn, n.d.). While this definition is most often applied to the programming and training of AI models, we can apply the principle, along with other useful frameworks, to the care and protection of our environment when assessing the value of AI systems in their totality, not just their output.

One such framework of special significance to the Regis University community is the papal encyclical Laudato Si': On Care for Our Common Home. In this encyclical letter, Pope Francis shares mutual concerns related to pollution, climate change, the depletion of natural resources including fresh water, and the loss of biodiversity. He argues that these areas of concern are accelerating global inequality, societal breakdown, and an overall decline in the quality of life for all of nature's creatures. In response, Pope Francis enumerates seven goals, or areas of emphasis, in which action can be taken to better protect our environment and climate, which belong to all. We may ask ourselves: How can AI help advance these goals? Conversely, how does the production of AI impact them?

Developments in AI technology are advancing at an exponential rate and, as of this moment, with little regulatory oversight. As the resource requirements of AI systems continue to grow and outpace those of other technologies, there is an increasing need to address the sustainability of AI systems from an environmental and ecological perspective. Given the other issues associated with AI, are the results worth further irreparable harm to the environment? Consider impacts on the environment when using AI.

What are the environmental impacts of AI technologies?

AI as an agent for sustainability

AI tools can be helpful in completing rote, mundane tasks that would otherwise create undue busywork for a person. However, these same tools can now be used to complete tasks that typically require critical thinking, problem solving, advanced literacy and writing skills, and creativity. Offloading these more challenging tasks to AI may harm human agency and critical thought, though these effects have yet to be demonstrated through scientific research.

It is difficult to determine whether the potential benefits and strengths of AI are real or are simply over-inflated by the same tech companies that stand to profit from AI's proliferation.

What is bias in generative AI? 

Biases in AI are skewed responses, or responses that reflect prejudices. For example, if you asked ChatGPT to generate images of a nurse and it only provided images of women, that would be a stereotype bias. Bias comes through in generative AI in various ways; you can review some of the different types in the video here.

Why does bias happen? 

Biases in LLMs and generative AI come from the data used to build them. AI programs like ChatGPT do not have access to all data online, let alone all data that exists; the data fed into a program determines its output. You often hear the phrase "garbage in, garbage out" when discussing AI: bad data going in can only generate bad answers coming out. Because we, as a society, hold biases around topics like race, gender, and sexual orientation, these biases are reflected in AI answers. Whether it is the data used to train the model or the data the model is accessing, the foundation of AI can lead to biased results. Remember, AI is programmed to be a complex autocomplete tool and give you the most plausible answer, not a true or false answer. The toy example below makes this concrete.
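The sketch builds word-pair (bigram) counts from a deliberately skewed, made-up corpus; the "most plausible" next word then simply mirrors the skew in the data. This is a simplified illustration of the principle, not how production LLMs are built.

```python
from collections import Counter, defaultdict

# A deliberately skewed toy corpus: the bias in the data, not any rule
# in the code, is what produces the biased completion.
corpus = (
    "the nurse said she would help . " * 9
    + "the nurse said he would help . "
).split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word -- 'most plausible', not 'true'."""
    return bigrams[word].most_common(1)[0][0]

print(bigrams["said"])       # Counter({'she': 9, 'he': 1})
print(autocomplete("said"))  # 'she' -- the model mirrors its training data
```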

How to address bias in AI 

While we cannot always influence large measures like corporate accountability or government oversight, there are steps you can take to get better answers from generative AI.

  • Critically evaluate all AI responses. Never assume the answer AI has provided is true! Remember, the goal of generative AI is not to give you a true answer, but the answer that is most probable.  
  • Diversify your sources! Never rely on a single source for information. Verify AI responses against authoritative sources, and consult different types of sources for accuracy.
  • Make AI work for you! Don't just take the first answer generated; make the system work to get an answer that is truthful and helpful. Rewrite your prompts, try different keywords, and ask questions about the responses you get.
  • Check your AI settings and Terms of Use. Some programs, like Google AI Studio, allow you to change the temperature of AI responses, i.e., how creative the answer is allowed to be (see the sketch after this list). Make sure you know what guidelines your output follows, as well as the privacy and use terms around what you input.
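For readers curious where a setting like temperature actually lives, here is a sketch using Google's google-generativeai Python package (the library behind the Gemini models available in Google AI Studio). The model name, prompt, and API-key handling are placeholders, and the exact interface may differ between library versions, so treat this as an assumption-laden example rather than a definitive recipe.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; store real keys securely

model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

# Lower temperature -> more deterministic, conservative wording;
# higher temperature -> more varied, more "creative" wording.
for temp in (0.0, 1.0):
    response = model.generate_content(
        "Describe a nurse's typical workday in one sentence.",
        generation_config=genai.GenerationConfig(temperature=temp),
    )
    print(f"temperature={temp}: {response.text.strip()}")
```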

If you would like to learn more about bias in AI, check out the links below.  

Reflection Questions 

  1. Given the ethical considerations listed above, which ones will guide your AI use the most? 
  2. How can you determine when, where, and how AI use is appropriate? Are there other methods you could use to complete a task you might otherwise outsource to AI? What are the potential benefits and costs of completing that task using AI?
  3. What strategies can you use to critique and evaluate AI-generated information? How can you assess whether the information generated by AI is accurate?  
  4. AI tools have largely been trained on information readily available on the internet. What types of biases, harmful views, or stereotypes might be present in that type of content?  
  5. Who is responsible for errors or misinformation generated by AI? Have you encountered any AI tools that are up-front about how they ensure the accuracy of their AI? Do you think the companies that manage AI tools have an incentive to ensure the tools are accurate? 
  6. Who is promoting the use of AI most frequently and loudly? What might their interests be in doing so? 
  7. What are the benefits you see of using AI? Are those benefits available equally to all people? When has AI enhanced your learning experience? When has AI use compromised or conflicted with your learning experience? 
  8. How do you think you will use AI this year? When will you refrain from using AI this year?