AI Ethical Use Standards

Purpose

The purpose of the AI Ethical Use Standards is to promote the safe, transparent, and responsible use of artificial intelligence (AI) at South Texas College. These standards serve as a guide for ethical decision-making, ensuring that AI enhances trust, supports learning, and advances innovation while protecting individual rights and academic integrity. By aligning innovation with institutional values, these standards help safeguard both the College community and the integrity of its educational mission.

Policy Authority: These AI ethical use standards are issued pursuant to Policy CR - Technology Resources and Policy CS - Information Security.

Definitions

As used herein, the following terms shall have the meanings assigned below:

“AI Plugin” means an add-on or extension to an AI tool that enhances its capabilities.

“AI Tool” refers to technologies, including but not limited to machine learning models, large language models (LLMs), and various data-processing systems, that generate distinct responses or outputs based on user inputs.

“Artificial Intelligence (AI)” refers to computer systems that perform tasks that would otherwise require human intelligence. This definition also includes Generative AI, which involves creating new content such as text or images.

“Autonomous AI Bots or Agents” means AI systems capable of performing tasks independently of human intervention, such as chatbots, notetakers, or virtual assistants.

“Confidential Information” means any non-public data that, if disclosed, could compromise institutional operations, privacy, or security. This includes internal communications, student data, and sensitive operational details.

“Human Oversight” means the requirement for human review in AI-assisted decisions, especially those with ethical, legal, or operational consequences.

“Intellectual Property Laws” means laws, such as copyright, patent, and trademark, that protect original works. AI-generated content must respect these laws.

“Large Language Model (LLM)” refers to a type of artificial intelligence that understands and generates human-like text. These models are trained on massive datasets of text, allowing them to perform a wide variety of complex tasks such as answering questions, summarizing, and translating content. Examples include Microsoft Copilot, ChatGPT, and Gemini.

“Personally Identifiable Information” means information that can be used to identify individuals, such as names, addresses, student IDs, social security numbers, or biometric data.

“Reasonable Expectation” means a belief that an outcome or condition will occur, based on what an average, rational person would anticipate under similar circumstances.

“Security Vetting” means the formal review process by which IT and the Information Security office assess an AI tool’s security risks, vulnerabilities, and compliance with existing institutional standards before it is approved for use within the STC network or with the use of an STC account.

“STC Information Security and IT Compliance Standards” refer to the institutional protocols that govern the secure use of technology and data, ensuring compliance with legal and ethical obligations.

“Users” refer to individuals who interact with institutional systems, technologies, or resources. This includes, but is not limited to, faculty, staff, administrators, students, and vendors.

Standards

Users are responsible for adhering to the following AI ethical use standards and related compliance requirements when accessing or utilizing AI technologies within the STC network or with the use of an STC account.

  1. Data Protection
Users must not upload to any AI tool, or provide any AI tool access to, confidential information, personally identifiable information, or any data protected by law from public disclosure unless STC information security and IT compliance standards have first been met. (See Policy CS - Information Security)
  2. Network Security
    AI tools must comply with existing STC IT security procedures. Any new AI tool should undergo security vetting before being accessed through the STC network or used with an STC account. All AI plugins must be evaluated for security vulnerabilities and undergo regular security assessments to prevent unauthorized access or data leakage. (See Policy CS - Information Security)
  3. Responsibility
Users will be responsible for ensuring the accuracy and reliability of any AI-generated content before using it at the College. Users deploying autonomous AI bots or agents are also responsible for monitoring those tools and ensuring that their actions comply with South Texas College policies and these AI Ethical Use Standards.
  4. Transparency
    • Generation of Content
Where use of an AI tool is authorized, a user will disclose their use of AI through attribution when sharing AI-generated content if there is a reasonable expectation that the user personally created the content.
    • AI Use Disclosure in Data Collection
A user who intends to employ AI to gather information about an interaction with others will disclose that use of AI and, where feasible, will provide the other parties an opportunity to opt out of the data collection. AI use disclosure will also be made where legally required.
  5. Care for Others
When deciding to use an AI tool, a user will take responsibility for considering the impact that the tool’s use will have on others. For example, AI may be used to enhance learning, but care should be taken not to undermine it.
  6. Care for the Environment
    AI tools with demonstrated sustainability practices should be favored and used in a way that balances the benefits of these tools with the impact they have on the environment.
  7. Accessibility
    AI tools that offer greater compliance with accessibility standards and that support assistive technologies should be favored over other comparable AI tools. (See Policy CR - Technology Resources)
  8. Bias & Fairness
AI tools must be evaluated for bias, and tools that minimize bias in their operation and in the content they generate should be favored over other comparable AI tools.
  9. Bots and Agents
    AI bots, chatbots, and agents must be used with clear accountability, regulatory compliance, and safeguards to prevent misuse.
  10. Intellectual Property
    Use of AI tools must comply with applicable intellectual property laws and uphold South Texas College AI Ethical Use Standards in the use of AI-generated content. Additionally, proper credit must be provided to authors when their works are used, referenced, or shared. (See Policy CT - Intellectual Property)
  11. Human Oversight
    All AI-assisted decisions must include human review or intervention, particularly in cases with ethical, legal, or operational implications.
  12. Explainability & Auditability
    AI tools and systems that are explainable and whose operations are auditable enable human oversight. These tools should be favored over other comparable AI tools.
  13. Monitoring & Reporting
    AI tools that enable the establishment of monitoring processes, reporting channels, and performance tracking to detect misuse and ensure accountability will be favored over other comparable AI tools.
  14. Training & Education
    When deploying an AI tool, arrangements should be made to provide users with appropriate training to ensure the proper operation and human oversight of the tool. Users are encouraged to receive training on the ethical use, risks, and responsibilities of institutionally endorsed or licensed AI tools to support informed and responsible use. (See Generative Artificial Intelligence Portal)
  15. Vendor & Third-Party Standards
AI vendors and third-party AI providers must comply with the College’s procurement requirements. When working with vendors, agreements must define roles, responsibilities, ownership, and usage rights of AI models and their data. AI vendors and third-party AI providers that best enable users to comply with these Ethical Use Standards will be favored over others.

Level of Risk

AI-related risks vary depending on how and where the technology is used. Some situations pose minimal risk, while those involving sensitive data, network security, or major decisions impacting others require greater caution and oversight. Users must be transparent about AI use, protect privacy, uphold academic integrity, and apply human judgment. As risk increases, the need for human oversight and documentation also rises.

Disclaimer

AI technology is rapidly evolving, and these standards may not anticipate every scenario. In instances where issues arise that fall outside the scope of current standards, the South Texas College AI Ethics Committee will evaluate and address each case and make appropriate adjustments to these standards. 

 
