1. What is the purpose of York College's new AI computing system?
The new AI computing system at York College aims to create a secure and collaborative environment for machine learning (ML) research and educational activities. This system will provide faculty and students access to high-performance computing resources for AI model development and research while ensuring data security, transparency, and responsible usage.
2. Who can access the AI computing system, and how?
Access to the AI computing system is granted to authorized personnel, including faculty, staff, and approved external consultants, whose roles require interacting with the system for academic, research, or administrative purposes. To request access, individuals must submit a formal request through the "AI System Access Request Form" outlining their justification for access, the nature of the data they need, and their required access level. Outside entities granted access to College AI systems must first agree to apply the same or greater standards of confidentiality to academic records, as provided for under the Family Educational Rights and Privacy Act (FERPA). The AI Oversight Committee, in collaboration with the IT department, reviews and approves access requests based on job roles, responsibilities, and the sensitivity of the data being requested.
3. What are the different access levels to the AI system?
The AI computing system uses a tiered access control system, granting different levels of access based on roles and responsibilities:
- Read-Only Access: Users with this level can view data and reports but cannot make changes or download data.
- Data Analysis Access: Users can analyze data, run queries, and generate reports but cannot modify system configurations or access sensitive data without further approval.
- Full Administrative Access: This level, granted only on a need-to-know basis, allows users to make system changes, manage users, and access sensitive data.
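The tiered model above can be pictured as a mapping from access level to permitted actions. The sketch below is purely illustrative (the tier names follow the FAQ, but the permission sets are assumptions, not the system's actual configuration):

```python
# Hypothetical sketch of the tiered access model described above.
# Tier names follow the FAQ; the permission sets are illustrative only.
TIER_PERMISSIONS = {
    "read_only": {"view_data", "view_reports"},
    "data_analysis": {"view_data", "view_reports", "run_queries",
                      "generate_reports"},
    "full_admin": {"view_data", "view_reports", "run_queries",
                   "generate_reports", "modify_config", "manage_users",
                   "access_sensitive_data"},
}

def is_permitted(tier: str, action: str) -> bool:
    """Return True if the given access tier allows the requested action."""
    return action in TIER_PERMISSIONS.get(tier, set())
```

For example, `is_permitted("read_only", "run_queries")` returns `False`, reflecting that Read-Only users cannot analyze data without a higher tier.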
4. What security measures are in place to protect data in the AI system?
Protecting data confidentiality, integrity, and availability is a top priority for the AI computing system. Key security measures include:
- Data Classification: Data is classified based on sensitivity, with appropriate safeguards applied according to CUNY’s Data Classification Standards.
- Encryption: Data is encrypted both in transit and at rest using industry-standard encryption protocols.
- Access Control: Multi-factor authentication (MFA) is mandatory for all users accessing the system, ensuring only authorized individuals can access sensitive data.
- Monitoring and Auditing: The system is continuously monitored for unauthorized access or unusual activities. Detailed audit logs are maintained to track data access and system modifications.
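One common way to make audit logs like those described above tamper-evident is to chain entries together with cryptographic hashes, so that any later modification of an entry breaks the chain. The following is a minimal sketch of that idea, not the College's actual implementation:

```python
# Illustrative hash-chained audit log: each entry records the hash of the
# previous entry, so altering any past entry invalidates the chain.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list, user: str, action: str, resource: str) -> None:
    """Append a signed-by-hash entry recording who did what to which resource."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every hash; return False if any entry was modified."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        payload = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

This complements, rather than replaces, the continuous monitoring the policy requires: the chain detects after-the-fact tampering, while monitoring catches unauthorized access as it happens.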
5. What are the rules regarding data use within the AI system?
All users are responsible for handling data ethically and responsibly, adhering to York College’s Acceptable Use Policy and CUNY’s policies. Misuse of data, including unauthorized access, distribution, or use for purposes beyond approved research or operational activities, may result in disciplinary and/or legal actions.
6. How are changes to the AI system managed and implemented?
Any modifications to the AI system, including updates to algorithms, configurations, or datasets, must follow a strict Change Management procedure. This process involves:
- Formal Change Request: A detailed request explaining the reason for the change, its scope, and potential impacts on security, privacy, and compliance must be submitted to the AI Oversight Committee.
- Risk Assessment: Each change undergoes a thorough risk assessment to evaluate its potential impact on the system and data.
- Approval and Implementation: The AI Oversight Committee approves or denies changes based on the risk assessment. All approved changes are documented, implemented, and audited to maintain transparency and accountability.
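The change-management steps above form a simple workflow: a request is submitted, assessed for risk, approved or denied, then implemented and audited. A minimal sketch of such a workflow, with state names assumed for illustration, might look like:

```python
# Hypothetical sketch of the change-management flow: submitted ->
# risk assessment -> approved/denied -> implemented -> audited.
# Every transition is recorded, mirroring the documentation requirement.
from dataclasses import dataclass, field

VALID_TRANSITIONS = {
    "submitted": {"under_risk_assessment"},
    "under_risk_assessment": {"approved", "denied"},
    "approved": {"implemented"},
    "implemented": {"audited"},
}

@dataclass
class ChangeRequest:
    description: str
    status: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Move to a new status only if the workflow permits it."""
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status
```

Note that the state machine refuses shortcuts: a request cannot be approved without first passing through risk assessment, which is exactly the guarantee the procedure above is meant to provide.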
7. What training is required for users of the AI system?
All personnel granted access to the AI computing system must complete mandatory training. This training covers essential topics such as:
- AI ethics and responsible use
- Data privacy principles and regulations
- Cybersecurity awareness and best practices
- Data handling and security protocols specific to the AI system
The training ensures that all users understand their responsibilities regarding data sensitivity, compliance requirements, and the potential risks associated with handling sensitive information. A link to the training will be posted when it is available.
8. What is the role of the AI Oversight Committee?
The AI Oversight Committee plays a crucial role in overseeing the development, deployment, and ongoing management of the AI computing system. The committee's responsibilities include:
- Establishing Ethical Standards: Defining ethical guidelines for the development and use of AI within the college.
- Ensuring Compliance: Overseeing the AI system's compliance with all applicable legal, ethical, and institutional policies.
- Data Security and Privacy: Establishing and enforcing data security and privacy protocols to protect sensitive information.
- Monitoring Performance and Impact: Continuously monitoring the performance and impact of AI models used within the system.
- Access Control and Approvals: Reviewing and approving access requests to the AI system based on roles, responsibilities, and data sensitivity.
- Change Management Oversight: Reviewing and approving proposed changes to the AI system to ensure security and compliance.