Artificial Intelligence
AI technology/programs have rapidly become common tools to assist with work and productivity. WSSB recognizes the opportunities these tools create, as well as the risks.
Washington State School for the Blind (WSSB) staff and students, particularly those working with personal information, need to consider the risks of using artificial intelligence (AI) tools like ChatGPT.
As with any software, anyone who wishes to use one of these tools (other than those already included in Microsoft, Google, and similar approved platforms) must request a security design review from the Department of Technology Services (dots@wssb.wa.gov) to ensure the software is safe to use.
WSSB Guidance on Artificial Intelligence
Users of ChatGPT and similar artificial intelligence (AI) technology or AI programs must not enter, search, or otherwise incorporate any non-public information into these tools, including but not limited to personal identifying information (information that can be used to distinguish or trace an individual's identity, either alone or when combined with other information that is linked or linkable to a specific individual). The use of non-public data with these services may result in unauthorized disclosure to the public and may expose the user and WSSB to legal liability and other consequences.
For staff, AI tools and AI bots must not be used to make consequential decisions about a student's education. These tools can produce false positives as well as false negatives. Our duty of care requires that no individual be adversely affected by AI-generated outcomes in regard to education, health, employment, or housing.
Guidelines from OSPI
More information can be found on OSPI's AI web page (https://ospi.k12.wa.us/ai).
Potential Risks That Need to Be Mitigated When Using AI in Education
- Increasing and/or creating inequitable learning environments
- Unauthorized access to protected user information and unauthorized data collection
- Perpetuating institutional and systemic biases
- Plagiarism and academic dishonesty
- Over-relying on technology and undermining the importance of human intelligence in education
Artificial intelligence tools provide opportunities, benefits, and potential risks. It is the responsibility of every parent/guardian, policymaker, teacher, administrator, and support staff member to regularly review the use of this transformative technology to ensure that equity of access, data privacy, and safe and ethical usage are maintained at all levels. It is equally critical that LEAs embrace AI, teach students what AI is and is not, and show them how to use AI technologies to enhance learning rather than preventing students from developing the critical skills needed to graduate with technological literacy.
Guidelines and Policy
The National Institute of Standards and Technology (NIST) AI Risk Management Framework and the TeachAI Toolkit serve as foundations for OSPI's guiding principles on the use of AI in education.
- Human-Centered Approach to AI: A human-centered AI learning environment is one that prioritizes the needs, abilities, and experiences of students, teachers, and administrators.
- Implementing AI in Student Learning: Empower students to actively shape their learning experience with AI by allowing them control over how and to what extent AI is integrated into their education.
- Sensitive or Confidential Data: District policies must comply with student/personal privacy and data protection laws for the use of all AI tools and resources.
Educational policymakers must focus on ensuring that the use of AI increases the public good, with emphasis on equity and inclusion.
WaTech Interim Guidelines for state agencies
Some of the interim guidelines from the State of Washington that apply to WSSB are below (WaTech guideline EA-01-01-G; State CIO adopted August 8, 2023; sunset review August 8, 2026).
Principles
The intention of the state of Washington is to follow the principles in the NIST AI Risk Management Framework, which serve as the basis for the guidelines in this document. A foundational part of the NIST AI Risk Management Framework is to ensure the trustworthiness of systems that use AI.
The guiding principles are:
- Safe, secure, and resilient: AI should be used with safety and security in mind, minimizing potential harm and ensuring that systems are reliable, resilient, and controllable by humans. AI systems used by state agencies should not endanger human life, health, property, or the environment.
- Valid and reliable: Agencies should ensure AI use produces accurate and valid outputs and demonstrates the reliability of system performance.
- Fairness, inclusion, and non-discrimination: AI applications must be developed and utilized to support and uplift communities, particularly those historically marginalized. Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination.
- Privacy and data protection: AI should be used to respect user privacy, ensure data protection, and comply with relevant privacy regulations and standards. Privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. Privacy-enhancing AI should safeguard human autonomy and identity where appropriate.
- Accountability and responsibility: As public stewards, agencies should use generative AI responsibly and be held accountable for the performance, impact, and consequences of its use in agency work.
Guidelines
Fact-checking, Bias Reduction, and Review
All content generated by AI should be reviewed and fact-checked, especially if used in public communication.
State personnel generating content with AI systems should verify that the content does not contain inaccurate or outdated information or potentially harmful or offensive material.
When consuming AI-generated content, be mindful of the potential biases and inaccuracies that may be present.
Disclosure and Attribution
AI-generated content used in official state capacity should be clearly labeled as such, and details of its review and editing process (how the material was reviewed, edited, and by whom) should be provided. This allows for transparent authorship and responsible content evaluation.
Training
There are many training options available to WSSB staff through Safe Schools, SumTotal, and LinkedIn Learning. If you would like a LinkedIn Learning account, please send an email to StaffTraining@wssb.onmicrosoft.com.