AI Policy
At Creart Digital Media, we embrace artificial intelligence as a tool to support our vision of shaping immersive and critical technology for the betterment of humanity. Our use of AI is grounded in ethical integrity, human-centred design, and positive social impact, with people always responsible for decisions, outcomes, and accountability.

This policy applies to all team members, contractors, volunteers, and partners who use or interact with AI tools or AI-assisted outputs on behalf of Creart Digital Media.

Responsible AI Principles
Creart applies the following principles to all AI use:
Human-led use
AI supports creativity, research, and execution, but does not replace human judgement or responsibility.
Privacy and IP protection
Confidential information, intellectual property, and personal data are protected at all times.
Transparency and accountability
AI use is documented, reviewable, and owned by named humans.
Ethics and wellbeing
AI must be used in ways that respect people, communities, diversity, and social wellbeing.

Team AI Use Policy
Team members may use AI only via Creart-approved tools, and only as a human-in-the-loop support for research, ideation, and execution. Creart intellectual property, confidential information, personal data, client data, and unpublished materials must not be uploaded to any AI system. AI must not be used to misrepresent authorship, automate decisions without human oversight, or bypass legal, ethical, or contractual obligations. All AI outputs must be reviewed and verified before use, and responsibility for them remains with the human user.

Use in Education, Youth, and Public Sector Contexts
When working with schools, young people under 18, or public sector partners, Creart applies stricter safeguards. AI is not used for automated profiling, assessment, or decision-making about individuals, and no student, participant, or community member data is entered into AI systems. All AI-assisted activities are designed, supervised, and delivered by qualified humans in line with child-safe standards, education policies, and public sector governance requirements.

Governance and Oversight
Creart maintains transparent governance of AI use by:
- Maintaining a record of approved AI tools and permitted use cases
- Assigning clear human accountability for AI-assisted work
- Requiring review and validation of AI outputs before use or release
- Periodically reviewing AI practices as guidance, standards, and risks evolve

Creart’s approach to responsible AI and technology use is informed by guidance from the Australian eSafety Commissioner and the National AI Centre, including Australia’s AI Ethics Principles and the Guidance for AI Adoption.

Creart provides guidance to support responsible AI use and reviews this policy regularly, updating it as legal, regulatory, or operational requirements evolve.