Institute of Professional Investigators Training Centre

INSTITUTE OF PROFESSIONAL INVESTIGATORS TRAINING ACADEMY

Artificial Intelligence (AI) Policy

Approved by the Board: 30th January 2025

 

1.    Purpose

This Policy defines the Institute of Professional Investigators’ (IPI) approach to the responsible management and utilisation of Artificial Intelligence (AI) within its training and assessment processes. It outlines the guiding principles and obligations of IPI as an approved training and assessment centre, while also establishing clear expectations for Learners, Tutors, Assessors and external contractors.

 

2.    Definitions

Artificial Intelligence (AI) refers to the capability of a computer or computer-controlled system to perform tasks typically requiring human intelligence. In the context of training and assessment, AI tools can assist learners in gathering information, generating content, or even creating assessments. All stakeholders involved in qualifications and End-Point Assessments (EPA) should be aware that AI tools are continually evolving and may produce inaccurate or inappropriate content.

The Office of Qualifications and Examinations Regulation (Ofqual) defines AI systems by two key characteristics:

- Adaptivity: AI systems learn and evolve, performing tasks not explicitly programmed by humans. This makes it difficult to determine the rationale behind AI outputs.
- Autonomy: AI systems make decisions without human intervention, presenting challenges in assigning responsibility for outcomes.

 

3.    Use of AI in Assessments

AI chatbots and other AI tools can generate responses to prompts based on their training data, enabling them to perform tasks such as:

- answering questions and summarising information;
- generating essays, reports, and articles;
- writing computer code;
- translating text across languages;
- providing creative suggestions and ideas;
- producing text with specified attributes such as tone or sentiment.

Conditions of Use: AI tools may only be used in assessments if explicitly permitted under the assessment criteria. The assessment criteria for the IPI Level 3 Foundation Investigation Course exercises do NOT allow the use of AI or other such tools to complete the exercises. Learners must demonstrate that all submissions, through to the final submission, are the result of their independent effort and critical thinking.

 

4.    AI Misuse

Misuse of AI in assessments constitutes malpractice and may result in severe penalties, including disqualification or exclusion from qualifications and EPA. Misuse includes, but is not limited to:

- copying or paraphrasing AI-generated content without proper attribution;
- submitting AI-generated responses as original work;
- using AI tools to complete assessments without demonstrating independent knowledge or analysis;
- failing to declare the use of AI tools when used as a source of information;
- providing incomplete or misleading references related to AI-generated content.

 

5.    Acknowledging AI Use

If a learner uses AI tools that provide sources, these must be verified and cited in accordance with standard referencing practices. Where AI-generated content lacks citation, learners must independently verify and appropriately reference any factual information used.

Learners must provide a clear acknowledgment of AI use, detailing:

- the name of the AI tool used (e.g., ChatGPT 4.0);
- the date of content generation;
- a record of the AI-generated output in a non-editable format (e.g., screenshots);
- a brief explanation of how AI was used to support their work.

Failure to provide sufficient documentation may result in the work being investigated under the IPI Malpractice Policy.

Example of an AI acknowledgment: ChatGPT 4.0 (https://chat.openai.com), generated on 22/01/2025.

 

6.    Roles and Responsibilities

Learners: must be familiar with and adhere to IPI’s AI and Malpractice Policies; must ensure their work is their own and declare AI use if and when applicable.

Centre Tutors, Assessors, and Internal Quality Assurers (IQAs): must implement and enforce this Policy within their centre; must verify that learners appropriately acknowledge AI usage and ensure assessment integrity.

Independent Assessors (IAs): must ensure compliance with IPI AI policies and support the detection and investigation of AI misuse.

  

7.    Regulatory Compliance

This Policy aligns with regulatory requirements as outlined by Ofqual, Qualifications Wales, and CCEA Regulation. The relevant regulatory conditions include: C2, E4, H1, H2, and J1.

 

8.    Policy Review

This Policy will be reviewed regularly as part of IPI’s self-evaluation process and will be updated in response to feedback, regulatory changes, and advancements in AI technology. 

 

9.    Conclusion

The responsible use of AI in training and assessment is essential to maintaining the integrity, credibility, and reliability of IPI qualifications. While AI presents significant opportunities to enhance learning, it must be used ethically and transparently, ensuring that learners’ achievements accurately reflect the skills and knowledge gained through their studies. IPI remains committed to upholding the highest standards of assessment by providing clear guidance on the use and misuse of AI, supporting both learners and educators in navigating this evolving landscape.

 

This policy was updated on 9th February 2026.

© IPI 2025.