The questions below were all raised during development trials for AI-Safe:
1. Is it actually possible to have non-exam assessments which are 100% safe from generative AI?
Our answer: While no assessment can be guaranteed 100% safe, a high degree of security can be achieved through careful assessment design. By implementing features based on our conceptual framework, an assessment designer can add layers of protection against AI misuse.
2. Where does academic integrity feature in this design tool?
Our answer: Integrity is about honesty, trust, and responsibility in academic practice, whereas security is about ensuring that assessment is valid and reliable. In this design tool, security means preventing illegitimate AI use by students for the purpose of passing their assessments.
3. Why can’t students use AI in an ethical way for their assessment work?
Our answer: We can see no reason why AI should not be used in assessments where this is legitimate and appropriate. In fact, as AI becomes more widespread in industry and the wider world, its use by students in their assessments will become essential for the validity of learning and teaching. [1]
4. Can your design tool be used as a checklist of criteria for AI safety?
Our answer: The guiding questions in AI-Safe are not intended as criteria, nor should they be used as such. Although designed to be applied to assessment tasks (or problem-based activities) in as wide a range of disciplines as possible, they cannot be made equally relevant to all assessments. [2]
5. What should I do when my answers to questions in the design tool are negative?
Our answer: If the questions are relevant to the assessment, negative answers likely indicate gaps in security that could be addressed through more effective design. The questions are meant as a starting point for reflecting on how such improvements can be made. [3]
6. Why is getting students to critique AI-generated material not the simplest way to have secure assessments?
Our answer: This might be a viable solution, provided it is valid. In a vocational or professional assessment, there must be a direct correlation between the task requirements and the way that AI is used in the real world. If such an assessment does not accord with authentic practice, it is likely to be seen by students and employers as irrelevant. [4]
7. Why not just focus on how well the students’ work is supported by evidence?
Our answer: As with the previous question, the answer will depend on the authenticity of what learners are required to do. If this accords with established practice in relevant, non-academic contexts, then the quality and transparency of supporting evidence may well be a determining factor in assessing performance. [5]
8. Why did you not include critical thinking as one of your key concepts?
Our answer: It is true that critical thinking is often recommended as an assessment requirement in relation to AI. But while critical skills are highly valuable in work and study, including them in your instructions and rubrics cannot in itself prevent the misuse of AI. It is the context for these skills that will provide security. [6]
9. How does your design tool correlate with other frameworks or official sources of advice?
Our answer: We have shown how our concepts relate to principles and propositions from a number of educational bodies in relation to the following: context and authenticity; collaboration; process and generativeness. You will find examples of this overlap here.
10. Could we not have templates or exemplars that show what AI-safe assessments look like?
Our answer: Because we are dealing with risk, there isn't one problem to be addressed but a multiplicity of potential misuses of AI. Hence, we cannot provide a simple solution or template. Furthermore, what is authentic and appropriate in assessment design will vary according to the discipline, subject area, educational context, learning outcomes, assessment aims, etc. [7]
[1] Of course, this requires assessment criteria that are able to distinguish between actual student work and any other work done by AI for assessment purposes. Our design tool can help to identify ways in which this may be done.
[2] But while some questions may not be relevant to a particular assessment, this need not compromise its security overall.
[3] There are also suggestions for design improvement after each set of guiding questions in the downloadable version of the design tool, here.
[4] It might even be misinterpreted as a critique of AI itself (instead of its products).
[5] This does not mean that collaboration, process, and generativeness are any less important or useful.
[6] You will find a fuller explanation on page 2 of The thinking behind our conceptual framework, here.
[7] Instead, we have illustrated the application of individual concepts in our framework with short and simple examples applicable to different vocational disciplines. You will find them on page 2 of How to use AI-Safe, here.