AI Assessment Design Security

Designing assessments that survive in an era of generative AI.

Overview

Detection alone is not a strategy. The defensible response to generative AI in assessment is to design tasks that either render AI assistance irrelevant or explicitly incorporate AI use as a competency to be demonstrated and evaluated.

Poios provides three complementary capabilities. First, a risk analysis tool that scores any draft assessment for its vulnerability to AI-assisted completion, with specific findings and recommendations. Second, an alternative design generator that proposes oral, in-person, supervised, or scenario-anchored alternatives appropriate to the discipline. Third, a probabilistic detection layer for submitted work, used as a moderation aid rather than an autonomous accusation engine.

Detection is treated as an indicator for human review, never as evidence of misconduct.

Capabilities

What this service provides

Design risk scoring

Quantitative AI-vulnerability score for any draft assessment task.

Alternative generation

Discipline-aware suggestions for integrity-assured assessment redesign.

Detection-as-indicator

Probabilistic AI-content signals surfaced for marker review, never auto-actioned.

Policy alignment

Workflows aligned to TEQSA Guidance Note on Academic Integrity (2022).
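To make the "design risk scoring" output concrete, the sketch below shows one plausible shape for a risk report: a numeric vulnerability score plus specific findings with recommendations, as the overview describes. Everything here is illustrative — the names (`Finding`, `RiskReport`, `score_assessment`) and the toy keyword heuristic are assumptions, not the actual Poios scoring model.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One vulnerability finding for a draft assessment task (hypothetical shape)."""
    category: str        # e.g. "unsupervised written output"
    severity: str        # "low" | "medium" | "high"
    recommendation: str  # specific redesign advice for the task author

@dataclass
class RiskReport:
    """Illustrative result of a design risk scoring pass."""
    score: float                 # 0.0 (AI-resistant) .. 1.0 (highly vulnerable)
    findings: list[Finding] = field(default_factory=list)

def score_assessment(task_text: str) -> RiskReport:
    """Toy heuristic: flag unsupervised written tasks with no oral component.

    A real scorer would analyse the task far more deeply; this only
    demonstrates the score-plus-findings report structure.
    """
    findings = []
    lowered = task_text.lower()
    if "essay" in lowered and "oral" not in lowered:
        findings.append(Finding(
            category="unsupervised written output",
            severity="high",
            recommendation="add an oral defence or in-person component",
        ))
    # Baseline risk plus a penalty per finding, capped at 1.0.
    score = min(1.0, 0.2 + 0.4 * len(findings))
    return RiskReport(score=score, findings=findings)

report = score_assessment("Write a 2000-word essay on market failure.")
print(report.score, [f.category for f in report.findings])
```

The point of the structure is that the score is never surfaced alone: each finding carries a recommendation, which is what feeds the redesign workflow described above.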

Aligned to

Standards and frameworks

This service is engineered to satisfy specific Australian higher education regulatory and academic standards.

  • HES Framework Standard 5.2
  • TEQSA Guidance Note: Academic Integrity (2022)
  • TEQSA Request for Information on Generative AI
