Reconstructing Constructive Alignment: The CSR Framework
Part 1 of 3: Curriculum Architecture for AI-Aware Pedagogy
Why your AI policy won’t save you (but curriculum architecture can)
Your institution probably has some sort of AI policy telling students how they may use it. It might say “students may use AI for research but not for writing” or “AI is permitted with proper citation.” Maybe it bans AI entirely in certain units while allowing free use in others, or it delegates to the unit level: “you must adhere to the AI guidelines for your assessment.” In the unit guide you might find “this is a Level 3 task that allows AI-assisted editing” but no way to determine whether the student crossed the boundary.
None of these solve your actual problem, because the problem is “how do you know what students can actually do?” rather than “how do you restrict how they do it?”. Traditional assessment structures assumed that graded work reliably verified student capability. Submit an essay, get a grade. Sit an exam, get a grade. Complete a placement, get a grade. That grade means something about what you know and can do, and to some extent, who you are: the “Distinction-average student”.
That structure is now broken, and it’s not because students are cheating. Some students always cheated, but the environment of learning has now become one where cognitive labour can be distributed, to some extent, between humans and machines in almost every context. My particular contexts span creative industries and human science, with a dash of IT, business and Law, at two Australian institutions, SAE University College and ACAP University College.
The problem is not just student behaviour change. As Dawson et al. (2024) argue, assessment validity—whether we’re actually testing what we claim to test—matters more than controlling the means of production. We face a fundamental epistemological challenge: grades once signified demonstrated capability, but in AI-capable contexts, they might represent some combination of student, AI, and their collaborative capability that we cannot disentangle.
This is the first in a three-part series presenting a complete curriculum architecture for professional education in AI-capable contexts. This piece introduces the CSR (Context-Standard-Range) Framework, which provides a developmental method for building programmatic structure that separates work that proves capability from work that builds it. I pronounce it “Caesar”, and it builds on more than ten years of work at SAE (McMillan & Webber, 2021).
Part 2 will introduce the SRCIT Framework (Strategic, Relational, Creative, Interpretive, Technical task modes), showing how generic AI literacy training is insufficient and what mode-switching professional relationships with AI look like. I pronounce this “Circuit”. Part 3 brings both frameworks together for implementation.
This series is designed for programmes with OR without statutory registration scaffolding. If your programme has professional accreditation bodies (nursing, psychology, social work), those create accountability but not curriculum architecture. If you don’t have regulatory frameworks (creative industries, business, criminology), you need to build that architecture yourself. CSR×SRCIT provides it either way.
I’ve pointed out the crack in the sandstone, but let’s pick at it a bit.
The Architectural Failure Traditional Assessment Creates
Constructive alignment transformed higher education curriculum design. Biggs’ (2014) insight was elegant: learning outcomes, teaching activities, and assessment tasks should align in service of demonstrable student capability. If your outcome says “critically analyse,” your assessment must require critical analysis, not recall or description. And you should teach it.
This worked beautifully in theory. In practice, units designed in isolation often lost the “constructivism” element, that connection to course-level knowledge construction. CSR addresses this by forcing explicit links back to Course Learning Outcomes. But even when we get constructive alignment right, the release of accessible LLMs exposed deeply held assumptions that were a bit shaky to start with:
Assessment tasks are completed by individual students
Graded outputs represent individual student capability
Scalar grades meaningfully differentiate performance quality
The GPA aggregates into overall capability signal
In AI-capable contexts, none of these assumptions work any more.
Problem 1: Authorship
When a student submits any written piece or creative work, you cannot know, even with detection tools, process documentation, or an oral defence component, exactly which cognitive work was done by the student unaided. This is the fundamental ambiguity that AI amplifies in the relationship between output and capability. The student might genuinely understand the material and have used AI extensively. Or they might not understand it and have used AI minimally. The output is decoupled from the capability.
Problem 2: False precision in grading
You award 78% for the submitted work. Or a Distinction, if that’s how you roll.
This is supposed to mean something about the student’s demonstrated capability relative to the criteria. In AI-capable contexts, it might mean:
The student’s capability (they did it independently)
The student’s capability + AI capability (they collaborated effectively)
Mostly AI capability + the student’s judgement about what to submit
The student’s ability to manage AI + their subject knowledge
Some combination of the above that you cannot determine
When you calculate a GPA from these ambiguous numbers, you’re aggregating noise, and the precision is false. Even on a 7-, 5- or 3-point grading scale, if there’s a calculation, it inherits the ambiguity.
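To see the false precision in miniature, here is a small simulation sketch; it is not part of the CSR framework, and every number in it (the AI capability range, the hidden delegation share, the abilities of the two students) is an illustrative assumption. The point is that once the human/AI mix is hidden, students with very different independent capability produce overlapping grades.

```python
import random

random.seed(1)

def observed_grade(student_ability: float) -> float:
    """One graded artefact: an unknowable mix of student and AI capability.

    The mixing weight is hidden from the assessor; that hidden weight is the
    ambiguity listed above. All numbers are illustrative assumptions.
    """
    ai_ability = random.uniform(60, 95)   # what the tool contributes
    ai_share = random.uniform(0.0, 0.9)   # hidden: how much work was delegated
    return ai_share * ai_ability + (1 - ai_share) * student_ability

# Two students with very different independent capability...
weak_grades = sorted(round(observed_grade(50.0)) for _ in range(8))
strong_grades = sorted(round(observed_grade(85.0)) for _ in range(8))

# ...produce overlapping grade distributions, so a single 78 cannot be read
# back to either capability, and a GPA built from such numbers inherits
# the same ambiguity.
print("independent capability 50:", weak_grades)
print("independent capability 85:", strong_grades)
```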
Problem 3: The assessment OF learning illusion
Many thoughtful models distinguish formative (assessment FOR learning) from summative (assessment OF learning). But in AI-mediated contexts, most written assessment cannot reliably verify independent capability. A submission might show collaborative capability, or AI-management capability, or the capability to submit plausible work regardless of understanding. Both the summative and formative functions have collapsed if the artefact is the measure.
Caesar can solve this
The CSR Framework makes two architectural moves that restore assessment validity in contemporary contexts:
Move 1: Separate PROVING capability from BUILDING capability at programme level
Not all assessments need to verify independent student capability. Most assessments should support learning: exploring ideas, practising skills, getting feedback, and failing safely.
This recognises that different forms of assessment serve different epistemic functions (Fawns, Boud, & Dawson, 2026), and trying to make assessment do both creates the current confusion. But at strategic points, programmes must verify students possess the capabilities required for graduation and professional practice. CSR establishes two architecturally distinct types of assessment, and I’m going out on a limb and referring to one of them as evaluation:
Secure Assessment (Assessment of Learning)
Directly proves CSR-wrapped Course Learning Outcomes (CLOs)
Requires identity verification and controlled AI parameters
Results in graded outcomes contributing to GPA
Located strategically at each AQF level/exit point
Focus: Proving competence has been achieved
Developmental Work (Evaluation for Learning)
Scaffolds progression toward CLOs through formative tasks
Recognises change as a measure of learning
Explicitly permits AI-augmented learning
Results in Ungraded Pass based on engagement and progress
Distributed throughout programme as needed
Focus: Building capacity for later independent demonstration
I’ve toyed with the idea of referring to the developmental tasks where evaluation of progress takes place as “activity” rather than assessment, but that might go too far and diminish its importance.
Move 2: Wrap outcomes with Context-Standard-Range and make them assessment criteria
Traditional learning outcomes can conflate or fail to recognise three distinct elements:
Context: Where/when competence is demonstrated
Standard: The performance quality expected
Range: The diversity of legitimate demonstration methods
CSR separates these deliberately:
Example (Traditional):
“Students will demonstrate effective written communication.”
Example (CSR-Wrapped):
“Students will demonstrate advanced communication capability [STANDARD: AQF Level 7 professional practice] through written, oral, and interpersonal formats [RANGE: multiple valid modalities] across diverse stakeholder contexts requiring adaptation to audience needs [CONTEXT: authentic professional scenarios].”
Context can be updated rapidly without full curriculum redesign. Standard focuses on professional judgements that AI cannot achieve alone. For AQF Level 7 (bachelor degree), this means graduates demonstrate ‘broad and coherent knowledge’ with ‘well-developed cognitive skills to analyse critically, evaluate and transform information to complete a range of activities’ and ‘well-developed judgement, adaptability and responsibility for their own learning and practice’ (Australian Qualifications Framework Council, 2013, p. 61). Range explicitly accommodates AI while maintaining the quality standard. This concept also looks back to Sadler (2013, 2014), who notes that rubrics that don’t account for different approaches can become futile.
The wrapping makes explicit what traditional outcomes may leave implicit, or bake into complex Outcome statements that are inflexible.
CSR’s deeper change isn’t at the CLO level; you can keep your CLOs (assuming they are the right CLOs for the AI age, of course). The CSR wrapper becomes the assessment criterion, collapsing the traditional chain of indirection where assessment tasks map to criteria, which map to ULOs, which map to CLOs. Each link in that chain can break from the others. CSR-wrapped CLOs stay simple and stable at the programme level, while the Context-Standard-Range structure provides the flexibility to apply them across diverse units and assessment formats.
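For readers who like to see structure as data, here is a minimal sketch of a CSR-wrapped CLO; the class, the field names and the use of `dataclasses.replace` are my own illustration of the collapsed chain, not a prescribed schema. Deploying to a unit touches only Context, so the wrapper itself serves as the assessment criterion with no ULO layer in between.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CSRWrappedCLO:
    clo: str                # simple, stable programme-level outcome
    standard: str           # performance quality expected (e.g. AQF Level 7 descriptors)
    range: tuple[str, ...]  # legitimate demonstration methods
    context: str            # where/when competence is demonstrated (rapidly updatable)

communication = CSRWrappedCLO(
    clo="Students will demonstrate advanced communication capability",
    standard="AQF Level 7 professional practice",
    range=("written", "oral", "interpersonal"),
    context="authentic professional scenarios requiring adaptation to audience needs",
)

# Deploying the CLO to a unit updates Context and nothing else, so the
# wrapper itself is the unit's assessment criterion: no ULO layer to drift.
placement_criterion = replace(
    communication,
    context="supervised placement settings where clients arrive having already used AI",
)
print(placement_criterion.standard)  # unchanged: the stable part of the chain
```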
How This Actually Works: From CLO to Unit Activity
Let’s walk through CSR implementation showing how simple CLOs create direct line of sight to unit work, eliminating the ULO middle layer. I’ll use three programmes to demonstrate the pattern across disciplines: Bachelor of Counselling (ACAP), Bachelor of Film Production (SAE), and Bachelor of Environmental Science (hypothetical).
Step 1: Keep Your CLOs Simple
Start with clear, stable programme-level outcomes that won’t need constant revision as contexts change:
Counselling:
CLO 1: “Demonstrate ethical therapeutic practice with diverse client populations”
CLO 2: “Apply psychological theories to case conceptualisation and intervention planning”
CLO 3: “Communicate effectively with clients, colleagues, and stakeholders in professional contexts”
CLO 4: “Engage in critically reflective practice informed by evidence and supervision”
Film Production:
CLO 1: “Produce professional-standard screen content across narrative and documentary forms”
CLO 2: “Apply technical craft skills to achieve creative intent”
CLO 3: “Manage collaborative production processes in team environments”
CLO 4: “Evaluate screen work critically within industry and cultural contexts”
Environmental Science:
CLO 1: “Design and execute field-based ecological research”
CLO 2: “Apply statistical methods to analyse environmental data”
CLO 3: “Communicate scientific findings to diverse audiences”
CLO 4: “Evaluate environmental policy using evidence-based reasoning”
These are simple and stable. The sophistication comes when we deploy them in specific unit contexts.
Step 2: Identify Secure vs Developmental Units
For each programme, determine which units must verify CLOs independently (Secure) versus which build capacity toward later verification (Developmental).
Look for 2-3 units at each AQF level that:
Sit at AQF boundary or exit points (strategic timing)
Have ULOs closely aligned to CLOs (reduced friction)
Already have or could accommodate secure assessment formats (faculty readiness)
Counselling Programme:
Secure: Professional Practice Placement, Clinical Case Analysis, Counselling Skills Lab
Developmental: Introduction to Counselling Theories, Developmental Psychology, Group Dynamics, Research Methods
Film Production Programme:
Secure: Major Project (Year 3), Professional Portfolio, Collaborative Production
Developmental: Screen Craft Fundamentals, Post-Production Techniques, Narrative Development, Pre-Production Planning
Environmental Science Programme:
Secure: Field Research Project, Ecological Methods, Honours Thesis
Developmental: Introductory Ecology, Statistical Analysis, Literature Review Methods, Conservation Biology
The pattern: Relational, creative, and applied research capabilities require Secure assessment (they can’t be AI-mediated). Knowledge building and technical skills can be Developmentally scaffolded (AI appropriately supports learning). Please note, I’ve given examples here that are focussed on the end of a bachelor degree. In the reality of a 24-unit, three-year degree, we would expect 2-3 Secure units in study periods 2, 4 and 6.
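To make that structural rule checkable, here is a minimal sketch of how a programme map might be validated; the unit names (borrowed loosely from the examples above) and the layout are illustrative assumptions, not a required format.

```python
from collections import Counter

# Illustrative programme map: Secure units only (Developmental units fill
# the remainder of the 24-unit degree). Names are placeholders.
secure_units = {
    "Skills Lab":               2,
    "Applied Methods":          2,
    "Clinical Case Analysis":   4,
    "Collaborative Production": 4,
    "Professional Placement":   6,
    "Major Project":            6,
    "Professional Portfolio":   6,
}

secure_per_period = Counter(secure_units.values())

# The rule from the text: 2-3 Secure units at each AQF boundary/exit point
# (study periods 2, 4 and 6 in a three-year, 24-unit degree).
for period in (2, 4, 6):
    count = secure_per_period[period]
    assert 2 <= count <= 3, f"study period {period} has {count} Secure units"
    print(f"study period {period}: {count} Secure units (within 2-3)")
```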
Step 3: Deploy CSR-Wrapped CLOs to Secure Assessment
The fundamental change is that the CSR wrapper is the assessment criterion. The CLO goes straight to the unit work without ULO intermediary.
One Secure assessment will almost always address multiple CLOs simultaneously because that’s how authentic professional work actually functions. Traditional rubrics tend to fragment integrated capability, but CSR keeps it whole.
The Standard in each example below explicitly references AQF Level 7 descriptors (broad and coherent knowledge; well-developed judgement; autonomy in unpredictable contexts; analysing and evaluating to solve complex problems) as the threshold for the course.
Example: Professional Practice Placement (Counselling, Secure Unit)
CLOs (from programme level):
CLO 1: “Demonstrate ethical therapeutic practice with diverse client populations”
CLO 3: “Communicate effectively with clients, colleagues, and stakeholders”
CLO 4: “Engage in critically reflective practice informed by evidence and supervision”
Traditional rubric approach would fragment and map this:
Criterion | HD | D | C | P | F
Therapeutic rapport | ... | ... | ... | ... | ...
Ethical reasoning | ... | ... | ... | ... | ...
Communication clarity | ... | ... | ... | ... | ...
Professional reflection | ... | ... | ... | ... | ...
Cultural responsiveness | ... | ... | ... | ... | ...
This creates the illusion that you could demonstrate “excellent therapeutic rapport” (HD) while showing “poor ethical reasoning” (P). In reality, they are one integrated professional capability.
CSR approach keeps it integrated by design:
Assessment criterion (deployed CLOs 1, 3, 4):
Context: In supervised placement settings where you work directly with clients presenting varied mental health concerns. AI’s environmental presence (clients arrive having already used ChatGPT for self-diagnosis, online symptom checkers, AI-generated coping strategies) must be navigated transparently and ethically.
Standard (Pass = AQF Level 7 professional threshold):
Demonstrate autonomy, well-developed judgement and responsibility (AQF Level 7 application) in therapeutic practice by:
Establishing therapeutic rapport independently through broad and coherent understanding of relational dynamics (AQF Level 7 knowledge)
Navigating ethical boundaries when clients reference AI-generated advice, demonstrating well-developed judgement in unpredictable therapeutic contexts (AQF Level 7 application)
Recognising and responding to complex relational dynamics (transference, rupture, therapeutic alliance) using advanced cognitive skills to analyse and evaluate client presentations (AQF Level 7 skills)
Making professional judgements about risk, safety, and intervention appropriateness in sometimes complex and unpredictable therapeutic situations (AQF Level 7 application)
Communicating transparently with clients about professional capability versus AI-generated information, transmitting knowledge appropriately to non-specialist audiences (AQF Level 7 skills)
Demonstrating cultural responsiveness and trauma-informed practice requiring depth in therapeutic principles (AQF Level 7 knowledge)
Engaging in reflective supervision, articulating clinical reasoning with self-directed learning capability (AQF Level 7 application)
Range: Individual counselling, group facilitation, crisis intervention, family therapy all require real-time human relational capability
What this means for assessment:
The supervisor assesses therapeutic capability in live client interactions. The Standard describes integrated professional practice at AQF Level 7. Communication, ethics and reflection are integrated because therapeutic work requires all three functioning together with the autonomy, judgement, and analytical capability that defines Bachelor-level professional practice.
Higher grades are awarded when the student shows increasingly sophisticated integration: a Pass student meets AQF Level 7 threshold professional capability; a Distinction student demonstrates exceptional relational attunement, nuanced ethical reasoning in complex situations, and deep reflective insight that transforms practice, exceeding threshold Level 7 descriptors.
AI parameters: AI may support case documentation (with supervisor disclosure), but therapeutic capability at AQF Level 7 (autonomy, judgement, complex problem-solving) must be independently demonstrated. The Context makes AI’s environmental presence explicit—clients use AI, so navigating that is part of professional competence. We’ll come back to this.
Example: Major Project (Film Production, Secure Unit)
CLOs (from programme level):
CLO 1: “Produce professional-standard screen content across narrative and documentary forms”
CLO 2: “Apply technical craft skills to achieve creative intent”
CLO 3: “Manage collaborative production processes in team environments”
Traditional rubric approach fragments:
Criterion | HD | D | C | P | F
Technical quality (camera, sound, editing) | ... | ... | ... | ... | ...
Creative vision | ... | ... | ... | ... | ...
Production management | ... | ... | ... | ... | ...
Teamwork | ... | ... | ... | ... | ...
CSR approach:
Assessment criterion (deployed CLOs 1, 2, 3):
Context: Short-form production (5-15 minutes) in collaborative team environment where AI tools (script development, colour grading, sound design, metadata tagging) are industry-standard, but creative authorship and production management remain human.
Standard (Pass = AQF Level 7 professional threshold):
Demonstrate autonomy, well-developed judgement and responsibility (AQF Level 7 application) in screen production by:
Making defensible creative choices about story, casting, performance direction, pacing using broad and coherent knowledge of screen language and narrative craft (AQF Level 7 knowledge)
Executing technical craft to serve creative intent, applying well-developed technical skills to analyse and evaluate on-set problems and generate solutions to unpredictable production challenges (AQF Level 7 skills)
Managing production workflow to deliver on time and on brief, demonstrating autonomy and judgement in self-directed work contexts (AQF Level 7 application)
Using AI tools appropriately (technical execution) while retaining authorial control (creative vision), showing judgement about when AI supports versus undermines creative intent (AQF Level 7 application)
Demonstrating production management capability that AI cannot provide, e.g. managing crew dynamics, on-set decision-making under pressure, editorial judgement about emotional impact in unpredictable collaborative contexts (AQF Level 7 application)
Range: Documentary, narrative fiction, experimental forms that all require human creative intelligence integrated with technical craft and production management
What this means for assessment:
The final screen work demonstrates integrated AQF Level 7 capability because the creative vision only matters if you executed it technically, and the technical quality serves creative intent. Production management must deliver a functioning production. They are one integrated professional capability requiring the autonomy, judgement, and problem-solving that defines Bachelor-level practice.
Higher grades: Pass = AQF Level 7 threshold met across all three CLOs integrated. Distinction = exceptional creative vision executed with sophisticated technical craft delivered through exemplary production management—exceeding Level 7 baseline descriptors.
Example: Field Research Project (Environmental Science, Secure Unit)
CLOs (from programme level):
CLO 1: “Design and execute field-based ecological research”
CLO 2: “Apply statistical methods to analyse environmental data”
CLO 3: “Communicate scientific findings to diverse audiences”
CSR approach:
Assessment criterion (deployed CLOs 1, 2, 3):
Context: Species population study or ecological monitoring requiring field data collection, statistical analysis, and scientific reporting. AI assists with literature synthesis, statistical calculation, and data visualisation, but cannot make methodological decisions or interpret ecological significance.
Standard (Pass = AQF Level 7 scientific threshold):
Demonstrate autonomy, well-developed judgement and responsibility (AQF Level 7 application) in ecological research by:
Designing methodologically sound field protocol appropriate to research question, applying broad and coherent theoretical knowledge of ecological principles (AQF Level 7 knowledge)
Collecting field data systematically with documented controls and error sources, demonstrating self-directed work capability in unpredictable field contexts (AQF Level 7 application)
Verifying AI-generated statistical outputs against ecological validity using well-developed cognitive skills to analyse and evaluate results—distinguishing correlation from causation, recognising confounding variables (AQF Level 7 skills)
Interpreting results within ecological theory, using depth in ecological principles to identify patterns AI analysis misses: species interactions, habitat relationships, temporal dynamics (AQF Level 7 knowledge)
Generating and transmitting solutions to complex ecological problems, demonstrating judgement about what statistical outputs mean in real ecological contexts (AQF Level 7 skills and application)
Communicating findings with appropriate scientific rigour to specialist and non-specialist audiences, transmitting knowledge effectively across contexts (AQF Level 7 skills)
Documenting methods with sufficient detail for replication, showing responsibility for scientific integrity (AQF Level 7 application)
Range: Field observation, experimental manipulation, statistical modelling, written reports, oral presentations, all of which require human scientific reasoning
Higher grades: Pass = methodologically sound research with valid statistical analysis and clear scientific communication meeting AQF Level 7 threshold. Distinction = innovative methodological design, sophisticated statistical reasoning that catches AI errors, and compelling scientific communication that makes complex findings accessible, exceeding Level 7 baseline descriptors.
Is defining threshold standards easy? No. It requires deep understanding of your discipline, clear grasp of AQF Level 7 descriptors, and honest conversation about what professional capability actually looks like.
What CSR provides is a structure that forces you to do this work explicitly rather than allowing fragmented rubric rows and vague language. The Standard must describe integrated professional capability at AQF Level 7. Once you’ve done that work, it holds up for accreditation, for employers, for students trying to understand what they’re being assessed against.
Step 4: Show Direct Line of Sight in Developmental Units
Core Principles for Developmental Assessment:
Learning = change (not competence verification)
Multi-CLO integration (building capabilities together, not isolated)
Trajectory-based Pass standards (readiness to progress toward AQF Level 7)
Feed-forward design (feedback drives iteration, iteration creates visible change)
AI as learning partner (not restricted)
Learning Is Change
In Developmental units, we are evaluating and promoting learning, which means we are looking for change. Did the student change in response to what they encountered? Can they do something now they could not do before? Do they think differently about the problem than they did at the start? Are they ready to move on?
Feedback in Developmental units is aimed at that Secure assessment destination. This is “feed-forward” design (Hattie & Timperley, 2007): feedback drives the next iteration, iteration creates visible change, and visible change is evidence of readiness to progress. AI can fake competence in the form of an output but it cannot fake learning, so we need to work on the iterative process of getting something wrong, receiving feedback, and doing it differently.
Students need to arrive at Secure assessment having built:
The knowledge and skills to perform
The confidence to demonstrate under pressure
The ability to think on their feet
The ability to communicate complexity
Example 1: Introduction to Counselling Theories (Developmental Unit)
CLOs being built together:
CLO 2: Apply psychological theories to case conceptualisation
CLO 4: Engage in critically reflective practice
CLO 3: Communicate effectively with colleagues and stakeholders
As above, these can work together as an integrated capability. Developmental work builds all three simultaneously, preparing students for Year 3 Clinical Case Analysis (Secure unit) where they will demonstrate this integration at AQF Level 7.
How students see this:
This unit builds CLOs 2, 3, and 4 together toward AQF Level 7 demonstration in Year 3
What you will do:
Weekly group discussions applying theories to case vignettes (CLO 2 + CLO 3: theoretical application AND articulating reasoning to peers)
Collaborative patchwork texts where you assemble perspectives and explain your reasoning (CLO 2 + CLO 4: theory AND reflection on developing understanding)
Practice sessions using AI to explore frameworks, then discussing with peers what AI helped you see versus what you questioned (CLO 2 + CLO 4: theory AND critical evaluation)
Reflective learning journal tracking how your theoretical understanding develops and how you communicate that understanding (CLO 4 + CLO 3: reflection AND communication)
You will use AI as a learning partner, exploring theoretical perspectives, testing your understanding, getting rapid feedback, before demonstrating independent theoretical reasoning at AQF Level 7. Using AI now builds the evaluative capability you will need later: knowing when AI outputs are theoretically sound versus when they miss clinical nuance.
Pass = Evidence of readiness to progress toward AQF Level 7:
The Pass Standard is trajectory-based, not arrival-based. Pass means you are building the integrated capability and ready to demonstrate it independently in Secure assessment.
✓ Visible change from Week 1 to Week 13:
Your Week 1 discussion contributions versus Week 13 show increasing theoretical sophistication (documented in participation)
Early patchwork texts versus later texts show deeper theoretical reasoning (artefact comparison)
Reflective journal captures what changed in your thinking: “In Week 3 I thought attachment theory meant X, but after the Week 7 vignette discussion and AI exploration, I now understand it means Y because...”
✓ Responsive iteration (feed-forward in action):
You act on facilitator feedback: “Week 5 feedback said my theoretical application was surface-level; Week 8 patchwork contribution shows I applied theories more precisely”
You incorporate peer feedback: “After Week 6 group discussion, I revised how I explain psychodynamic concepts to make them clearer”
You document AI exploration: “I asked AI to explain CBT versus DBT; I tested that explanation against the reading; I questioned the AI’s claim that...”
✓ Trajectory toward AQF Level 7 Secure assessment:
Foundation theoretical fluency established (you can apply frameworks with AI support, preparing for independent application)
Reflective capacity developing (you can articulate what you know, what you do not know, what you are learning)
Communication capability building (you can explain theoretical reasoning clearly to peers)
You are ready for Year 3 where you will demonstrate CLOs 2, 3, and 4 integrated at AQF Level 7 (autonomy, well-developed judgement, broad and coherent knowledge)
Pass requires evidence of learning:
Evidence of change (visible development across 13 weeks)
Feed-forward responsiveness (acting on feedback, trying differently)
Active engagement (meaningful participation, not passive presence)
Metacognitive awareness (can articulate what changed and why)
Not Pass (evidence of coasting, not learning):
✗ All work submitted Week 13 (no iterative development visible)
✗ Generic patchwork contributions with no theoretical sophistication developing
✗ Passive discussion participation (present but not engaged)
✗ No documented response to feedback (same approaches repeated without change)
✗ Cannot articulate what changed in thinking from Week 1 to Week 13
✗ Reflective journal is retrospective summary, not developmental tracking
Grading: Ungraded Pass (does not contribute to GPA). The rigour is in the trajectory requirement, not in scalar grading. Students either show readiness to progress (Pass) or do not yet (Not Pass, with specific feedback on what development is still needed).
Example 2: Screen Craft Fundamentals (Developmental Unit)
CLOs being built together:
CLO 2: Apply technical craft skills to achieve creative intent
CLO 3: Manage collaborative production processes
CLO 4: Evaluate screen work critically within industry contexts
How students see this:
This unit builds CLOs 2, 3, and 4 together toward AQF Level 7 demonstration in Year 3
What you will do:
Weekly craft exercises in pairs or small teams: light a scene, record clean dialogue, frame a sequence (CLO 2 + CLO 3: executing craft AND coordinating with others)
Group critiques where you articulate your creative choices and evaluate peers’ work (CLO 2 + CLO 4: articulating creative intent AND critical analysis)
Practice using AI for shot planning and lighting diagrams, then executing as a team and discussing what worked versus what the plan missed (CLO 2 + CLO 3 + CLO 4: craft + collaboration + evaluation)
Build a portfolio of exercises with reflective commentary on what you learned through collaboration and critique (all three CLOs integrated, with documented development)
Industry-standard pre-production uses AI for planning, visualisation, and technical specs. You will practise using these tools to explore craft approaches, then execute those approaches practically and evaluate the results. This builds the judgement you will need at AQF Level 7: knowing when AI-generated plans work versus when hands-on adaptation is required.
Pass = Evidence of readiness to progress toward AQF Level 7:
✓ Visible change from Week 1 to Week 13:
Early craft exercises versus late exercises show progressive technical improvement (portfolio comparison)
Early critique contributions (“this looks good”) versus later contributions (“the lighting supports the mood because the key-to-fill ratio creates...”) show analytical development
Reflective commentary tracks what changed: “In Week 4 I just followed AI lighting diagrams; by Week 10 I understood why certain setups work and could adapt on set when the diagram did not account for...”
✓ Responsive iteration:
Acting on critique feedback: “Week 6 critique said my framing was static; Week 8 exercise shows I experimented with camera movement to create dynamic compositions”
Incorporating peer observations: “After seeing how another team handled the sound challenge, we tried a similar approach in Week 9”
Documenting AI use evolution: “Week 3 I just accepted AI shot suggestions; Week 11 I evaluated suggestions against creative intent and modified them”
✓ Trajectory toward AQF Level 7 Secure assessment:
Foundation craft capability established (can execute technical fundamentals)
Collaborative practice developed (can coordinate with team to deliver outcomes)
Critical judgement building (can evaluate what works and why, preparing for autonomous creative decisions)
Ready for Year 3 Major Project where you will integrate CLOs 2, 3, 4 at AQF Level 7 (well-developed judgement, autonomy, complex problem-solving)
Example 3: Introductory Ecology (Developmental Unit)
CLOs being built together:
CLO 1: Design and execute field-based ecological research
CLO 2: Apply statistical methods to analyse environmental data
CLO 3: Communicate scientific findings to diverse audiences
Scientific research requires methodological rigour, statistical reasoning, and clear communication working as an integrated capability. Developmental work builds all three simultaneously, preparing students for Year 3 Field Research Project (Secure unit) where they will demonstrate this integration at AQF Level 7.
How students see this:
This unit builds CLOs 1, 2, and 3 together toward AQF Level 7 demonstration in Year 3
What you will do:
Small-scale field studies in groups: design simple protocols, collect data, document methods (CLO 1 + CLO 3: fieldwork AND clear scientific documentation)
Practice using AI to run statistical tests on your data, then discuss in groups what the results mean ecologically—not just what the numbers are (CLO 2 + CLO 3: statistics AND explaining to others)
Present findings to class, explaining your methods, results, and ecological interpretation (all three CLOs integrated in authentic scientific communication)
Peer-review others’ protocols and analyses, giving feedback on methodology, statistical appropriateness, and communication clarity (CLO 1 + CLO 2 + CLO 3: evaluating integrated research practice)
Why AI is encouraged: AI runs statistical tests instantly, generates visualisations, suggests analytical approaches. You need practice using AI to accelerate analysis while building the critical reasoning to verify AI outputs, interpret results ecologically, and catch when AI statistical suggestions are inappropriate for your research question. This develops the evaluative capability you will need at AQF Level 7.
Pass = Evidence of readiness to progress toward AQF Level 7:
✓ Visible change from Week 1 to Week 13:
Early protocols versus late protocols show methodological improvement (documented in study designs)
Early statistical interpretations (“the p-value is 0.03”) versus later interpretations (“the correlation is statistically significant, but ecologically this likely reflects habitat confounding rather than direct causation because...”) show analytical development
Presentation quality improves: clearer explanations, more effective visualisations, better anticipation of questions
✓ Responsive iteration:
Acting on peer review: “Week 5 feedback said our sampling design had spatial bias; Week 7 protocol shows we randomized collection points”
Incorporating facilitator guidance: “After Week 8 discussion about ecological versus statistical significance, Week 10 analysis distinguishes these explicitly”
Documenting AI verification: “Week 4 I just accepted AI regression output; Week 11 I checked assumptions, verified the test was appropriate for our data structure, and caught that AI suggested parametric tests for non-normal distributions”
✓ Trajectory toward AQF Level 7 Secure assessment:
Foundation research skills established (can design valid field protocols with guidance)
Statistical reasoning developing (can run analyses with AI support and verify outputs)
Scientific communication building (can explain methods and findings clearly)
Ready for Year 3 Field Research Project where you will demonstrate CLOs 1, 2, 3 integrated at AQF Level 7 (autonomy in research design, well-developed judgement in analysis, broad and coherent ecological knowledge)
Not Pass:
✗ Field studies lack documented methodology (cannot track development)
✗ Statistical analyses show no verification of AI outputs (uncritical acceptance throughout)
✗ Presentations remain unclear or disorganised (no communication development)
✗ Peer reviews stay superficial (no analytical feedback developing)
✗ Cannot articulate what changed in research thinking or statistical reasoning
Grading: Ungraded Pass. Development and trajectory matter.
The Pattern Across All Developmental Units
There is a clear pattern across developmental units:
Multiple CLOs explicitly named and integrated (not isolated skill development)
Activities naturally involve multiple CLOs (discussions = theory + communication; field studies = methods + documentation)
Pass criteria are trajectory-based (evidence of readiness to progress toward AQF Level 7, not evidence of having arrived)
Learning is made visible through observed change (visible development from Week 1 to Week 13, documented metacognitively)
Feed-forward design prevents coasting (feedback drives iteration, iteration creates visible change, change is the evidence)
AI is learning partner (not restricted, because developmental work builds the evaluative capability to use AI well)
Connection to Secure assessment is explicit (this prepares you for where you will demonstrate these CLOs integrated at AQF Level 7)
Students need to understand why the same CLO feels different in Secure versus Developmental contexts.
The language used above can be standardised across Developmental units.
What’s Next: Completing the Circuit
The CSR Framework tells you which units need Secure assessment, and it gives you the programmatic architecture for distinguishing verification from development.
It doesn’t tell you what kind of tasks those Secure units should contain. It doesn’t specify which professional capabilities require which kinds of AI relationships. Existing assessments will often not adapt to CSR because the activities that underpin them don’t assume that AI is in the mix. In many current assessments, there have been active attempts to exclude AI, and that no longer represents the real world.
That’s where Part 2: Completing the Circuit—The SRCIT Framework comes in.
SRCIT (Strategic, Relational, Creative, Interpretive, Technical) identifies five fundamentally different task modes in professional practice. Each mode has different relationships with AI, different assessment implications and different boundary requirements.
A counselling student navigating a client’s AI-mediated self-diagnosis needs different competencies than the same student using AI for case documentation. Both are legitimate professional work, and both require development and assessment. Assuming that assessment has NO relationship with AI is not going to work, but these two kinds of work don’t call for the same kind of assessment.
Next week, I’ll show you why generic “AI literacy” training isn’t going to work (the real axis is task mode, not discipline), what mode-switching competence looks like in practice, and how faculty can develop task-type literacy instead of chasing tool fluency.
Part 3: CSR × SRCIT Integration will then show you exactly how to implement mode-aware programmatic assessment across your whole programme.
Your Turn: Questions and Dialogue
I want this to be conversation, not monologue. Three questions for you:
Does the Secure/Developmental distinction make intuitive sense for your discipline? Where does it clarify existing practice, and where does it create new tensions?
What’s your biggest barrier to implementing this? Institutional policy? Workload? Colleague resistance? Professional accreditation constraints?
Which of your current units are doing “Developmental” work but graded as if they’re “Secure”? (My guess: most of them.)
Comment below, or email me directly at colin.webber@navitas.com. I’m especially interested in hearing from programme directors actively redesigning assessment, and from faculty who’ve tried to implement Secure/Developmental distinctions and hit obstacles.
Special invitation to colleagues in:
Healthcare education (how does this compare to your clinical/academic split?)
Professional programmes without statutory registration (business, criminology, creative industries—do you recognise the accountability challenge?)
TEQSA and professional accreditation bodies (does this implement Pathway 3 guidance structurally?)
Next week: Why your AI literacy training isn’t working (and what mode-switching competence looks like instead).
About this series: This is Part 1 of a three-part series establishing curriculum architecture for professional education in AI-capable contexts. The series introduces the CSR Framework (programmatic assessment architecture), the SRCIT Framework (task-mode taxonomy), and their integration into complete implementation guidance.
The complete series:
Part 1: Reconstructing Constructive Alignment: The CSR Framework (this article)
Part 2: Completing the Circuit: The SRCIT Framework
Part 3: CSR × SRCIT Integration
Cite this work: Webber, C. (2026). Reconstructing Constructive Alignment: The CSR Framework. Dr.C.Webber Substack. DOI: https://doi.org/10.17605/OSF.IO/XWGEH
Colin Webber is an academic working across SAE University College (creative industries, IT) and ACAP University College (psychology, counselling, social work, criminology, business). His work focuses on curriculum and assessment reform for AI-aware professional education, with particular interest in programmes operating without statutory registration frameworks.
References
Biggs, J. (2014). Constructive alignment in university teaching. HERDSA Review of Higher Education, 1, 5–22.
Dawson, P., Bearman, M., Dollinger, M., & Boud, D. (2024). Validity matters more than cheating. Assessment & Evaluation in Higher Education, 49(7), 1005–1016. https://doi.org/10.1080/02602938.2024.2386662
Fawns, T., Boud, D., & Dawson, P. (2026). Identifying what our students have learned: A framework for practical assessment validation. Assessment & Evaluation in Higher Education, 0(0), 1–17. https://doi.org/10.1080/02602938.2026.2620053
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
McMillan, L., & Webber, C. (2021). Institute-wide implementation of holistic assessment in collaborative project-based learning.
Rolfe, G., Freshwater, D., & Jasper, M. (2001). Critical reflection in nursing and the helping professions. Palgrave Macmillan.
Sadler, D. R. (2013). Making competent judgments of competence. In S. Blömeke, O. Zlatkin-Troitschanskaia, C. Kuhn, & J. Fege (Eds.), Modeling and measuring competencies in higher education: Tasks and challenges (pp. 13–27). Sense Publishers. https://doi.org/10.1007/978-94-6091-867-4_2
Sadler, D. R. (2014). The futility of attempting to codify academic achievement standards. Higher Education, 67, 273–288. https://doi.org/10.1007/s10734-013-9649-1

