Recent progress
What has changed
The first session established the problem and introduced the framework. This follow-up shows what happens when expert knowledge is translated into working infrastructure.
- We moved from concept to a live, usable framework app
- Faculty can now upload assignments and receive structured feedback in minutes — not weeks
- The conversation is no longer "Should we ban AI?" — it is "How do we design for learning with AI present?"
- Legacy assignment patterns have a clear, supported path to intentional redesign
The persistent challenge
The design mismatch
Most assignments were built for a world that no longer exists. This is not a faculty failure — it is a structural challenge requiring a disciplined, collective response.
"If an assignment can be completed well by AI, it may no longer measure the learning we intended."
AI is already embedded in how students approach coursework. Students routinely use AI tools to generate ideas, structure arguments, locate information, and complete deliverables. Assignments that haven't been rethought are no longer measuring what we think they're measuring.
The old question: "Can students use AI on this?"
The better question: "What thinking are students required to do — and where is that visible?"
This shift puts cognition, not tool policing, at the center of assignment design.
The tool
The Framework App in four steps
The Thering Framework is now available as a working web application. Faculty expertise that once lived in workshops now scales across programs.
1. Submit any assignment prompt, rubric, or instruction sheet.
2. The app assesses the assignment against all seven framework criteria.
3. Receive a clear alignment rating with criterion-by-criterion feedback.
4. Use built-in guidance to revise and re-evaluate in the same session.
Alignment verdicts
- Aligned: The assignment demands thinking that AI cannot substitute for: student judgment and process remain central.
- Partially aligned: Some criteria are met; targeted redesign on specific dimensions is recommended.
- Not aligned: The assignment as designed is largely AI-solvable. Redesign guidance is provided by criterion.
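The verdict logic above can be sketched in a few lines. This is a minimal illustration only: the criterion names, result categories, and thresholds below are assumptions for the sake of the sketch, not the framework app's actual implementation.

```python
# Illustrative sketch of mapping criterion-level results to an overall
# alignment verdict. Names and thresholds are assumptions, not the
# Thering Framework app's real logic.
from enum import Enum

class Result(Enum):
    MET = 2
    PARTIAL = 1
    UNMET = 0

def alignment_verdict(results: dict[str, Result]) -> str:
    """Reduce seven criterion-level results to one overall verdict."""
    assert len(results) == 7, "the framework evaluates seven criteria"
    met = sum(1 for r in results.values() if r is Result.MET)
    unmet = sum(1 for r in results.values() if r is Result.UNMET)
    if met == 7:
        return "Aligned"              # AI cannot substitute for the required thinking
    if unmet >= 4:
        return "Not aligned"          # largely AI-solvable; redesign by criterion
    return "Partially aligned"        # targeted redesign on specific dimensions

# Example: an assignment strong on cognition but weak on process visibility.
example = {
    "learning goals": Result.MET,
    "non-outsourceable reasoning": Result.MET,
    "authentic context": Result.PARTIAL,
    "equitable access": Result.MET,
    "process visibility": Result.UNMET,
    "critical AI engagement": Result.PARTIAL,
    "active pedagogy": Result.MET,
}
print(alignment_verdict(example))  # -> Partially aligned
```

The point of the sketch is the shape of the decision, not the cutoffs: criterion-by-criterion results feed a single overall rating, so feedback stays actionable at the criterion level.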
The Framework
Seven criteria for AI-era assignments
The framework evaluates assignments against seven interconnected dimensions, each aligned with SUNY priorities and AI-era learning expectations.
- Every assignment should be traceable to specific learning goals and transferable, discipline-relevant skills.
- Tasks should require reasoning that cannot be outsourced: judgment calls, personal analysis, context-specific interpretation.
- Assignments should be situated in real-world problems requiring local knowledge and genuine judgment that AI cannot replicate.
- UDL-aligned design ensures unequal AI access does not become an equity gap for students from under-resourced backgrounds.
- Drafts, revision histories, and reflection prompts shift assessment toward the actual development of thinking.
- Students engage with AI critically — identifying bias, errors, and limitations — and document their use transparently.
- Active, inclusive pedagogies acknowledge AI's presence and prepare students for real post-graduation environments.
The deeper story
Expertise at scale
This is not AI replacing pedagogy. It is faculty expertise, implemented in software for broader use, updated as conditions change. Human judgment remains the source of quality.
Why this matters now
Faculty expertise already exists, but dissemination is slow. Workshops reach dozens; app-based guidance can reach hundreds. AI capabilities change faster than curriculum review cycles. A living tool keeps pedagogy responsive instead of static.
The replicable model
1. Faculty expert defines a framework. Domain knowledge is structured into clear criteria with defined logic.
2. Criteria and logic are structured clearly. The framework is documented in a form that can be implemented.
3. Agentic development translates the framework into a tool. Software encodes the expert logic so it can operate at scale.
4. Faculty use the tool and generate feedback data. Real-world usage reveals gaps and refinement opportunities.
5. Framework and tool iterate together. The scholarly output is living, versioned, and continuously improvable.

This is a template for many disciplines.
What audiences gain
Who benefits and how
- Faculty: faster, clearer assignment redesign support, without waiting for a workshop slot or consulting a colleague.
- Departments and programs: program-level visibility into assignment alignment across courses and instructors.
- Institutional leadership: concrete evidence of a responsible, structured response to AI-era teaching challenges.
- Students: assignments that genuinely demand real thinking, process transparency, and meaningful cognitive engagement.
Near-term roadmap
What comes next for the app
The framework app continues to develop alongside faculty use, with further additions planned.
Implementation
Department implementation path
A five-step approach to embedding alignment checks into routine curriculum review — no major restructuring required.
1. Pilot with 3–5 willing faculty using the live app on real, current assignments.
2. Review anonymized findings together in a low-stakes department conversation.
3. Run a redesign workshop using the app's criterion-level feedback as the agenda.
4. Re-evaluate revised assignments to measure improvement across criteria.
5. Incorporate alignment checks into routine curriculum review as standard practice.