Predicted, Not Assessed: The Ethics of Algorithmic Grading

[The following is a discussion post shared in the graduate-level course Communication Ethics (JCOM629) at the University of Oregon’s School of Journalism and Communication. The post responds to the use of AI to predict student success on A-Level exams in the UK during the pandemic and examines the ethics of that approach through rights-based and utilitarian lenses:
https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/
https://www.vox.com/future-perfect/2020/8/22/21374872/uk-united-kingdom-formula-predict-student-test-scores-exams]

[Image source: LSE Impact]

From a rights-based ethical perspective, moral decision-making centers on protecting individual rights and human dignity, even when doing so conflicts with efficiency or collective outcomes. Students have a right to fair treatment and to be assessed on their own demonstrated performance, not as proxies for their school’s historical results. By relying on prior institutional data and limiting transparency or meaningful appeal, the grading system undermined individual autonomy and dignity, particularly for students whose achievement diverged from expected patterns.

The utilitarian rationale for the policy emphasized maintaining consistency, preventing grade inflation, and preserving trust in national standards. Yet the harms were unevenly distributed. Students from historically underperforming schools faced disproportionate negative consequences, and the public backlash ultimately eroded confidence in the system. On its own terms, the approach failed to maximize overall good.

More broadly, predictive systems raise a deeper ethical concern. They do not merely measure reality; they can constrain it. As Luciano Floridi argues, AI systems act without understanding, which makes their embedded value judgments especially consequential at scale. When statistical prediction replaces direct evaluation, opportunities for exception, improvement, or late mastery are diminished. Individuals risk being confined to trajectories shaped by historical patterns rather than present performance.

Although deploying AI in high-stakes assessment can be understood as a response to extraordinary constraints, a rights-based framework does not permit individual harm to be justified by institutional necessity. Students bore the greatest consequences while having the least control over the process. Interests in standardization or credibility cannot outweigh the right to fair and individualized assessment.

In contexts like education, where the purpose is to recognize achievement and preserve the possibility of growth, replacing evaluation with prediction risks transforming past disadvantage into a fixed outcome, undermining both fairness and trust.

Thoughts on my thoughts?