THE GOLDILOCKS STANDARD – Machine Unlearning and the Right to be Forgotten Under Emerging Legal Frameworks
This paper critically examines the legal frameworks that impose requirements for machine unlearning, the process by which AI systems forget previously learned information upon request. It focuses on the General Data Protection Regulation (GDPR) and the Data Act, highlighting how these laws have set early benchmarks that technical standards are only now beginning to meet. While these legal standards were, at the time of adoption, a well-balanced “Goldilocks” solution, neither too rigid nor too lenient, they face increasing strain under the weight of rapidly evolving AI technologies. The analysis explores whether these laws are truly future-proof by identifying key legal and technical gaps that hinder the effective implementation of machine unlearning. The paper argues that while the legal frameworks provide essential momentum for developing unlearning capabilities, they fall short of anticipating future challenges. Ultimately, the paper calls for continuous regulatory adaptation and cross-disciplinary collaboration to ensure that laws governing AI remain both relevant and effective.
Machine unlearning challenges the assumption embedded in many legal texts that once data is used to train a model, it becomes practically inseparable from the model's knowledge. As unlearning technologies become more viable, regulators and lawmakers must grapple with new questions: Should individuals have the right to compel the unlearning of their data from AI systems? What constitutes adequate erasure in a machine learning context? How do we ensure verifiability and accountability in the unlearning process?
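To make the erasure and verifiability questions concrete, the sketch below (a hypothetical toy setup, not drawn from the paper) illustrates the simplest baseline, sometimes called exact unlearning: retraining the model from scratch on everything except the erased subject's records. The dataset, the subject_ids bookkeeping, and the exact_unlearn helper are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy dataset: feature rows, labels, and a per-row subject ID
# recording which individual each training record belongs to.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
subject_ids = rng.integers(0, 50, size=200)

model = LogisticRegression().fit(X, y)

def exact_unlearn(X, y, subject_ids, erased_subject):
    """'Exact' unlearning: retrain from scratch on all data except the
    erased subject's rows. Costly, but the result is by construction
    identical to a model that never saw the data."""
    keep = subject_ids != erased_subject
    return LogisticRegression().fit(X[keep], y[keep]), keep

unlearned_model, keep = exact_unlearn(X, y, subject_ids, erased_subject=7)

# Verifiability check for this simple, deterministic learner: the unlearned
# model's parameters match an independent retrain without the erased data.
reference = LogisticRegression().fit(X[keep], y[keep])
assert np.allclose(unlearned_model.coef_, reference.coef_)
```

Retraining from scratch is exactly what makes this baseline impractical for large models, which is why approximate unlearning methods exist; but those methods trade away the trivial verification shown in the last two lines, and it is that trade-off that makes "adequate erasure" and accountability genuinely hard questions for regulators.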