Craig Smith, Lecturer in Law, recently attended the 41st Annual Conference of the British and Irish Law, Education and Technology Association (BILETA), hosted at Aberystwyth University. Attending for the first time, Craig found the conference an engaging and intellectually challenging space in which to explore how law and technology are increasingly shaping one another and, importantly, how legal education must respond. The event brought together academics, practitioners, and researchers in a welcoming and collaborative environment.
Generative AI as a pedagogical partner
Craig presented his paper, Positioning Generative AI as a Debate Opponent in Legal Education, within the Pedagogy and Legal Education stream. The session sparked thoughtful discussion around the role of artificial intelligence (AI) in developing students’ oral, critical, and evaluative skills.
What became clear through both audience questions and parallel presentations was that many educators are experimenting with similar approaches – using AI not simply as a tool for answers, but as a mechanism to simulate professional and academic interaction, whether through debate, mooting, or client interviewing. This reinforced the value of positioning AI as a pedagogical partner rather than a passive assistant, a central point of the paper.
At the same time, concerns raised in Rebekah Marangon’s work on the “feedback paradox” highlight the tension between supporting student development and maintaining confidence and clarity in evaluation.
Rethinking teaching and assessment in the age of AI
Across the two days, several key themes emerged. First, there was a strong recognition that generative AI is no longer an emerging issue, but a central feature of both legal practice and education. This raises fundamental questions not only about what is taught, but how it is taught.
Discussions around assessment highlighted growing concern about maintaining academic integrity while also designing authentic tasks that reflect the use of AI in industry and practice.
Second, the concept of AI literacy was a recurring focus. Presentations emphasised the need for students to develop not only technical understanding, but also critical awareness of AI’s limitations, risks, and regulatory context. Frameworks such as “integrated” and “spiral” approaches to embedding AI across the curriculum, as outlined in the paper by Lezel Roddeck, point towards a more structured and progressive model of AI literacy development within legal education.
Legal and societal implications of AI
Several sessions explored the broader regulatory and societal implications of AI, from predictive policing and judicial decision-making to online harms and data governance. A consistent thread was the idea that while AI may influence decision-making, human oversight, accountability, and legal frameworks remain essential.
This was echoed in discussions such as Jiahong Chen’s analysis of third-party cookies, which highlighted how regulatory frameworks often struggle to keep pace with technological development, an issue that may offer a useful parallel for the future of AI governance.
These discussions raise important questions about the extent to which legal actors are shaped by, rather than simply controlling, technological systems.
Looking ahead
BILETA 2026 provided a valuable opportunity to situate Craig’s work within a wider, interdisciplinary conversation. It reinforced the importance of engaging proactively with AI in legal education, while remaining attentive to its broader legal and societal implications.
Craig looks forward to developing this work further as part of an evolving and collaborative academic community.