Education is adopting natural language processing technology to improve students’ writing skills
Writing essays, dissertations or theses involves a complex set of skills that students should ideally acquire during their undergraduate studies. Hundreds of books and syllabi offer advice and resources to help students write in the accepted academic style. Unfortunately, there is no ready-made recipe for producing good written assignments. The best way to acquire writing skills has always been, and probably always will be, practice: practice that relies on instructors' thoughtful guidelines (known as rubrics) and personal feedback. Yet with the growing number of university students, instructors have become overwhelmed by the demand for such feedback, posing a serious problem for higher education.
At the Connected Intelligence Centre (CIC) at the University of Technology Sydney (UTS), Simon Buckingham Shum initiated a project to investigate how machines could help students with high-level writing skills. He assembled a multi-disciplinary team of academic writing and writing analytics researchers together with software developers to create the Academic Writing Analytics (AWA) tool. The team is supported by practicing writing instructors from several faculties, who provide feedback and help evaluate AWA's automated analyses from their own disciplinary viewpoints. AWA is now capable of several types of analysis and can provide feedback on multiple kinds of student essays.
I had the privilege of joining this multi-disciplinary team remotely from XRCE in France to develop one of AWA's automated linguistic analysis modules. This was a continuation of my now decade-long work on the concept-matching methodology for automatically detecting the specific, important sentences that carry rhetorical moves in various writing genres.
Concept matching was originally developed to help biomedical researchers find, among tens of thousands of research articles, the few especially relevant ones that "have identified a problem with, or a break from, conventional knowledge". Authors usually signal such issues with a special rhetorical emphasis, like "In contrast with previous hypotheses …" or "Recent observations … challenge this assumption …", which the method can detect. Concept matching has since been applied for a number of other purposes, e.g. as a support tool for peer reviewers evaluating articles, or to detect the main messages in research project reviews.
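To make the idea concrete, here is a minimal sketch in Python. The concept lexicons and the co-occurrence rule below are illustrative assumptions of mine, not the actual implementation: the real system builds on a full syntactic parser and much richer linguistic resources. The core intuition, though, is that a sentence signals a rhetorical move when words from several required concept classes meet in it.

```python
import re

# Hypothetical, deliberately tiny concept lexicons. The real method
# also checks syntactic relations between the matched words, not
# mere co-occurrence within a sentence.
CONCEPTS = {
    "contrast": {"contrast", "challenge", "contrary", "however", "unlike"},
    "prior_knowledge": {"previous", "conventional", "assumption",
                        "hypotheses", "established"},
}

def tokenize(sentence):
    """Lowercase the sentence and return its set of word tokens."""
    return set(re.findall(r"[a-z]+", sentence.lower()))

def matches_move(sentence, required=("contrast", "prior_knowledge")):
    """A sentence signals the move if it contains at least one word
    from every required concept class."""
    words = tokenize(sentence)
    return all(words & CONCEPTS[c] for c in required)

sentences = [
    "In contrast with previous hypotheses, the protein is not involved.",
    "We measured expression levels in twelve samples.",
    "Recent observations challenge this long-held assumption.",
]
flagged = [s for s in sentences if matches_move(s)]
```

Here the first and third sentences are flagged, because each combines a contrast word with a reference to prior knowledge; the purely descriptive second sentence is not.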
For AWA I developed two concept-matching modules: one for highlighting rhetorical moves in argumentative essays, in which students display knowledge in their discipline, and one for reflective essays, in which they critically reflect on courses or internships.
Academic writing researchers at UTS provide students with rubrics that list and explain the important rhetorical moves necessary to construct a convincing line of thought. For argumentative essays, such rubrics include summarizing, putting emphasis on important issues, presenting contrasting ideas, etc. When students paste their essay into AWA, the concept-matching software highlights the sentences that convey these moves. (The figure below is reproduced from one of the papers listed at the end of this post.)
Screenshot of an AWA report of an argumentative essay
This highlighting allows students to double-check whether the required content elements are present and whether their message is properly communicated. If not, they can revise the composition until all the necessary content is there. (Watch a demonstration.)
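The reporting step can also be sketched in a few lines of Python. The move patterns below are my own toy regexes, and AWA's real analysis is not regex-based; the sketch only shows the general shape of such a report, with each detected sentence wrapped in an HTML `<mark>` tag labeled by its move.

```python
import re

# Toy move detectors keyed by move label; purely illustrative.
MOVE_PATTERNS = {
    "emphasis": re.compile(r"\b(importantly|notably|crucially)\b", re.I),
    "contrast": re.compile(r"\b(however|in contrast|whereas)\b", re.I),
    "summary":  re.compile(r"\b(in summary|to conclude|overall)\b", re.I),
}

def split_sentences(text):
    # Naive sentence splitter, good enough for a demo.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def highlight(text):
    """Return HTML in which each sentence carrying a rhetorical move
    is wrapped in <mark>, with the move names as its CSS class."""
    out = []
    for sent in split_sentences(text):
        moves = [m for m, pat in MOVE_PATTERNS.items() if pat.search(sent)]
        if moves:
            out.append('<mark class="%s">%s</mark>' % (" ".join(moves), sent))
        else:
            out.append(sent)
    return " ".join(out)

essay = ("Overall, the evidence supports the model. "
         "However, two studies report conflicting results.")
report = highlight(essay)
```

Rendered in a browser, the two sentences would appear highlighted, one tagged as a summary move and the other as a contrast move, much as in the AWA screenshot above.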
Reflective essays help students become more conscious of their profession, orientation and career paths. A dedicated AWA component, currently under development, carries out several kinds of analysis on reflections and, in addition, provides commented feedback. Reflective compositions involve specific rhetorical moves, different from those in argumentative essays. For the AWA analysis, UTS researchers developed special rubrics listing the elements that reflective essays should contain, and I developed a dedicated concept-matching module that detects the corresponding rhetorical moves. As in the first example, students get a report in which these moves are highlighted. (Watch a demonstration.)
In the LAK17 conference paper, UTS reports on initial feedback from students who used AWA's reflective module. Although the sample is limited (63 students responded to the feedback questionnaire at the time of the study), it is promising that 85.7% of the respondents found AWA helpful.
These examples of student comments, cited in the same paper, encourage us to continue this work:
I was fascinated by how it works and can see its implication in future, to determine which phrases need more work/ which can be improved.
(It) prompted me to follow through with the reflection to the last step of the process - I had written about my thoughts and feelings, discussed challenges, but had not followed through with reflecting on how this can lead to change. . . The reports also direct me to write more personally, using language that evokes emotion, and less descriptively.
We look forward to continuing the development of AWA. For example, we would like to create specialized modules for different disciplines, each of which has its own characteristic rubrics in addition to the general ones. This will allow AWA to provide even more specific feedback.
For more information on XRCE research in natural language processing, see the XRCE website.
Swales, J. (1990). Genre Analysis: English in Academic and Research Settings. Cambridge University Press.
Lisacek, F., Chichester, C., Kaplan, A. & Sándor, Á. (2005). Discovering paradigm shift patterns in biomedical abstracts: application to neurodegenerative diseases. First International Symposium on Semantic Mining in Biomedicine, Cambridge, UK, April 11-13, 2005.
Sándor, Á. & Vorndran, A. (2009). Detecting key sentences for automatic assistance in peer reviewing research articles in educational sciences. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (ACL-IJCNLP 2009), Singapore, 7 August 2009, pp. 36-44.
De Liddo, A., Sándor, Á. & Buckingham Shum, S. (2012). Contested Collective Intelligence: rationale, technologies, and a human-machine annotation study. Computer Supported Cooperative Work (CSCW), 21(4-5).
Knight, S., Buckingham Shum, S., Ryan, P., Sándor, Á. & Wang, X. (in press). Designing Academic Writing Analytics for Civil Law Student Self-Assessment. International Journal of Artificial Intelligence in Education (Special Issue on Multidisciplinary Approaches to Reading and Writing Integrated with Disciplinary Education, eds. D. McNamara, S. Muresan, R. Passonneau & D. Perin). Open Access reprint: http://dx.doi.org/doi:10.1007/s40593-016-0121-0
Gibson, A., Aitken, A., Sándor, Á., Buckingham Shum, S., Tsingos-Lucas, C. & Knight, S. (2017). Reflective Writing Analytics for Actionable Feedback. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK 2017).