Fine-tuning descriptors for CEFR B1 level: insights from learner corpora

Abstract

Despite the current importance of the Common European Framework of Reference for Languages (CEFR) in the learning, teaching, and assessment of languages, limitations arise in the use of the CEFR descriptors, which are also present in the European Language Portfolio (ELP). This article highlights the main challenges posed to CEFR and ELP users by the linguistic competence descriptors, with a particular focus on the grammatical accuracy descriptors and the strategy descriptors for monitoring and repair at B1 level, when they try to self-assess their written production activities. In order to address these limitations, a Computer-aided Error Analysis (CEA) was performed on a learner corpus comprising B1-level texts produced by Spanish learners of English. The results obtained enabled the reformulation of the descriptors for written production activities at CEFR B1 level aimed at L1 Spanish learners of English, by complementing the existing descriptors with further linguistic information on the most frequent errors at that level.

Introduction

After more than 20 years of research, the Common European Framework of Reference for Languages (CEFR) was published in 2001 by the Council of Europe to provide ‘a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations, textbooks, etc. across Europe’ (Council of Europe 2001: 1). In addition to establishing a common metalanguage that encompasses the main aspects related to language teaching, learning, and assessment, two further aims underpin the CEFR (North 2007: 659). The first is to promote reflection on learners’ needs, establish objectives, and identify ways to follow up and check their progress. The second involves establishing a series of levels which considers the learners’ use of the language from a communicative point of view. To date, the CEFR has been translated into 39 languages and has been adopted, to varying degrees, by countries throughout Europe and beyond.

The CEFR has had a far more pronounced impact on language testing than on any other aspect of language learning/teaching (Little 2007: 648). In fact, aspects of the CEFR related to learning and teaching have yet to find their place in the classroom context (Figueras 2012), although attempts are being made to facilitate this through the use of the European Language Portfolio (ELP), the companion piece to the CEFR. This is because the ELP is a ‘much more straightforward’ tool to use than the CEFR and ‘can be put into direct use with the students’ (Figueras ibid.: 450). The ELP helps familiarize students with the CEFR and the use of descriptors, helping them become the main agents of their own learning, for example by encouraging self-evaluation and learner autonomy.

However, the use of the descriptors or ‘can-do statements’ by CEFR or ELP users (for example learners, teachers, assessors, material writers, etc.) has proved problematic, as the descriptors are too impressionistic and global in nature when it comes to providing a linguistic description of how well the learner is able to make use of language when performing language activities (Hulstijn 2007; North 2007). In other words, while users are given detailed descriptors for the number, that is the ‘quantity’, of language activities they can perform, i.e. the CEFR’s horizontal dimension, they are not provided with clear information regarding the ‘quality’ of their language production when performing said language activities at the different CEFR levels, i.e. the vertical dimension (Hulstijn ibid.: 663).
In fact, the descriptors for the linguistic competences (Council of Europe op.cit.: 108–18) lack specific information concerning the type of language expected at each CEFR level. The same occurs with the descriptors for strategies, which lack appropriate information, especially for written language activities. Consequently, users still need to be given detailed, clear descriptors of linguistic competence and strategies so that they can become autonomous learners as well as effective self-evaluators. In other words, it is necessary to fine-tune the ‘quality’ descriptors and the strategy descriptors in both the CEFR and the ELP using linguistic information on what learners are expected to do at each CEFR level, as analysed through learner corpora.

This article illustrates the main challenges that linguistic competence descriptors pose to CEFR users when they try to self-assess their written production activities, with a particular focus on the grammatical accuracy descriptor and the strategy descriptors for monitoring and repair at B1 level. In order to address these limitations and provide CEFR users with more comprehensive descriptors, a Computer-aided Error Analysis (CEA) was performed on a learner corpus comprising B1-level texts by Spanish learners of English. The results obtained enabled the fine-tuning of the above-mentioned descriptors aimed at L1 Spanish learners of English by complementing those descriptors with linguistic information on the most frequent errors made at that level, as extracted from the error-tagged learner corpus. The focus on errors stems from the need to specify the vague reference to errors in the descriptor for grammatical accuracy at B1 level in the CEFR (see Figure 2 below).

The CEFR: the horizontal and the vertical dimensions

To use the CEFR descriptors correctly, an understanding of the CEFR’s horizontal and vertical dimensions is required. The concepts in the quotation below are described under its horizontal dimension. This is the less commonly known CEFR dimension, but it constitutes the core of the scheme (North 2007). According to the CEFR (Council of Europe op.cit.: 9), language use, embracing language learning,

comprises the actions performed by persons who as individuals and as social agents develop a range of competences, both general and in particular communicative language competences. They draw on the competences at their disposal in various contexts under various conditions and under various constraints to engage in language activities involving language processes to produce and/or receive texts in relation to themes in specific domains, activating those strategies which seem most appropriate for carrying out the tasks to be accomplished. The monitoring of these actions by the participants leads to the reinforcement or modification of their competences. (emphasis in original)

As can be seen, this dimension provides a descriptive overview of the domains of language use (i.e. the context in which the language activity takes place, be it personal, public, occupational, or educational); communicative language activities (involving reception, production, interaction, or mediation in texts, oral or written, which may combine both modes); and strategies (production, reception, and interaction), as well as the user’s competences (general and particular communicative language competences) needed to do such activities.
Meanwhile, the vertical dimension provides a set of Common Reference Levels (the most widely known aspect of the CEFR), which offers information on what the learner can do at each level by means of a number of illustrative descriptors. As shown in Figure 1, the three main CEFR levels can each be subdivided into two more specific levels using the hypertext branching approach (Council of Europe op.cit.: 32). If required, these levels may be broken down further (Council of Europe op.cit.; North 2014: 72), following the results of the Swiss National Project (Little 2007: 648).

Figure 1: The hypertext branching approach in the CEFR (Council of Europe op.cit.: 32)

The horizontal and vertical dimensions complement each other, given that learning a language calls upon both, as ‘learners acquire the proficiency to perform in a wider range of communicative activities’ (Council of Europe op.cit.: 17). Thus, the learner is able to do more things on the horizontal dimension with the language at his/her disposal as he/she acquires the language and progresses up the vertical dimension.

The use of descriptors in the CEFR and the checklists in the ELP: limitations

Four key aspects represent the philosophy behind the CEFR approach, namely transparency and coherence, plurilingualism, the concept of partial competences, and language for a social purpose (North 2014: 10–11). Transparency and coherence are ensured in the CEFR through the use of illustrative descriptors, i.e. statements phrased in positive terms which specify what users can do with the language at each level and the degree of precision they can bring to the task, enabling users and stakeholders to use this metalanguage to refer to the standards, self-evaluate what they can do, and check the characteristics of the language employed.

Although the use of descriptors in the CEFR and the ELP empowers learners (González 2009), as they become aware of the aspects they can take action on, it is necessary to adapt the descriptors to the context in which they are used (North 2014). This is because, due to the overriding philosophy of the CEFR, i.e. providing users with a document which may trigger reflection on the learning, teaching, and assessment of any language as well as providing a common standard for the differing levels, the descriptors are written in such a way that they can be applied to any language. But it is for this reason that they show some significant limitations; specific criticisms include lacking validity and relevance, not being based on SLA results, and being too impressionistic and global to really provide a linguistic description of how well the learner can use the language when performing language activities (Hawkins and Filipović 2012; North 2007). Thus, their links to the actual use of language in specific contexts are quite vague.
Learners experience this vagueness in the phrasing of the descriptors when, after completing a written production activity, they want to check how well they have performed on such an activity, i.e. the ‘quality’ of their language, in relation to a particular CEFR level. If they wish to focus on the grammatical aspects of their texts, they would need to check the available descriptors for grammatical accuracy at CEFR B1 level (see Figure 2).

Figure 2: Descriptors for grammatical accuracy at CEFR B1 (Council of Europe op.cit.: 114)

Yet, after reading these descriptors, students may still need precise information on the ‘errors’ which are frequent for this language activity at the level in question. Such a description would enable them to seek help from their teacher, language reference tools, etc., to improve their use of the language and move up the vertical dimension of the CEFR. The opposite, i.e. not finding a descriptor specifying these errors, may prevent learners from planning their next writing activity or from evaluating the quality of an already completed task.

Furthermore, little information is available to learners when checking language strategies, these being the means at their disposal to activate, monitor, and prepare for any possible unexpected outcomes in task development (Council of Europe op.cit.: 57). When checking production strategies such as monitoring and repair, which are typical in the writing-as-a-process approach, the descriptor at B1 level reads as shown in Figure 3.

Figure 3: Descriptors for monitoring and repair at CEFR B1 level (Council of Europe op.cit.: 65)

As can be seen, these descriptors mostly relate to speaking, as writing does not usually involve an interlocutor and feedback is not normally given during the writing process. In the case of written production, learners would need a source of information on the aspects of the language which can be double-checked during or after writing to improve communication; otherwise, they would not be given the opportunity to activate the appropriate strategies for this language activity at this level. The descriptors mentioned above are therefore examples of the need to complement descriptors with detailed information on common errors (in the case of grammatical accuracy descriptors) and on the way that production strategies can be activated at a certain level.

Since the CEFR descriptors are part of a ‘flexible, open, dynamic, and non-dogmatic’ document (Trim 2012: 29), there is room for improvement or fine-tuning. Making the most of the flexibility in the CEFR allows users to adapt it to their reality, without losing track of the CEFR. This adaptation, by unzipping or complementing the descriptors (North 2014), can be clearly seen in the descriptors in the ELP’s Language Biography (one of the three parts of the ELP, together with the Language Passport and the Dossier). The Language Biography fulfils a pedagogic function as it aims to encourage learners to experience more language use opportunities and intercultural contacts, improve their language learning, and help them reflect on the process so that they can plan the next step and become autonomous learners.
This is done by presenting the CEFR descriptors for each language activity unzipped into ‘micro-descriptors’ or ‘I can do’ statements in checklists. As an example, Figure 4 shows the validated ELP for Spanish learners, in which the general descriptor for writing at CEFR B1 level (see footnote 1) is divided into ten micro-descriptors which the learner can self-evaluate as something acquired (mis capacidades), something on which he/she is working (aún no), or an objective (mis objetivos) in the foreign language. In an academic setting, these ‘micro-descriptors’ can be developed through micro-activities which foster the practice of a particular communicative language activity and the corresponding linguistic competences in baby steps, eventually leading the learners to the general CEFR descriptor.

Figure 4: CEFR B1 descriptor for writing unzipped in the electronic ELP (Spanish version)

Despite the possibility of unzipping or complementing the CEFR descriptors in the ELP checklists, the qualitative aspects of language use, such as grammatical accuracy and lexical control, have not yet been fully included in the ‘micro-descriptors’ (Little 2005), probably due to the lack of specificity of those descriptors in the CEFR. Their inclusion in the checklists is crucial, as their absence in the ELP micro-descriptors may prevent learners from undertaking their self-assessment and planning successfully. In Little’s words (2005: 322), ‘unless they know what tasks they can already perform in their target language – and with approximately what linguistic range, fluency and accuracy – their decisions will be random, even worthless’. To account for the need to fine-tune the functional descriptors in the CEFR and the ELP with linguistic information, several projects, outlined in the next section, analyse the main characteristics of learner language, as compiled in learner corpora, at each CEFR level.

Learner corpora: analysing learner language

Learner corpora, that is, ‘electronic collections of foreign or second language learner texts assembled according to explicit design criteria’ (Granger 2009: 14), have been used to describe the language learners produce in language activities. The results obtained in Learner Corpus Research have become increasingly salient, since they inform a number of fields including corpus linguistics, second language acquisition research, foreign language teaching, and assessment. Language teaching has been the field most influenced by the results of learner corpora. Learner Corpus Research findings that characterize learners’ language at different proficiency or CEFR levels are used to inform language reference tools, such as monolingual learners’ dictionaries, pedagogical grammars, coursebooks, and Computer-assisted Language Learning materials (Granger 2016).

One way of analysing learner language is by conducting a CEA (Dagneaux, Denness, and Granger 1998), in which the most frequent errors that students make at a specific CEFR level are identified in an error-tagged learner corpus. Teaching materials may include this type of information to make students aware of the most common problems encountered at each level and to provide remedial teaching activities designed to address specific foreign language difficulties.
Materials that include Learner Corpus Research findings include dictionaries, like the Macmillan English Dictionary for Advanced Learners, which features ‘Get it right’ boxes covering the most common mistakes, as analysed in the International Corpus of Learner English. Pedagogical grammars, such as the Cambridge Grammar of English, also make use of CEA results by referring to problematic foreign language aspects. Similarly, textbook series including Face2Face, English in Mind, and Common Mistakes at … tackle learners’ problem areas at each level when producing language, helping them become aware of their language limitations so as to improve their language quality.

Large corpora with texts aligned to the different CEFR levels are needed to analyse learner writing. In the case of English, the leading project in this line of research is the English Profile Project (http://www.englishprofile.org/). This initiative analyses the Cambridge English Corpus in order to identify salient features at each CEFR level, i.e. ‘properties of learner English that are characteristic and indicative of L2 proficiency at each of the levels and that distinguish higher levels from lower levels’ (Hawkins and Filipović op.cit.: 11). However, the findings observed to date report on learner writing at different CEFR levels by learners from various L1 backgrounds. Although some projects have yielded results on specific learner groups, e.g. Spanish learners of English (Díez-Bedmar 2015), further research is needed to describe the errors typical at each CEFR level for a specific L1 group. This detailed information can then inform both language reference tools aimed at the L1 learner group and the descriptors for linguistic competence in the CEFR and the ELP, helping students become aware of the language expected at each level.

Methodology

The learner corpus used in this study to analyse the language produced by Spanish students of English at CEFR B1 level when undertaking a written production activity comprises 179 texts (21,441 words in total; M = 119.78; SD = 28.96). These texts were chosen from a larger learner corpus, selecting only those to which two independent and experienced raters assigned a B1 level, thus ensuring full inter-rater agreement. Learners were instructed to write a short piece on the following topic: ‘Where, outside Spain, would you go on a short pleasure trip?’

To identify the errors made by the students, two native English speakers marked all the errors in the compositions. The learner corpus was then manually error-tagged in accordance with the error taxonomy by Dagneaux, Denness, Granger, and Meunier (1996). So as not to bias the results due to the raters’ different degrees of severity in the identification of errors, only inaccurate uses of the foreign language identified by both native speakers were tagged. Descriptive statistics were then used to pinpoint the students’ most frequent errors in the foreign language at CEFR B1 level when performing this language activity.
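The selection and summary steps described above can be reproduced with a few lines of code. The sketch below is illustrative only: the rating table, the text identifiers, and the sample sentences are hypothetical, and the statistics printed (mean and standard deviation of words per text) simply mirror the kind of descriptive figures reported in this section.

```python
import statistics

# Hypothetical input: one rating per rater per text, e.g. loaded from a spreadsheet.
ratings = {
    "text_001": ("B1", "B1"),
    "text_002": ("B1", "A2"),   # raters disagree: this text is excluded
    "text_003": ("B1", "B1"),
}

# Hypothetical raw texts keyed by the same identifiers.
texts = {
    "text_001": "If I had the opportunity I will go to England ...",
    "text_002": "My dreams is to visit Italy ...",
    "text_003": "I like England because is a very beautiful country ...",
}

# Keep only texts that both raters independently placed at B1 (full inter-rater agreement).
b1_ids = [tid for tid, (r1, r2) in ratings.items() if r1 == r2 == "B1"]
b1_texts = [texts[tid] for tid in b1_ids]

# Descriptive statistics for text length, as reported in the Methodology section.
lengths = [len(t.split()) for t in b1_texts]
print(f"texts: {len(lengths)}, total words: {sum(lengths)}")
print(f"M = {statistics.mean(lengths):.2f}, SD = {statistics.stdev(lengths):.2f}")
```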
Corpus results

Figure 5 gives an overview of the mean number of errors per text in the learner corpus for each error category (see footnote 2). The category with the highest mean of errors per text is grammar (M = 6.42; SD = 4.20).

Figure 5: An overview of the main errors at CEFR B1 level

Focusing on the category which yields the highest mean of errors per composition in the learner corpus, Figure 6 provides the data on the Grammar category (see footnote 3).

Figure 6: An overview of grammatical errors at CEFR B1 level

As Figure 6 shows, the grammatical areas that pose the greatest challenges to Spanish learners of English when undertaking this written production activity at B1 level are the incorrect use of pronouns (GP) (M = 1.53; SD = 1.66) and articles (GA) (M = 1.35; SD = 1.49). A mean of more than one error per text may therefore be expected at this CEFR level for these two word classes. In the case of pronouns, the most frequent error is using the pronoun ‘his’ instead of ‘its’ (Example 1), followed by the omission of a pronoun when the pronoun ‘it’ should be used (Example 2), and the use of the pronoun ‘it’ as a subject when a subject has already been included (Example 3; see footnote 4).

1 (GP) his $its$ towns
2 I like England because (GP) $it$ is a very beautiful country
3 England (GP) it $0$ is very far

The main problem found in the use of articles is the omission of an article when the definite article is needed. For example:

4 go to (GA) $the$ beach in (GA) $the$ evening

Three further error types emerge as frequent in learner writing at this level, yielding a mean of between 0.05 and 1. They correspond to the expression of number in nouns (GNN) (M = 0.6; SD = 0.92), the use of auxiliary verbs (GVAUX) (M = 0.55; SD = 0.84), and incorrect word class usage (GWC) (M = 0.53; SD = 0.76). Problems encountered when expressing number in nouns can be seen in the following example:

5 My (GNN) dreams $dream$ is …

Difficulties have also been observed in auxiliary verb selection and the incorrect inclusion of an auxiliary verb:

6 If I got more money and more time, I (GVAUX) will $would$ go to visit
7 If I (GVAUX) would have $had$ the opportunity

Lastly, incorrect word class selection can be seen in the following example:

8 The last reason for visiting (GWC) Greek $Greece$
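Because the error annotation follows a regular pattern (a bracketed tag before the erroneous word and the correction between dollar signs, as explained in footnote 4), per-category figures of the kind reported in Figures 5 and 6 can be recomputed mechanically from the tagged texts. The following sketch only illustrates that idea with two hypothetical tagged compositions built from the examples above; it is not the tool used in the study.

```python
import re
from collections import Counter

# Hypothetical error-tagged compositions following the notation used in the examples:
# a bracketed tag such as (GP) precedes the erroneous word, and the correction
# appears between dollar signs, e.g. "(GP) his $its$".
tagged_texts = [
    "I like (GP) his $its$ towns and go to (GA) $the$ beach in (GA) $the$ evening",
    "England (GP) it $0$ is very far but my (GNN) dreams $dream$ is to visit (GWC) Greek $Greece$",
]

TAG_PATTERN = re.compile(r"\((G[A-Z]+)\)")  # grammatical tags only, e.g. GP, GA, GNN, GVAUX, GWC

# Count tag occurrences per text, then average across texts
# (mean number of errors per text for each category).
per_text_counts = [Counter(TAG_PATTERN.findall(text)) for text in tagged_texts]
categories = set().union(*per_text_counts)

for cat in sorted(categories):
    counts = [c.get(cat, 0) for c in per_text_counts]
    mean = sum(counts) / len(counts)
    print(f"{cat}: mean errors per text = {mean:.2f}")
```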
Proposal for fine-tuned descriptors

The results obtained in this study are useful to inform the CEFR descriptor for grammatical accuracy and the CEFR descriptor for monitoring and repair for written production activities at B1 level aimed at Spanish learners of English. These descriptors may be included in a version of the CEFR aimed at Spanish learners and may be unzipped in the electronic ELP for this L1 learner group (see Figure 7 for an example).

Figure 7: Unzipped strategy descriptor for monitoring and repair at B1 level

First, the descriptor for grammatical accuracy can be rephrased as follows (text in italics indicates added information):

Communicates with reasonable accuracy in familiar contexts; generally good control though with noticeable mother tongue influence. Errors occur, especially grammatical errors related to the use of pronouns and articles, which are likely to appear at least once per composition, as well as other grammatical problems which are less frequent, such as the incorrect expression of number in nouns, the use of auxiliary verbs, and the incorrect use of word classes, but it is clear what he/she is trying to express.

Similarly, the descriptor for the monitoring and repair strategy for written production activities may help the learner pay attention to these specific aspects of language and notice mistakes before submitting a piece of writing, so as to improve the quality of the language used, if it is rephrased as follows:

Can double-check during and after the writing process that the use of pronouns and articles is correct. Can pay attention to the incorrect expression of number in nouns, the use of auxiliary verbs and the incorrect use of word classes. Can consult a textbook, a grammar book, a peer, or the teacher (if possible) about doubts regarding those aspects of the FL.
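To make the idea of ‘unzipping’ concrete, the rephrased monitoring and repair descriptor above can be represented as a small checklist structure of the kind an electronic ELP might store, with each micro-descriptor carrying one of the three self-evaluation states used in the Spanish ELP (mis capacidades, aún no, mis objetivos). This is only an illustrative sketch under those assumptions, not the actual ELP format.

```python
from dataclasses import dataclass

# Self-evaluation states used in the Spanish ELP checklists.
STATES = ("mis capacidades", "aún no", "mis objetivos")

@dataclass
class MicroDescriptor:
    statement: str   # an 'I can do' statement
    status: str      # one of STATES, chosen by the learner

# The monitoring and repair descriptor proposed above, unzipped into checklist items.
monitoring_and_repair_b1 = [
    MicroDescriptor("I can double-check during and after writing that my use of pronouns and articles is correct.", "aún no"),
    MicroDescriptor("I can pay attention to the expression of number in nouns, auxiliary verbs, and word classes.", "mis objetivos"),
    MicroDescriptor("I can consult a textbook, a grammar book, a peer, or the teacher about doubts on these aspects.", "mis capacidades"),
]

for item in monitoring_and_repair_b1:
    print(f"[{item.status}] {item.statement}")
```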
Conclusions

This article has highlighted the main challenges that linguistic competence descriptors pose to CEFR and ELP users when attempting to self-assess their writing and exercise autonomy in their learning process, with a particular focus on the grammatical accuracy descriptors and the strategy descriptors for monitoring and repair at B1 level. Thanks to the results obtained through a CEA conducted on a learner corpus comprising B1-level texts by Spanish learners of English, it has been possible to propose a reformulation of the grammatical accuracy descriptor at B1 level to ensure that learners know the types of errors which are most frequent at that level. Likewise, the descriptors for the monitoring and repair strategy at B1 level have been rephrased to help learners develop strategies for monitoring and repair with the aim of improving the quality of their texts by correcting those errors. These descriptors may be ‘unzipped’ in the ELP so that learners can reach the descriptor via baby steps. Examples taken from the learner corpus could also be included in the electronic ELP or teaching materials to raise learners’ awareness of the main grammatical errors and to enable them to monitor and repair their texts.

Using this approach, foreign language teaching following the CEFR can be achieved. First, learners/users are placed at the centre of the process. Second, they are provided with a transparent and coherent process which clearly illustrates the relationship between the three elements of the CEFR triad, namely learning, teaching, and assessment. Finally, learners become active agents of their own learning thanks to the transparency and coherence of the process, as they are able to reflect on what they can do with the language, how well they can use the language, and how they can self-assess their texts, thus becoming more reflective and autonomous learners.

María Belén Díez-Bedmar is Associate Professor at the University of Jaén (Spain). Her main research interests include Learner Corpus Research, error-tagging, the use of corpora in language teaching, the CEFR, language testing and assessment, and CMC.

References

Council of Europe. 2001. The Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.

Dagneaux, E., S. Denness, and S. Granger. 1998. ‘Computer-aided error analysis’. System 26/2: 163–74.

Dagneaux, E., S. Denness, S. Granger, and F. Meunier. 1996. Error Tagging Manual Version 1.1. Louvain-la-Neuve: Centre for English Corpus Linguistics, Université Catholique de Louvain.

Díez-Bedmar, M. B. 2015. ‘Article use and criterial features in Spanish EFL writing: a pilot study from CEFR A2 to B2 levels’ in M. Callies and S. Götz (eds.). Learner Corpora in Language Testing and Assessment (pp. 163–190). Amsterdam: John Benjamins.

Figueras, N. 2012. ‘The impact of the CEFR’. ELT Journal 66/4: 477–85.

González, J. Á. 2009. ‘Promoting learner autonomy through the use of the European Language Portfolio’. ELT Journal 63/4: 373–82.

Granger, S. 2009. ‘The contribution of learner corpora to second language acquisition and foreign language teaching: a critical evaluation’ in K. Aijmer (ed.). Corpora and Language Teaching (pp. 13–32). Amsterdam: John Benjamins.

Granger, S. 2016. ‘The contribution of learner corpora to reference and instructional materials design’ in S. Granger, G. Gilquin, and F. Meunier (eds.). The Cambridge Handbook of Learner Corpus Research (pp. 486–510). Cambridge: Cambridge University Press.

Hawkins, J. A. and L. Filipović. 2012. Criterial Features in L2 English. Cambridge: Cambridge University Press.

Hulstijn, J. H. 2007. ‘The shaky ground beneath the CEFR: quantitative and qualitative dimensions of language proficiency’. The Modern Language Journal 91/4: 663–7.

Little, D. 2005. ‘The Common European Framework and the European Language Portfolio: involving learners and their judgements in the assessment process’. Language Testing 22/3: 321–36.

Little, D. 2007. ‘The Common European Framework of Reference for Languages: perspectives on the making of supranational language education policy’. The Modern Language Journal 91/4: 645–55.

North, B. 2007. ‘The CEFR illustrative descriptor scales’. The Modern Language Journal 91/4: 656–9.

North, B. 2014. The CEFR in Practice. Cambridge: Cambridge University Press.

Trim, J. 2012. ‘The Common European Framework of Reference for Languages and its background: a case study of cultural politics and educational influences’ in M. Byram and L. Parmenter (eds.). The Common European Framework of Reference: The Globalisation of Language Education Policy (pp. 14–34). Bristol: Multilingual Matters.

Footnotes

1 The general descriptor reads ‘Can write straightforward connected texts on a range of familiar subjects which are familiar or of my interest within his field of interest. I can write personal letters describing experiences or thoughts’. An example of a micro-descriptor is ‘I can fill in forms, questionnaires and other similar documents’.

2 The distinction between the error categories, for instance grammar versus lexical errors, was made following the error taxonomy by Dagneaux, Denness, Granger, and Meunier (1996).

3 The complete list of acronyms and their meanings can be found in Dagneaux, Denness, Granger, and Meunier (op.cit.).

4 The examples are verbatim transcriptions from the error-tagged learner corpus. Only the error tags corresponding to the errors under consideration have been left in the examples. The error tags, between brackets, are placed before the wrong word, and the corrections are inserted between dollar signs. In Example 1, the error tag inserted is ‘(GP)’ because ‘his’ is incorrectly used; the correction, $its$, is included after the incorrect word.

© The Author(s) 2017. Published by Oxford University Press.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com

Fine-tuning descriptors for CEFR B1 level: insights from learner corpora

ELT Journal , Volume Advance Article (2) – Nov 22, 2017

Loading next page...
 
/lp/ou_press/fine-tuning-descriptors-for-cefr-b1-level-insights-from-learner-D02UH7t0uo
Publisher
Oxford University Press
Copyright
© The Author(s) 2017. Published by Oxford University Press.
ISSN
0951-0893
eISSN
1477-4526
D.O.I.
10.1093/elt/ccx052
Publisher site
See Article on Publisher Site

Abstract

Abstract Despite the current importance of the Common European Framework of Reference for Languages (CEFR) in the learning, teaching, and assessment of languages, limitations arise in the use of the CEFR descriptors, which are also present in the European Language Portfolio (ELP). This article highlights the main challenges posed to CEFR and ELP users by the linguistic competence descriptors—with a particular focus on the grammatical accuracy descriptors, and strategy descriptors for monitoring and repair at B1 level—when they try to self-assess their written production activities. In order to address these limitations, a Computer-aided Error Analysis (CEA) was performed on a learner corpus comprising B1-level texts produced by Spanish learners of English. The results obtained enabled the reformulation of the descriptors for written production activities at CEFR B1 level aimed at L1 Spanish learners of English, by complementing the existing descriptors with further linguistic information on the most frequent errors at that level. Introduction After more than 20 years of research, the Common European Framework of Reference for Languages (CEFR) was published in 2001 by the Council of Europe to provide ‘a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations, textbooks, etc. across Europe’ (Council of Europe 2001: 1). In addition to establishing a common metalanguage that encompasses the main aspects related to language teaching, learning, and assessment, two further aims can be found underpinning the CEFR (North 2007: 659). The first is to promote reflection on learners’ needs, establish objectives and identify ways to follow up and check their progress. The second involves establishing a series of levels which considers the learners’ use of the language from a communicative point of view. To date, the CEFR has been translated into 39 languages and has been adopted, to varying degrees, by countries throughout Europe and beyond. The CEFR has had a far more pronounced impact on language testing than on any other aspect of language learning/teaching (Little 2007: 648). In fact, aspects of the CEFR related to learning and teaching have yet to find their place in the classroom context (Figueras 2012), although attempts are being made to facilitate this through the use of the European Language Portfolio (ELP), which is the companion piece to the CEFR. This is because the ELP is a ‘much more straightforward’ tool to use than the CEFR and ‘can be put into direct use with the students’ (Figueras ibid.: 450). The ELP helps familiarize students with the CEFR and the use of descriptors, helping them become the main agents of their own learning, for example by encouraging self-evaluation and learner autonomy. However, the use of the descriptors or ‘can-do statements’ by CEFR or ELP users (for example learners, teachers, assessors, material writers, etc.) has proved problematic, as the descriptors are too impressionistic and global in nature when it comes to providing a linguistic description of how well the learner is able to make use of language when performing language activities (Hulstijn 2007; North 2007). In other words, while users are given detailed descriptors for the number, that is ‘quantity’ of language activities they can perform, i.e. the CEFR’s horizontal dimension, they are not provided with clear information regarding the ‘quality’ of their language production when performing said language activities at the different CEFR levels, i.e. 
the vertical dimension (Hulstijn ibid.: 663). In fact, the descriptors for the linguistic competences (Council of Europe op.cit.: 108–18) lack specific information concerning the type of language expected at each CEFR level. The same occurs with the descriptors for strategies, which lack appropriate information, especially for written language activities. Consequently, users still need to be given detailed clear descriptors of linguistic competence and strategies so that they can become autonomous learners as well as effective self-evaluators. In other words, it is necessary to fine-tune the ‘quality’ descriptors and the strategy descriptors in both the CEFR and the ELP using linguistic information on what the learners are expected to do at each CEFR level, as analysed through learner corpora. This article illustrates the main challenges that linguistic competence descriptors pose to CEFR users when they try to self-assess their written production activities, with a particular focus on the grammatical accuracy descriptor and strategy descriptors for monitoring and repair at B1 level. In order to address these limitations and provide CEFR users with more comprehensive descriptors, a Computer-aided Error Analysis (CEA) was performed on a learner corpus comprising B1-level texts by Spanish learners of English. The results obtained enabled the fine-tuning of the above-mentioned descriptors aimed at L1 Spanish learners of English by complementing those descriptors with linguistic information on the most frequent errors made at that level, as extracted from the error-tagged learner corpus. The focus on errors stems from the need to specify the vague reference to errors in the descriptor for grammatical accuracy at B1 level in the CEFR (see Figure 2 below). The CEFR: the horizontal and the vertical dimensions To use the CEFR descriptors correctly, an understanding of the CEFR’s horizontal and vertical dimensions is required. The concepts in the quotation below are described under its horizontal dimension. This is the less commonly known CEFR dimension, but constitutes the core of the scheme (North 2007). According to the CEFR (Council of Europe op.cit.: 9), language use, embracing language learning comprises the actions performed by persons who as individuals and as social agents develop a range of competences, both general and in particular communicative language competences. They draw on the competences at their disposal in various contexts under various conditions and under various constraints to engage in language activities involving language processes to produce and/or receive texts in relation to themes in specific domains, activating those strategies which seem most appropriate for carrying out the tasks to be accomplished. The monitoring of these actions by the participants leads to the reinforcement or modification of their competences. (emphasis in original) As can be seen, this dimension provides a descriptive overview of the domains of language use (i.e. the context in which the language activity takes place, be it personal, public, occupational, or educational); communicative language activities (involving reception, production, interaction or mediation in texts, oral or written, which may combine both modes); and strategies (production, reception, and interaction), as well as the user’s competences (general and particular communicative language competences) needed to do such activities. 
Meanwhile, the vertical dimension provides a set of Common Reference Levels (the most widely known aspects of the CEFR), which offers information on what the learner can do at each level by means of a number of illustrative descriptors. As shown in Figure 1, the three main CEFR levels can each be subdivided into two more specific levels using the hypertext branching approach (Council of Europe op.cit.: 32). If required, these levels may be broken down further (Council of Europe op.cit.; North 2014: 72), following the results of the Swiss National Project (Little 2007: 648). Figure 1 View largeDownload slide The hypertext branching approach in the CEFR (Council of Europe op.cit.: 32) Figure 1 View largeDownload slide The hypertext branching approach in the CEFR (Council of Europe op.cit.: 32) The horizontal and vertical dimensions complement each other, given that learning a language calls upon both, as ‘learners acquire the proficiency to perform in a wider range of communicative activities’ (Council of Europe op.cit.: 17). Thus, the learner is able to do more things on the horizontal dimension with the language at his/her disposal as he/she acquires the language and progresses up the vertical dimension. The use of descriptors in the CEFR and the checklists in the ELP: limitations Four key aspects represent the philosophy behind the CEFR approach, namely transparency and coherence, plurilingualism, the concept of partial competences, and language for a social purpose (North 2014: 10–11). Transparency and coherence are ensured in the CEFR through the use of illustrative descriptors, i.e. statements phrased in positive terms which specify what users can do with the language at each level and the degree of precision they can bring to the task, enabling users, and stakeholders to use this metalanguage to refer to the standards, self-evaluate what they can do, and check the characteristics of the language employed. Although the use of descriptors in the CEFR and the ELP empower learners (González 2009) as they become aware of the aspects they can take action on, it is necessary to adapt the descriptors to the context in which they are used (North 2014). This is because, due to the overriding philosophy of the CEFR, i.e. providing users with a document which may trigger reflection on the learning, teaching, and assessment of any language as well as providing a common standard for the differing levels, the descriptors are written in such a way that they can be applied to any language. But it is for this reason that they show some significant limitations; specific criticisms include lacking validity and relevance, not being based on SLA results, and being too impressionistic and global to really provide a linguistic description of how well the learner can use the language when performing language activities (Hawkins and Filipović 2012; North 2007). Thus, their links to the actual use of language in specific contexts is quite vague. Learners experience this vagueness in the phrasing of the descriptors when, after completing a written production activity, they want to check how well they have performed on such an activity, i.e. the ‘quality’ of their language, in relation to a particular CEFR level. If they wish to focus on the grammatical aspects of their texts, they would need to check the available descriptors for grammatical accuracy at CEFR B1 level (see Figure 2). 
Figure 2 View largeDownload slide Descriptors for grammatical accuracy at CEFR B1 (Council of Europe op.cit.: 114) Figure 2 View largeDownload slide Descriptors for grammatical accuracy at CEFR B1 (Council of Europe op.cit.: 114) Yet, after reading these descriptors, students may still need precise information on the ‘errors’ which are frequent for this language activity at the level in question. Such a description would enable them to seek help from their teacher, language reference tools, etc., to improve their use of the language and move up the vertical dimension of the CEFR. The opposite, i.e. not finding a descriptor specifying these errors, may prevent learners from planning their next writing activity or from evaluating the quality of an already completed task. Furthermore, little information is available to learners when checking language strategies, these being the means at their disposal to activate, monitor, and prepare for any possible unexpected outcomes in task development (Council of Europe op.cit.: 57). When checking production strategies such as monitoring and repair, which are typical in the writing as a process approach, the descriptor at B1 level (shown in Figure 3) reads. Figure 3 View largeDownload slide Descriptors for monitoring and repair at CEFR B1 level (Council of Europe op.cit.: 65) Figure 3 View largeDownload slide Descriptors for monitoring and repair at CEFR B1 level (Council of Europe op.cit.: 65) As can be seen, these descriptors mostly relate to speaking, as writing does not usually involve an interlocutor and feedback is not normally given during the writing process. In the case of written production, learners would need a source of information on the aspects of the language which can be double-checked during or after writing to improve communication, otherwise, they would not be given the opportunity to activate the appropriate strategies for this language activity at this level. The descriptors mentioned above are therefore examples of the need to complement descriptors with detailed information on common errors (in the case of grammatical accuracy descriptors) and on the way that production strategies can be activated at a certain level. Since the CEFR descriptors are part of a ‘flexible, open, dynamic, and non-dogmatic’ document (Trim 2012: 29), there is room for improvement or fine-tuning. Making the most of the flexibility in the CEFR allows users to adapt it to their reality, without losing track of the CEFR. This adaptation, by unzipping or complementing the descriptors (North 2014), can be clearly seen in the descriptors in the ELP’s Language Biography (one of the three parts of the ELP, together with the Language Passport and the Dossier). The Language Biography fulfils a pedagogic function as it aims to encourage learners to experience more language use opportunities and intercultural contacts, improve their language learning, and help them reflect on the process so that they can plan the next step and become autonomous learners. This is done by presenting the CEFR descriptors for each language activity unzipped into ‘micro-descriptors’ or ‘I can do’ statements in checklists. As an example, Figure 4 shows the validated ELP for Spanish learners in which the general descriptor for writing at CEFR B1 level1 is divided into ten micro-descriptors which the learner can self-evaluate as being something acquired (mis capacidades), something on which he/she is working (aún no), or an objective (mis objetivos) in the foreign language. 
In an academic setting, these ‘micro-descriptors’ can be developed through micro-activities which foster the practice of a particular communicative language activity and the corresponding linguistic competences in baby steps, which eventually leads the learners to the general CEFR descriptor. Figure 4 View largeDownload slide CEFR B1 descriptor for writing unzipped in the electronic ELP (Spanish version) Figure 4 View largeDownload slide CEFR B1 descriptor for writing unzipped in the electronic ELP (Spanish version) Despite the possibility of unzipping or complementing the CEFR descriptors in the ELP checklists, the qualitative aspects of language use, such as grammatical accuracy and lexical control, have not yet been fully included in the ‘micro-descriptors’ (Little 2005), probably due to the lack of specificity of those descriptors in the CEFR. Their inclusion in the checklists is crucial, as their absence in the ELP micro-descriptors may prevent learners from undertaking their self-assessment and planning successfully. In Little’s words (2005: 322), ‘unless they know what tasks they can already perform in their target language – and with approximately what linguistic range, fluency and accuracy – their decisions will be random, even worthless’. To account for the need to fine-tune the functional descriptors in the CEFR and the ELP with linguistic information, several projects, as outlined in the next section, analyse the main characteristics of learner language, as compiled in learner corpora, at each CEFR level. Learner corpora: analysing learner language Learner corpora, that is, ‘electronic collections of foreign or second language learner texts assembled according to explicit design criteria’ (Granger 2009: 14), have been used to describe the language learners produce in language activities. The results obtained in Learner Corpus Research have become increasingly salient, since they inform a number of fields including corpus linguistics, second language acquisition research, foreign language teaching, and assessment. Language teaching has been the field most influenced by the results of learner corpora. Learner Corpus Research findings that characterize learners’ language at different proficiency or CEFR levels are used to inform language reference tools, such as monolingual learners’ dictionaries, pedagogical grammars, coursebooks, and Computer-assisted Language Learning materials (Granger 2016). One way of analysing learner language is by conducting a CEA (Dagneaux, Denness, and Granger 1998), where the most frequent errors that students make at a specific CEFR level are identified in an error-tagged learner corpus. Teaching materials may include this type of information to make students aware of the most common problems encountered at each level and to provide remedial teaching activities designed to address specific foreign language difficulties. An example of materials that include Learner Corpus Research findings are dictionaries, like the Macmillan English Dictionary for Advanced Learners, which features ‘Get it right’ boxes that cover the most common mistakes, as analysed in the International Corpus of Learner English. Pedagogical grammars, such as the Cambridge Grammar of English, also utilize CEA results by making reference to problematic foreign language aspects. 
Similarly, textbook series including Face2Face, English in Mind, and Common Mistakes at … also tackle learners’ problem areas at each level when producing language, helping them become aware of their language limitations so as to improve their language quality. Large corpora with texts aligned to the different CEFR levels are needed to analyse learner writing. In the case of English, the leading project in this line of research is the English Profile Project (http://www.englishprofile.org/). This initiative analyses the Cambridge English Corpus in order to identify salient features at each CEFR level, i.e. ‘properties of learner English that are characteristic and indicative of L2 proficiency at each of the levels and that distinguish higher levels from lower levels’ (Hawkins and Filipović op.cit.: 11). However, the findings observed to date report on learner writing at different CEFR levels by learners from various L1 backgrounds. Although some projects have yielded results on specific learner groups, i.e. Spanish learners of English (Díez-Bedmar 2015), further research is needed to describe the errors typical at each CEFR level by a specific L1 group. Thus, this detailed information can inform both language reference tools aimed at the L1 learner group as well as the descriptors for linguistic competence in the CEFR and the ELP, helping students become aware of the language expected at each level. Methodology The learner corpus used in this study to analyse learner language by Spanish students of English at CEFR B1 level when undertaking a written production activity comprises 179 texts (21,441 words in total; M = 119.78; SD = 28.96). These texts were chosen from a larger learner corpus, selecting only those which two independent and experienced raters assigned a B1 level, thus ensuring full inter-rater agreement. Learners were instructed to write a short piece on the following topic: ‘Where, outside Spain, would you go on a short pleasure trip?’ To highlight the errors made by the students, two native English speakers identified all the errors in the compositions. The learner corpus was then manually error-tagged in accordance with the error taxonomy by Dagneaux, Denness, Granger, and Meunier (1996). When manually error-tagging the learner corpus, and so as not to bias the results due to the raters’ different degrees of severity in the identification of errors, only inaccurate uses of the foreign language identified by both native speakers were considered. Descriptive statistics were used to pinpoint the students’ most frequent errors in the foreign language at CEFR B1 level when performing this language activity. Corpus results Figure 5 gives an overview of the mean of errors per text in the learner corpus per error category.2 The one with the highest mean of errors per text corresponds to grammar (M = 6.42; SD = 4.20). 
Figure 5 View largeDownload slide An overview of the main errors at CEFR B1 level Figure 5 View largeDownload slide An overview of the main errors at CEFR B1 level Focusing on the category which yields the highest means of errors per composition in the learner corpus, Figure 6 provides the data on the Grammar category.3 Figure 6 View largeDownload slide An overview of grammatical errors at CEFR B1 level Figure 6 View largeDownload slide An overview of grammatical errors at CEFR B1 level As Figure 6 shows, the grammatical areas that pose the greatest challenges to Spanish learners of English when undertaking this written production activity at B1 level are related to the incorrect use of pronouns (GP) (M = 1.53; SD = 1.66) and articles (GA) (M = 1,35; SD = 1.49). A mean of more than one error per text may be expected at this CEFR level regarding these two word classes. In the case of the incorrect use of pronouns, the most frequent error is using the pronoun ‘his’ instead of ‘its’ (Example 1), followed by the omission of a pronoun when the pronoun ‘it’ should be used (Example 2), and the use of the pronoun ‘it’ as a subject when a subject has already been included (Example 3).4 1 (GP) his $its$ towns 2 I like England because (GP) $it$ is a very beautiful country 3 England (GP) it $0$ is very far The main problem found in the use of incorrect articles is the omission of an article when the definite article is needed. For example: 4 go to (GA) $the$ beach in (GA) $the$ evening Three further error types emerge as frequent in learner writing at this level, yielding a mean of between 0.05 and 1. They correspond to the expression of number in nouns (GNN) (M = 0.6; SD = 0.92), the use of auxiliary verbs (GVAUX) (M = 0.55; SD = 0.84), and incorrect word class usage (GWC) (M = 0.53, SD = 0.76). Problems encountered when expressing number in nouns can be seen in the following example: 5 My (GNN) dreams $dream$ is … Difficulties have also been observed in auxiliary verb selection and the incorrect inclusion of an auxiliary verb: 6 If I got more money and more time, I (GVAUX) will $would$ go to visit 7 If I (GVAUX) would have $had$ the opportunity Lastly, incorrect word class selection can be seen in the following example: 8 The last reason for visiting (GWC) Greek $Greece$ Proposal for fine-tuned descriptors The results obtained in this study are useful to inform the CEFR descriptor for grammatical accuracy and the CEFR descriptor for monitoring and repair for written production activities at B1 level aimed at Spanish learners of English. These descriptors may be included in a version of the CEFR aimed at Spanish learners and may be unzipped in the electronic ELP for this L1 learner group (see Figure 7 for an example). Figure 7 View largeDownload slide Unzipped strategy descriptor for monitoring and repair at B1 level Figure 7 View largeDownload slide Unzipped strategy descriptor for monitoring and repair at B1 level First, the descriptor for grammatical accuracy can be rephrased as follows: Communicates with reasonable accuracy in familiar contexts; generally good control though with noticeable mother tongue influence. Errors occur, especially grammatical errors related to the use of pronouns and articles, which are likely to appear at least once per composition, as well as other grammatical problems which are less frequent such as the incorrect expression of number in nouns, the use of auxiliary verbs and the incorrect use of word classes, but it is clear what he/she is trying to express. 
(Text in italics indicates added information.) Similarly, the descriptor for the monitoring and repair strategy for written production activities may help the learner to pay attention to these specific aspects of language and notice mistakes before submitting a piece of writing so as to improve the quality of the language used, if it is rephrased as follows: Can double-check during and after the writing process that the use of pronouns and articles is correct. Can pay attention to the incorrect expression of number in nouns, the use of auxiliary verbs and the incorrect use of word classes. Can consult in a textbook, grammar book or with a peer or the teacher (if possible) doubts regarding those aspects of the FL. Conclusions This article has highlighted the main challenges that linguistic competence descriptors pose to CEFR and ELP users when attempting to self-assess their writing and exercising autonomy in their learning process, with a particular focus on the grammatical accuracy descriptors and strategy descriptors for monitoring and repair at B1 level. Thanks to the results obtained through a CEA conducted on a learner corpus comprising B1-level texts by Spanish learners of English, it has been possible to propose a reformulation of the grammatical accuracy descriptor at B1 level to ensure that learners know the type of errors which are most frequent at that level. Likewise, the descriptors for the monitoring and repair strategy at B1 level have been rephrased to help learners develop strategies for monitoring and repair with the aim of improving the quality of their texts by correcting those errors. These descriptors may be ‘unzipped’ in the ELP so that learners can reach the descriptor via baby steps. Examples taken from the learner corpus could also be included in the electronic ELP or teaching materials to raise learners’ awareness of the main grammatical errors and to enable them to monitor and repair their texts. Using this approach, foreign language teaching following the CEFR can be achieved. First, learners/users are placed at the centre of the process. Second, they are provided with a transparent and coherent process which clearly illustrates the relationship between the three elements of the CEFR triad, namely learning, teaching, and assessment. Finally, learners become active agents of their own learning thanks to the transparency and coherence of the process, as they are able to reflect on what they can do with the language, how well they can use the language, and how they can self-assess their texts, thus becoming more reflective and autonomous learners. María Belén Díez-Bedmar is Associate Professor at the University of Jaén (Spain). Her main research interests include Learner Corpus Research, error-tagging, the use of corpora in language teaching, the CEFR, language testing and assessment, and CMC. References Council of Europe . 2001 . The Common European Framework of Reference for Languages: Learning, Teaching, Assessment . Cambridge : Cambridge University Press . Dagneaux , E. , Denness S. , and Granger S. . 1998 . ‘ Computer-aided error analysis ’. System 26 / 2 : 163 – 74 . Google Scholar CrossRef Search ADS Dagneaux , E. , Denness S. , Granger S. , and Meunier F. . 1996 . Error Tagging Manual Version 1.1 . Louvain-la-Neuve : Centre for English Corpus Linguistics, Université Catholique de Louvain . Díez-Bedmar , M. B . 2015 . ‘ Article use and criterial features in Spanish EFL writing: a pilot study from CEFR A2 to B2 levels ’ in M. Callies and S. Götz (eds.). 
Conclusions

This article has highlighted the main challenges that linguistic competence descriptors pose to CEFR and ELP users when attempting to self-assess their writing and to exercise autonomy in their learning process, with a particular focus on the grammatical accuracy descriptors and the strategy descriptors for monitoring and repair at B1 level. Thanks to the results obtained through a CEA conducted on a learner corpus comprising B1-level texts by Spanish learners of English, it has been possible to propose a reformulation of the grammatical accuracy descriptor at B1 level to ensure that learners know the types of errors which are most frequent at that level. Likewise, the descriptor for the monitoring and repair strategy at B1 level has been rephrased to help learners develop strategies for monitoring and repair, with the aim of improving the quality of their texts by correcting those errors. These descriptors may be 'unzipped' in the ELP so that learners can reach the full descriptor via baby steps. Examples taken from the learner corpus could also be included in the electronic ELP or in teaching materials to raise learners' awareness of the main grammatical errors and to enable them to monitor and repair their texts.

This approach brings foreign language teaching closer to the aims of the CEFR in three ways. First, learners/users are placed at the centre of the process. Second, they are provided with a transparent and coherent process which clearly illustrates the relationship between the three elements of the CEFR triad, namely learning, teaching, and assessment. Finally, learners become active agents of their own learning thanks to the transparency and coherence of the process, as they are able to reflect on what they can do with the language, how well they can use the language, and how they can self-assess their texts, thus becoming more reflective and autonomous learners.

María Belén Díez-Bedmar is Associate Professor at the University of Jaén (Spain). Her main research interests include Learner Corpus Research, error-tagging, the use of corpora in language teaching, the CEFR, language testing and assessment, and CMC.

References

Council of Europe. 2001. The Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.

Dagneaux, E., S. Denness, and S. Granger. 1998. 'Computer-aided error analysis'. System 26/2: 163–74.

Dagneaux, E., S. Denness, S. Granger, and F. Meunier. 1996. Error Tagging Manual Version 1.1. Louvain-la-Neuve: Centre for English Corpus Linguistics, Université Catholique de Louvain.

Díez-Bedmar, M. B. 2015. 'Article use and criterial features in Spanish EFL writing: a pilot study from CEFR A2 to B2 levels' in M. Callies and S. Götz (eds.). Learner Corpora in Language Testing and Assessment (pp. 163–90). Amsterdam: John Benjamins.

Figueras, N. 2012. 'The impact of the CEFR'. ELT Journal 66/4: 477–85.

González, J. Á. 2009. 'Promoting learner autonomy through the use of the European Language Portfolio'. ELT Journal 63/4: 373–82.

Granger, S. 2009. 'The contribution of learner corpora to second language acquisition and foreign language teaching: a critical evaluation' in K. Aijmer (ed.). Corpora and Language Teaching (pp. 13–32). Amsterdam: John Benjamins.

Granger, S. 2016. 'The contribution of learner corpora to reference and instructional materials design' in S. Granger, G. Gilquin, and F. Meunier (eds.). The Cambridge Handbook of Learner Corpus Research (pp. 486–510). Cambridge: Cambridge University Press.

Hawkins, J. A. and L. Filipović. 2012. Criterial Features in L2 English. Cambridge: Cambridge University Press.

Hulstijn, J. H. 2007. 'The shaky ground beneath the CEFR: quantitative and qualitative dimensions of language proficiency'. The Modern Language Journal 91/4: 663–7.

Little, D. 2005. 'The Common European Framework and the European Language Portfolio: involving learners and their judgements in the assessment process'. Language Testing 22/3: 321–36.

Little, D. 2007. 'The Common European Framework of Reference for Languages: perspectives on the making of supranational language education policy'. The Modern Language Journal 91/4: 645–55.

North, B. 2007. 'The CEFR illustrative descriptor scales'. The Modern Language Journal 91/4: 656–9.

North, B. 2014. The CEFR in Practice. Cambridge: Cambridge University Press.

Trim, J. 2012. 'The Common European Framework of Reference for Languages and its background: a case study of cultural politics and educational influences' in M. Byram and L. Parmenter (eds.). The Common European Framework of Reference: The Globalisation of Language Education Policy (pp. 14–34). Bristol: Multilingual Matters.

Footnotes

1 The general descriptor reads 'Can write straightforward connected texts on a range of familiar subjects within his field of interest. I can write personal letters describing experiences or thoughts'. An example of micro-descriptors is 'I can fill in forms, questionnaires and other similar documents'.

2 The distinction between the error categories, for instance grammar versus lexical errors, was made following the error taxonomy by Dagneaux, Denness, Granger, and Meunier (1996).

3 The complete list of acronyms and their meanings can be found in Dagneaux, Denness, Granger, and Meunier (op.cit.).

4 The examples are verbatim transcriptions from the error-tagged learner corpus. Only the error tags corresponding to the errors under consideration have been left in the examples. The error tags, between brackets, are placed before the wrong word and the corrections are inserted between dollar signs. In Example 1, the error tag inserted is '(GP)' because 'his' is incorrectly used; the correction, $its$, is included after the incorrect word.

© The Author(s) 2017. Published by Oxford University Press.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
