
Countries around the world, including Canada, have recognized the importance of quality assurance (QA) in post-secondary education and have implemented processes for academic program review (e.g. Tam, 1999; Brown, 2004; Mora, 2004; Shah et al., 2007; Weinrib and Jones, 2014). Although program reviews can take different forms and be undertaken for different reasons, an oft-cited challenge is that of engaging faculty members in the program review process in a positive way. Numerous scholars have observed that faculty members are often resistant to institutional quality assurance efforts. Anderson draws attention to “the frequently hostile responses of academics to quality assurance processes” (2006, p. 162), while Cardoso et al. have suggested that some academics may even go so far as to “sabotage” the process (2013, p. 98). Frequently cited complaints from faculty members are that quality assurance mechanisms are overly bureaucratic, onerous, time-consuming, and a drain on already-limited resources (Cardoso et al., 2016, p. 952; Jones and Darshi de Saram, 2005, p. 52; Strydom et al., 2004, p. 213). Many faculty members have expressed the opinion that quality assurance exercises divert their attention from teaching and research, which they see as the truly important aspects of their jobs (Cardoso et al., 2013, p. 98; Manatos et al., 2015, p. 246). Moreover, they are sometimes skeptical about whether the additional workload actually translates into any significant or meaningful improvement in quality (Anderson, 2006, p. 170; Huusko and Ursin, 2010, pp. 865-866), with some feeling that quality assurance risks amounting to little more than “a tick-a-box process” (van de Mortel et al., 2012, p. 118).

Cardoso et al. report that QA managers in higher education are sometimes perceived as engaging in “power games” or implementing a “quasi-feudal management framework” or even “promoting a ‘terror and menace’ climate” (2016, p. 957). Huusko and Ursin note that some academics feel threatened by the “growth of mechanicalness and control” associated with QA processes (2010, p. 866). Jones and Darshi de Saram observe that in some cases, faculty members appear to demonstrate a genuine fear that “if they do not comply with these procedures… then they will be ‘punished’ in some way” (2005, p. 52). The academics perceive that overly demanding rules are applied because management do not trust them; meanwhile, the academics do not trust management “to act cordially” (Jones and Darshi de Saram, 2005, p. 52). Cartwright puts forward similar observations, noting that many faculty members feel powerless in the face of the QA system, and that the QA process can generate resentment on the part of the academics—toward the system, toward immediate managers, toward senior management, or even toward colleagues (2007, p. 290). Van de Mortel et al. observe that, in those instances where it does exist, collegiality is highly valued, and where it does not, its loss is mourned (2012, p. 113).

As outlined above, researchers have investigated a host of factors that may contribute to tensions between the QA team and faculty members in the context of academic program reviews. However, to the best of our knowledge, the role of linguistic factors—such as the choice of terminology and the tone of communication—has not been explored in any detail. This paper explores whether changing the language used to communicate with academics about the institutional QA process might serve as a countertactic to help mitigate faculty members’ resistance to that process. To this end, we present a case study carried out by the graduate QA team at the University of Ottawa. As part of a wider effort to improve the services offered by the QA team, and to engage academics more fully in program reviews, the QA team made a concerted effort to revise the content and style of many of the documents used to communicate with faculty members about the program review process.

This paper is divided into four sections. We begin by describing the institutional context, including a brief overview of the QA process used for graduate program reviews. Next, we explain how a corpus-based approach was used to investigate language use, and we introduce the corpus analysis tools used in this study. With reference to examples taken from our corpus of QA documents, we then explore how the language used in these documents could contribute to the resistance encountered by the QA team when attempting to engage faculty members in the program review process. Finally, we evaluate the success of the QA team’s linguistic revision efforts and present indirect evidence suggesting that improving the tone of communications with faculty members helped lead to greater engagement in the QA process.

1. Institutional context

In Canada, education falls under the mandate of the provincial and territorial governments, and in the province of Ontario, university quality assurance has gained increasing importance over the past fifty years. Both Goff (2013) and Liu (2015) provide detailed overviews of the history and development of university quality assurance in Ontario, so we will give just a summary of the key points here. In 1968, external appraisals of new graduate programs became a requirement, and by 1982, graduate programs began undergoing periodic external appraisals through the Ontario Council on Graduate Studies (OCGS). This process operated largely unchanged until 2007, when the Executive Heads of Ontario Universities commissioned a review of OCGS’s appraisal processes and operations. As a result of this review, the Ontario Universities Council on Quality Assurance (OUCQA) was formed, and by 2010, a Quality Assurance Framework (QAF) was introduced and adopted by all publicly assisted universities in Ontario. The OUCQA, which is responsible for the oversight of the QAF processes for Ontario universities, operates at arm’s length from both Ontario’s publicly assisted universities and the provincial government. Each university has developed its own Institutional Quality Assurance Process (IQAP), which is subject to review and approval by the OUCQA. The requirements for the IQAP are set out in the QAF, and each university’s quality assurance processes are audited regularly by the OUCQA. One of the components that must be part of the IQAP is a protocol for the cyclical review of existing programs at least once every eight years to secure academic standards and ensure ongoing improvement.

At the University of Ottawa, the cyclical review process for graduate programs is managed by the Faculty of Graduate and Postdoctoral Studies, and more specifically, by a team comprising the Vice-Dean, the Director of Quality Assurance, and the Quality Assurance Coordinator, with support from a Graduate Curriculum Design Specialist. When first developing the IQAP in 2010, the University of Ottawa team largely adopted the familiar process—and the associated language—that had been used to conduct the periodic external appraisal of graduate programs through the OCGS system. In 2013-2014, owing to the end of the Vice-Dean’s mandate and the retirement of other staff members, the entire graduate QA team was renewed. As this freshly installed team began the work of learning about the cyclical review process in detail, and supporting programs undergoing a cyclical review, they were struck by two things. Firstly, many of the documents associated with the cyclical review process, including the IQAP—but also presentations, guidelines and standard communication tools (e.g. templates for letters and email communications)—were difficult to understand, and were often written in a prescriptive and authoritarian tone. Secondly, the QA team observed a marked reluctance—and sometimes even an outright resistance—on the part of many faculty members to engage in the cyclical review process. This is in line with observations made by numerous other researchers, as noted above.

Curious to know whether the content and tone of the documents were contributing to the negative reception of QA activities by the faculty members, the QA team decided to investigate the language used in the QA documents more closely. This investigation was led by the Vice-Dean (the present author), who is also a professor at the School of Translation and Interpretation, and whose areas of research expertise include corpus linguistics and language for special purposes.

2. Using corpus-based techniques to investigate the language of quality assurance

The first step undertaken by the QA team was to assemble a corpus, which can be described as a large collection of authentic texts that have been gathered in electronic form according to a specific set of criteria (Bowker and Pearson, 2002, p. 9). In this case, the objective was to investigate the language that appears in documents used to explain or support the graduate program cyclical review process at the University of Ottawa. Therefore, the documents gathered for the corpus included a copy of the university’s IQAP, guidelines and templates for developing a self-study of a program, presentations and handouts used at workshops, and templates of standard communications (such as letters or emails) used to communicate with various stakeholders involved in the cyclical review process, including program administrators, faculty members, external reviewers, and the OUCQA. These documents were collected in electronic format and compiled into a corpus totalling 104 pages in length and containing 37,128 words. This will henceforth be referred to as the QA Corpus.
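
For illustration, the following minimal Python sketch shows how a set of plain-text documents can be combined into a single electronic corpus and given a rough running-word count. The folder name and file layout are hypothetical; this is a sketch of the general approach, not the QA team’s actual pipeline.

```python
import re
from pathlib import Path

# Hypothetical folder holding the collected QA documents as plain text
# (e.g. iqap.txt, self_study_template.txt, workshop_handout.txt, ...).
DOCUMENT_DIR = Path("qa_documents")

# Read every document and join them into one corpus string.
corpus_parts = [doc.read_text(encoding="utf-8")
                for doc in sorted(DOCUMENT_DIR.glob("*.txt"))]
qa_corpus = "\n".join(corpus_parts)

# Rough running-word count based on simple alphabetic tokens.
tokens = re.findall(r"[A-Za-z'-]+", qa_corpus)
print(f"{len(corpus_parts)} documents, {len(tokens)} running words")
```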

An advantage of compiling a corpus in electronic form is that it is possible to use special software packages known as corpus analysis tools to manipulate the data. These tools allow users to access and display the information contained within the corpus in a variety of useful ways that facilitate analysis and interpretation. For this study, two different corpus analysis tools were used: a term extraction tool known as TermoStat[1], which was developed by Patrick Drouin (2003) at the Université de Montréal; and a corpus analysis tool suite called WordSmith Tools[2], developed by Mike Scott (2001) at the University of Liverpool.

A term extractor attempts to automatically identify the terms that appear to be most pertinent to a specialized subject field as represented by a specialized corpus. Different term extraction tools use different techniques to identify candidate terms, but the approach used by TermoStat is as follows. TermoStat compares a specialized corpus (in this case, the QA Corpus) against a general reference corpus (in this case, an 8-million-word collection of texts, half of which are journalistic texts on a wide range of topics taken from the Montreal newspaper The Gazette, and half of which are taken from the general-language British National Corpus). Each term is given a specificity score based on how frequently it appears in the specialized corpus and how frequently it appears in the general reference corpus. The basic idea is that a term which is particular to a specialized domain will occur in the specialized corpus more often than would be expected by chance in comparison with the reference corpus. In other words, the specificity score is essentially a measure of disproportionate frequency: it identifies terms that are unusually frequent in the specialized corpus compared with their frequency in a general reference corpus.
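
TermoStat’s exact scoring statistic is not reproduced here; as an illustration only, the short Python sketch below computes one common measure of disproportionate frequency, a Dunning-style log-likelihood (G²) score, from raw term frequencies and corpus sizes. The figures in the example are invented.

```python
import math

def specificity(freq_spec, size_spec, freq_ref, size_ref):
    """Log-likelihood (G2) score for how disproportionately often a
    term occurs in a specialized corpus relative to a reference
    corpus; higher scores suggest stronger domain specificity."""
    # Expected frequencies under the null hypothesis that the term is
    # equally likely in both corpora.
    total = size_spec + size_ref
    expected_spec = size_spec * (freq_spec + freq_ref) / total
    expected_ref = size_ref * (freq_spec + freq_ref) / total

    g2 = 0.0
    if freq_spec:
        g2 += freq_spec * math.log(freq_spec / expected_spec)
    if freq_ref:
        g2 += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * g2

# Invented figures: a term seen 120 times in a 37,128-word specialized
# corpus and 400 times in an 8,000,000-word reference corpus.
print(round(specificity(120, 37_128, 400, 8_000_000), 1))
```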

After the QA Corpus had been analyzed by TermoStat, the output was a list of candidate terms that TermoStat identified as having a high likelihood of being particularly pertinent to the domain of QA as expressed in this particular corpus of texts used in communications about the cyclical review process at the University of Ottawa. Table 1 gives an example of the type of output generated by TermoStat, including the list of terms, their raw frequency count (i.e., the total number of times the terms appeared in the QA Corpus), and their specificity score. In total, 251 potential terms were identified by TermoStat in the QA Corpus.

Table 1

Sample of the output generated by TermoStat after processing the QA Corpus

The terms appearing on the list generated by TermoStat were then used as search terms in WordSmith Tools, which is a corpus analysis tool that includes a concordancer. A concordancer allows the user to see all the occurrences of a particular search pattern in its immediate contexts. This information is typically displayed using a format known as key word in context (KWIC). In a KWIC display, all the occurrences of the search pattern are lined up in the centre of the screen with a certain amount of context showing on either side, as illustrated in Figure 1. The amount of context displayed can be enlarged as desired, and the concordance lines can be sorted (for example, according to the words that appear before or after the search term). Different ways of sorting the data can help to reveal different patterns, and in the upcoming sections, we will discuss some of the patterns that were identified, as well as other results from the corpus analysis.
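
As a rough illustration of the mechanics of a concordancer, the Python sketch below builds a simple KWIC display for the term academic unit from an invented text fragment. Dedicated tools such as WordSmith Tools naturally offer much richer display and sorting options than this sketch.

```python
import re

def kwic(text, search_term, context=40):
    """Return KWIC rows (left context, key word, right context) for
    every occurrence of search_term, padded so the key word lines up."""
    rows = []
    for match in re.finditer(re.escape(search_term), text, re.IGNORECASE):
        left = text[max(0, match.start() - context):match.start()]
        right = text[match.end():match.end() + context]
        rows.append((left.rjust(context), match.group(), right.ljust(context)))
    return rows

# Invented fragment for demonstration purposes.
sample = ("Each academic unit prepares a self-study; the academic unit "
          "then meets the external reviewers, after which the academic "
          "unit responds to their report.")

# Sorting on the right-hand context, as WordSmith Tools allows, can
# make recurring patterns easier to spot.
for left, key, right in sorted(kwic(sample, "academic unit"),
                               key=lambda row: row[2]):
    print(f"{left} {key} {right}")
```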

Figure 1

KWIC concordance for the term academic unit (QA Corpus)

3. Using language to establish authority and control: some examples from the QA Corpus

Cardoso et al. report that quality assurance is viewed by some faculty members as altering the traditional relationship between academics, inducing a situation where they relate to each other more as “managers” and “managed” than as colleagues (2013, p. 99). In a follow-up study, Cardoso et al. note that tensions between these two groups are exacerbated when the management and governance structure surrounding QA is perceived by faculty members to be heavy, bureaucratic, highly hierarchical, regulatory and autonomy constraining (2016, p. 957).

Meanwhile, Leeuw introduces the notion of reciprocity in a QA context, insisting that “reciprocity implies dialogue, debate, openness and (intellectual) investor-investee relationships instead of primarily a top-down approach” (2002, p. 141). He observes, however, that the QA systems that are in place in higher education have been influenced by traditional theories about regulation, including the logics of hierarchical control through inspection and performance auditing. Unfortunately, the so-called “audit society” reflects a tendency not to trust, and so efforts to build trust, reciprocity and social capital become that much more important if QA is to play a meaningful role in a higher education context, rather than feeding into a culture of “hollowed collegiality” where academics engage in nominal acts of collaboration, but seldom engage in more substantial collective efforts to improve education (Massey et al., 1994, p. 19).

Poole, for his part, echoes the idea that an increase in collegiality is essential for bridging the cultural divide that has evolved between the faculty members and the QA officers (2010, p. 14). In particular, he calls for “a shift in attitude among those academics charged with responsibility for quality” (ibid.). In advocating for a stronger culture of respect between these two groups, Leeuw points out that

[t]he more the relationship between inspector and inspectee is characterised by trust, the larger the probability that the inspectees will act upon the findings, evaluations and recommendations of the educational evaluation officers. Here the mechanism is that trust and mutual understanding act as an incentive for listening to the evaluator and not only listening because one is obliged to do so.

2002, p. 145

One of the frustrations expressed by faculty consulted as part of the recent study by Cardoso et al. is that QA systems “grasp the ‘academic world’ through the language and ideology of managerialism” (2016, p. 952). In examining the list of terms identified by TermoStat, shown in Table 2, we see a number of overt examples of terms that come across as unnecessarily formal, bureaucratic, legalistic or negative, and for which more straightforward and collegial options exist.

Table 2

Examples of bureaucratic language from the QA Corpus

Meanwhile, other examples taken from the corpus demonstrate a more subtle or covert means of using language to establish a power relationship, such as through semantic prosody. Semantic prosody refers to a collocational phenomenon whereby a lexical item that, in and of itself, does not contain any evaluative meaning takes on a favourable or unfavourable attitudinal meaning by virtue of the lexical environment in which it is typically found (Louw, 1993, p. 157). For instance, the adjective habitual is defined in Merriam-Webster’s online dictionary simply as “doing, practicing, or acting in some manner by force of habit”. It would seem, therefore, that habitual is not inherently negative, since there are plenty of good habits in which people could engage. However, a corpus-based examination of habitual (Bowker, 2001, pp. 599-600) has revealed that this word keeps “bad company,” typically being used to modify lexical items such as criminal, drunk, drug user, and offender. Given that habitual appears so frequently in such unfavourable environments, it begins to take on an unfavourable semantic prosody itself, to the extent that it might seem strange or unnatural to encounter this lexical item in a favourable environment. A similar phenomenon was observed in the QA Corpus. For example, to subject to and to submit to have relatively neutral dictionary definitions, as shown in Table 3. However, when used in authentic text, they overwhelmingly appear in unfavourable contexts, as illustrated in Figures 2 and 3.
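
To illustrate how such a collocational profile might be gathered, the Python sketch below counts the words appearing within a few tokens to the right of an expression such as subjected to. The sample sentences are invented (echoing the kinds of contexts found in the BNC), and the window size is an arbitrary choice.

```python
import re
from collections import Counter

def right_collocates(text, expression, window=3):
    """Count the words found within `window` tokens to the right of
    each occurrence of `expression`; frequent collocates hint at the
    expression's typical lexical company (its semantic prosody)."""
    tokens = re.findall(r"[a-z'-]+", text.lower())
    target = expression.lower().split()
    counts = Counter()
    for i in range(len(tokens) - len(target) + 1):
        if tokens[i:i + len(target)] == target:
            counts.update(tokens[i + len(target):i + len(target) + window])
    return counts

# Invented sentences for demonstration purposes.
sample = ("Prisoners were subjected to harsh interrogation; the samples "
          "were subjected to extreme heat; workers were subjected to "
          "constant criticism and harsh conditions.")
print(right_collocates(sample, "subjected to").most_common(5))
```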

Table 3

Dictionary definitions for to subject to and to submit to

While the dictionary entries paint a neutral picture, if we look in the British National Corpus, a 100-million-word general-language reference corpus, it becomes clear that the types of things that one is typically subjected to, or that one submits to, are not particularly pleasant (see Figures 2 and 3). Using these lexical items in a QA context therefore contributes to setting an authoritative tone. Faculty members may be particularly affronted by these lexical choices given that other more neutral or positive options are available. For instance, instead of “All graduate programs will be subjected to appraisal at least once every eight years,” it could be phrased as “All graduate programs will engage in a cyclical review at least once every eight years.” Similarly, “programs will submit to cyclical review” could be expressed as “programs will participate in a cyclical review”. Examples such as these could help to explain why many faculty members perceive QA as being based on imposition and prescription and, thus, as clashing with the values seen to characterize academic culture, such as collegiality.

Figure 2

KWIC concordance of to subject to (British National Corpus)

Figure 3

KWIC concordance of to submit to (British National Corpus)

In addition to specific lexical choices, such as those discussed above, some general stylistic patterns were observed in the corpus. For instance, the use of the passive voice was extremely common. Passive constructions are typically wordy and complex, and they create distance between the writer and reader, thus contributing to a formal and bureaucratic tone that works against collegiality. Moreover, the modal auxiliaries must, should, and shall are used regularly, with shall occurring the most frequently of the three, thus adding to the overall officious tone of the communications and potentially bolstering the faculty members’ perception of “the significant bureaucracy involved in quality assurance” (Cardoso et al., 2013, p. 98).
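
Patterns of this kind are straightforward to quantify once a corpus is in electronic form. As a minimal illustration, the Python sketch below tallies the three modal auxiliaries in a corpus string; the variable qa_corpus is assumed to hold the corpus text, as in the earlier sketch, and the short example string here is invented.

```python
import re
from collections import Counter

MODALS = {"must", "should", "shall"}

def modal_counts(text):
    """Tally occurrences of selected modal auxiliaries in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t in MODALS)

# qa_corpus is assumed to hold the full corpus; an invented stand-in:
qa_corpus = ("The unit shall submit its report; reviewers must receive "
             "it two weeks in advance, and programs should respond.")
print(modal_counts(qa_corpus))
```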

4. (Re-)building trust by changing our tone

Overall, the language used in the documents that make up the QA Corpus does not convey a respectful tone. This is problematic because, as emphasized by Strydom et al. (2004, p. 201), Westerheijden et al. (2007, p. 305), and Cardoso et al. (2016, p. 951), among others, obtaining successful outcomes from quality assurance depends heavily on faculty members buying into, rather than resisting, the process. Therefore, as a first step in (re-)building trust and establishing reciprocity with faculty members, the QA team undertook a thorough revision of all its communications documents to recast them in a more collegial tone that would not alienate academic colleagues.

Although it is not possible to isolate these linguistic changes as the sole variable leading to improved engagement on the part of faculty members, we have nonetheless observed some positive trends since the linguistic revision of the QA documents. For example, a key first step in the QA process is for the academic unit to engage in a self-study and produce a three-volume report. Ideally, this process should take no more than 12 months; however, the mean time to completion when the new QA team arrived in 2014 was 17 months. Clearly, faculty members were dragging their heels to some degree and resisting the process. By the end of 2016, two years after the linguistic overhaul of the QA documents, the average time required to produce a self-study report had been shortened to 13 months.

Another indirect measure of success can be seen in the voluntary engagement of faculty members in customized program assessment activities offered by the QA team. These activities, including Strengths, Weaknesses, Opportunities and Challenges (SWOC) analyses, had had participation rates below 40% prior to 2014. However, in 2016, 85% of the programs undergoing a review that year participated in the voluntary program assessment sessions.

A final indirect measure consists of the positive feedback received from faculty members who are leading programs through the review process. In contrast to the hostility, silence, and resistance that had been encountered by the QA team in 2014, it has become increasingly common for them to receive positive feedback, such as the following:

I submitted the 3 volumes yesterday, so you have everything you need before the next meeting. I would like to thank you personally for all the support you have given me. We have worked hard on the reports and have taken part in every survey possible since last June. Without you and the teaching centre, the task would have been more difficult. In short, thank you so much!!

Concluding remarks

It is well known that language can be used as a means of acquiring and asserting power, but this case study demonstrates that judicious use of language can also serve as a countertactic to help restore the balance of power. While it is unlikely that tasks related to quality assurance will ever rise to the top of a list of favourite activities for faculty members, the results of this case study nonetheless suggest that faculty are more likely to respond with an open mind and to engage more willingly in the institutional QA process if they are approached in a respectful manner and treated as colleagues, rather than as “subjects”. Though hardly surprising, this is encouraging because it presents a relatively easy and straightforward way to reach a win-win scenario. Colleagues who feel respected will contribute more fully to a culture of continuous improvement, resulting in stronger programs that attract high-quality students. Meanwhile, universities can meet their commitment to conduct program reviews more efficiently and without alienating their most important resources.