
1. Introduction

The impact of new technology as a tool for helping translators in their work is nothing new to Translation Studies (TS) literature. However, as we discuss below, very few studies have attempted a systematic analysis of human translation as a technological fact in today’s society. Much current literature analyzes such technologies as subordinate parts of the translator’s work, relying on approaches that we might consider predominantly human-centered, anthropocentric. In this paper, we analyze the degree of symbiosis between technology and human translation in today’s world. Through the prism of this reality, we set out to review human translation with information technology playing a central rather than subsidiary role, addressing this from a wide range of potential perspectives.

This paper aims to reconsider the impact of new technology from two perspectives that have been little addressed to date:

  1. the impact of a new technological paradigm on established Translation Studies theories;

  2. possible theoretical frameworks, so far unexplored or insufficiently examined within TS, that offer an approach to this new reality.

As we set out below, from a psychosocial perspective, a number of theories can be applied to conceptualize the way in which humans relate to and interact with the material elements around them – particularly technology – and the flow of influence and multidirectional development that these processes generate. Such theories have already been applied to Translation Studies (e.g., Göpferich 2009; Pym 2012; Byrne 2012; O’Brien 2013; LeBlanc 2013), but the core role of technology in translation as a discipline has not been extensively explored so far.

Over the last fifty years, information and communication technology (ICT) has infiltrated all areas of knowledge, impacting radically on the nature of most professional activities. The pace of this change has been accelerated by the advent of the Internet and its development as a tool for communication, collaboration and knowledge generation. This metamorphosis is generally regarded as marking the start of a new era: an era characterized by a radical break with past concepts and models of thought.

Although change is naturally implicit in all human occupations and activities, there are clear signs that this technological impact will have irreversible and even more dramatic effects in the field of translation, given the particular nature of this human activity. Nevertheless, it is also possible to apply established Translation Studies paradigms in observing the current epistemological transformation of translation caused by technology. The evolution we are describing here is not so much a theoretical paradigm shift in Translation Studies; rather, it is a change in the very substance of translation as an activity.

Therefore, in the first instance, we must reflect on the emergence of a new era marked by what have become known as new technologies and their impact on translation as a human and professional activity. This will enable us to establish the approach of this work, which we will then develop in the translation and methodological spheres in the following sections.

2. An Instrumental Approach to Translation

For many decades now, technology has been addressed as a driver of human and social change in disciplines such as psychology, anthropology and sociology. One of the most widespread theories in this regard is that of Instrument-mediated Activity, or the Instrumental Method (Heath and Hindmarsh 2000). This was initially proposed by constructivist authors such as Vygotsky (1978) – and to some extent earlier by Piaget (1978) – who were interested in analyzing how humans learn in contact with society and their environment. Vygotskian instrument-based theories focus on three basic notions (Vygotsky 1978; Verillon and Rabardel 1995; Rabardel 1995):

  • the artifact (a tool, whether human-made or not);

  • the instrument (the tool analyzed in its action);

  • the instrumented activity (how a person relates cognitively with the objects they handle).

The conceptual difference between the artifact and the instrument is that the artifact is the object considered in isolation – the object in the abstract – whilst the instrument is the conceptualization of that object in its use and action, and in relation to its users, with all the social, cognitive and other significant implications this entails.

The starting point for our observations on the social changes being wrought by digital technology is precisely that of considering technology as an instrumented action, establishing a system in which technology cannot be decoupled from its users or societies, and these cannot be decoupled from the technologies they use.

Perspectives following this approach agree in characterizing the appearance of printing, with its impact on knowledge distribution, and the more recent emergence of the Internet as two of the artifacts that, through their instrumental action and usage, have ushered in a new era for humanity, particularly with regard to their implications for the transmission of knowledge, for learning and for human technological evolution.

Taking a broad view, many authors from a wide range of disciplines consider there to have been at least three great societies in human history, namely: the Agrarian Society, the Industrial Society and the Information Society. Different authors employ different terms to describe the current situation, for example: global village (McLuhan 1964/1994), third wave (Toffler 1980/1989), Telépolis (Echeverría 1994/1999), informational society (Castells 1996/2000). These formulations are necessarily original, given that they were proposed before the Internet became definitively established in the early 21st century. Whatever the terminology, however, most of these authors agree that we are experiencing a historic rupture in our social fabric as a result of technology.

With regard to the informational society, Castells argues that this name denotes an innovative development: in this society, information feeds back upon itself as the main source of productivity. In other words, knowledge and its transfer mechanisms are acquiring an instrumental value of their own:

What characterizes the current technological revolution is not the centrality of knowledge and information, but the application of such knowledge and information to knowledge and information processing/communication devices, in a cumulative feedback loop between innovation and the uses of innovation.

Castells 1996/2000: 31

However, in the field of translation, the instrument-mediated perspective has not always been so clear. Technologies are usually viewed simply as isolated artifacts; in other words, as objects or material resources (dictionaries, thesauruses, software, the Internet, etc.) that support translators in their work. The theoretical trend was for a long time essentially linguistic or textual. More recently it has become, we could say, anthropocentric, as it now finally considers the translator and their translation competence (Kelly 2002; PACTE 2003; Göpferich 2009; Pym 2012). However, this approach is often somewhat artificial, as it does not consider the environment as a determining and central factor in the process. As we will see, there are some very interesting exceptions, as there have been echoes of instrumentalist approaches in Translation Studies proper. This has particularly been the case since the first decade of the 21st century, when new technologies achieved a level of development that made it possible to believe fully in their potential for change, coinciding with the emergence of perspectives in Translation Studies that were more social or more related to the profession.

The idea that printing, as a technology with both social and human impact, marked a turning point for translation has been understood by authors such as Cronin (2010), Littau (2011) and Byrne (2012).

Littau (2011) brings together insights from TS, book history and technology studies to analyze the impact of media technologies on translation and Translation Studies. Her contribution drafts a stimulating proposal for a material history of translation. Particularly interesting is her approach to technologies as agents of cultural change, with special mention of manuscript culture, print culture and digital culture as the three main milestones of change in translation history.

Byrne (2012: 3-4) explains that each of the main technological advances of our time has been accompanied by translation, so that we cannot understand technology without translation, nor translation without technology. For example, all technology is based on the transfer of information, and this would be impossible without translation. Discussing the appearance of printing, the author argues that translation was completely reconfigured from an epistemological point of view: prior to printed information, manuscript translations and copies thereof were very difficult to disseminate, and there could be no guarantee that copyists would produce exact replicas, whether deliberately or not. Texts, and, by extension, translations, were ephemeral realities, and their content could easily disappear or be adulterated. In Ancient Greece and Rome, for example, there was no notion of authorship being subject to the intellectual property rights that exist in so many areas today. Compilers translated and reproduced the fragments of information to which they had access and made them their own, with the originals often being lost. This changed dramatically with the arrival of printing. Translated texts could now be replicated, as could the originals, meaning that the reference point of the original became omnipresent, and translated versions also became more stable. This had a major impact on translation throughout successive stages of history, with the evolution of concepts related to strategies for reproducing different aspects of the original text, such as equivalence, faithfulness, loyalty, adequacy, etc. (Byrne 2012). In other words, this had a definitive impact on the perception of many of the basic concepts of Translation Studies: the visibility of the translator, the generalization of the notions of source text and target text, and the possibility of systematically studying translations – and even originals and their authors – through translations.

By way of a prelude to the conclusions we will draw, we understand that many of the innovations of our era, such as wiki technology, free software and open-source code, collaborative platforms, crowdsourcing and cloud technology, oblige us to review some of the basic concepts assumed in Translation Studies, and in all disciplines related to reading and writing in general (Littau 2011; Pym 2011). This is the case, for example, with notions such as text, translation unit, media, author, translator and, above all, translation itself.

The technology sector has also established itself as an enormous field of translation work. The interaction between technology and translation led to the rise of the localization industry in the 1980s. This aimed to overcome traditional concepts of translation, seeking the adaptation of technological products (and the discourse around them) to each local market and to local uses, customs and linguistic variety (locale) (Esselink 2000; Hurtado 2001; Gonzalo García and García Yebra 2004; Pym 2004; Mata 2005; Dunne 2006; Alonso 2011; Byrne 2012).

One of the pioneers in analyzing the impact of technology on translation in our time was O’Hagan, in her 1996 work The Coming Industry of Teletranslation, from a perspective based on analysis of the language industries. O’Hagan explained the revolutionary effect that the various technologies of the day – though very different from those we are familiar with now – would have on the translation sector (1996: xii-xiii), both in terms of the technologies themselves and their effect on what she then termed teletranslation, i.e., the possibilities for connectivity at all levels, such as remote subcontracted working, professional discussion forums and information searches, that were just starting to emerge when the world was still in the fax machine era.

At the start of this century, Austermühl (2001) stated that, even though technologies were initially used as just another tool in the translation process, translation soon began to develop into a computer-based activity. The focus of attention was clearly moving, generating a core that integrated human and technological capabilities. Austermühl was at the time well aware of the impact of new information technology on the translation profession:

For translators there is no longer any question of whether or not to use computers and networks. The use of information and communication technology (ICT) is a fait accompli in the lives of today’s language professionals.

Austermühl 2001: 7

Cronin (2010) also observed how the main writing technologies of the last millennium – i.e., printing and, much later, IT and the Internet – prompted social (and political and religious) changes, as well as changes in translating. Cronin (2010: 3) argued that we should not restrict ourselves to a limited understanding of these technologies – as is common in the literature – as simply being auxiliary tools that can be described in isolation, belonging to a specific sector such as localization. Automatic and machine-assisted translation technologies were significant advances, capable of changing pre-existing paradigms on their own. However, Cronin adopts a holistic perspective – which we also adopt – on digital tools in general, using terms such as ubiquitous computing and third wave of computing to refer to the set of digital technologies evolving from personal computing and their huge social impact. As these technologies are integrated into systems and artifacts that are part of people’s everyday lives, both through the type of devices and through their connectivity, we understand them as being capable of generating dynamic and constant systems of interaction and feedback between humans and technologies. As such, technology is to be found everywhere; it has accordingly been described as everyware (e.g., Greenfield 2006; Cronin 2010; Enríquez-Raído 2013), in counterpoint to hardware and software. Given its nature, Cronin questions how this ubiquitous computing will adapt to multilingual environments, concluding that it will represent a further step forward in the development of globalization and the localization industry (2010: 3). As a result, in his approach to the new paradigm, Cronin allows the notion of ubiquity to spread to the field of translation:

Advances in peer-to-peer computing and the semantic web further favour the transition from a notion of translation provision as available in parallel series to translation as part of a networked system, a potentially integrated nexus. In other words, rather than content being rolled out in a static, sequential manner (e.g., separate language information leaflets at tourist attractions), translated material would be personalised, user-driven and integrated into dynamic systems of ubiquitous delivery.

Cronin 2010: 3

In our opinion, the immediate connectivity of people and information through the Internet is perhaps the most powerful catalyst for a new order in translation. This will even impact the way that other, more specific technologies, such as corpus-based machine translation and collaborative assisted translation, are used. With regard to the ultimate repercussion of the Internet on translation in particular, in the light of what has already happened to the printing industry and society in general, Cronin concludes that the medium has clearly become the message (Cronin 2010: 1-2). This idea and this wording also tie in with those of McLuhan, who argued:

In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message. This is merely to say that the personal and social consequences of any medium – that is, of any extension of ourselves – result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology.

McLuhan 1964/1994: 7

The omnipresence of IT and, in particular, the appearance of the Internet have generated a new paradigm based on a premise that blurs the medium, converting it into the message through a process of integration. The Internet has thus established itself as an everything for communication, and therefore for translators, who find in the Internet the medium, the message and the instrument.

In the context of translation, Cronin (2010: 2) finds that all these changes call into question the central pillars of Holmes’ celebrated map of translation studies (1988), which had until very recently been the foundation of the discipline (Vandepitte 2008). This disciplinary approach focused on the notion of medium, with a clear distinction, for example, between theories of translation done by machines and by humans:

The notion of ‘medium’ thus construed is as a kind of classificatory aid, a way of expressing how contents are differently transmitted. However, it is arguable that ‘medium-restriction’ is more than a simple heuristic device, a convenient handle for defining content delivery, that the definitional possibilities of a medium challenge notions of translation invariants which remain constant across different media.

Cronin 2010: 2

We consider that Cronin’s theory could be tested against observation of common current practices, where we find examples of the use of digital tools in even the most traditional forms of translation, such as literary translation. Moreover, forms of translation that mix the oral and the written are constantly appearing.

In this regard, it is also worth noting the results of the ethnographic research set out in Désilets, Melançon et al. (2009). This found that even translators with lower levels of technological competence include numerous generic and specific technological tools in their translation processes. Furthermore, Biau-Gil and Pym insist on the importance of tools that are not specific to translators:

the most revolutionary tools are quite probably the everyday ones that are not specific to translation; Internet search engines, spell checkers, search and replace functions, and revision tools have had a huge impact on all forms of communication.

Biau-Gil and Pym 2006: 18

As Byrne (2012) argues, with the appearance of computer-assisted translation tools we could imagine and even fear that advances in information technology would impact dramatically on the world of translation, but it is ultimately generic technologies, particularly the Internet, that will have the greatest impact.

Commercial Translation: […] the point of which is to provide a written alternative to some foreign language, has always required the use of certain tools whether a clay tablet and stylus, quill and parchment or typewriter, telex and fax. Such tools, while requiring some acclimatization, more so in the case of typewriters and telexes, were unlikely to have any radical impact on the work of the translator; they were simply improvements on existing methods. […] translation only underwent genuine metamorphosis as a result of technology with the advent of computers and the Internet.

Byrne 2012: 15

This new hybrid role – at once user and generator of the related technology – ties in, as we shall see, with innovative sociological and anthropological analyses of the translator’s role, raising some fundamental questions about translating.

The idea this evokes is that technologies, translations and translators together constitute a single, indissoluble system. In other words, the technologies employed in translation should no longer be analyzed as simple artifacts, but studied as realities that generate instrument-mediated actions with strong cultural, social, professional and personal impact.

3. Technology and Translation Studies

Since the 1980s, metaphors of changes or turns have been used to describe the introduction of new paradigms in Translation Studies (Bassnett and Lefevere 1990; Snell-Hornby 1995/2006; Wolf and Fukari 2007; Cronin 2010). Although there is an ongoing debate about which new theoretical perspectives really constitute the appearance of a new paradigm, there is usually a degree of consensus about how Translation Studies evolved from the linguistic turn of the 1960s and 1970s, to the cultural and communicative turn of the 1980s and 1990s, passing through what could be considered a new sociological approach (Wolf and Fukari 2007). The focus has shifted from purely textual studies, to studies of context, and now to studies centered on the translator as an active agent in their environment.

Some authors also argue that there has been a further technological turn (Chan 2004; Snell-Hornby 1995/2006; Cronin 2010; O’Hagan 2012). Hermetic labels denoting disciplinary currents and trends are difficult to maintain (Wolf and Fukari 2007), particularly in areas as multidisciplinary as the one considered here. Whilst it might appear appropriate, because of its obviousness, to talk about a technological turn, we might also debate whether this is in reality just another facet of a sociological or even cultural turn, as the sociological perspective continues to seek to incorporate translators and the mutual interaction they create between themselves and their environment. Nevertheless, speaking from a general rather than a strictly translational perspective, recognition of the technological paradigm appears to be a reality that naturally cuts across many disciplines.

The question is really whether the technological element is merely a revolution of artifacts, a renewal of the media used by the translator, or rather, ultimately, a genuine revolution in translation, which could share some of the foundations of the material history of translation drafted by Littau (2011).

With the exception of functional and cognitive paradigms, and sociological research, many of the other currents of research mentioned focus essentially on the translation product. However, the most pragmatic theories of recent decades are starting to pay attention to the ecosystem and environment in which the translation originates. For example, we can see various currents in Translation Studies, which, as Buzelin (2007: 137) argues, use the metaphor of a network (Even-Zohar 1990) or even a system, in one form or another. According to Robinson (1997), prior to sociological research acquiring the weight it now has in our discipline (something that took place over the last ten years), it was this social approach, then only incipient, that diverted attention from methods purely focused on the product to also observe the process and the actors involved.

In such contextual research, which until recently had been merely anthropocentric, technology no longer plays a secondary or auxiliary role: it now defines and impacts on all other processes. Cronin’s (2010: 1) idea of ubiquitous computing may be a faithful reflection of this phenomenon.

This means that translation is no longer defined by isolated or specific use of computer-assisted or automatic translation tools, but rather by the generalized and essential use of all forms of digital technology across a range of tasks relating to the social positioning of the translator (social and professional networks, digital communication), advanced documentation (information search engines), as well as editing and layout in multiple formats, among others.

As we have stated, it is highly likely that classic and well-established notions will have to be reassessed in the light of the technological reality of translation today. This will allow us to assess whether they remain current, and to propose new theoretical frameworks that consider the reality of elements in the translator’s universe, such as translation standards, the issuer and recipient, authorship and ownership, translator visibility, the translation unit, skopos, translator competence and the social role of the translator. If we focus on three theoretical currents with particular impact today – functionalism, cognitivism and the sociology of translation – we will see that including technology in the analysis structure may have significant impact, even though in all three cases there is an almost natural fit with considering technologies as core elements in their respective analytical frameworks.

3.1. Functionalism

We consider there to be clear feedback between the various currents in Translation Studies and the industry. Nevertheless, functionalist theories have had the greatest impact on the industry, perhaps partly because they were inspired by observation of the translation market in operation. Byrne (2012: 12-13) explains, for example, that the functionalist Skopos Theory (Vermeer 1978; 1996), focusing on the purpose of the translation and its effect on potential audiences, clearly identifies a concept that links Translation Studies with professional practice: the brief. According to Byrne, applied in the professional context, skopos could be defined as the customer specifications to be considered in a particular translation project (Byrne 2006: 39). Byrne argues that functionalism drifts away from actual translation practice, as professional translators rarely receive detailed instructions with their projects that might provide or complete the minimum information required for a meaningful skopos for the translation process:

Producing a translation brief is quite a hit and miss affair with clients rarely able to provide anything more relevant or specific than “I have a 7,500 word document that I need translated. It’s got something to do with electronics and I need it by the end of the week.”

Byrne 2012: 13

Although we agree in part with Byrne, we would nuance this by saying that it is the reality for many translators working with translation agencies, particularly small and medium-sized agencies, where project management and provider communication processes are ripe for improvement. In other cases, however, translation agencies or vendors put a great deal of effort into preparing specifications and instructions for the translator. If we consider the project specifications a translator might receive, we are faced not so much with a disappearing skopos as with a super skopos: detailed style guides; the software and technical configurations to be used, which prescribe translation decisions affecting format and medium; a general description of the target audience for the translation (for example, Spanish speakers from any country); glossaries and bibliographic sources to be consulted; together with basic order information (number of words, rate, date ordered and delivery date). Translators can therefore be confronted with a variety of possible situations: from an absent skopos, which they would need to construct or make explicit in order to produce purposeful translations, to extremely detailed, descriptive and subordinating instructions.
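To make this contrast concrete, the following minimal sketch – our own hypothetical illustration, not any vendor’s actual format – models such a specification as a data structure; every field name is an assumption introduced purely for illustration:

```python
# Hypothetical sketch of a "super skopos" as structured project
# specifications; all field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProjectSpec:
    word_count: int
    rate_per_word: float
    delivery_date: str
    target_audience: str = "unspecified"   # e.g., "Spanish speakers, any country"
    style_guides: list[str] = field(default_factory=list)
    glossaries: list[str] = field(default_factory=list)
    required_tool: str = "unspecified"     # prescribed translation software
    format_constraints: list[str] = field(default_factory=list)

# Byrne's vague brief expressed in the same structure: almost every field
# that would constitute a meaningful skopos is simply missing.
vague_brief = ProjectSpec(word_count=7500, rate_per_word=0.0,
                          delivery_date="end of the week")
```

The point is simply that one and the same structure can arrive almost empty (Byrne’s scenario) or saturated with subordinating detail (the super skopos).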

The founders of translation functionalism probably never imagined such a degree of sophistication in their notion of skopos, given the huge volume of instructions that might now be given to the translator; this is perhaps explained by the profession becoming more technologically based.

Technology constrains and defines translation processes at many different levels:

  • usage of translation tools, e.g., how texts are fragmented into segments by translation memory systems;

  • how mark-up languages influence decision-making in translation and localization (De la Cova 2012);

  • how the Internet impacts information mining, accessing and processing;

  • how translators become integrated in collaborative virtual environments and social networks;

  • the way computerized project management and QA routines influence the whole translation workflow and the translator’s role;

  • how the translator critically evaluates the quality of the available resources for a project, such as translation memories, terminology databases or recommended translation strategies (Pym 2012);

  • how audiovisual elements (sound, images, interfaces, etc.) constrain translation decisions;

  • how translators extract meaning by using corpus-based technologies; etc.
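To illustrate the first of these levels, the toy sketch below – ours, and deliberately naive rather than a description of any commercial tool – shows how a translation memory system might fragment a text into sentence segments and retrieve fuzzy matches above a similarity threshold:

```python
# Toy sketch of translation-memory segmentation and fuzzy matching;
# the segmentation rule and match threshold are illustrative assumptions.
import re
from difflib import SequenceMatcher

# A two-entry translation memory: source segment -> stored translation.
tm = {
    "The printer is out of paper.": "La impresora no tiene papel.",
    "Restart the printer before continuing.": "Reinicie la impresora antes de continuar.",
}

def segment(text: str) -> list[str]:
    """Naively split a text into sentence segments on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def best_match(source: str, threshold: float = 0.75):
    """Return (score, tm_source, tm_target) for the best fuzzy match, if any."""
    score, src, tgt = max(
        (SequenceMatcher(None, source, s).ratio(), s, t) for s, t in tm.items()
    )
    return (score, src, tgt) if score >= threshold else None

for seg in segment("The printer is out of ink. Check the paper tray."):
    print(seg, "->", best_match(seg) or "no match: translate from scratch")
```

Everything the translator sees – what counts as a segment, which past translations resurface – is shaped by parameters such as the segmentation rule and the match threshold.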

The question raised here is to what extent technologies can naturally become part of complex skopos systems, in which human translators and technologies balance and influence each other, whether explicitly – by means of client or vendor specifications or otherwise – or implicitly, in terms of comprehensive and ubiquitous technological environments and their translational influences.

3.2. Cognitive translation theories

The interest of traditional cognitive theories usually lies in identifying, often empirically and experimentally, the decision-making processes and competences employed by the translator. The cognitive approach is usually anthropocentric (the object of study usually occurs in the translator’s mind) and it is only in the most constructivist approaches that this opens out to include the translator’s environment and the way in which the translator interacts socio-cognitively with their context and other agents:

Translation is done not only by the brain, but also by complex systems, systems which include people, their specific social and physical environments and all their cultural artefacts.

Risku 2002: 529

Prunč (2007: 41) argues that the study of translation processes focused for a while on mental conceptualization and psycho-linguistic experiments (such as many of the experiments using think-aloud protocols, which aimed to delve into what was termed the translator’s black box). According to Prunč, in the 1990s such theories opened up to the idea that the translator’s competence had a cultural basis, built on their experiences and social relationships, and they started to reflect the translator’s individual and collective interactions in their social environment. Thus, we can see that the most constructivist theories, such as those of Risku (2010), present a notion close to the theories of the artifact and the instrument mentioned above. Risku termed this Situated and Embodied Cognition, based on the idea that humans are creative and depend on their physical and psychological environment. Risku argues that the current impact of technology is so significant that: “the new findings in cognitive science will necessarily change some of the common concepts and methodological traditions with regard to the actual text production process and competencies” (Risku 2010: 94). Pym takes a similar approach, arguing that: “we will have to rethink, yet again the basic configuration of our training programs. That is, we will have to revise our models of what some call translation competence” (Pym 2012: 1-2). As we aim to describe in the following section, cognitive and constructivist studies incorporating the instrumental and social plane also relate to McLuhan’s (and others’) theories of technology as an extension of humans:

It is simpler to say that if a new technology extends one or more of our senses outside us into the social world, then new ratios among all of our senses will occur in that particular culture. It is comparable to what happens when a new note is added to a melody. And when the sense ratios alter in any culture then what had appeared lucid before may suddenly become opaque, and what had been vague or opaque will become translucent.

McLuhan 1964/1994: 47

Meanwhile, from a translation perspective, Biau-Gil and Pym (2006) and Pym (2011), whilst not explicitly mentioning McLuhan, also make use of the same metaphor of extension when introducing the intrinsically human capacities accentuated by the use of certain tools, referring particularly to translation memories.

Here we shall be looking at a series of electronic tools that extend human capacities in certain ways. These tools fundamentally affect 1) communication (the ways translators communicate with clients, authors, and other translators), 2) memory (how much information we can retrieve, and how fast), and 3) texts (how texts now become temporary arrangements of content). Of all the tools, the ones that are specifically designed to assist translators are undoubtedly those concerning memory.

Biau-Gil and Pym 2006: 6

Many aspects could be reassessed from these perspectives. In this regard, evidence is already accumulating of the impact of new technology in modifying and extending generic capacities, such as reading, writing and memory (Pym 2011), that, in one way or another, feature in the translator’s competence set. A comprehensive review of the interdisciplinary interaction between cognitive translatology and disciplines such as linguistics, psychology, neuroscience, cognitive science, reading and writing research and language technology has been compiled by O’Brien (2013). More specifically, O’Brien (2013: 5) gives examples of many research approaches that have grown thanks to the development of “accessibility tools and methods for measuring specific cognitive aspects of the translation task, in particular screen recording, keystroke logging and eye-tracking technologies.”

A change in the reader’s approach to information has been noted with regard to handling Internet content, although these cognitive modifications could impact on the processing of any type of information. As Cronin argues, the consequence of collaborative phenomena, crowdsourcing and ubiquitous computing is a paradigm shift in translation, the main expression of which is revealed in new translating practices (2010: 1). In this regard, it is worth noting that constructivist research into the impact of instruments on society and cognition has a long history in other disciplines, such as the sociology of technology and the psychology of learning. Building on the initial theoretical basis provided by Piaget (1978) and Vygotsky (1978), there is now a body of more recent literature featuring studies and methodology that might considerably enrich future research in Translation Studies, from fields related to the sociology of science and technology, technology education and teaching, the psychology of technological learning, etc.

Of all the technologies and artifacts-instruments that could be analyzed, those related to information technology and digital connectivity have the greatest psychological weight, in Vygotskian terminology, as they have the greatest influence on the user’s mind and are those that learn the most while being used. We might say, in principle at least, that compared to tools used in a linear fashion, such as a hammer or an eraser, other tools establish a dialogue with the user, for example musical instruments and toys. In this latter category, information technology generates the most complex interactions, as it is in itself considered, metaphorically, intelligent (although only artificially) and tends to be dynamic and to become optimized with use. As Pym explains with regard to new automatic translation systems and translation memories: “The more you use them (well), the better they get,” providing a “learning dimension” to the tool (Pym 2012: 2), which links to the instrumented-action approach.

3.3. Sociological theories of Translation

Finally, we will examine sociological theories of translation. In our opinion, these encompass all the new elements involved in the translator’s role, including technology, with the necessary flexibility and interpretative rigor. Curiously, however, this approach has so far explored the impact of technology on translation only to a limited extent.

The Sociology of Translation (Wolf and Fukari 2007) focuses on the agents involved in translation, considering them to belong to a broader social system. Within this scope, translation is regarded as something multi-faceted, as it can be perceived as:

  1. A socially regulated activity (Hermans 1997: 10; Wolf 2007: 1);

  2. An interactive social event (Fuchs 1997: 319; Wolf 2007: 3). Translation and its agents have a social effect, whilst simultaneously being social products;

  3. A social practice (Wolf 2007: 6).

Sociological theories of Translation have been applied most frequently in literary translation. However, as we will see, some authors have proposed applying them to other forms of translation, naturally understanding the inclusion of technology (as non-human agents or artifacts) as just another agent in the social context of the translation (Buzelin 2005: 212; Chesterman 2007: 173).

Once again following the new foundations laid by Wolf and Fukari in their 2007 compilation, we can say that the main sociological currents in Translation Studies include the notion of habitus and what is known as the actor-network theory (ANT) (Latour 1987; Callon 1986; Law 1999).

The translator’s habitus, as explained by Chesterman (2007: 177), refers to:

the translator’s mindset or cultural mind, “the elaborate result of a personalized social and cultural history (Simeoni 1998: 32).” The habitus thus mediates between personal experience and the social world. The habitus is acquired via “inculcation in a set of social practices” (Inghilleri 2005: 70).

Chesterman 2007: 177

As Wolf (2007: 19) explains, constructing a translator’s habitus involves following their social trajectory. Wolf agrees with Simeoni in affirming that analyzing translation incorporating habitus as a theoretical background may lead to more detailed consideration of the skills, capabilities and socio-cognitive competences in the translation and its results.

Ultimately, a habitus-led consideration of translation practices would encourage more finely-grained analyses of the “socio-cognitive emergence of translating skills and their outcome.”

Wolf 2007: 19

One of the most interesting aspects of habitus theories for our line of research is the capacity to easily incorporate non-human agents or actors. This facilitates the incorporation of technology into the networks being studied.

In addition, the actor-network theory proposed by the sociologist Bruno Latour (1987) creates a theoretical framework that facilitates the analysis of collective socio-technological processes. Latour argues that science is a complex, heterogeneous process, mixing social, technical, conceptual and textual aspects. He incorporates the idea of the artifact, but, despite sharing the same basis, the connotations slightly differ from those of Vygotskian theories. In the actor-network theory, elements only have meaning in relation to the other elements in the network, explicitly including non-human agents, such as technology, machines, animals, text, etc. (Ritzer 2005). The actor-network theory was proposed to overcome the theoretical barrier between agents and structures, and so create cohesive, integrated models.

Both of these theoretical approaches can easily incorporate a technological perspective. For example, Buzelin (2005) is one of the strongest proponents of the actor-network theory, using it to analyze translation-technology phenomena, particularly with regard to literary translation. Her contribution refers explicitly to the possibility of including technology in the analytical framework:

[The actor-network theory] reminds us that the translation process involves a multiplicity of mediators, some of which are technological, and that the latter are not simple tools but ‘black boxes’ enclosing stable forms of knowledge, consensus and presuppositions over what constitutes (good) translation. In short, this concept enables us to grasp both the complexity – and non-linear character – of the translation process, and the hybridity of the translating agent.

Buzelin 2005: 212

Chesterman (2007: 178) argues that these sociological perspectives, which can potentially enrich our understanding of what translation entails, may lead us to reconsider concepts and norms such as translation standards and strategies, the functional notion of the translation brief, the translator’s role, etc.

4. Extended Translation: a Trans-human Translation Approach

In his essay “El Gran Mediodía. Sobre la Transhumanización” (2003), Vázquez-Medel analyzes the signs of change that enable us to glimpse the transformation – perhaps technology-driven – of the human into what he terms the trans-human. As we will explain, this metaphor seems an apt lens through which to observe what is happening in translation.

We have seen that Information Technology and the Information Society are driving changes that can be compared to the impact of the development of printing. Technological advances since the middle of the 20th century, and particularly from the 1970s to the present, have had enormous impact on the tools used by translators (Hutchins and Somers 1992; O’Hagan 1996; Trujillo 1999; Austermühl 2001; Mossop 2001; Bowker 2002; Somers 2003; Torres 2003; Lagoudaki 2006; Melby 2006; Désilets, Melançon et al. 2009; Cronin 2010; García 2009; Pym 2012; LeBlanc 2013). The terms that have been used to refer to the impact of technology on translation evoke some of the approaches adopted since the final third of the 20th century: these include machine translation, computer-assisted translation, teletranslation, localization, globalization, crowd-translation, cloud-translation, wiki-translation or, our proposal in this paper, extended translation or trans-human translation.

Our trans-human translation hypothesis (Alonso and Calvo 2012) refers to an extended cognitive, anthropological and social system or network which integrates human translators and technologies, whether specific to translation or not, and acknowledges the collective dimension of many translation workflows today. A technology-mediated approach envisages technologies in action and interaction with the human, fostering a plethora of instrumental developments, rather than as isolated, fragmentary tools utterly dominated by the human. The creative and learning dimension of technologies in both directions, from the user to the tool and vice versa, also plays a shaping role in this proposed construct. This approach closely relates to Risku and Windhager’s recent formulation of the extended cognitive aspects of translation, which further develops Risku’s previous theories on situated cognition and translation (Risku 2002):

Extended cognition studies inevitably follow “leaking minds” into their social and technical environments, thereby including process, interaction and artefact analysis into a combined and linked view on dynamic complexity.

Risku and Windhager 2013: 36

The time seems ripe to transfer extended mind cognition theories to Translation Studies, whether related to specific skills intervening in translation processes such as memory (Pym 2011) or translation itself as a tool-mediated, context-dependent mind process (Risku and Windhager 2013; Alonso and Calvo 2012).

As for the general impact of tools, ever since the emergence of information technology as we understand it today, starting around the end of the Second World War (O’Hagan 1996: 24), there has been recurrent excitement and speculation about the utopia or hope that humans might be replaced by machines. This has been seen cyclically with regard to artificial intelligence in general and, most illustratively, in relation to automated translation. Every so often, it has been excitedly announced that high-quality automated translation is just around the corner, only for the announcement subsequently to be disproved, to general disappointment. The mechanization or automation of certain capabilities that form part of translation as an industry, activity or process has taken place through the incorporation of new tools. Whilst the ultimate objective of automatic translation tools is to eliminate human involvement almost entirely, the purpose of assisted-translation tools is to help the human translator, who continues to manage and remain ultimately responsible for the process as a whole. To put it another way, while the objective of automatic translation is holistic in concept, assisted translation is fragmentary, as it aims to identify and intervene in translation processes and sub-processes in order to facilitate the work of the human translator.

This idea stems from the subordination of the human translator’s work to machine processes, as noted by Biau-Gil and Pym, who argued that this convergence leads to a certain dehumanization of the translation process and a loss of perspective on the translator’s role, working through sub-discourses rather than from the overall perspective of a text (Biau-Gil and Pym 2006: 6-7). This perception relates back to the loss of functional vision we discussed earlier in this paper. However, McLuhan allegorically argued that this new automation also leads to rehumanization, through the creation of higher-level tasks that enable automated processes to be controlled creatively and holistically.

Thus, with automation, for example, the new patterns of human association tend to eliminate jobs, it is true. That is the negative result. Positively, automation creates roles for people, which is to say depth of involvement in their work and human association that our preceding mechanical technology had destroyed. Many people would be disposed to say that it was not the machine, but what one did with the machine, that was its meaning or message. In terms of the ways in which the machine altered our relations to one another and to ourselves, it mattered not in the least whether it turned out cornflakes or Cadillacs. The restructuring of human work and association was shaped by the technique of fragmentation that is the essence of machine technology. The essence of automation technology is the opposite. It is integral and decentralist in depth, just as the machine was fragmentary, centralist, and superficial in its patterning of human relationships.

McLuhan 1964/1994: 7-8

The intermediary step between automation technologies and the coming of the global village – the ultimate stage of development foreseen by McLuhan – produces a phase in which we perhaps now find ourselves, where the automation-dehumanization of some tasks tends to eliminate jobs. At this paradigmatic juncture for translation, and professional junction for translators, a range of opinions has emerged about how the profession will develop. Broadly speaking, there are two positions: the pessimistic and the optimistic. The crudest versions of the immediate future include García’s (2009) description of the outlook for translators, based on two different roles according to the two dominant translation models: the utility model and the hive model.

The ‘utility’ model could well cater for small projects, or projects in specialised areas. It would also employ professional translators using MT-assisted TM for texts written in some kind of managed authoring environment, or translating directly when dealing with the colloquial language of email and instant messaging. In a typical situation, the use of on-site resources will entail professional translators working in low-paid, call-centre conditions.

The ‘hive’ model does away with professional translators altogether in preference to a mass of volunteers/amateurs. This model brings back the pre-professional era when translators were simply bilinguals with good subject knowledge, and the ability or inclination to transfer meaning between languages. This model would be supported by a few professionally trained translators occupying key terminological or QC roles in the background.

García 2009: 211

From a more optimistic point of view, technology has the potential to create new human roles that may be less mechanical and more dynamic, based on critical thinking, in which human imagination is irreplaceable. The optimists include Van der Meer (2011) and Melby (2012), who still see a promising future for translators who are willing to adapt to change, particularly if they maintain their capacity for critical thinking:

It is possible to summarise optimistic views of the future of human translation using an analogy. Humans will never replace calculators (they are far too slow at doing arithmetic), but computers will never replace certified accountants who use calculators and other tools to make informed recommendations. Likewise, humans will never replace computers to search for words in a bi-text corpus (they are far too slow at skimming large collections of documents for particular words), but the only human translators who will be replaced by computers are those who translate like computers, that is, mechanically.

Melby 2012: 16

The metaphor of dehumanization and rehumanization at the hands of the most recent hybrid tools, which combine rule-based with corpus-based automatic translation technologies, again provides a central element in a vision that contemplates the dehumanization of translation itself, combined with the creation of new, rehumanized processes. This is the case Biau-Gil and Pym describe when discussing translation memories:

On countless levels, the advantages presented by technology are so great that they cannot be refused. Translation memories perform the most repetitive tasks so that translators can concentrate on the most creative aspects of translation.

Biau-Gil and Pym 2006: 18

And again, more recently:

Whereas much of the translator’s skill-set and effort was previously invested in identifying possible solutions to translation problems (i.e., the generative side of cognitive processes), the vast majority of those skills and efforts are now invested in selecting between available solutions and then adapting the selected solutions to target-side purposes (i.e., the selective side of the cognitive processes). The emphasis has shifted from generation to selection. That is a very simple and profound shift, and it has been occurring progressively with the impact of the Internet.

Pym 2012: 9-11

Despite the dehumanization metaphor recurring frequently in reference to new advanced probability-based automated translation systems (Van der Meer 2011; Pym 2012; Vintar 2012), and high-quality automatic translation remaining the goal of several international entities (e.g., the EuroMatrixPlus[1] and Moses[2] projects financed by the European Union), as these authors argue, this does not mean translators will disappear, but rather that they will be transformed. More specifically, post-editing is currently attracting much of the research conducted in the field of translation tools (see, for example, Winther Balling, Carl et al. 2012). However, the idea of future translating being converted exclusively into post-editing seems unlikely to us. There are two significant stumbling blocks that would have to be overcome for automatic translation engines to learn from post-editors in a short period of time. The first results directly from the virtues of MT, which is widely accepted in the translation industry, widespread, and constantly recycled and enriched by Internet content. We consider that, despite the quality controls imposed on multilingual corpora prior to their incorporation into automatic translation engines, this feedback loop could lead to a saturation point at which the redundancy of MT errors would fossilize, creating a bottleneck in the quality of automatic translations. Secondly, we would argue that the quality of statistical automatic translation will vary depending on the language pair, as success depends, in the first instance, on the existence of an enormous bilingual corpus (Oliver, Moré et al. 2008: 37). Unfortunately, not all languages are equal in this regard, due to demographic and political issues, the nature of languages (having a written or a spoken base) and their degree of adaptation to technology and the Internet.
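This dependence on corpus size can be shown with a deliberately crude sketch of our own – not how any real statistical engine works, but enough to see that translation probabilities estimated from counts become fragile as soon as observations are scarce:

```python
# Toy co-occurrence counts over a tiny parallel corpus; the corpus and the
# estimation rule are illustrative assumptions, not a real MT model.
from collections import Counter
from itertools import product

corpus = [
    ("the house", "la casa"),
    ("the green house", "la casa verde"),
    ("the car", "el coche"),
]

cooc = Counter()    # (source word, target word) -> co-occurrence count
totals = Counter()  # source word -> total target words seen alongside it

for src, tgt in corpus:
    for s, t in product(src.split(), tgt.split()):
        cooc[(s, t)] += 1
    for s in src.split():
        totals[s] += len(tgt.split())

def p(t: str, s: str) -> float:
    """Crude estimate of P(target word | source word) from raw counts."""
    return cooc[(s, t)] / totals[s] if totals[s] else 0.0

print(p("casa", "house"))   # backed by two sentence pairs
print(p("verde", "green"))  # backed by a single observation: fragile
print(p("coche", "house"))  # never co-occurred: estimated as zero
```

Real systems are vastly more sophisticated, but the underlying dependence remains: without an enormous bilingual corpus, most counts are zero or near-zero and the estimates collapse.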

In summary, there are numerous factors that would determine the existence of a corpus with the characteristics needed to provide automatic translation with the minimum quality required for communication: such a corpus does not currently exist for all language pairs, and will not develop in the immediate future.

It must be remembered that the degree of technological infiltration of the translation process has not risen inexorably over time, nor is it homogeneous across all language pairs. We might say that every translation action could be placed on a continuum, a kind of possible dehumanization-rehumanization-trans-humanization scale. This is not just about a trend towards specialization or technological development, but rather a process of increasing versatility.

As Calvo (2008) argues, this sets a new challenge for the learning objectives to be established in the training of professionals. While translation competence as traditionally understood remains fundamental for operating in this new scenario, according to Pym (2012: 7) other integrated competencies will be required, such as: learn to learn; learn to trust and mistrust data; [and] learn to revise translations as texts. These tasks have perhaps always been present in non-technologically driven translation, but the point we have reached makes now a suitable time for a substantive review of traditional translator-training practices. This fits with our idea that, following the utopian idea of total dehumanization, we are now seeing a perspective of rehumanization around processes that we might consider cognitively superior to those of a machine: processes relating to supervision, quality control, and automated and hybrid machine-person workflows, or simply what Common Sense Advisory terms transcreation (Kelly and Stewart 2011).

This context leads to what we have termed trans-human translators and trans-translation (Alonso and Calvo 2012: 5), in relation to the activity of a professional translator taking on a different role in the translation process, interacting with technology as though it were really just an extension of their capabilities, and establishing a process with a social, creative and learning dimension. The term trans-human translator was chosen as a clear allusion to the post-structuralist approach set out by Vázquez-Medel in his philosophical essay “El Gran Mediodía. Sobre la Transhumanización”:

a) the human exists in the framework of a wide range of intervals (material and symbolic): this had a start and will probably have an end in which we will contribute to the gestation of a different (and perhaps superior) reality; b) there are symptoms suggesting we are close to the final interval, that we have crossed the threshold and that humanity, as a species, is heading into the sunset; c) there are a number of possible ways to overcome this: the genetic revolution, modifying biological parameters that would otherwise evolve over aeons; the technological and IT revolution, that may identify intelligent forms that are clearly superior in their thinking and feelings to human beings, based on different biological bases; a mixture of both possibilities, tending to the gestation of new creatures (cyborgs) that are both cybernetic and organic; and finally, among other options, the emergence of a superior mind – a super intelligence or super conscience – resulting from the progressive integration of individual human intelligences.[3]

Vázquez-Medel 2003: 28; translated by the authors

If we consider the channels that might lead to the futuristic trans-humanization process described by Vázquez-Medel (2003: 28) in the context of translation, it would seem plausible that trans-humanization might be propagated through “the emergence of a superior mind – a super intelligence or super conscience – resulting from the progressive integration of individual human intelligences,” one of the manifestations of which might be the collaborative translation facilitated by 2.0 and subsequent technologies.

We can see that this trend towards a sum of individual efforts occurs both in automatic translation and in the translation industry, as well as in collaborative translation initiatives driven by technology companies and translation agencies, and by volunteer and private initiatives.[4] Moreover, statistical translation and the translation industry are both interested in compiling translated corpora, whether in translation memory (TMX) or other formats that can be processed using automation techniques. What we term collaborative translation is establishing itself as a manifestation of the new forms of translation noted by Cronin (2010). This trans-humanization of translation via the collaborative approach requires a technological dimension, in which the translator acquires relevance as the organizer and creator of the natural discourse produced through their own capabilities and through the technological and social extensions that establish their environment, their network. We find it likely that this new way of performing translation activity will change the biology of the individual – e.g., cognitive processes – and the traditional interaction between discourse and translator. Verifying these hypotheses should be a significant area of research in the new Translation Studies.
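For concreteness, the short sketch below – our own minimal example – emits one translation unit in the TMX format mentioned above, using only the Python standard library; the attribute values are illustrative:

```python
# Minimal TMX (Translation Memory eXchange) document containing a single
# translation unit; attribute values here are illustrative.
import xml.etree.ElementTree as ET

tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "example", "creationtoolversion": "0.1",
    "segtype": "sentence", "o-tmf": "none", "adminlang": "en",
    "srclang": "en", "datatype": "plaintext",
})
tu = ET.SubElement(ET.SubElement(tmx, "body"), "tu")
for lang, text in [("en", "The medium is the message."),
                   ("es", "El medio es el mensaje.")]:
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

print(ET.tostring(tmx, encoding="unicode"))
```

Because such units are plain structured data, they can be pooled, filtered and fed to statistical engines, which is precisely what makes the compilation of translated corpora attractive to both the industry and MT research.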

5. By way of preliminary conclusions

A purely artifactual approach to translation and its tools leads us to an idea of translation in which productivity and time and cost efficiency are the raison d’être: translation is seen as a core, supreme, self-sufficient skill, and tools as merely subsidiary and supportive applications.

However, this paper suggests that technology actually plays a much more consequential role in translation today. A comprehensive technology-based approach to Translation Studies would offer sound foundations for understanding the present and future of translation, what translators do and what translation entails. Technologies could naturally find their way into translator and translation training in an embedded, transferable and integrated way; but, as Risku and Windhager (2013) claim, far more research is needed before we can actually claim to understand the dynamics of complex trans-human translation cultures.

In a globalized society such as today’s, which generates huge quantities of multilingual content, translation is establishing itself as a heterogeneous, complex activity, discipline and industry, with room for different approaches and working models.

For example, collaborative and crowdsourced translation workflows – not only in contexts such as audiovisual translation (fansubs), activist translation groups, wiki-translation or free software projects, but also in literary and other professional translation environments, e.g., & Other Stories Publishing[5] – might fall under the category of trans-human translation, i.e., translation processes that take place by means of technological networks and extensions, replacing or enriching individual skills, processes and roles that were formerly performed in a rather isolated way. Social translation networks and forums could be clear examples of how the sum of collective intelligences may be triggering the trans-humanization of translation and the mutual interplay between human translators and their tools.

The impact of translation-specific technologies – whether memories or automated translation engines relying on massive multilingual corpora – and of related processes such as post-editing also calls for further exploration, both from a theoretical and an empirical perspective.

A technology-based approach has the potential to shed light on perennially open questions in translation, such as the notion of authorship and copyright; the conception of the text, whether as a traditional finished product or an ever-changing prototype; research on translator skills models; translation norms; etc.

As we have argued, the trans-humanized, technology-mediated approach to translation links harmoniously with many of the principles on which we have based our theoretical reflections: the relation between artifact, instrument and instrumented action; more up-to-date and integrative notions of skopos; socio-cognition and theories of the extension of the mind; and mutual social feedback between the translator and technology.