Introduction

Encryption is one of the most important technologies of the digital age. It provides individuals and organizations with the confidence and trust necessary for a myriad of socially productive transactions, including e-commerce, personal and business communications, and the provision of government services.[1] It also facilitates the expression of ideas and opinion and pursuit of fulfilling lifestyle choices essential to a free and liberal society.[2]

But encryption also poses a tangible threat to public safety. Criminals and other wrongdoers use it to shield incriminating evidence from law enforcement. In response, police have had to find ways to circumvent it. Current methods are limited, however, and their success often depends on variables outside of law enforcement’s control. To the extent that evidence of crime is increasingly in digital, encrypted form, it will often be either unavailable to police or require costly measures to access. This situation has no analogue in the pre-digital world: no safe or lock could permanently prevent police from lawfully obtaining incriminating documents or records stored within.[3] Encryption technology thus threatens to impede the detection and deterrence of crime and cause significant harm to society.[4]

Legislative responses to this conundrum may be grouped into three categories. The first is to do nothing, relying on existing and future investigative and technical methods to lawfully access as much encrypted data as possible. Whether this is a tenable situation for society involves many difficult empirical and policy questions that we do not attempt to answer in this article.[5]

Should Parliament decide to act, two approaches present themselves. First, Parliament could choose to regulate the production and use of encryption technologies to enhance law enforcement’s lawful access to unencrypted information. Such an “exceptional access” regime would compel encryption providers to give police “backdoor” access to decrypted data. The second approach would empower police to compel individuals—by imposing sanctions for refusal—to decrypt their own data. This could be achieved by compelling users to give police passwords, encryption keys, biometric identifiers, or the unencrypted data itself.[6]

Both of these legislative solutions raise concerns. Exceptional access presents technical challenges, including the risk that malicious actors will exploit security vulnerabilities intended only for legitimate law-enforcement purposes.[7] Compelled decryption avoids this risk, but raises concerns relating to privacy and self-incrimination.[8]

This article evaluates each of these alternatives in the context of policy and constitutional law. We conclude that exceptional access, though very likely constitutional, creates too great a risk of data insecurity to justify its benefits to law enforcement and public safety. Compelled decryption, in contrast, would provide at least a partial solution to the encryption problem without unduly compromising data security. And while it would inevitably attract constitutional scrutiny, it could be readily designed to cohere with the Canadian Charter of Rights and Freedoms.[9] By requiring warrants to compel users to decrypt their data, and by giving evidentiary immunity to the act of decryption, our proposal would prevent inquisitorial fishing expeditions yet allow the decrypted information to be used for investigative and prosecutorial purposes.

The article proceeds as follows. Part I outlines existing methods for circumventing encryption and their limitations. Part II discusses the legal and policy issues surrounding the exceptional access and compelled disclosure alternatives. We argue that the latter is the more viable policy option, can be designed to comply with the Charter’s self-incrimination and privacy protections, and would maintain the contemporary balance between liberty and crime control interests.[10]

I. Existing Methods of Defeating Encryption

Encryption uses the process of cryptography to transform ordinary information, or plaintext, into unintelligible ciphertext. Persons who possess the encrypted ciphertext cannot recover the plaintext information unless they know both the algorithm used to perform the transformation and an additional piece of information called the encryption key.[11] In effect, this allows encryption to hide information from anyone not authorized to view it.

The theoretical security of an encryption system turns on whether it can be broken within a reasonable amount of time and computing power. Any encryption can in principle be defeated by trying every possible key, but an algorithm that requires an unreasonable amount of time and resources to break in this way is considered computationally secure.[12]
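The point can be illustrated with some back-of-the-envelope arithmetic. The guess rate below (one trillion keys per second) is an assumed figure for a well-resourced attacker, not a measured benchmark:

```python
# Illustrative arithmetic: why exhaustive key search is infeasible for
# modern ciphers.  GUESSES_PER_SECOND is an assumed attacker capability.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 10**12

def years_to_search(key_bits: int) -> float:
    """Worst-case years needed to try every key of the given length."""
    keyspace = 2**key_bits
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"{years_to_search(40):.0f}")   # a 40-bit key falls in about a second
print(f"{years_to_search(128):.2e}")  # a 128-bit key: on the order of 1e19 years
```

Each additional bit doubles the keyspace, which is why a modern 128-bit or 256-bit key is, for practical purposes, immune to exhaustive search.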

Even computationally secure encryption, however, may be vulnerable to attack. The encryption algorithm and keys are part of a larger security system that may contain weaknesses.[13] Depending on the algorithm used, the device it operates on, and decisions made by the user, police may be able to recover the plaintext information without acquiring the encryption key. Moreover, many systems derive the key from a user-selected password.[14] In such instances, the encryption is only as strong as the user’s choice of password.
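The password-derivation design just described can be sketched with Python's standard library. The key-derivation function (here PBKDF2) turns a password and salt into a fixed-length key, so the resulting key is only as hard to guess as the password itself; the hash and iteration count below are illustrative choices, not a recommendation:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from a user-selected password.

    Sketch only: PBKDF2-HMAC-SHA256 with an illustrative iteration count.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)
key = derive_key("hunter2", salt)          # weak password -> guessable key
assert key == derive_key("hunter2", salt)  # deterministic for the same inputs
print(len(key))  # 32 bytes, i.e. a 256-bit key
```

Even though the derived key is 256 bits long, an attacker need only enumerate plausible passwords, a far smaller space, which is the weakness the password-guessing methods in Part I.D exploit.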

As mentioned, police currently have methods of accessing encrypted data. These may be grouped into four categories: (i) traditional investigative methods; (ii) third party assistance; (iii) exploiting vulnerabilities; and (iv) guessing the password.[15] As detailed below, however, each has significant limitations. Without exceptional access or compelled disclosure, law enforcement will continue to be prevented from accessing information that it is legally entitled to obtain.

A. Traditional Investigative Methods

Police may use several traditional investigative techniques to obtain plaintext from encrypted data, most commonly through surveillance, search and seizure, and questioning.[16] As its name implies, surveillance involves surreptitiously observing suspects to capture passwords, encryption keys, or plaintext before it is encrypted. Hidden video cameras, for example, may record suspects’ encryption keys or passwords as they are inputted into the device. In addition, if suspects share passwords over unencrypted channels, police can potentially intercept them in transit. And most effectively, police can obtain keys or plaintext by surreptitiously installing software on suspects’ devices. The most common form is a “keylogger”, which records all keystrokes, including passwords, entered on a device.[17] Similarly, any plaintext information input through keystrokes would be recorded before it is encrypted. Another type of software, known as a “rootkit”, can provide full access to a targeted device, making plaintext information directly recoverable.[18]

Surveillance methods have at least three major limitations. First, they require police to have pre-existing knowledge that a suspect is using a particular device. Surveillance thus is inapplicable when police discover a device previously unknown to them. Second, even when a device is targeted in advance, it will often be difficult to surreptitiously capture inputted data. Observing a password, whether in person or through video surveillance, presents logistical challenges for police.[19] Similarly, software tools need to be secretly installed on a suspect’s device. Doing this by gaining physical access to the device involves the same challenges discussed above. And remote installation requires exploiting security vulnerabilities that, as discussed in greater detail below, may either be prohibitively costly to overcome or may not even exist.[20] Finally, surveillance is often highly intrusive, potentially recording information, communications, and activities far beyond what is necessary to defeat encryption, including that relating to innocent third parties.[21] Police wishing to conduct such surveillance must meet onerous standards of justification under the Criminal Code[22] and Charter.[23]

Alternatively, police may use search and seizure powers to locate either the user’s password or a copy of the targeted plaintext information. Some people record their passwords to avoid having to memorize them.[24] If police are authorized to search locations where users store their passwords and find them, they can use them to access the encrypted information they protect.[25] In addition, copies of the targeted data may be stored in another location that is either unencrypted or accessible by third parties.[26] Lastly, police may be able to lawfully seize devices while they are in an “unlocked” state and retrieve the data they contain.[27]

These techniques are likely to be effective, however, only for naïve targets. More sophisticated wrongdoers can readily secure their passwords and data to thwart these techniques. Moreover, as mentioned, device manufacturers, software designers, and service providers are making it increasingly simple for ordinary users to adopt security best practices, often in the form of “default” settings that provide strong security without any action by the user.

Lastly, police may ask suspects to voluntarily provide passwords, encryption keys, biometric identifiers, or plaintext. So long as their questioning is lawful,[28] they may use any information revealed to access encrypted information in their possession.[29] But while police are trained to induce cooperation and may employ a considerable degree of pressure and manipulation,[30] there is no guarantee this method will succeed, especially in interrogating sophisticated wrongdoers. Suspects enjoy a freedom to remain silent in response to police questioning[31] and police cannot (currently) require them to assist with decryption.[32]

B. Third Party Assistance

If police cannot access encrypted information themselves or convince suspects to help decrypt it, a logical place for them to turn to is the organization that implemented the encryption. The extent to which third parties can assist with decryption differs depending on the form of encryption used. End-to-end encryption is initiated on the user’s device, and protects data from any third party coming into possession of it, including telecommunications, internet, and software application providers.[33] It presents police with a more challenging obstacle than encryption initiated by such technology service providers. When providers encrypt information, they control the keys and can access the plaintext at any time.[34] But they cannot access information encrypted end-to-end without additional weaknesses built into the system.[35] Unless providers store a copy of the encryption key or retain some way of determining it, police will not be able to access the user’s information.

The same principles apply to encrypted data stored on a user’s device. Secure device encryption systems, such as those used on the latest mobile operating systems developed by Google and Apple, ensure that the service providers do not have access to the user’s encryption key.[36] For both end-to-end and secure device encryption, the implications are the same. Without building vulnerabilities into the system, service providers are in no better position to access encrypted information than the police.

Police currently have two ways to obtain third party assistance with decryption, neither of which works for end-to-end or secure device encryption. First, as detailed in Part II.A below, wireless telecommunications providers are required to provide plaintext versions of data they have encrypted when police demonstrate their lawful entitlement to that data. But providers are not obliged—and in any case, would not likely have the capacity—to provide plaintext of data encrypted before entering the network, as with data encrypted end-to-end by a user or by default on a device.

Second, police may ask any service provider, such as a device manufacturer, to voluntarily assist with the decryption of data that they have encrypted (or facilitated the encryption of). If they do not comply voluntarily, police might be able to use section 487.02 of the Criminal Code to force them to do so. That provision authorizes a judge or justice to “order a person to provide assistance, if the person’s assistance may reasonably be considered to be required to give effect to [an] authorization or warrant.” Though there are no reported decisions applying this provision in this context, there are reports that police have used it in attempts to obtain decrypted data from service providers.[37]

As end-to-end and secure device encryption become more common, however, the usefulness of assistance orders is likely to wane.[38] Further, many providers have taken the position that these orders cannot be used to compel them to attempt to defeat their own encryption systems.[39] It is likely, therefore, that police will be increasingly unable to enlist service providers to decrypt data.[40]

C. Exploiting Vulnerabilities

Another way for law enforcement agencies to access encrypted information is through technical vulnerabilities within an encryption system, which can be leveraged using software tools known as exploits. These weaknesses can exist in the encryption algorithm, on the device that the algorithm operates on, or within other applications on the same device. To discover such exploits, state agencies can either develop their own hacking capabilities or purchase them from third party vendors.[41] For example, the FBI used a purchased exploit to access data on an Apple device used by a perpetrator of the 2015 San Bernardino terrorist attack. The exploit disabled security measures built into the device that prevented the agency from using “brute-force” computation to guess its password.[42] Some encryption implementations also store keys on the device, leaving them vulnerable to recovery by resourceful attackers.[43]

Police may only exploit vulnerabilities, however, before the encryption provider fixes them. Technology companies have strong commercial reasons to maximize data security. They therefore invest heavily in preventing and quickly redressing vulnerabilities.[44] While efforts to reduce vulnerabilities are not always successful,[45] developing a toolkit of unaddressed exploits demands significant resources and technical expertise.[46]

Exploiting vulnerabilities also raises a broader policy concern. Government-developed hacking tools can be leaked or stolen and subsequently used by malicious actors.[47] And police purchases of third party exploits spur the market for privately developed hacking tools, which may likewise be used for nefarious purposes. In other words, state-sponsored hacking, even if targeted only at likely wrongdoers, may effectively diminish the security of encryption for law-abiding citizens.[48]

D. Guessing the Password

Lastly, police may attempt to guess the user’s encryption key or password. Current encryption standards are so strong that it is rarely possible to discover the key within reasonable time and resource limits.[49] But because many encryption systems derive their key from a user-selected password, police can often devote their energies to the easier task of guessing it.[50]

There are two main ways to guess a password: trying all possible combinations (a brute-force attack) and trying a list of passwords (a dictionary attack).[51] The probability of success for each method turns on the strength of the encryption system and complexity of the password, as well as the computational resources available to police.

Success is more likely when the number of potential passwords is limited. For example, a four-digit numeric PIN allows for only 10,000 combinations, while a four-character alphanumeric password—using only numbers and lowercase letters—allows for 1,679,616. However, even when the number of combinations is small, the system may include countermeasures designed to thwart repeated password entry attempts. If the relevant encrypted files can be extracted from the device and run on powerful offline processors, a great many passwords can be tried within a short period of time.[52] But some systems implement controls to prevent such external password cracking attempts, forcing the attacker to make all attempts directly on the targeted device. Depending on the device, this can drastically increase the amount of time required to complete a brute-force attack.[53] On other systems, users can employ a setting whereby data is permanently destroyed if an incorrect password is entered a certain number of times.[54] This is the feature that initially prevented the FBI from accessing the San Bernardino terrorist’s phone.[55]

Police attempts to guess the user’s password are therefore constrained by both the complexity of the password and by any countermeasures included in the encryption implementation. As the San Bernardino case demonstrates, even a four-digit numeric PIN can pose significant challenges for law enforcement.

II. Legislative Responses

As we have seen, law enforcement’s capacity to defeat encryption under current law depends on many variables outside of its control, including decisions made by the encryption user, the strength and design of the encryption system, and whether users or service providers are willing to cooperate with police. As mentioned, we do not attempt to answer whether this state of affairs is acceptable. Given the increasing power, ubiquity, and security of encryption systems, however, it makes sense to at least consider how government might enhance police access to encrypted data.

There are two main legislative options: (i) requiring service providers to grant police exceptional access to encrypted data through backdoors; and (ii) compelling suspects to decrypt their own information in response to a lawful request enforceable by criminal punishments for noncompliance. In the discussion that follows we evaluate whether either regime is tenable from the perspectives of both policy and constitutional law.

It bears repeating that neither option would imply changing the law regulating the state’s entitlement to private information. As the law currently stands, police may use any technical means to decrypt data that they are legally entitled to access. If state agencies obtained a technology capable of quickly and efficiently breaking all current encryption systems, they would be free to use it without restriction. We recognize, and discuss in detail below, that enhancing the state’s decryption capacity (whether through legislation or technology) could increase the quantum of private information accessible to police. But it should be kept in mind that while the law gives people the freedom to use encryption and other security measures to protect their data, it has never restricted law enforcement’s capacity to defeat those protections.[56]

A. Exceptional Access

1. Policy

An exceptional access regime would require service providers to build in law-enforcement backdoors for all data encrypted by their systems.[57] It is easy to understand why such a regime would be attractive to law enforcement. In theory, it would give them the ability to easily obtain plaintext for any encrypted information that they are lawfully entitled to access. And, in contrast to the compelled disclosure proposal discussed below, it would provide them with plaintext without any involvement from the user. This would enable the immediate decryption of data acquired through both prospective, surreptitious interception and retrospective seizure from devices used by persons who could not be induced, for whatever reason, to assist with decryption.

Before examining this proposal further, it is important to note that a limited exceptional access regime has existed in Canada since the mid-1990s.[58] As mentioned, where police establish their legal entitlement to information, wireless telecommunications providers must provide it in plaintext form if they have encrypted it.[59] Specifically, the government has mandated that if providers “initiate encoding, compression or encryption” on communications, they must provide police with the communications “en clair” (i.e. plaintext).[60] This obligation is part of a broader spectrum of licensing conditions requiring providers to enable police access to lawfully acquired telecommunications content and metadata.[61]

This duty to decrypt does not apply, however, to “end to end encryption that can be employed without the service provider’s knowledge.”[62] This means that wireless telecommunications providers are not obliged to provide plaintext for data that was encrypted, by the user or a third party, before entering the network, even if the provider implemented the encryption system.[63] The duty to decrypt has also been interpreted to apply only to circuit-switched communications such as mobile phone calls, SMS text messages, and faxes, and not to packet-switched (i.e. internet) communications.[64]

As a consequence of these exceptions, the duty to decrypt covers only those communications encrypted by telecommunications providers. It does not extend to information encrypted by default on mobile operating systems, such as Apple’s iOS and Google’s Android. Nor does it apply to communications encrypted end-to-end by application providers, such as WhatsApp, even if they are carried over wireless telecommunications networks. And even if the decryption obligation did apply, the wireless providers subject to the regime would probably not be able to comply, since they do not hold the keys to data encrypted end-to-end or by default on users’ devices. Even in the wireless sphere, therefore, law enforcement’s capacity to defeat encryption is limited, and is likely to become increasingly so.

Not surprisingly, then, police have lobbied for comprehensive exceptional access legislation.[65] As we elaborate below, however, there are significant economic, jurisdictional, and technical hurdles that would have to be overcome to make such a regime a practical reality. More importantly, exceptional access would introduce vulnerabilities into encryption systems that could be exploited by malicious actors.

To begin, any legislation requiring service providers to build in backdoors for police would generate significant upfront development costs. Many technology experts contend that it is simply not possible to design exceptional access in a way that would allow police in while keeping malicious actors, such as criminals and foreign intelligence agencies, out.[66] Even if we assume that this concern is overstated, developing a proposal for secure exceptional access would require extensive research and resources.[67] There would also be significant costs associated with replacing existing encryption systems that do not currently support exceptional access.

Furthermore, the obligations imposed by exceptional access legislation could diminish investment in the Canadian technology sector. As legislation would apply only to providers operating in Canada, many security-conscious users would adopt purely foreign encryption systems.[68] Canadian operations would become less profitable, and some providers might pull out of Canada altogether rather than bear the burdens of compliance. Smaller start-up firms could also suffer, potentially chilling technological innovation.[69]

The availability of foreign and open-source alternatives would also limit the effectiveness of any exceptional access regime.[70] It would not be difficult for sophisticated users, including ones with malicious intent, to find methods of encrypting information immune to exceptional access.[71] Police would still face situations in which encrypted information is inaccessible to them.

Lastly, and most importantly, exceptional access would expose legitimate, socially-productive uses of encryption to the same vulnerabilities that enable lawful police access. As mentioned, there is an overwhelming consensus among information security experts that exceptional access cannot keep backdoors out of the hands of bad actors.[72] Exceptional access requires some organization to hold the access keys. This is problematic, as a single organization that holds multiple keys presents a concentrated target for attackers seeking to access those keys.[73] And there is also a risk that backdoors will be abused or unintentionally leaked by internal actors.[74]

These risks are not merely hypothetical. From 2004 to 2005, the phone communications of several members of the Greek government were surreptitiously intercepted. This sophisticated attack was carried out by compromising a lawful intercept mechanism built into the telecommunications network.[75] In 2015, Juniper Networks found unauthorized code allowing traffic travelling through its network devices to be decrypted. Some evidence suggests that this code originated as an NSA-requested backdoor.[76] Further, as the Snowden leaks demonstrate, even if backdoors are not penetrated by hackers, they may be disseminated or misused by the people who have access to them.

In summary, policy considerations indicate that exceptional access would be a poor solution to the encryption problem.[77] We do not believe, however, that it would be unconstitutional. While commentators have suggested that it could compromise privacy[78] and free expression,[79] there is little reason to think that it would violate the constitutional provisions protecting these norms: sections 8 and 2(b) of the Charter.

2. Privacy and Section 8 of the Charter

Section 8 states that “[e]veryone has a right to be secure against unreasonable search or seizure.” Whether a search or seizure has occurred turns on whether the state has invaded the claimant’s “reasonable expectation of privacy.”[80] If not, section 8 is not violated. If the state does invade a reasonable expectation of privacy, the court must go on to consider whether that invasion was “reasonable”.[81]

Any section 8 challenge to exceptional access would very likely fail at the first stage. Put simply, in and of itself, encryption cannot create a reasonable expectation of privacy.[82] As we elaborate in Part II.B(3) below, before they decrypt anyone’s data, police must show that they are lawfully entitled to access it. In many cases, the data will be protected by a reasonable expectation of privacy, and police will accordingly have to comply with the requirements of “reasonableness” under section 8 to acquire it. Often, this will require them to get a warrant based on reasonable and probable grounds. But once they have established their entitlement to the data, the law imposes no limits on their efforts to make it intelligible, even if encryption or any other security measure makes it difficult to do so.[83]

Consider analog equivalents of encryption. People can lock their private documents in safes or use public pay telephones or anonymous prepaid mobile phones to communicate with others, but if police obtain warrants to seize those documents or communications, they are free to try to overcome those security measures.[84] Imagine that police convince a judge that a residence likely contains documents relevant to the investigation of an offence and obtain a warrant to search that residence for those documents. As the documents may exist in either paper or digital form, the warrant specifically authorizes both physical and computer searches.[85] Police enter the residence and find a locked filing cabinet, shredded papers in a garbage bin, a letter with seemingly illegible handwriting, a sheet of paper with a coded message written on it, and a computer locked with a password. The law unquestionably permits them to break the lock on the cabinet, put the shredded papers back together, analyze the messy handwriting, and decipher the coded text. There is no reason why they should not also be able to defeat the computer’s password protection.[86] As the Supreme Court has held, the vast storage and connective capacities of digital devices raise unique concerns about overbroad searches and the potential need for minimizing search protocols.[87] But none of these concerns relate to encryption. Even if the warrant specified that police could search the computer only for a highly specific file type, created in a narrowly limited time frame, and located in a directory connected to a single user’s profile, they would surely be entitled to decrypt those files.

Nothing in search and seizure jurisprudence detracts from this conclusion. Police are permitted to use destructive methods, for example, to gain access to secured or hidden evidence, so long as the methods used are reasonable in the circumstances.[88] Courts have also held that people cannot create a reasonable expectation of privacy by selectively excluding police from premises generally open to the public.[89] And the Supreme Court emphasized in Fearon that the presence or absence of password security on mobile phones does not determine whether they attract a reasonable expectation of privacy.[90]

The issue in Fearon was whether police need a warrant to search mobile devices seized as an incident of arrest. The majority held that, subject to certain limitations, they do not. The dissenting justices would have found that warrants are required, save for exigent circumstances. Notwithstanding this division, the Court unanimously concluded that mobile devices attract a reasonable expectation of privacy and, most importantly for our purposes, that the failure to use a password does not extinguish that expectation.[91] It seems to us that the converse should also be true: a password cannot create a reasonable expectation of privacy.[92] Therefore, if police have “reasonably” invaded the user’s expectation of privacy in the device, they are entitled to try to defeat any passwords or encryption protecting it.

In her dissenting reasons, which advocated for the more privacy-protective outcome, Justice Karakatsanis specifically referred to the difficulties that police may have in accessing encrypted devices, asserting that this did not detract from the necessity of preauthorization.[93] This approach is inconsistent with the notion that encryption can create a legal barrier to police access. Though the specific question was not before the Court, Justice Karakatsanis clearly assumed that police would be entitled to defeat encryption after they obtained a warrant to search the device.

Of course, the jurisprudence reviewed above relates to state access to potentially encrypted data in individual search and seizure cases. Those objecting to exceptional access on privacy grounds often claim that by potentially making more personal data accessible to police, backdoors would significantly erode individual privacy in the aggregate.[94] They may have a point. But it does not necessarily follow that this would be a negative outcome. First, law enforcement agencies have argued forcefully that strong and ubiquitous encryption has diminished, or threatens to diminish, the net availability of evidence as compared to the analog era.[95] On this view, exceptional access would merely restore the previous privacy-security balance.

Secondly, even if exceptional access would give police more access to personal information than before, this would not necessarily be objectionable. The privacy-security trade-off is not always a zero sum game.[96] If police usually require warrants to obtain encrypted data, exceptional access could increase the amount of forensically relevant information without dramatically increasing the frequency of privacy invasions of the innocent.[97] If a technology can substantially enhance law enforcement’s capacity to detect and deter wrongdoing without substantially impinging on the liberties of law-abiding citizens, why shouldn’t the law permit it?

We need not definitively resolve this debate here, however. It should suffice to say that deciding whether exceptional access would unduly diminish aggregate privacy entails enormously difficult empirical and normative questions, questions that are better suited to informed, democratic deliberations by Parliament than to case-by-case adjudication by the courts.[98] Parliament’s capacity to gather expert information, gauge public preferences, and craft detailed regulations applying to a broad range of circumstances far exceeds that of the courts.[99] And as the failure of many attempts to adopt modern “lawful access” legislation has shown, the internet privacy lobby is strong and effective.[100] There is no evidence of any democratic deficit that might justify counter-majoritarian intervention by the judiciary.[101]

Lastly, even if courts held that exceptional access invaded a reasonable expectation of privacy, claimants would still have to establish that the invasion was unreasonable. But what would a “reasonable” exceptional access regime entail? The usual requirements of preauthorization on reasonable and probable grounds are irrelevant: in most cases police would have already satisfied them in demonstrating their lawful entitlement to the data. And where the law permits access on other grounds, what other decryption-related conditions would apply? Surely section 8 does not prohibit exceptional access altogether? If it did, the wireless carrier exceptional access regime in place since the 1990s would be unconstitutional, which no court has found. And if it does not, on what basis would a court decide that some regimes are permissible and some not? The purpose of exceptional access is to prevent digital evidence from “going dark” vis-à-vis law enforcement. If courts were to gauge a regime’s reasonableness by the extent to which it permits users to exempt their data from its scope, that purpose would be almost wholly defeated. Criminal and other malicious actors would have a much stronger incentive to exploit gaps in coverage than law-abiding users. And providers would likely face consumer and competitive pressure to use exempt forms of encryption.

In summary, section 8 of the Charter does not provide a suitable framework for deciding whether exceptional access unduly threatens privacy. While exceptional access does raise privacy concerns, these pale in comparison to the security and economic risks discussed in Part II.A(1), above. In any case, Parliament is well placed to address the privacy impacts of exceptional access regimes, should it (unwisely) seek to implement one.

3. Freedom of Expression and Section 2(b) of the Charter

Section 2(b) of the Charter states that everyone is entitled to the fundamental freedoms of “thought, belief, opinion and expression, including freedom of the press and other media of communication.” The section 2(b) guarantee is violated when the state, in purpose or effect, restricts a person’s ability to carry out expressive activities conveying meaning.[102] The provision also prevents the state from compelling persons to express themselves.[103]

Free expression-based challenges to exceptional access have been framed in two ways. The first is that computer code is expressive, and that requiring coding to implement backdoors violates the prohibition against compelled expression.[104] Second, some contend that by limiting people’s ability to communicate securely, backdoors infringe freedom of expression for encryption users generally.[105] In our view, neither argument is viable under section 2(b).

The first claim founders on the fact that the coding of backdoors is not expressive. Like human speech, computer code can be expressive in some contexts and purely functional in others.[106] Consider a vehicle capable of setting its speed in response to voice commands. The use of speech to adjust the vehicle’s speed is functional—it has no expressive character. Similarly, whether computer code is expressive or not depends on the context. Where coding produces an expressive outcome, such as website content or a video game, that outcome is protected by section 2(b).[107] In contrast, requiring an encryption provider to implement a backdoor is no different than requiring a manufacturer to comply with safety standards in designing a product.[108] While exceptional access would compel service providers to produce computer code, it would not compel expression.

The second argument—that exceptional access would restrict free expression for encryption users generally—reflects one of the policy concerns discussed above, i.e. that backdoors would compromise data security and consequently deter people from engaging in expressive communications and activities online. While this is a legitimate issue for Parliament to consider, the effect of exceptional access on freedom of expression is at best uncertain. Like other Charter rights, section 2(b) only protects individuals against state-imposed limitations on the implicated right or freedom.[109] Exceptional access does not empower the state to impose any sanctions or restrictions on speech. The most that can be said is that it could deter expression by those who fear that their communications could be decrypted and used for improper purposes by state or non-state actors. But given that non-regulated encryption technologies would still exist, as discussed above, it would be difficult to show that an exceptional access law dissuaded anyone concerned about this risk from expressing themselves. Though the Supreme Court has held that indirect chilling effects on expression can be inferred in obvious cases, there must be a causal connection between the impugned law and the exercise of the section 2(b) right.[110] The effect of exceptional access on encryption users’ freedom of expression is too remote and speculative to ground a section 2(b) claim.

B. Compelled Disclosure

1. Policy

Unlike exceptional access, a compelled disclosure regime would not require encryption providers to give law enforcement backdoor access to encrypted data. Instead, it would require encryption users to either hand over their keys or provide plaintext when police have independently established lawful authority to access the data.[111] If they fail to do so, users could be charged with an offence and face penal sanctions, such as imprisonment.[112]

On the one hand, compelled disclosure is a less comprehensive solution to the encryption problem than exceptional access. It would not allow police to decrypt surveillance intercepts in real time. Nor would it enable decryption in any case where the user was unidentified, deceased, incapacitated, at large, or otherwise unable or unwilling to provide the key. Sophisticated users might also be able to adopt countermeasures to either prevent encrypted data from being discovered on seized devices or make it difficult to prove that they possessed the key.[113]

On the other hand, compelled disclosure avoids many of the problems associated with exceptional access. It creates no security vulnerabilities; imposes no economic burden on industry, consumers, or police; and can be enforced without jurisdictional constraint. It thus promises to provide police with lawful access to a significantly greater quantum of encrypted data than is possible under current law. Further, in comparison to exceptional access, it greatly reduces the risk that law enforcement or national security agencies will engage in illegal, abusive, or discriminatory surveillance.[114] As detailed in Part II.B(3) below, to cohere with section 8 of the Charter, any compelled disclosure regime would presumptively require police to obtain a warrant based on probable grounds to believe that the suspect had the capacity to decrypt data that police were legally entitled to access.

Some have nonetheless objected to compelled disclosure on self-incrimination and privacy grounds.[115] As these interests are recognized in the jurisprudence interpreting sections 7 and 8 of the Charter, respectively, it makes sense to canvass these objections in the context of that case law. In brief, we argue that with appropriate limitations, compelled disclosure would not unduly compromise self-incrimination or privacy and should be upheld under the Charter.

2. Self-Incrimination and Section 7 of the Charter

Protection against self-incrimination is provided by sections 7, 11(c), and 13 of the Charter. Sections 11(c) and 13 provide such protection explicitly,[116] but only in the context of formal legal proceedings as they do not apply to the investigative stage of the criminal process.[117] The Supreme Court of Canada has read section 7, however, to protect against compelled self-incrimination outside of formal proceedings in some circumstances.[118] Section 7, which gives everyone the right not to be deprived of “life, liberty and security of the person ... except in accordance with the principles of fundamental justice,” does not speak of self-incrimination. But the Court has recognized the “self-incrimination principle” as a principle of fundamental justice and provided a framework for deciding whether pretrial compulsion violates it.[119] The principle is not a “free-standing legal protection,” however.[120] Even if the state compels a person to self-incriminate, courts will only grant protection after a flexible and pragmatic balancing of interests, including the importance of admitting relevant evidence to determine the truth.[121]

The first question is whether compelled decryption engages any self-incrimination interests. Non-testimonial evidence, such as evidence obtained outside of formal proceedings, may be grouped into two categories: linguistic (or communicative) and non-linguistic (or non-communicative).[122] Linguistic evidence includes documents and statements.[123] Non-linguistic evidence includes bodily samples, bodily impressions, and participation in identification lineups.[124] When the state compels linguistic evidence, the self-incrimination principle is clearly engaged and, depending on a balancing of interests, section 7 may be violated.[125] Self-incrimination is not as obviously implicated by the compulsion of non-linguistic evidence.

Before the Charter, courts rejected claims that compelled takings of non-linguistic evidence violated either common law or statutory protections against self-incrimination.[126] This position has shifted under the Charter, but only nominally. The Supreme Court has stated that non-linguistic compulsion can be self-incriminating.[127] The Court, however, has preferred to assess its constitutionality under section 8, considering the usual suite of factors inhering in the privacy versus law enforcement calculus.[128]

It is important to decide, therefore, whether compelled decryption should be characterized as linguistic or non-linguistic. If it is the former, self-incrimination concerns are fully engaged; if the latter, the focus should be on whether compelled decryption constitutes an unreasonable search or seizure under section 8 of the Charter.

While compelled decryption has attributes of both linguistic and non-linguistic compulsion, in our view it is mostly non-linguistic. It is linguistic in that it compels users to disclose that they possess the key or password to the device or that their biometric features are linked to it.[129] It can often be inferred from this information that the user had access to the encrypted device and is aware of its contents.

But the encryption key itself should be viewed as non-linguistic compulsion. Like all digital data, it does communicate information. But it does so in a manner categorically different from the kinds of linguistic acts traditionally enjoying self-incrimination protection.[130] Though encryption keys are, like language, expressed in alphanumeric form, unlike language they serve a purely mechanistic purpose.[131] Moreover, unlike language, they do not convey information about the material world or the user’s experience of it; they merely unlock the security feature that prevents communicative content from being understood.[132] As Reitinger puts it, while a key “might arguably have content, albeit arbitrary content, it has no necessary meaning.”[133] And while passwords and encryption keys may originate or reside in the user’s mind, they also have an independent, material existence analogous to a physical key.[134]
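The mechanical character of an encryption key described above can be illustrated with a short sketch. This is a hypothetical, toy illustration only (a simple XOR one-time pad, not any real-world cipher or any scheme discussed in the sources): the key is nothing but arbitrary random bytes with no necessary meaning, and applying it, in either direction, is a purely mechanical transformation.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Applying the key is purely mechanical: XOR each byte of the
    # data with the corresponding byte of the key.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at the usual place"

# The key is arbitrary random bytes; it conveys nothing about the
# material world or the user's experience of it.
key = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(plaintext, key)  # unintelligible without the key
recovered = xor_bytes(ciphertext, key)  # the same mechanical step reverses it

assert recovered == plaintext
```

Nothing in the key itself is communicative: any other random byte string of the same length would serve equally well, much as any correctly cut physical key would open the same lock.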

This situation is analogous to one considered in R. v. Orbanski, where the accused argued that requiring motorists to perform physical sobriety tests violated the self-incrimination principle.[135] The Supreme Court suggested that admitting evidence of such tests to prove impairment would likely violate the principle. But using the tests to give police grounds to make a breath sample demand would not, even though the laboratory analysis of the sample could be used to prove impairment.[136] In other words, though compelling the suspect’s participation in creating new communicative evidence (“my bodily movements indicate I am probably drunk”) caused self-incrimination, using that evidence to make pre-existing physical evidence (bodily samples) available to the state to prove the offence did not. Compelling decryption similarly creates new, communicative, self-incriminating evidence (“my ability to decrypt shows my connection to the data”), but the use of that evidence to make pre-existing physical evidence (the plaintext data) available to the state to prove the offence does not.

This distinction can be further illustrated by comparing the different ways that users could be compelled to decrypt. As mentioned, giving an alphanumeric password to police appears plainly linguistic and communicative. But other methods of achieving the same goal do not. Instead of compelling passwords, for example, the law could require suspects to provide encrypted data in intelligible form.[137] In this scenario, suspects are not required to disclose any communicative content that did not exist independent of the compulsion, beyond the implied statement that they can decrypt the data.[138] As the Supreme Court has held, the self-incrimination principle applies only to material “brought into existence by the exercise of compulsion by the state.”[139] Similarly, where the encrypted data is protected by biometric security, compelling users to provide the required biometric produces no linguistic or communicative content, beyond the implied statement described above.

It is difficult to fathom how the law could rationally differentiate between these scenarios. In each case, the nature of the compulsion (“assist in decrypting your information or be punished”) and the information revealed (plaintext and an admission of a capacity to decrypt) is the same. And since Canadian law treats analogous situations, such as the compelling of bodily samples and pre-existing documents, as implicating primarily privacy concerns and only nominally self-incrimination concerns, it should do the same for encryption keys, regardless of the method used to induce them. Viewed in this way, the only truly linguistic aspect of compelled decryption is the implied statement “I am able to decrypt the data” and its concomitant inferences.[140] The section 7 Charter analysis should be conducted with this in mind.

That analysis consists of two steps. In the first, claimants must establish that the type of compulsion at issue would compromise the purposes underlying the principle against self-incrimination.[141] If they are successful, the court must provide one of two remedies. In most cases, it will permit the compulsion, but grant use and derivative use immunity to any evidence obtained from it.[142] Alternatively, in rare instances, the compulsion may be barred altogether. We expand on each of these analytical steps below.

According to the Supreme Court, the principle against self-incrimination has two rationales: “to protect against unreliable confessions, and to protect against abuses of power by the state.”[143] In Fitzpatrick, it outlined four factors to be considered in deciding whether statutory compulsion conflicts with the principle: the existence of coercion, the existence of an adversarial relationship, the risk of unreliable confessions, and the risk of abuses of state power.[144]

Where individuals are compelled to assist with decryption, coercion is imposed by the threat of penal consequences. For self-incrimination purposes, however, the coercion relates only to suspects’ forced acknowledgment that they possess the key. No self-incrimination arises from any coercion associated with the state’s acquisition of the encrypted data itself. Though that data may have been obtained coercively (e.g. by taking personal information or property without consent), the coercion is justified if police adhered to the limitations of a constitutionally valid search power.[145] As discussed below in Part II.B(3), whether they did or did not is a question to be answered under section 8 of the Charter, not section 7.

The “adversarial relationship” factor works in a similar manner. When the encrypted data was created, the suspect and police were not adversaries.[146] Further, when the data is seized, there is no adversarial relationship beyond that arising in any case in which police suspect wrongdoing and exercise search and seizure powers to obtain evidence of it. An adversarial relationship producing self-incriminating information (i.e., the fact that the suspect could decrypt and the inferences arising therefrom) develops only after police compel the suspect to decrypt.

The risk of unreliable confessions, the third factor in the Fitzpatrick analysis, is not material to any aspect of compelled decryption. This factor has two iterations. First, compulsion by state actors may lead to false confessions that contribute to wrongful convictions.[147] This is not an issue for compelled decryption. Persons facing criminal punishment for failing to provide access to encrypted data have a strong incentive to provide the correct key, and providing the wrong one cannot contribute to a conviction for the offence under investigation.[148]

The second aspect of the reliability criterion relates to the state’s need for accurate information for non-incriminating purposes. Without assurances of evidentiary immunity, compelled persons will often be reluctant to reveal self-incriminating information.[149] Immunity is therefore granted as a quid pro quo to induce evidence required for regulatory purposes or the prosecution of others.[150] Individuals provided with such immunity are assured that their statements cannot be used against them, even indirectly, and are therefore more likely to tell the truth.

This incentive is unnecessary in the context of compelled decryption: persons with the ability to decrypt will either comply with or refuse orders to do so. If they provide a “false” key, this will be immediately apparent. If a person refuses to comply despite the consequences, and authorities still wish to obtain plaintext for a non-incriminating purpose, such as the prosecution of another, they can offer immunity on an individual basis.[151]

The final factor to be considered is the abuse of state power. Statutory compulsion raises the spectre of two types of abuse. First, investigators clothed with inquisitorial powers may apply undue pressure to induce recalcitrant suspects to self-incriminate.[152] Even if this does not produce unreliable confessions, some means of inducing cooperation may simply be inhumane.[153]

Once again, this concern does not arise with compelled decryption. The law would not require suspects to provide any information other than the key or plaintext. Unlike the duty to provide accident reports at issue in White, there is little danger that the presence of police could induce suspects to “provide a more extensive statement to police than legally required” under the legislation.[154] Suspects would either provide the correct key or not. If not, they would face potential punishment after being afforded the usual due process protections granted to persons charged with offences. While it is always possible that police may overreach in questioning contumacious suspects, a statutory obligation to decrypt is likely to diminish this risk, not enhance it. Under the current law, police anxious to decrypt data in the investigation of a serious crime may be tempted to use improper interrogation techniques to obtain the key. In a high proportion of cases, a statutory obligation to decrypt would obviate the need for such measures.

The second type of abuse involves the use of statutory compulsion to enable inquisitorial fishing expeditions.[155] This could occur, for example, if police used decryption orders to identify the one person out of many capable of unlocking an encrypted device. Ultimately, however, this raises concerns about privacy, not self-incrimination.[156] And as elaborated below, it is easy to design a compelled decryption regime that would all but eliminate this risk.[157]

The four factors from Fitzpatrick demonstrate, then, that compelled decryption raises only minimal concerns about self-incrimination. Forcing people to acknowledge possession of a decryption key is undoubtedly self-incriminatory: it requires them to say or do something that could assist the state in proving its case.[158] Jurists holding deontologically-oriented conceptions of self-incrimination, who typically view almost any form of compulsion to assist the state as inherently harmful,[159] would likely see this as sufficient to trigger section 7 protection. But those with more instrumentalist approaches might ask: “what is the real harm in compelling a password?”[160] In the absence of concerns for reliability or inhumane methods, why shouldn’t the state be able to use compelled evidence of key possession to prove its case?[161]

We do not take sides on this debate in this article, however. Instead, we advance a proposal that we hope may be broadly acceptable. Assuming that forcing suspects to admit key possession violates the self-incrimination principle, courts should grant use and derivative use immunity to this fact. Police would be permitted, however, to compel decryption and plaintext information would be admissible.[162] The state would accordingly have access to the evidence it would have lawfully been entitled to but for the encryption, and no more. Depending on one’s views, denying the state evidence of key possession may not be necessary to protect against unjustified self-incrimination. But it would be unlikely to hamper many investigations or prosecutions. It is rarely difficult to prove that an accused had access to a device through other evidence.[163]

Interpreting section 7 to prohibit compelled decryption or provide evidentiary immunity to plaintext, on the other hand, would substantially constrain law enforcement without preventing any wrongful self-incrimination.[164] As the Supreme Court has held, even when the self-incrimination principle is violated, prohibiting compulsion outright, as opposed to granting evidentiary immunity, is justified only when the state’s predominant purpose is to obtain self-incriminating evidence, rather than some other legitimate public purpose.[165] In the case of compelled decryption, that purpose is obvious: helping police render intelligible information that they are legally entitled to possess. Further, since the plaintext was created without state-imposed compulsion, immunizing it would do nothing to protect against self-incrimination. Nor would it help to advance any important societal interests like inducing helpful evidence for other purposes.[166] Instead, it would simply serve as a shield for wrongdoing.

3. Privacy and Section 8 of the Charter

As we have seen, the act of giving law enforcement access to encrypted information has two components: the physical act of decryption and the communicative statement arising from that act. And as we have contended, it makes little sense to distinguish between the compulsion of plaintext information, biometric identifiers, passwords, and keys. In each case, the law would compel the suspect to transform specific data from ciphertext into plaintext. Consequently, each situation should be treated as a physical act of decryption. The question then becomes whether section 8 of the Charter provides any protection for this act beyond that provided by section 7 for the communicative statement.

As discussed in Part II.A(2), above, section 8 claimants must first show that state agents intruded onto their reasonable expectation of privacy. This can be a notoriously difficult inquiry involving a myriad of considerations,[167] including whether the information revealed invades the “biographical core of personal information which individuals in a free and democratic society would wish to maintain and control from dissemination to the state.”[168] Such information typically consists of “intimate details” of a person’s “lifestyle and personal choices,”[169] such as “intimate relations or political or religious opinions.”[170]

In many cases the encrypted data at issue would have attracted a reasonable expectation of privacy. Police would consequently have been required to seize it in accordance with section 8 of the Charter. But whether or not police invaded a reasonable expectation of privacy in obtaining the data, the question remains whether they do so when they compel a person to decrypt it.[171]

In our view, they do. As discussed, an encryption key is non-communicative and thus does not itself reveal any intimate, personal information. Nor does the decision to encrypt personal information necessarily create a reasonable expectation of privacy. We consequently argued in Part II.A(2) above that requiring encryption providers to facilitate law enforcement’s lawful access to plaintext did not engage section 8 of the Charter.

But compelling encryption users to assist in decrypting data lawfully acquired by police invades privacy in a way that exceptional access does not. Exceptional access merely allows police who have already established their entitlement to specific data to obtain it in plaintext form. The police receive only information that would otherwise be available had the data never been encrypted. As discussed in Part II.B(2) above, however, compelling users to decrypt potentially provides police with new information: the fact that the user can decrypt. In addition to implicating self-incrimination, this information also raises privacy concerns under section 8.

In R. v. Spencer (2014), the Supreme Court concluded that a police request for an anonymous internet user’s identity invaded a reasonable expectation of privacy because it allowed them to connect him to specific online activity.[172] Reasonable privacy expectations are similarly invaded when suspects are compelled to provide identifying information like fingerprints, bodily impressions, and bodily samples.[173] Unlike compelled decryption, these forms of compulsion implicate section 8 concerns for bodily integrity. Nonetheless, in determining their constitutionality under section 8, the Supreme Court has also taken note of the risk that the state may misuse intimate information gleaned from the identifying evidence.[174]

Consider what could happen if compelled decryption were not considered to invade a reasonable expectation of privacy. It would not be a “search or seizure” and could consequently never violate section 8 of the Charter. Absent some other form of regulation, the state would be free to demand key disclosures from suspects without any evidence connecting them to the encrypted data.[175] Many innocent people could face intrusive decryption demands, demands that could generate reasonable fears of arrest, conviction, and imprisonment for failing to provide the “correct” key. In some cases, police might also exercise their discretion in discriminatory ways, perhaps by deploying conscious or unconscious stereotypes to disproportionately target members of racial, ethnic, or religious minorities.[176]

Recognizing that compelled decryption invades a reasonable expectation of privacy, on the other hand, would allow courts to regulate the practice through section 8’s reasonableness requirement. Presumptively, this would require the compulsion to be preauthorized by a judge or justice, or other impartial arbiter capable of acting judicially, on the standard of reasonable and probable grounds.[177] In other words, police would have to obtain a warrant[178] based on solid, objective grounds to believe that the suspect has the capacity to access encrypted data that police are lawfully entitled to possess.[179] Obtaining a warrant would prevent the police from engaging in fishing expeditions and diminish the number of innocent people subject to such demands.[180]

So constrained, compelled decryption would be reasonable under section 8 of the Charter. Where police have strong evidence that a suspect has access to encrypted data, and have independently established their lawful authority to obtain it, there is no compelling privacy-related justification to forbid compelled decryption. If section 8 permits police to defeat encryption by other means, it should also permit them to do so by compelling people to decrypt their data.[181]

Conclusion

The trust and security that encryption provides is critical to human flourishing in the digital age. But encryption also has the potential to provide an impenetrable shield for wrongdoing and alter the consensus balance of interests embodied in constitutional criminal procedure. Ideally, encryption should not prevent police from making sense of information that they are lawfully entitled to possess.

Yet, governments should be wary of imposing solutions that would diminish encryption’s many social benefits. Compelling technology companies to install backdoors risks exposing individual, government, and commercial data to malicious actors. At least for the present time, exceptional access presents too great a risk to the security of digital networks and data to justify its benefits to law enforcement. It also faces daunting logistical and jurisdictional hurdles likely to thwart its effectiveness and generate substantial social costs. Ultimately, however, we think it is up to Parliament to decide whether to implement a comprehensive exceptional access regime. While we advise strongly against it, there is likely no constitutional impediment to doing so.

Though only a partial solution, compelled decryption promises to enhance law enforcement’s ability to lawfully access digital data while avoiding the dangers of exceptional access. If Parliament does wish to act, it would be wise to consider it. While compelled decryption does implicate the Charter’s self-incrimination and privacy protections, it should not be difficult to construct a constitutionally compliant regime. In brief, such a regime would:

  1. save for exigent circumstances, require judicial preauthorization for compulsion based on reasonable and probable grounds that the individual can decrypt data that police are lawfully entitled to possess; and

  2. exclude evidence that an accused was compelled to decrypt the information in question, while allowing the plaintext to be admitted.

These restrictions would greatly diminish the risk that police might use compelled decryption in arbitrary, discriminatory, or otherwise abusive ways. To some, it may still seem intuitively wrong for the state to force individuals to give up their passwords, keys, or biometric identifiers. As long as police have independently established their right to the encrypted data and shown that the person to be compelled has the ability to decrypt it, however, this is a small price to pay to help preserve law enforcement’s capacity to detect and deter crime.