Interpreting “Non-Personal Data” Under Alberta’s Protection of Privacy Act

Navigating Ambiguity, Purpose, and Practical Consequences

Introduction

Alberta’s Protection of Privacy Act (POPA) introduces “non-personal data” (NPD) as a new information category, marking a significant departure from the former binary framework under the Freedom of Information and Protection of Privacy Act, which distinguished only between personal information and all other information. While this innovation is intended to enable privacy-protective data use and sharing, the statutory definition of NPD presents substantial interpretive challenges. In the absence of guidance from the Office of the Information and Privacy Commissioner (OIPC) or the courts, public bodies and their legal advisors are left to navigate this uncertainty largely on their own.

This article examines the interpretive difficulties inherent in POPA’s definition of NPD, evaluates competing interpretive approaches, and highlights the practical and legal consequences that flow from adopting either a broad or narrow reading of the definition.

Interpretation Challenges

Section 1(n) of POPA provides the following definition for NPD: “non-personal data means data, including data derived from personal information, that has been generated, modified or anonymized so that it does not identify any individual, and includes synthetic data and any other type of non-personal data identified in the regulations.”

The definition's multiple commas permit alternative groupings of its clauses, and it embeds several undefined terms: “data”, “generated”, “modified”, and “anonymized”. Compounding this complexity is the legislature’s use of both “means” and “includes”, which in statutory interpretation traditionally signal a closed class and an open class, respectively.

Competing Interpretations: Broad and Narrow

Practitioners are faced with two principal interpretations. A broad interpretation treats as NPD any data that has been generated, modified, or anonymized so that it does not identify an individual. Under this view, any de-identified dataset would fall within POPA’s non-personal data regime.

The narrow interpretation, by contrast, reads the definition as encompassing three specific classes:

  1. data derived from personal information that has been generated, modified, or anonymized so that it no longer identifies an individual;
  2. synthetic data; and
  3. other categories prescribed by regulation.

Interpretive Guideposts

The broader context of how NPD is handled in POPA provides some interpretive guideposts. POPA permits the creation of NPD for specified purposes such as research, planning, and program evaluation, subject to extensive prescribed requirements. These include maintaining a creation record, implementing quality assurance processes, and conducting a pre-disclosure re-identification risk assessment. Notably, the collection of personal information is not subject to comparable requirements.

Once created, NPD may be used by the public body for any purpose and may be disclosed freely to other public bodies. Disclosure to non-public bodies is permitted only under strict contractual conditions, including security safeguards, prohibitions on re-identification, and obligations to destroy the NPD. This contrast between personal information, which may be disclosed to a non-public body in the specified circumstances set out in Section 13 of POPA, and NPD, which is tightly controlled, underscores the enhanced safeguarding of NPD and the importance of correctly identifying when data crosses the threshold into that category.

Additional context is provided in POPA’s Ministerial Regulation, which requires a public body to implement policies and procedures related to NPD only if the “public body will create, use or disclose NPD.” This conditional language suggests that public bodies may operate without engaging NPD at all, which has interpretive significance: under a broad interpretation, NPD would be created so routinely that the condition would rarely, if ever, go unmet.

Legislative Intent and Purpose

Public statements by the Minister of Technology and Innovation emphasize strong privacy protection and may suggest an expansive conception of NPD, while the Minister’s comments concerning the ability to anonymize or de-identify records for research and analysis may point toward a narrower understanding.

A purposive analysis suggests that the NPD regime may be intended to address the privacy risks arising from data matching to create data derived from personal information and secondary uses of personal information beyond the scope of original collection notices. Transforming data derived from personal information into NPD relaxes the restrictions on use and disclosure, while still imposing safeguards at the point of creation.

Consequences and the Presumption Against Absurdity

Applying the presumption against absurdity exposes weaknesses in the broad interpretation of NPD. For example, treating records redacted under Section 20 of the Access to Information Act (ATIA) in response to access requests as NPD could frustrate the objectives of ATIA: once identifiers were removed, disclosure of the records would effectively be barred unless strict agreements were put in place. Such outcomes appear inconsistent with legislative harmony and longstanding access principles.

The strict provisions governing disclosure of NPD to a non-public body could also perversely incentivize the continued use and disclosure of personal information, an outcome that would appear contrary to the objectives of POPA.

The broad interpretation also offends the principle that each word of an enactment must be given meaning: on that reading, the definition could simply have been drafted as “NPD means data that has been generated, modified or anonymized so that it does not identify any individual”, rendering the references to data derived from personal information and synthetic data redundant.

Data Matching, Synthetic Data, and Stronger Justifications

Examples involving data matching illustrate where POPA’s NPD regime appears most coherent. When datasets collected for different purposes are matched to generate new insights, POPA tightly restricts use and disclosure of the resulting “data derived from personal information”. Converting that output into NPD, through de-identification or synthetic techniques, mitigates privacy risks and enables broader analytical use and interagency sharing.

Synthetic data aligns closely with the prescribed quality assurance and re-identification risk assessment requirements, justifying the regulatory burden by reference to genuine privacy risk and policy objectives.

Conclusion: Case-by-Case Judgment in an Unsettled Landscape

Neither the broad nor narrow interpretation of NPD under POPA is free of issues. A broad reading risks undermining access rights and creating privacy-eroding incentives, while a narrow reading may limit the legislature’s intent to enable data use within the public sector in a manner that protects personal privacy.

Until the legislature, courts or the OIPC provide further guidance, public bodies may assess each situation on a case-by-case basis by weighing statutory text, legislative purpose, and practical consequences.