Designing for critical privacy and data protection systems (I)

Mind the factors

Lately, I’ve been doing some work in the area of cryptography and enterprise-scale data protection and privacy. And it hit me: things are a lot different than they used to be, and they are changing fast. The world is moving toward a more secure environment, with stronger data protection and privacy requirements, and these changes seem to be widely adopted. Somehow, I am happy about it. Somehow, I am worried.

Before I go a little deeper into how to design for critical privacy and data protection (DP) systems, let me enumerate three of the factors responsible for the changes we are witnessing:

  • The evolving worldwide regulation and technology adoption started by EU Regulation 2016/679 (a.k.a. the GDPR)
  • The unimaginable progress we are making in big data analysis and ML
  • The digitalization of nation-states’ citizen-facing services (things like e-voting, which I strongly advocate against)

I don’t want to cover in depth how I see each factor influencing the privacy and DP landscape, but as we go on, I just want you to keep these three factors in mind. Mind the factors.

Emerging technologies

Talking about every concept and technology that is gaining momentum in this context is absolutely impossible. So, I chose to talk about two of the most challenging ones. Or, at least, the two that I perceive as the most challenging: this is going to be a two-episode series about Differential Privacy and Homomorphic Encryption.

Differential Privacy. ε-Differential Privacy.

Differential Privacy, in a nutshell, seen from a space-station view, is a mathematical way of ensuring that reconstruction attacks are not possible, now or at any time in the future.

Mathematical what? Reconstruct what? Time what? Let me give you a textbook example:

Assume we know the following about a group of people:

  • There are 7 people, with a median age of 30 and a mean age of 38.
  • 4 are female, with a median age of 30 and a mean age of 33.5.
  • 4 love sugar, with a median age of 51 and a mean age of 48.5.
  • 3 of the sugar lovers are female, with a median age of 36 and a mean age of 36.6.

Challenge: give me the age, sex, sugar preference, and marital status of each individual.

Solution:

1. 8, female, sugar, not married

2. 18, male, no sugar, not married

3. 24, female, no sugar, not married

4. 30, male, no sugar, married

5. 36, female, sugar, married

6. 66, female, sugar, married

7. 84, male, sugar, married
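
Before we dissect the attack, here is a quick sanity check, in Python (the layout and variable names are mine, just for illustration), that the solution above really does reproduce the published statistics:

    from statistics import mean, median

    # (age, sex, loves_sugar) for each individual, taken from the solution above.
    people = [
        (8, "F", True), (18, "M", False), (24, "F", False), (30, "M", False),
        (36, "F", True), (66, "F", True), (84, "M", True),
    ]

    ages = [a for a, _, _ in people]
    females = [a for a, s, _ in people if s == "F"]
    sugar = [a for a, _, g in people if g]
    female_sugar = [a for a, s, g in people if s == "F" and g]

    print(median(ages), mean(ages))                            # -> 30, 38
    print(median(females), mean(females))                      # -> 30, 33.5
    print(median(sugar), mean(sugar))                          # -> 51, 48.5
    print(median(female_sugar), round(mean(female_sugar), 2))  # -> 36, 36.67 (published as 36.6)

Note that marital status never appears in the published statistics; it gets pinned down purely by plausibility, as we’ll see below.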

Basically, a reconstruction attack for such a scenario involves finding peaks of plausibility in a plausibility-versus-possibility plot. It goes something like this:

You can start by brute-forcing all the combinations for the seven participants. Considering all the features except age (so: gender, sugar preference, marital status), each person has 2^3 = 8 possible combinations, which gives 8^7 = 2,097,152 possibilities in total, and all of them have roughly the same plausibility. Plot possibility against plausibility and you get an essentially flat line.

See, there is no peak in plausibility yet. But once we factor in age, things change. For example, although it is possible for a person to be 150 years old, it is very implausible. Furthermore, it is more plausible for an older individual to be married than for a younger one, and so on. So, once we factor in age plausibility, the flat line develops structure.

Now there is a peak of plausibility, and that peak is most likely our solution. But if our published statistics are a little skewed, say, we introduce just enough noise into them that the impact on science is minimal, and we eliminate the unnecessary ones (if that can be done), then a reconstruction attack becomes almost impossible. The purpose of the noise is to flatten that plausibility plot as much as possible.

Now, to be fair, in our stretched-out textbook example there is no need for the brute-force plausibility plot at all. Because the mean and median are published for each subset of results, you can simply write down a deterministic system of equations and solve for the actual solution.
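
For the curious, here is what that system looks like (my own notation, sketched out: a_1 ≤ … ≤ a_7 are the sorted ages, F the set of female indices, S the set of sugar-lover indices):

    a_4 = 30, \qquad \sum_{i=1}^{7} a_i = 7 \times 38 = 266
    \mathrm{med}\{a_i : i \in F\} = 30, \qquad \sum_{i \in F} a_i = 4 \times 33.5 = 134
    \mathrm{med}\{a_i : i \in S\} = 51, \qquad \sum_{i \in S} a_i = 4 \times 48.5 = 194
    \mathrm{med}\{a_i : i \in F \cap S\} = 36, \qquad \sum_{i \in F \cap S} a_i = 110 \;(= 3 \times 36.\overline{6})

Enumerate the possible memberships of F and S, and these linear constraints leave essentially one consistent assignment of ages.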

Imagine that you, as an attacker, possess some external knowledge about your target. This external source may be a historical publication over the same set of data, or a different data source altogether. Either way, it makes your reconstruction job easier.

ε-Differential Privacy systems have a way of defining privacy loss (i.e. a quantitative measure of the increase in the peaks of the plausibility plot). These systems also define a privacy budget, and this is one of the real treasures of this math: you can make sure that, over time, you are not making reconstruction attacks any easier.
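
To put an actual formula behind those words (this is the textbook definition, not something of mine): a randomized mechanism M is ε-differentially private if, for every pair of databases D and D' that differ in a single individual, and every set of outputs O,

    \Pr[M(D) \in O] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in O]

The privacy loss of observing a concrete output o is the log-ratio \ln(\Pr[M(D) = o] / \Pr[M(D') = o]), which the definition caps at ε. A small ε means a flat plausibility plot; a large ε means peaks.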

This stuff gained momentum as the US Census Bureau got the word out that they are using it, and also encouraged people to ask the enterprises that own their data to use it.

So, as a software architect, how do I get ready for this?

First, at the moment, there are no out-of-the-box solutions that can give you ε-Differential Privacy over your data. If this is a requirement for you, you are most probably going to work with data scientists or people with math degrees who will tell you exactly what the measure of privacy loss is for the features in your data. At least, that is what I did 😊 Once those measures are defined, you have to be ready to implement them.

There is a common pattern you can adopt: a proxy, a privacy guard, standing between the data consumers and the raw data, returning only CLEAN answers. A minimal sketch follows after the next paragraph.

You are smart enough to realize that CLEAN data here means data with some acceptable noise introduced, such that the privacy budget is not greatly, if at all, impacted.
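
Here is that sketch, using the classic Laplace mechanism. To be clear, everything in it (the PrivacyGuard class, its query method, the numbers) is my own illustrative invention, not a standard API:

    import numpy as np

    class PrivacyGuard:
        """Proxy between data consumers and the raw store; only noisy answers leave."""

        def __init__(self, total_budget: float):
            self.total_budget = total_budget  # total epsilon we may ever spend
            self.spent = 0.0                  # epsilon consumed so far

        def query(self, true_answer: float, sensitivity: float, epsilon: float) -> float:
            # Refuse to answer once the budget would be exceeded.
            if self.spent + epsilon > self.total_budget:
                raise RuntimeError("privacy budget exhausted")
            self.spent += epsilon
            # Laplace noise with scale = sensitivity / epsilon makes this
            # single answer epsilon-differentially private.
            return true_answer + np.random.laplace(0.0, sensitivity / epsilon)

    # Example: release a noisy mean age. With ages clipped to [0, 125] and
    # 7 individuals, one person can shift the mean by at most 125 / 7.
    guard = PrivacyGuard(total_budget=1.0)
    print(guard.query(true_answer=38.0, sensitivity=125 / 7, epsilon=0.1))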

Challenges

If it were easy, everyone would do it. But it’s not, so suck it.

First, you and your team must be ready to understand what a highly trained mathematician is talking about. Get resources for that.

Second, as an architect, you have to be careful to have formal definitions throughout your applications for the two concepts enumerated above: the privacy budget and the privacy loss.
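
The backbone of those definitions is the sequential composition property of differential privacy: if you answer k queries with privacy losses ε_1, …, ε_k, the combined release is at worst (ε_1 + … + ε_k)-differentially private. So the invariant your application has to enforce is simply

    \varepsilon_1 + \varepsilon_2 + \dots + \varepsilon_k \;\le\; \varepsilon_{\mathrm{budget}}

which is exactly what the spent counter in the guard sketch above is doing.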

Third, in both my experience and in the ongoing textbook research, the database must contain the absolute raw data, including historical data if needed. This poses another security challenge: you don’t want to get fancy with complicated math to protect your data while staying vulnerable to a direct database attack. Something stupid like an injection attack has no place here. You can see now that the sketch above is oversimplified: it lacks a ton of proxies, security controls, DMZs and whatnot. Don’t make the same mistake I did and try to hide some data from the privacy guard; your life will be a misery.

Fourth, be extremely careful about documenting this. It is not rare for software ecosystems to change purpose, and they tend to get used where they are not supposed to be. It may happen that such an ecosystem, with time, gets to be used directly for scientific research, from behind the privacy guard. That might not be acceptable. You know, scientists don’t like noisy data. Or so I’ve heard; I’m not a scientist.

That’s all for now.

In the second part we’re going to talk a little bit about the time I used Homomorphic Encryption. A mother****ing monster for me.

Stay safe!