
Privacy-preserving text mining can help us all get more from data

Data is the new currency. Yet, whilst companies consider it one of their biggest assets, citizens are increasingly concerned about who holds their data and what those holders know, or have deduced, about them.

One of the reasons data is so valuable is that it is the fuel of machine learning algorithms, which dig into it to discover patterns and predict outcomes, such as which products you’re likely to buy, the best route to your destination, or whether a document is relevant to you. The more data you have, the better your model and the higher the value you can deliver. But more data also means a higher risk of uncovering relationships that were never meant to be released (for example, identifying individual people, or revealing broad trends about a sub-population). This tension has dominated major data set releases in recent years, starting with a catastrophic 2006 release of “anonymized” search logs, a competition involving movie preferences, and an announced-but-then-cancelled news-feed release last year.

Data sets are extremely useful resources for all sorts of applications, but it has become a challenge to release one that is both useful and minimizes privacy risks. This holds true not only for public data sets, but for any internal data set that contains personally identifiable information, such as a client database.

We recently studied this problem in the scenario of releasing counts of substrings of textual data, known as n-gram tables. For the sentence "rose is a rose is a rose" and n=2, this table looks like:

rose is   2
is a      2
a rose    2
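
As a quick illustration of how such a table is built (a minimal Python sketch, not code from the paper), the counts above can be computed like this:

```python
from collections import Counter

def ngram_counts(tokens, n=2):
    """Count every contiguous n-gram (as a tuple of tokens) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

for gram, count in ngram_counts("rose is a rose is a rose".split()).items():
    print(" ".join(gram), count)
# rose is 2
# is a 2
# a rose 2
```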

Such n-grams are most commonly used as input for machine learning algorithms that process text. Even if such a data set doesn’t contain personal data, it still presents a risk if there are copyright limitations, or if it holds valuable information (such as the amount a contract is worth) that should not be released in its larger context (for example, together with the names of the involved parties). The question is: can the original document be reconstructed from such an n-gram table? In the example above, after "rose is" the only possible continuation is "is a", and overall there is only one reconstruction that uses each 2-gram exactly as often as it occurs in the sentence (twice in every case here). Does this hold in general?
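To make the "forced continuation" idea concrete, here is a tiny helper (again just a sketch, with names of my own choosing) that lists which 2-grams can follow a given one according to the table:

```python
def continuations(gram, counts):
    """List the n-grams that can follow `gram`: those starting with its last token."""
    return [g for g in counts if g[0] == gram[-1]]

counts = {("rose", "is"): 2, ("is", "a"): 2, ("a", "rose"): 2}
print(continuations(("rose", "is"), counts))   # [('is', 'a')] -- the only possible continuation
```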

To study this formally, we mapped the table onto what is called a de Bruijn graph, a popular data structure in bioinformatics. Below is an example for the modified sequence "$ a rose is a rose rose is a rose #" (with start and end markers added):

[Figure: de Bruijn graph of the 2-grams in the modified example sequence]

The edges are the 2-grams, each labelled with the number of times it occurs in the original sequence. Based on this graph, any reconstruction of the document corresponds to a path through the graph that uses each edge exactly the number of times indicated by its count. This is known as an Eulerian path, after a famous problem solved by the Swiss mathematician Leonhard Euler. The number of such paths has been studied in graph theory, where the result is known as the BEST theorem. Suffice it to say that their number grows incredibly quickly (the formula involves a product of factorials of the node degrees). So, in general, it is impossible to know which of all the possible reconstructions corresponds to the original document.
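A small sketch (my own, not the paper's code) that builds this edge multiset from the 2-gram table and brute-forces all Eulerian paths between the markers shows the ambiguity directly:

```python
from collections import Counter

def eulerian_paths(edges, start, end):
    """Enumerate every token sequence that uses each edge exactly its counted number of times."""
    def walk(node, seq, remaining):
        if not remaining:
            if node == end:
                yield seq
            return
        for (a, b) in list(remaining):
            if a == node:
                nxt = dict(remaining)
                nxt[(a, b)] -= 1
                if nxt[(a, b)] == 0:
                    del nxt[(a, b)]
                yield from walk(b, seq + [b], nxt)
    yield from walk(start, [start], dict(edges))

tokens = "$ a rose is a rose rose is a rose #".split()
edges = Counter(zip(tokens, tokens[1:]))   # the 2-gram table, viewed as an edge multiset
paths = list(eulerian_paths(edges, "$", "#"))
print(len(paths))                          # 3 distinct reconstructions, even for this tiny sequence
for p in paths:
    print(" ".join(p))
```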

But is it still possible to obtain fragments longer than n that we know with absolute certainty occurred in the original? For this, we found two simple rules for concatenating edges of the de Bruijn graph. Applying them iteratively produces a graph in which every edge (which can now be longer than n) occurs in all possible Eulerian paths, and therefore has to occur in the original document, and in which no further concatenation with this property is possible. In our experiments with these rules, for n=5, we were able to identify blocks of average length 55 (roughly 5 lines of text), and of length up to 650 (a whole page).
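The exact rules are in the paper; as a simplified illustration of the underlying idea only (hypothetical names, not the paper's algorithm), the sketch below merges fragments through nodes that every Eulerian path is forced to pass straight through:

```python
from collections import Counter

def merge_forced(fragments, protected=("$", "#")):
    """Merge fragments through nodes with exactly one incoming and one outgoing
    fragment of equal count: every Eulerian path must pass straight through such
    a node, so the merged fragment is guaranteed to occur in the original.
    A simplified stand-in for the paper's two concatenation rules."""
    frags = dict(fragments)   # {token_tuple: count}; a fragment runs from tuple[0] to tuple[-1]
    while True:
        merged = False
        nodes = {f[0] for f in frags} | {f[-1] for f in frags}
        for v in nodes:
            if v in protected:
                continue
            ins = [f for f in frags if f[-1] == v]
            outs = [f for f in frags if f[0] == v]
            if (len(ins) == 1 and len(outs) == 1 and ins[0] != outs[0]
                    and frags[ins[0]] == frags[outs[0]]):
                count = frags.pop(ins[0])
                frags.pop(outs[0])
                frags[ins[0] + outs[0][1:]] = count
                merged = True
                break
        if not merged:
            return frags

tokens = "$ a rose is a rose rose is a rose #".split()
for fragment, count in merge_forced(Counter(zip(tokens, tokens[1:]))).items():
    print(" ".join(fragment), count)
# "rose is a" (count 2) emerges as a fragment longer than n=2 that must occur in the original
```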

So, releasing n-gram tables is pretty bad if you want to make sure that nobody can reconstruct the original document. Worse, we also show that a popular technique, removing low-frequency n-grams, doesn't help much. But we do introduce a method of our own that makes it much harder to reconstruct the documents while maintaining good utility of the corpus (measured by how well the data set performs as a language model). This method consists of adding strategically chosen n-grams: fictitious n-grams that do not exist in the original documents. By choosing them randomly, in such a way that they break our reconstruction algorithm, we show that even a small number of them can prevent faithful reconstruction without a breakdown of the data set's capacity to serve as a language model.
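Purely to illustrate the shape of such a defence (the naive random selection below is my own placeholder, not the strategic choice described in the paper), injecting fictitious bigrams into the table could look like this:

```python
import random
from collections import Counter

def add_decoy_bigrams(counts, k, seed=0):
    """Add k fictitious bigrams built from the existing vocabulary.
    Naive illustration only: the paper's selection is strategic, aimed at
    breaking the reconstruction rules with as few additions as possible."""
    rng = random.Random(seed)
    vocab = sorted({tok for gram in counts for tok in gram} - {"$", "#"})
    noisy = Counter(counts)
    added = 0
    while added < k:
        fake = (rng.choice(vocab), rng.choice(vocab))
        if fake not in counts:        # only add bigrams absent from the real table
            noisy[fake] += 1
            added += 1
    return noisy

tokens = "$ a rose is a rose rose is a rose #".split()
print(add_decoy_bigrams(Counter(zip(tokens, tokens[1:])), k=2))
```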

More details in: Matthias Gallé and Matías Tealdi, "Reconstructing Textual Documents from n-grams," in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 329-338. ACM, 2015. Available here.

Watch the video recording of Matthias presenting at SIGKDD on YouTube