
[Google Development] A Fast Method For Auditing Privacy Protection In Machine Learning Systems


Privacy

3 main points
✔️ Auditing whether a machine learning system is privacy-preserving used to require "manipulating a single piece of data in a dataset" and then performing "model training" hundreds of times
✔️ Proposes a method that audits the privacy of a machine learning system with only one model training run
✔️ Theoretically demonstrates and experimentally evaluates that the audit can manipulate not just one but multiple data points in the dataset at once

Privacy Auditing with One (1) Training Run
written by Thomas Steinke, Milad Nasr, Matthew Jagielski
(Submitted on 15 May 2023)
Comments: Published on arxiv.

Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR); Data Structures and Algorithms (cs.DS)

code:  

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

The success of large-scale language models has raised expectations for general-purpose artificial intelligence that goes beyond them, but apart from the technical question of whether such artificial intelligence is possible, social questions have also arisen.

For example, there is the issue of copyright: AI can instantly draw a picture you specify, but that picture is based on pictures that other people drew, and those pictures are copyrighted. Beyond copyright, various other rights issues are emerging in society.

Among the social issues surrounding AI, the one addressed in this paper is privacy: the data that AI learns from contains not only copyrighted works but also personal information. For example, aggregating data from hospital patients could be used to improve the accuracy of disease diagnosis, but since that data almost certainly contains personal information, it is difficult to use from a privacy standpoint. If a machine learning system trained on customer data were attacked and leaked personal information, it would make the news as a corporate scandal.

Larger companies are more likely to be questioned about their stance on such social issues and are more aware of them, so privacy protection is a focus for companies such as Apple and Google.

This paper is Google's solution to the AI x privacy problem through the power of technology.

At the intersection of statistical analysis and privacy, the concept of "differential privacy" was proposed in 2006. Differential privacy is a mathematical definition of what it means for privacy to be protected. It has two parameters, ε and δ, whose values represent the level of privacy protection: the smaller ε and δ are, the stronger the protection. An algorithm that keeps ε and δ small is therefore a privacy-preserving algorithm.

Mathematical analysis of an algorithm provides upper bounds for its ε and δ. However, the analysis alone cannot verify whether the analysis itself contains an error. Therefore, to "audit" whether the analysis is correct, a method called "differential privacy auditing" has been proposed, which experimentally estimates the range that ε and δ can take.

One problem with the conventional auditing method is that the possible ranges of ε and δ cannot be estimated without repeating the training many times, each time with a single data point either included in or excluded from the dataset.

This paper therefore proposes a method that can estimate the possible ranges of ε and δ from only one training run. It takes advantage of parallelism: multiple training data points can be added or removed independently.

Because the proposed method minimizes the assumptions it places on the audited algorithm, it is a generic method applicable in both black-box and white-box auditing settings.

The authors show that when DP-SGD (an algorithm satisfying differential privacy) is audited with the proposed method, an experimental lower bound on the privacy parameter ε can be computed from a single model training run.

Differential privacy

When a machine learning system is trained on a dataset that contains personal information, care must be taken to ensure that the personal information is not exposed through a malicious attack on the system. The concept of differential privacy emerged so that society can benefit from the useful statistical insights machine learning systems provide while still protecting the privacy of individuals.

A machine learning system M (henceforth referred to simply as the algorithm, or the algorithm under audit) satisfies (ε, δ)-DP (differential privacy) if, for any pair of inputs x and x' that differ by the addition or deletion of a single person's data, and for any measurable set S, the following condition holds.

Equation 1: Privacy Assurance with Differential Privacy
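
For reference, since the equation image does not reproduce here, the standard (ε, δ)-DP condition that Equation 1 expresses can be written as:

P[M(x) ∈ S] ≤ e^ε · P[M(x') ∈ S] + δ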

This guarantees that the output of the algorithm does not change significantly even if the data x becomes x' through the addition or deletion of one person's data. P[M(x) ∈ S] is the probability that the algorithm M, given data x, produces an output belonging to S. Although abstract, you can think of it as, for example, the probability that an image classification system trained on MRI images x returns a classification result in S, such as "tumor present".

The parameters ε and δ in the equation determine the level of privacy protection. For ease of understanding, consider the case ε = 0 and δ = 0: the probability of returning the result "tumor present" for MRI images x is then exactly equal to the probability of returning "tumor present" for MRI images x'.

This means that the system's output does not depend on the data of any particular individual. It is difficult to extract personal data from the output of such a system, which means that privacy is protected.

Conversely, if ε and δ take large values, the classification result can change depending on the data of a specific individual, meaning the system depends on that individual's data. An attacker could therefore potentially analyze the system's inputs and outputs and extract personal information.

So, specifically, we want to know what values ε and δ take, in other words, how well privacy is protected. The values of ε and δ cannot be pinned down exactly, but depending on the algorithm, theoretical upper bounds on ε and δ can be derived through mathematical analysis.

Differential Privacy Audit

Earlier, we explained that there is a concept called differential privacy, which mathematically defines whether a machine learning system can protect privacy. Differential privacy auditing is an experimental way to check whether differential privacy is being protected.

There are two reasons we want to do a differential privacy audit.

The first is to find out whether the mathematical analysis is tight, and whether there is room to improve it. A lower bound means the true value is at least that large. In the extreme case, claiming a lower bound of -∞ can never be wrong, but it is also uninformative. Conversely, claiming a lower bound of ∞ is almost certainly wrong, because the true value is surely smaller. Showing the true lower bound, or rather the largest lower bound that can actually be proven, is where the skill of the mathematical analysis comes in.

The second is to detect errors in mathematical analysis and implementation. There may be errors in the derivation of equations or calculations in the mathematical analysis. There may be errors in the implemented algorithm. The only way to really know for sure is to experiment.

Traditional Differential Privacy Audit Issues

Traditional differential privacy auditing attempts to directly estimate the probabilities that appear in Equation 1 experimentally.

To do this, one creates a pair of datasets that differ only in whether a single data point is included, and runs the algorithm M several hundred times on each.

From the results, we estimate P[M(x) ∈ S] and P[M(x') ∈ S], check how far apart the two probability distributions are, and derive lower bounds for ε and δ.

Running the algorithm several hundred times in this way takes a great deal of computation time.

Obviously, this is not practical when auditing large machine learning systems.
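
As a rough illustration of this conventional approach (a simplified sketch, not the exact procedure of any specific prior audit, and omitting the confidence intervals a real audit needs), the following Python code runs a hypothetical training function many times on two neighboring datasets and turns the observed frequencies into a crude lower bound on ε. The names train_fn, score_fn, and threshold are placeholders, not from the paper.

```python
import numpy as np

def conventional_audit(train_fn, score_fn, x, x_prime, n_runs=500,
                       threshold=0.5, delta=0.0):
    """Sketch of a conventional DP audit: train hundreds of times on two
    datasets differing in one record and compare how often a fixed
    membership test fires on each side."""
    hits_in, hits_out = 0, 0
    for _ in range(n_runs):
        model_in = train_fn(x)         # dataset containing the target record
        model_out = train_fn(x_prime)  # dataset without the target record
        hits_in += score_fn(model_in) > threshold    # output landed in S
        hits_out += score_fn(model_out) > threshold
    p_in = hits_in / n_runs    # empirical estimate of P[M(x) in S]
    p_out = hits_out / n_runs  # empirical estimate of P[M(x') in S]
    # Equation 1 says P[M(x) in S] <= e^eps * P[M(x') in S] + delta,
    # so any consistent eps satisfies eps >= log((p_in - delta) / p_out).
    if p_out <= 0.0 or p_in <= delta:
        return 0.0  # no evidence of leakage from this test
    return float(np.log((p_in - delta) / p_out))
```

Even in this simplified form, the cost is 2 × n_runs full training runs, which is exactly what makes the conventional approach impractical for large models.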

Suggested Differential Privacy Audit Ideas

This raises a simple question: if the audit is slow because it runs the algorithm M hundreds of times,

"wouldn't it be nice if we could do a differential privacy audit with just one run of algorithm M, and is that even possible?"

This paper answers this question and demonstrates the effect, not only through mathematical analysis, but also through actual experiments.

How to realize the idea of the proposal

The way to achieve differential privacy auditing with a single run of Algorithm M is simple.

Instead of considering whether to include or exclude one piece of data, consider whether to include or exclude multiple pieces of data.

The algorithm of the auditing method is shown in Algorithm 1.

Algorithm 1. The proposed auditing method, which performs the audit with a single training run

Algorithm 1 consists of nine lines, but lines 4 through 7 and line 9 are the substance of the method, so I will explain those lines.

In the proposed auditing method, we first prepare n - m data points that are always included (non-audited data) and m data points that are each randomly included or excluded (audited data).

For the audited data, m fair coins are tossed independently to decide whether to include or exclude each audited data point; the choice is recorded as S_i ∈ {-1, +1}, where S_i = +1 means include and S_i = -1 means exclude (line 4 of Algorithm 1).

We then run the algorithm under audit on the dataset x_IN made up of the data chosen for inclusion, and obtain its output w (line 5 of Algorithm 1). In the case of a machine learning system, think of this as training on x_IN, with the trained model w as the output.

From this output w, we predict which side each coin landed on, that is, whether each audited data point was selected (the prediction is T_i ∈ {-1, 0, +1}). Specifically, we first compute a score Y_i from x_i and w (line 6 of Algorithm 1). From these scores we guess whether each x_i was included in the training data: we predict T_i = +1 for the k_+ largest scores, T_i = -1 for the k_- smallest scores, and T_i = 0 for the remaining scores (line 7 of Algorithm 1).

Here, T_i = 0 means abstaining: if we cannot make a prediction with high confidence, we should not force one.

As for why the prediction is based on sorting the scores, the intuition is that a high score suggests the data point was included in the training data, while a low score suggests it was not.

Finally, the proposed auditing method returns the true selections S and the predictions T of whether each data point was selected (line 9 of Algorithm 1).
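
To make lines 4 through 7 and line 9 concrete, here is a minimal Python sketch of the procedure just described. It is not the paper's reference implementation; train and score stand in for the audited training algorithm and the score function, and k_plus + k_minus is assumed to be at most m.

```python
import numpy as np

def audit_one_run(train, score, fixed_data, audit_data, k_plus, k_minus, rng=None):
    """Minimal sketch of the proposed audit (Algorithm 1, lines 4-7 and 9)."""
    rng = rng or np.random.default_rng()
    m = len(audit_data)

    # Line 4: toss m fair coins, S_i = +1 (include) or -1 (exclude).
    S = rng.choice([-1, +1], size=m)

    # Line 5: run the audited algorithm once, on the included data only.
    x_in = list(fixed_data) + [x for x, s in zip(audit_data, S) if s == +1]
    w = train(x_in)

    # Line 6: compute a score for every audited data point (included or not).
    Y = np.array([score(x, w) for x in audit_data])

    # Line 7: guess "included" (+1) for the k_plus largest scores,
    # "excluded" (-1) for the k_minus smallest, abstain (0) otherwise.
    order = np.argsort(Y)
    T = np.zeros(m, dtype=int)
    if k_plus > 0:
        T[order[-k_plus:]] = +1
    if k_minus > 0:
        T[order[:k_minus]] = -1

    # Line 9: return the true coin flips and the guesses.
    return S, T
```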

As just described, the auditing method itself ultimately returns only the true selections S and the predictions T. At first glance, this seems entirely unrelated to estimating the differential privacy parameter ε discussed at the beginning.

However, the paper shows by mathematical analysis that if the audited algorithm satisfies ε-DP, then the accuracy of the selection predictions is at most e^ε / (e^ε + 1), and from this the parameter ε can be estimated.

In other words, by actually applying the proposed auditing method to the algorithm under audit, we can measure the accuracy of the selection predictions. According to the mathematical analysis, e^ε / (e^ε + 1) must be at least as large as this measured accuracy, so for consistency ε must be at least a certain value. This is how a lower bound on ε can be estimated.
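
As a simplified illustration of this last step (a naive point estimate; the paper's actual procedure accounts for randomness and produces a high-confidence bound), the inequality accuracy ≤ e^ε / (e^ε + 1) can be inverted from the audit's output S and T as follows:

```python
import numpy as np

def epsilon_lower_bound(S, T):
    """Naive epsilon lower bound from the audit output (S, T).
    Only inverts accuracy <= e^eps / (e^eps + 1) at the observed
    accuracy, ignoring the confidence intervals used in the paper."""
    guessed = T != 0                      # only count non-abstained guesses
    if guessed.sum() == 0:
        return 0.0
    acc = float((T[guessed] == S[guessed]).mean())
    if acc <= 0.5:
        return 0.0                        # no evidence of leakage
    if acc >= 1.0:
        return float("inf")               # degenerate without confidence intervals
    # acc <= e^eps / (e^eps + 1)  =>  eps >= log(acc / (1 - acc))
    return float(np.log(acc / (1.0 - acc)))
```

For example, an observed accuracy of 0.73 would imply ε ≥ log(0.73 / 0.27) ≈ 1.0.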

Assessment Results

One method for training machine learning models is stochastic gradient descent (SGD), and a variant of SGD that satisfies differential privacy is DP-SGD. In this evaluation, DP-SGD is audited with the proposed method to estimate a lower bound on the differential privacy parameter ε and to check how close it comes to the theoretical value.
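
For readers unfamiliar with DP-SGD, a single update step roughly works as in the sketch below: each per-example gradient is clipped to a norm bound, Gaussian noise scaled to that bound is added, and the noisy average is used to update the parameters. This is a generic illustration, not the exact implementation audited in the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One simplified DP-SGD update: clip each per-example gradient,
    sum, add Gaussian noise proportional to the clipping norm, average."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

Together with the number of training steps and the sampling rate, the noise multiplier determines the theoretical (ε, δ) guarantee that the audit tries to match from below.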

There are two audit settings: the white-box setting and the black-box setting. In the white-box setting, the auditing method can observe the training process of DP-SGD and intervene in any gradient. In the other setting, the black-box setting, the auditor can only supply inputs and observe the final result. The paper describes the white-box setting as the strongest "attack" on DP-SGD, and the black-box setting as the weakest.

Figure 1 shows the results of the evaluation in the white box setting.

Figure 1. Estimation results for ε from the proposed auditing method in the white-box setting.

The horizontal axis is the theoretical value of ε and the vertical axis is the estimated value of ε. The red dotted line indicates the theoretical value, so the closer a result lies to this line, the closer the estimated ε is to the theoretical value.

The m in the figure is the number of audited data points. As the number of audited data points increases, the results approach the red dotted line. To improve the accuracy of the ε estimate with the proposed auditing method, it therefore seems necessary to increase the number of audited data points. However, simply increasing that number does not always bring the results closer to the red dotted line.

The results of the evaluation in the black box setting are shown in Figure 2.

Figure 2. Estimation results for ε from the proposed auditing method in the black-box setting.

The figure is read in the same way as in the white-box setting. In the black-box setting, the estimation accuracy for ε tends to be lower than in the white-box setting, and if the number of audited data points m is increased too much, the estimation accuracy for ε actually deteriorates.

The paper's interpretation is that when there are too few audited data points, there are not enough observations to obtain a high-confidence lower bound on ε, and when there are too many, the model cannot memorize enough of the audited data to make the ε estimate accurate. As for why the estimation accuracy is generally lower in the black-box setting, the authors suggest that the attack may simply be too weak (the audit too loose), so the system appears to be protecting privacy.

Conclusion

This article explained the paper's privacy protection technique for machine learning systems, a topic in growing social demand as privacy awareness increases.

This paper was selected as an Outstanding Main Track Paper at NeurIPS 2023, a top AI conference. Reviewers commented that it is a rare example of theoretical analysis leading to practicality, and rated its soundness as excellent.

Previously, ε, which represents the level of privacy protection, could not be estimated without training the model several hundred times. The theoretical analysis shows that ε can be estimated from a single training run, and this speedup should translate directly into practical usability.

On the other hand, in the evaluation results described here, the estimated ε did not reach the level of the theoretical value, and further improvements in accuracy are hoped for.

In addition, the reviewers appreciated that the authors gave sufficient consideration to the limitations of the proposed method, even though authors are usually reluctant to focus on the shortcomings (i.e., limitations) of their own proposals.

I believe this is due in large part to the authors' strong mathematical analysis skills, but it also reflects a sincerity that researchers and engineers should learn from.
