Abstract: Local differential privacy (LDP) is widely used to collect and analyze sensitive data while protecting user privacy. However, it is vulnerable to data poisoning attacks by malicious users. The k-subset mechanism and the wheel mechanism are LDP schemes with optimal utility for frequency estimation, yet their resistance to data poisoning attacks has not been analyzed and evaluated in depth. Therefore, data poisoning attack methods are designed to assess the resistance of both the k-subset mechanism and the wheel mechanism. First, the random perturbed-value attack and the random item attack are discussed; then, maximal gain attack methods against the k-subset mechanism and the wheel mechanism are constructed. These attacks maximize the estimated frequencies of target items selected by the attacker, which is achieved by sending carefully crafted poisoning data to the data collector through fake users. The attack gains are rigorously analyzed and compared in theory, and the effects of the data poisoning attacks on the k-subset mechanism and the wheel mechanism are evaluated experimentally. Finally, defensive measures are proposed to mitigate the effects of data poisoning attacks.
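To make the poisoning intuition concrete, the following is a minimal, self-contained sketch rather than the paper's exact construction. It assumes a simplified k-subset reporting rule (domain size d, subset size k, privacy budget eps) and a standard unbiased frequency estimator derived from that rule; it then shows how fake users who always place a chosen target item in their crafted subsets inflate that item's estimated frequency. All function names, parameters, and numeric settings are illustrative assumptions.

```python
import math
import random

def k_subset_report(v, d, k, eps, rng):
    """Simplified k-subset perturbation: report a size-k subset of {0,...,d-1}."""
    p = k * math.exp(eps) / (k * math.exp(eps) + d - k)
    others = [x for x in range(d) if x != v]
    if rng.random() < p:
        # Include the true value plus k-1 uniformly chosen other items.
        return {v, *rng.sample(others, k - 1)}
    # Exclude the true value; report k uniformly chosen other items.
    return set(rng.sample(others, k))

def estimate_frequency(reports, item, d, k, eps, n_total):
    """Unbiased frequency estimate for one item under the rule defined above."""
    p = k * math.exp(eps) / (k * math.exp(eps) + d - k)
    q1 = p                                              # Pr[item reported | true value == item]
    q2 = p * (k - 1) / (d - 1) + (1 - p) * k / (d - 1)  # Pr[item reported | true value != item]
    count = sum(1 for s in reports if item in s)
    return (count / n_total - q2) / (q1 - q2)

rng = random.Random(0)
d, k, eps = 32, 4, 1.0
target = 7

# 10,000 genuine users whose true values never equal the target item.
genuine_values = [rng.choice([x for x in range(d) if x != target]) for _ in range(10_000)]
reports = [k_subset_report(v, d, k, eps, rng) for v in genuine_values]
print("before attack:", estimate_frequency(reports, target, d, k, eps, len(reports)))

# 500 fake users: every crafted report contains the target item, which is the
# intuition behind a maximal-gain-style attack (maximize the target's count).
for _ in range(500):
    fillers = rng.sample([x for x in range(d) if x != target], k - 1)
    reports.append({target, *fillers})
print("after attack: ", estimate_frequency(reports, target, d, k, eps, len(reports)))
```

Under these assumed settings, the target item's estimated frequency rises from roughly zero to a clearly positive value even though no genuine user holds it, illustrating the gain an attacker obtains by injecting a modest number of crafted reports.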