Apple to Adjust CSAM System to Keep False Positive Deactivation Threshold at One in a Trillion

When Apple announced its plans to tackle child sexual abuse material on its operating systems last week, the company said the chance of an account being wrongly disabled by false positives would be one in a trillion per year.

Part of the reasoning by which Apple arrived at this figure has now been published in a document (PDF) that provides more details about the system.

The most controversial element of Cupertino’s plans is its system for detecting child sexual abuse material (CSAM) on devices. On Apple devices, this will involve comparing images against a list of known images provided by the US National Center for Missing and Exploited Children (NCMEC) and other child protection organizations before an image is stored in iCloud.

When a reporting threshold is reached, Apple will inspect the metadata uploaded alongside the encrypted images in iCloud and, if the company determines that the material is CSAM, the user’s account will be disabled and the material will be reported to NCMEC in the United States.
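
As a rough illustration of that flow, here is a minimal sketch, assuming a plain hash-set lookup in place of Apple’s NeuralHash and private set intersection machinery, and a hypothetical match threshold of 30 (the article does not give the real number):

```python
from typing import Callable

MATCH_THRESHOLD = 30   # hypothetical value; the article does not state Apple's actual number

def on_device_match(image_hash: str, known_csam_hashes: set[str]) -> bool:
    """Compare an image's perceptual hash against the known-CSAM list before the
    photo is stored in iCloud (a stand-in for Apple's NeuralHash and private set
    intersection protocol)."""
    return image_hash in known_csam_hashes

def server_side_action(match_count: int, human_review_confirms_csam: Callable[[], bool]) -> str:
    """Nothing is inspected until the reporting threshold is reached; only then is
    the uploaded metadata reviewed, and a confirmed account is disabled and
    reported to NCMEC."""
    if match_count < MATCH_THRESHOLD:
        return "no action"
    if human_review_confirms_csam():
        return "account disabled; report sent to NCMEC"
    return "review found no CSAM; no action"
```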

The document says that the list will be determined by data from two child protection organizations operating in different countries, not just one database.
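
One plausible reading of that requirement is that a hash is only shipped to devices if it appears in the lists of at least two organizations in different countries. A minimal sketch under that assumption, with placeholder hash values:

```python
# Placeholder hash values from two independent child protection organizations
# operating in different countries.
ncmec_hashes = {"a1f3c2", "9c0d41", "77e2b8"}
second_org_hashes = {"9c0d41", "77e2b8", "b4aa90"}

# Under the assumption above, only hashes vouched for by both organizations are
# included in the list distributed to devices.
on_device_hash_list = ncmec_hashes & second_org_hashes
print(on_device_hash_list)   # contains only '9c0d41' and '77e2b8'
```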

After running 100 million non-CSAM images through the matching system, Apple found three false positives, and zero when it was tested against adult pornography. The company says that by assuming a “worst-case” error rate of one in a million, it can set the match threshold so that the chance of an account being falsely flagged for deactivation stays at one in a trillion.
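
To see how a per-image error rate turns into an account-level guarantee, here is a rough back-of-the-envelope calculation. Only the one-in-a-million worst-case rate comes from the article; the library size of 100,000 photos and the match threshold of 30 are illustrative assumptions:

```python
from math import exp, lgamma, log

def binom_tail(n: int, p: float, k_min: int, k_max: int | None = None) -> float:
    """P(X >= k_min) for X ~ Binomial(n, p), computed in log space and truncated:
    terms far past k_min are negligible for the tiny rates used here."""
    if k_max is None:
        k_max = min(n, k_min + 100)
    total = 0.0
    for k in range(k_min, k_max + 1):
        log_term = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                    + k * log(p) + (n - k) * log(1 - p))
        total += exp(log_term)
    return total

# Hypothetical parameters: the one-in-a-million worst-case per-image error rate is
# from the article; 100,000 photos and a threshold of 30 matches are assumptions.
print(binom_tail(n=100_000, p=1e-6, k_min=30))   # on the order of 1e-63, far below one in a trillion
```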

To ensure that Apple’s iCloud servers cannot tally a user’s number of positive CSAM matches, the device will also produce fake metadata, which Apple calls synthetic safety vouchers. The company adds that its servers will not be able to distinguish real vouchers from fake ones until the threshold is reached.

“The on-device match process will, with some probability, replace a real safety voucher being generated with a synthetic voucher. This probability is calibrated so that the total number of synthetic vouchers is proportional to the match threshold,” explains Apple.

“These synthetic vouchers are a property of each account, not of the system as a whole. For accounts below the match threshold, only the user’s device knows which vouchers are synthetic; Apple’s servers do not and cannot determine this number, and therefore cannot count the number of true positive matches.”
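
A toy sketch of the synthetic-voucher idea described in the two quotes above, assuming the device simply flips a biased coin for each upload; the probability value, the `Voucher` structure, and the function name are illustrative, not Apple’s actual design:

```python
import random
from dataclasses import dataclass

SYNTHETIC_PROBABILITY = 0.0003  # illustrative; Apple says the real rate is calibrated so that
                                # the expected number of synthetic vouchers is proportional
                                # to the match threshold

@dataclass
class Voucher:
    blob: bytes          # what the server sees; identical in form for real and synthetic vouchers
    is_synthetic: bool   # known only to the device, never uploaded

def voucher_for_upload(encrypted_metadata: bytes) -> Voucher:
    """With some probability, replace the real safety voucher with a synthetic one.

    Because the uploaded blobs are indistinguishable, Apple's servers cannot count
    an account's true positive matches while it stays below the match threshold."""
    if random.random() < SYNTHETIC_PROBABILITY:
        return Voucher(blob=random.randbytes(len(encrypted_metadata)), is_synthetic=True)
    return Voucher(blob=encrypted_metadata, is_synthetic=False)
```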

Parental control
Cupertino claims the system was designed so that a user does not need to trust Apple to know that it “works as advertised”: “The threat model relies on the technical properties of the system to guard against the unlikely possibility of malicious or coerced reviewers, and in turn relies on the reviewers to guard against the possibility of technical or human errors earlier in the system.”

The company reiterated that it would deny requests to add non-CSAM images to the dataset: “Apple will also refuse all requests to instruct human reviewers to file reports on anything other than CSAM material for accounts that exceed the match threshold.”

During the initial announcement, Apple also said that machine learning will be used in iMessage to alert parents using Family Sharing when a child’s account has viewed or sent sexually explicit images, as well as to warn the child directly.

“For accounts of children aged 12 and under, each sexually explicit image sent or received will warn the child that, if they continue to view or send the image, their parents will receive a notification. The notification is only sent if the child continues to send or view an image after this warning,” says Apple. “For accounts of children aged 13 to 17, the child is still warned and asked whether they want to view or share a sexually explicit image, but parents are not notified.”
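
Put as a small decision rule (the age boundary and Family Sharing condition follow the description above; the names and structure are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    in_family_sharing: bool

def handle_explicit_image(account: ChildAccount, continued_after_warning: bool) -> dict:
    """Decide who is alerted when a sexually explicit image is sent or received.

    The child is always warned first; parents are only notified for children aged
    12 and under, and only if the child continues after the warning."""
    decision = {"warn_child": True, "notify_parents": False}
    if account.in_family_sharing and account.age < 13 and continued_after_warning:
        decision["notify_parents"] = True
    return decision

# Example: a 12-year-old who proceeds after the warning triggers a parent notification.
print(handle_explicit_image(ChildAccount(age=12, in_family_sharing=True), True))
```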