Apple’s Use of iOS Bug Reports for AI Training Raises Privacy Concerns

Apple has recently updated its Feedback Assistant app, adding a privacy notice that informs users their bug report submissions may be used to improve Apple Intelligence models and other machine learning systems. The change has sparked discussion about user consent and data privacy.

The updated notice states:

"Apple may use your submission to improve Apple products and services, such as training Apple Intelligence models and other machine learning models."

In practice, this means that when users report issues in iOS, the content they submit could be used to train and refine Apple's AI systems.

Apple has previously announced an opt-in program that lets users contribute their data for AI training. That program emphasizes on-device processing and applies Differential Privacy techniques to protect individual data points. The updated Feedback notice, by contrast, implies that users who submit bug reports consent by default to their data being used for AI training, with no explicit opt-out available.

Developer Joachim highlighted the change on social media, expressing concern over the lack of an opt-out mechanism. He noted that the only way to avoid participation is to refrain from submitting bug reports altogether, which has frustrated users who feel their consent is being assumed without clear communication.

While Apple asserts that privacy-preserving measures are in place, the absence of an opt-out option in Feedback Assistant raises questions about user autonomy and transparency around data usage. As Apple continues to integrate AI into its products, it remains to be seen how the company will address these concerns and whether it will offer users more explicit choices about their data.