Grammarly Faces Legal Action Over Unauthorized Use of Writers’ Identities in AI Feature
Grammarly, a widely used digital writing assistant, is embroiled in legal controversy following the release of its Expert Review feature. The new tool uses artificial intelligence to simulate editorial feedback from renowned figures such as novelist Stephen King, the late scientist Carl Sagan, and tech journalist Kara Swisher. However, the company did not obtain consent from these individuals or their estates before incorporating their names and personas into the feature.
Journalist Julia Angwin, one of the professionals impersonated by the feature, has initiated a class-action lawsuit against Superhuman, Grammarly’s parent company. The lawsuit alleges violations of privacy and publicity rights, asserting that the company exploited the identities of numerous writers without authorization. Angwin, known for her investigative work on technology’s impact on privacy, expressed her dismay: “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.”
The Expert Review feature, available exclusively to subscribers at an annual fee of $144, has been criticized for delivering generic and superficial feedback. Casey Newton, founder and editor of the tech newsletter Platformer, tested the tool by submitting one of his articles for review. The AI-generated feedback, purportedly from an imitation of Kara Swisher, suggested: “Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?” The response was perceived as lacking depth and specificity, raising questions about the feature’s value and the ethical implications of using real individuals’ identities without consent.
The incident underscores growing concerns about the use of artificial intelligence in content creation and the potential for misuse of personal identities. As AI technologies continue to evolve, clear ethical guidelines and legal frameworks become increasingly necessary to protect individuals’ rights and maintain trust in digital tools.