“Grammarly’s New AI Feature Criticized for Misleading ‘Expert’ Feedback Attributions”

Article Title:

Grammarly’s ‘Expert Review’ Feature: A Misstep in Authenticity

Article Text:

In August 2025, Grammarly introduced “Expert Review,” an AI-powered feature designed to improve users’ writing by offering revision suggestions framed from the perspectives of renowned authors and thinkers. The feature was pitched as delivering feedback in the voice of esteemed figures, both historical and contemporary.

However, this feature has sparked controversy due to its approach to attributing feedback. Reports from Wired and The Verge highlighted that Grammarly’s suggestions were framed as if they originated from well-known authors and even tech journalists from prominent publications like The Verge, Wired, Bloomberg, and The New York Times. This practice raised questions about the authenticity and ethical implications of using individuals’ names without their direct involvement or consent.

To test the feature firsthand, a draft of this article was run through Grammarly. The feedback included suggestions to “add ethical context like Casey Newton,” “leverage the anecdote for reader alignment like Kara Swisher,” and “pose the bigger accountability question like Timnit Gebru.” None of these individuals was involved in producing the feedback, underscoring a fundamental flaw in the feature’s design.

Alex Gay, vice president of product and corporate marketing at Grammarly’s parent company, Superhuman, addressed these concerns by stating that the experts mentioned are referenced because their published works are publicly available and widely cited. Furthermore, Grammarly’s user guide clarifies that references to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.

Despite these clarifications, the feature’s approach has been criticized for potentially misleading users. Historian C.E. Aubin expressed skepticism, stating, “These are not expert reviews because there are no ‘experts’ involved in producing them.” This sentiment underscores the importance of transparency and authenticity in AI-driven tools, especially when they claim to provide expert insights.

The controversy surrounding Grammarly’s Expert Review feature highlights the challenges tech companies face when integrating AI with human expertise. While the intention to provide users with high-quality feedback is commendable, the execution must ensure that representations are accurate and that the use of individuals’ names and reputations is handled with the utmost care and respect.

In the broader context, this incident is a reminder of the ethical considerations inherent in AI development. As AI systems become more sophisticated and more deeply woven into daily life, developers must prioritize transparency and respect for individual contributions. Misrepresenting the involvement of experts not only undermines a tool’s credibility but also erodes user trust.

The episode also raises questions about the boundaries of AI-generated content and the responsibility companies bear for ensuring their tools do not mislead users, even inadvertently. As AI continues to evolve, clear guidelines and ethical standards will be essential for navigating the interplay between technology and human expertise.

In conclusion, while Grammarly’s Expert Review feature aimed to enhance the writing experience by incorporating expert perspectives, its implementation has highlighted significant ethical and practical challenges. Moving forward, it is imperative for tech companies to approach such integrations with a heightened sense of responsibility, ensuring that the pursuit of innovation does not come at the expense of authenticity and ethical integrity.