Grammarly, the widely used writing assistant, has introduced a feature called "Expert Review," which promises to elevate users’ writing with insights attributed to renowned authors, influential thinkers, and even prominent tech journalists. Launched in August 2025, the feature is part of a broader expansion of Grammarly’s AI-powered capabilities and its suite of writing tools.
"Expert Review" sits in the sidebar of Grammarly’s main writing assistant. It offers revision suggestions framed as coming "from the perspective" of various subject-matter experts, the idea being that specialized feedback will help users refine their prose, structure their arguments, and articulate ideas with greater clarity and impact, purportedly through the lens of established masters in different fields.
The nature and authenticity of these "expert reviews," however, quickly drew scrutiny in the tech and media press. Wired noted that Grammarly frames the feedback as if it originates directly from well-known authors, living or deceased, creating an impression of direct consultation or endorsement and lending an air of authority to the AI-generated suggestions. The Verge reported that in some cases the advice even appears to come from tech journalists at major publications, including The Verge itself, Wired, Bloomberg, and The New York Times. That detail has raised questions about how these "experts" are selected and what it means to attach their names to AI-generated content.
Curious how far the feature’s roster extended, one TechCrunch journalist tested "Expert Review" by pasting an early draft of their own article into Grammarly, hoping to see writing tips in the style of TechCrunch colleagues or at least other prominent figures in tech journalism. The experience diverged from that expectation. Instead of familiar names from their own publication, the assistant offered guidance framed in the styles of other influential personalities: the journalist was advised to "add ethical context like Casey Newton," to "leverage the anecdote for reader alignment like Kara Swisher," and to "pose the bigger accountability question like Timnit Gebru." The outcome, while demonstrating the feature’s capacity to emulate diverse rhetorical styles, proved "rather disappointing" for the TechCrunch author. The complaint was not about the advice itself but about the omission of their own publication’s voices, prompting a rhetorical question about TechCrunch’s standing in this curated list of "experts" if other leading publications were being cited. The anecdote illustrates the gap that can open between a user’s expectations and an AI’s actual output, especially in the nuanced realm of professional identity and community.
A critical aspect of the ongoing discussion is the involvement, or rather the distinct lack thereof, of the individuals whose names the feature invokes. None of the figures mentioned, including Casey Newton, Kara Swisher, Timnit Gebru, or any of the other authors and journalists whose "perspectives" are simulated, appear to be involved in the development or operation of "Expert Review." Crucially, none have granted Grammarly permission to use their names in this context. This raises significant questions about intellectual property, consent, and the ethics of leveraging a public persona to lend credibility to an AI-driven service without authorization.
Grammarly’s parent company, Superhuman, addressed these concerns through Alex Gay, its vice president of product and corporate marketing. Gay stated to The Verge that these experts are referenced "because their published works are publicly available and widely cited." This explanation suggests that the AI models are trained on the publicly accessible body of work produced by these individuals, allowing the system to generate suggestions that theoretically align with their characteristic styles or thematic concerns. Furthermore, Grammarly itself has included a disclaimer within its user guide for the feature, explicitly stating: "References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities."
While this disclaimer aims to provide legal clarity and manage user expectations, it simultaneously highlights the tension inherent in the feature’s marketing and its underlying mechanics. The phrasing "for informational purposes only" and the explicit denial of "affiliation" or "endorsement" are reasonably clear from a legal standpoint. However, the very act of framing feedback "from the perspective" of these named individuals inherently creates an impression of their involvement or at least their indirect contribution to the advice, potentially leading to a subtle but significant misinterpretation by users. The ambiguity lies in how users perceive "perspective" versus direct, authorized expert input.
This perceived dissonance between the feature’s branding and its operational reality has led to pointed critiques. Historian C.E. Aubin, in an observation shared with Wired, succinctly encapsulated this concern: "These are not expert reviews, because there are no ‘experts’ involved in producing them." Aubin’s statement underscores a fundamental philosophical and practical challenge presented by AI-driven tools that simulate human expertise. An "expert review," traditionally understood, implies the direct application of a qualified human’s knowledge, judgment, and experience to a specific piece of work. This process involves critical thinking, nuanced understanding, and the ability to adapt advice based on context – qualities that, while AI can mimic, are fundamentally rooted in human consciousness and specialized training. When an AI generates suggestions based on patterns gleaned from publicly available texts, it is performing a sophisticated form of pattern matching and content generation, not an act of human expert review. The distinction, according to critics, is crucial for maintaining transparency and intellectual honesty.
Grammarly’s "Expert Review" thus serves as a compelling case study in the evolving role of artificial intelligence in creative and professional work. It highlights AI’s capacity to analyze and synthesize vast amounts of information and offer highly personalized assistance, while surfacing hard ethical questions about the attribution of ideas, the nature of intellectual property in an AI-generated world, and what "expertise" means when it is mediated and simulated by algorithms. As AI integrates deeper into tools for endeavors as nuanced as writing, balancing technological innovation against transparency, consent, and authentic attribution will remain a critical challenge for developers and a vital point of discussion for users and the wider public. The feature, while innovative, invites a deeper consideration of what constitutes genuine expert guidance in an era increasingly shaped by artificial intelligence.