Introduction

The academic and professional worlds have long emphasized grammatical correctness. Over the years, grammar programs have been created and updated to ensure error-free writing, although they have significant limitations. To distance themselves from the failures of past programs, manufacturers of new grammar software often make claims too broad for their products’ capabilities. Grammarly, Inc. (2014b) confidently advertised its program, Grammarly®, as having the following abilities: “Grammarly provides another set of eyes to help perfect your writing. Grammarly’s patent-pending grammar checking technology reviews and improves your text, correcting grammar, spelling, word choice and style mistakes with unmatched accuracy.” For the academic world, Grammarly® is even claimed to work alongside university writing centers (Grammarly, Inc., 2013).

However, grammar programs and writing centers are unlikely allies: they share neither focus nor goals. Grammar programs focus on the text and have tried to stand in credibly for human editors, while writing centers focus on writers and have struggled not to be labeled as editing services or grammar shops. Grammarly®, specifically, has emphasized perfecting writing and producing error-free prose (Grammarly, Inc., 2014b). Writing centers, by contrast, acknowledge that there is no such thing as the “perfect” paper: they aim instead to help students learn and become better writers while respecting students’ authority over their work. Because Grammarly, Inc. (2013) has chosen to advertise to writing centers on its Grammarly@edu website, Grammarly® is subject to the standards defined by the field of writing center studies.

In the current study, the claim that Grammarly® can complement writing centers will be examined from a writing center perspective. At the time of this study, the researcher had been working in writing centers for almost four years during her undergraduate and graduate studies. The researcher began working at the Central Michigan University (CMU) Writing Center as a graduate assistant in Fall 2012 and held the position of Online Coordinator from Spring 2013 through Summer 2014. Her responsibilities included administering the Writing Center’s asynchronous online service and training consultants to work online. Because of this position, the researcher has had experience evaluating asynchronous feedback on student work. The current study will compare and contrast Grammarly®’s approach to providing feedback with that of online writing center consultants at CMU.

Research Problem

The claim made by Grammarly, Inc. (2013) increases its own credibility while posing several problems for writing centers. Writing centers are free, non-profit services that often struggle to gain enough funding to serve their student population or to be understood and appreciated by their campus faculty and administrators. Their reputations and funding could be further jeopardized by grammar programs, such as Grammarly®, claiming to offer writing assistance.

Grammarly, Inc.’s (2013) claim implies a similarity between grammar programs and writing centers and supports a long tradition of students, faculty, and administrators misunderstanding writing center work. Those outside the field have often mislabeled writing center consultants as editors or proofreaders (North, 1984; Carino, 1995; Castner, 2000; Adkins, 2011). While writing centers can help students learn grammar and develop proofreading skills, this is neither the primary focus nor the limit of their services. Getting writers to use good grammar is not the main goal of writing center pedagogy: writing centers seek to help students improve in all aspects of writing. Grammarly, Inc.’s (2013) claim is harmful not only to perceptions of writing centers but also to perceptions of effective writing. Even in academic forums, a common misconception is that good grammar makes good writing; this misconception among administrators is worrisome to writing centers.

While Grammarly, Inc. (2013) did not claim that its product is better than a writing center or could act as a replacement, its advertising could be interpreted in this way by university administrators who do not understand writing center work and are looking to provide student services at low costs. As Harbord (2006), whose writing center was almost closed by administrators in 2004, explained,

It is in the nature of academic support units that senior administrators are more likely to make cuts there when under financial pressure than axing more prestigious parts of the university. Writing centers are all too often threatened with closure or staff reduction, and all too often have to marshal resources and arguments to ensure their continued existence. (p. 1)

Some administrators have considered supplementing or even replacing their writing centers with technology to cut costs while reaching more students. Grammarly® is a cheaper, 24-hour option that can work with an unlimited number of students at the same time. If universities viewed Grammarly® and writing centers as similar, rather than complementary or opposed, writing centers could lose funding or campus popularity in favor of Grammarly®. At the extreme end, writing centers could be closed, or new writing centers could be prevented from being established. In an age when technology is often used to save money or increase efficiency, research is needed to establish when and where human knowledge is irreplaceable for effective writing.

Literature Review

Background on writing centers and grammar programs provides a context for the current study. A discussion of writing center history and pedagogy will clarify what writing centers do and how they achieve their goals. Online writing centers will then be discussed with a focus on the asynchronous online consultation, which was the type used in this study. Writing center scholars have shown concern for several aspects of asynchronous consulting, and these same concerns also apply to the nature of grammar programs. The strengths and weaknesses of past grammar programs will be presented before Grammarly® is discussed in detail, including its manufacturers’ claims and its positive and negative reviews.

Writing centers

According to Carino (1995), writing center work likely started in the classroom with the “laboratory method,” in which students could receive individual help from their instructor or participate in peer editing groups (p. 105). This instructional method took place during class time, where students could practice writing and correcting their errors before having them corrected by the instructor (Boquet, 1999). Such a format was documented as early as 1904 (Carino, 1995). These early writing conferences evolved into separate facilities in the 1930s and 1940s, although they were still closely tied to the classroom (Carino, 1995; Boquet, 1999). Carino (1995) explained, “By the 1940s, free-standing writing labs were a recognizable part of higher education, though it is difficult to know how widespread they were” (p. 107). Writing centers also multiplied as part of Armed Forces English, the on-campus programs that taught communication skills to officers during World War II. By the 1950s, writing centers were becoming a part of writing programs (Carino, 1995).

Though writing conferences have existed for over 100 years, the work of writing centers today is still unclear to those outside the field: “Writing centers, throughout their history, have been misunderstood, hence the reams of scholarship defining writing centers, their work, and their function in the university and communities in which they exist” (Adkins, 2011, p. 3). This confusion prompted North’s (1984) “Idea of a Writing Center,” which addressed those outside the writing center field and became iconic scholarship for those within the field. North (1984) clarified that a writing center is not a “grammar and drill center,” “fix-it shop,” or “first aid station” (p. 437); rather, writing centers focus on improving writers, not their texts. Years later, scholars still reported that students and faculty view writing centers as fix-it shops (Carino, 1995; Castner, 2000) or “last-minute insurance against anything that might be wrong with the paper” (Castner, 2000, p. 127). This continued misunderstanding of writing center work must also be addressed within the field.

Writing center training handbooks carefully guide new consultants through their roles and responsibilities. Ryan and Zimmerelli’s (2010) The Bedford Guide for Writing Tutors has been a popular training resource and is considered authoritative on writing center work. As Ryan and Zimmerelli (2010) explained, “Writing centers differ widely in their facilities, procedures, and resources; however, you will find one constant from center to center: students seeking help as they work through different stages of the writing process” (p. 6). Because writing centers focus on the writer, they play a role in every stage of the process of writing a paper: prewriting, drafting, revising, and editing. Different writing stages call for different types of revision, commonly referred to as global- and surface-level revision (Ryan & Zimmerelli, 2010). Global revision addresses large issues like development, organization, or content, while surface revision alters individual sentences. Students may ask only for help with grammar, but Ryan and Zimmerelli (2010) warned that students can use “grammar” and “proofreading” to mean any type of revision, including global revision (p. 20). Comments regarding global issues are preferred in writing center work and are prioritized over surface issues because global changes, such as reorganization, would overshadow surface corrections and likely cause new grammar errors to emerge (Ryan & Zimmerelli, 2010). Surface-level issues are beside the point if the text requires more fundamental rethinking and revision.

When surface-level issues are addressed, the purpose is for students to learn grammar. Consultants are advised to focus on surface issues in small parts of the paper so the student can apply the grammar feedback to current and future work (Ryan & Zimmerelli, 2010). According to Ryan and Zimmerelli (2010), “This approach also reminds writers that they are ultimately responsible for revising their papers” (p. 51). Consultants explain grammar issues while being wary of terms unfamiliar to the student. Hewett (2010) advised consultants to write at the “student’s comprehension level” because students “respond best” to vocabulary that is familiar or at least defined in the feedback (pp. 88-89). As Hewett (2010) further explained,

While it is important not to assume that novice writers have certain vocabulary abilities just because of their course levels, the potential for lack of clarity in textual settings suggests that it is best to define the terms in context or to use noncomplex vocabulary. (p. 89)

Ryan and Zimmerelli (2010) likewise argued that grammar jargon must be defined to aid the student’s learning.

Writing centers value human interaction and consultant-student relationships. In a session, consultants and students work collaboratively, with the consultant assuming the attentive roles of ally, coach, commentator, collaborator, learner, and/or counselor (Ryan & Zimmerelli, 2010). Hewett (2010) explained that students are looking for relationships as well: “Students want feedback on their writing—certainly because grades ultimately are in play—but also because they want to know, to learn, to be better writers, and especially to be understood” (p. 12). Ryan and Zimmerelli (2010) suggested that, to start building a relationship, consultants introduce themselves at the beginning of a session, sit next to students rather than across from them, and keep the paper or computer in front of the students to leave them in control of their work.

While writing centers are best known for working face-to-face with students, they can also serve students online, and they increasingly do so in the era of technology and online classes.

Online writing centers

Online writing centers existed as early as 1986, according to Brown (2000), who cited Utah State University’s electronic tutor (or E.T.) as “the first writing center consultant to respond electronically to student writers” (p. 18). Kinkead (1987) explained that Utah State University’s E.T. was an individual who responded to students’ email questions and helped students who were shy, afraid to visit the writing center in person, or unavailable during operating hours. Technology has since advanced and created more options for writing centers to reach students online.

Online consulting comes in two types: synchronous and asynchronous. In synchronous online sessions, the consultant and student review a paper and converse together at the same time through some form of technology, such as microphone, video, or chat (Ryan & Zimmerelli, 2010). In asynchronous sessions, the consultant reviews a paper without the student present and returns the feedback through email, database, or online classroom. According to Hewett (2010), asynchronous consulting takes place in three to four phases: (1) the student submits an essay, (2) the consultant reads the essay and provides feedback, (3) the student reads the feedback and revises, and (4) optionally, the student resubmits for additional feedback. Asynchronous sessions may be used in place of synchronous ones due to cost, lack of technology, or limited student availability. For example, the CMU Writing Center relies on asynchronous consulting because many online submitters are non-traditional or military students, who may not be available to work with a consultant in real time.

Writing center scholars have seen advantages of asynchronous over synchronous online consulting for both students and consultants. Students have more time to read their feedback (Hewett, 2010), can be empowered by online privacy, and feel more comfortable asking questions (Carlson & Apperson-Williams, 2000). Similarly, shy and anxious consultants may feel more comfortable working online (Ryan & Zimmerelli, 2010). Asynchronous consulting allows consultants time to revise their feedback and be more detailed (Hewett, 2010; Ryan & Zimmerelli, 2010). Asynchronous feedback then becomes an archived, written record that consultants and students can refer to at later dates (Mabrito, 2000; Hewett, 2010; Ryan & Zimmerelli, 2010).

Scholars have also described roles specific to online consultants. Kastman Breuch and Racine (2000) proposed the online role of peer reviewer, who acts as both reader and writer. Remington (2006) likewise referred to the online consultant’s roles as reader and writer. With written feedback, online consultants demonstrate writing skills. Furthermore, consultants can give a reader’s perspective and, in doing so, avoid assuming a power relationship that might be present in other roles, such as teacher or professional (Remington, 2006).

Furthermore, online consulting maintains the goals and benefits of a face-to-face consultation. Mabrito (2000) found online consulting to be “a logical extension of the writing tutorial’s goals” of face-to-face work and to still foster collaboration (p. 145). Kastman Breuch and Racine (2000) stated, “although online tutoring and face-to-face tutoring occupy different spaces, the same pedagogical goals—namely, student-centered, process-based pedagogy—can be facilitated equally well in both mediums” (p. 246). Consultants still focus on making better writers and not on editing texts (Ryan & Zimmerelli, 2010).

Maintaining writing center pedagogy is one of the challenges of online consulting, which is why scholars have worried that online consulting might be seen as “dropping off dry cleaning” (Cooper, Bui, & Riker, 2008, p. 310), using a fax machine (Kastman Breuch & Racine, 2000), or making a one-time visit to the hospital (Castner, 2000). Though asynchronous consulting risks losing human interaction, dialogue, and a focus on global issues, specific strategies can overcome these disadvantages (Ryan & Zimmerelli, 2010).

Human interaction

A common concern has been that human interaction will be lost within an asynchronous online writing center. Asynchronous consulting does not involve body language, visual cues, or tone of voice (Carlson & Apperson-Williams, 2000; Mabrito, 2000; Hewett, 2010; Ryan & Zimmerelli, 2010). It relies only on text for communication, and this text must carry meaning on its own (Carlson & Apperson-Williams, 2000). Carlson and Apperson-Williams (2000) described the computer screen as faceless, expressionless, cold, sterile, and uninviting. Without the interaction that comes naturally in face-to-face sessions, the human aspect can be lost behind text-based technology.

However, Carlson and Apperson-Williams (2000) have also argued, along with other scholars (Cooper et al., 2008; Hewett, 2010), that consultants can still establish relationships online. Carlson and Apperson-Williams (2000) stated that “Tutors and students still work on student texts, but the absence of face-to-face interaction causes the interpersonal relationship to develop through words about the writing rather than through physical presence” (p. 134). Cooper et al. (2008) believed consultants can keep collaboration and humanity in online work by setting a tone with friendly and informal introductory comments and providing closure with ending comments. Similarly, Hewett (2010) found that consultants can engage with students by introducing themselves and asking open-ended questions.

Dialogue

The literature has addressed issues with the lack of dialogue and back-and-forth conversation in asynchronous work (Carlson & Apperson-Williams, 2000; Castner, 2000; Cooper et al., 2008; Hewett, 2010; Ryan & Zimmerelli, 2010). Castner (2000) stated that dialogue helps students learn how to talk about writing, so consultants can better understand what students want to work on when they want their papers “edited.” Furthermore, students who submit a paper and receive it back without further dialogue may believe they have received all the feedback necessary to make changes. They might not submit again after revising or schedule multiple sessions (Castner, 2000). Castner (2000) discussed how the lack of dialogue could become problematic:

Consultants do not want clients to perceive writing centers as fix-it shops for writing, places where writing can be repaired in one session. E-mail consulting without dialogue, however, may promote these ideas by giving the impression that clients can send off their texts to be fixed at the last minute by a voiceless editor. (p. 120)

Asynchronous consulting does not always allow for back-and-forth communication, but dialogue can still be initiated. Hewett (2010) argued that while asynchronous sessions may appear one-sided, they still include “at least two participants struggling to communicate about an activity that feels of great consequence to each” (p. 12). Students can ask questions for consultants to answer during the session (Castner, 2000; Kastman Breuch & Racine, 2000; Hewett, 2010). Similarly, online consultants can pose questions for students to answer during revision; interact with the student and the text; and share their reactions as readers (Cooper et al., 2008; Hewett, 2010; Ryan & Zimmerelli, 2010).

Grammar

Online consulting presents consultants with greater temptation to edit or proofread because the student is not present to guide the session. Technology also enables consultants to make changes to the student’s paper. However, the same writing center pedagogy should apply to online sessions. As in face-to-face sessions, consultants are advised to discuss only patterns of error (Cooper et al., 2008; Ryan & Zimmerelli, 2010). As Cooper et al. (2008) explained, “It is ironic that painstakingly correcting every error makes a tutor feel exhausted, while the student who receives the corrected paper feels ashamed. This is not conducive to learning” (p. 314). Cooper et al. (2008) stated that addressing every grammatical problem is “pedagogically unsound” (p. 315). If online consultants point out every error from the beginning, students may expect all errors to be addressed, and they may not look for further errors on their own (Ryan & Zimmerelli, 2010). Hewett (2010) explained that too much feedback can overwhelm students and even offend experienced writers. Similarly, too little feedback may not provide explanations or adequately help students with an issue.

Consultants must provide logical explanations for grammar errors; otherwise, the student will make the same mistakes in future papers (Cooper et al., 2008; Ryan & Zimmerelli, 2010). Not providing an explanation would do a “disservice” to the writer (Ryan & Zimmerelli, 2010, p. 80). Ryan and Zimmerelli (2010) suggested that consultants use pre-written template responses for common issues to ensure consistency and save time; however, consultants should still slightly alter these responses to be personalized to each student’s paper.

Software in writing centers

As far back as the late 1970s, writing centers felt threatened by technology and argued against it. Veit (1979) stated that “writing is an intensely personal activity,” and effective consultants can be concerned for both the student writer and their writing: “What I’m saying is that one of our most valuable services is one that machines aren’t programmed to provide” (p. 2). The concern remains that technology, particularly without a human attached, will lack the social and psychological interaction that writing centers value. Veit (1979) argued that students require personalized attention and empathy they cannot receive in their classes; the student’s greatest need is a personal relationship and “contact with a human being who cares” (p. 2). Machines, however, cannot offer such personal instruction. Pemberton (2004) discussed how the increased use of technology is worrisome for more than just lack of interaction:

While computers and computer software have often been praised by writing center scholars for the educational benefits they provide, they have also been seen as incipient threats – not merely to the personal, interactive pedagogies that writing centers embrace, but also to the writing center’s very existence, particularly in tough budget times when administrators may view CAI [computer assisted instruction] programs and other technological artifacts as cheap, efficient alternatives to the labor-intensive, individualized teaching model at the heart of writing center practice. (p. 11)

While writing centers have embraced technology that benefits students, such as email and word processors used in online consulting, they do not want technology to compromise writing center core values (Pemberton, 2004).

Grammar programs

Grammar programs look for grammar and spelling errors and can be part of a word processor or stand alone as their own products; stand-alone grammar checkers first appeared in the early 1980s (Beals, 1998). Grammar software is programmed with pre-defined patterns or rules and looks for “errors” that deviate from these patterns (Pogue, 1993; McAlexander, 2000). Common areas of strength for grammar programs include catching repeated words (Pogue, 1993; LaRocque, 1999), misspelled words, overused expressions, long noun modifiers (LaRocque, 1999), subject-verb agreement, and comma splices (McAlexander, 2000). However, the programs can misapply these patterns and produce errors of their own (McAlexander, 2000). As Galletta, Durcikova, Everard, and Jones (2005) explained,

Unfortunately, it does not take long to discover that, while sophisticated beyond the wildest dreams of many users a decade ago, the software is imperfect in important ways. There are false negatives, where the language-checking software fails to detect true errors, and false positives, where the software detects problems that are not errors. (p. 82)

Machines and programs are limited to finding surface issues identifiable by pattern and not larger, more important issues related to meaning or content (Veit, 1979; Beals, 1998; McAlexander, 2000). Beals (1998) argued that little evidence proves that using a grammar checker would help a student become a more effective writer or have error-free writing. R.L.G. (2012a) stated, “Since computers can be tricked even by one of the most computational elements of language (syntax), we shouldn’t be surprised that they should struggle harder still to judge whether a text is interesting, relevant, concise, organised, stylish or truthful.” Grammar programs also cannot consider exceptions to certain grammatical rules for stylistic purposes.
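
To make the pattern-matching approach concrete, the following sketch (in Python) implements two hypothetical surface rules of the kind described above. The rules, names, and sample text are illustrative assumptions, not the implementation of any product reviewed here; the sketch is meant only to show why pattern-identifiable errors are catchable while false positives and false negatives of the sort Galletta et al. (2005) described are unavoidable.

    import re

    # A minimal sketch of a rule-based grammar checker (hypothetical rules,
    # not taken from any actual product). Each rule is a surface pattern;
    # the checker has no model of meaning or context.
    RULES = [
        # Doubled words ("the the") are reliably pattern-identifiable, but
        # this rule also falsely flags legitimate doubles ("had had").
        (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "repeated word"),
        # A naive its/it's rule: "its" directly before a form of "be" is
        # usually a typo for "it's," yet the same confusion elsewhere
        # ("Its raining") escapes this narrow pattern entirely.
        (re.compile(r"\bits\s+(?:is|was|been)\b", re.IGNORECASE),
         "possible its/it's confusion"),
    ]

    def check(text):
        """Return (flagged text, rule description) for every pattern match."""
        findings = []
        for pattern, description in RULES:
            for match in pattern.finditer(text):
                findings.append((match.group(0), description))
        return findings

    sample = "The the results speak for themselves. Its been said before."
    for flagged, why in check(sample):
        print(why + ": " + repr(flagged))

Under these two rules, both planted errors are caught, but a sentence like “He had had enough” would be falsely flagged, and “Their going to revise” would pass unflagged: a pattern can judge only surface form, never meaning, which is exactly the limitation the scholars above describe.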

Historically, grammar programs have had consistent deficiencies and technological limitations even though they have been updated and changed over time. Several programs have been tested by scholars and others, with similar results:

  • Grammatik (Pogue, 1993)
  • RightWriter® (Pogue, 1993)
  • Correct Grammar (Pogue, 1993)
  • Microsoft™ Word® (Pogue, 1993; LaRocque, 1999; McAlexander, 2000; Vernon, 2000; Kies, 2012)
  • Editor (Beals, 1998)
  • WordPerfect® (Vernon, 2000; Kies, 2012)
  • Grammarian™ Pro X (Kies, 2012)
  • Open Office™ Writer (Kies, 2012)

Though technology had surely changed from 1993 to 2012, lack of accuracy was still a consistent limitation. In tests, all of these programs gave incorrect feedback; some also missed grammatical errors (Beals, 1998; LaRocque, 1999; Vernon, 2000; McAlexander, 2000; Kies, 2012). The grammar programs’ most common weaknesses were related to commonly confused words, such as “its” versus “it’s” (Pogue, 1993; LaRocque, 1999); “that” versus “which”; run-on sentences (Pogue, 1993; McAlexander, 2000); pronoun agreement and case; and parallelism (McAlexander, 2000).

Grammar programs have also been criticized for the rules they enforce. Many grammar checkers follow formal rules prescribed by outdated pedagogy, such as no split infinitives and no contractions (LaRocque, 1999; Vernon, 2000; LaRocque, 2008). They are often designed to assume all writing is formal and, therefore, cannot accommodate informal writing. LaRocque (1999) argued that informal writing can reflect humanity and warmth, while formal writing lacks them. Grammar programs also cannot accommodate rhetorical and stylistic choices (LaRocque, 1999; Vernon, 2000; LaRocque, 2008). Unlike humans, they work outside of a rhetorical context (Kies, 2012).

Because they miss errors and mark false errors, grammar programs have not been recommended for inexperienced writers or those with little grammar knowledge (McAlexander, 2000; Galletta et al., 2005). Users need basic grammar knowledge and the confidence to reject suggestions that are incorrect or conflict with their personal writing style (McAlexander, 2000). Galletta et al. (2005) conducted a study in which 65 graduate and undergraduate students of high and low writing ability edited papers with and without Microsoft™ Word®’s grammar checker. All participants performed better when the grammar program caught obvious errors but worse when the program missed errors or flagged false errors. The students believed the program when it said something was incorrect, even if it was not. Those with high writing ability performed worse with the software turned on than with it turned off, because they did not look for errors the computer did not find. Galletta et al. (2005) believed the students with high writing ability actually lost some of that ability or advantage when they relied on a grammar program, and determined that grammar software could give a false sense of security. Thus, users may place power and trust in the program rather than finding errors themselves.

Due to these limitations, reviewers and scholars have continued to prefer humans over computers when it comes to grammar and writing feedback (Pogue, 1993; Beals, 1998; LaRocque, 1999; Galletta et al., 2005; LaRocque, 2008). Beals (1998) considered the use of a grammar program in the classroom and concluded that students instead need to interact more with their teachers throughout the writing process. Beals (1998) found that grammar programs could not provide positive feedback and would be unable to “assist our students directly until they can think and respond as we do” (pp. 71-72). Pogue (1993) used 35 passages to test four grammar programs against a human editor. The grammar programs’ best accuracy rating was 50%, while the editor was correct on all but two passages: “Final score: human 33, computer 8” (p. 186). LaRocque (1999) concluded that artificial intelligence lacks a brain and that human intelligence is still superior for writing and grammar:

The human brain, with its flexibility and capacity to imagine, is still superior in many ways to the electronic model. The computer is never tired or preoccupied or careless, so it is wonderful at remembering and observing rules. But it doesn’t have the imagination of even a very young human brain—which not only can forget the rules, but can find in them loopholes and options. Electronic intelligence can process information like a house afire, but it still can’t think. (p. 52)

Nine years later, LaRocque (2008) came to the same conclusion that software is merely a machine with no brain.

Despite these issues, some authors have recommended using grammar programs within the composition classroom (McAlexander, 2000; Vernon, 2000). McAlexander (2000) argued that instructors must teach students how to use grammar programs to remove the negative effects of the program and help students learn grammar. McAlexander (2000) conducted a grammar checker project: she first gave students grammar instruction before having them analyze Microsoft™ Word®’s suggestions. After the project, she found that students exhibited increased efficiency and confidence. Vernon (2000) felt that reliability issues made grammar checkers an important topic to address in the classroom “[b]ecause grammar checkers do tend to undermine their own potential value, and because students might object to human corrections not noted by the program” (p. 336). Instructing on grammar programs would teach grammar in context, rather than separate from writing or computers (Vernon, 2000).

Although some sources have found grammar programs to be a useful teaching tool in the classroom, the general consensus has been that their weaknesses outweigh their benefits and that the programs are too limited to be used on their own. The current study focuses on the grammar program Grammarly®.

Grammarly®

Grammarly® is an online grammar program that its manufacturers advertise as more accurate and comprehensive than past and current grammar programs. Grammarly®’s manufacturers have labeled the program “the world’s best grammar checker,” one that can “correct up to 10 times more mistakes than popular word processors” (Grammarly, Inc., 2014b). The company has claimed that its program can search for over 250 types of grammatical issues, including vocabulary and spelling. By the end of October 2012, Grammarly® had over 3 million registered users (Business Wire, 2012), and by March 2014, it was licensed by at least 250 colleges and universities within and outside the United States (PR Newswire Association LLC, 2014a).

Grammarly® is available for individual users at around $30 a month, $60 a quarter, or $140 a year (Grammarly, Inc., 2014b). It is also available for campus license, but price estimates are not available on the website (Grammarly, Inc., 2013). Individual users access Grammarly® by logging into their paid accounts through the Grammarly® website. If a school has a license, students access Grammarly® through their school’s online learning management system, such as Blackboard (Grammarly, Inc., 2013). Users can upload or paste their documents into the program through an Internet browser (Grammarly, Inc., 2014a). Users do not have options to list their stage in the writing process or the areas they would like reviewed. However, they can submit their paper as one of six document types: General, Business, Academic, Technical, Creative, and Casual (Grammarly, Inc., 2014a). Less than a minute after submission, Grammarly® generates a report of all the issues found, shows where errors occur within the paper, and offers explanations and examples so users can correct the errors on their own. In addition to checking for errors, Grammarly® scores documents on a 100-point scale (based on the number of errors per word count) and places papers in one of four categories: “poor, revision necessary”; “weak, needs revision”; “adequate, can benefit from revision”; or “good” (Grammarly, Inc., 2014a).
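
Grammarly, Inc. does not publish the formula behind this score, so the following sketch (in Python) is only a hypothetical illustration: it assumes a simple penalty per error per 100 words and invented cutoffs for the four categories, merely to show how a scale “based on the number of errors per word count” could map onto the verdict labels above.

    # Hypothetical illustration of a 100-point score based on errors per
    # word count. The penalty weight and category cutoffs are assumptions,
    # not Grammarly's actual (unpublished) values.
    def score_document(word_count, error_count):
        errors_per_100_words = 100.0 * error_count / max(word_count, 1)
        score = max(0, round(100 - 10 * errors_per_100_words))  # assumed scaling
        if score < 40:
            verdict = "poor, revision necessary"
        elif score < 60:
            verdict = "weak, needs revision"
        elif score < 80:
            verdict = "adequate, can benefit from revision"
        else:
            verdict = "good"
        return score, verdict

    # Under these assumptions, a 500-word paper with 12 flagged issues
    # scores 76: "adequate, can benefit from revision."
    print(score_document(500, 12))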

Grammarly®’s audience consists of not only inexperienced and professional writers but also students (Grammarly, Inc., 2013). A separate site entitled Grammarly@edu advertises the program as a tool for higher education in classrooms, libraries, and writing centers. For writing centers, Grammarly® has been advertised as reaching more students more quickly and conveniently:

 Grammarly@edu is designed to effectively complement the services your writing center offers today. Sentenceworks operates just like a human tutor in that it guides students through the revision process and delivers rich instructional feedback – all through highly engaging online interface. Grammarly@edu allows your writing center to expand its scope both in terms of reach − being instantly available to every student in your institution − and in the range of services − by helping students with advanced grammar, sentence structure and other sentence-level aspects of writing. [italics in original] (Grammarly, Inc., 2013)           

Grammarly, Inc. (2013) has claimed that Grammarly® can offer specific benefits to writing centers by giving instant feedback, being available to all students, and reaching out to those who would not otherwise seek writing help. It further stated that Grammarly® can help students learn writing skills at the sentence-level as well as develop proofreading and revising skills.

Grammarly® has received both praise and criticism. Business Wire (2012) reported that, of over 800 survey participants (otherwise unspecified) who used Grammarly®, 84% of students felt their grades improved, 94% of users felt they saved time editing, and 63% became more confident in their writing. PR Newswire Association LLC (2014a) reported similar statistics: 70% of student users surveyed had higher confidence and 84% had higher grades because of Grammarly®. In tests, Grammarly® performed better than Microsoft™ Word® (Carbone, 2012; Wright, 2012) and WriteCheck™ (Carbone, 2012). Emphasis Training Ltd. (2012) found that Grammarly® might help increase grammar knowledge: “The error cards are certainly more comprehensive than their word processor counterparts, and generally have nice explanations of grammar terms.” Street-Smart Language Learning (2010) mentioned that Grammarly® gave clear and informative explanations, had a quick turn-around time, and could handle large texts.

However, when reviewers tested Grammarly® for accuracy, they found many of its findings incorrect, unnecessary, inconsistent, and/or poorly explained (Street-Smart Language Learning, 2010; Carbone, 2012; Emphasis Training Ltd., 2012; Grammarist, 2012; R.L.G., 2012b; Wright, 2012; Yagoda, 2012). For example, R.L.G. (2012b) found that Grammarly® gave inappropriate vocabulary suggestions out of context, such as changing “black” (as in “African”) to “Stygian,” “unhealthy,” “despairing,” or “terrible.” Grammarist (2012) criticized Grammarly® for advising students to avoid the passive voice, not end a sentence with a preposition, and not begin a sentence with a conjunction.

Grammarly® also missed errors in several tests (Street-Smart Language Learning, 2010; Emphasis Training Ltd., 2012; Grammarist, 2012; R.L.G., 2012b; Wright, 2012). In R.L.G.’s (2012b) three student paragraphs, Grammarly® missed 25 errors, including pronoun-antecedent agreement, excess verbs, comma splices, missing words, and comma use. Street-Smart Language Learning (2010) found that Grammarly® was geared more towards native speakers and did not find common second-language errors, such as article and punctuation use.

Reviewers found that users need to already have grammar knowledge and confidence to sort through the program’s correct and incorrect suggestions (Street-Smart Language Learning, 2010; Emphasis Training Ltd., 2012). Emphasis Training Ltd. (2012) stated that “an unconfident user might be confused into submitting to the advice.” R.L.G. (2012b) questioned whether students would depend on Grammarly® rather than build writing skills. Several reviewers reached harsh conclusions on Grammarly®’s abilities. R.L.G. (2012a) used Grammarly®’s own grading system to rate the program as “weak, needs revision.” Grammarist (2012) concluded that “Grammarly is useless for everyone.” R.L.G. (2012b) explained that “Computer analysis of natural language is very tough stuff, and Grammarly has utterly flailed [sic] in the tests here. The best way to learn to write is from other humans, and $140 will buy a lot of well-written and edited books.” Similar to reviews of other grammar programs, reviewers preferred human interaction over Grammarly®.

Summary of literature

To create better writers, writing centers value connecting personally with students and talking with them about writing. Although computers and technology are common in academia and in writing centers, scholars have worried that asynchronous online consulting would compromise writing center values by eliminating dialogue and human interaction or by focusing heavily on surface-level issues. Several sources have confirmed that consultants can be personable and hold conversations with students online and can overcome the temptation to edit. However, technology that is not mediated by a human consultant, such as a grammar checker, may not be able to overcome these challenges; furthermore, the purchase of such programs by a university could result in budget cuts for writing centers. Grammar programs have been tested for accuracy, and they have consistently given incorrect feedback, flagged false positives, and missed genuine errors. Consequently, they are not suggested for inexperienced writers with little grammar knowledge. Although grammar programs catch some errors accurately, their weaknesses outweigh their benefits, and the literature prefers human writing feedback. Grammarly® has attempted to distance itself from these limitations; however, reviewers have found the same weaknesses and inaccuracies.

Past grammar programs have been either tested on their own or compared to professional editors and grammarians—writing resources not readily available to students on most college campuses. Grammar programs have also been studied within the composition classroom, but this academic space functions differently and separately from a writing center. Current discussions and tests of Grammarly® have been limited to its accuracy and have not explored how it could function in higher education. The writing center field has also not compared human consultants to the technology that has threatened, and continues to threaten, its funding, reputation, and existence. The current study will compare Grammarly®’s findings against those of online writing center consultants to determine their similarities and differences.

Research Objectives

The purpose of this study was to explore and test the stance that Grammarly® can supplement and benefit writing centers and the students they serve. Writing feedback from the online grammar program Grammarly® was compared to that of online consultants at the Central Michigan University (CMU) Writing Center. The CMU Writing Center was chosen for this study because of its convenience to the researcher and its large, sophisticated online service for CMU’s satellite and online courses. PR Newswire Association LLC (2014b) labeled CMU as “offering one of the strongest online programs in the country.” In 2014, CMU’s online bachelor’s programs were ranked number one overall, and both its online bachelor’s programs and its online master’s in education programs were ranked number one for veterans (PR Newswire Association LLC, 2014b).

As Grammarly® cannot interact face-to-face with students or their texts, its feedback can only be compared fairly with that of asynchronous online consultants. A content analysis was used to reveal where Grammarly®’s responses might be equal to, weaker than, or stronger than those of online writing center consultants. The criteria for this comparison included whether the suggestions were correct or incorrect, what the responses focused on, and how suggestions were explained to the student. This study also included focus groups to consider the consultants’ opinions: they compared their own findings with those of Grammarly® and considered its potential use both inside and outside of writing centers. This study attempted to answer two research questions:

  1. How does Grammarly®’s writing feedback compare to that of online writing center consultants in terms of amount, focus, type, accuracy, terminology, techniques, language choices, and evaluation?
  2. How do writing center consultants respond to Grammarly® and its feedback on the same texts the consultants reviewed?

The findings of this study could demonstrate whether the feedback of writing center consultants can be replicated, complemented, or enhanced by Grammarly®. Writing centers remain the best option for improving student writing, and this study’s findings will help the writing center field clarify and promote the work of its consultants to university administrators, teachers, and others who may not fully understand the purpose and benefits of writing centers or the scope and limitations of grammar programs like Grammarly®.