
2026 Resolutions

The following resolutions were passed at the CCCC Annual Business Meeting held on Friday, March 6, in Cleveland, Ohio.

Resolution 1

Whereas the 2026 Conference on College Composition and Communication Convention reflects the sustained planning and collaborative labor required to support scholarly exchange across writing, rhetoric, and communication studies;

Whereas Melissa Ianetta, Program Chair for the 2026 Convention, provided steady leadership in developing a program that advances CCCC’s commitments to teaching, research, and professional practice across diverse institutional and disciplinary contexts;

Whereas Melissa Ianetta’s work as Program Chair fostered opportunities for dialogue, connection, and shared inquiry among Convention participants;

Whereas Melissa Ianetta, as expressed in her conference call for proposals for “Conference and Our Conversations,” invited us to remember our conference experiences as individuals and as a discipline, and to envision vital conference experiences that would foster the essential work done afterward;

Whereas Kimberly K. Emmons, Local Arrangements Chair for the 2026 Convention in Cleveland, Ohio, worked diligently to support attendees and enhance the Convention experience through careful attention to logistics, accessibility, and local engagement, including the coordination of special events and conference resources; and

Whereas Kimberly K. Emmons contributed to creating a welcoming Convention environment that supported participants’ engagement with both the city and one another, including resources that helped conferencegoers explore food, drink, and sightseeing in Cleveland and its neighborhoods;

BE IT THEREFORE RESOLVED that the 2026 Conference on College Composition and Communication expresses its sincere appreciation to Melissa Ianetta and Kimberly K. Emmons for their leadership, service, and commitment to CCCC and the broader community of writing and rhetoric scholars and teachers.

Resolution 2

Whereas since before the launch of ChatGPT in 2022, Big Tech companies have used the education sector as a site for what companies call technological innovations;

  • Whether by organizing students’ learning outcomes within learning management systems or by providing easy-to-use digital toolkits marketed as granting access to information, Big Tech has encouraged education and government officials to buy and use its wares. Generative AI is but the latest version of this neoliberal approach to efficiency and expediency. This trend extends a much longer history of corporate technological capture of public education, as evinced by the push toward privatization and the number of for-profit contracts and partnerships that enable the workings of higher education, ranging from learning management systems to enrollment technologies to email and word processing environments to “agentive systems” that can “take over” personal calendars and browsers.
  • Amidst this strong push, educators across writing studies, the humanities, and related fields have faced a great deal of pressure, including from professional organizations that support these fields, to adopt, integrate, and respond to generative AI in their teaching. The ubiquity of generative AI product integrations, along with top-down institutional mandates and incentivization at some colleges and universities, intensifies these pressures. Moreover, this tendency has been exacerbated by the current media environment, the mass marketing push for generative AI products, and the proliferation of synthetic text produced by generative AI.

Whereas students and teachers should have the right to make their own informed choices with regard to generative AI in the writing classroom as a matter of academic freedom;

  • According to the American Association of University Professors’ “The Freedom to Teach,” academic freedom in teaching “includes the right of faculty members to select the materials, determine the approach to the subject, make the assignments, and assess student academic performance in teaching activities for which they are individually responsible, without having their decisions subject to the veto of a department chair, dean, or other administrative officer.” These rights should reasonably include the right of faculty members to select the technologies used in their courses and determine the critical approaches for engaging these technologies.
  • In addition, although conversations about academic freedom often focus on the freedoms of faculty, the 1968 AAUP Joint Statement on Rights and Freedoms of Students states that “As members of the academic community, students should be encouraged to develop the capacity for critical judgment and to engage in a sustained and independent search for truth” (258). In addition, students should be protected in terms of their “freedom of expression,” “against improper academic evaluation,” and “against improper disclosure” (259). Students can and do engage in critical judgment and the sustained inquiry toward understanding the technologies pushed onto them, and they should have agency when it comes to selecting technologies that will enhance and not curtail their learning as generative AI platforms are increasingly found to do.
  • We should not assume that student refusal of generative AI is a result of laziness, groundless defiance, or ignorance. Similarly, teacher refusal should not be read as evidence of being uninformed or resistant to change. Instead, we should understand how such unfounded assumptions can curtail academic freedom and inquiry as we are all working to make sense of these technologies and their various impacts.

Whereas ensuring that students and teachers have the right to refuse generative AI in the writing classroom aligns with longstanding disciplinary understandings related to the practice of writing, multimodal composition, and composition pedagogies, even as we’ve not always lived up to those commitments;

  • As outlined in “Refusing Generative AI in Writing Studies: A Quickstart Guide,” the right to opt out of generative AI in the classroom is a logical outcome considering ongoing disciplinary conversations about writing and writing technologies, and it is based on the shared understandings that language, power, persuasion, and technologies are interconnected; that we resist language homogenization, which is accelerated and advanced by current generative AI models; that, as a discipline, we reject punitive approaches to plagiarism and plagiarism surveillance and guard against the co-optation of student work by for-profit corporations; that we understand that technologies, including generative AI, are never ideologically, culturally, or politically neutral; that we can and should work in solidarity with exploited laborers on whom generative AI platforms rely; that our expertise equips us to be critical of the rhetorics surrounding these technologies, particularly their marketing and promotion, including the use of humanizing metaphors and language; that we should take seriously the current political-economic and environmental impacts of these technologies on the lives of people; and that our priority as teachers is student learning, keeping in mind recent studies about learning loss that can result from the use of generative AI by developing writers.

Whereas the right to refuse generative AI in the college writing classroom extends the work of the 1974 CCCC Resolution on Students’ Right to Their Own Language;

  • In focusing on student and teacher rights in the classroom, this resolution also extends CCCC’s 1974 Resolution on Students’ Right to Their Own Language, which “affirms the students’ right to their own patterns and varieties of language—the dialects of their nurture or whatever dialects in which they find their own identity and style” given that homogenized approaches to language in the classroom rely on the “myth of a standard American dialect” and “amounts to an attempt of one social group to exert its dominance over another.” Not only is the current issue of generative AI in education bound up in issues of power, privilege, and dominance, but these technologies and the economies that prop them up also disproportionately harm already marginalized communities across the globe. In addition, as Antonio Byrd, Carmen Kynard, Alfred Owusu-Ansah, and Maggie Fernandes and Megan McIntyre have argued, generative AI platforms exacerbate language homogenization and reinforce white language supremacy.

Whereas refusal extends the work of Black and Indigenous feminist scholars like Ruha Benjamin, Tina Campt, Audra Simpson, and Yanira Rodríguez, who have articulated refusal as a critical, generative, and ultimately hopeful orientation to ongoing conditions of settler colonialism;

  • As these scholars make clear, refusal is one way to assert one’s sovereignty within oppressive conditions and to imagine new possibilities and futures for responding to the problems posed by generative AI in our discipline and beyond. As Simpson explains in “The ruse of consent and the anatomy of ‘refusal’: Cases from indigenous North America and Australia,” refusal serves “to enunciate repeatedly to ourselves and to outsiders that ‘this is who we are, this is who you are, these are my rights’” (73). Likewise, Rodríguez explains in “Pedagogies of Refusal” that “Refusal helps us unmask seemingly benevolent relations and the function of affect in creating institutional buy-in. Our refusal creates space for resistance to incorporation while simultaneously opening space for us to turn toward another possibility” (5). Indeed, the refusal of generative AI enables us to take a step back from the compulsory opt-in culture that Big Tech has made ubiquitous, and it reopens possibilities for rethinking how we interact with and engage corporate proprietary technologies that profit from student and teacher data and intellectual labor, including plagiarism detection software, learning management systems, and telecommunication technologies.

Whereas ensuring students’ ability to refuse generative AI in the college writing classroom requires that we account for the right to refuse in course design and assessment;

  • To do so, we should not police and punish the use of generative AI among students. Rather, as Elizabeth Palumbo wrote in “A Student’s Right to Refuse Generative AI,” teachers can support their students’ right to refuse generative AI in the following ways:
    • Professors should never put a student’s words or work into a generative AI platform without their consent.
    • Professors should be transparent about any AI usage for grading, course design, and any other way they might use it in their teaching.
    • If a professor does use generative AI in the classroom, they should explain their reasoning behind this decision and how its usage will help students meet learning outcomes.
    • Professors should additionally respect a student’s choice to refuse AI. To do this, they would ideally offer alternative assignments that do not involve generative AI and that do not isolate those students from class discussions and activities.
  • By providing students with options and being transparent about pedagogical choices early in the term, teachers can help students exercise their right to refuse.

Whereas recent studies have found that generative AI does not save time or create efficiencies as frequently claimed but rather shifts time and intensifies workloads;

  • Unsubstantiated claims about how generative AI increases productivity and saves time are especially concerning within a profession that has long dealt with labor issues and that continues to rely considerably on frequently underpaid contingent, adjunct, and graduate student labor. As Stacy Wittstock and Amy Lynch-Biniek report in a forthcoming study of college writing instructors, generative AI, and labor, “Adjusting effectively to new pedagogical and technological landscapes demands significant time and resources . . . that our respondents overwhelmingly reported have not been adequately met.” In addition, perceptions of time saved do not often reflect reality, especially when we account for the time required to set up, learn, review, fact and citation check, and revise synthetic content in order to make it usable. Furthermore, any perceived “saved time” as a result of generative AI is frequently a result of that work being taken on by underpaid, exploited, and invisibilized global laborers.

Whereas the work of college writing instruction should be attentive to industry trends—among many other external factors—but not driven by the goal of workforce preparation through a narrow focus on specific technological skills;

  • According to the organization’s mission statement, “CCCC advocates for broad and evolving definitions of literacy, communication, rhetoric, and writing . . . that emphasize the value of these activities to empower individuals and communities.” As a profession, rhetoric, composition, and writing studies is committed to preparing students to write in a world that is bigger than just work. We understand that students learn to write to navigate uncertainty, gain access to resources, make sense of phenomena, connect with others, build community, process feelings and experiences, and engage in civic participation. As a profession, our goal is much larger than workforce preparation: it is to prepare writers who can thoughtfully navigate a complex and constantly changing technological, cultural, and political world.

BE IT THEREFORE RESOLVED that we affirm the rights of students and teachers to refuse to sign up for, prompt, or otherwise use generative AI in the writing classroom.

Copyright

Copyright © 1998 - 2026 National Council of Teachers of English. All rights reserved in all media.

