Your AI policy is already obsolete (opinion)

Over the past two years, many of us have written course, program and university policies about artificial intelligence. Maybe you banned AI from your first-year composition course. Or maybe your computer science program embraces it. Your campus information security and academic integrity offices may have guidelines of their own.
Our argument is that the integration of AI technology into existing platforms has made these frameworks obsolete.
We all knew this landscape was going to change. Some of us have been writing and talking about “the switch,” when Gemini and Copilot become embedded in every version of the Google and Microsoft suites: the world where opening any new document prompts the question “What are we working on today?”
That world is here, sort of, but we are currently in a period of uneven integration. Last year, Ethan Mollick began referring to current AI models as a “jagged frontier”: models that excel at some tasks while other skills remain out of reach. We’re deliberately borrowing that language to describe this period of jagged integration, in which the switch hasn’t been flipped but integration surrounds us in ways that were difficult to predict and are impossible to cover with a general guide.
Almost every policy we’ve seen, reviewed or heard about imagines a world in which a user opens a browser window, navigates to ChatGPT or Gemini and starts a conversation. Our proposed syllabus policies at California State University, Chico, policies we helped write, imagined that world with guidance such as, “You will be informed of when, where and how these tools are allowed to be used, as well as guidance for attribution.” Even the University of Pennsylvania guidelines, which have been among our favorites from the start, include language like “AI-generated contributions should be properly cited like any other reference,” language that assumes these tools are something you use intentionally. That is how AI worked for about a year, but it is not how it works in this era of jagged integration. Consider, for example, the growing integration of AI in the following domains:
- Research. When we open some versions of Adobe Acrobat, there is an embedded “AI assistant” in the upper right corner, ready to help us understand and work with the document. Open a PDF citation and reference manager, such as Papers, and you are now greeted by an AI assistant ready to help you understand and summarize your academic papers. A student who read an article you assigned but can’t remember the main point might use that assistant to summarize it or remind them where they read something. Did that student use AI in a class where it was banned? Even when we review our colleagues’ tenure and promotion files, do we need to attest that we never clicked the summarize button while plowing through hundreds of pages of student teaching evaluations? From an information security perspective, we understand the issues with putting sensitive data into these systems, but how do we avoid AI when it’s built into the systems we already use?
At the top of most Google searches now sits a Gemini-generated summary. How should we tell students to avoid AI-generated search results? Google is at least polite enough to label its summaries (perhaps as a promotion for Gemini), but we don’t know how other programs produce results or summaries unless the search engines tell us. The common thread here, and throughout this piece, is that this technology is integrated into the systems we and our students already use.
- Development. The new iPhone is purpose-built for Apple Intelligence, which will permeate Apple’s operating system and text-input tools and often work in ways that are invisible to the user. Apple Intelligence will help sort notes and ideas. According to CNET, “The idea is that Apple Intelligence is built into your iPhone, iPad and Mac to help you write, get things done and express yourself.” Many students complete coursework on their phones. If they use a compatible iPhone, they will be able to generate and edit text right on the device as part of the system software. In addition, Apple has partnered with OpenAI to include ChatGPT as a free layer on top of Apple Intelligence, integrated into the operating system, with rumors that Google Gemini will be added later. If a student uses Apple Intelligence to help organize ideas or rewrite a discussion post, have they used AI as part of their assignment?
One piece of technology that blurs these lines is Google’s NotebookLM. It is the only non-integrated technology we discuss here, but that’s because it’s designed to sit inside the workflows of writers, researchers and students. It’s an impressive platform that lets a user upload large amounts of material, such as a decade’s worth of notes or PDFs, and the system generates summaries in multiple formats and answers questions. Author and developer Steven Johnson is up front that the system has a potential place in educational settings, but it isn’t designed to produce full essays; rather, it produces what we would think of as study materials. Still, is the decision to collaborate with this platform on an organizational task the same as pulling ideas from ChatGPT?
- Production. Have you noticed how the autocomplete features in Google Docs and Word have improved over the last 18 months? That’s because they’re powered by the same kind of machine learning that underlies generative AI. Much of the content generation we do already includes autocomplete, which has been active in Google Docs since 2019. You can use Gemini in Google Docs through Workspace Labs right now. Do we need to include instructions for disabling autocomplete for students or for people who work with sensitive data?
When you log into Instagram or LinkedIn to publish an update, an AI assistant offers to help you. If we are teaching students about marketing content production, public relations or professional skills development, do they need to disclose that AI embedded in content platforms has helped them generate ideas?
Beyond Policy
We do not pose these questions to be contrarian; they are incredibly difficult questions that undermine the policy foundations we were beginning to build. Rather than rewriting policies that will only have to be rewritten again and again, we urge institutions and faculty members to take a different approach.
We suggest that instead of AI policies, especially syllabus policies, we offer framing. A more straightforward approach would acknowledge that AI is everywhere in how we produce information and that we often interact with these systems whether we want to or not. It would recognize that AI is both expected in the workplace and unavoidable. Faculty members might also indicate that the use of AI will be part of an ongoing conversation with students and that they welcome new use cases and tools. There may be times when we encourage students to do work without using these tools, but that is a matter for discussion, not policy.
Alternatively, faculty members may see this integration as a threat to student learning in their disciplines. In those cases, we need to use the syllabus as a place to explain why students should work without AI and how we intend to set them up to do so. Either way, framing this as an ongoing discussion about technology integration rather than as policy treats students as adults while acknowledging the complexity of the situation.
There continues to be a mismatch between the pace of technological change and the slow pace of university governance. Early policy creation followed the same frameworks and processes we have used for centuries, processes that have served us well. But the moment we are living through cannot be resolved by faculty senate votes or by the work of committees that are still being formed. There will come a time in the near future when jagged integration is smoothed out into full integration, when AI sits at the core of every operating system and piece of software. Until then, in our classrooms, in peer review and in institutional structures, we must think about this technology differently and move beyond policy.