Integrating generative AI tools into your classroom requires clear communication with students about expectations and boundaries. A well-crafted AI policy in your syllabus not only sets standards for academic integrity but also guides students toward ethical and productive AI use. There is no single “right” policy; the right one is the one that you can commit to, communicate clearly, and assess fairly. Before writing your statement, consider exploring the following useful resources to guide content and wording:
Below are examples of three perspectives on AI use: Against, Avoid, and Adopt.
Choosing to restrict AI use is a legitimate pedagogical decision. In some courses, such as developmental writing, clinical skills labs, oral communication, and hands-on technical programs, the thinking process itself is the learning outcome. If AI generates the product, the learning does not occur. That said, a no-AI policy is also the hardest to enforce without clarity and consistency. The key is specificity: not just "no AI," but what counts as AI, what the consequences are, and how you'll handle gray areas like grammar checkers or autocomplete.
The use of artificial intelligence tools (such as ChatGPT, Claude, Copilot, Gemini, or similar tools) is not allowed in this course. All submitted work must reflect your own thinking and be written in your own words. Using AI to generate, rewrite, summarize, outline, or otherwise produce course content, whether in full or in part, counts as academic dishonesty and will be addressed under Kirkwood’s Academic Integrity Policy. This policy supports the core goal of the course: developing your own skills. Tools such as spell-check and basic grammar support built into Word or Google Docs are permitted. If you are unsure whether a tool is allowed, ask before submitting your work.
This policy works best when:
Assignments emphasize in‑class work, personal reflection, live demonstrations, or process‑based tasks.
Major projects include checkpoints or are completed partially under instructor supervision.
Assignment instructions restate “no AI use” instead of relying on the syllabus alone.
Rubrics reward original thinking and process, not polish.
The same standard is applied consistently from the first assignment to the last.
A partial‑use policy (“some AI is okay”) is currently the most common approach, and the one most likely to lead to inconsistency if not intentionally designed. Without clear boundaries, students often interpret limited permission as broad approval.
Equally important is a disclosure requirement. Asking students to document their AI use supports accountability and helps them develop metacognitive awareness of when AI assists learning and when it replaces it. Here is an example AI Disclosure Statement.
Artificial intelligence tools may be used in this course only in the specific ways outlined for each assignment. Assignment instructions will clearly state whether AI is allowed and for what purpose. When AI use is permitted, you must include a brief AI Disclosure Statement with your submission, describing how the tool was used. Any AI use beyond what is explicitly allowed for an assignment, including using AI to draft or rewrite content, will be treated as academic dishonesty. You remain responsible for all submitted work. This includes checking AI output for accuracy and revising it meaningfully. Any errors or inaccuracies in AI-generated content are your responsibility.
Partial‑use policies work best when:
Every major assignment explicitly states whether AI is allowed and for what purpose.
AI Disclosure Statements are required whenever AI use is permitted.
Rubrics assess critical judgment, revision quality, and accuracy, not AI output.
Expectations are reinforced before each major assignment.
The policy is enforced consistently across low‑ and high‑stakes work.
Artificial intelligence tools are permitted and encouraged in this course. You may use AI for any stage of your work: research, brainstorming, drafting, editing, formatting, and problem-solving. Using AI tools effectively is a professional skill, and this course treats it as one. However, you are responsible for everything you submit. This means you must verify the accuracy of AI-generated content, critically evaluate AI output rather than accepting it uncritically, and add meaningful analysis, judgment, and original thinking beyond what AI alone produces. Submitting AI output without meaningful engagement (that is, using AI as a replacement for your own thinking rather than as a tool to support it) will result in a grade that reflects the lack of original contribution. All submitted work must include a brief note at the end describing how AI was used (one to three sentences is fine). This is not punitive; it is professional practice. In the workforce, being transparent about your tools is a mark of credibility.
This policy works best when:
AI tools may be used freely for brainstorming, drafting, editing, research support, or idea generation.
Some assignments may explicitly require students to evaluate, revise, compare, or critique AI‑generated content rather than simply use it.
Students submit an AI Use Declaration Form with their work.
Rubrics focus on effective use of AI, quality of analysis, integration of ideas, and evidence of student judgment.
Assignments require elements AI cannot provide on its own, such as local context, personal experience, class‑specific material, or in‑class explanation.
Students are able to explain and defend their work; follow‑up questions or reflections may be required.
The resources below may help you identify typical assignments that align with this policy.
Assignments include tasks in which students use, critique, and evaluate AI outputs. Students critically analyze AI-generated content, for example by conducting an “interview” with a historical figure generated by AI. AI tools may be part of the assignment, but equivalent materials must be available for students who choose not to use AI.
If you choose… | Your key moves are… | Watch out for… |
Against: AI use is restricted | Name specific tools and define what counts as AI; design process-based assignments with checkpoints; restate the policy on every assignment sheet; have an explicit Day 1 conversation; apply rubrics that reward process and original thinking over polish. | Vague prohibitions students can work around; inconsistent enforcement across the semester; not revisiting the policy after Week 1; failing to distinguish permitted tools (spell-check) from prohibited ones. |
Avoid: Limited AI use | Specify AI permissions at the assignment level, not just in the syllabus; distinguish phases of work (brainstorming vs. drafting); require AI Disclosure Statements whenever AI is permitted; define AI's role (assistant vs. author); enforce consistently across low- and high-stakes work. | "Limited permission" creeping into broad approval; unclear lines between assignments; treating disclosure statements as optional; students assuming AI output is automatically accurate and not verifying it. |
Adopt: AI as a support tool and a subject of analysis | Frame AI as a professional skill, not a shortcut; require brief AI use disclosures with all submissions; design assignments that require judgment, local context, reflection, or in‑class explanation; include tasks where students evaluate, critique, or improve AI output; assess reasoning and decision‑making, not AI quality. | “You can use AI” becoming “AI does the work”; assignments generic enough that AI output earns full credit; grading AI polish instead of student thinking; students unable to explain or defend their work. |
Kirkwood Community College | AISD
Writing an effective AI policy isn't a one-time task; it's an ongoing teaching practice. The most important thing is not which policy you choose, but whether you've thought carefully enough about your course's learning outcomes to justify it, communicated it clearly enough that students can follow it, and committed to it consistently enough that it remains meaningful.
Questions about AI policy development, assignment design, or instructional strategies? Reach out to the AISD team.