The T.E.A.C.H. Framework for AI Literacy

As developers, programmers, businesses, schools, and governments infuse Artificial Intelligence (AI) into more aspects of daily operations and activity, it’s essential that users have a framework for understanding the technology.

T.E.A.C.H. highlights five categories of awareness through broad questions that are useful for starting an AI-aware discussion:

1. TECHNOLOGY Why ask these questions? Users will be less susceptible to: anthropomorphizing AI (“ChatGPT is referring to users by their names unprompted, and some find it ‘creepy’”), accepting current claims that AI is “thinking,” “feeling,” or “sentient,” using AI for tasks it performs poorly (“OpenAI’s new reasoning AI models hallucinate more”), misapplying the technology, or paying for unnecessary accounts.
  1. How does generative AI work?
  2. What AI platforms/tools are available?
  3. What capabilities do AI platforms/tools have?
  4. What are AI’s weaknesses and limitations?
  5. What are current development trends?
  6. How can users access the AI platform/tool?
  7. Is account creation necessary to use the AI platform/tool?
  8. Can users benefit from free accounts?

2. ETHICS Why ask these questions? Users will be less susceptible to: succumbing to developer demands (“Why are creatives fighting UK government AI proposals on copyright?” and “OpenAI and Google ask the government to let them train AI on content they don’t own”), undermining labor rights, depleting limited natural resources (“With data centers and drought, Iowa studies aquifers”), relinquishing control/oversight over natural inputs, promoting unethical or inappropriate uses of AI, or undermining data security.
  1. What does “black box” AI mean?
  2. What are the societal, environmental, and economic implications of AI use?
  3. Is harmful/misleading content generated?
  4. Is academic integrity maintained?
  5. Is AI clearly credited for its creations?
  6. How are user information, data, and AI interactions used and stored?
  7. Can all interested users access the AI platform/tool, or are there barriers to overcome?

3. APPLICATION Why ask these questions? Users will be less susceptible to: creating data/information/tech vulnerabilities, promoting wasteful uses of AI, losing creative/productive independence and agency, or promoting further social inequalities.
  1. Does a user abide by school or employer AI policies?
  2. Can the user draft quality prompts?
  3. Is an AI solution appropriate for the given task/problem/challenge?
  4. Is there too much dependency on AI?
  5. Do individuals share the same level of AI skills, or is there an AI digital divide?

4. CRITICAL THINKING Why ask these questions? Users will be less susceptible to: falling for AI-created content, overlooking biases and misrepresentation in AI inputs and outputs, passively using AI features that are not necessarily value-added (“Google’s AI Overviews are quietly draining clicks from top sites, new data shows”), or accepting developer actions made to promote tech’s self-interest (“OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk”).
  1. Is AI content recognized/suspected?
  2. Is AI output evaluated for accuracy, fairness, balance, & representation?
  3. Are developer claims viewed with skepticism?
  4. Does AI use undermine skills growth?
  5. Is AI use necessary?

5. HARMONY/ALIGNMENT Why ask these questions? Users will be less susceptible to: eroding skills development, habitually defaulting to AI use (“Students delegate higher-level thinking to AI, Anthropic study finds”), or avoiding meta-cognition regarding technology dependency.
  1. Can AI be used in moderation?
  2. Can users maintain agency over the technology?
  3. Do individuals leverage AI to assist their work/learning, rather than replace it?
  4. Is effort, and ensuing personal growth, valued over quick fixes?
