As developers, programmers, businesses, schools, and governments infuse Artificial Intelligence (AI) into more aspects of daily operations and activity, it’s essential that users have a framework for understanding the technology.
T.E.A.C.H. highlights five categories of awareness through broad questions that are useful for starting an AI-aware discussion:
1. TECHNOLOGY Why ask these questions? Users will be less susceptible to: anthropomorphizing AI (“ChatGPT is referring to users by their names unprompted, and some find it ‘creepy’”); accepting current claims that AI is “thinking,” “feeling,” or “sentient”; using AI for tasks it does not perform well (“OpenAI’s new reasoning AI models hallucinate more”); misapplying the technology; or paying for unnecessary accounts.
- How does generative AI work?
- What AI platforms/tools are available?
- What capabilities do AI platforms/tools have?
- What are AI’s weaknesses and limitations?
- What are current development trends?
- How can users access the AI platform/tool?
- Is account creation necessary to use the AI platform/tool?
- Can users benefit from free accounts?
2. ETHICS Why ask these questions? Users will be less susceptible to: succumbing to developer demands (“Why are creatives fighting UK government AI proposals on copyright?” and “OpenAI and Google ask the government to let them train AI on content they don’t own”); undermining labor rights; depleting limited natural resources (“With data centers and drought, Iowa studies aquifers”); relinquishing control/oversight over natural inputs; promoting unethical or inappropriate uses of AI; or undermining data security.
- What does “black box” AI mean?
- What are the societal, environmental, and economic implications of AI use?
- Is harmful/misleading content generated?
- Is academic integrity maintained?
- Is AI clearly credited for its creations?
- How are user information, data, and AI interactions being used and stored?
- Can all interested users access the AI platform/tool, or are there barriers to overcome?
3. APPLICATION Why ask these questions? Users will be less susceptible to: creating data/information/tech vulnerabilities, promoting wasteful uses of AI, losing creative/productive independence and agency, or reinforcing social inequalities.
- Does the user abide by school or employer AI policies?
- Can the user draft quality prompts?
- Is an AI solution appropriate for the given task/problem/challenge?
- Is there too much dependency on AI?
- Do individuals share the same level of AI skills, or is there an AI digital divide?
4. CRITICAL THINKING Why ask these questions? Users will be less susceptible to: falling for AI-created content; overlooking biases and misrepresentation in AI inputs and outputs; passively using AI features that do not necessarily add value (“Google’s AI Overviews are quietly draining clicks from top sites, new data shows”); or accepting developer actions that serve the company’s own interests (“OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk”).
- Is AI content recognized/suspected?
- Is AI output evaluated for accuracy, fairness, balance, and representation?
- Are developer claims viewed with skepticism?
- Does AI use undermine skills growth?
- Is AI use necessary?
5. HARMONY/ALIGNMENT Why ask these questions? Users will be less susceptible to: eroding their own skills development, habitually defaulting to AI use (“Students delegate higher-level thinking to AI, Anthropic study finds”), or avoiding metacognition about their dependence on the technology.
- Can AI be used in moderation?
- Can users maintain agency over the technology?
- Do individuals leverage AI to assist their work/learning, rather than replace it?
- Are effort, and the personal growth that follows, valued over quick fixes?