
Responsible AI — Risks & Bias

Level: Pro

10.1 Learning Goals

By the end of this section, your mandate is to:

  • Recognize the clear and present risks of using AI without discipline.

  • Understand how bias functions in AI in simple, practical terms.

  • Apply the rule that AI must assist, but you are always the final check.

  • Develop responsible habits for the safe and ethical use of AI.

10.2 Why Responsibility Matters

I must be explicit on this point: AI is a double-edged sword. Used with discipline, it amplifies your learning and productivity. Used carelessly, it will misinform you, mislead you, and weaken your own capacity for independent thought. Irresponsible use of AI is not just a minor error; it is intellectual laziness and a significant personal risk.

10.3 Key Risks

You must be aware of three primary risks.

  • Misinformation. AI can fabricate “hallucinated” answers with absolute confidence. A false fact, written convincingly, is far more dangerous than no fact at all.

  • Overreliance. Using AI before you have done the hard work of reading and writing erodes your ability to think for yourself.

  • Privacy breach. Every input you type may be stored by the system. Sharing personal details or private data is reckless.

10.4 Bias Explained Simply

AI is trained on human data. Human data contains historical and societal bias. Therefore, AI output will contain bias. This is not a possibility; it is a certainty.

Consider my Penguin Analogy. Imagine an AI trained on a dataset of birds where 95% are pigeons and only 5% are penguins. Based on the overwhelming pattern in its data, the AI might wrongly predict that a penguin can fly, because the vast majority of birds in its dataset can. This is how bias works: the AI mirrors what it has “seen most often,” not what is objectively true.
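
The same mechanism can be sketched in a few lines of Python. This is a minimal illustration only, not a real model: the dataset, the 95/5 split, and the naive_predict function are assumptions taken from the analogy, and the “model” does nothing more than echo the most common answer in its training data.

    # Penguin Analogy sketch: a naive "model" that only learns the
    # majority pattern in an imbalanced dataset (95 pigeons, 5 penguins).
    from collections import Counter

    # Training data as (bird, can_fly) pairs -- heavily skewed towards pigeons.
    training_data = [("pigeon", True)] * 95 + [("penguin", False)] * 5

    # The "model": remember whichever value of can_fly appears most often,
    # ignoring which bird it is actually asked about.
    majority_answer = Counter(can_fly for _, can_fly in training_data).most_common(1)[0][0]

    def naive_predict(bird: str) -> bool:
        """Mirror what was 'seen most often', not what is objectively true."""
        return majority_answer

    print(naive_predict("penguin"))  # True -> wrongly claims a penguin can fly

The point is not the code itself but the pattern it exposes: when one group dominates the data, the minority case inherits the majority’s answer unless a human checks it.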

10.5 Rule of Responsibility

My rule of responsibility is absolute and must be internalized. The AI is an assistant, never an authority. Your final check on any fact or figure must always be human. The final version of any work you submit must always be yours. The ultimate responsibility for your work cannot be delegated to a machine. This is non-negotiable.

10.6 Safe & Unsafe Use Table

The distinction between responsible and irresponsible use must be clear.

Safe AI Use (You are in control) | Unsafe AI Use (The AI is in control)
Summarising your own written notes. | Copying full answers for homework.
Performing grammar checks on your essay. | Submitting AI-generated text as your own.
Generating quiz questions for self-testing. | Skipping the reading and writing process entirely.
Getting a draft comparison of A/L vs TVET. | Typing personal data (name, ID, location).

10.7 Activity: Spot the Risk

Read the following three scenarios. Your task is to identify each as either Safe or Unsafe.

  1. A student uploads their personal ID card to an AI to get a scholarship essay drafted for them.

  2. A student asks an AI to quiz them on the stages of mitosis after they have finished writing their own notes on the topic.

  3. A student copies an AI’s answer and pastes it directly into their English exam paper.

10.8 Self-Check

  1. What are the three main risks of using AI carelessly?

  2. Explain my Penguin Analogy in a single sentence.

  3. Who is always the final check when you are using AI for learning?

10.9 Key Takeaway

AI is a powerful tool, but power without responsibility is danger. Your rule of engagement must be clear: AI assists, but you are always the final check.
