Critical Reading of AI Responses
Level: Beginner
6.1 Learning Goals
By the end of this section, you should be able to:
Recognize that AI is not an authority and can produce significant errors.
Detect common AI flaws, such as hallucinations and content gaps.
Verify all AI-generated answers with reliable, authoritative sources.
Develop a critical mindset to maintain your own academic integrity.
6.2 Why AI Is Not Always Right
I must be very clear about this: AI is a pattern-matching engine, not an arbiter of truth. It produces answers based on statistical probabilities in its training data, not on a true understanding of the world. Because of this, it can and does make mistakes. It may invent details, which we call hallucinations, or it may miss key points required by an exam syllabus. Blind trust in an AI is not just a mistake; it is an intellectual failure.
6.3 What Is Hallucination?
A hallucination is when an AI states a falsehood with complete confidence. For example, it might invent a historical date, misattribute a quote, or list a chemical property that does not exist. This is dangerous because the output is delivered with the same authoritative tone as correct information. A critical, skeptical eye is your only defense.
6.4 Verification: Step 1
Your first and most important source of truth is your own course material. You must always compare AI-generated answers against your official textbooks and class notes. This is the foundational step of verification and it is not optional.
6.5 Verification: Step 2
Your second layer of verification is the official standard. For academic work, this means checking the AI’s output against what your teacher has taught you and, most importantly, against the official marking scheme for your exam. Exam answers must match the specific keywords and formats that examiners are required to look for.
6.6 Verification: Step 3
Your final step for important information is to cross-check with multiple reliable sources. This could include library books or official educational websites. Never rely on a single source, especially when that source is a generative AI.
6.7 Example of Error
Here is a practical example of where an AI can fail.
Prompt: “List the first 5 Sri Lankan Presidents.”
Potential AI Error: The AI might confuse prime ministers with presidents, or it might get the order wrong, leading to a factually incorrect list.
Student Correction: You must build the accurate list yourself, using your official Civics or History textbook as the authoritative source.
6.8 Group Task: Critical Eye
Your task is to take a single AI-generated answer on a topic from your syllabus. In your group, you will mark the following:
✅ Two points that are factually correct and verifiable.
❓ One point that seems doubtful or requires further verification.
❌ One key point from your textbook that the AI missed entirely.
6.9 Mistake Bank for AI Errors
A disciplined learner treats AI errors with the same rigor as their own. I recommend that you maintain a Mistake Bank specifically for AI errors. This will train you to spot them more easily in the future.
| Question | AI Answer | Error Found | Correct Answer | Source for Fix |
|---|---|---|---|---|
6.10 Practice Drill
In small groups, you will fact-check a single AI-generated answer in Science, Maths, or History. Each group will then produce a final “verified version” of the answer, with citations from your textbook, to present to the class.
6.11 Self-Check & Key Takeaway
Define what a hallucination is.
Name two methods of verification that you must use.
Why must you never copy and paste an AI answer directly into an exam?
The key takeaway is the most important lesson in this entire module: AI is a fast assistant, not an authority. The only sources of truth for your academic work are your official textbooks, your teachers, and the official marking schemes. Your critical judgment must always be the final filter.
