Teaching with AI: Why Critical Thinking Still Matters
Author: Receivables Info | Published on: 21 Feb 2026
Artificial intelligence has already revolutionized how we learn, communicate, and work, but it may also be changing how we think.
As automation tools accelerate across industries, including collections and financial services, there’s a growing concern that people are outsourcing reasoning to algorithms. According to a 2025 Pew Research Center report, 62% of U.S. adults say they interact with artificial intelligence several times a week, while just 13% feel they have a great deal or quite a bit of control over how it’s used in their lives.
In the latest episode of the Receivables Podcast, Adam Parks sat down with Porter Heath Morgan, Partner at Martin Golden Lyons Watts Morgan, to explore the intersection of AI literacy, governance, and human reasoning. Morgan explains why critical thinking must guide the next wave of AI adoption.
From AI Detection to AI Education
“Don’t bother keeping up in this arms race of seeing if you have better AI to detect the AI output of a student,” Morgan explains. “You’ve got to figure out how to teach with this being a tool that can be used.”
This insight reaches well beyond education: it applies equally to compliance officers, operations leaders, and data-driven agencies. Instead of focusing on detecting AI output, Morgan argues that leaders must focus on how people use AI to reason, analyze, and solve problems.
The lesson is simple: if you don’t train people to think critically with AI, they’ll accept whatever it produces as truth.
That’s a dangerous precedent, especially in regulated industries like financial services, where explainability, verification, and transparency are not just academic virtues but legal necessities.
The Cognitive Cost of Automation
Morgan draws a clear parallel between AI dependency and the erosion of cognitive independence.
“We need to retain those critical thinking skills because as we outsource more thinking to AI, like using Google Maps, we lose the ability to navigate for ourselves.”
His analogy perfectly fits the modern compliance landscape. Many financial organizations rely on automated decisioning and machine learning to identify risk, predict payments, or assess consumer sentiment. But the more we trust those models without human oversight, the greater the risk of systemic bias, regulatory errors, or ethical blind spots.
According to the World Economic Forum’s Future of Jobs Report, analytical thinking remains the most in-demand skill across all industries. The same report found that 44% of workers’ core skills will be disrupted because technology is moving faster than companies can design and scale their training programs.
That gap underscores a simple truth: automation may accelerate progress, but it can’t replace human judgment. Organizations that prioritize continuous learning and analytical thinking will be best equipped to adapt and use AI not as a substitute for decision-making, but as a catalyst for smarter and more ethical outcomes.
The AI Reasoning Framework: Input, Context, and Validation
Building on Morgan’s perspective, we can define a three-part AI Reasoning Framework for leaders who want to teach or manage AI use responsibly within compliance and operational contexts.
1. Input: Teaching Better Questions
Critical thinking begins with question quality. AI is only as valuable as the clarity and purpose of the prompts it receives.
- Train staff to write goal-oriented prompts that reflect regulatory, ethical, and consumer expectations.
- In compliance, that means asking “What risk am I overlooking?” rather than “Summarize this regulation.” (A minimal sketch of such a prompt follows below.)
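To make the idea concrete, here is a minimal sketch in Python of what a goal-oriented compliance prompt might look like. The function name, fields, and wording are hypothetical illustrations of the principle, not a tool Morgan describes.

```python
# Hypothetical sketch: framing a compliance prompt around overlooked risk
# rather than bare summarization. Names and wording are illustrative only.

def build_risk_review_prompt(regulation_text: str, business_context: str) -> str:
    """Build a goal-oriented prompt for an AI-assisted compliance review."""
    return (
        "You are assisting a compliance review. Do not simply summarize.\n\n"
        f"Regulation under review:\n{regulation_text}\n\n"
        f"Our operational context:\n{business_context}\n\n"
        "Answer the following:\n"
        "1. What risks might we be overlooking, given this context?\n"
        "2. Which regulatory or consumer expectations does this implicate?\n"
        "3. Which of your assumptions should a human verify before acting?"
    )
```

The structure matters more than the exact wording: the prompt encodes the reviewer’s goal and demands verifiable assumptions instead of inviting a generic summary.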
2. Context: Framing AI as a Co-Teacher
Instead of treating AI as a replacement for human expertise, Morgan suggests we use it as an amplifier for discussion.
- Ask AI to simulate opposing arguments or compliance scenarios (see the sketch after this list).
- Use it to generate what-if case studies that sharpen decision-making.
- Always validate outputs through multi-person review.
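As one way to put the co-teacher framing into practice, the sketch below asks a model to argue both sides of a scenario before a multi-person review. The prompt wording is our illustrative assumption, not a quoted practice from the episode.

```python
# Hypothetical sketch: prompting an AI to simulate opposing arguments on a
# compliance scenario, producing raw material for multi-person review.

def build_debate_prompt(scenario: str) -> str:
    """Ask the model to argue both sides, then surface open questions."""
    return (
        f"Compliance scenario:\n{scenario}\n\n"
        "Step 1: Argue, as rigorously as you can, that this practice is compliant.\n"
        "Step 2: Argue, with equal rigor, that it is not compliant.\n"
        "Step 3: List the factual questions a human reviewer must resolve "
        "before deciding either way."
    )
```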
3. Validation: Keeping Humans in the Loop
AI’s confidence can mislead users. Leaders must normalize double-checking AI outputs.
- Implement a “trust, but verify” policy for all AI-assisted deliverables (a checklist sketch follows this list).
- Use validation checklists for AI-generated reports, contracts, and communications.
- Establish escalation protocols for inconsistencies between machine and human reasoning.
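A “trust, but verify” policy becomes operational when it is reduced to a structured checklist. The sketch below shows one possible shape for such a record; the check names and escalation rule are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical sketch of a "trust, but verify" record for AI-assisted
# deliverables. Check names and the escalation rule are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AIDeliverableReview:
    deliverable_id: str
    checks: dict = field(default_factory=lambda: {
        "sources_verified": False,        # citations traced to real documents
        "numbers_recomputed": False,      # key figures re-derived by a human
        "matches_original_ask": False,    # output answers the actual question
        "second_reviewer_signed_off": False,
    })

    def passed(self) -> bool:
        return all(self.checks.values())

    def needs_escalation(self) -> bool:
        # Anything short of a fully human-verified deliverable gets escalated.
        return not self.passed()

# Example: a report fails review until every box is checked by a person.
review = AIDeliverableReview("report-2026-021")
review.checks["sources_verified"] = True
if review.needs_escalation():
    print(f"{review.deliverable_id}: route to the escalation queue")
```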
The framework reflects Morgan’s guiding principle: “AI governance is not a technology project, rather a culture of accountability.”
AI in Compliance: Teaching Teams to Think Again
For collection agencies, creditors, and law firms, teaching teams to reason critically about AI is just as essential as teaching data privacy or CFPB regulations.
Without human oversight, even well-trained models can reinforce errors or ethical violations at scale. A 2025 McKinsey Global Institute analysis found that organizations embedding AI governance frameworks and human oversight are more likely to avoid risk-related incidents than those deploying AI without structured controls.
In other words, compliance success depends less on the sophistication of the AI and more on the sophistication of the humans using it.
Morgan’s advice echoes this sentiment:
“Does this output make sense? Does this align with what I was asking, versus just taking the first output as gospel and turning that in?”
These are the questions that compliance officers, educators, and executives must all begin asking across every industry where AI-driven decisioning influences outcomes.
Ethical AI Starts with Ethical Education
Morgan’s reflections go beyond technology and point to a philosophy of leadership. Ethical artificial intelligence begins with teaching people how to think critically about ethics itself.
If professionals don’t understand how their tools interpret data, they can’t truly control the consequences of their use. Ethical AI requires a shift from automation-first thinking to education-first adoption.
That’s why leaders must:
- Embed AI reasoning workshops into ongoing compliance training.
- Evaluate AI vendors based not just on features, but on governance transparency.
- Incorporate human judgment checkpoints in all AI-assisted processes.
The organizations that thrive in this new era will be those that understand AI as a mirror of human thought and not a replacement for it.
Conclusion: The Next Generation of Critical Thinkers
Morgan’s warning about losing our cognitive compass should resonate with every compliance and operations leader. As he says, “We lose the ability to navigate for ourselves.”
That’s why teaching with AI isn’t just about education; it’s about leadership. It’s about creating cultures where curiosity, analysis, and verification remain non-negotiable.
To explore more insights like this, visit Receivables Info, where technology, compliance, and ethical innovation converge.
The future of AI governance won’t be written by machines. It’ll be led by those who still know how to ask why.
Author Attribution
About Adam Parks
Adam Parks has become a voice for the accounts receivable industry. With almost 20 years of experience in debt portfolio purchasing, debt sales, consulting, and technology systems, Adam now produces industry news, hosts hundreds of Receivables Podcast episodes, and manages branding, websites, and marketing for over 100 companies within the industry.
