People, process, prompting in practice: what compliance teams must get right

In regulated corporate environments, AI is not just a productivity tool; it is a governance question. The organisations getting value without creating new risk are building human judgement, safe prompting, documentation and escalation into AI-assisted workflows. That is how AI becomes a reliable, auditable extension of the team.

Most leaders in regulated firms are no longer asking whether AI is capable. They can see it is. The question is whether AI-enabled work will still be defensible six months from now, when a decision is challenged, when an auditor asks how a conclusion was reached, or when the original context exists only in a chat history.

In regulated corporate environments, the risk is rarely “the model made a mistake” in isolation. The risk is that an output looks plausible, moves quickly through the organisation, and becomes hard to explain after the fact.

To put this simply: AI does not remove the need for human judgement. It increases the need for it, as well as for clearer processes, oversight and documentation.

The human capabilities that determine whether AI helps or harms

AI is often treated as a tool that will automatically reduce workload. In regulated settings, that assumption is risky. The biggest value comes when teams strengthen the human-AI partnership with deliberate skills.

Safe prompting is a control, not a trick

Prompting is not about clever phrasing. It is about specifying:

  • the purpose of the task (what decision or output is being supported)
  • the boundaries (what the model must not do)
  • the evidence standard (what sources are acceptable)
  • the output format (so it can be reviewed consistently)

A “good” prompt makes the model easier to challenge. A vague prompt makes the model harder to supervise.
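The four elements above can be made concrete as a template. This is an illustrative sketch only (the field names and example values are assumptions, not a prescribed standard): the point is that every field is mandatory, so a reviewer can see exactly what the model was asked to do.

```python
# Hypothetical sketch: a prompt template that makes the four elements
# explicit and reviewable. Field names are illustrative assumptions.
SAFE_PROMPT_TEMPLATE = """\
Purpose: {purpose}
Boundaries: {boundaries}
Evidence standard: {evidence_standard}
Output format: {output_format}

Task: {task}
"""

def build_prompt(purpose, boundaries, evidence_standard, output_format, task):
    """Assemble a reviewable prompt; every element is required by design."""
    return SAFE_PROMPT_TEMPLATE.format(
        purpose=purpose,
        boundaries=boundaries,
        evidence_standard=evidence_standard,
        output_format=output_format,
        task=task,
    )

prompt = build_prompt(
    purpose="Support a first-pass review of a client onboarding file",
    boundaries="Do not infer facts not present in the file; flag gaps instead",
    evidence_standard="Only cite documents provided in the input pack",
    output_format="Numbered checklist with a source reference per item",
    task="List outstanding due-diligence items for this file",
)
```

Because the elements are named fields rather than free text, a supervisor can challenge each one separately: was the purpose right, were the boundaries tight enough, was the evidence standard appropriate.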

Critical evaluation is non-negotiable

Regulated firms already know how to challenge a human narrative. AI outputs need the same discipline:

  • What assumptions are being made?
  • What is missing?
  • What would change the conclusion?
  • What is the source of truth?

If AI is used to accelerate work, the review step must not become a rubber stamp.

Contextual judgement is the differentiator

AI can generalise. Regulated decisions cannot.

A model may produce a generic answer that is technically coherent and still wrong for the specific context: the firm’s policies, risk appetite, client profile, jurisdictional requirements, or operational constraints.

The most valuable professionals are those who can apply context, recognise what does not fit, and know when to stop the workflow and escalate.

The ability to challenge AI outputs must be trained

Teams need a shared language for challenging AI:

  • “Show your working” (what was the reasoning path?)
  • “What alternatives did you not consider?”
  • “What would a sceptical reviewer say?”

This is a skill, and it improves quickly with practice if teams treat it as part of professional development.

Process as control: governance, documentation and escalation

The most common mistake I see is treating AI as an informal productivity layer. In regulated corporate environments, the process around AI is the control environment.

Governance: define what AI is allowed to do

Be explicit about:

  • which tasks can be AI-assisted (and which cannot)
  • what level of human sign-off is required
  • where the line sits between support and decision-making

Documentation: make the workflow auditable

If a decision is challenged, the organisation should be able to show:

  • the input and context provided
  • the prompt or instruction used
  • the output generated
  • the human review performed
  • the final decision and rationale

If you cannot reconstruct the workflow, you cannot defend it.
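The five items above map naturally onto a simple record kept per AI-assisted step. The sketch below is illustrative, assuming Python and field names of my own choosing, not a standard schema; the substance is that each field from the list has a home, so the workflow can be reconstructed later.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Illustrative sketch: one record per AI-assisted step.
# Field names are assumptions, not a prescribed schema.
@dataclass
class AIWorkRecord:
    context: str       # the input and context provided
    prompt: str        # the prompt or instruction used
    output: str        # the output generated
    review_notes: str  # the human review performed
    decision: str      # the final decision and rationale
    reviewer: str      # named professional accountable for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIWorkRecord(
    context="Client risk summary, version 3",
    prompt="List outstanding due-diligence items for this file",
    output="1. Proof of address not on file ...",
    review_notes="Checked items 1-4 against the source pack; item 3 removed.",
    decision="Escalate item 1 to compliance; remaining items closed.",
    reviewer="J. Smith",
)
audit_entry = asdict(record)  # plain dict, ready to log or archive
```

Whether this lives in a database, a case-management system or a structured log matters less than the discipline of capturing all five fields at the time the work is done.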

Escalation: build “stop points” into the workflow

AI is at its most dangerous when it removes friction. Regulated firms need friction in the right places.

Define escalation triggers such as:

  • uncertainty or conflicting information
  • potential policy or regulatory breach
  • high-risk client segment, transaction, or scenario
  • outputs that rely on assumptions rather than evidence

The goal is not to slow everything down. It is to ensure the workflow stays inside safe, explainable boundaries.
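One way to make stop points real rather than aspirational is to encode the triggers as explicit checks the workflow must pass before proceeding. A minimal sketch, assuming Python and trigger names invented for illustration:

```python
# Hypothetical sketch: the escalation triggers as named, explicit checks.
# Trigger names and wording are illustrative assumptions.
ESCALATION_TRIGGERS = {
    "conflicting_information": "Uncertainty or conflicting information",
    "potential_breach": "Potential policy or regulatory breach",
    "high_risk_scenario": "High-risk client segment, transaction, or scenario",
    "assumption_based": "Output relies on assumptions rather than evidence",
}

def escalation_reasons(flags):
    """Return a human-readable reason for every triggered stop point."""
    return [ESCALATION_TRIGGERS[name] for name, hit in flags.items() if hit]

# Example: a reviewer (or an upstream check) sets the flags per case.
flags = {
    "conflicting_information": False,
    "potential_breach": False,
    "high_risk_scenario": True,
    "assumption_based": True,
}

reasons = escalation_reasons(flags)
if reasons:
    print("STOP - escalate to a named reviewer:")
    for reason in reasons:
        print(f" - {reason}")
```

The value is not in the code itself but in forcing the triggers to be enumerated: a trigger that cannot be written down as a check is probably too vague to be supervised.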

Prompting in practice: three techniques that improve safety and usefulness

These are simple habits that make AI outputs easier to review.

  1. Ask for structured outputs (checklists, decision trees, risk and mitigation plans) rather than prose.
  2. Force citations and gaps: require the model to list what it knows, what it is assuming, and what information is missing.
  3. Run a challenge pass: ask the model to produce the strongest counterargument, then compare it to the initial output.

These techniques do not remove the need for judgement, but they make judgement easier to apply.
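Technique 2 in particular can be turned into a reusable habit. The sketch below is one possible wording, assuming Python; the section labels are an example, not a prescribed standard. The idea is to append the same "citations and gaps" requirement to every task prompt so outputs separate evidence from assumption.

```python
# Illustrative sketch of technique 2: force the model to separate
# what it knows, what it assumes, and what is missing.
# The wording of the suffix is an assumption, not a standard.
GAPS_SUFFIX = (
    "\n\nStructure your response in three sections:\n"
    "KNOWN - facts supported by the provided sources, with a citation each\n"
    "ASSUMED - anything you are inferring without direct evidence\n"
    "MISSING - information you would need to answer with confidence"
)

def with_gaps_check(task_prompt: str) -> str:
    """Wrap any task prompt with the known / assumed / missing requirement."""
    return task_prompt + GAPS_SUFFIX

prompt = with_gaps_check("Summarise the sanctions exposure in this file.")
```

Reused consistently, this gives reviewers a predictable structure to challenge: the ASSUMED and MISSING sections are where the human judgement belongs.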

Where Acquarius is taking a steady, controlled approach to AI

Acquarius operates in a regulated corporate environment. That means adopting AI is not a “tool rollout”, but a governance decision.

Our approach is deliberately steady:

  • Use-case led: we start with clearly defined, low-risk use cases where AI can support quality and speed without changing accountability.
  • Human-owned outcomes: AI can assist analysis and drafting, but a named professional remains responsible for the final output and rationale.
  • Repeatable workflows: prompts, templates and review steps are standardised so work is consistent and easier to supervise.
  • Auditability by design: we keep the “how” (inputs, prompts, outputs, edits, final decision) so it can be explained later if needed.
  • Guardrails and escalation: we define “stop points” and escalation triggers, rather than letting automation remove necessary controls.

This is how we aim to capture value while staying inside safe, explainable boundaries.

Why this matters in practice

Regulated firms will be judged on outcomes, not intent.

AI can make teams faster. It can also make errors faster and harder to trace. The organisations that use AI safely will be the ones that:

  • invest in human capability (prompting, evaluation, judgement)
  • design governance and documentation into day-to-day workflows
  • make escalation and oversight explicit

That is how AI becomes a reliable, auditable extension of the team rather than an uncontrolled assistant.

Key takeaways

  • In regulated environments, AI increases the need for human judgement; it does not remove it.
  • Safe prompting is a control mechanism: it defines boundaries, evidence standards and reviewability.
  • Critical evaluation and contextual judgement determine whether outputs are usable in context.
  • Governance, documentation and clear escalation points are what make AI auditable and defensible.
  • A steady approach (use-case-led, human-owned, and documented) reduces risk while enabling value.

Join Us at ICA AI Week 2026

In the run-up to ICA AI Week 2026, the most useful question to ask is not “what can the model do?” but “what must we be able to evidence?” AI can strengthen decision-making and delivery in regulated corporate environments, but only if teams treat people, process and prompting as part of the control environment.

I’ll be speaking at ICA AI Week 2026 on People, process, prompting in practice: using AI safely in regulated corporate environments, and I look forward to comparing notes with peers on what is working in practice.
