ZMDM · Decision Guide White Paper

Choosing AI Agents for Business Workflows: System-Specific or General Purpose?

Why the answer depends on where your problem actually lives — not which system you already own.

Contents

Section 1. The decision every organization is facing right now
Section 2. A framework: matching the agent to where the problem lives
Section 3. Why the model inside is not what determines the outcome
Section 4. Domain intelligence: what the agent understands before it acts
Section 5. Supervision: who governs the agent's behavior
Executive Summary

Every major enterprise software vendor now ships an AI agent platform. SAP has Joule. Salesforce has Agentforce. ServiceNow has Now Assist. Each is genuinely capable within the system it was built for. Organizations adopting AI for business workflows are increasingly confronting a choice they did not anticipate: do they deploy the agent their system vendor provides, or do they deploy a general purpose agent that works across their systems?

Most organizations default to the system vendor's offering. The path of least resistance is to adopt the agent already integrated into the platform you run. This paper argues that this default is the right choice for some problems and the wrong choice for others — and that the way to tell them apart is not by comparing agent capabilities, but by asking a single diagnostic question: where does the problem actually live?

If the problem begins and ends inside one system, the system vendor's agent is well suited to it. If the problem originates upstream of any system — in a process gap, a cross-functional handoff, an organizational workaround — a system-specific agent will address only the symptom that surfaces in its system. A general purpose agent, operating across systems and reasoning from broad business domain knowledge, can address the root cause.

  • The choice between agent types is not a capability comparison — it is a scope match. The right question is whether the agent's scope matches the problem's scope.
  • System-specific agents are bounded by design: their context window, action surface, and governance are all scoped to the vendor's system. This is appropriate for system-contained problems.
  • General purpose agents — built on foundation models with broad business domain knowledge — are bounded by the problem, not the system. They act wherever the root cause leads.
  • Master data governance is almost never a system-contained problem. Root causes are upstream, cross-functional, and cross-system. The diagnostic framework in Section 2 identifies which class a given problem belongs to.
Section 1

The Decision Every Organization Is Facing Right Now

AI agent adoption in enterprise environments has reached an inflection point. Agents are no longer experimental — they are product roadmap commitments from every major software vendor. SAP Joule now includes 40+ autonomous agents with 2,400+ prebuilt skills. Salesforce Agentforce enables autonomous action across CRM workflows. ServiceNow, Oracle, and Microsoft have equivalent platforms. For organizations already running these systems, agent adoption can feel like a natural extension of existing investment.

At the same time, a second category of agent has emerged: general purpose AI agents such as Claude Code, Codex, and Gemini, built on foundation models, owned by no system vendor, and able to operate across whatever systems and processes a business runs. These agents bring broad domain knowledge and can be deployed as primary agents governing workflows that span multiple systems.

The decision organizations face is not simply "which is more capable." In most cases, the underlying models are comparable — many system vendor agents are built on the same frontier models as general purpose agents. The decision is a scope question: does the problem you need to solve live inside one system, or does it live across systems, processes, and teams?

The Diagnostic Question

"Where does the root cause of this problem actually live — inside the system, or upstream of it?"

If the answer is "inside the system" → System-Specific Agent

The problem begins and ends within a single system's boundary. Incorrect field values, missing records, failed validations, workflow exceptions — all traceable and resolvable within the system's data model and action surface.

Examples: SAP Joule, Salesforce Agentforce, ServiceNow Now Assist. Well suited: native integration, deep system knowledge, prebuilt skills for common patterns.

If the answer is "upstream of the system" → General Purpose Business Agent

The symptom surfaces in the system, but the cause is a process gap, a cross-functional handoff failure, an organizational workaround, or a missing integration between systems. Resolving it requires acting outside any single system's boundary.

Example: ZMDM + Claude Code. Well suited: cross-system action surface, broad business domain knowledge, business-user governed.

1.1 Why the default assumption is often wrong

The intuitive assumption — that the agent embedded in your ERP is the right agent for your ERP problems — is correct when the problem is genuinely an ERP problem. It is incorrect when the problem merely presents in the ERP but originates elsewhere. And in master data governance specifically, problems almost never originate where they surface.

A purchase order blocked by an incomplete material master is not a material master problem. It is typically a new product introduction (NPI) sequencing problem: the material was requested before classification was complete, producing a structurally incomplete record at creation. A system-specific agent will identify and correct the incomplete fields. It cannot correct the NPI process that produced them, because that process is outside its context window and outside its action surface. The next launch produces the same errors.

This pattern — symptom in the system, cause upstream of it — repeats across supplier duplicates, pricing condition mismatches, customer record proliferation, and most other recurring governance failures. Organizations that match the wrong agent type to these problems spend considerable resources on remediation that does not become resolution.

Section 2

A Framework: Matching the Agent to Where the Problem Lives

The following framework provides a structured basis for agent type selection. It is not a capability comparison — both agent types are capable within their appropriate scope. It is a scope-matching tool: three diagnostic questions that determine whether the problem's scope matches a system-specific or general purpose agent's operating boundary.

The Three Diagnostic Questions
  • 1 Where does the root cause originate? If the cause is traceable to a record, field, or workflow step inside the system, a system-specific agent is appropriate. If the cause is a process gap, a cross-system handoff failure, or an organizational behavior pattern that exists upstream of the system, a general purpose agent is required.
  • 2 Does resolving it require acting outside the system? If the fix involves only system records and workflows, a system-specific agent can own it. If the fix requires changing a process in another system, closing a workflow bypass, or enforcing a governance checkpoint that spans teams and tools, a general purpose agent with cross-system action capability is required.
  • 3 Does the problem recur after system-level remediation? Recurrence is the clearest signal that the root cause is outside the system boundary. If periodic cleanup produces temporary improvement but the problem re-accumulates, the agent being used is addressing the symptom — not the cause. A different agent type, with a different scope, is needed.
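The three questions can be read as a small decision procedure. The following sketch is illustrative only — the class and function names are invented for this paper, not part of any product:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    """Answers to the three diagnostic questions for one failure pattern."""
    root_cause_inside_system: bool      # Q1: is the cause a record, field, or workflow step inside the system?
    fix_requires_external_action: bool  # Q2: does the fix act outside the system boundary?
    recurs_after_remediation: bool      # Q3: does it re-accumulate after system-level cleanup?

def recommend_agent(p: Problem) -> str:
    """Scope-match the agent type to where the problem lives.

    Any "upstream" signal points to a general purpose agent; recurrence
    is treated as the clearest such signal.
    """
    if (p.recurs_after_remediation
            or p.fix_requires_external_action
            or not p.root_cause_inside_system):
        return "general-purpose"
    return "system-specific"

# A one-off failed validation: cause and fix are inside the system, no recurrence.
one_off = Problem(root_cause_inside_system=True,
                  fix_requires_external_action=False,
                  recurs_after_remediation=False)

# Recurring material master errors: cause is upstream NPI sequencing.
npi_gap = Problem(root_cause_inside_system=False,
                  fix_requires_external_action=True,
                  recurs_after_remediation=True)
```

Under this reading, `one_off` maps to a system-specific agent and `npi_gap` to a general purpose one, matching the recommendations in the table that follows.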

2.1 Applying the framework to master data governance

Master data governance — the ongoing management of material, supplier, customer, and financial master data across enterprise systems — is a domain where the framework almost always points toward general purpose agents. The reason is structural: master data quality failures originate in business processes, not in systems. The system is where they become visible. The process is where they are created.

Applying the three diagnostic questions to common governance failure patterns:

Failure pattern | Where the root cause actually lives | Framework recommendation
Recurring material master errors | NPI sequencing — requests submitted before classification is complete. Outside any ERP's context window. | General purpose agent. Root cause requires NPI process knowledge and cross-process action.
Persistent supplier duplicates | Parallel onboarding workflows — one through the governed system, one bypassing it. Cross-system cause. | General purpose agent. Root cause requires closing a bypass that exists outside the ERP.
Pricing condition mismatches | Missing integration between CRM contract approval and ERP pricing condition management. Cross-system cause. | General purpose agent. Resolution requires building a cross-system process connection.
Master data launch bottlenecks | Incorrect NPI gate sequencing — master data creation triggered too early. Process cause, not system cause. | General purpose agent. Resolution requires restructuring the NPI handoff sequence.
Failed system validations (one-off) | Genuinely a data entry error or missing field value. Inside the system boundary. | System-specific agent appropriate for this class.
Section 3

Why the Model Inside Is Not What Determines the Outcome

A sophisticated objection to the framework above is that it does not matter which agent type is selected, because both are increasingly built on the same underlying foundation models. SAP Joule runs on a combination of OpenAI, Gemini, Anthropic, and other frontier models. Salesforce Agentforce similarly integrates third-party foundation models. If the model inside a system-specific agent is the same as — or comparable to — the model in a general purpose agent, why would they produce different outcomes?

The answer is that the model's capability is not what determines agent behavior. The architecture surrounding the model is what determines agent behavior. Specifically: what context the model receives as input, what actions it is permitted to take, and who governs its reasoning. When a general purpose model is placed inside a system-specific wrapper, it becomes a system-specific agent — regardless of its intrinsic capability. The same model, operating as a primary agent with system skills, retains the full scope of its reasoning.

The Room Analogy

A skilled diagnostician placed in a room with only cardiology instruments and cardiology records will diagnose and treat cardiac conditions. A patient presenting with cardiac symptoms caused by an autoimmune condition will receive cardiac treatment. The diagnostician's capability is not the constraint. The room is. Wrapping Claude Code inside Joule puts Claude in SAP's room. The capability does not transfer. The boundary does.

3.1 Three things the wrapper does to the model

How the same underlying model operates differently depending on wrapper design:

System-Specific Architecture (frontier model wrapped inside a system agent, e.g. Joule):
  Business problem → vendor knowledge graph & context filter → foundation model reasoning from system context → system action surface only → system action executed.
  Model capability is real. Context window, action surface, and governance are all scoped to the vendor's system before the model reasons about anything.

Primary Agent Architecture (Claude Code as primary agent with system skills, ZMDM + ZFlow):
  Business problem received directly, unfiltered → Claude Code with full domain knowledge active at diagnosis → SAP skill · Salesforce skill · ZFlow skill · ERP skill → action across whichever systems the problem requires.
  System skills are instruments of reasoning — not its boundary. The model acts wherever the business problem leads.

The three architectural constraints the wrapper imposes are: context shaping (the vendor's knowledge graph pre-frames the problem before the model reasons), action surface (the model can only act within the vendor's API regardless of what it diagnoses), and governance ownership (IT teams and vendor tooling determine what the agent is allowed to do, not the business users closest to the problem). These constraints are not incidental features — they are deliberate design choices appropriate for system-integrity use cases. They are limiting for cross-system governance use cases.
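The effect of the wrapper can be made concrete with a toy sketch. Everything below is illustrative — the function names and system labels are invented for this paper, not taken from any vendor SDK. The same `diagnose` stand-in plays the role of the model in both architectures; only the context it sees and the actions it may take differ:

```python
def diagnose(problem: dict, visible_systems: set) -> str:
    """Stand-in for model reasoning: the model can only name a root cause
    it can see; otherwise it reports the system where the symptom surfaces."""
    if problem["cause_system"] in visible_systems:
        return problem["cause_system"]
    return problem["symptom_system"]

def wrapped_agent(problem: dict) -> tuple:
    """System-specific architecture: context and actions pre-scoped to one system."""
    # Context shaping: the wrapper filters input to the vendor system
    # before the model reasons about anything.
    diagnosis = diagnose(problem, visible_systems={"erp"})
    # Action surface: the agent can act only inside the vendor system.
    actionable = diagnosis == "erp"
    return diagnosis, actionable

def primary_agent(problem: dict, skills: set) -> tuple:
    """Primary-agent architecture: the business problem arrives unfiltered;
    skills widen the action surface instead of bounding the context."""
    diagnosis = diagnose(problem, visible_systems=skills)
    return diagnosis, diagnosis in skills

# Pricing condition mismatch: symptom in the ERP, cause in a missing
# CRM-to-ERP integration.
pricing = {"symptom_system": "erp", "cause_system": "crm"}
```

With this input, `wrapped_agent(pricing)` returns `("erp", True)` — it confidently acts on the symptom — while `primary_agent(pricing, {"erp", "crm"})` returns `("crm", True)`, reaching the actual cause. Same reasoning function, different outcome, because the boundary lives in the architecture rather than the model.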

Section 4

Domain Intelligence: What the Agent Understands Before It Acts

Beyond the architectural wrapper, a second factor distinguishes general purpose agents from system-specific agents for governance work: the breadth of domain knowledge active at the point of diagnosis, before any system is queried.

System-specific agents reason from the vendor's knowledge graph — the system's own representation of how processes work, what data objects exist, and what best practices look like within that system. This produces accurate, contextually appropriate reasoning within the system's domain. It does not produce reasoning about NPI methodology, supply chain failure patterns, procurement economics, or the organizational behavior dynamics that produce governance workarounds.

Claude Code's training spans the full breadth of published business knowledge. At the point of diagnosis — before any SAP query, before any record is examined — it brings active knowledge across supply chain theory, new product introduction methodology, procurement economics, regulatory frameworks, quality engineering, financial modeling, and organizational behavior. This changes what the model identifies as the problem category before it takes any action.

"The benchmark question when selecting an agent for governance work is not how many prebuilt skills it has. It is what the agent understands before it uses any of them."

Knowledge domain | System-specific agents (SAP Joule, Agentforce, ServiceNow) | General purpose agents (Claude Code, Codex, Gemini)

System & technical knowledge
Data models & object schema | Active | Active
Workflow & process execution | Active | Active
System integrations & APIs | Active | Active
Vendor knowledge graph | Active — primary source | Active — one input among many

Business & domain knowledge (active before any system is touched)
Supply chain & logistics | Outside context window | Active
New product introduction (NPI) | Outside context window | Active
Procurement & sourcing economics | Outside context window | Active
Financial analysis & modeling | Outside context window | Active
Regulatory & compliance logic | Outside context window | Active
Quality engineering | Outside context window | Active
Organizational behavior | Outside context window | Active
Cross-system root cause analysis | Outside action surface | Active
Figure 1. Active knowledge domains at point of diagnosis, prior to any system action. System-specific agents ground all reasoning in the vendor knowledge graph; domains outside that boundary are outside their reasoning scope regardless of the underlying model's intrinsic capability.

4.1 Why organizational behavior knowledge matters for governance

Of the domains in Figure 1, organizational behavior is the one that most often surprises practitioners. It matters because master data governance failures are behavioral before they are data problems. A procurement team creates a supplier record outside the governed onboarding process — because that process is too slow. An engineer submits a material master request before classification is complete — because the NPI timeline does not prevent it. A regional team maintains its own spreadsheet — because the central system has historically been unreliable for their use case.

In each case, the visible problem is a data quality issue. The actual cause is a human behavior pattern: workaround, avoidance, or ambiguous ownership. An agent without organizational behavior knowledge will fix the data symptom. An agent with that knowledge can reason about why the workaround exists — and recommend a governance change that removes the incentive to bypass, rather than cleaning up its output indefinitely.

Section 5

Supervision: Who Governs the Agent's Behavior

A final dimension of the agent selection decision concerns governance ownership — not data governance, but AI governance: who decides what the agent is allowed to do, who can change it, and who is accountable when it acts.

System-specific agent platforms place this governance in IT. Joule Studio, for example, allows IT teams to build and modify custom agents through SAP's developer framework. This produces auditable, system-consistent behavior and is appropriate when governance rules are stable and IT has the domain context to configure them. But it also creates a dependency that is poorly suited to master data governance, where rules change frequently, domain context is distributed across functional teams, and the people best positioned to govern data quality are master data stewards and domain owners — not developers.

ZMDM's design — master data for business, by the business — places governance ownership with those people. Governance rules, approval gates, escalation logic, and agent behavior are configured through metadata-driven interfaces by business users. Changes happen at business speed. The agent adapts as governance requirements evolve. This is not a secondary consideration: an agent whose behavior can only be modified by IT will consistently lag the governance problems it is meant to address.

Dimension | System-specific agent (e.g. Joule) | General purpose agent (ZMDM + Claude Code)
Who builds governance rules | IT teams via vendor developer tools | Business users via metadata-driven configuration
Who supervises actions | IT and vendor security architecture | Business domain owners directly
Modifying governance without IT | Not supported | Fully supported
Scope of governed action | Within the vendor system | Across all systems and processes
Adapting to process change | Vendor roadmap and IT development cycle | Business-configurable on demand
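Metadata-driven governance of the kind described above can be sketched as follows. This is a minimal illustration, not ZMDM's actual configuration schema — the field names and rule shape are invented for this paper. The point is that the rule is plain data a steward can edit, rather than code only IT can change:

```python
# A governance rule expressed as editable metadata. Changing required
# fields or the approval gate is a data edit, not a code deployment.
supplier_rule = {
    "object": "supplier",
    "required_fields": ["tax_id", "payment_terms", "duplicate_check"],
    "approval_gate": "master_data_steward",
    "escalate_after_hours": 48,
}

def evaluate(record: dict, rule: dict) -> dict:
    """Apply one metadata-driven rule to one master data record."""
    missing = [f for f in rule["required_fields"] if not record.get(f)]
    return {
        "compliant": not missing,
        "missing_fields": missing,
        # Non-compliant records route to the configured approval gate.
        "route_to": rule["approval_gate"] if missing else None,
    }

# A supplier created through a workaround, bypassing the duplicate check.
bypassed = {"tax_id": "DE811234567", "payment_terms": "NET30"}
result = evaluate(bypassed, supplier_rule)
```

Here `result` flags the record as non-compliant, names `duplicate_check` as the missing step, and routes it to the steward — and when governance requirements change, a business user updates `supplier_rule` directly, without a vendor development cycle.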
Conclusion

The Choice Is a Scope Question, Not a Capability Question

The decision between system-specific and general purpose agents is not best framed as a competition between platforms. Both types are capable within their appropriate scope. The question is whether the scope of the agent matches the scope of the problem.

For workflows that begin and end inside a single system — validating transactions, executing standard processes, automating routine tasks — system-specific agents are well-suited and often the lowest-friction choice. Their native integration, prebuilt skills, and deep system knowledge are genuine advantages for this class of problem.

For governance problems — where root causes are upstream, cross-functional, and cross-system — system-specific agents will address what is visible in their system and leave the cause intact. The three diagnostic questions in Section 2 provide a structured basis for identifying which class a given problem belongs to. The answer, for master data governance specifically, is almost always that the problem requires a general purpose agent: one that receives the business problem directly, reasons from broad domain knowledge, acts across systems through a cross-system action surface, and is governed by the business users who understand the context.

"The default assumption — that the agent embedded in your ERP is the right agent for your ERP problems — is correct when the problem is genuinely an ERP problem. The challenge is that most recurring governance problems are not."

Organizations evaluating AI agent strategy for master data governance should apply the three diagnostic questions before selecting an agent type: Where does the root cause originate? Does resolving it require acting outside the system? Does the problem recur after system-level remediation? These questions are more useful than capability comparisons, vendor benchmark assessments, or integration convenience evaluations — because they address the variable that most determines outcome: scope match between agent and problem.

About ZMDM: ZMDM is a metadata-driven Master Data Management platform. ZMDM is powered by Claude Code (Anthropic) operating as a primary agent with SAP, Salesforce, and ERP skills deployed through ZFlow, ZMDM's cross-system process orchestration layer.

Apply the Framework to Your Environment

Speak with our team about your specific master data governance challenges and where your problems actually live.