Many AI initiatives in the enterprise fail not because of model quality, but because of a lack of context: AI “knows” too little about documents, metadata, authorizations and process realities. At the same time, companies often already have this information — distributed across archives, SAP, SharePoint and specialized applications.
2026 will therefore be less a question of which model to use than of how cleanly models can be combined with company knowledge in a secure, comprehensible and authorization-aware manner. One standard that is currently simplifying this connection is MCP, the Model Context Protocol.
Short definition of MCP (Model Context Protocol): MCP is an open protocol that allows AI applications to access external data sources and tools in a standardized way, via well-defined servers that expose tools and resources.
Why MCP is gaining momentum
The problem isn't new: AI needs context. Until now, that context has mostly been integrated in two ways:
1. Individual interfaces/connectors per system (SAP, SharePoint, file server...)
2. Model-specific tool interfaces (depending on vendor, SDK, or agent framework)
This leads to a classic N×M integration trap: Many data sources × many AI clients = high effort, high maintenance pressure, inconsistent security.
This is where MCP comes in: it creates a standardized, reusable link between AI clients (e.g. desktop apps or agents) and data/tool providers (MCP servers).
Important: This is not a “magic API” but a standardization lever. For IT, SAP and architecture managers, this means fewer one-off solutions and more reusability, provided security and governance are resolved properly.
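To make this concrete: an MCP server is essentially a small service that exposes tools in a standardized way. The following minimal sketch uses the FastMCP helper from the official Python MCP SDK; the tool name and the archive_lookup() backend are illustrative assumptions, not a real kgs or SAP interface.

```python
# Minimal MCP server sketch (Python MCP SDK, package "mcp").
# The tool name and archive_lookup() are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("archive-context")

def archive_lookup(business_object: str) -> list[dict]:
    """Placeholder for a real archive query (e.g. via REST or CMIS)."""
    return [{"doc_id": "4711", "doc_class": "invoice", "title": "Incoming invoice"}]

@mcp.tool()
def find_documents(business_object: str) -> list[dict]:
    """Return archived documents linked to a business object."""
    return archive_lookup(business_object)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Any MCP-capable client can discover and call find_documents without a bespoke connector, which is exactly the N×M relief described above.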
Why archives are suddenly “context gold” for AI
An archive is not just storage. In many companies, it is the place where documents:
• are stored in an unalterable and verifiable manner,
• are linked to metadata (object reference, document number, process status, classification),
• and provide valuable signals via access protocols.
This makes archives attractive because AI systems typically need three things:
1. Content (documents, text, attachments)
2. Semantics (metadata, document type, process context)
3. Rules (permissions, exclusions, retention, compliance)
If you want to make AI “smarter,” the answer is not more parameters in the model, but relevant, legitimate and up-to-date information provided at runtime. MCP is a very suitable link for this.
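To make the three ingredients tangible, here is a sketch of what a single context record handed to an AI client at runtime could look like; all field names are assumptions, not a fixed schema:

```python
# Sketch: one context record combining content, semantics and rules.
# All field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContextRecord:
    # 1. Content
    text: str
    # 2. Semantics
    doc_class: str             # e.g. "invoice"
    business_object: str       # e.g. purchase order number
    status: str                # e.g. "posted"
    # 3. Rules
    allowed_roles: list[str] = field(default_factory=list)
    retention_until: str = ""  # ISO date from the retention policy
```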
The central issue: Permissions are not optional
As soon as archived data flows into AI processes, the authorization model becomes the linchpin. Two misconceptions come up particularly often in projects.
Source of error 1: “AI gets access to everything.”
That sounds convenient, but it does not hold up in practice. Uncontrolled access is unacceptable, especially when it comes to HR documents, executive documents, contracts or sensitive process data.
Source of error 2: “We import authorizations once.”
Authorizations are dynamic: role changes, department changes, project assignments, external users, temporary accesses. A snapshot quickly becomes incorrect.
A stable approach is therefore: authorizations are not reinvented, but are obtained from the leading systems and continuously reconciled. As a result, the “source of truth” stays where it belongs, and the AI always works with the current state of permissions.
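As a sketch of this principle: effective rights are resolved against the leading system at request time rather than copied once. The leading_system_roles() function below is a hypothetical stand-in for a real SAP or SharePoint lookup; a short TTL cache is one pragmatic compromise between freshness and latency.

```python
# Sketch: resolve permissions live from the leading system instead of
# from a one-off snapshot. leading_system_roles() is a hypothetical
# stand-in, not a real API.
import time

_CACHE: dict[str, tuple[float, set[str]]] = {}
TTL_SECONDS = 300  # assumption: 5 minutes of staleness is acceptable

def leading_system_roles(user_id: str) -> set[str]:
    """Placeholder for a live role lookup in SAP, SharePoint etc."""
    raise NotImplementedError

def effective_roles(user_id: str) -> set[str]:
    now = time.time()
    cached = _CACHE.get(user_id)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]
    roles = leading_system_roles(user_id)
    _CACHE[user_id] = (now, roles)
    return roles
```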
Permission levels: “global” vs. “user context”
Two clear access scenarios have proven effective in practice:
1) Company-wide analyses (broad but controlled)
Examples: trend analyses, process indicators, document class evaluations, turnaround times, compliance dashboards.
In principle, an AI agent can see more here — but ideally in an aggregated way and without unnecessarily disclosing individual content.
Best practice: The results of global analyses should be designed so that they do not expose sensitive content verbatim, but provide summaries and statistics.
2) Process support in the user context (strictly according to rights)
Examples: invoice clarification, contract review, support cases, change processes.
Here, a rights context is provided, and the MCP server only returns documents that the user can also see in the leading system.
In a sophisticated target architecture, this model can go very far — up to:
• clear role levels (a trainee sees different content than a team leader),
• complete isolation of sensitive document classes,
• defined exclusion rules, e.g. for HR or personnel files.
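Such rules can be expressed declaratively. The following sketch shows one possible shape for such a policy; all role names and document classes are illustrative assumptions:

```python
# Sketch: a declarative access policy covering the three points above.
# Role names and document classes are illustrative assumptions.
ACCESS_POLICY = {
    "role_levels": {                  # clear role levels
        "trainee":     {"max_classification": "internal"},
        "team_leader": {"max_classification": "confidential"},
    },
    "isolated_classes": [             # complete isolation of sensitive classes
        "board_minutes",
    ],
    "exclusion_rules": [              # defined exclusions, e.g. HR files
        {"doc_class": "personnel_file", "except_roles": ["hr_case_worker"]},
    ],
}
```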
Mini-example 1: Document research in a process (with the user in the loop)
Starting position:
A clerk is clarifying an incoming invoice. Delivery notes, purchase orders and contract attachments are relevant, but only within the clerk's own area of responsibility.
Procedure (simplified):
1. User works in the leading process system (e.g. SAP)
2. AI client asks via MCP: “Which documents are relevant to process X?”
3. MCP server checks rights in the leading system (roles/groups/object reference)
4. MCP server only returns suitable documents and metadata
5. AI provides summary + secure links to original documents
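A minimal sketch of steps 3 and 4, where the MCP server enforces rights before anything reaches the AI client; every helper below is a hypothetical stand-in, not a real API:

```python
# Sketch of steps 3 and 4: the MCP server checks rights in the
# leading system and only returns documents the user may see there.
def effective_roles(user_id: str) -> set[str]:
    """Live role lookup in the leading system (see earlier sketch)."""
    raise NotImplementedError

def archive_candidates(process_id: str) -> list[dict]:
    """Archive query: documents plus metadata linked to the process."""
    raise NotImplementedError

def is_visible(doc: dict, roles: set[str]) -> bool:
    """Mirror of the leading system's visibility rule for a document."""
    return bool(roles & set(doc.get("allowed_roles", [])))

def documents_for_process(user_id: str, process_id: str) -> list[dict]:
    roles = effective_roles(user_id)
    visible = [d for d in archive_candidates(process_id) if is_visible(d, roles)]
    # Return metadata plus a secure link instead of dumping raw content.
    return [{"doc_id": d["doc_id"], "doc_class": d["doc_class"],
             "link": d["link"]} for d in visible]
```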
Technical added value:
• Rights remain consistent with the leading system
• Low risk of improper visibility
• Process remains user-led (no uncontrolled autonomous actions)
Mini-example 2: Global analysis (for defined roles)
Starting position:
Compliance or IT wants to identify usage patterns: for example, which document types are retrieved remarkably frequently or whether there are bottlenecks in certain processes.
Procedure (simplified):
1. A user with the “Compliance Analyst” role starts the analysis
2. MCP server provides aggregated key figures instead of individual content
3. AI creates an evaluation: trends, anomalies, recommendations
4. Drill down only if the role allows
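A sketch of steps 2 and 4: the server aggregates by default and gates drill-down by role. Field names and the role check are illustrative assumptions.

```python
# Sketch: aggregated key figures by default, drill-down only for
# permitted roles. All names are illustrative assumptions.
from collections import Counter

def usage_key_figures(access_log: list[dict]) -> dict:
    """Aggregate counts per document class; no individual content."""
    per_class = Counter(e["doc_class"] for e in access_log)
    return {"retrievals_per_class": dict(per_class),
            "total_retrievals": sum(per_class.values())}

def drill_down(access_log: list[dict], doc_class: str, roles: set[str]) -> list[dict]:
    if "compliance_analyst" not in roles:   # role gate (assumption)
        raise PermissionError("drill-down not allowed for this role")
    return [e for e in access_log if e["doc_class"] == doc_class]
```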
Metadata + logs: From archive data to operational optimization
In addition to documents themselves, metadata and access data are often underestimated levers. Typical data points include:
• Document class/type
• Business object reference (e.g. purchase order, invoice, contract)
• Status and time information
• Access times, frequency, roles involved
This makes optimization use cases possible without the archive suddenly becoming the leading process system.
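As a sketch, the data points above can be captured as one typed access-log record; the field names are assumptions, not a fixed schema:

```python
# Sketch: the data points above as a typed access-log record.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AccessEvent:
    doc_id: str
    doc_class: str            # document class/type
    business_object: str      # e.g. purchase order, invoice, contract
    status: str               # process status at access time
    accessed_at: datetime     # access time
    user_role: str            # role involved
```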
Anomaly Detection: What does that mean?
Anomaly detection is a method for identifying unusual patterns in data, such as atypical accesses, sudden spikes in frequency, or unusual access times. Typical approaches are statistical thresholds or models that learn “normal behavior.”
An example:
A user suddenly accesses documents they don't normally work with. This can be harmless (covering for a colleague, a new project) or an indication of a problem.
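One simple, illustrative way to detect such a pattern is a per-user baseline with a z-score threshold; this is a generic statistical sketch, not kgs functionality:

```python
# Sketch: flag users whose daily access count deviates strongly from
# their own historical baseline (simple z-score threshold).
from statistics import mean, stdev

def is_anomalous(daily_counts: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """True if today's count lies far outside the user's normal range."""
    if len(daily_counts) < 7:    # need some history before judging
        return False
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Example: a steady ~5 accesses per day, then a spike of 40.
history = [5, 6, 4, 5, 7, 5, 6]
print(is_anomalous(history, 40))  # -> True: a signal to review, not an "error"
```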
Best practice: Such findings should not be flagged as “errors,” but treated as signals that are consciously reviewed. Clear rules are also needed:
• Who sees such signals?
• Which document classes are relevant anyway?
• How do you prevent sensitive access from becoming “public”?
Why “build your own chatbot” is often not the right approach
Many companies start AI projects with the reflex: “We need a chatbot.” Technically, however, this is often not the best architectural decision.
A more robust approach is:
• Archive systems provide data, metadata, evidence
• Leading systems provide authorizations and process context
• AI components provide interpretation, summary, assistance
• Existing processes are improved, not replaced
This does not create a parallel “AI shadow process system,” but a targeted strengthening of real business processes.
What kgs is preparing for 2026: Archived data as compliant, AI-ready context
There is a clear direction for 2026:
1) MCP-based access to archived documents and metadata
• AI can retrieve context using standardized tools
• Documents are not “approved” wholesale, but made available in a controlled manner
• Metadata helps you quickly identify the right context
2) Authorization logic from the leading system
• kgs does not manage permissions as a new “master system”
• Rights and roles are obtained from SAP, SharePoint & Co.
• Access thus becomes consistent and up-to-date
3) Optional: usage data as a basis for optimization
• Access patterns can provide clues
• Anomalies can be identified
• The aim is process improvement, not process reinvention
The decisive factor here is that customers should be able to use these functions — but not have to. Optionality is an advantage in an enterprise context because governance, security, and organizational maturity vary greatly.
Conclusion
MCP will be relevant in 2026 primarily because it standardizes the integration of AI with company data. Archives play a special role here because they not only provide documents, but also metadata and evidence. The key to productive solutions is a permission-aware design that uses leading systems as a source of truth and can consistently protect sensitive documents.
This is exactly where kgs comes in: archived data becomes usable for AI in a compliant way, with clear rights logic, optional extensions and a focus on real process support instead of a “chatbot at any price.”
Key Takeaways
• MCP standardizes AI tool access and reduces integration costs.
• Archive systems provide context: documents, metadata, evidence.
• Authorizations must come from leading systems and be consistently enforced.
• Two modes are decisive: global analyses vs. user context in the process.
• Optionally usable metadata/log evaluations enable optimization without “owning” processes.