Section 01

Innovation Research & Development

Investigating capabilities at the edge of what GenAI can and cannot do.

Research Area 1.1

Synthetic Employees

Agentic AI systems designed for sustained operation within enterprise environments

The Research

We're developing agentic AI systems designed for sustained operation within enterprise environments. The research treats these systems as organizational participants rather than software deployments, which shapes every problem we work on.

Organizational knowledge is central to this. An employee who's been in a role for years knows which policies apply in which situations, where the documented exceptions are, how similar cases were handled before, and when the situation calls for judgment rather than rule-following. Document retrieval doesn't capture this. We're developing methods for encoding operative knowledge — context graphs, an approach we expect to gain prominence in 2026 — into systems that need it.
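To make the context-graph idea concrete, here is a minimal sketch of how operative knowledge (policies, exceptions, precedents) might be stored as conditioned nodes with typed links. All names here (`ContextNode`, `ContextGraph`, `applicable`) are illustrative assumptions, not our actual implementation:

```python
# Hypothetical sketch of a context graph: nodes carry operative knowledge,
# edges carry relations like "overrides", and retrieval is condition-matching
# rather than document lookup. Names are illustrative, not a real API.
from dataclasses import dataclass, field


@dataclass
class ContextNode:
    """One unit of operative knowledge: a policy, exception, or precedent."""
    kind: str                                   # "policy" | "exception" | "precedent"
    text: str
    conditions: dict = field(default_factory=dict)  # when this node applies


@dataclass
class ContextGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (src_idx, dst_idx, relation)

    def add(self, node: ContextNode) -> int:
        self.nodes.append(node)
        return len(self.nodes) - 1

    def link(self, src: int, dst: int, relation: str) -> None:
        self.edges.append((src, dst, relation))

    def applicable(self, situation: dict) -> list:
        """Return every node whose conditions the situation satisfies."""
        return [n for n in self.nodes
                if all(situation.get(k) == v for k, v in n.conditions.items())]


g = ContextGraph()
policy = g.add(ContextNode("policy", "Refunds require manager sign-off",
                           {"amount_over_limit": True}))
exc = g.add(ContextNode("exception", "Gold-tier customers auto-approved",
                        {"tier": "gold"}))
g.link(exc, policy, "overrides")   # the exception modifies the base policy

print([n.text for n in g.applicable({"tier": "gold"})])
```

The point of the sketch is that the retrieval key is the *situation*, not a query string: the graph answers "which knowledge governs this case" rather than "which document mentions these words".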

Authority and constraints present a related challenge. Technical permission systems control what a system can access, but organizational authority determines what decisions someone is trusted to make. A procurement officer might have access to the full vendor database but only authority to approve purchases under a certain threshold. Our frameworks specify decision boundaries, escalation triggers, and hard limits in terms that reflect how organizations actually delegate responsibility, not how software systems manage access.
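The procurement example above can be expressed as a small scope object that keeps technical access and organizational authority as separate checks. This is a sketch under assumed names (`AuthorityScope`, `decide`), not our framework's actual interface:

```python
# Hypothetical sketch: technical permission (what the system can read) and
# organizational authority (what it may decide alone) as distinct checks,
# with an explicit escalation path. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class AuthorityScope:
    can_access: set        # technical permission: resources the system may use
    approval_limit: float  # organizational authority: decide alone up to here
    escalate_to: str       # who reviews anything beyond the limit

    def decide(self, resource: str, amount: float) -> tuple:
        if resource not in self.can_access:
            return ("deny", "no technical access")
        if amount <= self.approval_limit:
            return ("approve", "within delegated authority")
        return ("escalate", self.escalate_to)


# Full database access, but authority only under a threshold:
procurement = AuthorityScope(
    can_access={"vendor_db", "purchase_orders"},
    approval_limit=5_000.0,
    escalate_to="procurement_manager",
)

print(procurement.decide("purchase_orders", 1_200.0))   # within authority
print(procurement.decide("purchase_orders", 25_000.0))  # exceeds limit
```

Note that access and authority fail differently: a missing permission is a denial, while an exceeded limit is a handoff to a named accountable human rather than a silent block.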

Deployment preconditions matter as much as technical capability. Security review, compliance requirements, oversight mechanisms, failure protocols — these organizational structures have to exist before a synthetic employee can operate reliably. The question isn't whether the technology works in isolation; it's whether the organization around it is ready to host it.

Coordination across systems and humans is its own domain. How work gets handed off, how uncertainty gets communicated, how accountability distributes across a mixed workflow — these require theoretical models we're actively developing.
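One way to make handoff and uncertainty communication concrete is a structured handoff record, so that confidence and accountability travel with the work item instead of being buried in prose. Everything here (`Handoff`, `needs_review`, the 0.8 threshold) is an illustrative assumption, not a finished model:

```python
# Hypothetical sketch: a handoff record carrying explicit uncertainty and
# accountability across a mixed human/synthetic workflow. Field names and
# the review threshold are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Handoff:
    task_id: str
    from_actor: str          # may be human or synthetic
    to_actor: str
    confidence: float        # 0.0-1.0, sender's self-assessed certainty
    open_questions: list     # explicit unknowns, surfaced rather than hidden
    accountable: str         # who answers for the outcome

    def needs_review(self, threshold: float = 0.8) -> bool:
        """Low confidence or any unresolved question forces a human check."""
        return self.confidence < threshold or bool(self.open_questions)


h = Handoff(
    task_id="INV-4412",
    from_actor="synthetic:ap_clerk",
    to_actor="human:controller",
    confidence=0.62,
    open_questions=["vendor tax ID mismatch"],
    accountable="human:controller",
)
print(h.needs_review())
```

The design choice worth noting: uncertainty is a first-class field that downstream actors can act on mechanically, and accountability is always assigned to a named party, never left implicit in the workflow.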

We call these systems synthetic employees rather than agents because the framing determines which questions get asked. The research follows from that choice.

Why "Synthetic Employees"

The term "agent" frames AI deployment as a technical problem — integration points, API design, automation triggers. This framing is why most enterprise AI initiatives succeed as pilots and fail in production: the technical questions get answered while the organizational questions never get asked.

We use "synthetic employee" as a deliberate reframe. Employees have roles, reporting structures, defined authority, access appropriate to their function, and someone accountable for their work. These aren't implementation details to solve after launch — they're preconditions for deployment that determine whether the system survives contact with the organization. The language shift forces the conversation toward organizational integration before technical architecture.

Our research develops the theoretical and practical foundations for treating AI systems as organizational participants rather than software deployments.

Research Focus Areas

Organizational Knowledge Encoding
The methods by which a synthetic employee acquires the contextual understanding that makes a human employee effective: which policies apply under which conditions, where exceptions exist, how edge cases have been handled historically, and when judgment rather than rule-following is required.
Authority and Scope Definition
Frameworks for specifying decision boundaries, escalation triggers, and hard constraints in terms that map to organizational accountability structures rather than technical permission systems.
Deployment Readiness Assessment
Criteria and processes for determining whether the organizational preconditions for a synthetic employee exist, independent of technical functionality.
Collaboration Architecture
Theoretical models for how synthetic employees coordinate with humans and with each other, including uncertainty signaling, task handoff, and distributed decision-making.