The EU AI Act (and similar legislation) describes two main types of controls: 1) controls on AI models, in particular models designated as posing “systemic risk,” and 2) controls on specific use cases considered to pose “unacceptable,” “high,” or “minimal” risk. It also includes policies regarding education, transparency, and reporting that apply to all AI systems.
From most companies’ perspective, however, most of what the EU AI Act regulates is out of scope. For example, most companies are unlikely to be using “subliminal, manipulative, or deceptive techniques” to distort people’s behavior, nor are they likely to be inferring sensitive attributes like race or political opinions from biometric data. Most use cases and models fall into the low-risk categories as defined by the EU AI Act.
But just because an AI system isn’t an EU AI Act “high risk” system doesn’t mean that governance is unneeded. A regulatory focus on models and EU AI Act risk categories alone misses most of the issues that companies really care about:
- Intellectual Property Protection. Does the system create, process, or expose intellectual property (trade secrets, proprietary algorithms, competitive intelligence)? This includes tracking whether AI outputs could inadvertently recreate proprietary information or infringe on third-party IP rights.
- Contractual Compliance & Third-Party Obligations. Are all component licenses, API terms of service, and data processing agreements being followed throughout the system? This covers open source license obligations, commercial API usage restrictions, customer contract requirements, and vendor SLA compliance across the entire execution path.
- Liability & Negligence Prevention. Can your organization demonstrate reasonable care in system design and operation to defend against negligence claims? This requires documenting risk assessments, mitigation measures, testing protocols, human oversight implementation, and evidence of following industry standards of care.
- Data Sovereignty & Confidentiality Controls. Where does sensitive data (customer, employee, corporate confidential) flow and where is it stored? This encompasses cross-border data transfers, encryption requirements, access controls, and ensuring confidential information doesn't leak through model outputs or to unauthorized third parties.
- Safety & Product Liability. Could system failures or errors cause physical harm, property damage, or economic loss? To what extent does the use of AI systems affect the company’s ability to meet contractual requirements and SLAs?
- Transparency & Explainability Requirements. Can the system provide required explanations to various stakeholders (regulators, customers, employees, auditors)? This varies by context—GDPR requires explanations for automated decisions, while the SEC may require disclosure of AI use in material business processes.
- Incident Response & Evidence Preservation. When something goes wrong, can you reconstruct what happened and demonstrate proper response? This includes audit logging, version control, incident escalation procedures, and maintaining forensic capabilities for investigations or litigation.
The CORE Framework
Responsible AI (RAI) governance needs to be more than a general evaluation of the “risk” of a model. It needs to provide documentation of the organization’s reasonableness and due diligence as demonstrated throughout the entire system. In order to provide this documentation, RAI teams need information about the entire AI system and how the models in the system interact with their environment. You can identify what is needed by using the acronym CORE: Components, Operations, Resources, and Execution.
The CORE framework represents AI systems as a system “blueprint” (technically a “directed graph”) where data flows through components. By documenting Components, Operations, Resources, and Execution paths, organizations can automatically evaluate policies, track compliance, and assess risks. The framework treats governance as a data flow problem rather than a documentation problem.
CORE works by translating developer/engineer-level knowledge (“What do we deploy?”) into a form suitable for policy-level analysis. IP protection becomes "what components access proprietary data?", contractual compliance becomes "what third-party APIs are called?", and liability prevention becomes "can we show what guardrails exist in the data flow?"
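As a minimal illustration of the blueprint idea, a system can be sketched as a directed graph of typed components. All class, field, and component names below are hypothetical assumptions for illustration, not part of any CORE implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A node in the system blueprint (hypothetical field names)."""
    name: str
    ctype: str            # e.g. "LLM", "database", "API", "guardrail"
    provider: str = ""    # e.g. "OpenAI", "AWS", "custom"
    resources: list = field(default_factory=list)  # external assets accessed

@dataclass
class Blueprint:
    """Directed graph: components are nodes, execution edges carry data."""
    components: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)      # (src, dst) name pairs

    def add(self, c: Component):
        self.components[c.name] = c

    def connect(self, src: str, dst: str):
        self.edges.append((src, dst))

# A toy support-bot system: user intake -> PII filter -> LLM -> audit log
bp = Blueprint()
bp.add(Component("intake", "API", "custom"))
bp.add(Component("pii_filter", "guardrail", "custom"))
bp.add(Component("chat_model", "LLM", "OpenAI"))
bp.add(Component("audit_log", "database", "AWS", resources=["Audit DB"]))
bp.connect("intake", "pii_filter")
bp.connect("pii_filter", "chat_model")
bp.connect("chat_model", "audit_log")
```

Because the blueprint is just a graph of typed nodes, the policy questions in the previous paragraph become graph queries over it.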
CORE not only works as a framework to guide information gathering and system analysis, but crucially, almost all of its steps can be automated, speeding up governance and simultaneously reducing risk. It is faster and more structured than questionnaire-based flows and more complete than model-centric approaches.
The CORE Framework Structure
Components are the nodes in the system blueprint. Each component is classified by type (LLM, database, API, guardrail, human reviewer) and has defined properties:
- Provider (OpenAI, AWS, custom)
- Operations it performs on data
- The way it transforms data
- Resources it accesses

Operations are the actions components perform:
- Data operations transform or analyze data flowing through the system
- Resource operations read from or write to external systems
- Each operation can add or remove tags that track data properties

Resources are external assets the AI system doesn't control but must access:
- Customer databases, third-party APIs, file systems, or licensed model endpoints
- Resources exist outside the system boundary but are accessed during execution
- Access patterns (read/write/delete) are documented for each component-resource pair

Execution is how data flows through the system:
- Represented as edges connecting components
- Tags propagate along edges, tracking data properties like "PII" or "Encrypted"
- Multiple execution paths may exist based on routing logic
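The tag-propagation idea can be sketched in a few lines: each component adds or removes tags, and the resulting tag set flows along execution edges. The component names and tag effects below are illustrative assumptions:

```python
# Adjacency list for a toy blueprint graph.
edges = {
    "intake": ["anonymizer"],
    "anonymizer": ["llm"],
    "llm": ["output"],
}
# (tags added, tags removed) by each component's operations.
tag_effects = {
    "intake": ({"PII"}, set()),
    "anonymizer": ({"Anonymized"}, {"PII"}),
    "llm": (set(), set()),
    "output": (set(), set()),
}

def propagate(start: str) -> dict:
    """Walk the graph from `start`, recording the tag set leaving each node."""
    reached = {}
    stack = [(start, set())]
    while stack:
        node, tags = stack.pop()
        adds, removes = tag_effects[node]
        tags = (tags | adds) - removes
        reached[node] = set(tags)
        for nxt in edges.get(node, []):
            stack.append((nxt, tags))
    return reached

states = propagate("intake")
# The anonymizer strips "PII" before data reaches the LLM.
```

Running the propagation shows "PII" present at intake but absent from the LLM, which is exactly the property a flow policy would check.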
How CORE Enables Corporate AI Governance
CORE does three distinct things: it documents what exists (system architecture), evaluates against policies (compliance checking), and identifies gaps (risk assessment). Traditional AI governance requires weeks of manual review for each change. CORE reduces this to minutes of automated evaluation, making governance feasible for rapidly evolving AI systems.
Policy Evaluation
Policies are expressed as constraints on components, operations, and data flow. There are three main types of policies.
Component policies govern the use of components:
- Technical example: "Systems cannot use components where origin='China'"
- Business example: "Safety-critical systems must include component type='human-review'"

Access policies restrict operations on resources:
- Technical example: "Components of type='LLM' cannot write to 'HR Database'"
- Business example: "Third-party APIs cannot access 'Customer-Proprietary-Database'"

Flow State policies govern the flow of data through the system:
- Technical example: "Data tagged 'PII' and 'Not-Anonymized' cannot reach components of type='LLM'"
- Business example: "Data tagged 'proprietary' cannot flow to components with provider='OpenAI'"

Business-Relevant Policy Examples:
- IP Protection: "Data tagged 'trade-secret' must not flow to any component with trust-level='third-party'"
- Contractual Compliance: "Components using model='gpt-4' must not process data tagged 'customer-confidential'"
- Liability Prevention: "Decisions tagged 'safety-critical' must pass through component type='human-review'"
- Data Sovereignty: "Data tagged 'EU-origin' cannot flow to components with location!='EU'"
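A flow state policy like the "PII cannot reach an LLM" example can be sketched as a predicate over the tag sets that reach each component. The data structure and names below are hypothetical assumptions, not a real CORE API:

```python
# Per-component attributes plus the data tags that reach each component
# (as computed by tag propagation over the execution graph).
flow_states = {
    "chat_model": {"type": "LLM", "provider": "OpenAI", "tags": {"PII"}},
    "audit_log":  {"type": "database", "provider": "AWS", "tags": {"Encrypted"}},
}

def check_flow_policy(states, forbidden_tags, component_type):
    """Return components of `component_type` reached by all `forbidden_tags`."""
    return [
        name for name, s in states.items()
        if s["type"] == component_type and forbidden_tags <= s["tags"]
    ]

# Policy: data tagged "PII" cannot reach components of type "LLM".
violations = check_flow_policy(flow_states, {"PII"}, "LLM")
```

Component and access policies follow the same pattern: each is a predicate over node attributes or component-resource pairs, so a policy set can be evaluated automatically on every change to the blueprint.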
Risk Assessment
Risks are identified by analyzing the components, operations, and data flows through the entire execution “blueprint.” The CORE Framework makes it straightforward to find issues like:
- Unmitigated sensitive data exposure
- Missing guardrails or monitors
- Trust boundaries crossed by confidential data
- Components without required audit logging
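One such check, finding execution paths where data reaches an LLM without passing through a guardrail, can be sketched as a graph search. The toy system and component names below are assumptions for illustration:

```python
# Toy blueprint: a CRM database feeds an LLM, with a guardrail on one path.
types = {"crm": "database", "filter": "guardrail", "llm": "LLM", "report": "API"}
edges = {"crm": ["filter", "llm"], "filter": ["llm"], "llm": ["report"]}

def unguarded_paths(src, graph, comp_types):
    """DFS listing paths from `src` that reach an LLM without a guardrail."""
    bad, stack = [], [(src, [src], False)]
    while stack:
        node, path, guarded = stack.pop()
        guarded = guarded or comp_types[node] == "guardrail"
        if comp_types[node] == "LLM" and not guarded:
            bad.append(path)       # risk: sensitive data hit the LLM unfiltered
            continue
        for nxt in graph.get(node, []):
            stack.append((nxt, path + [nxt], guarded))
    return bad

gaps = unguarded_paths("crm", edges, types)
# The direct crm -> llm edge bypasses the guardrail.
```

The same traversal pattern covers the other gap types: swap the predicate to look for missing audit logging, or for confidential tags crossing a trust boundary.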
Elements of this sort of analysis are familiar to those who have worked in privacy, IT security, or open source. In each of those cases, understanding risks means identifying what goes where and verifying that there are sufficient controls in place to deal with potential violations.
Getting Started
Organizations typically begin by documenting one critical system; doing so helps companies define business-critical policies. Because the necessary information is already captured in the technical documentation required for deployment (docker-compose files, Kubernetes YAML configuration files), technical teams can supply it in a way that does not require technical analysis by the RAI team.
The investment in creating the initial system model pays dividends through:
- Designing for compliance, providing developers feedback before they build the system
- Enabling rapid risk assessment for new changes
- Performing automated policy evaluation when the system changes, verifying that the system as deployed matches the pre-defined blueprint
- Creating clear documentation for auditors and regulators
- Maintaining business records that provide evidence of reasonable care and due diligence
When systems evolve, CORE automatically re-evaluates all policies, ensuring that a system compliant today remains compliant tomorrow. This enables continuous governance that transforms AI oversight from a bottleneck into a scalable, repeatable process.