Model Monster
Corporate AI Liability Management and Automated Due Diligence
Most AI governance tools collect attestations: a developer fills out a form claiming a system is compliant, a reviewer signs off, the form goes in a binder. We take a different approach. Model Monster captures the system itself as a graph — components, operations, resources, execution paths — and evaluates that graph against policies and risks automatically. The output is a technical artifact a regulator, customer, or auditor can interrogate, not a description that takes the developer's word for it.
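To make the graph-plus-policy idea concrete, here is a minimal illustrative sketch: an AI system modeled as nodes (components, operations, resources) connected by execution-path edges, with a policy rule evaluated directly against that structure. All names here (Node, SystemGraph, no_unreviewed_training_data) are hypothetical and chosen for illustration; they are not Model Monster's actual API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    kind: str            # e.g. "component", "operation", "resource"
    attrs: dict = field(default_factory=dict)

@dataclass
class SystemGraph:
    nodes: dict = field(default_factory=dict)   # name -> Node
    edges: list = field(default_factory=list)   # (src, dst) execution paths

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def connect(self, src: str, dst: str) -> None:
        self.edges.append((src, dst))

def no_unreviewed_training_data(graph: SystemGraph) -> list[str]:
    """Hypothetical policy rule: every dataset resource must carry a review attestation."""
    return [
        n.name for n in graph.nodes.values()
        if n.kind == "resource"
        and n.attrs.get("type") == "dataset"
        and not n.attrs.get("reviewed", False)
    ]

# Build a toy system graph and evaluate the rule against it.
g = SystemGraph()
g.add(Node("resume-ranker", "component", {"risk_class": "high"}))
g.add(Node("train", "operation"))
g.add(Node("cv-corpus", "resource", {"type": "dataset", "reviewed": False}))
g.connect("cv-corpus", "train")
g.connect("train", "resume-ranker")

print(no_unreviewed_training_data(g))   # ['cv-corpus'] -> a finding traceable to the graph
```

The point of the sketch is the shape of the output: a violation names a specific node in the graph, so a reviewer can trace the finding back to the component or dataset that caused it rather than to a claim on a form.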
Model Monster is the production implementation of the CORE Framework. We work with companies to document AI systems for EU AI Act compliance, to enforce internal AI policies before deployment rather than after, and to generate the evidentiary record that supports a reasonable-care defense if something goes wrong.
See modelmonster.ai →
OSPOCO
OSPO-as-a-service
Most companies use open source software. Far fewer have an Open Source Program Office: the team that handles license compliance, vulnerability response, contribution policy, foundation engagement, and the day-to-day mechanics of being a responsible consumer of (and contributor to) the broader ecosystem. Without an OSPO, those questions land on whoever happens to be nearby — usually a developer or a lawyer who already has a different job.
OSPOCO provides the OSPO function as a service. We work with companies that need the capability but don't yet have the volume to justify a full-time team, and with companies that have a team but need specific expertise on license selection, foundation participation, contribution review, or audit response. The goal is to make open source operational rather than a recurring fire drill.
See OSPOCO →