Why reducing AI risk starts with treating agents as identities
As AI systems become part of our day-to-day operations, a central reality becomes unavoidable: AI doesn’t configure itself. It requires engineers and developers to set it up, under human approval and oversight. Those developers need privileges to access and implement the components, agents, tools, and features of these platforms. But surely developers don’t hold those privileges unconstrained… right? Wherever trust and privileges exist, someone will eventually try to abuse them.