How to Control AI Usage in Your Organization (Without Breaking Productivity)

A Practical MITM Proxy + Identity Enforcement Approach

The Problem: AI Is Bypassing Your Security Controls

Your users are already using AI; you just don’t control it yet.

Tools like ChatGPT, Microsoft Copilot, Claude, and other AI platforms are rapidly being adopted across organizations. In many cases, this usage is happening without visibility, without policy enforcement, and without alignment to corporate security or compliance requirements.

Employees are using AI to draft emails, summarize documents, troubleshoot technical issues, and process internal data. While these tools can significantly improve productivity, they also introduce real risks when accessed through personal accounts or unmanaged platforms.

Blocking AI entirely is rarely practical. At the same time, allowing unrestricted access creates exposure that most organizations are not prepared to manage.

The challenge is not whether AI should be used; it is how to enable it in a controlled, secure, and auditable way.

In this guide, we’ll walk through a practical approach to controlling AI usage through identity enforcement, proxy-based controls, and policy-driven access, allowing your organization to enable AI safely without sacrificing visibility or control.

From a cybersecurity and compliance perspective, this creates several immediate risks:

  • ❌ Users accessing AI tools with personal accounts
  • ❌ Sensitive data leakage into uncontrolled AI platforms
  • ❌ Lack of auditability and visibility
  • ❌ No enforcement of corporate identity or policy
  • ❌ Shadow IT usage outside approved tools

Traditional controls like DNS filtering or firewall rules fall short because:

  • AI platforms are embedded in legitimate web traffic
  • Authentication happens at the application layer
  • Blocking outright harms productivity

The Solution: Control Access Without Blocking It

At DBT, we built a practical solution that:

  • ✅ Forces corporate identity usage
  • ✅ Blocks anonymous and personal account access
  • ✅ Redirects users to approved enterprise AI services
  • ✅ Maintains user productivity
  • ✅ Works without requiring Intune
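As one concrete example of forcing corporate identity: Microsoft’s tenant restrictions feature works by having the outbound proxy inject headers on traffic to the Microsoft sign-in endpoints, so only accounts from your permitted tenants can authenticate. A sketch of that header logic, where the tenant domain and directory ID are placeholders you would replace with your own values:

```python
# Sketch of Microsoft tenant-restriction header injection, as a
# TLS-inspecting proxy would apply it. The tenant domain and
# directory ID are placeholders; substitute your organization's values.

LOGIN_HOSTS = {
    "login.microsoftonline.com",
    "login.microsoft.com",
    "login.windows.net",
}

PERMITTED_TENANTS = "contoso.com"                      # placeholder
DIRECTORY_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def tenant_restriction_headers(host: str) -> dict[str, str]:
    """Headers the proxy adds when forwarding a request to `host`."""
    if host.lower() in LOGIN_HOSTS:
        return {
            # Only accounts from these tenants may sign in.
            "Restrict-Access-To-Tenants": PERMITTED_TENANTS,
            # Your tenant's directory ID, surfaced in sign-in logs.
            "Restrict-Access-Context": DIRECTORY_ID,
        }
    return {}
```

Other platforms offer analogous controls (for example, enterprise-only access tiers), so the same injection pattern generalizes beyond Microsoft.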

Note: This guide is intended as a general reference for implementing controlled AI access in an enterprise environment. Configuration details may vary based on your infrastructure, and this approach should be adapted to meet your organization’s specific security, compliance, and operational requirements.

Click Here to Read the Architectural Overview and High-Level Details

Click Here to Read the Step-by-Step Deployment Guide

What This Enables

🔐 Identity Enforcement

  • No personal AI usage
  • Corporate accounts only
  • Approved platform access

🔍 Visibility

  • Full proxy logging
  • Central inspection point
  • Improved auditability
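With all AI traffic funneled through one inspection point, even simple log analysis answers questions you previously could not, such as which users are reaching which AI platforms and how often. A toy example over a hypothetical "user host" access-log format (real proxy logs carry more fields, but the aggregation idea is the same):

```python
# Toy audit over proxy log lines in a hypothetical "user host" format.
# Real proxy logs (Squid, mitmproxy, etc.) include timestamps, URLs,
# and status codes, but the per-user aggregation works the same way.
from collections import Counter

AI_HOSTS = {"chatgpt.com", "claude.ai", "copilot.cloud.microsoft"}

def ai_usage_by_user(log_lines: list[str]) -> Counter:
    """Count AI-platform requests per user from 'user host' lines."""
    counts: Counter = Counter()
    for line in log_lines:
        user, _, host = line.strip().partition(" ")
        if host in AI_HOSTS:
            counts[user] += 1
    return counts

sample = [
    "alice chatgpt.com",
    "alice chatgpt.com",
    "bob copilot.cloud.microsoft",
    "bob news.example.com",
]
```

A report like this is also a useful starting point for deciding which platforms to allow, restrict, or redirect.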

🛑 Control

  • Block, allow, or redirect
  • Platform-specific policies
  • Reduce shadow AI usage

⚙️ Flexibility

  • No full MDM required
  • Works with on-prem AD
  • Adapts to new AI tools

🧱 Layered Security

  • Browser + PAC + Proxy + Filtering
  • Defense-in-depth approach
  • Consistent enforcement

⚠️ Fail Awareness

  • Understand proxy bypass risk
  • Plan for monitoring & alerts
  • Maintain control integrity
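The last point deserves a concrete control: if the proxy is your enforcement point, any device reaching the internet without traversing it is a blind spot. A simplified bypass check over egress firewall records, where the record layout and the proxy’s address are assumptions you would map to your firewall’s actual export format:

```python
# Simplified proxy-bypass check over egress firewall records.
# The (src_ip, dst_port) layout and the proxy address are assumptions;
# map them to your firewall's actual log export.

PROXY_IP = "10.0.0.10"  # placeholder for your proxy's egress address

def find_bypasses(records: list[tuple[str, int]]) -> list[str]:
    """Return source IPs making direct web egress around the proxy."""
    return sorted({
        src for src, dst_port in records
        if dst_port in (80, 443) and src != PROXY_IP
    })

sample = [
    ("10.0.0.10", 443),   # the proxy itself: expected
    ("10.0.5.23", 443),   # workstation going direct: flag it
    ("10.0.5.23", 22),    # SSH, out of scope for this check
]
```

Feeding results like these into alerting closes the loop: bypass attempts become visible instead of silent.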

Why This Matters

AI adoption is already happening inside most organizations — often without visibility or control. Employees are using tools like ChatGPT, Copilot, Claude, and other platforms to work faster and more efficiently, but frequently outside of approved processes or corporate oversight.

This creates a gap between productivity and governance. Sensitive data may be submitted to unmanaged platforms, personal accounts may be used in place of corporate identities, and security teams are left without clear insight into how AI is being used.

Blocking AI entirely is not a practical solution. Users will find alternatives, and the business loses the efficiency gains these tools provide. At the same time, unrestricted access introduces risk that most organizations cannot afford to ignore.

The goal is not to stop AI usage — it is to guide it. By putting the right controls in place, organizations can support productivity while maintaining visibility, enforcing identity, and reducing exposure to unmanaged or unapproved platforms.

Who This Is For

  • IT Directors and IT Leadership
  • Security and Compliance Teams
  • Organizations using Microsoft 365 and other enterprise SaaS platforms
  • Healthcare, Financial Services, Municipal, and Government environments
  • Organizations that want to enable AI safely without allowing unmanaged adoption

Final Thoughts

AI is already part of your environment. The question is whether it is being used in a way that aligns with your organization’s security, compliance, and operational expectations.

Blocking AI outright often leads to workarounds and shadow usage. Allowing unrestricted access creates unnecessary risk. The most effective approach sits in the middle, enabling approved platforms while maintaining control over how they are accessed and used.

By combining identity enforcement, proxy-based routing, and layered policy controls, organizations can reduce exposure, improve visibility, and guide users toward approved AI tools without disrupting productivity.

This approach is not tied to a single platform or vendor. It is a flexible framework that can evolve alongside your environment as new AI tools continue to emerge.

If you are looking to move from unmanaged AI usage to a more controlled and deliberate model, this is a practical place to start.

Disclaimer

The information, configurations, and code samples provided in this article are for general informational and educational purposes only. While every effort has been made to ensure accuracy, this guide does not account for all possible environments, configurations, or edge cases.

Implementation of the concepts described herein should be performed by qualified IT and security professionals and adapted to your organization’s specific infrastructure, security policies, and compliance requirements.

Direct Business Technologies (DBT) makes no warranties, express or implied, regarding the completeness, reliability, or suitability of this information. By using this guide, you acknowledge that any implementation is performed at your own risk.

DBT shall not be held liable for any damages, data loss, service interruptions, security incidents, or other impacts that may result from the use or misuse of the information, scripts, or configurations provided in this article.

Want Help Implementing This?

If you’re looking to securely enable AI while maintaining visibility, identity control, and policy enforcement, we can help you evaluate the right approach for your environment.

What we’ll review:

  • Where AI is currently being used in your environment
  • Which platforms should be allowed, restricted, or blocked
  • How to safely enable approved AI services
  • How to align AI usage with your security and compliance requirements