Something happened over the past year that caught a lot of IT and security teams off guard: AI tooling moved from “thing your engineers want to try” to “thing your engineers have already installed and are routing production traffic through.” The adoption curve was faster than most procurement or security review processes could track.

The result is a visibility problem. Users install LLM proxy servers, AI coding agents, and developer tooling that routes requests through local gateways — and none of it shows up in your MDM inventory unless you explicitly look for it. Some of it is benign. Some of it represents real security risk, either because of what the tool does by design or because of how it was installed.

These three Jamf Extension Attributes are my approach to getting visibility. They cover different threat models and use different detection strategies, but they share the same operational pattern: deploy as Jamf String EAs, collect during inventory, use the results to scope Smart Groups for reporting or remediation.


Why Extension Attributes

Jamf EAs run as root during inventory collection, output <result>VALUE</result>, and get stored as searchable computer attributes. They’re lightweight, they run on every managed device automatically, and the results can drive Smart Groups for scoped policies.
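
For reference, every script below follows the same basic shape. A minimal sketch of the pattern:

#!/bin/bash
# Minimal EA skeleton: runs as root during inventory collection.
# Jamf stores whatever appears between the <result> tags as the attribute value.
result="Not Detected"

# ...detection logic goes here, overwriting $result with findings...

echo "<result>${result}</result>"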

The tradeoff is that they’re point-in-time snapshots — not real-time monitoring. An EA that runs at 9am won’t catch something installed at 9:05am until the next inventory cycle. For the threat models here, that’s acceptable. We’re looking for persistent installations and active deployments, not catching installations in the moment.


LiteLLM Detection

LiteLLM is an OpenAI-compatible proxy layer that lets you route requests to any LLM backend — local models, Ollama, Claude, or dozens of others — through a single API endpoint. It’s genuinely useful for developers building multi-model applications. It’s also exactly the kind of tool that ends up installed without a security review because it doesn’t look like a “security tool.”

The detection problem is that Python packages install in a lot of places. A single user might have LiteLLM in their Homebrew Python, a virtualenv, a Poetry project, and their ~/.local packages simultaneously. Checking only the system Python path would miss most of these.

The EA checks nine locations:

  1. System pip (/opt/homebrew/bin/pip3, /usr/local/bin/pip3) — the most common Homebrew-managed path
  2. Per-user pip installs — ~/Library/Python/*/lib/python*/site-packages/ across all user directories
  3. Virtual environments — recursively scans .venv, .virtualenvs, venvs within user homes
  4. Poetry/pipenv caches — ~/Library/Caches/pypoetry/virtualenvs/ for project-level dependencies
  5. Workbrew — checks the enterprise Homebrew prefix at /opt/workbrew/bin/brew
  6. Running processes — pgrep -f "litellm" catches actively running instances with PIDs
  7. Config files — scans for ~/.litellm/config.yaml (indicates intentional deployment, not just an installed package)
  8. Docker containers — both running and stopped containers with litellm images
  9. LaunchAgents/Daemons — plist files referencing litellm in system and per-user locations

Version extraction uses pip show litellm rather than filesystem path parsing — it queries the package manager directly, which is more reliable than trying to find version strings in nested directory structures.
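
A rough sketch of how a couple of those checks combine, using the Homebrew pip paths and pgrep pattern from the list above (the real EA covers all nine locations and more output detail):

findings=()

# Homebrew-managed pip: ask the package manager directly for the version
for pip_bin in /opt/homebrew/bin/pip3 /usr/local/bin/pip3; do
    if [[ -x "$pip_bin" ]] && "$pip_bin" show litellm >/dev/null 2>&1; then
        version=$("$pip_bin" show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
        findings+=("pip(${pip_bin}):v${version}")
    fi
done

# Actively running instances, reported with their PID
pid=$(pgrep -f "litellm" | head -n 1)
[[ -n "$pid" ]] && findings+=("running(PID:${pid})")

# Join findings with semicolons for the EA result
if [[ ${#findings[@]} -eq 0 ]]; then
    echo "<result>Not Detected</result>"
else
    joined=$(printf '%s; ' "${findings[@]}")
    echo "<result>${joined%; }</result>"
fi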

Output is semicolon-delimited so you can write targeted Smart Group criteria:

# Clean
<result>Not Detected</result>

# Found via pip with version, plus a config file
<result>pip(/opt/homebrew/bin/pip3):v1.40.0; config(jdoe):~/.litellm/config.yaml</result>

# Running process found
<result>running(PID:45821); pip(~/.local/lib/python3.12):v1.38.2</result>

Smart Groups can target litellm_detection LIKE *pip* to catch any installation, or LIKE *running* to catch only active instances.


Axios Supply Chain Compromise Detection

This one is different. It’s not looking for a specific tool — it’s looking for post-compromise artifacts from a specific supply chain attack.

In March 2026, axios versions 1.14.1 and 0.30.4 were compromised. The malicious versions included a macOS RAT binary that installed to /Library/Caches/com.apple.act.mond — a deliberately Apple-looking path — and a malicious plain-crypto-js dependency that served as a second-stage payload.

The EA checks three things:

1. RAT binary presence with hash verification

KNOWN_SHA="92ff08773995ebc8d55ec4b8e1a225d0d1e51efa4ef88b8849d0071230c9645a"
if [[ -f "/Library/Caches/com.apple.act.mond" ]]; then
    actual_sha=$(shasum -a 256 "/Library/Caches/com.apple.act.mond" | awk '{print $1}')
    if [[ "$actual_sha" == "$KNOWN_SHA" ]]; then
        # confirmed compromised binary
    fi
fi

The path was chosen to blend in with Apple’s own framework caches. Hash verification prevents false positives from legitimate Apple processes that happen to use a similar name.

2. Compromised npm package versions

Scans all axios/package.json files under /Users and extracts version strings, looking specifically for 1.14.1 and 0.30.4. These are the only two compromised versions — adjacent releases are clean.
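
A sketch of that scan, assuming the node_modules trees live somewhere under /Users and that awk is enough to pull the version field out of package.json:

hits=()

# Walk every axios manifest under user homes and flag the two compromised releases
while IFS= read -r manifest; do
    version=$(awk -F'"' '/"version"/ {print $4; exit}' "$manifest")
    if [[ "$version" == "1.14.1" || "$version" == "0.30.4" ]]; then
        hits+=("axios ${version} at ${manifest}")
    fi
done < <(find /Users -type f -path "*/node_modules/axios/package.json" 2>/dev/null)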

3. Malicious dependency artifacts

Searches for plain-crypto-js directories in node_modules trees, which were included only in the compromised packages and have no legitimate npm presence.
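
A minimal sketch of that search, again scoped to /Users:

# Any plain-crypto-js directory inside a node_modules tree is suspicious on its own
find /Users -type d -path "*/node_modules/plain-crypto-js" 2>/dev/null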

Output distinguishes between findings to help triage:

# Clean
<result>clean</result>

# RAT binary confirmed
<result>INFECTED: RAT binary: /Library/Caches/com.apple.act.mond (sha256:92ff...); axios 1.14.1 at /Users/alice/project/node_modules/axios/package.json</result>

Axios INFECTED results should be treated as high-priority incidents, not just software inventory findings.


OpenClaw Detection

OpenClaw is an open-source AI agent gateway tool that routes IDE integrations — Cursor, Claude Code, Copilot — through a local proxy. It’s designed for enterprise visibility and control of coding agent traffic, and in the right hands it’s a legitimate security tool. In the wrong hands, or deployed without organizational approval, it’s a man-in-the-middle for all of your developers’ AI interactions.

The detection covers nine vectors:

  1. CLI binary — PATH search + global locations (/usr/local/bin/openclaw, /opt/homebrew/bin/openclaw) + user-level checks (.volta/bin, .nvm, ~/bin)
  2. CLI version — executes openclaw --version to capture the version string
  3. macOS app — checks for /Applications/OpenClaw.app
  4. State directory — searches for ~/.openclaw or ~/.openclaw-{PROFILE}
  5. Config file — parses ~/.openclaw/openclaw.json
  6. Launchd service — launchctl print gui/{uid}/bot.molt.gateway
  7. Gateway port — nc -z localhost PORT to confirm an active listener
  8. Docker containers — docker ps for running openclaw containers
  9. Docker images — docker images for openclaw image presence
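
A sketch of two of these checks, the CLI binary lookup and the launchd service query (resolving the console user via /dev/console is my assumption about how the GUI session is identified):

# CLI binary: check the global install locations
cli_path=""
for candidate in /usr/local/bin/openclaw /opt/homebrew/bin/openclaw; do
    [[ -x "$candidate" ]] && cli_path="$candidate" && break
done

# Launchd service: query the logged-in user's GUI domain for the gateway job
console_user=$(stat -f%Su /dev/console)
console_uid=$(id -u "$console_user")
if launchctl print "gui/${console_uid}/bot.molt.gateway" >/dev/null 2>&1; then
    gateway_service="gui/${console_uid}/bot.molt.gateway"
fi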

The port check is worth calling out. OpenClaw can be configured to run on a non-default port, and the EA reads the configured port from openclaw.json before checking. This means it finds active gateways even when they’ve been configured away from the default — something a simple netstat grep would miss.

JSON parsing in that step uses grep -o with regex to extract the port number without requiring jq, which may not be present on all managed machines.
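
Roughly, the port step looks like this (the config path layout and the 18789 fallback mirror the example output below; treat both as placeholders):

console_user=$(stat -f%Su /dev/console)
config="/Users/${console_user}/.openclaw/openclaw.json"
port=18789  # placeholder default, matching the example output below

# Pull a configured port out of the JSON with grep, no jq required
if [[ -r "$config" ]]; then
    configured=$(grep -Eo '"port"[[:space:]]*:[[:space:]]*[0-9]+' "$config" | head -n 1 | grep -Eo '[0-9]+$')
    [[ -n "$configured" ]] && port="$configured"
fi

# Confirm something is actually listening on that port
if nc -z localhost "$port" >/dev/null 2>&1; then
    gateway_port="$port"
fi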

The script uses exit codes for MDM policy targeting:

  • 0 = Not installed (compliant)
  • 1 = Installed (running or dormant)
  • 2 = Script error

Output is structured for readability:

summary: installed-and-running
platform: darwin
cli: /usr/local/bin/openclaw
cli-version: 2026.1.15
app: /Applications/OpenClaw.app
state-dir: /Users/alice/.openclaw
gateway-service: gui/501/bot.molt.gateway
gateway-port: 18789
docker-container: not-found
docker-image: not-found

Putting It Together

Deployed as a set, these three EAs give you different layers of visibility:

LiteLLM — broad, permissive detection across nine installation vectors. Most installs are benign, but the presence of a config file or running process indicates intentional deployment that warrants a conversation.

Axios — narrow, high-confidence detection for a specific active threat. Any positive here is an incident response trigger, not an inventory note.

OpenClaw — infrastructure-level detection. The launchd and port checks surface gateway deployments that are actively intercepting traffic, not just installed binaries.

The recommended Jamf setup:

  1. Deploy all three as String EAs (Settings → Computer Management → Extension Attributes)
  2. Create Smart Groups: LiteLLM Detected, Axios Compromised, OpenClaw Found
  3. Treat Axios matches as P1 incidents — automated isolation policy if your security posture supports it
  4. Use LiteLLM and OpenClaw groups for reporting and targeted outreach before taking remediation action

Caveats

These are inventory tools, not security controls. They tell you what’s installed; they don’t prevent installation. A motivated user can avoid most of these checks by installing to unusual locations or renaming binaries. The goal is visibility across the fleet, not perfect detection against adversarial actors.

For LiteLLM specifically, the line between “authorized developer tool” and “shadow AI installation” depends entirely on your policy. Some organizations will want to block it entirely; others will want to track it for data governance reasons. The EA gives you the data — what you do with it is a policy decision.

The Axios detection is time-limited in usefulness. Once the compromised packages are removed from the ecosystem and developers update their dependencies, positive findings will become rare. But given how deeply nested node_modules dependencies can be, it’s worth running for the foreseeable future — old project lockfiles don’t update themselves.