{% extends "base.html" %} {% block title %}My Devices{% endblock %} {% block content %}

My Devices

Manage device credentials for tenant {{ tenant }}

{% if not readonly %}

Create New Device

{% endif %}

Device Credentials

{% if credentials %} {% for cred in credentials %}
{{ cred.device_id }}
{{ cred.filename }}
{% endfor %} {% else %}

No device credentials yet

Create a device above to get started

{% endif %}
{% if credentials %}

Getting Started

1. Set up your environment

python3 -m venv .venv && source .venv/bin/activate
pip install 'device-connect-edge@git+https://github.com/arm/device-connect.git@main#subdirectory=packages/device-connect-edge'

2. Download a starter script and your device credentials

3. Run your device

export NATS_CREDENTIALS_FILE=./{{ credentials[0].filename }}
export NATS_URL=nats://{{ public_host }}:{{ nats_port }}
python my_device.py
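Before wiring anything up, you can sanity-check that the two variables above are set and that the credentials file parses. A stdlib-only sketch; it assumes the .creds.json file holds plain JSON (adjust if your starter script expects a different format):

```python
import json
import os
from urllib.parse import urlparse

def check_device_env() -> dict:
    """Validate the environment a device script reads before it connects."""
    creds_path = os.environ["NATS_CREDENTIALS_FILE"]  # set in the export above
    nats_url = os.environ["NATS_URL"]                 # e.g. nats://host:4222

    with open(creds_path) as f:
        creds = json.load(f)  # assumption: the .creds.json file is plain JSON

    parsed = urlparse(nats_url)
    if parsed.scheme != "nats":
        raise ValueError(f"expected a nats:// URL, got {nats_url!r}")
    return {"host": parsed.hostname, "port": parsed.port, "creds_keys": sorted(creds)}
```

If this raises a KeyError or ValueError, fix the exports before debugging the device script itself.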

4. Run all devices and orchestrate them with an agent

Spin up an LLM agent that listens to your fleet and acts on it. Download the starter script: it discovers your devices, batches incoming events, and lets the model call list_devices, get_device_functions, and invoke_device to react. Inference runs through Arm's internal OpenAI proxy; generate an API key at openai-api-proxy.geo.arm.com (Arm VPN required).
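To make that control flow concrete, here is a toy, rule-based stand-in for the loop: the three tool names come from the description above, but their bodies and the pump-starting policy are invented for illustration. The real starter script talks to live devices and lets the LLM choose the calls.

```python
# Illustrative stubs only: a plain dict plays the role of the device registry,
# and a hard-coded moisture rule stands in for the LLM's decision.

def list_devices(registry):
    """Tool 1: what devices exist?"""
    return sorted(registry)

def get_device_functions(registry, device_id):
    """Tool 2: what can a given device do?"""
    return sorted(registry[device_id])

def invoke_device(registry, device_id, function, **kwargs):
    """Tool 3: call a function on a device."""
    return registry[device_id][function](**kwargs)

def decide_and_act(registry, events):
    """Stub policy: if soil moisture drops below 0.3, start the pump."""
    calls = []
    for ev in events:
        if ev.get("kind") == "moisture" and ev["value"] < 0.3:
            result = invoke_device(registry, "irrigation_pump", "start", seconds=30)
            calls.append(("irrigation_pump", "start", result))
    return calls
```

In the real agent, decide_and_act is replaced by a model call: the batched events go into the prompt and the model emits the tool invocations.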

pip install \
  'device-connect-edge@git+https://github.com/arm/device-connect.git@main#subdirectory=packages/device-connect-edge' \
  'device-connect-agent-tools[strands]@git+https://github.com/arm/device-connect.git@main#subdirectory=packages/device-connect-agent-tools' \
  'strands-agents[openai]'

# Common env (every terminal)
export MESSAGING_BACKEND=nats
export NATS_URL=nats://{{ public_host }}:{{ nats_port }}

# ── Terminals 1–3 — run each device with its own creds ──
export NATS_CREDENTIALS_FILE=./{{ credentials[0].filename }} && python soil_sensor.py
export NATS_CREDENTIALS_FILE=./{% if credentials|length > 1 %}{{ credentials[1].filename }}{% else %}{{ credentials[0].filename }}{% endif %} && python irrigation_pump.py
export NATS_CREDENTIALS_FILE=./{% if credentials|length > 2 %}{{ credentials[2].filename }}{% else %}{{ credentials[0].filename }}{% endif %} && python greenhouse_ctrl.py

# ── Terminal 4 — AI agent (Arm internal OpenAI proxy) ──
export NATS_CREDENTIALS_FILE=./{{ tenant }}-agent.creds.json
export DEVICE_CONNECT_ZONE={{ tenant }}
export OPENAI_API_KEY=<your-arm-proxy-token>
export OPENAI_BASE_URL=https://openai-api-proxy.geo.arm.com/api/providers/openai-eu/v1
export OPENAI_INSECURE=1

python run_agent.py
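A note on the agent's subject filter: NATS subjects are dot-separated tokens, where '*' matches exactly one token and '>' matches one or more trailing tokens. A stdlib sketch of that matching rule (the real matching happens server-side in NATS; this is just to show which subjects a pattern covers):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """NATS-style subject matching: '*' = one token, '>' = the remaining tokens."""
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":                       # '>' must be last; needs 1+ tokens after
            return len(s_tokens) > i
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(s_tokens) == len(p_tokens)  # no wildcard tail: lengths must match
```

So a pattern like device-connect.acme.*.event.> covers every event from every device in the acme zone, but not command or reply subjects.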

The agent subscribes to device-connect.{{ tenant }}.*.event.>, batches events every ~12s, sends them to the LLM, and the model decides which invoke_device calls to make.
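The ~12 s batching can be pictured as a small buffer with a deadline. A minimal sketch; the clock parameter exists only so the window is easy to test, and the starter script's actual buffering may differ:

```python
import time

class EventBatcher:
    """Collect events and release them to `flush` in time-window batches."""

    def __init__(self, flush, window_s: float = 12.0, clock=time.monotonic):
        self.flush = flush              # called with the list of buffered events
        self.window_s = window_s
        self.clock = clock              # injectable clock, for testing
        self._buf: list = []
        self._deadline = clock() + window_s

    def add(self, event) -> None:
        """Buffer an incoming event, flushing if the window has elapsed."""
        self._buf.append(event)
        self.maybe_flush()

    def maybe_flush(self) -> None:
        """Emit the buffered batch once the deadline passes; skip empty batches."""
        if self.clock() >= self._deadline:
            if self._buf:
                self.flush(self._buf)   # e.g. format the batch into an LLM prompt
                self._buf = []
            self._deadline = self.clock() + self.window_s
```

Batching like this keeps one model call per window instead of one per event, which matters once several devices are chattering at once.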

{% endif %} {% endblock %}