
List instances

client.instances.list(options?: RequestOptions): InstanceListResponse { id, created_at, image, 16 more }
GET /instances


Returns
InstanceListResponse = Array<Instance { id, created_at, image, 16 more }>
id: string

Auto-generated unique identifier (CUID2 format)

created_at: string

Creation timestamp (RFC3339)

Format: date-time
image: string

OCI image reference

name: string

Human-readable name

state: "Created" | "Running" | "Paused" | 4 more

Instance state:

  • Created: VMM created but not started (Cloud Hypervisor native)
  • Running: VM is actively running (Cloud Hypervisor native)
  • Paused: VM is paused (Cloud Hypervisor native)
  • Shutdown: VM shut down but VMM exists (Cloud Hypervisor native)
  • Stopped: No VMM running, no snapshot exists
  • Standby: No VMM running, snapshot exists (can be restored)
  • Unknown: Failed to determine state (see state_error for details)
Accepts one of the following:
"Created"
"Running"
"Paused"
"Shutdown"
"Stopped"
"Standby"
"Unknown"
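As a sketch of how the state machine above might be consumed, the helpers below partition states by whether a Cloud Hypervisor VMM process still exists. These helpers (`hasVmm`, `restorable`) are illustrative only, not part of the SDK:

```typescript
// The seven instance states as documented above.
type InstanceState =
  | 'Created' | 'Running' | 'Paused' | 'Shutdown'
  | 'Stopped' | 'Standby' | 'Unknown';

// States in which a VMM process exists for the instance
// (the four Cloud Hypervisor native states).
const VMM_STATES: InstanceState[] = ['Created', 'Running', 'Paused', 'Shutdown'];

function hasVmm(state: InstanceState): boolean {
  return VMM_STATES.includes(state);
}

// Standby means no VMM is running but a snapshot exists,
// so the instance can be restored. Illustrative helper.
function restorable(state: InstanceState): boolean {
  return state === 'Standby';
}
```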
disk_io_bps?: string

Disk I/O rate limit (human-readable, e.g., "100MB/s")
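Several fields in this response (`disk_io_bps`, `size`, `hotplug_size`, `overlay_size`) use human-readable sizes like "2GB" or "100MB/s". The exact grammar the server accepts is not specified on this page; the sketch below assumes decimal SI multipliers and does not cover bit-rate forms like "1Gbps":

```typescript
// Illustrative parser for human-readable byte sizes/rates as they
// appear in this API (e.g. "2GB", "100MB/s"). Assumes decimal SI units;
// not part of the SDK, and bit-rate strings like "1Gbps" are not handled.
const SI: Record<string, number> = { B: 1, KB: 1e3, MB: 1e6, GB: 1e9, TB: 1e12 };

function parseBytes(input: string): number {
  const m = /^(\d+(?:\.\d+)?)\s*(B|KB|MB|GB|TB)(?:\/s)?$/i.exec(input.trim());
  if (!m) throw new Error(`unrecognized size: ${input}`);
  return Number(m[1]) * SI[m[2].toUpperCase()];
}
```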

env?: Record<string, string>

Environment variables

gpu?: GPU { mdev_uuid, profile }

GPU information attached to the instance

mdev_uuid?: string

mdev device UUID

profile?: string

vGPU profile name

has_snapshot?: boolean

Whether a snapshot exists for this instance

hotplug_size?: string

Hotplug memory size (human-readable)

hypervisor?: "cloud-hypervisor" | "qemu"

Hypervisor running this instance

Accepts one of the following:
"cloud-hypervisor"
"qemu"
network?: Network { bandwidth_download, bandwidth_upload, enabled, 3 more }

Network configuration of the instance

bandwidth_download?: string

Download bandwidth limit (human-readable, e.g., "1Gbps", "125MB/s")

bandwidth_upload?: string

Upload bandwidth limit (human-readable, e.g., "1Gbps", "125MB/s")

enabled?: boolean

Whether instance is attached to the default network

ip?: string | null

Assigned IP address (null if no network)

mac?: string | null

Assigned MAC address (null if no network)

name?: string

Network name (always "default" when enabled)

overlay_size?: string

Writable overlay disk size (human-readable)

size?: string

Base memory size (human-readable)

started_at?: string | null

Start timestamp (RFC3339)

Format: date-time
state_error?: string | null

Error message if state couldn't be determined (only set when state is Unknown)

stopped_at?: string | null

Stop timestamp (RFC3339)

Format: date-time
vcpus?: number

Number of virtual CPUs

volumes?: Array<VolumeMount { mount_path, volume_id, overlay, 2 more }>

Volumes attached to the instance

mount_path: string

Path where volume is mounted in the guest

volume_id: string

Volume identifier

overlay?: boolean

Create per-instance overlay for writes (requires readonly=true)

overlay_size?: string

Max overlay size as human-readable string (e.g., "1GB"). Required if overlay=true.

readonly?: boolean

Whether volume is mounted read-only
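The overlay fields carry two documented constraints: `overlay=true` requires `readonly=true`, and `overlay_size` is required whenever `overlay=true`. A minimal client-side check of those rules might look like this (`validateMount` is an illustrative helper, not part of the SDK):

```typescript
// Shape of a volume mount as documented above.
interface VolumeMount {
  mount_path: string;
  volume_id: string;
  overlay?: boolean;
  overlay_size?: string;
  readonly?: boolean;
}

// Checks the two documented overlay constraints and returns
// a list of violations (empty when the mount is consistent).
function validateMount(v: VolumeMount): string[] {
  const errors: string[] = [];
  if (v.overlay && !v.readonly) {
    errors.push('overlay=true requires readonly=true');
  }
  if (v.overlay && !v.overlay_size) {
    errors.push('overlay_size is required when overlay=true');
  }
  return errors;
}
```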

List instances
import Hypeman from '@onkernel/hypeman';

const client = new Hypeman({
  apiKey: 'My API Key',
});

const instances = await client.instances.list();

console.log(instances);
[
  {
    "id": "tz4a98xxat96iws9zmbrgj3a",
    "created_at": "2025-01-15T10:30:00Z",
    "image": "docker.io/library/alpine:latest",
    "name": "my-workload-1",
    "state": "Created",
    "disk_io_bps": "100MB/s",
    "env": {
      "foo": "string"
    },
    "gpu": {
      "mdev_uuid": "aa618089-8b16-4d01-a136-25a0f3c73123",
      "profile": "L40S-1Q"
    },
    "has_snapshot": false,
    "hotplug_size": "2GB",
    "hypervisor": "cloud-hypervisor",
    "network": {
      "bandwidth_download": "125MB/s",
      "bandwidth_upload": "125MB/s",
      "enabled": true,
      "ip": "192.168.100.10",
      "mac": "02:00:00:ab:cd:ef",
      "name": "default"
    },
    "overlay_size": "10GB",
    "size": "2GB",
    "started_at": "2025-01-15T10:30:05Z",
    "state_error": null,
    "stopped_at": "2025-01-15T12:30:00Z",
    "vcpus": 2,
    "volumes": [
      {
        "mount_path": "/mnt/data",
        "volume_id": "vol-abc123",
        "overlay": true,
        "overlay_size": "1GB",
        "readonly": true
      }
    ]
  }
]