Create and start instance
Body Parameters
OCI image reference
Human-readable name (lowercase letters, digits, and dashes only; cannot start or end with a dash)
Override image CMD (like docker run trailing arguments). Omit to use image default.
Device IDs or names to attach for GPU/PCI passthrough
Disk I/O rate limit (e.g., "100MB/s", "500MB/s"). If omitted, defaults to a proportional share based on the CPU allocation, when configured.
Override image entrypoint (like docker run --entrypoint). Omit to use image default.
Environment variables
Additional memory for hotplug (human-readable format like "3GB", "1G"). Omit to disable hotplug memory.
Writable overlay disk size (human-readable format like "10GB", "50G")
Base memory size (human-readable format like "1GB", "512MB", "2G")
Skip guest-agent installation during boot. When true, the exec and stat APIs will not work for this instance. The instance will still run, but remote command execution will be unavailable.
Skip kernel headers installation during boot for faster startup. When true, DKMS (Dynamic Kernel Module Support) will not work, preventing compilation of out-of-tree kernel modules (e.g., NVIDIA vGPU drivers). Recommended for workloads that don't need kernel module compilation.
Number of virtual CPUs
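The name rules above (lowercase letters, digits, and dashes; no leading or trailing dash) can be checked client-side before submitting a request. A minimal sketch using a POSIX shell function and `grep -E`; the function name `valid_name` is illustrative, not part of the API:

```shell
# Validate a proposed instance name against the documented rules:
# lowercase letters, digits, and dashes only; cannot start or end with a dash.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

valid_name "my-workload-1" && echo "accepted: my-workload-1"
valid_name "-bad-start" || echo "rejected: -bad-start"
```

Rejecting bad names locally avoids a round trip that would otherwise fail server-side validation.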
Returns
Create and start instance
curl http://localhost:8080/instances \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $HYPEMAN_API_KEY" \
-d '{
"image": "docker.io/library/alpine:latest",
"name": "my-workload-1",
"cmd": [
"echo",
"hello"
],
"credentials": {
"OUTBOUND_OPENAI_KEY": {
"inject": [
{
"as": {
"format": "Bearer ${value}",
"header": "Authorization"
},
"hosts": [
"api.openai.com",
"*.openai.com"
]
}
],
"source": {
"env": "OUTBOUND_OPENAI_KEY"
}
}
},
"devices": [
"l4-gpu"
],
"disk_io_bps": "100MB/s",
"entrypoint": [
"/bin/sh",
"-c"
],
"env": {
"PORT": "3000",
"NODE_ENV": "production"
},
"hotplug_size": "2GB",
"hypervisor": "cloud-hypervisor",
"overlay_size": "20GB",
"size": "2GB",
"skip_kernel_headers": true,
"tags": {
"team": "backend",
"env": "staging"
},
"vcpus": 2
}'
{
"id": "tz4a98xxat96iws9zmbrgj3a",
"created_at": "2025-01-15T10:30:00Z",
"image": "docker.io/library/alpine:latest",
"name": "my-workload-1",
"state": "Created",
"disk_io_bps": "100MB/s",
"env": {
"foo": "string"
},
"exit_code": 137,
"exit_message": "killed by signal 9 (SIGKILL)",
"gpu": {
"mdev_uuid": "aa618089-8b16-4d01-a136-25a0f3c73123",
"profile": "L40S-1Q"
},
"has_snapshot": false,
"hotplug_size": "2GB",
"hypervisor": "cloud-hypervisor",
"network": {
"bandwidth_download": "125MB/s",
"bandwidth_upload": "125MB/s",
"enabled": true,
"ip": "192.168.100.10",
"mac": "02:00:00:ab:cd:ef",
"name": "default"
},
"overlay_size": "10GB",
"size": "2GB",
"started_at": "2025-01-15T10:30:05Z",
"state_error": "failed to query VMM: connection refused",
"stopped_at": "2025-01-15T12:30:00Z",
"tags": {
"team": "backend",
"env": "staging"
},
"vcpus": 2,
"volumes": [
{
"mount_path": "/mnt/data",
"volume_id": "vol-abc123",
"overlay": true,
"overlay_size": "1GB",
"readonly": true
}
]
}
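The `id` field from the response is what you pass to follow-up calls. A hedged sketch of extracting it, shown against a canned one-line response of the same shape rather than a live server; `jq -r '.id'` is the robust choice when `jq` is installed, and the `sed` fallback below is dependency-free but only suitable for flat, single-line JSON like this:

```shell
# Canned response standing in for the real API reply (same shape as above).
response='{"id":"tz4a98xxat96iws9zmbrgj3a","state":"Created","name":"my-workload-1"}'

# Extract the instance id without external dependencies:
id=$(printf '%s' "$response" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$id"
```

In a real pipeline you would capture the curl output into `response` and reuse `$id` for subsequent requests.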