Create and start instance
Body Parameters
image: OCI image reference.
name: Human-readable name (lowercase letters, digits, and dashes only; cannot start or end with a dash). A validation sketch follows this list.
devices: Device IDs or names to attach for GPU/PCI passthrough.
disk_io_bps: Disk I/O rate limit (e.g., "100MB/s", "500MB/s"). Defaults to a proportional share based on the CPU allocation, if configured.
env: Environment variables.
hotplug_size: Additional memory available for hotplug (human-readable format such as "3GB", "1G").
overlay_size: Writable overlay disk size (human-readable format such as "10GB", "50G").
size: Base memory size (human-readable format such as "1GB", "512MB", "2G").
Skip guest-agent installation during boot: when true, the exec and stat APIs will not work for this instance. The instance will still run, but remote command execution will be unavailable.
skip_kernel_headers: Skip kernel headers installation during boot for faster startup. When true, DKMS (Dynamic Kernel Module Support) will not work, preventing compilation of out-of-tree kernel modules (e.g., NVIDIA vGPU drivers). Recommended for workloads that do not need kernel module compilation.
vcpus: Number of virtual CPUs.
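As a quick illustration of the name constraint above, a client-side check might look like the following Python sketch. The regex is an interpretation of the stated rule, not something exposed by the API.

import re

# Lowercase letters, digits, and dashes; must not start or end with a dash.
NAME_PATTERN = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_instance_name(name: str) -> bool:
    return bool(NAME_PATTERN.fullmatch(name))

assert is_valid_instance_name("my-workload-1")
assert not is_valid_instance_name("-starts-with-dash")
assert not is_valid_instance_name("Uppercase-Name")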
Returns
The created instance object (see the example response below).
Example request

curl http://localhost:8080/instances \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $HYPEMAN_API_KEY" \
  -d '{
    "image": "docker.io/library/alpine:latest",
    "name": "my-workload-1",
    "devices": [
      "l4-gpu"
    ],
    "disk_io_bps": "100MB/s",
    "env": {
      "PORT": "3000",
      "NODE_ENV": "production"
    },
    "hotplug_size": "2GB",
    "hypervisor": "cloud-hypervisor",
    "overlay_size": "20GB",
    "size": "2GB",
    "skip_kernel_headers": true,
    "vcpus": 2
  }'
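The same request can also be made from code. Below is a minimal sketch using the Python requests library, assuming the endpoint, bearer-token auth, and field names shown in the curl example above; requests itself is not part of this API.

import os
import requests

# Same request as the curl example above (a subset of the fields shown).
payload = {
    "image": "docker.io/library/alpine:latest",
    "name": "my-workload-1",
    "env": {"PORT": "3000", "NODE_ENV": "production"},
    "size": "2GB",
    "vcpus": 2,
    "skip_kernel_headers": True,
}

resp = requests.post(
    "http://localhost:8080/instances",  # local endpoint, as in the curl example
    headers={"Authorization": f"Bearer {os.environ['HYPEMAN_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
instance = resp.json()
print(instance["id"], instance["state"])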
Example response

{
  "id": "tz4a98xxat96iws9zmbrgj3a",
  "created_at": "2025-01-15T10:30:00Z",
  "image": "docker.io/library/alpine:latest",
  "name": "my-workload-1",
  "state": "Created",
  "disk_io_bps": "100MB/s",
  "env": {
    "foo": "string"
  },
  "gpu": {
    "mdev_uuid": "aa618089-8b16-4d01-a136-25a0f3c73123",
    "profile": "L40S-1Q"
  },
  "has_snapshot": false,
  "hotplug_size": "2GB",
  "hypervisor": "cloud-hypervisor",
  "network": {
    "bandwidth_download": "125MB/s",
    "bandwidth_upload": "125MB/s",
    "enabled": true,
    "ip": "192.168.100.10",
    "mac": "02:00:00:ab:cd:ef",
    "name": "default"
  },
  "overlay_size": "10GB",
  "size": "2GB",
  "started_at": "2025-01-15T10:30:05Z",
  "state_error": "failed to query VMM: connection refused",
  "stopped_at": "2025-01-15T12:30:00Z",
  "vcpus": 2,
  "volumes": [
    {
      "mount_path": "/mnt/data",
      "volume_id": "vol-abc123",
      "overlay": true,
      "overlay_size": "1GB",
      "readonly": true
    }
  ]
}
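The response combines allocation details (size, vcpus, gpu, network) with runtime state (state, started_at, state_error). As a small sketch, the snippet below turns a returned instance object into a one-line summary, using only the field names shown above.

def summarize_instance(instance: dict) -> str:
    # state_error, when present, carries the failure reason
    # (e.g., "failed to query VMM: connection refused").
    if instance.get("state_error"):
        return f"{instance['name']}: {instance['state']} ({instance['state_error']})"
    network = instance.get("network") or {}
    ip = network.get("ip", "no IP assigned")
    return f"{instance['name']}: {instance['state']}, {instance['vcpus']} vCPUs, {instance['size']} RAM, ip {ip}"

For the example response above, this yields "my-workload-1: Created (failed to query VMM: connection refused)".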