StratoVirt can only be launched via cmdline arguments.
General configuration of machine, including:
- type: the type of machine which StratoVirt emulates (required). NB: machine type "none" is used to get the capabilities of stratovirt.
- accel: the accelerator, currently only kvm (optional). If not set, default is KVM.
- dump-guest-core: include guest memory in the core dump file or not (optional).
- mem-share: whether guest memory is shareable with other processes (optional). If not set, default is off.
# cmdline
-machine [type=]name[,dump-guest-core={on|off}][,mem-share={on|off}]
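For example (the values are illustrative), a q35 machine with guest core dumps disabled and shared memory enabled:
# cmdline
-machine q35,dump-guest-core=off,mem-share=on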
StratoVirt supports setting the number of VCPUs (nr_vcpus).
This allows you to set the maximum number of VCPUs that the VM will support. The maximum value is 254 and the minimum value that makes sense is 1.
By default, after booting, the VM will online all the CPUs you set.
The following properties are supported for smp:
- cpus: the number of VCPUs available after boot (required).
- maxcpus: the maximum number of VCPUs (optional).
- sockets, dies, clusters, cores, threads: the CPU topology (optional). On the arm machine, if you start a microvm, the value of sockets must be one so far. If not set, they are derived from maxcpus.
NB: the arguments of CPU topology are used to interconnect with libvirt.
If the topology is configured, sockets * dies * clusters * cores * threads must be equal to maxcpus, and maxcpus should be larger than or equal to cpus.
# cmdline
-smp [cpus=]n[,maxcpus=<maxcpus>][,sockets=<sockets>][,dies=<dies>][,clusters=<clusters>][,cores=<cores>][,threads=<threads>]
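For example, the following boots 4 VCPUs out of a maximum of 8, assuming the unset dies and clusters default to 1 so that 2 sockets * 2 cores * 2 threads = 8 = maxcpus:
# cmdline
-smp 4,maxcpus=8,sockets=2,cores=2,threads=2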
StratoVirt allows the configuration of CPU features.
Currently, these options are supported:
- host: the CPU model, and this is the only supported variant currently.
- pmu: off or on, defaults to off. (Currently only supported on aarch64.)
# cmdline
-cpu host[,pmu={on|off}]
StratoVirt supports setting the size of the VM's memory on the cmdline.
This allows you to set the size of memory that the VM will support.
You can choose G as the unit (the default unit is M), and the memory size needs to be an integer.
Default VM memory size is 256M. The supported VM memory size is in the range [128M, 512G].
# cmdline
-m [size=]<megs>[m|M|g|G]
-m 256m
-m 256
-m 1G
Memory Prealloc feature is used to preallocate VM physical memory in advance and create its page tables. Using this feature, the number of page faults will decrease, and the memory access performance of the VM will improve.
Note: This option will increase the VM startup time.
You can use the following cmdline to configure memory prealloc.
-mem-prealloc
StratoVirt supports setting a backend file for the VM's memory.
This allows you to give a path to the backend file, which can be either a directory or a file. The path has to be an absolute path.
# cmdline
-mem-path <filebackend_path>
Memory backend file can be used to let guest use hugetlbfs on host. It supports 2M or 1G hugepages memory. The following steps show how to use hugepages:
# mount hugetlbfs on a directory on host
$ mount -t hugetlbfs hugetlbfs /path/to/hugepages
# set the count of hugepages
$ sysctl vm.nr_hugepages=1024
# check hugepage size and count on host
$ cat /proc/meminfo
# run StratoVirt with backend-file
... -mem-path <filebackend_path>
The optional NUMA node element gives the opportunity to create a virtual machine with non-uniform memory access. A typical application is that one region of memory can be set as fast memory and another as slow memory. The configuration items (mem-path, mem-prealloc) given here cause the corresponding global configuration to be invalidated.
Each NUMA node is given a list of command line options, which are described in detail below.
You can use G or M as the unit for each memory zone. The host-nodes id must exist on the host OS.
The optional policies are default, preferred, bind and interleave. If not configured, default is used.
Note: The maximum number of NUMA nodes is 8.
The following command shows how to set NUMA node:
# The number of CPUs must equal the total number of CPUs assigned to the NUMA nodes.
-smp 8
# The memory size must equal the total memory of the NUMA nodes.
-m 4G
-object memory-backend-ram,size=2G,id=mem0,host-nodes=0-1,policy=bind
-object memory-backend-ram,size=2G,id=mem1,host-nodes=0-1,policy=bind
or
-object memory-backend-file,size=2G,id=mem0,host-nodes=0-1,policy=bind,mem-path=/path/to/file
-object memory-backend-memfd,size=2G,id=mem1,host-nodes=0-1,policy=bind,mem-prealloc=true
-numa node,nodeid=0,cpus=0-1:4-5,memdev=mem0
-numa node,nodeid=1,cpus=2-3:6-7,memdev=mem1
[-numa dist,src=0,dst=0,val=10]
[-numa dist,src=0,dst=1,val=20]
[-numa dist,src=1,dst=0,val=20]
[-numa dist,src=1,dst=1,val=10]
Detailed configuration instructions:
-object memory-backend-ram,size=<num[M|m|G|g]>,id=<memid>,policy={bind|default|preferred|interleave},host-nodes=<id>
-object memory-backend-file,size=<num[M|m|G|g]>,id=<memid>,policy={bind|default|preferred|interleave},host-nodes=<id>,mem-path=</path/to/file>[,dump-guest-core=<true|false>]
-object memory-backend-memfd,size=<num[M|m|G|g]>,id=<memid>[,host-nodes=0-1][,policy=bind][,mem-prealloc=true][,dump-guest-core=false]
-numa node[,nodeid=<node>][,cpus=<firstcpu>[-<lastcpus>][:<secondcpus>[-<lastcpus>]]][,memdev=<memid>]
-numa dist,src=<source>,dst=<destination>,val=<distance>
StratoVirt supports launching a PE or bzImage (x86_64 only) format Linux kernel (4.19) and can also set kernel parameters for the VM.
This allows you to give a path to the Linux kernel; the path can be either an absolute or a relative path.
The given kernel parameters will be parsed by the boot loader.
# cmdline
-kernel <kernel_path> \
-append <kernel cmdline parameters>
for example:
-append "console=ttyS0 rebook=k panic=1 pci=off tsc=reliable ipv6.disable=1"
StratoVirt also supports launching a VM with an initrd (boot loader initialized RAM disk).
If the path to an initrd image is configured, it will be loaded into RAM by the boot loader.
If you want to use initrd as rootfs, root=/dev/ram
and rdinit=/bin/sh
must be added in Kernel Parameters.
# cmdline
-initrd <initrd_path>
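For example, booting from an initrd used as rootfs (the paths are illustrative):
# cmdline
-kernel /path/to/vmlinux.bin \
-append "console=ttyS0 root=/dev/ram rdinit=/bin/sh" \
-initrd /path/to/initrd.img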
Users can set the global configuration using the -global parameter.
One property can be set:
-global pcie-root-port.fast-unplug={0|1}
StratoVirt supports outputting logs to stderr or to a log file.
You can enable StratoVirt's logging by:
# Output log to stderr
-D
# Output log to log file
-D <logfile_path>
StratoVirt's log level depends on the environment variable STRATOVIRT_LOG_LEVEL.
StratoVirt supports five log levels: trace, debug, info, warn, error. The default level is error.
If "-D" parameter is not set, logs are output to stderr by default.
StratoVirt supports running as a daemon.
# cmdline
-daemonize
When running StratoVirt as a daemon, you are not allowed to bind the serial port to stdio or output logs to stdio.
You can also save StratoVirt's pid number to a file by:
# cmdline
-pidfile <pidfile_path>
The SMBIOS specification defines data structures and the information that populates them to describe the system. Filling in these fields for each system enables system administrators to identify and manage these systems remotely.
# cmdline
# type 0: BIOS information, support version and release date string.
-smbios type=0[,vendor=str][,version=str][,date=str]
# type 1: System information, the information in this structure defines attributes of
# the overall system and is intended to be associated with the Component ID group of the system’s MIF.
-smbios type=1[,manufacturer=str][,version=str][,product=str][,serial=str][,uuid=str][,sku=str][,family=str]
# type 2: Baseboard information, the information in this structure defines attributes of a system baseboard
# (for example, a motherboard, planar, server blade, or other standard system module).
-smbios type=2[,manufacturer=str][,product=str][,version=str][,serial=str][,asset=str][,location=str]
# type 3: System Enclosure information, defines attributes of the system’s mechanical enclosure(s).
# For example, if a system included a separate enclosure for its peripheral devices,
# two structures would be returned: one for the main system enclosure and the second for the peripheral device enclosure.
-smbios type=3[,manufacturer=str][,version=str][,serial=str][,asset=str][,sku=str]
# type 4: Processor information, defines the attributes of a single processor;
# a separate structure instance is provided for each system processor socket/slot.
# For example, a system with an IntelDX2 processor would have a single structure instance
# while a system with an IntelSX2 processor would have a structure to describe the main CPU
# and a second structure to describe the 80487 co-processor
-smbios type=4[,sock_pfx=str][,manufacturer=str][,version=str][,serial=str][,asset=str][,part=str][,max-speed=%d][,current-speed=%d]
# type 17: Memory Device, this structure describes a single memory device.
-smbios type=17[,loc_pfx=str][,bank=str][,manufacturer=str][,serial=str][,asset=str][,part=str][,speed=%d]
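For example, setting the type 1 system information (all values are illustrative):
# cmdline
-smbios type=1,manufacturer=StratoVirt,product=VirtualMachine,version=1.0,serial=SN123456,uuid=12345678-1234-1234-1234-123456789abc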
For machine type "microvm", only virtio-mmio and legacy devices are supported. Maximum number of user creatable devices is 11 on x86_64 and 160 on aarch64.
For standard VM (machine type "q35" on x86_64, and "virt" on aarch64) , virtio-pci devices are supported instead of virtio-mmio devices. As for now pci bridges are not implemented yet, there is currently only one root bus named pcie.0. As a result, a total of 32 pci devices can be configured.
Iothread is used by devices to improve IO performance. StratoVirt will spawn some extra threads according to the iothread
configuration, and these threads can be used exclusively by devices to improve performance.
Note: iothread is strongly recommended if a specific device supports it, otherwise the main thread has the risk of getting stuck.
There is only one argument for iothread:
# cmdline
-object iothread,id=<iothread>
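For example, handing the IO of a virtio block device to a dedicated iothread (the ids and the drive are illustrative):
# cmdline
-object iothread,id=iothread1 \
-drive id=rootfs,file=/path/to/rootfs.img \
-device virtio-blk-pci,drive=rootfs,id=blk0,bus=pcie.0,addr=0x3,iothread=iothread1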
Virtio block device is a virtual block device, which processes read and write requests in the virtio queue from the guest.
Fourteen properties are supported for the virtio block device, including:
- id: the unique id of the drive.
- file: the path of the backend file (or block device) on the host.
- readonly: whether the block device is read-only or not.
- direct: open the backend file with O_DIRECT mode or not (optional). If not set, default is true.
- throttling.iops-total: limit of IO operations for the block device (optional).
- discard: unmap or ignore, which means on or off. If not set, default is ignore.
- detect-zeroes: unmap, on or off. unmap means it can free up disk space when discard is unmap; if discard is ignore, unmap of detect-zeroes is the same as on. If not set, default is off.
- format: the format of the image, raw or qcow2. If not set, default is raw. NB: currently only raw is supported for microvm.
- aio: the aio type, native, io_uring, or off. If not set, default is native if direct is true, otherwise default is off.
- serial: the serial number of the block device (optional).
- iothread: the iothread used to handle IO requests of this device (optional). If not set, default is none and the main thread is used.
For virtio-blk-pci, four more properties are required.
If you want to boot a VM with a virtio block device as rootfs, you should add root=DEVICE_NAME_IN_GUESTOS
in the kernel parameters. DEVICE_NAME_IN_GUESTOS ranges from vda to vdz in order.
# virtio mmio block device.
-drive id=<drive_id>,file=<path_on_host>[,readonly={on|off}][,direct={on|off}][,throttling.iops-total=<limit>][,discard={unmap|ignore}][,detect-zeroes={unmap|on|off}]
-device virtio-blk-device,drive=<drive_id>,id=<blkid>[,iothread=<iothread1>][,serial=<serial_num>]
# virtio pci block device.
-drive id=<drive_id>,file=<path_on_host>[,readonly={on|off}][,direct={on|off}][,throttling.iops-total=<limit>][,discard={unmap|ignore}][,detect-zeroes={unmap|on|off}]
-device virtio-blk-pci,id=<blk_id>,drive=<drive_id>,bus=<pcie.0>,addr=<0x3>[,multifunction={on|off}][,iothread=<iothread1>][,serial=<serial_num>][,num-queues=<N>][,bootindex=<N>][,queue-size=<queuesize>]
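For example, booting from a raw image that appears in the guest as /dev/vda (the paths and ids are illustrative):
# cmdline
-drive id=rootfs,file=/path/to/rootfs.img,readonly=off,direct=on \
-device virtio-blk-pci,drive=rootfs,id=blk0,bus=pcie.0,addr=0x3,bootindex=0 \
-append "console=ttyS0 root=/dev/vda rw"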
StratoVirt also supports vhost-user-blk-pci to achieve higher storage performance, but only the standard VM supports it.
You can use it by adding a new device; the vhost-user-blk-pci device supports one more property than virtio-blk-pci.
# vhost user blk pci device
-chardev socket,id=<chardevid>,path=<socket_path>
-device vhost-user-blk-pci,id=<blk_id>,chardev=<chardev_id>,bus=<pcie.0>,addr=<0x3>[,num-queues=<N>][,bootindex=<N>][,queue-size=<queuesize>]
Note: More features to be supported.
Shared memory ('mem-share=on' on the -machine option) and hugepages ('-mem-path ...') must be enabled when using vhost-user-blk-pci.
Vhost-user-blk-pci uses SPDK as the vhost backend, so you need to start SPDK before starting StratoVirt.
How to start and configure spdk?
# Get code and compile spdk
$ git clone https://github.com/spdk/spdk.git
$ cd spdk
$ git submodule update --init
$ ./scripts/pkgdep.sh
$ ./configure
$ make
# Test spdk environment
$ ./test/unit/unittest.sh
# Setup spdk
$ HUGEMEM=2048 ./scripts/setup.sh
# Mount hugepages; you need to add '-mem-path /dev/hugepages' to the StratoVirt config
$ mount -t hugetlbfs hugetlbfs /dev/hugepages
# Assign the number of the hugepage
$ sysctl vm.nr_hugepages=1024
# Start vhost, alloc 1024MB memory, default socket path is /var/tmp/spdk.sock, 0x3 means we use cpu cores 0 and 1 (cpumask 0x3)
$ build/bin/vhost --logflag vhost_blk -S /var/tmp -s 1024 -m 0x3 &
# Create a malloc bdev whose size is 128MB with a block size of 512B
$ ./scripts/rpc.py bdev_malloc_create 128 512 -b Malloc0
# Create a vhost-blk device exposing Malloc0 bdev, the I/O polling will be pinned to the CPU 0 (cpumask 0x1).
$ ./scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 spdk.sock Malloc0
A config template to start StratoVirt with vhost-user-blk-pci is shown below:
stratovirt \
-machine q35,mem-share=on \
-smp 1 \
-kernel /path-to/std-vmlinuxz \
-mem-path /dev/hugepages \
-m 1G \
-append "console=ttyS0 reboot=k panic=1 root=/dev/vda rw" \
-drive file=/path-to/OVMF_CODE.fd,if=pflash,unit=0,readonly=true \
-drive file=/path-to/OVMF_VARS.fd,if=pflash,unit=1 \
-drive file=/path-to/openEuler.img,id=rootfs,readonly=off,direct=off \
-device virtio-blk-pci,drive=rootfs,id=blk0,bus=pcie.0,addr=0x2,bootindex=0 \
-chardev socket,id=spdk_vhost_blk0,path=/var/tmp/spdk.sock \
-device vhost-user-blk-pci,id=blk1,chardev=spdk_vhost_blk0,bus=pcie.0,addr=0x3 \
-qmp unix:/path-to/stratovirt.socket,server,nowait \
-serial stdio
Virtio-net is a virtual Ethernet card in VM. It can enable the network capability of VM.
Six properties are supported for netdev. The tap device can be specified by either fd or ifname; if both of them are given,
the tap device would be created according to ifname.
Eight properties are supported for virtio-net-device or virtio-net-pci.
Three more properties are supported for the virtio pci net device.
# virtio mmio net device
-netdev tap,id=<netdevid>,ifname=<host_dev_name>
-device virtio-net-device,id=<net_id>,netdev=<netdev_id>[,iothread=<iothread1>][,mac=<macaddr>]
# virtio pci net device
-netdev tap,id=<netdevid>,ifname=<host_dev_name>[,queues=<N>]
-device virtio-net-pci,id=<net_id>,netdev=<netdev_id>,bus=<pcie.0>,addr=<0x2>[,multifunction={on|off}][,iothread=<iothread1>][,mac=<macaddr>][,mq={on|off}][,queue-size=<queuesize>]
StratoVirt also supports vhost-net to get higher performance in networking. It can be enabled by
setting the vhost property, and one more property is supported for the vhost-net device:
- vhostfd: the fd of the vhost-net device, which only takes effect when vhost=on. If this argument is not
given when vhost=on, StratoVirt gets it by opening "/dev/vhost-net" automatically.
# virtio mmio net device
-netdev tap,id=<netdevid>,ifname=<host_dev_name>[,vhost=on[,vhostfd=<N>]]
-device virtio-net-device,id=<net_id>,netdev=<netdev_id>[,iothread=<iothread1>][,mac=<macaddr>]
# virtio pci net device
-netdev tap,id=<netdevid>,ifname=<host_dev_name>[,vhost=on[,vhostfd=<N>,queues=<N>]]
-device virtio-net-pci,id=<net_id>,netdev=<netdev_id>,bus=<pcie.0>,addr=<0x2>[,multifunction={on|off}][,iothread=<iothread1>][,mac=<macaddr>][,mq={on|off}]
StratoVirt also supports vhost-user net to get higher performance via ovs-dpdk. Currently, only the virtio pci net device supports vhost-user net. Shared memory ('mem-share=on' on the -machine option) and hugepages ('-mem-path ...') must be enabled when using vhost-user net.
# virtio pci net device
-chardev socket,id=chardevid,path=socket_path
-netdev vhost-user,id=<netdevid>,chardev=<chardevid>[,queues=<N>]
-device virtio-net-pci,id=<net_id>,netdev=<netdev_id>,bus=<pcie.0>,addr=<0x2>[,multifunction={on|off}][,iothread=<iothread1>][,mac=<macaddr>][,mq={on|off}]
How to set a tap device?
# In host
$ brctl addbr qbr0
$ ip tuntap add tap0 mode tap
$ brctl addif qbr0 tap0
$ ip link set qbr0 up
$ ip link set tap0 up
$ ip address add 1.1.1.1/24 dev qbr0
# Run StratoVirt
... -netdev tap,id=netdevid,ifname=tap0 ...
# In guest
$ ip link set eth0 up
$ ip addr add 1.1.1.2/24 dev eth0
# Now network is reachable
$ ping 1.1.1.1
Note: If you want to use multiple queues, create a tap device as follows:
# In host
$ brctl addbr qbr0
$ ip tuntap add tap1 mode tap multi_queue
$ brctl addif qbr0 tap1
$ ip link set qbr0 up
$ ip link set tap1 up
$ ip address add 1.1.1.1/24 dev qbr0
How to create port by ovs-dpdk?
# Start open vSwitch daemons
$ ovs-ctl start
# Initialize database
$ ovs-vsctl init
# Dpdk init
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# Set up dpdk lcore mask
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0xf
# Set up hugepage memory for dpdk-socket-mem (in MB)
$ ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
# Set up the PMD (Poll Mode Driver) cpu mask
$ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xf
# Add bridge
$ ovs-vsctl add-br ovs_br -- set bridge ovs_br datapath_type=netdev
# Add port
$ ovs-vsctl add-port ovs_br port1 -- set Interface port1 type=dpdkvhostuser
$ ovs-vsctl add-port ovs_br port2 -- set Interface port2 type=dpdkvhostuser
# Set num of rxq/txq
$ ovs-vsctl set Interface port1 options:n_rxq=num,n_txq=num
$ ovs-vsctl set Interface port2 options:n_rxq=num,n_txq=num
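After the ports are created, StratoVirt can connect to them through their vhost-user sockets. The socket path below is an assumption based on the default Open vSwitch run directory; adjust it to your installation:
# Run StratoVirt with a vhost-user netdev backed by port1
-chardev socket,id=chardev0,path=/usr/local/var/run/openvswitch/port1 \
-netdev vhost-user,id=netdev0,chardev=chardev0,queues=2 \
-device virtio-net-pci,id=net0,netdev=netdev0,bus=pcie.0,addr=0x2,mq=on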
Virtio console device is a simple device for data transfer between the guest and the host. A console device may have one or more ports. These ports could be generic ports or console ports. Character devices /dev/vport*p* in the linux guest will be created once a port is configured (whether it is a console port or not). Character devices /dev/hvc0 through /dev/hvc7 in the linux guest will be created once console ports are configured. To set up the virtio console, a chardev for redirection is required. See section 2.12 Chardev for details.
Three properties can be set for virtconsole (console port) and virtserialport (generic port).
For virtio-serial-pci, four more properties are required.
For virtio-serial-device, two more properties are required.
# virtio mmio device using console port
-device virtio-serial-device[,id=<virtio-serial0>]
-chardev socket,path=<socket_path>,id=<virtioconsole1>,server,nowait
-device virtconsole,id=<console_id>,chardev=<virtioconsole1>,nr=0
# virtio mmio device using generic port
-device virtio-serial-device[,id=<virtio-serial0>]
-chardev socket,path=<socket_path>,id=<virtioserialport1>,server,nowait
-device virtserialport,id=<serialport_id>,chardev=<virtioserialport1>,nr=0
# virtio pci device
-device virtio-serial-pci,id=<virtio-serial0>,bus=<pcie.0>,addr=<0x3>[,multifunction={on|off},max_ports=<number>]
-chardev socket,path=<socket_path0>,id=<virtioconsole0>,server,nowait
-device virtconsole,id=<portid0>,chardev=<virtioconsole0>,nr=0
-chardev socket,path=<socket_path1>,id=<virtioconsole1>,server,nowait
-device virtserialport,id=<portid1>,chardev=<virtioconsole1>,nr=1
NB: Currently, only one virtio console device is supported. Only one port is supported in microvm.
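For example, assuming the console port above was created with a socket chardev, you can talk to it from the host with a generic socket tool such as socat (socat is not part of StratoVirt; the paths are illustrative):
# In host: connect to the chardev socket
$ socat - UNIX-CONNECT:<socket_path>
# In guest: the console port appears as /dev/hvc0
$ echo hello > /dev/hvc0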
Virtio vsock is a host/guest communication device like virtio console, but it has higher performance.
To use it, the guest kernel needs vsock support, and you need to modprobe vhost_vsock on the host.
Three properties can be set for the virtio vsock device; guest-cid must be a unique number in the range 3 <= guest_cid < u32::MAX.
For vhost-vsock-pci, two more properties are required.
# virtio mmio device.
-device vhost-vsock-device,id=<vsock_id>,guest-cid=<N>
# virtio pci device.
-device vhost-vsock-pci,id=<vsock_id>,guest-cid=<N>,bus=<pcie.0>,addr=<0x3>[,multifunction={on|off}]
You can only set one virtio vsock device for one VM.
You can also use nc-vsock
to test virtio-vsock.
# In guest
$ nc-vsock -l port_num
# In host
$ nc-vsock guest_cid port_num
Serial is a legacy device for the VM; it is a communication interface that bridges the guest and the host.
Commonly, we use serial as ttyS0 to output console message in StratoVirt.
In StratoVirt, there are two ways to set serial and bind it with host's character device. NB: We can only set one serial.
To use the first method, chardev for redirection will be required. See section 2.12 Chardev for details.
# add a chardev and redirect the serial port to chardev
-chardev backend,id=<chardev_id>[,path=<path>,server,nowait]
-serial chardev:chardev_id
Or you can simply use -serial dev
to bind serial with character device.
# simplified redirect methods
-serial stdio
-serial pty
-serial socket,path=<socket_path>,server,nowait
-serial file,path=<file_path>
Balloon is a virtio device; it offers a flexible memory mechanism for the VM.
Two properties are supported for virtio-balloon.
For virtio-balloon-pci, two more properties are required.
# virtio mmio balloon device
-device virtio-balloon-device[,deflate-on-oom={true|false}][,free-page-reporting={true|false}]
# virtio pci balloon device
-device virtio-balloon-pci,id=<balloon_id>,bus=<pcie.0>,addr=<0x4>[,deflate-on-oom={true|false}][,free-page-reporting={true|false}][,multifunction={on|off}]
Note: avoid using balloon devices together with vfio devices; the balloon device does not work when guest memory is backed by hugepages. The balloon memory size must be an integer multiple of the guest page size.
Virtio rng is a paravirtualized random number generator device, it provides a hardware rng device to the guest.
To use it, the guest kernel needs the virtio-rng frontend driver, and the host needs a random-number source file (see filename below).
Five properties are supported for virtio-rng.
For virtio-rng-pci, two more properties are required.
NB: max-bytes and period together limit the rate of random data supplied to the guest.
# virtio mmio rng device
-object rng-random,id=<objrng0>,filename=<random_file_path>
-device virtio-rng-device,rng=<objrng0>,max-bytes=<1234>,period=<1000>
# virtio pci rng device
-object rng-random,id=<objrng0>,filename=<random_file_path>
-device virtio-rng-pci,id=<rng_id>,rng=<objrng0>[,max-bytes=<1234>][,period=<1000>],bus=<pcie.0>,addr=<0x1>[,multifunction={on|off}]
A PCI Express Port on a Root Complex that maps a portion of a Hierarchy through an associated virtual PCI-PCI Bridge.
Four parameters are supported for pcie root port.
-device pcie-root-port,id=<pcie.1>,port=<0x1>,bus=<pcie.0>,addr=<0x1>[,multifunction={on|off}]
The slot number of the device attached to the root port must be 0.
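For example, attaching a virtio block device behind a root port (the ids and the drive are illustrative); per the note above, the device on the new bus pcie.1 uses slot 0:
-device pcie-root-port,id=pcie.1,port=0x1,bus=pcie.0,addr=0x1 \
-drive id=rootfs,file=/path/to/rootfs.img \
-device virtio-blk-pci,drive=rootfs,id=blk0,bus=pcie.1,addr=0x0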
PFlash is a virtualized flash device; it provides code storage and data storage for EDK2 during standard boot.
Usually, two PFlash devices are added to the main board. The first PFlash device is used to store binary code for the EDK2 firmware, so this device is usually read-only. The second device is used to store configuration information related to standard boot, so this device is usually readable and writable. You can check out the boot documentation to learn how to get the EDK2 firmware files.
Four properties can be set for the PFlash device, including:
- unit: the index of the PFlash device, 0 <= unit <= 1. Note that the unit of the PFlash device which stores binary code should be 0, and the unit of the PFlash device which stores boot information should be 1.
# cmdline
-drive file=<pflash_path>,if=pflash,unit={0|1}[,readonly={true|false}]
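For example, with EDK2 firmware files (the paths are illustrative):
# cmdline
-drive file=/path-to/OVMF_CODE.fd,if=pflash,unit=0,readonly=true \
-drive file=/path-to/OVMF_VARS.fd,if=pflash,unit=1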
The VFIO driver is an IOMMU/device-agnostic framework for exposing direct device access to userspace in a secure, IOMMU-protected environment. Virtual machines often make use of direct device access when configured for the highest possible I/O performance.
Four properties are supported for the VFIO device.
-device vfio-pci,id=<vfio_id>,host=<0000:1a:00.3>,bus=<pcie.0>,addr=<0x03>[,multifunction={on|off}]
Note: the kernel must contain physical device drivers, otherwise it cannot be loaded normally.
See VFIO for more details.
The type of chardev backend could be: stdio, pty, socket and file(output only).
Five properties can be set for chardev.
# redirect methods
-chardev stdio,id=<chardev_id>
-chardev pty,id=<chardev_id>
-chardev socket,id=<chardev_id>,path=<socket_path>[,server,nowait]
-chardev file,id=<chardev_id>,path=<file_path>
StratoVirt supports the XHCI USB controller; you can attach USB devices under the XHCI USB controller.
The USB controller is a PCI device to which USB devices can be attached.
Three properties can be set for USB controller.
-device nec-usb-xhci,id=<xhci>,bus=<pcie.0>,addr=<0xa>[,iothread=<iothread1>]
Note: Only one USB controller can be configured, USB controller can only support USB keyboard and USB tablet.
The USB keyboard is a keyboard that uses the USB protocol. It should be attached to USB controller. Keypad and led are not supported yet.
One property can be set for USB Keyboard.
-device usb-kbd,id=<kbd>
Note: Only one keyboard can be configured.
The USB tablet is a pointer device which uses absolute coordinates. It should be attached to the USB controller.
One property can be set for USB Tablet.
-device usb-tablet,id=<tablet>
Note: Only one tablet can be configured.
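For example, an XHCI controller with a USB keyboard and a USB tablet attached (the ids are illustrative):
-device nec-usb-xhci,id=xhci,bus=pcie.0,addr=0xa \
-device usb-kbd,id=kbd \
-device usb-tablet,id=tablet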
The USB camera is a video camera device based on the USB video class protocol. It should be attached to the USB controller.
Three properties can be set for the USB camera:
- id: the unique device id.
- backend: the backend type, v4l2 or demo.
- path: the path of the host camera device, required for v4l2 but not for demo, e.g. /dev/video0.
.-device usb-camera,id=<camera>,backend="v4l2",path="/dev/video0"
-device usb-camera,id=<camera>,backend="demo"
Note: Only one camera can be configured.
Please see the 4. Build with features if you want to enable usb-camera.
The USB storage device is based on the classic bulk-only transport protocol. It should be attached to the USB controller.
Three properties can be set for USB storage, including:
- media: disk or cdrom. If not set, default is disk.
-device usb-storage,drive=<drive_id>,id=<storage_id>
-drive id=<drive_id>,file=<path_on_host>[,media={disk|cdrom}],aio=off,direct=false
Note: "aio=off,direct=false" must be configured and other aio/direct values are not supported.
The USB host device is based on the USB protocol and passes a host USB device through to the guest. It should be attached to the USB controller.
Six properties can be set for USB Host.
Pass through the host device identified by bus and addr:
-device usb-host,id=<hostid>,hostbus=<bus>,hostaddr=<addr>[,isobufs=<number>][,isobsize=<size>]
Pass through the host device identified by bus and physical port:
-device usb-host,id=<hostid>,hostbus=<bus>,hostport=<port>[,isobufs=<number>][,isobsize=<size>]
Pass through the host device identified by the vendor and product ID:
-device usb-host,id=<hostid>,vendorid=<vendor>,productid=<product>[,isobufs=<number>][,isobsize=<size>]
Note:
Please see the 4. Build with features if you want to enable usb-host.
The virtio SCSI controller is a PCI device to which SCSI devices can be attached.
Six properties can be set for Virtio-Scsi controller.
-device virtio-scsi-pci,id=<scsi_id>,bus=<pcie.0>,addr=<0x3>[,multifunction={on|off}][,iothread=<iothread1>][,num-queues=<N>][,queue-size=<queuesize>]
Virtio SCSI hard disk is a virtual block device, which processes read and write requests in the virtio queue from the guest.
Ten properties can be set for virtio-scsi hd, including:
- direct: open the backend file with O_DIRECT mode or not (optional). If not set, default is true.
- aio: the aio type, native, io_uring, or off. If not set, default is native if direct is true, otherwise default is off.
-device virtio-scsi-pci,bus=pcie.1,addr=0x0,id=scsi0[,multifunction=on,iothread=iothread1,num-queues=4]
-drive file=path_on_host,id=drive-scsi0-0-0-0[,readonly=true,aio=native,direct=true]
-device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0[,serial=123456,bootindex=1]
VNC provides users with a way to log in to virtual machines remotely.
In order to use VNC, the IP and port must be configured. The IP address can be set to a specific value or to 0.0.0.0,
which means listening on all IP addresses of the host network cards.
-vnc 0.0.0.0:0
-vnc <IP:port>
TLS encryption is optional. Three properties can be set for encrypted transmission:
-object tls-creds-x509,id=<vnc-tls-creds0>,dir=</etc/pki/vnc>
Authentication is optional; it depends on the saslauthd service. To use this function, you must ensure that the saslauthd service is running normally, and configure the supported authentication mechanism in /etc/sasl2/stratovirt.conf.
Sample configuration for file /etc/sasl2/stratovirt.conf
# Using the saslauthd service
pwcheck_method: saslauthd
# Authentication mechanism
mech_list: plain
Three properties can be set for Authentication:
-object authz-simple,id=authz0,identity=username
Sample Configuration:
-object authz-simple,id=authz0,identity=username
-object tls-creds-x509,id=vnc-tls-creds0,dir=/etc/pki/vnc
-vnc 0.0.0.0:0,tls-creds=vnc-tls-creds0,sasl=on,sasl-authz=authz0
Note: 1. Only one client can be connected at a time; subsequent client connections will fail. 2. TLS encryption can be configured separately, but authentication must be used together with encryption.
Please see the 4. Build with features if you want to enable VNC.
Virtio-fs is a shared file system that lets virtual machines access a directory tree on the host. Unlike existing approaches, it is designed to offer local file system semantics and performance.
Three properties can be set for virtio fs device.
-chardev socket,id=<chardevid>,path=<socket_path>
-device vhost-user-fs-pci,id=<device id>,chardev=<chardevid>,tag=<mount tag>
The vhost-user filesystem consists of the virtio fs device and the vhost-user server, which connects with the vhost-user client in StratoVirt through a socket.
Seven properties are supported for vhost_user_fs, including the sandbox and capability settings:
- The sandbox uses chroot(2) to make the shared directory tree its root when the process does not have permission to create namespaces itself; otherwise pivot_root(2) is used to make the shared directory tree its root.
- modcaps modifies the capability set; for example, --modcaps=-LEASE,+KILL stands for deleting CAP_LEASE and adding CAP_KILL. The capability list does not need the CAP_ prefix.
How to start the vhost_user_fs process?
host# ./path/to/vhost_user_fs -source /tmp/shared -socket-path /tmp/shared/virtio_fs.sock -D
host# stratovirt \
-machine type=q35,dump-guest-core=off,mem-share=on \
-smp 1 \
-m 1024 \
-kernel <your image> \
-append "root=/dev/vda console=ttyS0 reboot=k panic=1 random.trust_cpu=on rw" \
-drive file=<your file path>,if=pflash,unit=0 \
-qmp unix:/tmp/qmp2.socket,server,nowait \
-drive id=drive_id,file=<your image>,direct=on \
-device virtio-blk-pci,drive=drive_id,bus=pcie.0,addr=1,id=blk -serial stdio -disable-seccomp \
-chardev socket,id=virtio_fs,path=/tmp/shared/virtio_fs.sock,server,nowait \
-device vhost-user-fs-pci,id=device_id,chardev=virtio_fs,tag=myfs,bus=pcie.0,addr=0x7
guest# mount -t virtiofs myfs /mnt
virtio-gpu is a virtualized graphics card that lets virtual machines display graphics. It is usually used in conjunction with VNC; the final image is rendered to the VNC client.
Sample Configuration:
-device virtio-gpu-pci,id=<your id>,bus=pcie.0,addr=0x2.0x0[,max_outputs=<your max_outputs>][,edid=true|false][,xres=<your expected width>][,yres=<your expected height>][,max_hostmem=<max host memory can use>]
In addition to the required slot information, five optional properties are supported for virtio-gpu.
Note:
Please see the 4. Build with features if you want to enable virtio-gpu.
ivshmem-scream is a virtual sound card that relies on inter-VM shared memory to transmit audio data.
Nine properties are supported for the ivshmem-scream device, including:
- interface: the audio interface to use, ALSA, PulseAudio or Demo.
- memdev: the id of a memory-backend-ram object; its share property must be on.
Sample Configuration:
-device ivshmem-scream,id=<scream_id>,memdev=<object_id>,interface=<interfaces>[,playback=<playback path>][,record=<record path>],bus=pcie.0,addr=0x2.0x0
-object memory-backend-ram,id=<object_id>,share=on,size=2M
Please see the 4. Build with features if you want to enable scream.
Ramfb is a simple display device. It is used in the Windows system on aarch64.
Two properties are supported for ramfb device.
Sample Configuration:
-device ramfb,id=<ramfb id>[,install=true|false]
Note: Only supported on aarch64.
Please see the 4. Build with features if you want to enable ramfb.
Users can specify the configuration file which lists events to trace.
One property can be set:
-trace events=<file>
StratoVirt uses seccomp(2) to limit the syscalls of the StratoVirt process by default. This has a slight impact on StratoVirt's performance.
The number of allowed syscalls on x86_64:
Number of Syscalls | GNU Toolchain | MUSL Toolchain
---|---|---
microvm | 51 | 50
q35 | 85 | 65
The number of allowed syscalls on aarch64:
Number of Syscalls | GNU Toolchain | MUSL Toolchain
---|---|---
microvm | 49 | 49
virt | 84 | 62
If you want to disable seccomp, you can run StratoVirt with -disable-seccomp
.
# cmdline
-disable-seccomp
StratoVirt supports taking a snapshot of a paused VM as a VM template. This template can be used to warm start a new VM. Warm start skips the kernel boot stage and userspace initialization stage to boot the VM in a very short time.
Restore from VM template with below command:
$ ./stratovirt \
-machine microvm \
-kernel path/to/vmlinux.bin \
-append "console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda" \
-drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \
-device virtio-blk-device,drive=rootfs,id=rootfs \
-qmp unix:path/to/socket,server,nowait \
-serial stdio \
-incoming file:path/to/template
See Snapshot and Restore for details.
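A template is typically created before restoring: the VM is paused and its state is migrated to a file over QMP. The sketch below assumes the stop and migrate commands described in that document; the paths are illustrative:
# In the QMP session
{"execute": "stop"}
{"execute": "migrate", "arguments": {"uri": "file:path/to/template"}}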
Ozone is a lightweight secure sandbox for StratoVirt; it provides a secure environment for StratoVirt by limiting its resources using namespaces. Please run ozone with root permission.
Ozone can be launched by the following commands:
$ ./ozone \
-name stratovirt_ozone \
-exec_file /path/to/stratovirt \
-gid 100 \
-uid 100 \
-capability [CAP_*] \
-netns /path/to/network_name_space \
-source /path/to/source_files \
-numa numa_node \
-cgroup <controller1>=<value1>,<controller2>=<value2> \
[-clean-resource] \
-- \
<arguments for launching stratovirt>
About the arguments:
- name: the name of ozone; it should be unique.
- exec_file: path to the StratoVirt binary file. NB: it should be a statically linked binary file.
- uid: the user id.
- gid: the group id.
- capability: set the capabilities of the ozone environment. If not set, all capabilities are forbidden.
- netns: path to an existing network namespace.
- source: path to the source files, such as rootfs and vmlinux.
- clean-resource: a flag to clean up resources.
- numa: the numa node; this argument must be configured if cpuset.cpus is set.
- cgroup: set cgroup controller values. Supported controllers: cpuset.cpus and memory.limit_in_bytes.
- --: these two dashes are used to split the args; the args that follow are used to launch StratoVirt.
As ozone mounts a directory as its root directory, after ozone is launched, the directory "/srv/zozne/{exec_file}/{name}" will be created. (Here, exec_file is the executable binary file, usually stratovirt, while name is the name of ozone given by the user; its length should be no more than 255 bytes.) In order to run ozone normally, please make sure that the directory "/srv/zozne/{exec_file}/{name}" does not exist before launching ozone.
On top of that, inside ozone the path-related arguments are different: they all refer to the current (./) directory.
A network namespace can be created by the following command, here with the name "mynet":
$ sudo ip netns add mynet
After creating, there is a file named mynet
in /var/run/netns
.
The following example illustrates how to configure ozone under the netns mynet, running on CPUs 4-5 with a memory limit of 1000000 bytes.
$ ./ozone \
-name stratovirt_ozone \
-exec_file /path/to/stratovirt \
-gid 100 \
-uid 100 \
-capability CAP_CHOWN \
-netns /var/run/netns/mynet \
-source /path/to/vmlinux.bin /path/to/rootfs \
-numa 0 \
-cgroup cpuset.cpus=4-5 memory.limit_in_bytes=1000000 \
-- \
-kernel ./vmlinux.bin \
-append console=ttyS0 root=/dev/vda reboot=k panic=1 rw \
-drive file=./rootfs,id=rootfs,readonly=off \
-device virtio-blk-device,drive=rootfs,id=rootfs \
-qmp unix:./stratovirt.socket,server,nowait \
-serial stdio
Once the StratoVirt process exits, the following command can be used to clean up the environment.
$ ./ozone \
-name stratovirt_ozone \
-exec_file /path/to/stratovirt \
-gid 100 \
-uid 100 \
-netns /path/to/network_name_space \
-source /path/to/vmlinux.bin /path/to/rootfs \
-clean-resource
Libvirt launches StratoVirt by creating cmdlines. But some of these commands, such as cpu, overcommit, uuid, no-user-config, nodefaults, sandbox, msg, rtc, no-shutdown, nographic, realtime, display, usb, mem-prealloc and boot, are not supported by StratoVirt. To launch StratoVirt from libvirt successfully, StratoVirt puts these arguments into a white list. However, these arguments have no effect.
Apart from the above commands, some arguments play the same role, for example 'format' and 'bootindex' for virtio-blk; 'chassis' for pcie-root-port; 'sockets', 'cores' and 'threads' for smp; 'accel' and 'usb' for machine; and 'format' for the pflash device.
Currently, measurement of guest boot-up time is supported. The guest kernel writes different
values to a specific IO/MMIO region; each write traps to stratovirt, which records the timestamp
of kernel start or kernel boot complete.
See Debug_Boot_Time for more details.