Fix translation but not execution of edge TB
This commit is contained in:
commit b0c8272465
@@ -256,6 +256,7 @@ build-user:
   variables:
     IMAGE: debian-all-test-cross
     CONFIGURE_ARGS: --disable-tools --disable-system
+      --target-list-exclude=alpha-linux-user,sh4-linux-user
     MAKE_CHECK_ARGS: check-tcg
 
 build-user-static:
@@ -265,6 +266,18 @@ build-user-static:
   variables:
     IMAGE: debian-all-test-cross
     CONFIGURE_ARGS: --disable-tools --disable-system --static
+      --target-list-exclude=alpha-linux-user,sh4-linux-user
+    MAKE_CHECK_ARGS: check-tcg
+
+# targets stuck on older compilers
+build-legacy:
+  extends: .native_build_job_template
+  needs:
+    job: amd64-debian-legacy-cross-container
+  variables:
+    IMAGE: debian-legacy-test-cross
+    TARGETS: alpha-linux-user alpha-softmmu sh4-linux-user
+    CONFIGURE_ARGS: --disable-tools
     MAKE_CHECK_ARGS: check-tcg
 
 build-user-hexagon:
@@ -277,7 +290,9 @@ build-user-hexagon:
     CONFIGURE_ARGS: --disable-tools --disable-docs --enable-debug-tcg
     MAKE_CHECK_ARGS: check-tcg
 
-# Only build the softmmu targets we have check-tcg tests for
+# Build the softmmu targets we have check-tcg tests and compilers in
+# our omnibus all-test-cross container. Those targets that haven't got
+# Debian cross compiler support need to use special containers.
 build-some-softmmu:
   extends: .native_build_job_template
   needs:
@@ -285,7 +300,18 @@ build-some-softmmu:
   variables:
     IMAGE: debian-all-test-cross
     CONFIGURE_ARGS: --disable-tools --enable-debug
-    TARGETS: xtensa-softmmu arm-softmmu aarch64-softmmu alpha-softmmu
+    TARGETS: arm-softmmu aarch64-softmmu i386-softmmu riscv64-softmmu
+      s390x-softmmu x86_64-softmmu
+    MAKE_CHECK_ARGS: check-tcg
+
+build-loongarch64:
+  extends: .native_build_job_template
+  needs:
+    job: loongarch-debian-cross-container
+  variables:
+    IMAGE: debian-loongarch-cross
+    CONFIGURE_ARGS: --disable-tools --enable-debug
+    TARGETS: loongarch64-linux-user loongarch64-softmmu
     MAKE_CHECK_ARGS: check-tcg
 
 # We build tricore in a very minimal tricore only container
@@ -318,7 +344,7 @@ clang-user:
   variables:
     IMAGE: debian-all-test-cross
     CONFIGURE_ARGS: --cc=clang --cxx=clang++ --disable-system
-      --target-list-exclude=microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
+      --target-list-exclude=alpha-linux-user,microblazeel-linux-user,aarch64_be-linux-user,i386-linux-user,m68k-linux-user,mipsn32el-linux-user,xtensaeb-linux-user
       --extra-cflags=-fsanitize=undefined --extra-cflags=-fno-sanitize-recover=undefined
     MAKE_CHECK_ARGS: check-unit check-tcg
 
@@ -505,7 +531,7 @@ build-tci:
   variables:
     IMAGE: debian-all-test-cross
   script:
-    - TARGETS="aarch64 alpha arm hppa m68k microblaze ppc64 s390x x86_64"
+    - TARGETS="aarch64 arm hppa m68k microblaze ppc64 s390x x86_64"
    - mkdir build
    - cd build
    - ../configure --enable-tcg-interpreter --disable-docs --disable-gtk --disable-vnc
@@ -1,9 +1,3 @@
-alpha-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-alpha-cross
-
 amd64-debian-cross-container:
   extends: .container_job_template
   stage: containers
@@ -16,6 +10,12 @@ amd64-debian-user-cross-container:
   variables:
     NAME: debian-all-test-cross
 
+amd64-debian-legacy-cross-container:
+  extends: .container_job_template
+  stage: containers
+  variables:
+    NAME: debian-legacy-test-cross
+
 arm64-debian-cross-container:
   extends: .container_job_template
   stage: containers
@@ -40,23 +40,11 @@ hexagon-cross-container:
   variables:
     NAME: debian-hexagon-cross
 
-hppa-debian-cross-container:
+loongarch-debian-cross-container:
   extends: .container_job_template
   stage: containers
   variables:
-    NAME: debian-hppa-cross
-
-m68k-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-m68k-cross
-
-mips64-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-mips64-cross
+    NAME: debian-loongarch-cross
 
 mips64el-debian-cross-container:
   extends: .container_job_template
@@ -64,24 +52,12 @@ mips64el-debian-cross-container:
   variables:
     NAME: debian-mips64el-cross
 
-mips-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-mips-cross
-
 mipsel-debian-cross-container:
   extends: .container_job_template
   stage: containers
   variables:
     NAME: debian-mipsel-cross
 
-powerpc-test-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-powerpc-test-cross
-
 ppc64el-debian-cross-container:
   extends: .container_job_template
   stage: containers
@@ -97,31 +73,12 @@ riscv64-debian-cross-container:
     NAME: debian-riscv64-cross
     QEMU_JOB_OPTIONAL: 1
 
-# we can however build TCG tests using a non-sid base
-riscv64-debian-test-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-riscv64-test-cross
-
 s390x-debian-cross-container:
   extends: .container_job_template
   stage: containers
   variables:
     NAME: debian-s390x-cross
 
-sh4-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-sh4-cross
-
-sparc64-debian-cross-container:
-  extends: .container_job_template
-  stage: containers
-  variables:
-    NAME: debian-sparc64-cross
-
 tricore-debian-cross-container:
   extends: .container_job_template
   stage: containers
@@ -165,7 +165,7 @@ cross-win32-system:
     job: win32-fedora-cross-container
   variables:
     IMAGE: fedora-win32-cross
-    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
+    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
     CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
                         microblazeel-softmmu mips64el-softmmu nios2-softmmu
   artifacts:
@@ -179,7 +179,7 @@ cross-win64-system:
     job: win64-fedora-cross-container
   variables:
     IMAGE: fedora-win64-cross
-    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal
+    EXTRA_CONFIGURE_OPTS: --enable-fdt=internal --disable-plugins
     CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu
                         m68k-softmmu microblazeel-softmmu nios2-softmmu
                         or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu

@@ -72,6 +72,7 @@
   - .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
       bison diffutils flex
       git grep make sed
+      $MINGW_TARGET-binutils
       $MINGW_TARGET-capstone
       $MINGW_TARGET-ccache
       $MINGW_TARGET-curl
.mailmap

@@ -30,10 +30,12 @@ malc <av1474@comtv.ru> malc <malc@c046a42c-6fe2-441c-8c8c-71466251a162>
 # Corrupted Author fields
 Aaron Larson <alarson@ddci.com> alarson@ddci.com
 Andreas Färber <andreas.faerber@web.de> Andreas Färber <andreas.faerber>
+fanwenjie <fanwj@mail.ustc.edu.cn> fanwj@mail.ustc.edu.cn <fanwj@mail.ustc.edu.cn>
 Jason Wang <jasowang@redhat.com> Jason Wang <jasowang>
 Marek Dolata <mkdolata@us.ibm.com> mkdolata@us.ibm.com <mkdolata@us.ibm.com>
 Michael Ellerman <mpe@ellerman.id.au> michael@ozlabs.org <michael@ozlabs.org>
 Nick Hudson <hnick@vmware.com> hnick@vmware.com <hnick@vmware.com>
+Timothée Cocault <timothee.cocault@gmail.com> timothee.cocault@gmail.com <timothee.cocault@gmail.com>
 
 # There is also a:
 # (no author) <(no author)@c046a42c-6fe2-441c-8c8c-71466251a162>

@@ -11,6 +11,9 @@ config OPENGL
 config X11
     bool
 
+config PIXMAN
+    bool
+
 config SPICE
     bool
 
@@ -46,3 +49,6 @@ config FUZZ
 config VFIO_USER_SERVER_ALLOWED
     bool
     imply VFIO_USER_SERVER
+
+config HV_BALLOON_POSSIBLE
+    bool
MAINTAINERS

@@ -131,6 +131,17 @@ K: ^Subject:.*(?i)mips
 F: docs/system/target-mips.rst
 F: configs/targets/mips*
 
+X86 general architecture support
+M: Paolo Bonzini <pbonzini@redhat.com>
+S: Maintained
+F: configs/devices/i386-softmmu/default.mak
+F: configs/targets/i386-softmmu.mak
+F: configs/targets/x86_64-softmmu.mak
+F: docs/system/target-i386*
+F: target/i386/*.[ch]
+F: target/i386/Kconfig
+F: target/i386/meson.build
+
 Guest CPU cores (TCG)
 ---------------------
 Overall TCG CPUs
@@ -323,7 +334,7 @@ RISC-V TCG CPUs
 M: Palmer Dabbelt <palmer@dabbelt.com>
 M: Alistair Francis <alistair.francis@wdc.com>
 M: Bin Meng <bin.meng@windriver.com>
-R: Weiwei Li <liweiwei@iscas.ac.cn>
+R: Weiwei Li <liwei1518@gmail.com>
 R: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
 R: Liu Zhiwei <zhiwei_liu@linux.alibaba.com>
 L: qemu-riscv@nongnu.org
@@ -490,7 +501,7 @@ S: Supported
 F: include/sysemu/kvm_xen.h
 F: target/i386/kvm/xen*
 F: hw/i386/kvm/xen*
-F: tests/avocado/xen_guest.py
+F: tests/avocado/kvm_xen_guest.py
 
 Guest CPU Cores (other accelerators)
 ------------------------------------
@@ -657,6 +668,7 @@ F: include/hw/dma/pl080.h
 F: hw/dma/pl330.c
 F: hw/gpio/pl061.c
 F: hw/input/pl050.c
+F: include/hw/input/pl050.h
 F: hw/intc/pl190.c
 F: hw/sd/pl181.c
 F: hw/ssi/pl022.c
@@ -687,7 +699,7 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/intc/arm*
-F: hw/intc/gic_internal.h
+F: hw/intc/gic*_internal.h
 F: hw/misc/a9scu.c
 F: hw/misc/arm11scu.c
 F: hw/misc/arm_l2x0.c
@@ -859,8 +871,10 @@ M: Hao Wu <wuhaotsh@google.com>
 L: qemu-arm@nongnu.org
 S: Supported
 F: hw/*/npcm*
+F: hw/sensor/adm1266.c
 F: include/hw/*/npcm*
 F: tests/qtest/npcm*
+F: tests/qtest/adm1266-test.c
 F: pc-bios/npcm7xx_bootrom.bin
 F: roms/vbootrom
 F: docs/system/arm/nuvoton.rst
@@ -925,6 +939,7 @@ F: hw/*/pxa2xx*
 F: hw/display/tc6393xb.c
 F: hw/gpio/max7310.c
 F: hw/gpio/zaurus.c
+F: hw/input/ads7846.c
 F: hw/misc/mst_fpga.c
 F: hw/adc/max111x.c
 F: include/hw/adc/max111x.h
@@ -977,7 +992,9 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/stellaris*
+F: hw/display/ssd03*
 F: include/hw/input/gamepad.h
+F: include/hw/timer/stellaris-gptm.h
 F: docs/system/arm/stellaris.rst
 
 STM32VLDISCOVERY
@@ -992,6 +1009,7 @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/arm/vexpress.c
+F: hw/display/sii9022.c
 F: docs/system/arm/vexpress.rst
 
 Versatile PB
@@ -1131,7 +1149,7 @@ F: docs/system/arm/emcraft-sf2.rst
 ASPEED BMCs
 M: Cédric Le Goater <clg@kaod.org>
 M: Peter Maydell <peter.maydell@linaro.org>
-R: Andrew Jeffery <andrew@aj.id.au>
+R: Andrew Jeffery <andrew@codeconstruct.com.au>
 R: Joel Stanley <joel@jms.id.au>
 L: qemu-arm@nongnu.org
 S: Maintained
@@ -1192,6 +1210,7 @@ M: Richard Henderson <richard.henderson@linaro.org>
 R: Helge Deller <deller@gmx.de>
 S: Odd Fixes
 F: configs/devices/hppa-softmmu/default.mak
+F: hw/display/artist.c
 F: hw/hppa/
 F: hw/input/lasips2.c
 F: hw/net/*i82596*
@@ -1283,6 +1302,7 @@ F: include/hw/char/goldfish_tty.h
 F: include/hw/intc/goldfish_pic.h
 F: include/hw/intc/m68k_irqc.h
 F: include/hw/misc/virt_ctrl.h
+F: docs/specs/virt-ctlr.rst
 
 MicroBlaze Machines
 -------------------
@@ -1535,6 +1555,14 @@ F: hw/pci-host/mv64361.c
 F: hw/pci-host/mv643xx.h
 F: include/hw/pci-host/mv64361.h
 
+amigaone
+M: BALATON Zoltan <balaton@eik.bme.hu>
+L: qemu-ppc@nongnu.org
+S: Maintained
+F: hw/ppc/amigaone.c
+F: hw/pci-host/articia.c
+F: include/hw/pci-host/articia.h
+
 Virtual Open Firmware (VOF)
 M: Alexey Kardashevskiy <aik@ozlabs.ru>
 R: David Gibson <david@gibson.dropbear.id.au>
@@ -1614,6 +1642,7 @@ F: hw/intc/sh_intc.c
 F: hw/pci-host/sh_pci.c
 F: hw/timer/sh_timer.c
 F: include/hw/sh4/sh_intc.h
+F: include/hw/timer/tmu012.h
 
 Shix
 R: Yoshinori Sato <ysato@users.sourceforge.jp>
@@ -1771,7 +1800,7 @@ F: include/hw/southbridge/ich9.h
 F: include/hw/southbridge/piix.h
 F: hw/isa/apm.c
 F: include/hw/isa/apm.h
-F: tests/unit/test-x86-cpuid.c
+F: tests/unit/test-x86-topo.c
 F: tests/qtest/test-x86-cpuid-compat.c
 
 PC Chipset
@@ -1857,6 +1886,7 @@ M: Max Filippov <jcmvbkbc@gmail.com>
 S: Maintained
 F: hw/xtensa/xtfpga.c
 F: hw/net/opencores_eth.c
+F: include/hw/xtensa/mx_pic.h
 
 Devices
 -------
@@ -1882,6 +1912,7 @@ EDU
 M: Jiri Slaby <jslaby@suse.cz>
 S: Maintained
 F: hw/misc/edu.c
+F: docs/specs/edu.rst
 
 IDE
 M: John Snow <jsnow@redhat.com>
@@ -2226,7 +2257,7 @@ M: Stefan Hajnoczi <stefanha@redhat.com>
 S: Supported
 F: hw/virtio/vhost-user-fs*
 F: include/hw/virtio/vhost-user-fs.h
-L: virtio-fs@redhat.com
+L: virtio-fs@lists.linux.dev
 
 virtio-input
 M: Gerd Hoffmann <kraxel@redhat.com>
@@ -2308,6 +2339,15 @@ F: hw/virtio/virtio-mem-pci.h
 F: hw/virtio/virtio-mem-pci.c
 F: include/hw/virtio/virtio-mem.h
 
+virtio-snd
+M: Gerd Hoffmann <kraxel@redhat.com>
+R: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
+S: Supported
+F: hw/audio/virtio-snd.c
+F: hw/audio/virtio-snd-pci.c
+F: include/hw/audio/virtio-snd.h
+F: docs/system/devices/virtio-snd.rst
+
 nvme
 M: Keith Busch <kbusch@kernel.org>
 M: Klaus Jensen <its@irrelevant.dk>
@@ -2350,6 +2390,7 @@ S: Maintained
 F: hw/net/vmxnet*
 F: hw/scsi/vmw_pvscsi*
 F: tests/qtest/vmxnet3-test.c
+F: docs/specs/vwm_pvscsi-spec.rst
 
 Rocker
 M: Jiri Pirko <jiri@resnulli.us>
@@ -2434,7 +2475,7 @@ S: Orphan
 R: Ani Sinha <ani@anisinha.ca>
 F: hw/acpi/vmgenid.c
 F: include/hw/acpi/vmgenid.h
-F: docs/specs/vmgenid.txt
+F: docs/specs/vmgenid.rst
 F: tests/qtest/vmgenid-test.c
 
 LED
@@ -2466,6 +2507,7 @@ F: hw/display/vga*
 F: hw/display/bochs-display.c
 F: include/hw/display/vga.h
 F: include/hw/display/bochs-vbe.h
+F: docs/specs/standard-vga.rst
 
 ramfb
 M: Gerd Hoffmann <kraxel@redhat.com>
@@ -2479,6 +2521,7 @@ S: Odd Fixes
 F: hw/display/virtio-gpu*
 F: hw/display/virtio-vga.*
 F: include/hw/virtio/virtio-gpu.h
+F: docs/system/devices/virtio-gpu.rst
 
 vhost-user-blk
 M: Raphael Norwitz <raphael.norwitz@nutanix.com>
@@ -2581,6 +2624,7 @@ W: https://canbus.pages.fel.cvut.cz/
 F: net/can/*
 F: hw/net/can/*
 F: include/net/can_*.h
+F: docs/system/devices/can.rst
 
 OpenPIC interrupt controller
 M: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
@@ -2652,6 +2696,14 @@ F: hw/usb/canokey.c
 F: hw/usb/canokey.h
 F: docs/system/devices/canokey.rst
 
+Hyper-V Dynamic Memory Protocol
+M: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
+S: Supported
+F: hw/hyperv/hv-balloon*.c
+F: hw/hyperv/hv-balloon*.h
+F: include/hw/hyperv/dynmem-proto.h
+F: include/hw/hyperv/hv-balloon.h
+
 Subsystems
 ----------
 Overall Audio backends
@@ -2755,12 +2807,13 @@ S: Supported
 F: util/async.c
 F: util/aio-*.c
 F: util/aio-*.h
+F: util/defer-call.c
 F: util/fdmon-*.c
 F: block/io.c
-F: block/plug.c
 F: migration/block*
 F: include/block/aio.h
 F: include/block/aio-wait.h
+F: include/qemu/defer-call.h
 F: scripts/qemugdb/aio.py
 F: tests/unit/test-fdmon-epoll.c
 T: git https://github.com/stefanha/qemu.git block
@@ -2879,6 +2932,7 @@ F: include/sysemu/dump.h
 F: qapi/dump.json
 F: scripts/dump-guest-memory.py
 F: stubs/dump.c
+F: docs/specs/vmcoreinfo.rst
 
 Error reporting
 M: Markus Armbruster <armbru@redhat.com>
@@ -2904,7 +2958,7 @@ F: gdbstub/*
 F: include/exec/gdbstub.h
 F: include/gdbstub/*
 F: gdb-xml/
-F: tests/tcg/multiarch/gdbstub/
+F: tests/tcg/multiarch/gdbstub/*
 F: scripts/feature_to_c.py
 F: scripts/probe-gdb-support.py
 
@@ -3126,10 +3180,11 @@ M: Michael Roth <michael.roth@amd.com>
 M: Konstantin Kostiuk <kkostiuk@redhat.com>
 S: Maintained
 F: qga/
+F: contrib/systemd/qemu-guest-agent.service
 F: docs/interop/qemu-ga.rst
 F: docs/interop/qemu-ga-ref.rst
 F: scripts/qemu-guest-agent/
-F: tests/unit/test-qga.c
+F: tests/*/test-qga*
 T: git https://github.com/mdroth/qemu.git qga
 
 QEMU Guest Agent Win32
@@ -4039,7 +4094,7 @@ F: gitdm.config
 F: contrib/gitdm/*
 
 Incompatible changes
-R: libvir-list@redhat.com
+R: devel@lists.libvirt.org
 F: docs/about/deprecated.rst
 
 Build System
Makefile

@@ -283,6 +283,13 @@ include $(SRC_PATH)/tests/vm/Makefile.include
 print-help-run = printf " %-30s - %s\\n" "$1" "$2"
 print-help = @$(call print-help-run,$1,$2)
 
+.PHONY: update-linux-vdso
+update-linux-vdso:
+	@for m in $(SRC_PATH)/linux-user/*/Makefile.vdso; do \
+	    $(MAKE) $(SUBDIR_MAKEFLAGS) -C $$(dirname $$m) -f Makefile.vdso \
+		SRC_PATH=$(SRC_PATH) BUILD_DIR=$(BUILD_DIR); \
+	  done
+
 .PHONY: help
 help:
 	@echo 'Generic targets:'
@@ -303,6 +310,9 @@ endif
 	$(call print-help,distclean,Remove all generated files)
 	$(call print-help,dist,Build a distributable tarball)
 	@echo ''
+	@echo 'Linux-user targets:'
+	$(call print-help,update-linux-vdso,Build linux-user vdso images)
+	@echo ''
 	@echo 'Test targets:'
 	$(call print-help,check,Run all tests (check-help for details))
 	$(call print-help,bench,Run all benchmarks)
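A usage sketch for the new update-linux-vdso convenience target: it loops over every linux-user/*/Makefile.vdso and re-runs it in its own directory, so (assuming the cross compilers those vdso builds need are installed on the host) refreshing the prebuilt vdso images is a single invocation from the top of the tree:

    make update-linux-vdso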
@@ -90,8 +90,6 @@ bool kvm_kernel_irqchip;
 bool kvm_split_irqchip;
 bool kvm_async_interrupts_allowed;
 bool kvm_halt_in_kernel_allowed;
-bool kvm_eventfds_allowed;
-bool kvm_irqfds_allowed;
 bool kvm_resamplefds_allowed;
 bool kvm_msi_via_irqfd_allowed;
 bool kvm_gsi_routing_allowed;
@@ -99,8 +97,6 @@ bool kvm_gsi_direct_mapping;
 bool kvm_allowed;
 bool kvm_readonly_mem_allowed;
 bool kvm_vm_attributes_allowed;
-bool kvm_direct_msi_allowed;
-bool kvm_ioeventfd_any_length_allowed;
 bool kvm_msi_use_devid;
 bool kvm_has_guest_debug;
 static int kvm_sstep_flags;
@@ -111,6 +107,9 @@ static const KVMCapabilityInfo kvm_required_capabilites[] = {
     KVM_CAP_INFO(USER_MEMORY),
     KVM_CAP_INFO(DESTROY_MEMORY_REGION_WORKS),
     KVM_CAP_INFO(JOIN_MEMORY_REGIONS_WORKS),
+    KVM_CAP_INFO(INTERNAL_ERROR_DATA),
+    KVM_CAP_INFO(IOEVENTFD),
+    KVM_CAP_INFO(IOEVENTFD_ANY_LENGTH),
     KVM_CAP_LAST_INFO
 };
 
@@ -1106,13 +1105,6 @@ static void kvm_coalesce_pio_del(MemoryListener *listener,
     }
 }
 
-static MemoryListener kvm_coalesced_pio_listener = {
-    .name = "kvm-coalesced-pio",
-    .coalesced_io_add = kvm_coalesce_pio_add,
-    .coalesced_io_del = kvm_coalesce_pio_del,
-    .priority = MEMORY_LISTENER_PRIORITY_MIN,
-};
-
 int kvm_check_extension(KVMState *s, unsigned int extension)
 {
     int ret;
@@ -1254,43 +1246,6 @@ static int kvm_set_ioeventfd_pio(int fd, uint16_t addr, uint16_t val,
 }
 
-
-static int kvm_check_many_ioeventfds(void)
-{
-    /* Userspace can use ioeventfd for io notification. This requires a host
-     * that supports eventfd(2) and an I/O thread; since eventfd does not
-     * support SIGIO it cannot interrupt the vcpu.
-     *
-     * Older kernels have a 6 device limit on the KVM io bus. Find out so we
-     * can avoid creating too many ioeventfds.
-     */
-#if defined(CONFIG_EVENTFD)
-    int ioeventfds[7];
-    int i, ret = 0;
-    for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
-        ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
-        if (ioeventfds[i] < 0) {
-            break;
-        }
-        ret = kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, true, 2, true);
-        if (ret < 0) {
-            close(ioeventfds[i]);
-            break;
-        }
-    }
-
-    /* Decide whether many devices are supported or not */
-    ret = i == ARRAY_SIZE(ioeventfds);
-
-    while (i-- > 0) {
-        kvm_set_ioeventfd_pio(ioeventfds[i], 0, i, false, 2, true);
-        close(ioeventfds[i]);
-    }
-    return ret;
-#else
-    return 0;
-#endif
-}
-
 static const KVMCapabilityInfo *
 kvm_check_extension_list(KVMState *s, const KVMCapabilityInfo *list)
 {
@@ -1806,6 +1761,8 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
 
 static MemoryListener kvm_io_listener = {
     .name = "kvm-io",
+    .coalesced_io_add = kvm_coalesce_pio_add,
+    .coalesced_io_del = kvm_coalesce_pio_del,
     .eventfd_add = kvm_io_ioeventfd_add,
     .eventfd_del = kvm_io_ioeventfd_del,
     .priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
@@ -1847,7 +1804,7 @@ static void clear_gsi(KVMState *s, unsigned int gsi)
 
 void kvm_init_irq_routing(KVMState *s)
 {
-    int gsi_count, i;
+    int gsi_count;
 
     gsi_count = kvm_check_extension(s, KVM_CAP_IRQ_ROUTING) - 1;
     if (gsi_count > 0) {
@@ -1859,12 +1816,6 @@ void kvm_init_irq_routing(KVMState *s)
     s->irq_routes = g_malloc0(sizeof(*s->irq_routes));
     s->nr_allocated_irq_routes = 0;
 
-    if (!kvm_direct_msi_allowed) {
-        for (i = 0; i < KVM_MSI_HASHTAB_SIZE; i++) {
-            QTAILQ_INIT(&s->msi_hashtab[i]);
-        }
-    }
-
     kvm_arch_init_irq_routing(s);
 }
 
@@ -1984,41 +1935,10 @@ void kvm_irqchip_change_notify(void)
     notifier_list_notify(&kvm_irqchip_change_notifiers, NULL);
 }
 
-static unsigned int kvm_hash_msi(uint32_t data)
-{
-    /* This is optimized for IA32 MSI layout. However, no other arch shall
-     * repeat the mistake of not providing a direct MSI injection API. */
-    return data & 0xff;
-}
-
-static void kvm_flush_dynamic_msi_routes(KVMState *s)
-{
-    KVMMSIRoute *route, *next;
-    unsigned int hash;
-
-    for (hash = 0; hash < KVM_MSI_HASHTAB_SIZE; hash++) {
-        QTAILQ_FOREACH_SAFE(route, &s->msi_hashtab[hash], entry, next) {
-            kvm_irqchip_release_virq(s, route->kroute.gsi);
-            QTAILQ_REMOVE(&s->msi_hashtab[hash], route, entry);
-            g_free(route);
-        }
-    }
-}
-
 static int kvm_irqchip_get_virq(KVMState *s)
 {
     int next_virq;
 
-    /*
-     * PIC and IOAPIC share the first 16 GSI numbers, thus the available
-     * GSI numbers are more than the number of IRQ route. Allocating a GSI
-     * number can succeed even though a new route entry cannot be added.
-     * When this happens, flush dynamic MSI entries to free IRQ route entries.
-     */
-    if (!kvm_direct_msi_allowed && s->irq_routes->nr == s->gsi_count) {
-        kvm_flush_dynamic_msi_routes(s);
-    }
-
     /* Return the lowest unused GSI in the bitmap */
     next_virq = find_first_zero_bit(s->used_gsi_bitmap, s->gsi_count);
     if (next_virq >= s->gsi_count) {
@@ -2028,63 +1948,17 @@ static int kvm_irqchip_get_virq(KVMState *s)
     }
 }
 
-static KVMMSIRoute *kvm_lookup_msi_route(KVMState *s, MSIMessage msg)
-{
-    unsigned int hash = kvm_hash_msi(msg.data);
-    KVMMSIRoute *route;
-
-    QTAILQ_FOREACH(route, &s->msi_hashtab[hash], entry) {
-        if (route->kroute.u.msi.address_lo == (uint32_t)msg.address &&
-            route->kroute.u.msi.address_hi == (msg.address >> 32) &&
-            route->kroute.u.msi.data == le32_to_cpu(msg.data)) {
-            return route;
-        }
-    }
-    return NULL;
-}
-
 int kvm_irqchip_send_msi(KVMState *s, MSIMessage msg)
 {
     struct kvm_msi msi;
-    KVMMSIRoute *route;
 
-    if (kvm_direct_msi_allowed) {
-        msi.address_lo = (uint32_t)msg.address;
-        msi.address_hi = msg.address >> 32;
-        msi.data = le32_to_cpu(msg.data);
-        msi.flags = 0;
-        memset(msi.pad, 0, sizeof(msi.pad));
+    msi.address_lo = (uint32_t)msg.address;
+    msi.address_hi = msg.address >> 32;
+    msi.data = le32_to_cpu(msg.data);
+    msi.flags = 0;
+    memset(msi.pad, 0, sizeof(msi.pad));
 
-        return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
-    }
-
-    route = kvm_lookup_msi_route(s, msg);
-    if (!route) {
-        int virq;
-
-        virq = kvm_irqchip_get_virq(s);
-        if (virq < 0) {
-            return virq;
-        }
-
-        route = g_new0(KVMMSIRoute, 1);
-        route->kroute.gsi = virq;
-        route->kroute.type = KVM_IRQ_ROUTING_MSI;
-        route->kroute.flags = 0;
-        route->kroute.u.msi.address_lo = (uint32_t)msg.address;
-        route->kroute.u.msi.address_hi = msg.address >> 32;
-        route->kroute.u.msi.data = le32_to_cpu(msg.data);
-
-        kvm_add_routing_entry(s, &route->kroute);
-        kvm_irqchip_commit_routes(s);
-
-        QTAILQ_INSERT_TAIL(&s->msi_hashtab[kvm_hash_msi(msg.data)], route,
-                           entry);
-    }
-
-    assert(route->kroute.type == KVM_IRQ_ROUTING_MSI);
-
-    return kvm_set_irq(s, route->kroute.gsi, 1);
+    return kvm_vm_ioctl(s, KVM_SIGNAL_MSI, &msi);
 }
 
 int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
@@ -2211,10 +2085,6 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
         }
     }
 
-    if (!kvm_irqfds_enabled()) {
-        return -ENOSYS;
-    }
-
     return kvm_vm_ioctl(s, KVM_IRQFD, &irqfd);
 }
 
@@ -2375,6 +2245,11 @@ static void kvm_irqchip_create(KVMState *s)
         return;
     }
 
+    if (kvm_check_extension(s, KVM_CAP_IRQFD) <= 0) {
+        fprintf(stderr, "kvm: irqfd not implemented\n");
+        exit(1);
+    }
+
     /* First probe and see if there's a arch-specific hook to create the
      * in-kernel irqchip for us */
     ret = kvm_arch_irqchip_create(s);
@@ -2649,22 +2524,8 @@ static int kvm_init(MachineState *ms)
 #ifdef KVM_CAP_VCPU_EVENTS
     s->vcpu_events = kvm_check_extension(s, KVM_CAP_VCPU_EVENTS);
 #endif
-
-    s->robust_singlestep =
-        kvm_check_extension(s, KVM_CAP_X86_ROBUST_SINGLESTEP);
-
-#ifdef KVM_CAP_DEBUGREGS
-    s->debugregs = kvm_check_extension(s, KVM_CAP_DEBUGREGS);
-#endif
 
     s->max_nested_state_len = kvm_check_extension(s, KVM_CAP_NESTED_STATE);
-
-#ifdef KVM_CAP_IRQ_ROUTING
-    kvm_direct_msi_allowed = (kvm_check_extension(s, KVM_CAP_SIGNAL_MSI) > 0);
-#endif
-
-    s->intx_set_mask = kvm_check_extension(s, KVM_CAP_PCI_2_3);
-
     s->irq_set_ioctl = KVM_IRQ_LINE;
     if (kvm_check_extension(s, KVM_CAP_IRQ_INJECT_STATUS)) {
         s->irq_set_ioctl = KVM_IRQ_LINE_STATUS;
@@ -2673,21 +2534,12 @@ static int kvm_init(MachineState *ms)
     kvm_readonly_mem_allowed =
         (kvm_check_extension(s, KVM_CAP_READONLY_MEM) > 0);
 
-    kvm_eventfds_allowed =
-        (kvm_check_extension(s, KVM_CAP_IOEVENTFD) > 0);
-
-    kvm_irqfds_allowed =
-        (kvm_check_extension(s, KVM_CAP_IRQFD) > 0);
-
     kvm_resamplefds_allowed =
         (kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0);
 
     kvm_vm_attributes_allowed =
         (kvm_check_extension(s, KVM_CAP_VM_ATTRIBUTES) > 0);
 
-    kvm_ioeventfd_any_length_allowed =
-        (kvm_check_extension(s, KVM_CAP_IOEVENTFD_ANY_LENGTH) > 0);
-
 #ifdef KVM_CAP_SET_GUEST_DEBUG
     kvm_has_guest_debug =
         (kvm_check_extension(s, KVM_CAP_SET_GUEST_DEBUG) > 0);
@@ -2724,24 +2576,16 @@ static int kvm_init(MachineState *ms)
         kvm_irqchip_create(s);
     }
 
-    if (kvm_eventfds_allowed) {
-        s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
-        s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
-    }
+    s->memory_listener.listener.eventfd_add = kvm_mem_ioeventfd_add;
+    s->memory_listener.listener.eventfd_del = kvm_mem_ioeventfd_del;
     s->memory_listener.listener.coalesced_io_add = kvm_coalesce_mmio_region;
     s->memory_listener.listener.coalesced_io_del = kvm_uncoalesce_mmio_region;
 
     kvm_memory_listener_register(s, &s->memory_listener,
                                  &address_space_memory, 0, "kvm-memory");
-    if (kvm_eventfds_allowed) {
-        memory_listener_register(&kvm_io_listener,
-                                 &address_space_io);
-    }
-    memory_listener_register(&kvm_coalesced_pio_listener,
+    memory_listener_register(&kvm_io_listener,
                              &address_space_io);
 
-    s->many_ioeventfds = kvm_check_many_ioeventfds();
-
     s->sync_mmu = !!kvm_vm_check_extension(kvm_state, KVM_CAP_SYNC_MMU);
     if (!s->sync_mmu) {
         ret = ram_block_discard_disable(true);
@@ -2794,16 +2638,14 @@ static void kvm_handle_io(uint16_t port, MemTxAttrs attrs, void *data, int direc
 
 static int kvm_handle_internal_error(CPUState *cpu, struct kvm_run *run)
 {
+    int i;
+
     fprintf(stderr, "KVM internal error. Suberror: %d\n",
             run->internal.suberror);
 
-    if (kvm_check_extension(kvm_state, KVM_CAP_INTERNAL_ERROR_DATA)) {
-        int i;
-
-        for (i = 0; i < run->internal.ndata; ++i) {
-            fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
-                    i, (uint64_t)run->internal.data[i]);
-        }
+    for (i = 0; i < run->internal.ndata; ++i) {
+        fprintf(stderr, "extra data[%d]: 0x%016"PRIx64"\n",
+                i, (uint64_t)run->internal.data[i]);
     }
     if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
         fprintf(stderr, "emulation failure\n");
@@ -3297,29 +3139,11 @@ int kvm_has_vcpu_events(void)
     return kvm_state->vcpu_events;
 }
 
-int kvm_has_robust_singlestep(void)
-{
-    return kvm_state->robust_singlestep;
-}
-
-int kvm_has_debugregs(void)
-{
-    return kvm_state->debugregs;
-}
-
 int kvm_max_nested_state_length(void)
 {
     return kvm_state->max_nested_state_len;
}
 
-int kvm_has_many_ioeventfds(void)
-{
-    if (!kvm_enabled()) {
-        return 0;
-    }
-    return kvm_state->many_ioeventfds;
-}
-
 int kvm_has_gsi_routing(void)
 {
 #ifdef KVM_CAP_IRQ_ROUTING
@@ -3329,11 +3153,6 @@ int kvm_has_gsi_routing(void)
 #endif
 }
 
-int kvm_has_intx_set_mask(void)
-{
-    return kvm_state->intx_set_mask;
-}
-
 bool kvm_arm_supports_user_irq(void)
 {
     return kvm_check_extension(kvm_state, KVM_CAP_ARM_USER_IRQ);

@@ -17,17 +17,13 @@
 KVMState *kvm_state;
 bool kvm_kernel_irqchip;
 bool kvm_async_interrupts_allowed;
-bool kvm_eventfds_allowed;
-bool kvm_irqfds_allowed;
 bool kvm_resamplefds_allowed;
 bool kvm_msi_via_irqfd_allowed;
 bool kvm_gsi_routing_allowed;
 bool kvm_gsi_direct_mapping;
 bool kvm_allowed;
 bool kvm_readonly_mem_allowed;
-bool kvm_ioeventfd_any_length_allowed;
 bool kvm_msi_use_devid;
-bool kvm_direct_msi_allowed;
 
 void kvm_flush_coalesced_mmio_buffer(void)
 {
@@ -42,11 +38,6 @@ bool kvm_has_sync_mmu(void)
     return false;
 }
 
-int kvm_has_many_ioeventfds(void)
-{
-    return 0;
-}
-
 int kvm_on_sigbus_vcpu(CPUState *cpu, int code, void *addr)
 {
     return 1;
@@ -92,11 +83,6 @@ void kvm_irqchip_change_notify(void)
 {
 }
 
-int kvm_irqchip_add_adapter_route(KVMState *s, AdapterInfo *adapter)
-{
-    return -ENOSYS;
-}
-
 int kvm_irqchip_add_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
                                        EventNotifier *rn, int virq)
 {

@@ -22,10 +22,6 @@ void tlb_set_dirty(CPUState *cpu, vaddr vaddr)
 {
 }
 
-void tcg_flush_jmp_cache(CPUState *cpu)
-{
-}
-
 int probe_access_flags(CPUArchState *env, vaddr addr, int size,
                        MMUAccessType access_type, int mmu_idx,
                        bool nonfault, void **phost, uintptr_t retaddr)

@@ -741,7 +741,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
             && cpu->neg.icount_decr.u16.low + cpu->icount_extra == 0) {
             /* Execute just one insn to trigger exception pending in the log */
             cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
-                | CF_LAST_IO | CF_NOIRQ | 1;
+                | CF_NOIRQ | 1;
         }
 #endif
         return false;
@@ -1074,31 +1074,40 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
                 last_tb = NULL;
             }
 #endif
+
+            //// --- Begin LibAFL code ---
+
+            int has_libafl_edge = 0;
+            TranslationBlock *edge;
+
             /* See if we can patch the calling TB. */
             if (last_tb) {
                 // tb_add_jump(last_tb, tb_exit, tb);
 
-                //// --- Begin LibAFL code ---
-
                 if (last_tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
                     mmap_lock();
-                    TranslationBlock *edge = libafl_gen_edge(cpu, last_tb_pc, pc, tb_exit, cs_base, flags, cflags);
+                    edge = libafl_gen_edge(cpu, last_tb_pc, pc, tb_exit, cs_base, flags, cflags);
                     mmap_unlock();
 
                     if (edge) {
                         tb_add_jump(last_tb, tb_exit, edge);
                         tb_add_jump(edge, 0, tb);
+                        has_libafl_edge = 1;
                     } else {
                         tb_add_jump(last_tb, tb_exit, tb);
                     }
                 } else {
                     tb_add_jump(last_tb, tb_exit, tb);
                 }
-
-                //// --- End LibAFL code ---
             }
 
-            cpu_loop_exec_tb(cpu, tb, pc, &last_tb, &tb_exit, &last_tb_pc);
+            if (has_libafl_edge) {
+                cpu_loop_exec_tb(cpu, edge, last_tb_pc, &last_tb, &tb_exit, &last_tb_pc);
+            } else {
+                cpu_loop_exec_tb(cpu, tb, pc, &last_tb, &tb_exit, &last_tb_pc);
+            }
+
+            //// --- End LibAFL code ---
 
             /* Try to align the host and virtual clocks
                if the guest is in advance */
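The cpu_exec_loop() hunk above is presumably what the commit title refers to: before this change the edge TB returned by libafl_gen_edge() was only translated and chained (tb_add_jump(last_tb, tb_exit, edge) and tb_add_jump(edge, 0, tb)), while the current loop iteration still executed tb directly, so the freshly generated edge block never ran on that pass. With has_libafl_edge recorded, cpu_loop_exec_tb() now starts execution at edge (passing last_tb_pc as its pc), and the edge block reaches tb through the jump installed on its exit 0.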
@@ -24,6 +24,7 @@
 #include "exec/memory.h"
 #include "exec/cpu_ldst.h"
 #include "exec/cputlb.h"
+#include "exec/tb-flush.h"
 #include "exec/memory-internal.h"
 #include "exec/ram_addr.h"
 #include "tcg/tcg.h"
@@ -328,21 +329,6 @@ static void flush_all_helper(CPUState *src, run_on_cpu_func fn,
     }
 }
 
-void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
-{
-    CPUState *cpu;
-    size_t full = 0, part = 0, elide = 0;
-
-    CPU_FOREACH(cpu) {
-        full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
-        part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
-        elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
-    }
-    *pfull = full;
-    *ppart = part;
-    *pelide = elide;
-}
-
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
     uint16_t asked = data.host_int;
@@ -1500,7 +1486,8 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        notdirty_write(env_cpu(env), addr, 1, *pfull, retaddr);
+        int dirtysize = size == 0 ? 1 : size;
+        notdirty_write(env_cpu(env), addr, dirtysize, *pfull, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
 
@@ -1523,7 +1510,8 @@ int probe_access_full_mmu(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        notdirty_write(env_cpu(env), addr, 1, *pfull, 0);
+        int dirtysize = size == 0 ? 1 : size;
+        notdirty_write(env_cpu(env), addr, dirtysize, *pfull, 0);
         flags &= ~TLB_NOTDIRTY;
     }
 
@@ -1545,7 +1533,8 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
 
     /* Handle clean RAM pages. */
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
+        int dirtysize = size == 0 ? 1 : size;
+        notdirty_write(env_cpu(env), addr, dirtysize, full, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
 
@@ -1581,7 +1570,7 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,
 
         /* Handle clean RAM pages. */
         if (flags & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, 1, full, retaddr);
+            notdirty_write(env_cpu(env), addr, size, full, retaddr);
         }
     }
 
@@ -2739,7 +2728,7 @@ static uint64_t do_st16_leN(CPUState *cpu, MMULookupPageData *p,
 
     case MO_ATOM_WITHIN16_PAIR:
         /* Since size > 8, this is the half that must be atomic. */
-        if (!HAVE_ATOMIC128_RW) {
+        if (!HAVE_CMPXCHG128) {
             cpu_loop_exit_atomic(cpu, ra);
         }
         return store_whole_le16(p->haddr, p->size, val_le);

@@ -14,8 +14,6 @@
 extern int64_t max_delay;
 extern int64_t max_advance;
 
-void dump_exec_info(GString *buf);
-
 /*
  * Return true if CS is not running in parallel with other cpus, either
  * because there are no other cpus or we are within an exclusive context.
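In the probe_access_full(), probe_access_full_mmu() and probe_access_flags() hunks above, the clean-RAM path previously told notdirty_write() that a single byte was written; it now passes the probed size (falling back to 1 for zero-size probes), and probe_access() passes size directly, so dirty tracking covers the whole range being probed rather than just its first byte.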
@ -825,7 +825,7 @@ static uint64_t store_whole_le16(void *pv, int size, Int128 val_le)
|
|||||||
int sh = o * 8;
|
int sh = o * 8;
|
||||||
Int128 m, v;
|
Int128 m, v;
|
||||||
|
|
||||||
qemu_build_assert(HAVE_ATOMIC128_RW);
|
qemu_build_assert(HAVE_CMPXCHG128);
|
||||||
|
|
||||||
/* Like MAKE_64BIT_MASK(0, sz), but larger. */
|
/* Like MAKE_64BIT_MASK(0, sz), but larger. */
|
||||||
if (sz <= 64) {
|
if (sz <= 64) {
|
||||||
@ -887,7 +887,7 @@ static void store_atom_2(CPUState *cpu, uintptr_t ra,
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
} else if ((pi & 15) == 7) {
|
} else if ((pi & 15) == 7) {
|
||||||
if (HAVE_ATOMIC128_RW) {
|
if (HAVE_CMPXCHG128) {
|
||||||
Int128 v = int128_lshift(int128_make64(val), 56);
|
Int128 v = int128_lshift(int128_make64(val), 56);
|
||||||
Int128 m = int128_lshift(int128_make64(0xffff), 56);
|
Int128 m = int128_lshift(int128_make64(0xffff), 56);
|
||||||
store_atom_insert_al16(pv - 7, v, m);
|
store_atom_insert_al16(pv - 7, v, m);
|
||||||
@ -956,7 +956,7 @@ static void store_atom_4(CPUState *cpu, uintptr_t ra,
|
|||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
if (HAVE_ATOMIC128_RW) {
|
if (HAVE_CMPXCHG128) {
|
||||||
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
|
store_whole_le16(pv, 4, int128_make64(cpu_to_le32(val)));
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -1021,7 +1021,7 @@ static void store_atom_8(CPUState *cpu, uintptr_t ra,
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case MO_64:
|
case MO_64:
|
||||||
if (HAVE_ATOMIC128_RW) {
|
if (HAVE_CMPXCHG128) {
|
||||||
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
|
store_whole_le16(pv, 8, int128_make64(cpu_to_le64(val)));
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
@ -1076,7 +1076,7 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case -MO_64:
|
case -MO_64:
|
||||||
if (HAVE_ATOMIC128_RW) {
|
if (HAVE_CMPXCHG128) {
|
||||||
uint64_t val_le;
|
uint64_t val_le;
|
||||||
int s2 = pi & 15;
|
int s2 = pi & 15;
|
||||||
int s1 = 16 - s2;
|
int s1 = 16 - s2;
|
||||||
@ -1103,10 +1103,6 @@ static void store_atom_16(CPUState *cpu, uintptr_t ra,
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case MO_128:
|
case MO_128:
|
||||||
if (HAVE_ATOMIC128_RW) {
|
|
||||||
atomic16_set(pv, val);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
g_assert_not_reached();
|
g_assert_not_reached();
|
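The ldst_atomicity hunks above relax the guard for partial 16-byte stores from requiring full 128-bit atomic load/store (HAVE_ATOMIC128_RW) to requiring only a 128-bit compare-and-swap (HAVE_CMPXCHG128), since store_whole_le16() is implemented as a cmpxchg loop; when even that is unavailable the helper exits to the serial slow path. As a standalone illustration of what a cmpxchg-based partial store amounts to (this is not QEMU code and assumes a host with a lock-free 128-bit CAS, possibly needing libatomic):

#include <stdint.h>

typedef unsigned __int128 u128;

/* Atomically replace the low `size` bytes (size < 16) of a 16-byte location. */
static void store_low_bytes_atomic(u128 *p, uint64_t val, int size)
{
    u128 old = __atomic_load_n(p, __ATOMIC_RELAXED);
    u128 mask = ((u128)1 << (size * 8)) - 1;
    u128 desired;

    do {
        desired = (old & ~mask) | (val & mask);
    } while (!__atomic_compare_exchange_n(p, &old, desired, 0,
                                          __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST));
}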
@@ -8,6 +8,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/accel.h"
+#include "qemu/qht.h"
 #include "qapi/error.h"
 #include "qapi/type-helpers.h"
 #include "qapi/qapi-commands-machine.h"
@@ -17,6 +18,7 @@
 #include "sysemu/tcg.h"
 #include "tcg/tcg.h"
 #include "internal-common.h"
+#include "tb-context.h"
 
 
 static void dump_drift_info(GString *buf)
@@ -50,6 +52,153 @@ static void dump_accel_info(GString *buf)
                            one_insn_per_tb ? "on" : "off");
 }
 
+static void print_qht_statistics(struct qht_stats hst, GString *buf)
+{
+    uint32_t hgram_opts;
+    size_t hgram_bins;
+    char *hgram;
+
+    if (!hst.head_buckets) {
+        return;
+    }
+    g_string_append_printf(buf, "TB hash buckets %zu/%zu "
+                           "(%0.2f%% head buckets used)\n",
+                           hst.used_head_buckets, hst.head_buckets,
+                           (double)hst.used_head_buckets /
+                           hst.head_buckets * 100);
+
+    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
+    hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
+    if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
+        hgram_opts |= QDIST_PR_NODECIMAL;
+    }
+    hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
+    g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
+                           "Histogram: %s\n",
+                           qdist_avg(&hst.occupancy) * 100, hgram);
+    g_free(hgram);
+
+    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
+    hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
+    if (hgram_bins > 10) {
+        hgram_bins = 10;
+    } else {
+        hgram_bins = 0;
+        hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
+    }
+    hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
+    g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
+                           "Histogram: %s\n",
+                           qdist_avg(&hst.chain), hgram);
+    g_free(hgram);
+}
+
+struct tb_tree_stats {
+    size_t nb_tbs;
+    size_t host_size;
+    size_t target_size;
+    size_t max_target_size;
+    size_t direct_jmp_count;
+    size_t direct_jmp2_count;
+    size_t cross_page;
+};
+
+static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
+{
+    const TranslationBlock *tb = value;
+    struct tb_tree_stats *tst = data;
+
+    tst->nb_tbs++;
+    tst->host_size += tb->tc.size;
+    tst->target_size += tb->size;
+    if (tb->size > tst->max_target_size) {
+        tst->max_target_size = tb->size;
+    }
+    if (tb->page_addr[1] != -1) {
+        tst->cross_page++;
+    }
+    if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
+        tst->direct_jmp_count++;
+        if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
+            tst->direct_jmp2_count++;
+        }
+    }
+    return false;
+}
+
+static void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
+{
+    CPUState *cpu;
+    size_t full = 0, part = 0, elide = 0;
+
+    CPU_FOREACH(cpu) {
+        full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
+        part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
+        elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
+    }
+    *pfull = full;
+    *ppart = part;
+    *pelide = elide;
+}
+
+static void tcg_dump_info(GString *buf)
+{
+    g_string_append_printf(buf, "[TCG profiler not compiled]\n");
+}
+
+static void dump_exec_info(GString *buf)
+{
+    struct tb_tree_stats tst = {};
+    struct qht_stats hst;
+    size_t nb_tbs, flush_full, flush_part, flush_elide;
+
+    tcg_tb_foreach(tb_tree_stats_iter, &tst);
+    nb_tbs = tst.nb_tbs;
+    /* XXX: avoid using doubles ? */
+    g_string_append_printf(buf, "Translation buffer state:\n");
+    /*
+     * Report total code size including the padding and TB structs;
+     * otherwise users might think "-accel tcg,tb-size" is not honoured.
+     * For avg host size we use the precise numbers from tb_tree_stats though.
+     */
+    g_string_append_printf(buf, "gen code size %zu/%zu\n",
+                           tcg_code_size(), tcg_code_capacity());
+    g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
+    g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
+                           nb_tbs ? tst.target_size / nb_tbs : 0,
+                           tst.max_target_size);
+    g_string_append_printf(buf, "TB avg host size %zu bytes "
+                           "(expansion ratio: %0.1f)\n",
+                           nb_tbs ? tst.host_size / nb_tbs : 0,
+                           tst.target_size ?
+                           (double)tst.host_size / tst.target_size : 0);
+    g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
+                           tst.cross_page,
+                           nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
+    g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
+                           "(2 jumps=%zu %zu%%)\n",
+                           tst.direct_jmp_count,
+                           nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
+                           tst.direct_jmp2_count,
+                           nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
+
+    qht_statistics_init(&tb_ctx.htable, &hst);
+    print_qht_statistics(hst, buf);
+    qht_statistics_destroy(&hst);
+
+    g_string_append_printf(buf, "\nStatistics:\n");
+    g_string_append_printf(buf, "TB flush count %u\n",
+                           qatomic_read(&tb_ctx.tb_flush_count));
+    g_string_append_printf(buf, "TB invalidate count %u\n",
+                           qatomic_read(&tb_ctx.tb_phys_invalidate_count));
+
+    tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
+    g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
+    g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
+    g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
+    tcg_dump_info(buf);
+}
+
 HumanReadableText *qmp_x_query_jit(Error **errp)
 {
     g_autoptr(GString) buf = g_string_new("");
@@ -66,6 +215,11 @@ HumanReadableText *qmp_x_query_jit(Error **errp)
     return human_readable_text_from_str(buf);
 }
 
+static void tcg_dump_op_count(GString *buf)
+{
+    g_string_append_printf(buf, "[TCG profiler not compiled]\n");
+}
+
 HumanReadableText *qmp_x_query_opcount(Error **errp)
 {
     g_autoptr(GString) buf = g_string_new("");
@@ -327,8 +327,7 @@ static TCGOp *copy_st_ptr(TCGOp **begin_op, TCGOp *op)
     return op;
 }
 
-static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *empty_func,
-                        void *func, int *cb_idx)
+static TCGOp *copy_call(TCGOp **begin_op, TCGOp *op, void *func, int *cb_idx)
 {
     TCGOp *old_op;
     int func_idx;
@@ -372,8 +371,7 @@ static TCGOp *append_udata_cb(const struct qemu_plugin_dyn_cb *cb,
     }
 
     /* call */
-    op = copy_call(&begin_op, op, HELPER(plugin_vcpu_udata_cb),
-                   cb->f.vcpu_udata, cb_idx);
+    op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
 
     return op;
 }
@@ -420,8 +418,7 @@ static TCGOp *append_mem_cb(const struct qemu_plugin_dyn_cb *cb,
 
     if (type == PLUGIN_GEN_CB_MEM) {
         /* call */
-        op = copy_call(&begin_op, op, HELPER(plugin_vcpu_mem_cb),
-                       cb->f.vcpu_udata, cb_idx);
+        op = copy_call(&begin_op, op, cb->f.vcpu_udata, cb_idx);
     }
 
     return op;
@@ -1083,8 +1083,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
     if (current_tb_modified) {
         /* Force execution of one insn next time. */
         CPUState *cpu = current_cpu;
-        cpu->cflags_next_tb =
-            1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
+        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
         return true;
     }
     return false;
@@ -1154,8 +1153,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     if (current_tb_modified) {
         page_collection_unlock(pages);
         /* Force execution of one insn next time. */
-        current_cpu->cflags_next_tb =
-            1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
+        current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
         mmap_unlock();
         cpu_loop_exit_noexc(current_cpu);
     }
@@ -34,6 +34,7 @@
 #include "qemu/timer.h"
 #include "exec/exec-all.h"
 #include "exec/hwaddr.h"
+#include "exec/tb-flush.h"
 #include "exec/gdbstub.h"
 
 #include "tcg-accel-ops.h"
@@ -77,6 +78,13 @@ int tcg_cpus_exec(CPUState *cpu)
     return ret;
 }
 
+static void tcg_cpu_reset_hold(CPUState *cpu)
+{
+    tcg_flush_jmp_cache(cpu);
+
+    tlb_flush(cpu);
+}
+
 /* mask must never be zero, except for A20 change call */
 void tcg_handle_interrupt(CPUState *cpu, int mask)
 {
@@ -205,6 +213,7 @@ static void tcg_accel_ops_init(AccelOpsClass *ops)
         }
     }
 
+    ops->cpu_reset_hold = tcg_cpu_reset_hold;
     ops->supports_guest_debug = tcg_supports_guest_debug;
     ops->insert_breakpoint = tcg_insert_breakpoint;
     ops->remove_breakpoint = tcg_remove_breakpoint;
@@ -926,7 +926,7 @@ TranslationBlock *libafl_gen_edge(CPUState *cpu, target_ulong src_block,
     phys_pc ^= reverse_bits((tb_page_addr_t)exit_n);
 
     /* Generate a one-shot TB with max 8 insn in it */
-    cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 8;
+    cflags = (cflags & ~CF_COUNT_MASK) | 8;
 
     max_insns = cflags & CF_COUNT_MASK;
     if (max_insns == 0) {
@@ -1064,7 +1064,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
 
     if (phys_pc == -1) {
         /* Generate a one-shot TB with 1 insn in it */
-        cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
+        cflags = (cflags & ~CF_COUNT_MASK) | 1;
     }
 
     max_insns = cflags & CF_COUNT_MASK;
@@ -1400,7 +1400,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
      * operations only (which execute after completion) so we don't
      * double instrument the instruction.
      */
-    cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
+    cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | n;
 
     if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
         vaddr pc = log_pc(cpu, tb);
@@ -1413,133 +1413,6 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     cpu_loop_exit_noexc(cpu);
 }
 
-static void print_qht_statistics(struct qht_stats hst, GString *buf)
-{
-    uint32_t hgram_opts;
-    size_t hgram_bins;
-    char *hgram;
-
-    if (!hst.head_buckets) {
-        return;
-    }
-    g_string_append_printf(buf, "TB hash buckets %zu/%zu "
-                           "(%0.2f%% head buckets used)\n",
-                           hst.used_head_buckets, hst.head_buckets,
-                           (double)hst.used_head_buckets /
-                           hst.head_buckets * 100);
-
-    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
-    hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
-    if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
-        hgram_opts |= QDIST_PR_NODECIMAL;
-    }
-    hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
-    g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
-                           "Histogram: %s\n",
-                           qdist_avg(&hst.occupancy) * 100, hgram);
-    g_free(hgram);
-
-    hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
-    hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
-    if (hgram_bins > 10) {
-        hgram_bins = 10;
-    } else {
-        hgram_bins = 0;
-        hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
-    }
-    hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
-    g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
-                           "Histogram: %s\n",
-                           qdist_avg(&hst.chain), hgram);
-    g_free(hgram);
-}
-
-struct tb_tree_stats {
-    size_t nb_tbs;
-    size_t host_size;
-    size_t target_size;
-    size_t max_target_size;
-    size_t direct_jmp_count;
-    size_t direct_jmp2_count;
-    size_t cross_page;
-};
-
-static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
-{
-    const TranslationBlock *tb = value;
-    struct tb_tree_stats *tst = data;
-
-    tst->nb_tbs++;
-    tst->host_size += tb->tc.size;
-    tst->target_size += tb->size;
-    if (tb->size > tst->max_target_size) {
-        tst->max_target_size = tb->size;
-    }
-    if (tb_page_addr1(tb) != -1) {
-        tst->cross_page++;
-    }
-    if (tb->jmp_reset_offset[0] != TB_JMP_OFFSET_INVALID) {
-        tst->direct_jmp_count++;
-        if (tb->jmp_reset_offset[1] != TB_JMP_OFFSET_INVALID) {
-            tst->direct_jmp2_count++;
-        }
-    }
-    return false;
-}
-
-void dump_exec_info(GString *buf)
-{
-    struct tb_tree_stats tst = {};
-    struct qht_stats hst;
-    size_t nb_tbs, flush_full, flush_part, flush_elide;
-
-    tcg_tb_foreach(tb_tree_stats_iter, &tst);
-    nb_tbs = tst.nb_tbs;
-    /* XXX: avoid using doubles ? */
-    g_string_append_printf(buf, "Translation buffer state:\n");
-    /*
-     * Report total code size including the padding and TB structs;
-     * otherwise users might think "-accel tcg,tb-size" is not honoured.
-     * For avg host size we use the precise numbers from tb_tree_stats though.
-     */
-    g_string_append_printf(buf, "gen code size %zu/%zu\n",
-                           tcg_code_size(), tcg_code_capacity());
-    g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
-    g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
-                           nb_tbs ? tst.target_size / nb_tbs : 0,
-                           tst.max_target_size);
-    g_string_append_printf(buf, "TB avg host size %zu bytes "
-                           "(expansion ratio: %0.1f)\n",
-                           nb_tbs ? tst.host_size / nb_tbs : 0,
-                           tst.target_size ?
-                           (double)tst.host_size / tst.target_size : 0);
-    g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
-                           tst.cross_page,
-                           nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
-    g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
-                           "(2 jumps=%zu %zu%%)\n",
-                           tst.direct_jmp_count,
-                           nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
-                           tst.direct_jmp2_count,
-                           nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
-
-    qht_statistics_init(&tb_ctx.htable, &hst);
-    print_qht_statistics(hst, buf);
-    qht_statistics_destroy(&hst);
-
-    g_string_append_printf(buf, "\nStatistics:\n");
-    g_string_append_printf(buf, "TB flush count %u\n",
-                           qatomic_read(&tb_ctx.tb_flush_count));
-    g_string_append_printf(buf, "TB invalidate count %u\n",
-                           qatomic_read(&tb_ctx.tb_phys_invalidate_count));
-
-    tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
-    g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
-    g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
-    g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
-    tcg_dump_info(buf);
-}
-
 #else /* CONFIG_USER_ONLY */
 
 void cpu_interrupt(CPUState *cpu, int mask)
@@ -1568,11 +1441,3 @@ void tcg_flush_jmp_cache(CPUState *cpu)
         qatomic_set(&jc->array[i].tb, NULL);
     }
 }
-
-/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
-void tcg_flush_softmmu_tlb(CPUState *cs)
-{
-#ifdef CONFIG_SOFTMMU
-    tlb_flush(cs);
-#endif
-}
@@ -89,7 +89,7 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
      * each translation block. The cost is minimal, plus it would be
      * very easy to forget doing it in the translator.
      */
-    set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
+    set_can_do_io(db, db->max_insns == 1);
 
     return icount_start_insn;
 }
@@ -194,13 +194,7 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
     ops->tb_start(db, cpu);
     tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
 
-    if (cflags & CF_MEMI_ONLY) {
-        /* We should only see CF_MEMI_ONLY for io_recompile. */
-        assert(cflags & CF_LAST_IO);
-        plugin_enabled = plugin_gen_tb_start(cpu, db, true);
-    } else {
-        plugin_enabled = plugin_gen_tb_start(cpu, db, false);
-    }
+    plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
     db->plugin_enabled = plugin_enabled;
 
     while (true) {
@@ -255,9 +249,9 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
         if (backdoor == 0xf2) {
             backdoor = translator_ldub(cpu_env(cpu), db, db->pc_next +3);
             if (backdoor == 0x44) {
-                struct libafl_backdoor_hook* hk = libafl_backdoor_hooks;
-                while (hk) {
-                    TCGv_i64 tmp1 = tcg_constant_i64(hk->data);
+                struct libafl_backdoor_hook* bhk = libafl_backdoor_hooks;
+                while (bhk) {
+                    TCGv_i64 tmp1 = tcg_constant_i64(bhk->data);
 #if TARGET_LONG_BITS == 32
                     TCGv_i32 tmp0 = tcg_constant_i32(db->pc_next);
                     TCGTemp *tmp2[2] = { tcgv_i32_temp(tmp0), tcgv_i64_temp(tmp1) };
@@ -265,15 +259,15 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
                     TCGv_i64 tmp0 = tcg_constant_i64(db->pc_next);
                     TCGTemp *tmp2[2] = { tcgv_i64_temp(tmp0), tcgv_i64_temp(tmp1) };
 #endif
-                    // tcg_gen_callN(hk->exec, NULL, 2, tmp2);
-                    tcg_gen_callN(&hk->helper_info, NULL, tmp2);
+                    // tcg_gen_callN(bhk->exec, NULL, 2, tmp2);
+                    tcg_gen_callN(&bhk->helper_info, NULL, tmp2);
 #if TARGET_LONG_BITS == 32
                     tcg_temp_free_i32(tmp0);
 #else
                     tcg_temp_free_i64(tmp0);
 #endif
                     tcg_temp_free_i64(tmp1);
-                    hk = hk->next;
+                    bhk = bhk->next;
                 }
 
                 db->pc_next += 4;
@@ -285,11 +279,13 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
 
 //// --- End LibAFL code ---
 
-        /* Disassemble one instruction.  The translate_insn hook should
-           update db->pc_next and db->is_jmp to indicate what should be
-           done next -- either exiting this loop or locate the start of
-           the next instruction.  */
-        if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
+        /*
+         * Disassemble one instruction.  The translate_insn hook should
+         * update db->pc_next and db->is_jmp to indicate what should be
+         * done next -- either exiting this loop or locate the start of
+         * the next instruction.
+         */
+        if (db->num_insns == db->max_insns) {
             /* Accept I/O on the last instruction.  */
             set_can_do_io(db, true);
         }
@@ -14,6 +14,10 @@ void qemu_init_vcpu(CPUState *cpu)
 {
 }
 
+void cpu_exec_reset_hold(CPUState *cpu)
+{
+}
+
 /* User mode emulation does not support record/replay yet.  */
 
 bool replay_exception(void)
@@ -1781,7 +1781,7 @@ static AudioState *audio_init(Audiodev *dev, Error **errp)
 
     QTAILQ_INSERT_TAIL(&audio_states, s, list);
     QLIST_INIT (&s->card_head);
-    vmstate_register (NULL, 0, &vmstate_audio, s);
+    vmstate_register_any(NULL, &vmstate_audio, s);
     return s;
 
 out:
@@ -97,6 +97,10 @@ static int wav_init_out(HWVoiceOut *hw, struct audsettings *as,
         dolog ("WAVE files can not handle 32bit formats\n");
         return -1;
 
+    case AUDIO_FORMAT_F32:
+        dolog("WAVE files can not handle float formats\n");
+        return -1;
+
     default:
         abort();
     }
@@ -426,8 +426,7 @@ dbus_vmstate_complete(UserCreatable *uc, Error **errp)
         return;
     }
 
-    if (vmstate_register(VMSTATE_IF(self), VMSTATE_INSTANCE_ID_ANY,
-                         &dbus_vmstate, self) < 0) {
+    if (vmstate_register_any(VMSTATE_IF(self), &dbus_vmstate, self) < 0) {
         error_setg(errp, "Failed to register vmstate");
     }
 }
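The vmstate conversions in this commit (the audio, dbus-vmstate, and TPM emulator hunks) replace open-coded vmstate_register() calls that pass VMSTATE_INSTANCE_ID_ANY with vmstate_register_any(). A hedged sketch of what such a wrapper amounts to; the exact declaration in QEMU's headers may differ, and the _sketch suffix marks it as illustrative:

/* Illustrative only: presumed shape of the convenience wrapper used above. */
static inline int vmstate_register_any_sketch(VMStateIf *obj,
                                              const VMStateDescription *vmsd,
                                              void *opaque)
{
    /* Same as vmstate_register(), with the instance id chosen automatically. */
    return vmstate_register(obj, VMSTATE_INSTANCE_ID_ANY, vmsd, opaque);
}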
@@ -975,8 +975,7 @@ static void tpm_emulator_inst_init(Object *obj)
     qemu_add_vm_change_state_handler(tpm_emulator_vm_state_change,
                                      tpm_emu);
 
-    vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY,
-                     &vmstate_tpm_emulator, obj);
+    vmstate_register_any(NULL, &vmstate_tpm_emulator, obj);
 }
 
 /*
block.c
@@ -820,12 +820,17 @@ int bdrv_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
 int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
 {
     BlockDriver *drv = bs->drv;
-    BlockDriverState *filtered = bdrv_filter_bs(bs);
+    BlockDriverState *filtered;
 
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     if (drv && drv->bdrv_probe_geometry) {
         return drv->bdrv_probe_geometry(bs, geo);
-    } else if (filtered) {
+    }
+
+    filtered = bdrv_filter_bs(bs);
+    if (filtered) {
         return bdrv_probe_geometry(filtered, geo);
     }
 
@@ -1702,12 +1707,14 @@ bdrv_open_driver(BlockDriverState *bs, BlockDriver *drv, const char *node_name,
     return 0;
 open_failed:
     bs->drv = NULL;
+
+    bdrv_graph_wrlock(NULL);
     if (bs->file != NULL) {
-        bdrv_graph_wrlock(NULL);
         bdrv_unref_child(bs, bs->file);
-        bdrv_graph_wrunlock();
         assert(!bs->file);
     }
+    bdrv_graph_wrunlock();
+
     g_free(bs->opaque);
     bs->opaque = NULL;
     return ret;
@@ -1849,9 +1856,12 @@ static int bdrv_open_common(BlockDriverState *bs, BlockBackend *file,
     Error *local_err = NULL;
     bool ro;
 
+    GLOBAL_STATE_CODE();
+
+    bdrv_graph_rdlock_main_loop();
     assert(bs->file == NULL);
     assert(options != NULL && bs->options != options);
-    GLOBAL_STATE_CODE();
+    bdrv_graph_rdunlock_main_loop();
 
     opts = qemu_opts_create(&bdrv_runtime_opts, NULL, 0, &error_abort);
     if (!qemu_opts_absorb_qdict(opts, options, errp)) {
@@ -3209,8 +3219,6 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
 
     GLOBAL_STATE_CODE();
 
-    bdrv_graph_wrlock(child_bs);
-
     child = bdrv_attach_child_common(child_bs, child_name, child_class,
                                      child_role, perm, shared_perm, opaque,
                                      tran, errp);
@@ -3223,9 +3231,8 @@ BdrvChild *bdrv_root_attach_child(BlockDriverState *child_bs,
 
 out:
     tran_finalize(tran, ret);
-    bdrv_graph_wrunlock();
 
-    bdrv_unref(child_bs);
+    bdrv_schedule_unref(child_bs);
 
     return ret < 0 ? NULL : child;
 }
@@ -3530,19 +3537,7 @@ out:
  *
  * If a backing child is already present (i.e. we're detaching a node), that
  * child node must be drained.
- *
- * After calling this function, the transaction @tran may only be completed
- * while holding a writer lock for the graph.
  */
-static int GRAPH_WRLOCK
-bdrv_set_backing_noperm(BlockDriverState *bs,
-                        BlockDriverState *backing_hd,
-                        Transaction *tran, Error **errp)
-{
-    GLOBAL_STATE_CODE();
-    return bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
-}
-
 int bdrv_set_backing_hd_drained(BlockDriverState *bs,
                                 BlockDriverState *backing_hd,
                                 Error **errp)
@@ -3555,9 +3550,8 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
     if (bs->backing) {
         assert(bs->backing->bs->quiesce_counter > 0);
     }
-    bdrv_graph_wrlock(backing_hd);
 
-    ret = bdrv_set_backing_noperm(bs, backing_hd, tran, errp);
+    ret = bdrv_set_file_or_backing_noperm(bs, backing_hd, true, tran, errp);
     if (ret < 0) {
         goto out;
     }
@@ -3565,20 +3559,25 @@ int bdrv_set_backing_hd_drained(BlockDriverState *bs,
     ret = bdrv_refresh_perms(bs, tran, errp);
 out:
     tran_finalize(tran, ret);
-    bdrv_graph_wrunlock();
     return ret;
 }
 
 int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
                         Error **errp)
 {
-    BlockDriverState *drain_bs = bs->backing ? bs->backing->bs : bs;
+    BlockDriverState *drain_bs;
     int ret;
     GLOBAL_STATE_CODE();
 
+    bdrv_graph_rdlock_main_loop();
+    drain_bs = bs->backing ? bs->backing->bs : bs;
+    bdrv_graph_rdunlock_main_loop();
+
     bdrv_ref(drain_bs);
     bdrv_drained_begin(drain_bs);
+    bdrv_graph_wrlock(backing_hd);
     ret = bdrv_set_backing_hd_drained(bs, backing_hd, errp);
+    bdrv_graph_wrunlock();
     bdrv_drained_end(drain_bs);
     bdrv_unref(drain_bs);
 
@@ -3612,6 +3611,7 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
     Error *local_err = NULL;
 
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     if (bs->backing != NULL) {
         goto free_exit;
@@ -3653,10 +3653,7 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
         implicit_backing = !strcmp(bs->auto_backing_file, bs->backing_file);
     }
 
-    bdrv_graph_rdlock_main_loop();
     backing_filename = bdrv_get_full_backing_filename(bs, &local_err);
-    bdrv_graph_rdunlock_main_loop();
-
     if (local_err) {
         ret = -EINVAL;
         error_propagate(errp, local_err);
@@ -3687,9 +3684,7 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
     }
 
     if (implicit_backing) {
-        bdrv_graph_rdlock_main_loop();
         bdrv_refresh_filename(backing_hd);
-        bdrv_graph_rdunlock_main_loop();
         pstrcpy(bs->auto_backing_file, sizeof(bs->auto_backing_file),
                 backing_hd->filename);
     }
@@ -4760,8 +4755,8 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
 {
     BlockDriverState *bs = reopen_state->bs;
     BlockDriverState *new_child_bs;
-    BlockDriverState *old_child_bs = is_backing ? child_bs(bs->backing) :
-                                                  child_bs(bs->file);
+    BlockDriverState *old_child_bs;
+
     const char *child_name = is_backing ? "backing" : "file";
     QObject *value;
     const char *str;
@@ -4776,6 +4771,8 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         return 0;
     }
 
+    bdrv_graph_rdlock_main_loop();
+
     switch (qobject_type(value)) {
     case QTYPE_QNULL:
         assert(is_backing); /* The 'file' option does not allow a null value */
@@ -4785,17 +4782,16 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         str = qstring_get_str(qobject_to(QString, value));
         new_child_bs = bdrv_lookup_bs(NULL, str, errp);
         if (new_child_bs == NULL) {
-            return -EINVAL;
+            ret = -EINVAL;
+            goto out_rdlock;
         }
 
-        bdrv_graph_rdlock_main_loop();
         has_child = bdrv_recurse_has_child(new_child_bs, bs);
-        bdrv_graph_rdunlock_main_loop();
-
         if (has_child) {
             error_setg(errp, "Making '%s' a %s child of '%s' would create a "
                        "cycle", str, child_name, bs->node_name);
-            return -EINVAL;
+            ret = -EINVAL;
+            goto out_rdlock;
         }
         break;
     default:
@@ -4806,19 +4802,23 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         g_assert_not_reached();
     }
 
+    old_child_bs = is_backing ? child_bs(bs->backing) : child_bs(bs->file);
     if (old_child_bs == new_child_bs) {
-        return 0;
+        ret = 0;
+        goto out_rdlock;
     }
 
     if (old_child_bs) {
         if (bdrv_skip_implicit_filters(old_child_bs) == new_child_bs) {
-            return 0;
+            ret = 0;
+            goto out_rdlock;
        }
 
         if (old_child_bs->implicit) {
             error_setg(errp, "Cannot replace implicit %s child of %s",
                        child_name, bs->node_name);
-            return -EPERM;
+            ret = -EPERM;
+            goto out_rdlock;
         }
     }
 
@@ -4829,7 +4829,8 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
          */
         error_setg(errp, "'%s' is a %s filter node that does not support a "
                    "%s child", bs->node_name, bs->drv->format_name, child_name);
-        return -EINVAL;
+        ret = -EINVAL;
+        goto out_rdlock;
     }
 
     if (is_backing) {
@@ -4850,6 +4851,7 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
         aio_context_acquire(ctx);
     }
 
+    bdrv_graph_rdunlock_main_loop();
     bdrv_graph_wrlock(new_child_bs);
 
     ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
@@ -4868,6 +4870,10 @@ bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
     }
 
     return ret;
+
+out_rdlock:
+    bdrv_graph_rdunlock_main_loop();
+    return ret;
 }
 
 /*
@@ -5008,13 +5014,16 @@ bdrv_reopen_prepare(BDRVReopenState *reopen_state, BlockReopenQueue *queue,
      * file or if the image file has a backing file name as part of
      * its metadata. Otherwise the 'backing' option can be omitted.
      */
+    bdrv_graph_rdlock_main_loop();
     if (drv->supports_backing && reopen_state->backing_missing &&
         (reopen_state->bs->backing || reopen_state->bs->backing_file[0])) {
         error_setg(errp, "backing is missing for '%s'",
                    reopen_state->bs->node_name);
+        bdrv_graph_rdunlock_main_loop();
         ret = -EINVAL;
         goto error;
     }
+    bdrv_graph_rdunlock_main_loop();
 
     /*
      * Allow changing the 'backing' option. The new value can be
@@ -5200,14 +5209,15 @@ static void bdrv_close(BlockDriverState *bs)
         bs->drv = NULL;
     }
 
-    bdrv_graph_wrlock(NULL);
+    bdrv_graph_wrlock(bs);
     QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
         bdrv_unref_child(bs, child);
     }
-    bdrv_graph_wrunlock();
 
     assert(!bs->backing);
     assert(!bs->file);
+    bdrv_graph_wrunlock();
+
     g_free(bs->opaque);
     bs->opaque = NULL;
     qatomic_set(&bs->copy_on_read, 0);
@@ -5412,6 +5422,9 @@ bdrv_replace_node_noperm(BlockDriverState *from,
 }
 
 /*
+ * Switch all parents of @from to point to @to instead. @from and @to must be in
+ * the same AioContext and both must be drained.
+ *
  * With auto_skip=true bdrv_replace_node_common skips updating from parents
  * if it creates a parent-child relation loop or if parent is block-job.
  *
@@ -5421,10 +5434,9 @@ bdrv_replace_node_noperm(BlockDriverState *from,
 * With @detach_subchain=true @to must be in a backing chain of @from. In this
 * case backing link of the cow-parent of @to is removed.
 */
-static int bdrv_replace_node_common(BlockDriverState *from,
-                                    BlockDriverState *to,
-                                    bool auto_skip, bool detach_subchain,
-                                    Error **errp)
+static int GRAPH_WRLOCK
+bdrv_replace_node_common(BlockDriverState *from, BlockDriverState *to,
+                         bool auto_skip, bool detach_subchain, Error **errp)
 {
     Transaction *tran = tran_new();
     g_autoptr(GSList) refresh_list = NULL;
@@ -5433,6 +5445,10 @@ static int bdrv_replace_node_common(BlockDriverState *from,
 
     GLOBAL_STATE_CODE();
 
+    assert(from->quiesce_counter);
+    assert(to->quiesce_counter);
+    assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
+
     if (detach_subchain) {
         assert(bdrv_chain_contains(from, to));
         assert(from != to);
@@ -5444,17 +5460,6 @@ static int bdrv_replace_node_common(BlockDriverState *from,
         }
     }
 
-    /* Make sure that @from doesn't go away until we have successfully attached
-     * all of its parents to @to. */
-    bdrv_ref(from);
-
-    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
-    assert(bdrv_get_aio_context(from) == bdrv_get_aio_context(to));
-    bdrv_drained_begin(from);
-    bdrv_drained_begin(to);
-
-    bdrv_graph_wrlock(to);
-
     /*
      * Do the replacement without permission update.
      * Replacement may influence the permissions, we should calculate new
@@ -5483,29 +5488,33 @@ static int bdrv_replace_node_common(BlockDriverState *from,
 
 out:
     tran_finalize(tran, ret);
-    bdrv_graph_wrunlock();
 
-    bdrv_drained_end(to);
-    bdrv_drained_end(from);
-    bdrv_unref(from);
 
     return ret;
 }
 
 int bdrv_replace_node(BlockDriverState *from, BlockDriverState *to,
                       Error **errp)
 {
-    GLOBAL_STATE_CODE();
-
     return bdrv_replace_node_common(from, to, true, false, errp);
 }
 
 int bdrv_drop_filter(BlockDriverState *bs, Error **errp)
 {
+    BlockDriverState *child_bs;
+    int ret;
+
     GLOBAL_STATE_CODE();
 
-    return bdrv_replace_node_common(bs, bdrv_filter_or_cow_bs(bs), true, true,
-                                    errp);
+    bdrv_graph_rdlock_main_loop();
+    child_bs = bdrv_filter_or_cow_bs(bs);
+    bdrv_graph_rdunlock_main_loop();
+
+    bdrv_drained_begin(child_bs);
+    bdrv_graph_wrlock(bs);
+    ret = bdrv_replace_node_common(bs, child_bs, true, true, errp);
+    bdrv_graph_wrunlock();
+    bdrv_drained_end(child_bs);
+
+    return ret;
 }
 
 /*
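With the hunks above, bdrv_replace_node_common() becomes GRAPH_WRLOCK and asserts that both nodes are already drained, so taking the graph writer lock and draining moves out to the callers, as the new bdrv_drop_filter() body shows. A hedged sketch of that calling convention using QEMU's names as they appear in this tree; replace_node_example() itself is invented for illustration and is not part of the patch:

/* Illustrative caller only; mirrors the drain/wrlock pattern used above. */
static int replace_node_example(BlockDriverState *from, BlockDriverState *to,
                                Error **errp)
{
    int ret;

    bdrv_ref(from);            /* keep @from alive across the replacement */
    bdrv_drained_begin(from);
    bdrv_drained_begin(to);

    bdrv_graph_wrlock(to);     /* bdrv_replace_node() now expects the writer lock */
    ret = bdrv_replace_node(from, to, errp);
    bdrv_graph_wrunlock();

    bdrv_drained_end(to);
    bdrv_drained_end(from);
    bdrv_unref(from);
    return ret;
}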
@@ -5532,7 +5541,9 @@ int bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top,
 
     GLOBAL_STATE_CODE();
 
+    bdrv_graph_rdlock_main_loop();
     assert(!bs_new->backing);
+    bdrv_graph_rdunlock_main_loop();
 
     old_context = bdrv_get_aio_context(bs_top);
     bdrv_drained_begin(bs_top);
@@ -5700,9 +5711,19 @@ BlockDriverState *bdrv_insert_node(BlockDriverState *bs, QDict *options,
         goto fail;
     }
 
+    /*
+     * Make sure that @bs doesn't go away until we have successfully attached
+     * all of its parents to @new_node_bs and undrained it again.
+     */
+    bdrv_ref(bs);
     bdrv_drained_begin(bs);
+    bdrv_drained_begin(new_node_bs);
+    bdrv_graph_wrlock(new_node_bs);
     ret = bdrv_replace_node(bs, new_node_bs, errp);
+    bdrv_graph_wrunlock();
+    bdrv_drained_end(new_node_bs);
     bdrv_drained_end(bs);
+    bdrv_unref(bs);
 
     if (ret < 0) {
         error_prepend(errp, "Could not replace node: ");
@@ -5748,13 +5769,14 @@ int coroutine_fn bdrv_co_check(BlockDriverState *bs,
 * image file header
 * -ENOTSUP - format driver doesn't support changing the backing file
 */
-int bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
-                             const char *backing_fmt, bool require)
+int coroutine_fn
+bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
+                            const char *backing_fmt, bool require)
 {
     BlockDriver *drv = bs->drv;
     int ret;
 
-    GLOBAL_STATE_CODE();
+    IO_CODE();
 
     if (!drv) {
         return -ENOMEDIUM;
@@ -5769,8 +5791,8 @@ int bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
         return -EINVAL;
     }
 
-    if (drv->bdrv_change_backing_file != NULL) {
-        ret = drv->bdrv_change_backing_file(bs, backing_file, backing_fmt);
+    if (drv->bdrv_co_change_backing_file != NULL) {
+        ret = drv->bdrv_co_change_backing_file(bs, backing_file, backing_fmt);
     } else {
         ret = -ENOTSUP;
     }
@@ -5827,8 +5849,9 @@ BlockDriverState *bdrv_find_base(BlockDriverState *bs)
 * between @bs and @base is frozen. @errp is set if that's the case.
 * @base must be reachable from @bs, or NULL.
 */
-bool bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
-                                  Error **errp)
+static bool GRAPH_RDLOCK
+bdrv_is_backing_chain_frozen(BlockDriverState *bs, BlockDriverState *base,
+                             Error **errp)
 {
     BlockDriverState *i;
     BdrvChild *child;
@@ -5952,15 +5975,15 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
 
     bdrv_ref(top);
     bdrv_drained_begin(base);
-    bdrv_graph_rdlock_main_loop();
+    bdrv_graph_wrlock(base);
 
     if (!top->drv || !base->drv) {
-        goto exit;
+        goto exit_wrlock;
     }
 
     /* Make sure that base is in the backing chain of top */
     if (!bdrv_chain_contains(top, base)) {
-        goto exit;
+        goto exit_wrlock;
     }
 
     /* If 'base' recursively inherits from 'top' then we should set
@@ -5992,6 +6015,8 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
      * That's a FIXME.
      */
     bdrv_replace_node_common(top, base, false, false, &local_err);
+    bdrv_graph_wrunlock();
+
     if (local_err) {
         error_report_err(local_err);
         goto exit;
@@ -6024,8 +6049,11 @@ int bdrv_drop_intermediate(BlockDriverState *top, BlockDriverState *base,
     }
 
     ret = 0;
+    goto exit;
+
+exit_wrlock:
+    bdrv_graph_wrunlock();
 exit:
-    bdrv_graph_rdunlock_main_loop();
     bdrv_drained_end(base);
     bdrv_unref(top);
     return ret;
@@ -6587,7 +6615,7 @@ int bdrv_has_zero_init_1(BlockDriverState *bs)
     return 1;
 }
 
-int bdrv_has_zero_init(BlockDriverState *bs)
+int coroutine_mixed_fn bdrv_has_zero_init(BlockDriverState *bs)
 {
     BlockDriverState *filtered;
     GLOBAL_STATE_CODE();
@@ -8100,7 +8128,7 @@ static bool append_strong_runtime_options(QDict *d, BlockDriverState *bs)
 /* Note: This function may return false positives; it may return true
  * even if opening the backing file specified by bs's image header
  * would result in exactly bs->backing. */
-static bool bdrv_backing_overridden(BlockDriverState *bs)
+static bool GRAPH_RDLOCK bdrv_backing_overridden(BlockDriverState *bs)
 {
     GLOBAL_STATE_CODE();
     if (bs->backing) {
@@ -8474,8 +8502,8 @@ BdrvChild *bdrv_primary_child(BlockDriverState *bs)
     return found;
 }
 
-static BlockDriverState *bdrv_do_skip_filters(BlockDriverState *bs,
-                                              bool stop_on_explicit_filter)
+static BlockDriverState * GRAPH_RDLOCK
+bdrv_do_skip_filters(BlockDriverState *bs, bool stop_on_explicit_filter)
 {
     BdrvChild *c;
 
@@ -374,7 +374,6 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     assert(bs);
     assert(target);
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
 
     /* QMP interface protects us from these cases */
     assert(sync_mode != MIRROR_SYNC_MODE_INCREMENTAL);
@@ -385,31 +384,33 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
         return NULL;
     }
 
+    bdrv_graph_rdlock_main_loop();
     if (!bdrv_is_inserted(bs)) {
         error_setg(errp, "Device is not inserted: %s",
                    bdrv_get_device_name(bs));
-        return NULL;
+        goto error_rdlock;
     }
 
     if (!bdrv_is_inserted(target)) {
         error_setg(errp, "Device is not inserted: %s",
                    bdrv_get_device_name(target));
-        return NULL;
+        goto error_rdlock;
     }
 
     if (compress && !bdrv_supports_compressed_writes(target)) {
         error_setg(errp, "Compression is not supported for this drive %s",
                    bdrv_get_device_name(target));
-        return NULL;
+        goto error_rdlock;
     }
 
     if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_BACKUP_SOURCE, errp)) {
-        return NULL;
+        goto error_rdlock;
     }
 
     if (bdrv_op_is_blocked(target, BLOCK_OP_TYPE_BACKUP_TARGET, errp)) {
-        return NULL;
+        goto error_rdlock;
     }
+    bdrv_graph_rdunlock_main_loop();
 
     if (perf->max_workers < 1 || perf->max_workers > INT_MAX) {
         error_setg(errp, "max-workers must be between 1 and %d", INT_MAX);
@@ -437,6 +438,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
 
     len = bdrv_getlength(bs);
     if (len < 0) {
+        GRAPH_RDLOCK_GUARD_MAINLOOP();
         error_setg_errno(errp, -len, "Unable to get length for '%s'",
                          bdrv_get_device_or_node_name(bs));
         goto error;
@@ -444,6 +446,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
 
     target_len = bdrv_getlength(target);
     if (target_len < 0) {
+        GRAPH_RDLOCK_GUARD_MAINLOOP();
         error_setg_errno(errp, -target_len, "Unable to get length for '%s'",
                          bdrv_get_device_or_node_name(bs));
         goto error;
@@ -493,8 +496,10 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     block_copy_set_speed(bcs, speed);
 
     /* Required permissions are taken by copy-before-write filter target */
+    bdrv_graph_wrlock(target);
     block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
                        &error_abort);
+    bdrv_graph_wrunlock();
 
     return &job->common;
 
@@ -507,4 +512,8 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
     }
 
     return NULL;
+
+error_rdlock:
+    bdrv_graph_rdunlock_main_loop();
+    return NULL;
 }
|
@@ -508,6 +508,8 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         goto out;
     }

+    bdrv_graph_rdlock_main_loop();
+
     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
         (BDRV_REQ_FUA & bs->file->bs->supported_write_flags);
     bs->supported_zero_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -520,7 +522,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
     if (s->align && (s->align >= INT_MAX || !is_power_of_2(s->align))) {
         error_setg(errp, "Cannot meet constraints with align %" PRIu64,
                   s->align);
-        goto out;
+        goto out_rdlock;
     }
     align = MAX(s->align, bs->file->bs->bl.request_alignment);

@@ -530,7 +532,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->max_transfer, align))) {
         error_setg(errp, "Cannot meet constraints with max-transfer %" PRIu64,
                   s->max_transfer);
-        goto out;
+        goto out_rdlock;
     }

     s->opt_write_zero = qemu_opt_get_size(opts, "opt-write-zero", 0);
@@ -539,7 +541,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->opt_write_zero, align))) {
         error_setg(errp, "Cannot meet constraints with opt-write-zero %" PRIu64,
                   s->opt_write_zero);
-        goto out;
+        goto out_rdlock;
     }

     s->max_write_zero = qemu_opt_get_size(opts, "max-write-zero", 0);
@@ -549,7 +551,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         MAX(s->opt_write_zero, align)))) {
         error_setg(errp, "Cannot meet constraints with max-write-zero %" PRIu64,
                   s->max_write_zero);
-        goto out;
+        goto out_rdlock;
     }

     s->opt_discard = qemu_opt_get_size(opts, "opt-discard", 0);
@@ -558,7 +560,7 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         !QEMU_IS_ALIGNED(s->opt_discard, align))) {
         error_setg(errp, "Cannot meet constraints with opt-discard %" PRIu64,
                   s->opt_discard);
-        goto out;
+        goto out_rdlock;
     }

     s->max_discard = qemu_opt_get_size(opts, "max-discard", 0);
@@ -568,12 +570,14 @@ static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
         MAX(s->opt_discard, align)))) {
         error_setg(errp, "Cannot meet constraints with max-discard %" PRIu64,
                   s->max_discard);
-        goto out;
+        goto out_rdlock;
     }

     bdrv_debug_event(bs, BLKDBG_NONE);

     ret = 0;
+out_rdlock:
+    bdrv_graph_rdunlock_main_loop();
 out:
     if (ret < 0) {
         qemu_mutex_destroy(&s->lock);
@@ -746,13 +750,10 @@ blkdebug_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
     return bdrv_co_pdiscard(bs->file, offset, bytes);
 }

-static int coroutine_fn blkdebug_co_block_status(BlockDriverState *bs,
-                                                 bool want_zero,
-                                                 int64_t offset,
-                                                 int64_t bytes,
-                                                 int64_t *pnum,
-                                                 int64_t *map,
-                                                 BlockDriverState **file)
+static int coroutine_fn GRAPH_RDLOCK
+blkdebug_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
+                         int64_t bytes, int64_t *pnum, int64_t *map,
+                         BlockDriverState **file)
 {
     int err;

@@ -973,7 +974,7 @@ blkdebug_co_getlength(BlockDriverState *bs)
     return bdrv_co_getlength(bs->file->bs);
 }

-static void blkdebug_refresh_filename(BlockDriverState *bs)
+static void GRAPH_RDLOCK blkdebug_refresh_filename(BlockDriverState *bs)
 {
     BDRVBlkdebugState *s = bs->opaque;
     const QDictEntry *e;
@@ -13,6 +13,7 @@
 #include "block/block_int.h"
 #include "exec/memory.h"
 #include "exec/cpu-common.h" /* for qemu_ram_get_fd() */
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "qapi/qmp/qdict.h"
@@ -312,10 +313,10 @@ static void blkio_detach_aio_context(BlockDriverState *bs)
 }

 /*
- * Called by blk_io_unplug() or immediately if not plugged. Called without
- * blkio_lock.
+ * Called by defer_call_end() or immediately if not in a deferred section.
+ * Called without blkio_lock.
  */
-static void blkio_unplug_fn(void *opaque)
+static void blkio_deferred_fn(void *opaque)
 {
     BDRVBlkioState *s = opaque;

@@ -332,7 +333,7 @@ static void blkio_submit_io(BlockDriverState *bs)
 {
     BDRVBlkioState *s = bs->opaque;

-    blk_io_plug_call(blkio_unplug_fn, s);
+    defer_call(blkio_deferred_fn, s);
 }

 static int coroutine_fn
@@ -130,7 +130,13 @@ static int coroutine_fn GRAPH_RDLOCK blkreplay_co_flush(BlockDriverState *bs)
 static int blkreplay_snapshot_goto(BlockDriverState *bs,
                                    const char *snapshot_id)
 {
-    return bdrv_snapshot_goto(bs->file->bs, snapshot_id, NULL);
+    BlockDriverState *file_bs;
+
+    bdrv_graph_rdlock_main_loop();
+    file_bs = bs->file->bs;
+    bdrv_graph_rdunlock_main_loop();
+
+    return bdrv_snapshot_goto(file_bs, snapshot_id, NULL);
 }

 static BlockDriver bdrv_blkreplay = {
@@ -33,8 +33,8 @@ typedef struct BlkverifyRequest {
     uint64_t bytes;
     int flags;

-    int (*request_fn)(BdrvChild *, int64_t, int64_t, QEMUIOVector *,
-                      BdrvRequestFlags);
+    int GRAPH_RDLOCK_PTR (*request_fn)(
+        BdrvChild *, int64_t, int64_t, QEMUIOVector *, BdrvRequestFlags);

     int ret; /* test image result */
     int raw_ret; /* raw image result */
@@ -170,8 +170,11 @@ static void coroutine_fn blkverify_do_test_req(void *opaque)
     BlkverifyRequest *r = opaque;
     BDRVBlkverifyState *s = r->bs->opaque;

+    bdrv_graph_co_rdlock();
     r->ret = r->request_fn(s->test_file, r->offset, r->bytes, r->qiov,
                            r->flags);
+    bdrv_graph_co_rdunlock();

     r->done++;
     qemu_coroutine_enter_if_inactive(r->co);
 }
@@ -180,13 +183,16 @@ static void coroutine_fn blkverify_do_raw_req(void *opaque)
 {
     BlkverifyRequest *r = opaque;

+    bdrv_graph_co_rdlock();
     r->raw_ret = r->request_fn(r->bs->file, r->offset, r->bytes, r->raw_qiov,
                                r->flags);
+    bdrv_graph_co_rdunlock();

     r->done++;
     qemu_coroutine_enter_if_inactive(r->co);
 }

-static int coroutine_fn
+static int coroutine_fn GRAPH_RDLOCK
 blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
                   uint64_t bytes, QEMUIOVector *qiov, QEMUIOVector *raw_qiov,
                   int flags, bool is_write)
@@ -222,7 +228,7 @@ blkverify_co_prwv(BlockDriverState *bs, BlkverifyRequest *r, uint64_t offset,
     return r->ret;
 }

-static int coroutine_fn
+static int coroutine_fn GRAPH_RDLOCK
 blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
                     QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
@@ -251,7 +257,7 @@ blkverify_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return ret;
 }

-static int coroutine_fn
+static int coroutine_fn GRAPH_RDLOCK
 blkverify_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
                      QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
@@ -282,7 +288,7 @@ blkverify_recurse_can_replace(BlockDriverState *bs,
            bdrv_recurse_can_replace(s->test_file->bs, to_replace);
 }

-static void blkverify_refresh_filename(BlockDriverState *bs)
+static void GRAPH_RDLOCK blkverify_refresh_filename(BlockDriverState *bs)
 {
     BDRVBlkverifyState *s = bs->opaque;

@@ -931,10 +931,12 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState *bs, Error **errp)
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
     GLOBAL_STATE_CODE();
     bdrv_ref(bs);
+    bdrv_graph_wrlock(bs);
     blk->root = bdrv_root_attach_child(bs, "root", &child_root,
                                        BDRV_CHILD_FILTERED | BDRV_CHILD_PRIMARY,
                                        blk->perm, blk->shared_perm,
                                        blk, errp);
+    bdrv_graph_wrunlock();
     if (blk->root == NULL) {
         return -EPERM;
     }
@@ -2666,6 +2668,8 @@ int blk_load_vmstate(BlockBackend *blk, uint8_t *buf, int64_t pos, int size)
 int blk_probe_blocksizes(BlockBackend *blk, BlockSizes *bsz)
 {
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();

     if (!blk_is_available(blk)) {
         return -ENOMEDIUM;
     }
@@ -2726,6 +2730,7 @@ int blk_commit_all(void)
 {
     BlockBackend *blk = NULL;
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();

     while ((blk = blk_all_next(blk)) != NULL) {
         AioContext *aio_context = blk_get_aio_context(blk);
@@ -313,7 +313,12 @@ static int64_t block_copy_calculate_cluster_size(BlockDriverState *target,
 {
     int ret;
     BlockDriverInfo bdi;
-    bool target_does_cow = bdrv_backing_chain_next(target);
+    bool target_does_cow;

+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
+    target_does_cow = bdrv_backing_chain_next(target);
+
     /*
      * If there is no backing file on the target, we cannot rely on COW if our
@@ -355,6 +360,8 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
     BdrvDirtyBitmap *copy_bitmap;
     bool is_fleecing;

+    GLOBAL_STATE_CODE();
+
     cluster_size = block_copy_calculate_cluster_size(target->bs, errp);
     if (cluster_size < 0) {
         return NULL;
@@ -392,7 +399,9 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
      * For more information see commit f8d59dfb40bb and test
      * tests/qemu-iotests/222
      */
+    bdrv_graph_rdlock_main_loop();
     is_fleecing = bdrv_chain_contains(target->bs, source->bs);
+    bdrv_graph_rdunlock_main_loop();

     s = g_new(BlockCopyState, 1);
     *s = (BlockCopyState) {
@@ -105,6 +105,8 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
     struct bochs_header bochs;
     int ret;

+    GLOBAL_STATE_CODE();
+
     /* No write support yet */
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
@@ -118,6 +120,8 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     ret = bdrv_pread(bs->file, 0, sizeof(bochs), &bochs, 0);
     if (ret < 0) {
         return ret;
@@ -67,6 +67,8 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
     uint32_t offsets_size, max_compressed_block_size = 1, i;
     int ret;

+    GLOBAL_STATE_CODE();
+
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
     bdrv_graph_rdunlock_main_loop();
@@ -79,6 +81,8 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     /* read header */
     ret = bdrv_pread(bs->file, 128, 4, &s->block_size, 0);
     if (ret < 0) {
@@ -48,8 +48,10 @@ static int commit_prepare(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);

+    bdrv_graph_rdlock_main_loop();
     bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
     s->chain_frozen = false;
+    bdrv_graph_rdunlock_main_loop();

     /* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before
      * the normal backing chain can be restored. */
@@ -66,9 +68,12 @@ static void commit_abort(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
     BlockDriverState *top_bs = blk_bs(s->top);
+    BlockDriverState *commit_top_backing_bs;

     if (s->chain_frozen) {
+        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(s->commit_top_bs, s->base_bs);
+        bdrv_graph_rdunlock_main_loop();
     }

     /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
@@ -90,8 +95,15 @@ static void commit_abort(Job *job)
      * XXX Can (or should) we somehow keep 'consistent read' blocked even
      * after the failed/cancelled commit job is gone? If we already wrote
      * something to base, the intermediate images aren't valid any more. */
-    bdrv_replace_node(s->commit_top_bs, s->commit_top_bs->backing->bs,
-                      &error_abort);
+    bdrv_graph_rdlock_main_loop();
+    commit_top_backing_bs = s->commit_top_bs->backing->bs;
+    bdrv_graph_rdunlock_main_loop();
+
+    bdrv_drained_begin(commit_top_backing_bs);
+    bdrv_graph_wrlock(commit_top_backing_bs);
+    bdrv_replace_node(s->commit_top_bs, commit_top_backing_bs, &error_abort);
+    bdrv_graph_wrunlock();
+    bdrv_drained_end(commit_top_backing_bs);

     bdrv_unref(s->commit_top_bs);
     bdrv_unref(top_bs);
@@ -210,7 +222,7 @@ bdrv_commit_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
 }

-static void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
+static GRAPH_RDLOCK void bdrv_commit_top_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->backing->bs->filename);
@@ -255,10 +267,13 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     GLOBAL_STATE_CODE();

     assert(top != bs);
+    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(top) == bdrv_skip_filters(base)) {
         error_setg(errp, "Invalid files for merge: top and base are the same");
+        bdrv_graph_rdunlock_main_loop();
         return;
     }
+    bdrv_graph_rdunlock_main_loop();

     base_size = bdrv_getlength(base);
     if (base_size < 0) {
@@ -324,6 +339,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
      * this is the responsibility of the interface (i.e. whoever calls
      * commit_start()).
      */
+    bdrv_graph_wrlock(top);
     s->base_overlay = bdrv_find_overlay(top, base);
     assert(s->base_overlay);

@@ -354,16 +370,20 @@ void commit_start(const char *job_id, BlockDriverState *bs,
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
+            bdrv_graph_wrunlock();
             goto fail;
         }
     }

     if (bdrv_freeze_backing_chain(commit_top_bs, base, errp) < 0) {
+        bdrv_graph_wrunlock();
         goto fail;
     }
     s->chain_frozen = true;

     ret = block_job_add_bdrv(&s->common, "base", base, 0, BLK_PERM_ALL, errp);
+    bdrv_graph_wrunlock();

     if (ret < 0) {
         goto fail;
     }
@@ -396,7 +416,9 @@ void commit_start(const char *job_id, BlockDriverState *bs,

 fail:
     if (s->chain_frozen) {
+        bdrv_graph_rdlock_main_loop();
         bdrv_unfreeze_backing_chain(commit_top_bs, base);
+        bdrv_graph_rdunlock_main_loop();
     }
     if (s->base) {
         blk_unref(s->base);
@@ -411,7 +433,11 @@ fail:
     /* commit_top_bs has to be replaced after deleting the block job,
      * otherwise this would fail because of lack of permissions. */
     if (commit_top_bs) {
+        bdrv_drained_begin(top);
+        bdrv_graph_wrlock(top);
         bdrv_replace_node(commit_top_bs, top, &error_abort);
+        bdrv_graph_wrunlock();
+        bdrv_drained_end(top);
     }
 }

@@ -203,7 +203,7 @@ static int coroutine_fn GRAPH_RDLOCK cbw_co_flush(BlockDriverState *bs)
  * It's guaranteed that guest writes will not interact in the region until
  * cbw_snapshot_read_unlock() called.
  */
-static coroutine_fn BlockReq *
+static BlockReq * coroutine_fn GRAPH_RDLOCK
 cbw_snapshot_read_lock(BlockDriverState *bs, int64_t offset, int64_t bytes,
                        int64_t *pnum, BdrvChild **file)
 {
@@ -335,7 +335,7 @@ cbw_co_pdiscard_snapshot(BlockDriverState *bs, int64_t offset, int64_t bytes)
     return bdrv_co_pdiscard(s->target, offset, bytes);
 }

-static void cbw_refresh_filename(BlockDriverState *bs)
+static void GRAPH_RDLOCK cbw_refresh_filename(BlockDriverState *bs)
 {
     pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
             bs->file->bs->filename);
@@ -433,6 +433,8 @@ static int cbw_open(BlockDriverState *bs, QDict *options, int flags,
         return -EINVAL;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     ctx = bdrv_get_aio_context(bs);
     aio_context_acquire(ctx);

@@ -35,8 +35,8 @@ typedef struct BDRVStateCOR {
 } BDRVStateCOR;


-static int cor_open(BlockDriverState *bs, QDict *options, int flags,
-                    Error **errp)
+static int GRAPH_UNLOCKED
+cor_open(BlockDriverState *bs, QDict *options, int flags, Error **errp)
 {
     BlockDriverState *bottom_bs = NULL;
     BDRVStateCOR *state = bs->opaque;
@@ -44,11 +44,15 @@ static int cor_open(BlockDriverState *bs, QDict *options, int flags,
     const char *bottom_node = qdict_get_try_str(options, "bottom");
     int ret;

+    GLOBAL_STATE_CODE();
+
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     bs->supported_read_flags = BDRV_REQ_PREFETCH;

     bs->supported_write_flags = BDRV_REQ_WRITE_UNCHANGED |
@@ -227,13 +231,17 @@ cor_co_lock_medium(BlockDriverState *bs, bool locked)
 }


-static void cor_close(BlockDriverState *bs)
+static void GRAPH_UNLOCKED cor_close(BlockDriverState *bs)
 {
     BDRVStateCOR *s = bs->opaque;

+    GLOBAL_STATE_CODE();
+
     if (s->chain_frozen) {
+        bdrv_graph_rdlock_main_loop();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(bs, s->bottom_bs);
+        bdrv_graph_rdunlock_main_loop();
     }

     bdrv_unref(s->bottom_bs);
@@ -263,12 +271,15 @@ static BlockDriver bdrv_copy_on_read = {
 };


-void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
+void no_coroutine_fn bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs)
 {
     BDRVStateCOR *s = cor_filter_bs->opaque;

+    GLOBAL_STATE_CODE();
+
     /* unfreeze, as otherwise bdrv_replace_node() will fail */
     if (s->chain_frozen) {
+        GRAPH_RDLOCK_GUARD_MAINLOOP();
         s->chain_frozen = false;
         bdrv_unfreeze_backing_chain(cor_filter_bs, s->bottom_bs);
     }
@@ -27,6 +27,7 @@

 #include "block/block_int.h"

-void bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);
+void no_coroutine_fn GRAPH_UNLOCKED
+bdrv_cor_filter_drop(BlockDriverState *cor_filter_bs);

 #endif /* BLOCK_COPY_ON_READ_H */
@@ -65,6 +65,9 @@ static int block_crypto_read_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;

+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     ret = bdrv_pread(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not read encryption header");
@@ -83,6 +86,9 @@ static int block_crypto_write_func(QCryptoBlock *block,
     BlockDriverState *bs = opaque;
     ssize_t ret;

+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     ret = bdrv_pwrite(bs->file, offset, buflen, buf, 0);
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not write encryption header");
@@ -263,11 +269,15 @@ static int block_crypto_open_generic(QCryptoBlockFormat format,
     unsigned int cflags = 0;
     QDict *cryptoopts = NULL;

+    GLOBAL_STATE_CODE();
+
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     bs->supported_write_flags = BDRV_REQ_FUA &
         bs->file->bs->supported_write_flags;

block/dmg.c | 21
@@ -70,7 +70,8 @@ static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
     return 0;
 }

-static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
+static int GRAPH_RDLOCK
+read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
 {
     uint64_t buffer;
     int ret;
@@ -84,7 +85,8 @@ static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
     return 0;
 }

-static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
+static int GRAPH_RDLOCK
+read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
 {
     uint32_t buffer;
     int ret;
@@ -321,8 +323,9 @@ fail:
     return ret;
 }

-static int dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
-                                  uint64_t info_begin, uint64_t info_length)
+static int GRAPH_RDLOCK
+dmg_read_resource_fork(BlockDriverState *bs, DmgHeaderState *ds,
+                       uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -388,8 +391,9 @@ fail:
     return ret;
 }

-static int dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
-                              uint64_t info_begin, uint64_t info_length)
+static int GRAPH_RDLOCK
+dmg_read_plist_xml(BlockDriverState *bs, DmgHeaderState *ds,
+                   uint64_t info_begin, uint64_t info_length)
 {
     BDRVDMGState *s = bs->opaque;
     int ret;
@@ -452,6 +456,8 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     int64_t offset;
     int ret;

+    GLOBAL_STATE_CODE();
+
     bdrv_graph_rdlock_main_loop();
     ret = bdrv_apply_auto_read_only(bs, NULL, errp);
     bdrv_graph_rdunlock_main_loop();
@@ -463,6 +469,9 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
     if (ret < 0) {
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     /*
      * NB: if uncompress submodules are absent,
      * ie block_module_load return value == 0, the function pointers
@@ -160,7 +160,6 @@ typedef struct BDRVRawState {
     bool has_write_zeroes:1;
     bool use_linux_aio:1;
     bool use_linux_io_uring:1;
-    int64_t *offset; /* offset of zone append operation */
     int page_cache_inconsistent; /* errno from fdatasync failure */
     bool has_fallocate;
     bool needs_alignment;
@@ -2445,12 +2444,13 @@ static bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
     return true;
 }

-static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
+static int coroutine_fn raw_co_prw(BlockDriverState *bs, int64_t *offset_ptr,
                                    uint64_t bytes, QEMUIOVector *qiov, int type)
 {
     BDRVRawState *s = bs->opaque;
     RawPosixAIOData acb;
     int ret;
+    uint64_t offset = *offset_ptr;

     if (fd_open(bs) < 0)
         return -EIO;
@@ -2513,8 +2513,8 @@ out:
             uint64_t *wp = &wps->wp[offset / bs->bl.zone_size];
             if (!BDRV_ZT_IS_CONV(*wp)) {
                 if (type & QEMU_AIO_ZONE_APPEND) {
-                    *s->offset = *wp;
-                    trace_zbd_zone_append_complete(bs, *s->offset
+                    *offset_ptr = *wp;
+                    trace_zbd_zone_append_complete(bs, *offset_ptr
                                                    >> BDRV_SECTOR_BITS);
                 }
                 /* Advance the wp if needed */
@@ -2523,7 +2523,10 @@ out:
                 }
             }
         } else {
-            update_zones_wp(bs, s->fd, 0, 1);
+            /*
+             * write and append write are not allowed to cross zone boundaries
+             */
+            update_zones_wp(bs, s->fd, offset, 1);
         }

         qemu_co_mutex_unlock(&wps->colock);
@@ -2536,14 +2539,14 @@ static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
                                       int64_t bytes, QEMUIOVector *qiov,
                                       BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_READ);
+    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_READ);
 }

 static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, int64_t offset,
                                        int64_t bytes, QEMUIOVector *qiov,
                                        BdrvRequestFlags flags)
 {
-    return raw_co_prw(bs, offset, bytes, qiov, QEMU_AIO_WRITE);
+    return raw_co_prw(bs, &offset, bytes, qiov, QEMU_AIO_WRITE);
 }

 static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
@@ -3470,7 +3473,7 @@ static int coroutine_fn raw_co_zone_mgmt(BlockDriverState *bs, BlockZoneOp op,
                            len >> BDRV_SECTOR_BITS);
         ret = raw_thread_pool_submit(handle_aiocb_zone_mgmt, &acb);
         if (ret != 0) {
-            update_zones_wp(bs, s->fd, offset, i);
+            update_zones_wp(bs, s->fd, offset, nrz);
             error_report("ioctl %s failed %d", op_name, ret);
             return ret;
         }
@@ -3506,8 +3509,6 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     int64_t zone_size_mask = bs->bl.zone_size - 1;
     int64_t iov_len = 0;
     int64_t len = 0;
-    BDRVRawState *s = bs->opaque;
-    s->offset = offset;

     if (*offset & zone_size_mask) {
         error_report("sector offset %" PRId64 " is not aligned to zone size "
@@ -3528,7 +3529,7 @@ static int coroutine_fn raw_co_zone_append(BlockDriverState *bs,
     }

     trace_zbd_zone_append(bs, *offset >> BDRV_SECTOR_BITS);
-    return raw_co_prw(bs, *offset, len, qiov, QEMU_AIO_ZONE_APPEND);
+    return raw_co_prw(bs, offset, len, qiov, QEMU_AIO_ZONE_APPEND);
 }
 #endif

@@ -36,6 +36,8 @@ static int compress_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }

+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if (!bs->file->bs->drv || !block_driver_can_compress(bs->file->bs->drv)) {
         error_setg(errp,
                    "Compression is not supported for underlying format: %s",
@@ -97,7 +99,8 @@ compress_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 }


-static void compress_refresh_limits(BlockDriverState *bs, Error **errp)
+static void GRAPH_RDLOCK
+compress_refresh_limits(BlockDriverState *bs, Error **errp)
 {
     BlockDriverInfo bdi;
     int ret;
@@ -3685,6 +3685,8 @@ out:
 void bdrv_cancel_in_flight(BlockDriverState *bs)
 {
     GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();

     if (!bs || !bs->drv) {
         return;
     }
@@ -15,6 +15,7 @@
 #include "block/block.h"
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"
 #include "trace.h"
@@ -124,6 +125,9 @@ static void luring_process_completions(LuringState *s)
 {
     struct io_uring_cqe *cqes;
     int total_bytes;

+    defer_call_begin();
+
     /*
      * Request completion callbacks can run the nested event loop.
      * Schedule ourselves so the nested event loop will "see" remaining
@@ -216,7 +220,10 @@ end:
             aio_co_wake(luringcb->co);
         }
     }

     qemu_bh_cancel(s->completion_bh);

+    defer_call_end();
 }

 static int ioq_submit(LuringState *s)
@@ -306,7 +313,7 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }

-static void luring_unplug_fn(void *opaque)
+static void luring_deferred_fn(void *opaque)
 {
     LuringState *s = opaque;
     trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
@@ -367,7 +374,7 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
             return ret;
         }

-        blk_io_plug_call(luring_unplug_fn, s);
+        defer_call(luring_deferred_fn, s);
     }
     return 0;
 }
@@ -14,6 +14,7 @@
 #include "block/raw-aio.h"
 #include "qemu/event_notifier.h"
 #include "qemu/coroutine.h"
+#include "qemu/defer-call.h"
 #include "qapi/error.h"
 #include "sysemu/block-backend.h"

@@ -204,6 +205,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 {
     struct io_event *events;

+    defer_call_begin();
+
     /* Reschedule so nested event loops see currently pending completions */
     qemu_bh_schedule(s->completion_bh);

@@ -230,6 +233,8 @@ static void qemu_laio_process_completions(LinuxAioState *s)
      * own `for` loop. If we are the last all counters dropped to zero. */
     s->event_max = 0;
     s->event_idx = 0;

+    defer_call_end();
 }

 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
@@ -353,7 +358,7 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }

-static void laio_unplug_fn(void *opaque)
+static void laio_deferred_fn(void *opaque)
 {
     LinuxAioState *s = opaque;

@@ -393,7 +398,7 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
         if (s->io_q.in_queue >= laio_max_batch(s, dev_max_batch)) {
             ioq_submit(s);
         } else {
-            blk_io_plug_call(laio_unplug_fn, s);
+            defer_call(laio_deferred_fn, s);
         }
     }

@@ -21,7 +21,6 @@ block_ss.add(files(
   'mirror.c',
   'nbd.c',
   'null.c',
-  'plug.c',
   'preallocate.c',
   'progress_meter.c',
   'qapi.c',
block/mirror.c | 219
@ -55,10 +55,18 @@ typedef struct MirrorBlockJob {
|
|||||||
BlockMirrorBackingMode backing_mode;
|
BlockMirrorBackingMode backing_mode;
|
||||||
/* Whether the target image requires explicit zero-initialization */
|
/* Whether the target image requires explicit zero-initialization */
|
||||||
bool zero_target;
|
bool zero_target;
|
||||||
|
/*
|
||||||
|
* To be accesssed with atomics. Written only under the BQL (required by the
|
||||||
|
* current implementation of mirror_change()).
|
||||||
|
*/
|
||||||
MirrorCopyMode copy_mode;
|
MirrorCopyMode copy_mode;
|
||||||
BlockdevOnError on_source_error, on_target_error;
|
BlockdevOnError on_source_error, on_target_error;
|
||||||
/* Set when the target is synced (dirty bitmap is clean, nothing
|
/*
|
||||||
* in flight) and the job is running in active mode */
|
* To be accessed with atomics.
|
||||||
|
*
|
||||||
|
* Set when the target is synced (dirty bitmap is clean, nothing in flight)
|
||||||
|
* and the job is running in active mode.
|
||||||
|
*/
|
||||||
bool actively_synced;
|
bool actively_synced;
|
||||||
bool should_complete;
|
bool should_complete;
|
||||||
int64_t granularity;
|
int64_t granularity;
|
||||||
@ -122,7 +130,7 @@ typedef enum MirrorMethod {
|
|||||||
static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
|
static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
|
||||||
int error)
|
int error)
|
||||||
{
|
{
|
||||||
s->actively_synced = false;
|
qatomic_set(&s->actively_synced, false);
|
||||||
if (read) {
|
if (read) {
|
||||||
return block_job_error_action(&s->common, s->on_source_error,
|
return block_job_error_action(&s->common, s->on_source_error,
|
||||||
true, error);
|
true, error);
|
||||||
@ -471,7 +479,7 @@ static unsigned mirror_perform(MirrorBlockJob *s, int64_t offset,
|
|||||||
return bytes_handled;
|
return bytes_handled;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
|
static void coroutine_fn GRAPH_RDLOCK mirror_iteration(MirrorBlockJob *s)
|
||||||
{
|
{
|
||||||
BlockDriverState *source = s->mirror_top_bs->backing->bs;
|
BlockDriverState *source = s->mirror_top_bs->backing->bs;
|
||||||
MirrorOp *pseudo_op;
|
MirrorOp *pseudo_op;
|
||||||
@ -670,6 +678,7 @@ static int mirror_exit_common(Job *job)
|
|||||||
s->prepared = true;
|
s->prepared = true;
|
||||||
|
|
||||||
aio_context_acquire(qemu_get_aio_context());
|
aio_context_acquire(qemu_get_aio_context());
|
||||||
|
bdrv_graph_rdlock_main_loop();
|
||||||
|
|
||||||
mirror_top_bs = s->mirror_top_bs;
|
mirror_top_bs = s->mirror_top_bs;
|
||||||
bs_opaque = mirror_top_bs->opaque;
|
bs_opaque = mirror_top_bs->opaque;
|
||||||
@ -688,6 +697,8 @@ static int mirror_exit_common(Job *job)
|
|||||||
bdrv_ref(mirror_top_bs);
|
bdrv_ref(mirror_top_bs);
|
||||||
bdrv_ref(target_bs);
|
bdrv_ref(target_bs);
|
||||||
|
|
||||||
|
bdrv_graph_rdunlock_main_loop();
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
|
* Remove target parent that still uses BLK_PERM_WRITE/RESIZE before
|
||||||
* inserting target_bs at s->to_replace, where we might not be able to get
|
* inserting target_bs at s->to_replace, where we might not be able to get
|
||||||
@ -701,12 +712,12 @@ static int mirror_exit_common(Job *job)
|
|||||||
* these permissions any more means that we can't allow any new requests on
|
* these permissions any more means that we can't allow any new requests on
|
||||||
* mirror_top_bs from now on, so keep it drained. */
|
* mirror_top_bs from now on, so keep it drained. */
|
||||||
bdrv_drained_begin(mirror_top_bs);
|
bdrv_drained_begin(mirror_top_bs);
|
||||||
|
bdrv_drained_begin(target_bs);
|
||||||
bs_opaque->stop = true;
|
bs_opaque->stop = true;
|
||||||
|
|
||||||
bdrv_graph_rdlock_main_loop();
|
bdrv_graph_rdlock_main_loop();
|
||||||
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
|
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
|
||||||
&error_abort);
|
&error_abort);
|
||||||
bdrv_graph_rdunlock_main_loop();
|
|
||||||
|
|
||||||
if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
|
if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
|
||||||
BlockDriverState *backing = s->is_none_mode ? src : s->base;
|
BlockDriverState *backing = s->is_none_mode ? src : s->base;
|
||||||
@ -729,6 +740,7 @@ static int mirror_exit_common(Job *job)
|
|||||||
local_err = NULL;
|
local_err = NULL;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
bdrv_graph_rdunlock_main_loop();
|
||||||
|
|
||||||
if (s->to_replace) {
|
if (s->to_replace) {
|
||||||
replace_aio_context = bdrv_get_aio_context(s->to_replace);
|
replace_aio_context = bdrv_get_aio_context(s->to_replace);
|
||||||
@ -746,15 +758,13 @@ static int mirror_exit_common(Job *job)
|
|||||||
/* The mirror job has no requests in flight any more, but we need to
|
/* The mirror job has no requests in flight any more, but we need to
|
||||||
* drain potential other users of the BDS before changing the graph. */
|
* drain potential other users of the BDS before changing the graph. */
|
||||||
assert(s->in_drain);
|
assert(s->in_drain);
|
||||||
bdrv_drained_begin(target_bs);
|
bdrv_drained_begin(to_replace);
|
||||||
/*
|
/*
|
||||||
* Cannot use check_to_replace_node() here, because that would
|
* Cannot use check_to_replace_node() here, because that would
|
||||||
* check for an op blocker on @to_replace, and we have our own
|
* check for an op blocker on @to_replace, and we have our own
|
||||||
* there.
|
* there.
|
||||||
*
|
|
||||||
* TODO Pull out the writer lock from bdrv_replace_node() to here
|
|
||||||
*/
|
*/
|
||||||
bdrv_graph_rdlock_main_loop();
|
bdrv_graph_wrlock(target_bs);
|
||||||
if (bdrv_recurse_can_replace(src, to_replace)) {
|
if (bdrv_recurse_can_replace(src, to_replace)) {
|
||||||
bdrv_replace_node(to_replace, target_bs, &local_err);
|
bdrv_replace_node(to_replace, target_bs, &local_err);
|
||||||
} else {
|
} else {
|
||||||
@ -763,8 +773,8 @@ static int mirror_exit_common(Job *job)
|
|||||||
"would not lead to an abrupt change of visible data",
|
"would not lead to an abrupt change of visible data",
|
||||||
to_replace->node_name, target_bs->node_name);
|
to_replace->node_name, target_bs->node_name);
|
||||||
}
|
}
|
||||||
bdrv_graph_rdunlock_main_loop();
|
bdrv_graph_wrunlock();
|
||||||
bdrv_drained_end(target_bs);
|
bdrv_drained_end(to_replace);
|
||||||
if (local_err) {
|
if (local_err) {
|
||||||
error_report_err(local_err);
|
error_report_err(local_err);
|
||||||
ret = -EPERM;
|
ret = -EPERM;
|
||||||
@ -779,7 +789,6 @@ static int mirror_exit_common(Job *job)
|
|||||||
aio_context_release(replace_aio_context);
|
aio_context_release(replace_aio_context);
|
||||||
}
|
}
|
||||||
g_free(s->replaces);
|
     g_free(s->replaces);
-    bdrv_unref(target_bs);
 
     /*
      * Remove the mirror filter driver from the graph. Before this, get rid of
@@ -787,7 +796,12 @@ static int mirror_exit_common(Job *job)
      * valid.
      */
     block_job_remove_all_bdrv(bjob);
+    bdrv_graph_wrlock(mirror_top_bs);
     bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
+    bdrv_graph_wrunlock();
+
+    bdrv_drained_end(target_bs);
+    bdrv_unref(target_bs);
 
     bs_opaque->job = NULL;
 
@@ -825,14 +839,18 @@ static void coroutine_fn mirror_throttle(MirrorBlockJob *s)
     }
 }
 
-static int coroutine_fn mirror_dirty_init(MirrorBlockJob *s)
+static int coroutine_fn GRAPH_UNLOCKED mirror_dirty_init(MirrorBlockJob *s)
 {
     int64_t offset;
-    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
+    BlockDriverState *bs;
     BlockDriverState *target_bs = blk_bs(s->target);
     int ret;
     int64_t count;
 
+    bdrv_graph_co_rdlock();
+    bs = s->mirror_top_bs->backing->bs;
+    bdrv_graph_co_rdunlock();
+
     if (s->zero_target) {
         if (!bdrv_can_write_zeroes_with_unmap(target_bs)) {
             bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, s->bdev_length);
@@ -912,7 +930,7 @@ static int coroutine_fn mirror_flush(MirrorBlockJob *s)
 static int coroutine_fn mirror_run(Job *job, Error **errp)
 {
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
-    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
+    BlockDriverState *bs;
     MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque;
     BlockDriverState *target_bs = blk_bs(s->target);
     bool need_drain = true;
@@ -924,6 +942,10 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
        checking for a NULL string */
     int ret = 0;
 
+    bdrv_graph_co_rdlock();
+    bs = bdrv_filter_bs(s->mirror_top_bs);
+    bdrv_graph_co_rdunlock();
+
     if (job_is_cancelled(&s->common.job)) {
         goto immediate_exit;
     }
@@ -962,7 +984,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     if (s->bdev_length == 0) {
         /* Transition to the READY state and wait for complete. */
         job_transition_to_ready(&s->common.job);
-        s->actively_synced = true;
+        qatomic_set(&s->actively_synced, true);
         while (!job_cancel_requested(&s->common.job) && !s->should_complete) {
             job_yield(&s->common.job);
         }
@@ -984,13 +1006,13 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
     } else {
         s->target_cluster_size = BDRV_SECTOR_SIZE;
     }
-    bdrv_graph_co_rdunlock();
     if (backing_filename[0] && !bdrv_backing_chain_next(target_bs) &&
         s->granularity < s->target_cluster_size) {
         s->buf_size = MAX(s->buf_size, s->target_cluster_size);
         s->cow_bitmap = bitmap_new(length);
     }
     s->max_iov = MIN(bs->bl.max_iov, target_bs->bl.max_iov);
+    bdrv_graph_co_rdunlock();
 
     s->buf = qemu_try_blockalign(bs, s->buf_size);
     if (s->buf == NULL) {
@@ -1056,7 +1078,9 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
             mirror_wait_for_free_in_flight_slot(s);
             continue;
         } else if (cnt != 0) {
+            bdrv_graph_co_rdlock();
             mirror_iteration(s);
+            bdrv_graph_co_rdunlock();
         }
     }
 
@@ -1074,9 +1098,9 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
              * the target in a consistent state.
              */
             job_transition_to_ready(&s->common.job);
-            if (s->copy_mode != MIRROR_COPY_MODE_BACKGROUND) {
-                s->actively_synced = true;
-            }
+        }
+        if (qatomic_read(&s->copy_mode) != MIRROR_COPY_MODE_BACKGROUND) {
+            qatomic_set(&s->actively_synced, true);
         }
 
         should_complete = s->should_complete ||
@@ -1246,6 +1270,48 @@ static bool commit_active_cancel(Job *job, bool force)
     return force || !job_is_ready(job);
 }
 
+static void mirror_change(BlockJob *job, BlockJobChangeOptions *opts,
+                          Error **errp)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+    BlockJobChangeOptionsMirror *change_opts = &opts->u.mirror;
+    MirrorCopyMode current;
+
+    /*
+     * The implementation relies on the fact that copy_mode is only written
+     * under the BQL. Otherwise, further synchronization would be required.
+     */
+
+    GLOBAL_STATE_CODE();
+
+    if (qatomic_read(&s->copy_mode) == change_opts->copy_mode) {
+        return;
+    }
+
+    if (change_opts->copy_mode != MIRROR_COPY_MODE_WRITE_BLOCKING) {
+        error_setg(errp, "Change to copy mode '%s' is not implemented",
+                   MirrorCopyMode_str(change_opts->copy_mode));
+        return;
+    }
+
+    current = qatomic_cmpxchg(&s->copy_mode, MIRROR_COPY_MODE_BACKGROUND,
+                              change_opts->copy_mode);
+    if (current != MIRROR_COPY_MODE_BACKGROUND) {
+        error_setg(errp, "Expected current copy mode '%s', got '%s'",
+                   MirrorCopyMode_str(MIRROR_COPY_MODE_BACKGROUND),
+                   MirrorCopyMode_str(current));
+    }
+}
+
+static void mirror_query(BlockJob *job, BlockJobInfo *info)
+{
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
+
+    info->u.mirror = (BlockJobInfoMirror) {
+        .actively_synced = qatomic_read(&s->actively_synced),
+    };
+}
+
 static const BlockJobDriver mirror_job_driver = {
     .job_driver = {
         .instance_size = sizeof(MirrorBlockJob),
@@ -1260,6 +1326,8 @@ static const BlockJobDriver mirror_job_driver = {
         .cancel = mirror_cancel,
     },
     .drained_poll = mirror_drained_poll,
+    .change = mirror_change,
+    .query = mirror_query,
 };
 
 static const BlockJobDriver commit_active_job_driver = {
@@ -1378,7 +1446,7 @@ do_sync_target_write(MirrorBlockJob *job, MirrorMethod method,
                 bitmap_end = QEMU_ALIGN_UP(offset + bytes, job->granularity);
                 bdrv_set_dirty_bitmap(job->dirty_bitmap, bitmap_offset,
                                       bitmap_end - bitmap_offset);
-                job->actively_synced = false;
+                qatomic_set(&job->actively_synced, false);
 
                 action = mirror_error_action(job, false, -ret);
                 if (action == BLOCK_ERROR_ACTION_REPORT) {
@@ -1437,7 +1505,8 @@ static void coroutine_fn GRAPH_RDLOCK active_write_settle(MirrorOp *op)
     uint64_t end_chunk = DIV_ROUND_UP(op->offset + op->bytes,
                                       op->s->granularity);
 
-    if (!--op->s->in_active_write_counter && op->s->actively_synced) {
+    if (!--op->s->in_active_write_counter &&
+        qatomic_read(&op->s->actively_synced)) {
         BdrvChild *source = op->s->mirror_top_bs->backing;
 
         if (QLIST_FIRST(&source->bs->parents) == source &&
@@ -1463,21 +1532,21 @@ bdrv_mirror_top_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
     return bdrv_co_preadv(bs->backing, offset, bytes, qiov, flags);
 }
 
+static bool should_copy_to_target(MirrorBDSOpaque *s)
+{
+    return s->job && s->job->ret >= 0 &&
+           !job_is_cancelled(&s->job->common.job) &&
+           qatomic_read(&s->job->copy_mode) == MIRROR_COPY_MODE_WRITE_BLOCKING;
+}
+
 static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
-                         uint64_t offset, uint64_t bytes, QEMUIOVector *qiov,
-                         int flags)
+                         bool copy_to_target, uint64_t offset, uint64_t bytes,
+                         QEMUIOVector *qiov, int flags)
 {
     MirrorOp *op = NULL;
     MirrorBDSOpaque *s = bs->opaque;
     int ret = 0;
-    bool copy_to_target = false;
-
-    if (s->job) {
-        copy_to_target = s->job->ret >= 0 &&
-                         !job_is_cancelled(&s->job->common.job) &&
-                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
-    }
 
     if (copy_to_target) {
         op = active_write_prepare(s->job, offset, bytes);
@@ -1500,6 +1569,11 @@ bdrv_mirror_top_do_write(BlockDriverState *bs, MirrorMethod method,
         abort();
     }
 
+    if (!copy_to_target && s->job && s->job->dirty_bitmap) {
+        qatomic_set(&s->job->actively_synced, false);
+        bdrv_set_dirty_bitmap(s->job->dirty_bitmap, offset, bytes);
+    }
+
     if (ret < 0) {
         goto out;
     }
@@ -1519,17 +1593,10 @@ static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
                         QEMUIOVector *qiov, BdrvRequestFlags flags)
 {
-    MirrorBDSOpaque *s = bs->opaque;
     QEMUIOVector bounce_qiov;
     void *bounce_buf;
     int ret = 0;
-    bool copy_to_target = false;
-
-    if (s->job) {
-        copy_to_target = s->job->ret >= 0 &&
-                         !job_is_cancelled(&s->job->common.job) &&
-                         s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;
-    }
+    bool copy_to_target = should_copy_to_target(bs->opaque);
 
     if (copy_to_target) {
         /* The guest might concurrently modify the data to write; but
@@ -1546,8 +1613,8 @@ bdrv_mirror_top_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
         flags &= ~BDRV_REQ_REGISTERED_BUF;
     }
 
-    ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, offset, bytes, qiov,
-                                   flags);
+    ret = bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, copy_to_target,
+                                   offset, bytes, qiov, flags);
 
     if (copy_to_target) {
         qemu_iovec_destroy(&bounce_qiov);
@@ -1570,18 +1637,20 @@ static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
                               int64_t bytes, BdrvRequestFlags flags)
 {
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, offset, bytes, NULL,
-                                    flags);
+    bool copy_to_target = should_copy_to_target(bs->opaque);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_ZERO, copy_to_target,
+                                    offset, bytes, NULL, flags);
 }
 
 static int coroutine_fn GRAPH_RDLOCK
 bdrv_mirror_top_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
 {
-    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, offset, bytes,
-                                    NULL, 0);
+    bool copy_to_target = should_copy_to_target(bs->opaque);
+    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_DISCARD, copy_to_target,
+                                    offset, bytes, NULL, 0);
 }
 
-static void bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
+static void GRAPH_RDLOCK bdrv_mirror_top_refresh_filename(BlockDriverState *bs)
 {
     if (bs->backing == NULL) {
         /* we can be here after failed bdrv_attach_child in
@@ -1691,12 +1760,15 @@ static BlockJob *mirror_start_job(
         buf_size = DEFAULT_MIRROR_BUF_SIZE;
     }
 
+    bdrv_graph_rdlock_main_loop();
     if (bdrv_skip_filters(bs) == bdrv_skip_filters(target)) {
         error_setg(errp, "Can't mirror node into itself");
+        bdrv_graph_rdunlock_main_loop();
         return NULL;
     }
 
     target_is_backing = bdrv_chain_contains(bs, target);
+    bdrv_graph_rdunlock_main_loop();
 
     /* In the case of active commit, add dummy driver to provide consistent
      * reads on the top, while disabling it in the intermediate nodes, and make
@@ -1779,14 +1851,19 @@ static BlockJob *mirror_start_job(
         }
 
         target_shared_perms |= BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE;
-    } else if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
-        /*
-         * We may want to allow this in the future, but it would
-         * require taking some extra care.
-         */
-        error_setg(errp, "Cannot mirror to a filter on top of a node in the "
-                   "source's backing chain");
-        goto fail;
+    } else {
+        bdrv_graph_rdlock_main_loop();
+        if (bdrv_chain_contains(bs, bdrv_skip_filters(target))) {
+            /*
+             * We may want to allow this in the future, but it would
+             * require taking some extra care.
+             */
+            error_setg(errp, "Cannot mirror to a filter on top of a node in "
+                       "the source's backing chain");
+            bdrv_graph_rdunlock_main_loop();
+            goto fail;
+        }
+        bdrv_graph_rdunlock_main_loop();
     }
 
     s->target = blk_new(s->common.job.aio_context,
@@ -1807,13 +1884,14 @@ static BlockJob *mirror_start_job(
     blk_set_allow_aio_context_change(s->target, true);
     blk_set_disable_request_queuing(s->target, true);
 
+    bdrv_graph_rdlock_main_loop();
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
     s->on_target_error = on_target_error;
     s->is_none_mode = is_none_mode;
     s->backing_mode = backing_mode;
     s->zero_target = zero_target;
-    s->copy_mode = copy_mode;
+    qatomic_set(&s->copy_mode, copy_mode);
     s->base = base;
     s->base_overlay = bdrv_find_overlay(bs, base);
     s->granularity = granularity;
@@ -1822,20 +1900,27 @@ static BlockJob *mirror_start_job(
     if (auto_complete) {
         s->should_complete = true;
     }
+    bdrv_graph_rdunlock_main_loop();
 
-    s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, NULL, errp);
+    s->dirty_bitmap = bdrv_create_dirty_bitmap(s->mirror_top_bs, granularity,
+                                               NULL, errp);
     if (!s->dirty_bitmap) {
         goto fail;
    }
-    if (s->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING) {
-        bdrv_disable_dirty_bitmap(s->dirty_bitmap);
-    }
 
+    /*
+     * The dirty bitmap is set by bdrv_mirror_top_do_write() when not in active
+     * mode.
+     */
+    bdrv_disable_dirty_bitmap(s->dirty_bitmap);
+
+    bdrv_graph_wrlock(bs);
     ret = block_job_add_bdrv(&s->common, "source", bs, 0,
                              BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE |
                              BLK_PERM_CONSISTENT_READ,
                              errp);
     if (ret < 0) {
+        bdrv_graph_wrunlock();
         goto fail;
     }
 
@@ -1880,14 +1965,17 @@ static BlockJob *mirror_start_job(
         ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
                                  iter_shared_perms, errp);
         if (ret < 0) {
+            bdrv_graph_wrunlock();
            goto fail;
         }
     }
 
     if (bdrv_freeze_backing_chain(mirror_top_bs, target, errp) < 0) {
+        bdrv_graph_wrunlock();
         goto fail;
     }
     }
+    bdrv_graph_wrunlock();
 
     QTAILQ_INIT(&s->ops_in_flight);
 
@@ -1912,11 +2000,14 @@ fail:
     }
 
     bs_opaque->stop = true;
-    bdrv_graph_rdlock_main_loop();
+    bdrv_drained_begin(bs);
+    bdrv_graph_wrlock(bs);
+    assert(mirror_top_bs->backing->bs == bs);
     bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
                              &error_abort);
-    bdrv_graph_rdunlock_main_loop();
-    bdrv_replace_node(mirror_top_bs, mirror_top_bs->backing->bs, &error_abort);
+    bdrv_replace_node(mirror_top_bs, bs, &error_abort);
+    bdrv_graph_wrunlock();
+    bdrv_drained_end(bs);
 
     bdrv_unref(mirror_top_bs);
 
@@ -1945,8 +2036,12 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
                    MirrorSyncMode_str(mode));
         return;
     }
 
+    bdrv_graph_rdlock_main_loop();
     is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ? bdrv_backing_chain_next(bs) : NULL;
+    bdrv_graph_rdunlock_main_loop();
 
     mirror_start_job(job_id, bs, creation_flags, target, replaces,
                      speed, granularity, buf_size, backing_mode, zero_target,
                      on_source_error, on_target_error, unmap, NULL, NULL,
@@ -206,6 +206,9 @@ void hmp_commit(Monitor *mon, const QDict *qdict)
     BlockBackend *blk;
     int ret;
 
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if (!strcmp(device, "all")) {
         ret = blk_commit_all();
     } else {
@@ -846,7 +849,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
     }
 
     while (list) {
-        if (strcmp(list->value->type, "stream") == 0) {
+        if (list->value->type == JOB_TYPE_STREAM) {
             monitor_printf(mon, "Streaming device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
@@ -858,7 +861,7 @@ void hmp_info_block_jobs(Monitor *mon, const QDict *qdict)
             monitor_printf(mon, "Type %s, device %s: Completed %" PRId64
                            " of %" PRId64 " bytes, speed limit %" PRId64
                            " bytes/s\n",
-                           list->value->type,
+                           JobType_str(list->value->type),
                            list->value->device,
                            list->value->offset,
                            list->value->len,
block/nvme.c | 12
@@ -16,6 +16,7 @@
 #include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
+#include "qemu/defer-call.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
@@ -416,9 +417,10 @@ static bool nvme_process_completion(NVMeQueuePair *q)
             q->cq_phase = !q->cq_phase;
         }
         cid = le16_to_cpu(c->cid);
-        if (cid == 0 || cid > NVME_QUEUE_SIZE) {
-            warn_report("NVMe: Unexpected CID in completion queue: %"PRIu32", "
-                        "queue size: %u", cid, NVME_QUEUE_SIZE);
+        if (cid == 0 || cid > NVME_NUM_REQS) {
+            warn_report("NVMe: Unexpected CID in completion queue: %" PRIu32
+                        ", should be within: 1..%u inclusively", cid,
+                        NVME_NUM_REQS);
             continue;
         }
         trace_nvme_complete_command(s, q->index, cid);
@@ -476,7 +478,7 @@ static void nvme_trace_command(const NvmeCmd *cmd)
     }
 }
 
-static void nvme_unplug_fn(void *opaque)
+static void nvme_deferred_fn(void *opaque)
 {
     NVMeQueuePair *q = opaque;
 
@@ -503,7 +505,7 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
     q->need_kick++;
     qemu_mutex_unlock(&q->lock);
 
-    blk_io_plug_call(nvme_unplug_fn, q);
+    defer_call(nvme_deferred_fn, q);
 }
 
 static void nvme_admin_cmd_sync_cb(void *opaque, int ret)
@@ -59,11 +59,10 @@ typedef struct ParallelsDirtyBitmapFeature {
 } QEMU_PACKED ParallelsDirtyBitmapFeature;
 
 /* Given L1 table read bitmap data from the image and populate @bitmap */
-static int parallels_load_bitmap_data(BlockDriverState *bs,
-                                      const uint64_t *l1_table,
-                                      uint32_t l1_size,
-                                      BdrvDirtyBitmap *bitmap,
-                                      Error **errp)
+static int GRAPH_RDLOCK
+parallels_load_bitmap_data(BlockDriverState *bs, const uint64_t *l1_table,
+                           uint32_t l1_size, BdrvDirtyBitmap *bitmap,
+                           Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret = 0;
@@ -120,17 +119,16 @@ finish:
  * @data buffer (of @data_size size) is the Dirty bitmaps feature which
  * consists of ParallelsDirtyBitmapFeature followed by L1 table.
  */
-static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
-                                              uint8_t *data,
-                                              size_t data_size,
-                                              Error **errp)
+static BdrvDirtyBitmap * GRAPH_RDLOCK
+parallels_load_bitmap(BlockDriverState *bs, uint8_t *data, size_t data_size,
+                      Error **errp)
 {
     int ret;
     ParallelsDirtyBitmapFeature bf;
     g_autofree uint64_t *l1_table = NULL;
     BdrvDirtyBitmap *bitmap;
     QemuUUID uuid;
-    char uuidstr[UUID_FMT_LEN + 1];
+    char uuidstr[UUID_STR_LEN];
     int i;
 
     if (data_size < sizeof(bf)) {
@@ -183,8 +181,9 @@ static BdrvDirtyBitmap *parallels_load_bitmap(BlockDriverState *bs,
     return bitmap;
 }
 
-static int parallels_parse_format_extension(BlockDriverState *bs,
-                                            uint8_t *ext_cluster, Error **errp)
+static int GRAPH_RDLOCK
+parallels_parse_format_extension(BlockDriverState *bs, uint8_t *ext_cluster,
+                                 Error **errp)
 {
     BDRVParallelsState *s = bs->opaque;
     int ret;
@@ -200,7 +200,7 @@ static int mark_used(BlockDriverState *bs, unsigned long *bitmap,
  * bitmap anyway, as much as we can. This information will be used for
 * error resolution.
 */
-static int parallels_fill_used_bitmap(BlockDriverState *bs)
+static int GRAPH_RDLOCK parallels_fill_used_bitmap(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     int64_t payload_bytes;
@@ -415,14 +415,10 @@ parallels_co_flush_to_os(BlockDriverState *bs)
     return 0;
 }
 
-static int coroutine_fn parallels_co_block_status(BlockDriverState *bs,
-                                                  bool want_zero,
-                                                  int64_t offset,
-                                                  int64_t bytes,
-                                                  int64_t *pnum,
-                                                  int64_t *map,
-                                                  BlockDriverState **file)
+static int coroutine_fn GRAPH_RDLOCK
+parallels_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
+                          int64_t bytes, int64_t *pnum, int64_t *map,
+                          BlockDriverState **file)
 {
     BDRVParallelsState *s = bs->opaque;
     int count;
@@ -1189,7 +1185,7 @@ static int parallels_probe(const uint8_t *buf, int buf_size,
     return 0;
 }
 
-static int parallels_update_header(BlockDriverState *bs)
+static int GRAPH_RDLOCK parallels_update_header(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
     unsigned size = MAX(bdrv_opt_mem_align(bs->file->bs),
@@ -1259,6 +1255,8 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     file_nb_sectors = bdrv_nb_sectors(bs->file->bs);
     if (file_nb_sectors < 0) {
         return -EINVAL;
@@ -1363,13 +1361,11 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         bitmap_new(DIV_ROUND_UP(s->header_size, s->bat_dirty_block));
 
     /* Disable migration until bdrv_activate method is added */
-    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The Parallels format used by node '%s' "
                "does not support live migration",
               bdrv_get_device_or_node_name(bs));
-    bdrv_graph_rdunlock_main_loop();
 
-    ret = migrate_add_blocker(&s->migration_blocker, errp);
+    ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
         goto fail;
     }
@@ -1432,6 +1428,8 @@ static void parallels_close(BlockDriverState *bs)
 {
     BDRVParallelsState *s = bs->opaque;
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if ((bs->open_flags & BDRV_O_RDWR) && !(bs->open_flags & BDRV_O_INACTIVE)) {
         s->header->inuse = 0;
         parallels_update_header(bs);
@@ -90,7 +90,8 @@ typedef struct BDRVParallelsState {
     Error *migration_blocker;
 } BDRVParallelsState;
 
-int parallels_read_format_extension(BlockDriverState *bs,
-                                    int64_t ext_off, Error **errp);
+int GRAPH_RDLOCK
+parallels_read_format_extension(BlockDriverState *bs, int64_t ext_off,
+                                Error **errp);
 
 #endif
block/plug.c | 159
@@ -1,159 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Block I/O plugging
- *
- * Copyright Red Hat.
- *
- * This API defers a function call within a blk_io_plug()/blk_io_unplug()
- * section, allowing multiple calls to batch up. This is a performance
- * optimization that is used in the block layer to submit several I/O requests
- * at once instead of individually:
- *
- *   blk_io_plug(); <-- start of plugged region
- *   ...
- *   blk_io_plug_call(my_func, my_obj); <-- deferred my_func(my_obj) call
- *   blk_io_plug_call(my_func, my_obj); <-- another
- *   blk_io_plug_call(my_func, my_obj); <-- another
- *   ...
- *   blk_io_unplug(); <-- end of plugged region, my_func(my_obj) is called once
- *
- * This code is actually generic and not tied to the block layer. If another
- * subsystem needs this functionality, it could be renamed.
- */
-
-#include "qemu/osdep.h"
-#include "qemu/coroutine-tls.h"
-#include "qemu/notify.h"
-#include "qemu/thread.h"
-#include "sysemu/block-backend.h"
-
-/* A function call that has been deferred until unplug() */
-typedef struct {
-    void (*fn)(void *);
-    void *opaque;
-} UnplugFn;
-
-/* Per-thread state */
-typedef struct {
-    unsigned count;       /* how many times has plug() been called? */
-    GArray *unplug_fns;   /* functions to call at unplug time */
-} Plug;
-
-/* Use get_ptr_plug() to fetch this thread-local value */
-QEMU_DEFINE_STATIC_CO_TLS(Plug, plug);
-
-/* Called at thread cleanup time */
-static void blk_io_plug_atexit(Notifier *n, void *value)
-{
-    Plug *plug = get_ptr_plug();
-    g_array_free(plug->unplug_fns, TRUE);
-}
-
-/* This won't involve coroutines, so use __thread */
-static __thread Notifier blk_io_plug_atexit_notifier;
-
-/**
- * blk_io_plug_call:
- * @fn: a function pointer to be invoked
- * @opaque: a user-defined argument to @fn()
- *
- * Call @fn(@opaque) immediately if not within a blk_io_plug()/blk_io_unplug()
- * section.
- *
- * Otherwise defer the call until the end of the outermost
- * blk_io_plug()/blk_io_unplug() section in this thread. If the same
- * @fn/@opaque pair has already been deferred, it will only be called once upon
- * blk_io_unplug() so that accumulated calls are batched into a single call.
- *
- * The caller must ensure that @opaque is not freed before @fn() is invoked.
- */
-void blk_io_plug_call(void (*fn)(void *), void *opaque)
-{
-    Plug *plug = get_ptr_plug();
-
-    /* Call immediately if we're not plugged */
-    if (plug->count == 0) {
-        fn(opaque);
-        return;
-    }
-
-    GArray *array = plug->unplug_fns;
-    if (!array) {
-        array = g_array_new(FALSE, FALSE, sizeof(UnplugFn));
-        plug->unplug_fns = array;
-        blk_io_plug_atexit_notifier.notify = blk_io_plug_atexit;
-        qemu_thread_atexit_add(&blk_io_plug_atexit_notifier);
-    }
-
-    UnplugFn *fns = (UnplugFn *)array->data;
-    UnplugFn new_fn = {
-        .fn = fn,
-        .opaque = opaque,
-    };
-
-    /*
-     * There won't be many, so do a linear search. If this becomes a bottleneck
-     * then a binary search (glib 2.62+) or different data structure could be
-     * used.
-     */
-    for (guint i = 0; i < array->len; i++) {
-        if (memcmp(&fns[i], &new_fn, sizeof(new_fn)) == 0) {
-            return; /* already exists */
-        }
-    }
-
-    g_array_append_val(array, new_fn);
-}
-
-/**
- * blk_io_plug: Defer blk_io_plug_call() functions until blk_io_unplug()
- *
- * blk_io_plug/unplug are thread-local operations. This means that multiple
- * threads can simultaneously call plug/unplug, but the caller must ensure that
- * each unplug() is called in the same thread of the matching plug().
- *
- * Nesting is supported. blk_io_plug_call() functions are only called at the
- * outermost blk_io_unplug().
- */
-void blk_io_plug(void)
-{
-    Plug *plug = get_ptr_plug();
-
-    assert(plug->count < UINT32_MAX);
-
-    plug->count++;
-}
-
-/**
- * blk_io_unplug: Run any pending blk_io_plug_call() functions
- *
- * There must have been a matching blk_io_plug() call in the same thread prior
- * to this blk_io_unplug() call.
- */
-void blk_io_unplug(void)
-{
-    Plug *plug = get_ptr_plug();
-
-    assert(plug->count > 0);
-
-    if (--plug->count > 0) {
-        return;
-    }
-
-    GArray *array = plug->unplug_fns;
-    if (!array) {
-        return;
-    }
-
-    UnplugFn *fns = (UnplugFn *)array->data;
-
-    for (guint i = 0; i < array->len; i++) {
-        fns[i].fn(fns[i].opaque);
-    }
-
-    /*
-     * This resets the array without freeing memory so that appending is cheap
-     * in the future.
-     */
-    g_array_set_size(array, 0);
-}
@@ -143,6 +143,8 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
     BDRVPreallocateState *s = bs->opaque;
     int ret;
 
+    GLOBAL_STATE_CODE();
+
     /*
      * s->data_end and friends should be initialized on permission update.
      * For this to work, mark them invalid.
@@ -155,6 +157,8 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if (!preallocate_absorb_opts(&s->opts, options, bs->file->bs, errp)) {
         return -EINVAL;
     }
@@ -169,7 +173,8 @@ static int preallocate_open(BlockDriverState *bs, QDict *options, int flags,
     return 0;
 }
 
-static int preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
+static int GRAPH_RDLOCK
+preallocate_truncate_to_real_size(BlockDriverState *bs, Error **errp)
 {
     BDRVPreallocateState *s = bs->opaque;
     int ret;
@@ -200,6 +205,9 @@ static void preallocate_close(BlockDriverState *bs)
 {
     BDRVPreallocateState *s = bs->opaque;
 
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     qemu_bh_cancel(s->drop_resize_bh);
     qemu_bh_delete(s->drop_resize_bh);
 
@@ -223,6 +231,9 @@ static int preallocate_reopen_prepare(BDRVReopenState *reopen_state,
     PreallocateOpts *opts = g_new0(PreallocateOpts, 1);
     int ret;
 
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if (!preallocate_absorb_opts(opts, reopen_state->options,
                                  reopen_state->bs->file->bs, errp)) {
         g_free(opts);
@@ -283,7 +294,7 @@ static bool can_write_resize(uint64_t perm)
     return (perm & BLK_PERM_WRITE) && (perm & BLK_PERM_RESIZE);
 }
 
-static bool has_prealloc_perms(BlockDriverState *bs)
+static bool GRAPH_RDLOCK has_prealloc_perms(BlockDriverState *bs)
 {
     BDRVPreallocateState *s = bs->opaque;
 
@@ -499,7 +510,8 @@ preallocate_co_getlength(BlockDriverState *bs)
     return ret;
 }
 
-static int preallocate_drop_resize(BlockDriverState *bs, Error **errp)
+static int GRAPH_RDLOCK
+preallocate_drop_resize(BlockDriverState *bs, Error **errp)
 {
     BDRVPreallocateState *s = bs->opaque;
     int ret;
@@ -525,15 +537,16 @@ static int preallocate_drop_resize(BlockDriverState *bs, Error **errp)
      */
     s->data_end = s->file_end = s->zero_start = -EINVAL;
 
-    bdrv_graph_rdlock_main_loop();
     bdrv_child_refresh_perms(bs, bs->file, NULL);
-    bdrv_graph_rdunlock_main_loop();
 
     return 0;
 }
 
 static void preallocate_drop_resize_bh(void *opaque)
 {
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     /*
      * In case of errors, we'll simply keep the exclusive lock on the image
      * indefinitely.
@@ -541,8 +554,8 @@ static void preallocate_drop_resize_bh(void *opaque)
     preallocate_drop_resize(opaque, NULL);
 }
 
-static void preallocate_set_perm(BlockDriverState *bs,
-                                 uint64_t perm, uint64_t shared)
+static void GRAPH_RDLOCK
+preallocate_set_perm(BlockDriverState *bs, uint64_t perm, uint64_t shared)
 {
     BDRVPreallocateState *s = bs->opaque;
 
@@ -237,6 +237,7 @@ static void qmp_blockdev_insert_anon_medium(BlockBackend *blk,
                                             BlockDriverState *bs, Error **errp)
 {
     Error *local_err = NULL;
+    AioContext *ctx;
     bool has_device;
     int ret;
 
@@ -258,7 +259,11 @@ static void qmp_blockdev_insert_anon_medium(BlockBackend *blk,
         return;
     }
 
+    ctx = bdrv_get_aio_context(bs);
+    aio_context_acquire(ctx);
     ret = blk_insert_bs(blk, bs, errp);
+    aio_context_release(ctx);
 
     if (ret < 0) {
         return;
     }
block/qcow.c | 15
@@ -124,9 +124,11 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
 
     ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
     if (ret < 0) {
-        goto fail;
+        goto fail_unlocked;
     }
 
+    bdrv_graph_rdlock_main_loop();
+
     ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
     if (ret < 0) {
         goto fail;
@@ -301,13 +303,11 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
     }
 
     /* Disable migration when qcow images are used */
-    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The qcow format used by node '%s' "
                "does not support live migration",
               bdrv_get_device_or_node_name(bs));
-    bdrv_graph_rdunlock_main_loop();
 
-    ret = migrate_add_blocker(&s->migration_blocker, errp);
+    ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
         goto fail;
     }
@@ -315,9 +315,12 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
     qobject_unref(encryptopts);
     qapi_free_QCryptoBlockOpenOptions(crypto_opts);
     qemu_co_mutex_init(&s->lock);
+    bdrv_graph_rdunlock_main_loop();
     return 0;
 
 fail:
+    bdrv_graph_rdunlock_main_loop();
+fail_unlocked:
     g_free(s->l1_table);
     qemu_vfree(s->l2_cache);
     g_free(s->cluster_cache);
@@ -1024,7 +1027,7 @@ fail:
     return ret;
 }
 
-static int qcow_make_empty(BlockDriverState *bs)
+static int GRAPH_RDLOCK qcow_make_empty(BlockDriverState *bs)
 {
     BDRVQcowState *s = bs->opaque;
     uint32_t l1_length = s->l1_size * sizeof(uint64_t);
@@ -105,7 +105,7 @@ static inline bool can_write(BlockDriverState *bs)
     return !bdrv_is_read_only(bs) && !(bdrv_get_flags(bs) & BDRV_O_INACTIVE);
 }
 
-static int update_header_sync(BlockDriverState *bs)
+static int GRAPH_RDLOCK update_header_sync(BlockDriverState *bs)
 {
     int ret;
 
@@ -221,8 +221,9 @@ clear_bitmap_table(BlockDriverState *bs, uint64_t *bitmap_table,
     }
 }
 
-static int bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
-                             uint64_t **bitmap_table)
+static int GRAPH_RDLOCK
+bitmap_table_load(BlockDriverState *bs, Qcow2BitmapTable *tb,
+                  uint64_t **bitmap_table)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
@@ -551,8 +552,9 @@ static uint32_t bitmap_list_count(Qcow2BitmapList *bm_list)
  * Get bitmap list from qcow2 image. Actually reads bitmap directory,
  * checks it and convert to bitmap list.
  */
-static Qcow2BitmapList *bitmap_list_load(BlockDriverState *bs, uint64_t offset,
-                                         uint64_t size, Error **errp)
+static Qcow2BitmapList * GRAPH_RDLOCK
+bitmap_list_load(BlockDriverState *bs, uint64_t offset, uint64_t size,
+                 Error **errp)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
@@ -961,7 +963,7 @@ static void set_readonly_helper(gpointer bitmap, gpointer value)
  * If header_updated is not NULL then it is set appropriately regardless of
  * the return value.
 */
-bool coroutine_fn GRAPH_RDLOCK
+bool coroutine_fn
 qcow2_load_dirty_bitmaps(BlockDriverState *bs,
                          bool *header_updated, Error **errp)
 {
@@ -391,11 +391,10 @@ fail:
  * If the L2 entry is invalid return -errno and set @type to
  * QCOW2_SUBCLUSTER_INVALID.
 */
-static int qcow2_get_subcluster_range_type(BlockDriverState *bs,
-                                           uint64_t l2_entry,
-                                           uint64_t l2_bitmap,
-                                           unsigned sc_from,
-                                           QCow2SubclusterType *type)
+static int GRAPH_RDLOCK
+qcow2_get_subcluster_range_type(BlockDriverState *bs, uint64_t l2_entry,
+                                uint64_t l2_bitmap, unsigned sc_from,
+                                QCow2SubclusterType *type)
 {
     BDRVQcow2State *s = bs->opaque;
     uint32_t val;
@@ -442,9 +441,10 @@ static int qcow2_get_subcluster_range_type(BlockDriverState *bs,
  * On failure return -errno and update @l2_index to point to the
  * invalid entry.
 */
-static int count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
-                                        unsigned sc_index, uint64_t *l2_slice,
-                                        unsigned *l2_index)
+static int GRAPH_RDLOCK
+count_contiguous_subclusters(BlockDriverState *bs, int nb_clusters,
+                             unsigned sc_index, uint64_t *l2_slice,
+                             unsigned *l2_index)
 {
     BDRVQcow2State *s = bs->opaque;
     int i, count = 0;
@@ -1329,7 +1329,8 @@ calculate_l2_meta(BlockDriverState *bs, uint64_t host_cluster_offset,
  * requires a new allocation (that is, if the cluster is unallocated
  * or has refcount > 1 and therefore cannot be written in-place).
 */
-static bool cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
+static bool GRAPH_RDLOCK
+cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
 {
     switch (qcow2_get_cluster_type(bs, l2_entry)) {
     case QCOW2_CLUSTER_NORMAL:
@@ -1360,9 +1361,9 @@ static bool cluster_needs_new_alloc(BlockDriverState *bs, uint64_t l2_entry)
  * allocated and can be overwritten in-place (this includes clusters
  * of type QCOW2_CLUSTER_ZERO_ALLOC).
 */
-static int count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
-                                       uint64_t *l2_slice, int l2_index,
-                                       bool new_alloc)
+static int GRAPH_RDLOCK
+count_single_write_clusters(BlockDriverState *bs, int nb_clusters,
+                            uint64_t *l2_slice, int l2_index, bool new_alloc)
 {
     BDRVQcow2State *s = bs->opaque;
     uint64_t l2_entry = get_l2_entry(s, l2_slice, l2_index);
@@ -1983,7 +1984,7 @@ discard_in_l2_slice(BlockDriverState *bs, uint64_t offset, uint64_t nb_clusters,
                 /* If we keep the reference, pass on the discard still */
                 bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
                               s->cluster_size);
            }
        }
 
    qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
@@ -2061,9 +2062,15 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
         QCow2ClusterType type = qcow2_get_cluster_type(bs, old_l2_entry);
         bool unmap = (type == QCOW2_CLUSTER_COMPRESSED) ||
             ((flags & BDRV_REQ_MAY_UNMAP) && qcow2_cluster_is_allocated(type));
-        uint64_t new_l2_entry = unmap ? 0 : old_l2_entry;
+        bool keep_reference =
+            (s->discard_no_unref && type != QCOW2_CLUSTER_COMPRESSED);
+        uint64_t new_l2_entry = old_l2_entry;
         uint64_t new_l2_bitmap = old_l2_bitmap;
 
+        if (unmap && !keep_reference) {
+            new_l2_entry = 0;
+        }
+
         if (has_subclusters(s)) {
             new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES;
         } else {
@@ -2081,9 +2088,17 @@ zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
             set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
         }
 
-        /* Then decrease the refcount */
         if (unmap) {
-            qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST);
+            if (!keep_reference) {
+                /* Then decrease the refcount */
+                qcow2_free_any_cluster(bs, old_l2_entry, QCOW2_DISCARD_REQUEST);
+            } else if (s->discard_passthrough[QCOW2_DISCARD_REQUEST] &&
+                       (type == QCOW2_CLUSTER_NORMAL ||
+                        type == QCOW2_CLUSTER_ZERO_ALLOC)) {
+                /* If we keep the reference, pass on the discard still */
+                bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
+                              s->cluster_size);
+            }
         }
     }
 
block/qcow2.c | 132
@@ -95,9 +95,10 @@ static int qcow2_probe(const uint8_t *buf, int buf_size, const char *filename)
 }
 
 
-static int qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
-                                      uint8_t *buf, size_t buflen,
-                                      void *opaque, Error **errp)
+static int GRAPH_RDLOCK
+qcow2_crypto_hdr_read_func(QCryptoBlock *block, size_t offset,
+                           uint8_t *buf, size_t buflen,
+                           void *opaque, Error **errp)
 {
     BlockDriverState *bs = opaque;
     BDRVQcow2State *s = bs->opaque;
@@ -156,7 +157,7 @@ qcow2_crypto_hdr_init_func(QCryptoBlock *block, size_t headerlen, void *opaque,
 
 
 /* The graph lock must be held when called in coroutine context */
-static int coroutine_mixed_fn
+static int coroutine_mixed_fn GRAPH_RDLOCK
 qcow2_crypto_hdr_write_func(QCryptoBlock *block, size_t offset,
                             const uint8_t *buf, size_t buflen,
                             void *opaque, Error **errp)
@@ -2029,6 +2030,8 @@ static void qcow2_reopen_commit(BDRVReopenState *state)
 {
     BDRVQcow2State *s = state->bs->opaque;
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     qcow2_update_options_commit(state->bs, state->opaque);
     if (!s->data_file) {
         /*
@@ -2064,6 +2067,8 @@ static void qcow2_reopen_abort(BDRVReopenState *state)
 {
     BDRVQcow2State *s = state->bs->opaque;
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     if (!s->data_file) {
         /*
          * If we don't have an external data file, s->data_file was cleared by
@@ -3155,8 +3160,9 @@ fail:
     return ret;
 }
 
-static int qcow2_change_backing_file(BlockDriverState *bs,
-    const char *backing_file, const char *backing_fmt)
+static int coroutine_fn GRAPH_RDLOCK
+qcow2_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
+                             const char *backing_fmt)
 {
     BDRVQcow2State *s = bs->opaque;
 
@@ -3816,8 +3822,11 @@ qcow2_co_create(BlockdevCreateOptions *create_options, Error **errp)
         backing_format = BlockdevDriver_str(qcow2_opts->backing_fmt);
     }
 
-    ret = bdrv_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
-                                   backing_format, false);
+    bdrv_graph_co_rdlock();
+    ret = bdrv_co_change_backing_file(blk_bs(blk), qcow2_opts->backing_file,
+                                      backing_format, false);
+    bdrv_graph_co_rdunlock();
+
     if (ret < 0) {
         error_setg_errno(errp, -ret, "Could not assign backing file '%s' "
                          "with format '%s'", qcow2_opts->backing_file,
@@ -5222,8 +5231,8 @@ qcow2_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
     return 0;
 }
 
-static ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *bs,
-                                                  Error **errp)
+static ImageInfoSpecific * GRAPH_RDLOCK
+qcow2_get_specific_info(BlockDriverState *bs, Error **errp)
 {
     BDRVQcow2State *s = bs->opaque;
     ImageInfoSpecific *spec_info;
@@ -5302,7 +5311,8 @@ static ImageInfoSpecific *qcow2_get_specific_info(BlockDriverState *bs,
     return spec_info;
 }
 
-static int coroutine_mixed_fn qcow2_has_zero_init(BlockDriverState *bs)
+static int coroutine_mixed_fn GRAPH_RDLOCK
+qcow2_has_zero_init(BlockDriverState *bs)
 {
     BDRVQcow2State *s = bs->opaque;
     bool preallocated;
@@ -6114,64 +6124,64 @@ static const char *const qcow2_strong_runtime_opts[] = {
 };
 
 BlockDriver bdrv_qcow2 = {
     .format_name = "qcow2",
     .instance_size = sizeof(BDRVQcow2State),
     .bdrv_probe = qcow2_probe,
     .bdrv_open = qcow2_open,
     .bdrv_close = qcow2_close,
     .bdrv_reopen_prepare = qcow2_reopen_prepare,
     .bdrv_reopen_commit = qcow2_reopen_commit,
     .bdrv_reopen_commit_post = qcow2_reopen_commit_post,
     .bdrv_reopen_abort = qcow2_reopen_abort,
     .bdrv_join_options = qcow2_join_options,
     .bdrv_child_perm = bdrv_default_perms,
     .bdrv_co_create_opts = qcow2_co_create_opts,
     .bdrv_co_create = qcow2_co_create,
     .bdrv_has_zero_init = qcow2_has_zero_init,
     .bdrv_co_block_status = qcow2_co_block_status,
 
     .bdrv_co_preadv_part = qcow2_co_preadv_part,
     .bdrv_co_pwritev_part = qcow2_co_pwritev_part,
     .bdrv_co_flush_to_os = qcow2_co_flush_to_os,
 
     .bdrv_co_pwrite_zeroes = qcow2_co_pwrite_zeroes,
     .bdrv_co_pdiscard = qcow2_co_pdiscard,
     .bdrv_co_copy_range_from = qcow2_co_copy_range_from,
     .bdrv_co_copy_range_to = qcow2_co_copy_range_to,
     .bdrv_co_truncate = qcow2_co_truncate,
     .bdrv_co_pwritev_compressed_part = qcow2_co_pwritev_compressed_part,
     .bdrv_make_empty = qcow2_make_empty,
 
     .bdrv_snapshot_create = qcow2_snapshot_create,
     .bdrv_snapshot_goto = qcow2_snapshot_goto,
     .bdrv_snapshot_delete = qcow2_snapshot_delete,
     .bdrv_snapshot_list = qcow2_snapshot_list,
     .bdrv_snapshot_load_tmp = qcow2_snapshot_load_tmp,
     .bdrv_measure = qcow2_measure,
     .bdrv_co_get_info = qcow2_co_get_info,
     .bdrv_get_specific_info = qcow2_get_specific_info,
 
     .bdrv_co_save_vmstate = qcow2_co_save_vmstate,
     .bdrv_co_load_vmstate = qcow2_co_load_vmstate,
 
     .is_format = true,
     .supports_backing = true,
-    .bdrv_change_backing_file = qcow2_change_backing_file,
+    .bdrv_co_change_backing_file = qcow2_co_change_backing_file,
 
     .bdrv_refresh_limits = qcow2_refresh_limits,
     .bdrv_co_invalidate_cache = qcow2_co_invalidate_cache,
     .bdrv_inactivate = qcow2_inactivate,
 
     .create_opts = &qcow2_create_opts,
     .amend_opts = &qcow2_amend_opts,
     .strong_runtime_opts = qcow2_strong_runtime_opts,
     .mutable_opts = mutable_opts,
     .bdrv_co_check = qcow2_co_check,
|
||||||
.bdrv_amend_options = qcow2_amend_options,
|
.bdrv_amend_options = qcow2_amend_options,
|
||||||
.bdrv_co_amend = qcow2_co_amend,
|
.bdrv_co_amend = qcow2_co_amend,
|
||||||
|
|
||||||
.bdrv_detach_aio_context = qcow2_detach_aio_context,
|
.bdrv_detach_aio_context = qcow2_detach_aio_context,
|
||||||
.bdrv_attach_aio_context = qcow2_attach_aio_context,
|
.bdrv_attach_aio_context = qcow2_attach_aio_context,
|
||||||
|
|
||||||
.bdrv_supports_persistent_dirty_bitmap =
|
.bdrv_supports_persistent_dirty_bitmap =
|
||||||
qcow2_supports_persistent_dirty_bitmap,
|
qcow2_supports_persistent_dirty_bitmap,
|
||||||
|
block/qcow2.h
@ -641,7 +641,7 @@ static inline void set_l2_bitmap(BDRVQcow2State *s, uint64_t *l2_slice,
l2_slice[idx + 1] = cpu_to_be64(bitmap);
}

-static inline bool has_data_file(BlockDriverState *bs)
+static inline bool GRAPH_RDLOCK has_data_file(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
return (s->data_file != bs->file);
@ -709,8 +709,8 @@ static inline int64_t qcow2_vm_state_offset(BDRVQcow2State *s)
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
}

-static inline QCow2ClusterType qcow2_get_cluster_type(BlockDriverState *bs,
-uint64_t l2_entry)
+static inline QCow2ClusterType GRAPH_RDLOCK
+qcow2_get_cluster_type(BlockDriverState *bs, uint64_t l2_entry)
{
BDRVQcow2State *s = bs->opaque;

@ -743,7 +743,7 @@ static inline QCow2ClusterType qcow2_get_cluster_type(BlockDriverState *bs,
* (this checks the whole entry and bitmap, not only the bits related
* to subcluster @sc_index).
*/
-static inline
+static inline GRAPH_RDLOCK
QCow2SubclusterType qcow2_get_subcluster_type(BlockDriverState *bs,
uint64_t l2_entry,
uint64_t l2_bitmap,
@ -834,9 +834,9 @@ int64_t qcow2_refcount_metadata_size(int64_t clusters, size_t cluster_size,
int refcount_order, bool generous_increase,
uint64_t *refblock_count);

-int qcow2_mark_dirty(BlockDriverState *bs);
-int qcow2_mark_corrupt(BlockDriverState *bs);
-int qcow2_update_header(BlockDriverState *bs);
+int GRAPH_RDLOCK qcow2_mark_dirty(BlockDriverState *bs);
+int GRAPH_RDLOCK qcow2_mark_corrupt(BlockDriverState *bs);
+int GRAPH_RDLOCK qcow2_update_header(BlockDriverState *bs);

void GRAPH_RDLOCK
qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
@ -890,10 +890,11 @@ int GRAPH_RDLOCK qcow2_write_caches(BlockDriverState *bs);
int coroutine_fn qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);

-void qcow2_process_discards(BlockDriverState *bs, int ret);
+void GRAPH_RDLOCK qcow2_process_discards(BlockDriverState *bs, int ret);

-int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
-int64_t size);
+int GRAPH_RDLOCK
+qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
+int64_t size);
int GRAPH_RDLOCK
qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size, bool data_file);
@ -939,8 +940,9 @@ qcow2_alloc_host_offset(BlockDriverState *bs, uint64_t offset,
int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
int compressed_size, uint64_t *host_offset);
-void qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
-uint64_t *coffset, int *csize);
+void GRAPH_RDLOCK
+qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
+uint64_t *coffset, int *csize);

int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
@ -972,11 +974,12 @@ int GRAPH_RDLOCK
qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id,
const char *name, Error **errp);

-int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
-int qcow2_snapshot_load_tmp(BlockDriverState *bs,
-const char *snapshot_id,
-const char *name,
-Error **errp);
+int GRAPH_RDLOCK
+qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
+
+int GRAPH_RDLOCK
+qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_id,
+const char *name, Error **errp);

void qcow2_free_snapshots(BlockDriverState *bs);
int coroutine_fn GRAPH_RDLOCK
@ -992,8 +995,9 @@ qcow2_check_fix_snapshot_table(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix);

/* qcow2-cache.c functions */
-Qcow2Cache *qcow2_cache_create(BlockDriverState *bs, int num_tables,
-unsigned table_size);
+Qcow2Cache * GRAPH_RDLOCK
+qcow2_cache_create(BlockDriverState *bs, int num_tables, unsigned table_size);

int qcow2_cache_destroy(Qcow2Cache *c);

void qcow2_cache_entry_mark_dirty(Qcow2Cache *c, void *table);
@ -1019,17 +1023,24 @@ void *qcow2_cache_is_table_offset(Qcow2Cache *c, uint64_t offset);
void qcow2_cache_discard(Qcow2Cache *c, void *table);

/* qcow2-bitmap.c functions */
-int coroutine_fn
+int coroutine_fn GRAPH_RDLOCK
qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size);

bool coroutine_fn GRAPH_RDLOCK
-qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated, Error **errp);
-bool qcow2_get_bitmap_info_list(BlockDriverState *bs,
-Qcow2BitmapInfoList **info_list, Error **errp);
+qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated,
+Error **errp);
+
+bool GRAPH_RDLOCK
+qcow2_get_bitmap_info_list(BlockDriverState *bs,
+Qcow2BitmapInfoList **info_list, Error **errp);

int GRAPH_RDLOCK qcow2_reopen_bitmaps_rw(BlockDriverState *bs, Error **errp);
int GRAPH_RDLOCK qcow2_reopen_bitmaps_ro(BlockDriverState *bs, Error **errp);
-int coroutine_fn qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp);
+
+int coroutine_fn GRAPH_RDLOCK
+qcow2_truncate_bitmaps_check(BlockDriverState *bs, Error **errp);

bool GRAPH_RDLOCK
qcow2_store_persistent_dirty_bitmaps(BlockDriverState *bs, bool release_stored,
86 block/qed.c
@ -612,7 +612,7 @@ static int bdrv_qed_reopen_prepare(BDRVReopenState *state,
return 0;
}

-static void bdrv_qed_close(BlockDriverState *bs)
+static void GRAPH_RDLOCK bdrv_qed_do_close(BlockDriverState *bs)
{
BDRVQEDState *s = bs->opaque;

@ -631,6 +631,14 @@ static void bdrv_qed_close(BlockDriverState *bs)
qemu_vfree(s->l1_table);
}

+static void GRAPH_UNLOCKED bdrv_qed_close(BlockDriverState *bs)
+{
+GLOBAL_STATE_CODE();
+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
+bdrv_qed_do_close(bs);
+}

static int coroutine_fn GRAPH_UNLOCKED
bdrv_qed_co_create(BlockdevCreateOptions *opts, Error **errp)
{
@ -1138,7 +1146,7 @@ out:
/**
* Check if the QED_F_NEED_CHECK bit should be set during allocating write
*/
-static bool qed_should_set_need_check(BDRVQEDState *s)
+static bool GRAPH_RDLOCK qed_should_set_need_check(BDRVQEDState *s)
{
/* The flush before L2 update path ensures consistency */
if (s->bs->backing) {
@ -1443,12 +1451,10 @@ bdrv_qed_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
QED_AIOCB_WRITE | QED_AIOCB_ZERO);
}

-static int coroutine_fn bdrv_qed_co_truncate(BlockDriverState *bs,
-int64_t offset,
-bool exact,
-PreallocMode prealloc,
-BdrvRequestFlags flags,
-Error **errp)
+static int coroutine_fn GRAPH_RDLOCK
+bdrv_qed_co_truncate(BlockDriverState *bs, int64_t offset, bool exact,
+PreallocMode prealloc, BdrvRequestFlags flags,
+Error **errp)
{
BDRVQEDState *s = bs->opaque;
uint64_t old_image_size;
@ -1498,9 +1504,9 @@ bdrv_qed_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return 0;
}

-static int bdrv_qed_change_backing_file(BlockDriverState *bs,
-const char *backing_file,
+static int coroutine_fn GRAPH_RDLOCK
+bdrv_qed_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
const char *backing_fmt)
{
BDRVQEDState *s = bs->opaque;
QEDHeader new_header, le_header;
@ -1562,7 +1568,7 @@ static int bdrv_qed_change_backing_file(BlockDriverState *bs,
}

/* Write new header */
-ret = bdrv_pwrite_sync(bs->file, 0, buffer_len, buffer, 0);
+ret = bdrv_co_pwrite_sync(bs->file, 0, buffer_len, buffer, 0);
g_free(buffer);
if (ret == 0) {
memcpy(&s->header, &new_header, sizeof(new_header));
@ -1576,7 +1582,7 @@ bdrv_qed_co_invalidate_cache(BlockDriverState *bs, Error **errp)
BDRVQEDState *s = bs->opaque;
int ret;

-bdrv_qed_close(bs);
+bdrv_qed_do_close(bs);

bdrv_qed_init_state(bs);
qemu_co_mutex_lock(&s->table_lock);
@ -1636,34 +1642,34 @@ static QemuOptsList qed_create_opts = {
};

static BlockDriver bdrv_qed = {
.format_name = "qed",
.instance_size = sizeof(BDRVQEDState),
.create_opts = &qed_create_opts,
.is_format = true,
.supports_backing = true,

.bdrv_probe = bdrv_qed_probe,
.bdrv_open = bdrv_qed_open,
.bdrv_close = bdrv_qed_close,
.bdrv_reopen_prepare = bdrv_qed_reopen_prepare,
.bdrv_child_perm = bdrv_default_perms,
.bdrv_co_create = bdrv_qed_co_create,
.bdrv_co_create_opts = bdrv_qed_co_create_opts,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_block_status = bdrv_qed_co_block_status,
.bdrv_co_readv = bdrv_qed_co_readv,
.bdrv_co_writev = bdrv_qed_co_writev,
.bdrv_co_pwrite_zeroes = bdrv_qed_co_pwrite_zeroes,
.bdrv_co_truncate = bdrv_qed_co_truncate,
.bdrv_co_getlength = bdrv_qed_co_getlength,
.bdrv_co_get_info = bdrv_qed_co_get_info,
.bdrv_refresh_limits = bdrv_qed_refresh_limits,
-.bdrv_change_backing_file = bdrv_qed_change_backing_file,
+.bdrv_co_change_backing_file = bdrv_qed_co_change_backing_file,
.bdrv_co_invalidate_cache = bdrv_qed_co_invalidate_cache,
.bdrv_co_check = bdrv_qed_co_check,
.bdrv_detach_aio_context = bdrv_qed_detach_aio_context,
.bdrv_attach_aio_context = bdrv_qed_attach_aio_context,
.bdrv_drain_begin = bdrv_qed_drain_begin,
};

static void bdrv_qed_init(void)
block/qed.h
@ -185,7 +185,7 @@ enum {
/**
* Header functions
*/
-int qed_write_header_sync(BDRVQEDState *s);
+int GRAPH_RDLOCK qed_write_header_sync(BDRVQEDState *s);

/**
* L2 cache functions
block/raw-format.c
@ -95,9 +95,9 @@ end:
return ret;
}

-static int raw_apply_options(BlockDriverState *bs, BDRVRawState *s,
-uint64_t offset, bool has_size, uint64_t size,
-Error **errp)
+static int GRAPH_RDLOCK
+raw_apply_options(BlockDriverState *bs, BDRVRawState *s, uint64_t offset,
+bool has_size, uint64_t size, Error **errp)
{
int64_t real_size = 0;

@ -145,6 +145,9 @@ static int raw_reopen_prepare(BDRVReopenState *reopen_state,
uint64_t offset, size;
int ret;

+GLOBAL_STATE_CODE();
+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
assert(reopen_state != NULL);
assert(reopen_state->bs != NULL);

@ -279,11 +282,10 @@ fail:
return ret;
}

-static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
-bool want_zero, int64_t offset,
-int64_t bytes, int64_t *pnum,
-int64_t *map,
-BlockDriverState **file)
+static int coroutine_fn GRAPH_RDLOCK
+raw_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
+int64_t bytes, int64_t *pnum, int64_t *map,
+BlockDriverState **file)
{
BDRVRawState *s = bs->opaque;
*pnum = bytes;
@ -397,7 +399,7 @@ raw_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
return bdrv_co_get_info(bs->file->bs, bdi);
}

-static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
+static void GRAPH_RDLOCK raw_refresh_limits(BlockDriverState *bs, Error **errp)
{
bs->bl.has_variable_length = bs->file->bs->bl.has_variable_length;

@ -452,7 +454,7 @@ raw_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
return bdrv_co_ioctl(bs->file->bs, req, buf);
}

-static int raw_has_zero_init(BlockDriverState *bs)
+static int GRAPH_RDLOCK raw_has_zero_init(BlockDriverState *bs)
{
return bdrv_has_zero_init(bs->file->bs);
}
@ -474,6 +476,8 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
BdrvChildRole file_role;
int ret;

+GLOBAL_STATE_CODE();
+
ret = raw_read_options(options, &offset, &has_size, &size, errp);
if (ret < 0) {
return ret;
@ -491,6 +495,8 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,

bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
file_role, false, errp);

+GRAPH_RDLOCK_GUARD_MAINLOOP();
if (!bs->file) {
return -EINVAL;
}
@ -505,9 +511,7 @@ static int raw_open(BlockDriverState *bs, QDict *options, int flags,
BDRV_REQ_ZERO_WRITE;

if (bs->probed && !bdrv_is_read_only(bs)) {
-bdrv_graph_rdlock_main_loop();
bdrv_refresh_filename(bs->file->bs);
-bdrv_graph_rdunlock_main_loop();
fprintf(stderr,
"WARNING: Image format was not specified for '%s' and probing "
"guessed raw.\n"
@ -543,7 +547,8 @@ static int raw_probe(const uint8_t *buf, int buf_size, const char *filename)
return 1;
}

-static int raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
+static int GRAPH_RDLOCK
+raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
{
BDRVRawState *s = bs->opaque;
int ret;
@ -560,7 +565,8 @@ static int raw_probe_blocksizes(BlockDriverState *bs, BlockSizes *bsz)
return 0;
}

-static int raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
+static int GRAPH_RDLOCK
+raw_probe_geometry(BlockDriverState *bs, HDGeometry *geo)
{
BDRVRawState *s = bs->opaque;
if (s->offset || s->has_size) {
@ -610,7 +616,7 @@ static const char *const raw_strong_runtime_opts[] = {
NULL
};

-static void raw_cancel_in_flight(BlockDriverState *bs)
+static void GRAPH_RDLOCK raw_cancel_in_flight(BlockDriverState *bs)
{
bdrv_cancel_in_flight(bs->file->bs);
}
block/replication.c
@ -311,7 +311,7 @@ static void GRAPH_UNLOCKED
secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
{
BDRVReplicationState *s = bs->opaque;
-BdrvChild *active_disk = bs->file;
+BdrvChild *active_disk;
Error *local_err = NULL;
int ret;

@ -328,6 +328,7 @@ secondary_do_checkpoint(BlockDriverState *bs, Error **errp)
return;
}

+active_disk = bs->file;
if (!active_disk->bs->drv) {
error_setg(errp, "Active disk %s is ejected",
active_disk->bs->node_name);
@ -363,6 +364,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
BdrvChild *hidden_disk, *secondary_disk;
BlockReopenQueue *reopen_queue = NULL;

+GLOBAL_STATE_CODE();
+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
/*
* s->hidden_disk and s->secondary_disk may not be set yet, as they will
* only be set after the children are writable.
@ -496,9 +500,11 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
case REPLICATION_MODE_PRIMARY:
break;
case REPLICATION_MODE_SECONDARY:
+bdrv_graph_rdlock_main_loop();
active_disk = bs->file;
if (!active_disk || !active_disk->bs || !active_disk->bs->backing) {
error_setg(errp, "Active disk doesn't have backing file");
+bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context);
return;
}
@ -506,11 +512,11 @@ static void replication_start(ReplicationState *rs, ReplicationMode mode,
hidden_disk = active_disk->bs->backing;
if (!hidden_disk->bs || !hidden_disk->bs->backing) {
error_setg(errp, "Hidden disk doesn't have backing file");
+bdrv_graph_rdunlock_main_loop();
aio_context_release(aio_context);
return;
}

-bdrv_graph_rdlock_main_loop();
secondary_disk = hidden_disk->bs->backing;
if (!secondary_disk->bs || !bdrv_has_blk(secondary_disk->bs)) {
error_setg(errp, "The secondary disk doesn't have block backend");
@ -750,11 +756,13 @@ static void replication_stop(ReplicationState *rs, bool failover, Error **errp)
return;
}

+bdrv_graph_rdlock_main_loop();
s->stage = BLOCK_REPLICATION_FAILOVER;
s->commit_job = commit_active_start(
NULL, bs->file->bs, s->secondary_disk->bs,
JOB_INTERNAL, 0, BLOCKDEV_ON_ERROR_REPORT,
NULL, replication_done, bs, true, errp);
+bdrv_graph_rdunlock_main_loop();
break;
default:
aio_context_release(aio_context);
block/snapshot-access.c
@ -73,7 +73,7 @@ snapshot_access_co_pwritev_part(BlockDriverState *bs,
}


-static void snapshot_access_refresh_filename(BlockDriverState *bs)
+static void GRAPH_RDLOCK snapshot_access_refresh_filename(BlockDriverState *bs)
{
pstrcpy(bs->exact_filename, sizeof(bs->exact_filename),
bs->file->bs->filename);
@ -85,6 +85,9 @@ static int snapshot_access_open(BlockDriverState *bs, QDict *options, int flags,
bdrv_open_child(NULL, options, "file", bs, &child_of_bds,
BDRV_CHILD_DATA | BDRV_CHILD_PRIMARY,
false, errp);

+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
if (!bs->file) {
return -EINVAL;
}
block/snapshot.c
@ -629,7 +629,6 @@ int bdrv_all_goto_snapshot(const char *name,
while (iterbdrvs) {
BlockDriverState *bs = iterbdrvs->data;
AioContext *ctx = bdrv_get_aio_context(bs);
-int ret = 0;
bool all_snapshots_includes_bs;

aio_context_acquire(ctx);
@ -637,9 +636,8 @@ int bdrv_all_goto_snapshot(const char *name,
all_snapshots_includes_bs = bdrv_all_snapshots_includes_bs(bs);
bdrv_graph_rdunlock_main_loop();

-if (devices || all_snapshots_includes_bs) {
-ret = bdrv_snapshot_goto(bs, name, errp);
-}
+ret = (devices || all_snapshots_includes_bs) ?
+bdrv_snapshot_goto(bs, name, errp) : 0;
aio_context_release(ctx);
if (ret < 0) {
bdrv_graph_rdlock_main_loop();
block/stream.c
@ -53,13 +53,20 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
static int stream_prepare(Job *job)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
-BlockDriverState *unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
+BlockDriverState *unfiltered_bs;
+BlockDriverState *unfiltered_bs_cow;
BlockDriverState *base;
BlockDriverState *unfiltered_base;
Error *local_err = NULL;
int ret = 0;

+GLOBAL_STATE_CODE();
+
+bdrv_graph_rdlock_main_loop();
+unfiltered_bs = bdrv_skip_filters(s->target_bs);
+unfiltered_bs_cow = bdrv_cow_bs(unfiltered_bs);
+bdrv_graph_rdunlock_main_loop();
+
/* We should drop filter at this point, as filter hold the backing chain */
bdrv_cor_filter_drop(s->cor_filter_bs);
s->cor_filter_bs = NULL;
@ -78,10 +85,12 @@ static int stream_prepare(Job *job)
bdrv_drained_begin(unfiltered_bs_cow);
}

+bdrv_graph_rdlock_main_loop();
base = bdrv_filter_or_cow_bs(s->above_base);
unfiltered_base = bdrv_skip_filters(base);
+bdrv_graph_rdunlock_main_loop();

-if (bdrv_cow_child(unfiltered_bs)) {
+if (unfiltered_bs_cow) {
const char *base_id = NULL, *base_fmt = NULL;
if (unfiltered_base) {
base_id = s->backing_file_str ?: unfiltered_base->filename;
@ -90,7 +99,9 @@ static int stream_prepare(Job *job)
}
}

+bdrv_graph_wrlock(base);
bdrv_set_backing_hd_drained(unfiltered_bs, base, &local_err);
+bdrv_graph_wrunlock();

/*
* This call will do I/O, so the graph can change again from here on.
@ -138,18 +149,19 @@ static void stream_clean(Job *job)
static int coroutine_fn stream_run(Job *job, Error **errp)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
-BlockDriverState *unfiltered_bs = bdrv_skip_filters(s->target_bs);
+BlockDriverState *unfiltered_bs;
int64_t len;
int64_t offset = 0;
int error = 0;
int64_t n = 0; /* bytes */

-if (unfiltered_bs == s->base_overlay) {
-/* Nothing to stream */
-return 0;
-}
-
WITH_GRAPH_RDLOCK_GUARD() {
+unfiltered_bs = bdrv_skip_filters(s->target_bs);
+if (unfiltered_bs == s->base_overlay) {
+/* Nothing to stream */
+return 0;
+}
+
len = bdrv_co_getlength(s->target_bs);
if (len < 0) {
return len;
@ -256,6 +268,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
assert(!(base && bottom));
assert(!(backing_file_str && bottom));

+bdrv_graph_rdlock_main_loop();
+
if (bottom) {
/*
* New simple interface. The code is written in terms of old interface
@ -272,7 +286,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
if (!base_overlay) {
error_setg(errp, "'%s' is not in the backing chain of '%s'",
base->node_name, bs->node_name);
-return;
+goto out_rdlock;
}

/*
@ -294,7 +308,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
if (bs_read_only) {
/* Hold the chain during reopen */
if (bdrv_freeze_backing_chain(bs, above_base, errp) < 0) {
-return;
+goto out_rdlock;
}

ret = bdrv_reopen_set_read_only(bs, false, errp);
@ -303,10 +317,12 @@ void stream_start(const char *job_id, BlockDriverState *bs,
bdrv_unfreeze_backing_chain(bs, above_base);

if (ret < 0) {
-return;
+goto out_rdlock;
}
}

+bdrv_graph_rdunlock_main_loop();
+
opts = qdict_new();

qdict_put_str(opts, "driver", "copy-on-read");
@ -350,8 +366,10 @@ void stream_start(const char *job_id, BlockDriverState *bs,
* already have our own plans. Also don't allow resize as the image size is
* queried only at the job start and then cached.
*/
+bdrv_graph_wrlock(bs);
if (block_job_add_bdrv(&s->common, "active node", bs, 0,
basic_flags | BLK_PERM_WRITE, errp)) {
+bdrv_graph_wrunlock();
goto fail;
}

@ -371,9 +389,11 @@ void stream_start(const char *job_id, BlockDriverState *bs,
ret = block_job_add_bdrv(&s->common, "intermediate node", iter, 0,
basic_flags, errp);
if (ret < 0) {
+bdrv_graph_wrunlock();
goto fail;
}
}
+bdrv_graph_wrunlock();

s->base_overlay = base_overlay;
s->above_base = above_base;
@ -397,4 +417,8 @@ fail:
if (bs_read_only) {
bdrv_reopen_set_read_only(bs, true, NULL);
}
+return;
+
+out_rdlock:
+bdrv_graph_rdunlock_main_loop();
}
block/throttle.c
@ -84,6 +84,9 @@ static int throttle_open(BlockDriverState *bs, QDict *options,
if (ret < 0) {
return ret;
}

+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
bs->supported_write_flags = bs->file->bs->supported_write_flags |
BDRV_REQ_WRITE_UNCHANGED;
bs->supported_zero_flags = bs->file->bs->supported_zero_flags |
19 block/vdi.c
@ -239,7 +239,7 @@ static void vdi_header_to_le(VdiHeader *header)

static void vdi_header_print(VdiHeader *header)
{
-char uuidstr[37];
+char uuidstr[UUID_STR_LEN];
QemuUUID uuid;
logout("text %s", header->text);
logout("signature 0x%08x\n", header->signature);
@ -383,6 +383,8 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}

+GRAPH_RDLOCK_GUARD_MAINLOOP();
+
logout("\n");

ret = bdrv_pread(bs->file, 0, sizeof(header), &header, 0);
@ -492,13 +494,11 @@ static int vdi_open(BlockDriverState *bs, QDict *options, int flags,
}

/* Disable migration when vdi images are used */
-bdrv_graph_rdlock_main_loop();
error_setg(&s->migration_blocker, "The vdi format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
-bdrv_graph_rdunlock_main_loop();

-ret = migrate_add_blocker(&s->migration_blocker, errp);
+ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) {
goto fail_free_bmap;
}
@ -520,11 +520,10 @@ static int vdi_reopen_prepare(BDRVReopenState *state,
return 0;
}

-static int coroutine_fn vdi_co_block_status(BlockDriverState *bs,
-bool want_zero,
-int64_t offset, int64_t bytes,
-int64_t *pnum, int64_t *map,
-BlockDriverState **file)
+static int coroutine_fn GRAPH_RDLOCK
+vdi_co_block_status(BlockDriverState *bs, bool want_zero, int64_t offset,
+int64_t bytes, int64_t *pnum, int64_t *map,
+BlockDriverState **file)
{
BDRVVdiState *s = (BDRVVdiState *)bs->opaque;
size_t bmap_index = offset / s->block_size;
@ -990,7 +989,7 @@ static void vdi_close(BlockDriverState *bs)
migrate_del_blocker(&s->migration_blocker);
}

-static int vdi_has_zero_init(BlockDriverState *bs)
+static int GRAPH_RDLOCK vdi_has_zero_init(BlockDriverState *bs)
{
BDRVVdiState *s = bs->opaque;

block/vhdx-log.c
@ -55,8 +55,9 @@ static const MSGUID zero_guid = { 0 };

/* Allow peeking at the hdr entry at the beginning of the current
* read index, without advancing the read index */
-static int vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
-VHDXLogEntryHeader *hdr)
+static int GRAPH_RDLOCK
+vhdx_log_peek_hdr(BlockDriverState *bs, VHDXLogEntries *log,
+VHDXLogEntryHeader *hdr)
{
int ret = 0;
uint64_t offset;
@ -107,7 +108,7 @@ static int vhdx_log_inc_idx(uint32_t idx, uint64_t length)


/* Reset the log to empty */
-static void vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
+static void GRAPH_RDLOCK vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
{
MSGUID guid = { 0 };
s->log.read = s->log.write = 0;
@ -127,9 +128,10 @@ static void vhdx_log_reset(BlockDriverState *bs, BDRVVHDXState *s)
* not modified.
*
* 0 is returned on success, -errno otherwise. */
-static int vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
-uint32_t *sectors_read, void *buffer,
-uint32_t num_sectors, bool peek)
+static int GRAPH_RDLOCK
+vhdx_log_read_sectors(BlockDriverState *bs, VHDXLogEntries *log,
+uint32_t *sectors_read, void *buffer,
+uint32_t num_sectors, bool peek)
{
int ret = 0;
uint64_t offset;
@ -333,9 +335,9 @@ static int vhdx_compute_desc_sectors(uint32_t desc_cnt)
* will allocate all the space for buffer, which must be NULL when
* passed into this function. Each descriptor will also be validated,
* and error returned if any are invalid. */
-static int vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s,
-VHDXLogEntries *log, VHDXLogDescEntries **buffer,
-bool convert_endian)
+static int GRAPH_RDLOCK
+vhdx_log_read_desc(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogEntries *log,
+VHDXLogDescEntries **buffer, bool convert_endian)
{
int ret = 0;
uint32_t desc_sectors;
@ -412,8 +414,9 @@ exit:
* For a zero descriptor, it may describe multiple sectors to fill with zeroes.
* In this case, it should be noted that zeroes are written to disk, and the
* image file is not extended as a sparse file. */
-static int vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
-VHDXLogDataSector *data)
+static int GRAPH_RDLOCK
+vhdx_log_flush_desc(BlockDriverState *bs, VHDXLogDescriptor *desc,
+VHDXLogDataSector *data)
{
int ret = 0;
uint64_t seq, file_offset;
@ -484,8 +487,8 @@ exit:
* file, and then set the log to 'empty' status once complete.
*
* The log entries should be validate prior to flushing */
-static int vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s,
-VHDXLogSequence *logs)
+static int GRAPH_RDLOCK
+vhdx_log_flush(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs)
{
int ret = 0;
int i;
@ -584,9 +587,10 @@ exit:
return ret;
}

-static int vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
-VHDXLogEntries *log, uint64_t seq,
-bool *valid, VHDXLogEntryHeader *entry)
+static int GRAPH_RDLOCK
+vhdx_validate_log_entry(BlockDriverState *bs, BDRVVHDXState *s,
+VHDXLogEntries *log, uint64_t seq,
+bool *valid, VHDXLogEntryHeader *entry)
{
int ret = 0;
VHDXLogEntryHeader hdr;
@ -663,8 +667,8 @@ free_and_exit:
/* Search through the log circular buffer, and find the valid, active
* log sequence, if any exists
* */
-static int vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s,
-VHDXLogSequence *logs)
+static int GRAPH_RDLOCK
+vhdx_log_search(BlockDriverState *bs, BDRVVHDXState *s, VHDXLogSequence *logs)
{
int ret = 0;
uint32_t tail;

39 block/vhdx.c
@ -353,8 +353,9 @@ exit:
*
* - non-current header is updated with largest sequence number
*/
-static int vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
-bool generate_data_write_guid, MSGUID *log_guid)
+static int GRAPH_RDLOCK
+vhdx_update_header(BlockDriverState *bs, BDRVVHDXState *s,
+bool generate_data_write_guid, MSGUID *log_guid)
{
int ret = 0;
int hdr_idx = 0;
@ -416,8 +417,8 @@ int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s,
}

/* opens the specified header block from the VHDX file header section */
-static void vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s,
-Error **errp)
+static void GRAPH_RDLOCK
+vhdx_parse_header(BlockDriverState *bs, BDRVVHDXState *s, Error **errp)
{
int ret;
VHDXHeader *header1;
@ -517,7 +518,8 @@ exit:
}


-static int vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
+static int GRAPH_RDLOCK
+vhdx_open_region_tables(BlockDriverState *bs, BDRVVHDXState *s)
{
int ret = 0;
uint8_t *buffer;
@ -634,7 +636,8 @@ fail:
* Also, if the File Parameters indicate this is a differencing file,
* we must also look for the Parent Locator metadata item.
*/
-static int vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
+static int GRAPH_RDLOCK
+vhdx_parse_metadata(BlockDriverState *bs, BDRVVHDXState *s)
{
int ret = 0;
uint8_t *buffer;
@ -885,7 +888,8 @@ static void vhdx_calc_bat_entries(BDRVVHDXState *s)

}

-static int vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
+static int coroutine_mixed_fn GRAPH_RDLOCK
+vhdx_check_bat_entries(BlockDriverState *bs, int *errcnt)
{
BDRVVHDXState *s = bs->opaque;
int64_t image_file_size = bdrv_getlength(bs->file->bs);
@ -1096,7 +1100,7 @@ static int vhdx_open(BlockDriverState *bs, QDict *options, int flags,
error_setg(&s->migration_blocker, "The vhdx format used by node '%s' "
"does not support live migration",
bdrv_get_device_or_node_name(bs));
-ret = migrate_add_blocker(&s->migration_blocker, errp);
+ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
if (ret < 0) {
goto fail;
}
@ -1695,7 +1699,7 @@ exit:
* Fixed images: default state of the BAT is fully populated, with
* file offsets and state PAYLOAD_BLOCK_FULLY_PRESENT.
*/
-static int coroutine_fn
+static int coroutine_fn GRAPH_UNLOCKED
vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t image_size, VHDXImageType type,
bool use_zero_blocks, uint64_t file_offset,
@ -1708,6 +1712,7 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t unused;
int block_state;
VHDXSectorInfo sinfo;
+bool has_zero_init;

assert(s->bat == NULL);

@ -1737,9 +1742,13 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
goto exit;
}

+bdrv_graph_co_rdlock();
+has_zero_init = bdrv_has_zero_init(blk_bs(blk));
+bdrv_graph_co_rdunlock();
+
if (type == VHDX_TYPE_FIXED ||
use_zero_blocks ||
-bdrv_has_zero_init(blk_bs(blk)) == 0) {
+has_zero_init == 0) {
/* for a fixed file, the default BAT entry is not zero */
s->bat = g_try_malloc0(length);
if (length && s->bat == NULL) {
@ -1782,7 +1791,7 @@ exit:
* to create the BAT itself, we will also cause the BAT to be
* created.
*/
-static int coroutine_fn
+static int coroutine_fn GRAPH_UNLOCKED
vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size,
uint32_t block_size, uint32_t sector_size,
uint32_t log_size, bool use_zero_blocks,
@ -2158,9 +2167,9 @@ fail:
* r/w and any log has already been replayed, so there is nothing (currently)
* for us to do here
*/
-static int coroutine_fn vhdx_co_check(BlockDriverState *bs,
-BdrvCheckResult *result,
+static int coroutine_fn GRAPH_RDLOCK
+vhdx_co_check(BlockDriverState *bs, BdrvCheckResult *result,
BdrvCheckMode fix)
{
BDRVVHDXState *s = bs->opaque;

@ -2173,7 +2182,7 @@ static int coroutine_fn vhdx_co_check(BlockDriverState *bs,
return 0;
}

-static int vhdx_has_zero_init(BlockDriverState *bs)
+static int GRAPH_RDLOCK vhdx_has_zero_init(BlockDriverState *bs)
{
BDRVVHDXState *s = bs->opaque;
int state;
block/vhdx.h
@ -401,8 +401,9 @@ typedef struct BDRVVHDXState {

void vhdx_guid_generate(MSGUID *guid);

-int vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
-MSGUID *log_guid);
+int GRAPH_RDLOCK
+vhdx_update_headers(BlockDriverState *bs, BDRVVHDXState *s, bool rw,
+MSGUID *log_guid);

uint32_t vhdx_update_checksum(uint8_t *buf, size_t size, int crc_offset);
uint32_t vhdx_checksum_calc(uint32_t crc, uint8_t *buf, size_t size,
@ -448,6 +449,8 @@ void vhdx_metadata_header_le_import(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_header_le_export(VHDXMetadataTableHeader *hdr);
void vhdx_metadata_entry_le_import(VHDXMetadataTableEntry *e);
void vhdx_metadata_entry_le_export(VHDXMetadataTableEntry *e);
-int vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);
+
+int GRAPH_RDLOCK
+vhdx_user_visible_write(BlockDriverState *bs, BDRVVHDXState *s);

#endif
block/vmdk.c | 25
@@ -300,7 +300,8 @@ static void vmdk_free_last_extent(BlockDriverState *bs)
 }
 
 /* Return -ve errno, or 0 on success and write CID into *pcid. */
-static int vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
+static int GRAPH_RDLOCK
+vmdk_read_cid(BlockDriverState *bs, int parent, uint32_t *pcid)
 {
     char *desc;
     uint32_t cid;
@@ -380,7 +381,7 @@ out:
     return ret;
 }
 
-static int coroutine_fn vmdk_is_cid_valid(BlockDriverState *bs)
+static int coroutine_fn GRAPH_RDLOCK vmdk_is_cid_valid(BlockDriverState *bs)
 {
     BDRVVmdkState *s = bs->opaque;
     uint32_t cur_pcid;
@@ -415,6 +416,9 @@ static int vmdk_reopen_prepare(BDRVReopenState *state,
     BDRVVmdkReopenState *rs;
     int i;
 
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     assert(state != NULL);
     assert(state->bs != NULL);
     assert(state->opaque == NULL);
@@ -451,6 +455,9 @@ static void vmdk_reopen_commit(BDRVReopenState *state)
     BDRVVmdkReopenState *rs = state->opaque;
     int i;
 
+    GLOBAL_STATE_CODE();
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     for (i = 0; i < s->num_extents; i++) {
         if (rs->extents_using_bs_file[i]) {
             s->extents[i].file = state->bs->file;
@@ -465,7 +472,7 @@ static void vmdk_reopen_abort(BDRVReopenState *state)
     vmdk_reopen_clean(state);
 }
 
-static int vmdk_parent_open(BlockDriverState *bs)
+static int GRAPH_RDLOCK vmdk_parent_open(BlockDriverState *bs)
 {
     char *p_name;
     char *desc;
@@ -1386,7 +1393,7 @@ static int vmdk_open(BlockDriverState *bs, QDict *options, int flags,
     error_setg(&s->migration_blocker, "The vmdk format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
-    ret = migrate_add_blocker(&s->migration_blocker, errp);
+    ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
         goto fail;
     }
@@ -2547,7 +2554,10 @@ vmdk_co_do_create(int64_t size,
             ret = -EINVAL;
             goto exit;
         }
 
+        bdrv_graph_co_rdlock();
         ret = vmdk_read_cid(blk_bs(backing), 0, &parent_cid);
+        bdrv_graph_co_rdunlock();
         blk_co_unref(backing);
         if (ret) {
             error_setg(errp, "Failed to read parent CID");
@@ -2894,7 +2904,7 @@ vmdk_co_get_allocated_file_size(BlockDriverState *bs)
     return ret;
 }
 
-static int vmdk_has_zero_init(BlockDriverState *bs)
+static int GRAPH_RDLOCK vmdk_has_zero_init(BlockDriverState *bs)
 {
     int i;
     BDRVVmdkState *s = bs->opaque;
@@ -3044,8 +3054,9 @@ vmdk_co_get_info(BlockDriverState *bs, BlockDriverInfo *bdi)
     return 0;
 }
 
-static void vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
-                                      bool backing_overridden)
+static void GRAPH_RDLOCK
+vmdk_gather_child_options(BlockDriverState *bs, QDict *target,
+                          bool backing_overridden)
 {
     /* No children but file and backing can be explicitly specified (TODO) */
     qdict_put(target, "file",
@@ -238,6 +238,8 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
         return ret;
     }
 
+    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
     opts = qemu_opts_create(&vpc_runtime_opts, NULL, 0, &error_abort);
     if (!qemu_opts_absorb_qdict(opts, options, errp)) {
         ret = -EINVAL;
@@ -446,13 +448,11 @@ static int vpc_open(BlockDriverState *bs, QDict *options, int flags,
     }
 
     /* Disable migration when VHD images are used */
-    bdrv_graph_rdlock_main_loop();
     error_setg(&s->migration_blocker, "The vpc format used by node '%s' "
                "does not support live migration",
                bdrv_get_device_or_node_name(bs));
-    bdrv_graph_rdunlock_main_loop();
 
-    ret = migrate_add_blocker(&s->migration_blocker, errp);
+    ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
     if (ret < 0) {
         goto fail;
     }
@@ -1170,7 +1170,7 @@ fail:
 }
 
 
-static int vpc_has_zero_init(BlockDriverState *bs)
+static int GRAPH_RDLOCK vpc_has_zero_init(BlockDriverState *bs)
 {
     BDRVVPCState *s = bs->opaque;
 
@@ -1268,7 +1268,7 @@ static int vvfat_open(BlockDriverState *bs, QDict *options, int flags,
                    "The vvfat (rw) format used by node '%s' "
                    "does not support live migration",
                    bdrv_get_device_or_node_name(bs));
-        ret = migrate_add_blocker(&s->migration_blocker, errp);
+        ret = migrate_add_blocker_normal(&s->migration_blocker, errp);
         if (ret < 0) {
             goto fail;
         }
blockdev.c | 115
@@ -255,13 +255,13 @@ void drive_check_orphaned(void)
          * Ignore default drives, because we create certain default
          * drives unconditionally, then leave them unclaimed.  Not the
          * users fault.
-         * Ignore IF_VIRTIO, because it gets desugared into -device,
-         * so we can leave failing to -device.
+         * Ignore IF_VIRTIO or IF_XEN, because it gets desugared into
+         * -device, so we can leave failing to -device.
          * Ignore IF_NONE, because leaving unclaimed IF_NONE remains
          * available for device_add is a feature.
          */
         if (dinfo->is_default || dinfo->type == IF_VIRTIO
-            || dinfo->type == IF_NONE) {
+            || dinfo->type == IF_XEN || dinfo->type == IF_NONE) {
             continue;
         }
         if (!blk_get_attached_dev(blk)) {
@@ -977,6 +977,15 @@ DriveInfo *drive_new(QemuOpts *all_opts, BlockInterfaceType block_default_type,
         qemu_opt_set(devopts, "driver", "virtio-blk", &error_abort);
         qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
                      &error_abort);
+    } else if (type == IF_XEN) {
+        QemuOpts *devopts;
+        devopts = qemu_opts_create(qemu_find_opts("device"), NULL, 0,
+                                   &error_abort);
+        qemu_opt_set(devopts, "driver",
+                     (media == MEDIA_CDROM) ? "xen-cdrom" : "xen-disk",
+                     &error_abort);
+        qemu_opt_set(devopts, "drive", qdict_get_str(bs_opts, "id"),
+                     &error_abort);
     }
 
     filename = qemu_opt_get(legacy_opts, "file");
@@ -1601,7 +1610,12 @@ static void external_snapshot_abort(void *opaque)
             aio_context_acquire(aio_context);
         }
 
+        bdrv_drained_begin(state->new_bs);
+        bdrv_graph_wrlock(state->old_bs);
         bdrv_replace_node(state->new_bs, state->old_bs, &error_abort);
+        bdrv_graph_wrunlock();
+        bdrv_drained_end(state->new_bs);
+
         bdrv_unref(state->old_bs); /* bdrv_replace_node() ref'ed old_bs */
 
         aio_context_release(aio_context);
@@ -1701,7 +1715,6 @@ static void drive_backup_action(DriveBackup *backup,
         bdrv_graph_rdunlock_main_loop();
         goto out;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     flags = bs->open_flags | BDRV_O_RDWR;
 
@@ -1726,6 +1739,7 @@ static void drive_backup_action(DriveBackup *backup,
         flags |= BDRV_O_NO_BACKING;
         set_backing_hd = true;
     }
+    bdrv_graph_rdunlock_main_loop();
 
     size = bdrv_getlength(bs);
     if (size < 0) {
@@ -1737,10 +1751,10 @@ static void drive_backup_action(DriveBackup *backup,
     assert(format);
     if (source) {
         /* Implicit filters should not appear in the filename */
-        BlockDriverState *explicit_backing =
-            bdrv_skip_implicit_filters(source);
+        BlockDriverState *explicit_backing;
 
         bdrv_graph_rdlock_main_loop();
+        explicit_backing = bdrv_skip_implicit_filters(source);
         bdrv_refresh_filename(explicit_backing);
         bdrv_graph_rdunlock_main_loop();
 
@@ -2441,11 +2455,12 @@ void qmp_block_stream(const char *job_id, const char *device,
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
 
+    bdrv_graph_rdlock_main_loop();
     if (base) {
         base_bs = bdrv_find_backing_image(bs, base);
         if (base_bs == NULL) {
             error_setg(errp, "Can't find '%s' in the backing chain", base);
-            goto out;
+            goto out_rdlock;
         }
         assert(bdrv_get_aio_context(base_bs) == aio_context);
     }
@@ -2453,38 +2468,36 @@ void qmp_block_stream(const char *job_id, const char *device,
     if (base_node) {
         base_bs = bdrv_lookup_bs(NULL, base_node, errp);
         if (!base_bs) {
-            goto out;
+            goto out_rdlock;
         }
         if (bs == base_bs || !bdrv_chain_contains(bs, base_bs)) {
             error_setg(errp, "Node '%s' is not a backing image of '%s'",
                        base_node, device);
-            goto out;
+            goto out_rdlock;
         }
         assert(bdrv_get_aio_context(base_bs) == aio_context);
 
-        bdrv_graph_rdlock_main_loop();
         bdrv_refresh_filename(base_bs);
-        bdrv_graph_rdunlock_main_loop();
     }
 
     if (bottom) {
         bottom_bs = bdrv_lookup_bs(NULL, bottom, errp);
         if (!bottom_bs) {
-            goto out;
+            goto out_rdlock;
         }
         if (!bottom_bs->drv) {
             error_setg(errp, "Node '%s' is not open", bottom);
-            goto out;
+            goto out_rdlock;
         }
         if (bottom_bs->drv->is_filter) {
             error_setg(errp, "Node '%s' is a filter, use a non-filter node "
                        "as 'bottom'", bottom);
-            goto out;
+            goto out_rdlock;
         }
         if (!bdrv_chain_contains(bs, bottom_bs)) {
             error_setg(errp, "Node '%s' is not in a chain starting from '%s'",
                        bottom, device);
-            goto out;
+            goto out_rdlock;
         }
         assert(bdrv_get_aio_context(bottom_bs) == aio_context);
     }
@@ -2493,13 +2506,11 @@ void qmp_block_stream(const char *job_id, const char *device,
      * Check for op blockers in the whole chain between bs and base (or bottom)
      */
     iter_end = bottom ? bdrv_filter_or_cow_bs(bottom_bs) : base_bs;
-    bdrv_graph_rdlock_main_loop();
     for (iter = bs; iter && iter != iter_end;
          iter = bdrv_filter_or_cow_bs(iter))
     {
         if (bdrv_op_is_blocked(iter, BLOCK_OP_TYPE_STREAM, errp)) {
-            bdrv_graph_rdunlock_main_loop();
-            goto out;
+            goto out_rdlock;
         }
     }
     bdrv_graph_rdunlock_main_loop();
@@ -2531,6 +2542,11 @@ void qmp_block_stream(const char *job_id, const char *device,
 
 out:
     aio_context_release(aio_context);
+    return;
+
+out_rdlock:
+    bdrv_graph_rdunlock_main_loop();
+    aio_context_release(aio_context);
 }
 
 void qmp_block_commit(const char *job_id, const char *device,
@@ -2968,6 +2984,7 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
 
     if (replaces) {
         BlockDriverState *to_replace_bs;
+        AioContext *aio_context;
         AioContext *replace_aio_context;
         int64_t bs_size, replace_size;
 
@@ -2982,10 +2999,19 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
             return;
         }
 
+        aio_context = bdrv_get_aio_context(bs);
         replace_aio_context = bdrv_get_aio_context(to_replace_bs);
-        aio_context_acquire(replace_aio_context);
+        /*
+         * bdrv_getlength() is a co-wrapper and uses AIO_WAIT_WHILE. Be sure not
+         * to acquire the same AioContext twice.
+         */
+        if (replace_aio_context != aio_context) {
+            aio_context_acquire(replace_aio_context);
+        }
         replace_size = bdrv_getlength(to_replace_bs);
-        aio_context_release(replace_aio_context);
+        if (replace_aio_context != aio_context) {
+            aio_context_release(replace_aio_context);
+        }
 
         if (replace_size < 0) {
             error_setg_errno(errp, -replace_size,
@@ -3035,7 +3061,6 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         bdrv_graph_rdunlock_main_loop();
         return;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
@@ -3057,6 +3082,7 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
     if (arg->sync == MIRROR_SYNC_MODE_NONE) {
         target_backing_bs = bs;
     }
+    bdrv_graph_rdunlock_main_loop();
 
     size = bdrv_getlength(bs);
     if (size < 0) {
@@ -3089,16 +3115,18 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         bdrv_img_create(arg->target, format,
                         NULL, NULL, NULL, size, flags, false, &local_err);
     } else {
-        /* Implicit filters should not appear in the filename */
-        BlockDriverState *explicit_backing =
-            bdrv_skip_implicit_filters(target_backing_bs);
+        BlockDriverState *explicit_backing;
 
         switch (arg->mode) {
         case NEW_IMAGE_MODE_EXISTING:
             break;
        case NEW_IMAGE_MODE_ABSOLUTE_PATHS:
-            /* create new image with backing file */
+            /*
+             * Create new image with backing file.
+             * Implicit filters should not appear in the filename.
+             */
            bdrv_graph_rdlock_main_loop();
+            explicit_backing = bdrv_skip_implicit_filters(target_backing_bs);
             bdrv_refresh_filename(explicit_backing);
             bdrv_graph_rdunlock_main_loop();
 
@@ -3137,9 +3165,11 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
         return;
     }
 
+    bdrv_graph_rdlock_main_loop();
     zero_target = (arg->sync == MIRROR_SYNC_MODE_FULL &&
                    (arg->mode == NEW_IMAGE_MODE_EXISTING ||
                     !bdrv_has_zero_init(target_bs)));
+    bdrv_graph_rdunlock_main_loop();
 
 
     /* Honor bdrv_try_change_aio_context() context acquisition requirements. */
@@ -3382,6 +3412,20 @@ void qmp_block_job_dismiss(const char *id, Error **errp)
     job_dismiss_locked(&job, errp);
 }
 
+void qmp_block_job_change(BlockJobChangeOptions *opts, Error **errp)
+{
+    BlockJob *job;
+
+    JOB_LOCK_GUARD();
+    job = find_block_job_locked(opts->id, errp);
+
+    if (!job) {
+        return;
+    }
+
+    block_job_change_locked(job, opts, errp);
+}
+
 void qmp_change_backing_file(const char *device,
                              const char *image_node_name,
                              const char *backing_file,
@@ -3402,38 +3446,38 @@ void qmp_change_backing_file(const char *device,
     aio_context = bdrv_get_aio_context(bs);
     aio_context_acquire(aio_context);
 
+    bdrv_graph_rdlock_main_loop();
+
     image_bs = bdrv_lookup_bs(NULL, image_node_name, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
-        goto out;
+        goto out_rdlock;
     }
 
     if (!image_bs) {
         error_setg(errp, "image file not found");
-        goto out;
+        goto out_rdlock;
     }
 
     if (bdrv_find_base(image_bs) == image_bs) {
         error_setg(errp, "not allowing backing file change on an image "
                          "without a backing file");
-        goto out;
+        goto out_rdlock;
     }
 
     /* even though we are not necessarily operating on bs, we need it to
      * determine if block ops are currently prohibited on the chain */
-    bdrv_graph_rdlock_main_loop();
     if (bdrv_op_is_blocked(bs, BLOCK_OP_TYPE_CHANGE, errp)) {
-        bdrv_graph_rdunlock_main_loop();
-        goto out;
+        goto out_rdlock;
     }
-    bdrv_graph_rdunlock_main_loop();
 
     /* final sanity check */
     if (!bdrv_chain_contains(bs, image_bs)) {
         error_setg(errp, "'%s' and image file are not in the same chain",
                    device);
-        goto out;
+        goto out_rdlock;
     }
+    bdrv_graph_rdunlock_main_loop();
 
     /* if not r/w, reopen to make r/w */
     ro = bdrv_is_read_only(image_bs);
@@ -3461,6 +3505,11 @@ void qmp_change_backing_file(const char *device,
 
 out:
     aio_context_release(aio_context);
+    return;
+
+out_rdlock:
+    bdrv_graph_rdunlock_main_loop();
+    aio_context_release(aio_context);
 }
 
 void qmp_blockdev_add(BlockdevOptions *options, Error **errp)
blockjob.c | 36
@@ -198,7 +198,9 @@ void block_job_remove_all_bdrv(BlockJob *job)
      * one to make sure that such a concurrent access does not attempt
      * to process an already freed BdrvChild.
      */
+    aio_context_release(job->job.aio_context);
     bdrv_graph_wrlock(NULL);
+    aio_context_acquire(job->job.aio_context);
     while (job->nodes) {
         GSList *l = job->nodes;
         BdrvChild *c = l->data;
@@ -328,6 +330,26 @@ static bool block_job_set_speed(BlockJob *job, int64_t speed, Error **errp)
     return block_job_set_speed_locked(job, speed, errp);
 }
 
+void block_job_change_locked(BlockJob *job, BlockJobChangeOptions *opts,
+                             Error **errp)
+{
+    const BlockJobDriver *drv = block_job_driver(job);
+
+    GLOBAL_STATE_CODE();
+
+    if (job_apply_verb_locked(&job->job, JOB_VERB_CHANGE, errp)) {
+        return;
+    }
+
+    if (drv->change) {
+        job_unlock();
+        drv->change(job, opts, errp);
+        job_lock();
+    } else {
+        error_setg(errp, "Job type does not support change");
+    }
+}
+
 void block_job_ratelimit_processed_bytes(BlockJob *job, uint64_t n)
 {
     IO_CODE();
@@ -356,6 +378,7 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
 {
     BlockJobInfo *info;
     uint64_t progress_current, progress_total;
+    const BlockJobDriver *drv = block_job_driver(job);
 
     GLOBAL_STATE_CODE();
 
@@ -368,7 +391,7 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
                        &progress_total);
 
     info = g_new0(BlockJobInfo, 1);
-    info->type = g_strdup(job_type_str(&job->job));
+    info->type = job_type(&job->job);
     info->device = g_strdup(job->job.id);
     info->busy = job->job.busy;
     info->paused = job->job.pause_count > 0;
@@ -385,6 +408,11 @@ BlockJobInfo *block_job_query_locked(BlockJob *job, Error **errp)
                         g_strdup(error_get_pretty(job->job.err)) :
                         g_strdup(strerror(-job->job.ret));
     }
+    if (drv->query) {
+        job_unlock();
+        drv->query(job, info);
+        job_lock();
+    }
     return info;
 }
 
@@ -485,7 +513,8 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     BlockJob *job;
     int ret;
     GLOBAL_STATE_CODE();
-    GRAPH_RDLOCK_GUARD_MAINLOOP();
+
+    bdrv_graph_wrlock(bs);
 
     if (job_id == NULL && !(flags & JOB_INTERNAL)) {
         job_id = bdrv_get_device_name(bs);
@@ -494,6 +523,7 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     job = job_create(job_id, &driver->job_driver, txn, bdrv_get_aio_context(bs),
                      flags, cb, opaque, errp);
     if (job == NULL) {
+        bdrv_graph_wrunlock();
         return NULL;
     }
 
@@ -533,9 +563,11 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
         goto fail;
     }
 
+    bdrv_graph_wrunlock();
     return job;
 
 fail:
+    bdrv_graph_wrunlock();
     job_early_fail(&job->job);
     return NULL;
 }
@@ -21,6 +21,7 @@
 #define TARGET_ARCH_H
 
 #include "qemu.h"
+#include "target/arm/cpu-features.h"
 
 void target_cpu_set_tls(CPUARMState *env, target_ulong newtls);
 target_ulong target_cpu_get_tls(CPUARMState *env);
@@ -118,7 +118,7 @@ void fork_end(int child)
          */
         CPU_FOREACH_SAFE(cpu, next_cpu) {
             if (cpu != thread_cpu) {
-                QTAILQ_REMOVE_RCU(&cpus, cpu, node);
+                QTAILQ_REMOVE_RCU(&cpus_queue, cpu, node);
             }
         }
         mmap_fork_end(child);
@@ -14,6 +14,7 @@ CONFIG_SAM460EX=y
 CONFIG_MAC_OLDWORLD=y
 CONFIG_MAC_NEWWORLD=y
 
+CONFIG_AMIGAONE=y
 CONFIG_PEGASOS2=y
 
 # For PReP
@@ -1,4 +1,5 @@
 TARGET_ARCH=hppa
+TARGET_ABI32=y
 TARGET_SYSTBL_ABI=common,32
 TARGET_SYSTBL=syscall.tbl
 TARGET_BIG_ENDIAN=y
@@ -1,3 +1,4 @@
 # Default configuration for loongarch64-linux-user
 TARGET_ARCH=loongarch64
 TARGET_BASE_ARCH=loongarch
+TARGET_XML_FILES=gdb-xml/loongarch-base64.xml gdb-xml/loongarch-fpu.xml
@@ -1,2 +1,3 @@
 TARGET_ARCH=sparc
 TARGET_BIG_ENDIAN=y
+TARGET_SUPPORTS_MTTCG=y
@@ -1,3 +1,4 @@
 TARGET_ARCH=sparc64
 TARGET_BASE_ARCH=sparc
 TARGET_BIG_ENDIAN=y
+TARGET_SUPPORTS_MTTCG=y
configure | 51
@@ -309,6 +309,7 @@ fi
 ar="${AR-${cross_prefix}ar}"
 as="${AS-${cross_prefix}as}"
 ccas="${CCAS-$cc}"
+dlltool="${DLLTOOL-${cross_prefix}dlltool}"
 objcopy="${OBJCOPY-${cross_prefix}objcopy}"
 ld="${LD-${cross_prefix}ld}"
 ranlib="${RANLIB-${cross_prefix}ranlib}"
@@ -1023,9 +1024,9 @@ if test "$targetos" = "bogus"; then
 fi
 
 # test for any invalid configuration combinations
-if test "$targetos" = "windows"; then
+if test "$targetos" = "windows" && ! has "$dlltool"; then
   if test "$plugins" = "yes"; then
-    error_exit "TCG plugins not currently supported on Windows platforms"
+    error_exit "TCG plugins requires dlltool to build on Windows platforms"
   fi
   plugins="no"
 fi
@@ -1294,6 +1295,11 @@ probe_target_compiler() {
       container_cross_prefix=aarch64-linux-gnu-
       container_cross_cc=${container_cross_prefix}gcc
       ;;
+    alpha)
+      container_image=debian-legacy-test-cross
+      container_cross_prefix=alpha-linux-gnu-
+      container_cross_cc=${container_cross_prefix}gcc
+      ;;
     arm)
       # We don't have any bigendian build tools so we only use this for ARM
       container_image=debian-armhf-cross
@@ -1308,6 +1314,11 @@ probe_target_compiler() {
       container_cross_prefix=hexagon-unknown-linux-musl-
       container_cross_cc=${container_cross_prefix}clang
       ;;
+    hppa)
+      container_image=debian-all-test-cross
+      container_cross_prefix=hppa-linux-gnu-
+      container_cross_cc=${container_cross_prefix}gcc
+      ;;
     i386)
       container_image=fedora-i386-cross
       container_cross_prefix=
@@ -1316,6 +1327,11 @@ probe_target_compiler() {
       container_image=debian-loongarch-cross
      container_cross_prefix=loongarch64-unknown-linux-gnu-
       ;;
+    m68k)
+      container_image=debian-all-test-cross
+      container_cross_prefix=m68k-linux-gnu-
+      container_cross_cc=${container_cross_prefix}gcc
+      ;;
     microblaze)
       container_image=debian-microblaze-cross
       container_cross_prefix=microblaze-linux-musl-
@@ -1325,22 +1341,37 @@ probe_target_compiler() {
       container_cross_prefix=mips64el-linux-gnuabi64-
       ;;
     mips64)
-      container_image=debian-mips64-cross
+      container_image=debian-all-test-cross
       container_cross_prefix=mips64-linux-gnuabi64-
       ;;
+    mips)
+      container_image=debian-all-test-cross
+      container_cross_prefix=mips-linux-gnu-
+      ;;
     nios2)
       container_image=debian-nios2-cross
       container_cross_prefix=nios2-linux-gnu-
       ;;
     ppc)
-      container_image=debian-powerpc-test-cross
+      container_image=debian-all-test-cross
       container_cross_prefix=powerpc-linux-gnu-
       container_cross_cc=${container_cross_prefix}gcc
       ;;
     ppc64|ppc64le)
-      container_image=debian-powerpc-test-cross
+      container_image=debian-all-test-cross
       container_cross_prefix=powerpc${target_arch#ppc}-linux-gnu-
-      container_cross_cc=${container_cross_prefix}gcc-10
+      ;;
+    riscv64)
+      container_image=debian-all-test-cross
+      container_cross_prefix=riscv64-linux-gnu-
+      ;;
+    sh4)
+      container_image=debian-legacy-test-cross
+      container_cross_prefix=sh4-linux-gnu-
+      ;;
+    sparc64)
+      container_image=debian-all-test-cross
+      container_cross_prefix=sparc64-linux-gnu-
       ;;
     tricore)
       container_image=debian-tricore-cross
@@ -1650,9 +1681,15 @@ echo "SRC_PATH=$source_path/contrib/plugins" >> contrib/plugins/$config_host_mak
 echo "PKG_CONFIG=${pkg_config}" >> contrib/plugins/$config_host_mak
 echo "CC=$cc $CPU_CFLAGS" >> contrib/plugins/$config_host_mak
 echo "CFLAGS=${CFLAGS-$default_cflags} $EXTRA_CFLAGS" >> contrib/plugins/$config_host_mak
+if test "$targetos" = windows; then
+  echo "DLLTOOL=$dlltool" >> contrib/plugins/$config_host_mak
+fi
 if test "$targetos" = darwin; then
   echo "CONFIG_DARWIN=y" >> contrib/plugins/$config_host_mak
 fi
+if test "$targetos" = windows; then
+  echo "CONFIG_WIN32=y" >> contrib/plugins/$config_host_mak
+fi
 
 # tests/tcg configuration
 (config_host_mak=tests/tcg/config-host.mak
@@ -1755,6 +1792,7 @@ if test "$skip_meson" = no; then
   test -n "$cxx" && echo "cpp = [$(meson_quote $cxx $CPU_CFLAGS)]" >> $cross
   test -n "$objcc" && echo "objc = [$(meson_quote $objcc $CPU_CFLAGS)]" >> $cross
   echo "ar = [$(meson_quote $ar)]" >> $cross
+  echo "dlltool = [$(meson_quote $dlltool)]" >> $cross
   echo "nm = [$(meson_quote $nm)]" >> $cross
   echo "pkgconfig = [$(meson_quote $pkg_config)]" >> $cross
   echo "pkg-config = [$(meson_quote $pkg_config)]" >> $cross
@@ -1860,6 +1898,7 @@ preserve_env CC
 preserve_env CFLAGS
 preserve_env CXX
 preserve_env CXXFLAGS
+preserve_env DLLTOOL
 preserve_env LD
 preserve_env LDFLAGS
 preserve_env LD_LIBRARY_PATH
|
@ -12,15 +12,18 @@ amd.com AMD
|
|||||||
aspeedtech.com ASPEED Technology Inc.
|
aspeedtech.com ASPEED Technology Inc.
|
||||||
baidu.com Baidu
|
baidu.com Baidu
|
||||||
bytedance.com ByteDance
|
bytedance.com ByteDance
|
||||||
|
cestc.cn Cestc
|
||||||
cmss.chinamobile.com China Mobile
|
cmss.chinamobile.com China Mobile
|
||||||
citrix.com Citrix
|
citrix.com Citrix
|
||||||
crudebyte.com Crudebyte
|
crudebyte.com Crudebyte
|
||||||
chinatelecom.cn China Telecom
|
chinatelecom.cn China Telecom
|
||||||
|
daynix.com Daynix
|
||||||
eldorado.org.br Instituto de Pesquisas Eldorado
|
eldorado.org.br Instituto de Pesquisas Eldorado
|
||||||
fb.com Facebook
|
fb.com Facebook
|
||||||
fujitsu.com Fujitsu
|
fujitsu.com Fujitsu
|
||||||
google.com Google
|
google.com Google
|
||||||
greensocs.com GreenSocs
|
greensocs.com GreenSocs
|
||||||
|
hisilicon.com Huawei
|
||||||
huawei.com Huawei
|
huawei.com Huawei
|
||||||
ibm.com IBM
|
ibm.com IBM
|
||||||
igalia.com Igalia
|
igalia.com Igalia
|
||||||
@ -38,6 +41,7 @@ proxmox.com Proxmox
|
|||||||
quicinc.com Qualcomm Innovation Center
|
quicinc.com Qualcomm Innovation Center
|
||||||
redhat.com Red Hat
|
redhat.com Red Hat
|
||||||
rev.ng rev.ng Labs
|
rev.ng rev.ng Labs
|
||||||
|
rivosinc.com Rivos Inc
|
||||||
rt-rk.com RT-RK
|
rt-rk.com RT-RK
|
||||||
samsung.com Samsung
|
samsung.com Samsung
|
||||||
siemens.com Siemens
|
siemens.com Siemens
|
||||||
|
@@ -17,12 +17,25 @@ NAMES += execlog
 NAMES += hotblocks
 NAMES += hotpages
 NAMES += howvec
+
+# The lockstep example communicates using unix sockets,
+# and can't be easily made to work on windows.
+ifneq ($(CONFIG_WIN32),y)
 NAMES += lockstep
+endif
+
 NAMES += hwprofile
 NAMES += cache
 NAMES += drcov
 
-SONAMES := $(addsuffix .so,$(addprefix lib,$(NAMES)))
+ifeq ($(CONFIG_WIN32),y)
+SO_SUFFIX := .dll
+LDLIBS += $(shell $(PKG_CONFIG) --libs glib-2.0)
+else
+SO_SUFFIX := .so
+endif
+
+SONAMES := $(addsuffix $(SO_SUFFIX),$(addprefix lib,$(NAMES)))
 
 # The main QEMU uses Glib extensively so it's perfectly fine to use it
 # in plugins (which many example do).
@@ -35,15 +48,20 @@ all: $(SONAMES)
 %.o: %.c
 	$(CC) $(CFLAGS) $(PLUGIN_CFLAGS) -c -o $@ $<
 
-lib%.so: %.o
-ifeq ($(CONFIG_DARWIN),y)
+ifeq ($(CONFIG_WIN32),y)
+lib%$(SO_SUFFIX): %.o win32_linker.o ../../plugins/qemu_plugin_api.lib
+	$(CC) -shared -o $@ $^ $(LDLIBS)
+else ifeq ($(CONFIG_DARWIN),y)
+lib%$(SO_SUFFIX): %.o
 	$(CC) -bundle -Wl,-undefined,dynamic_lookup -o $@ $^ $(LDLIBS)
 else
+lib%$(SO_SUFFIX): %.o
 	$(CC) -shared -o $@ $^ $(LDLIBS)
 endif
 
+
 clean:
-	rm -f *.o *.so *.d
+	rm -f *.o *$(SO_SUFFIX) *.d
 	rm -Rf .libs
 
 .PHONY: all clean
@@ -276,6 +276,7 @@ static bool setup_socket(const char *path)
     sockaddr.sun_family = AF_UNIX;
     if (g_strlcpy(sockaddr.sun_path, path, pathlen) >= pathlen) {
         perror("bad path");
+        close(fd);
         return false;
     }
 
@@ -322,6 +323,7 @@ static bool connect_socket(const char *path)
     sockaddr.sun_family = AF_UNIX;
     if (g_strlcpy(sockaddr.sun_path, path, pathlen) >= pathlen) {
         perror("bad path");
+        close(fd);
         return false;
     }
 
contrib/plugins/win32_linker.c | 34 (new file)
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2023, Greg Manning <gmanning@rapitasystems.com>
+ *
+ * This hook, __pfnDliFailureHook2, is documented in the microsoft documentation here:
+ * https://learn.microsoft.com/en-us/cpp/build/reference/error-handling-and-notification
+ * It gets called when a delay-loaded DLL encounters various errors.
+ * We handle the specific case of a DLL looking for a "qemu.exe",
+ * and give it the running executable (regardless of what it is named).
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ */
+
+#include <windows.h>
+#include <delayimp.h>
+
+FARPROC WINAPI dll_failure_hook(unsigned dliNotify, PDelayLoadInfo pdli);
+
+
+PfnDliHook __pfnDliFailureHook2 = dll_failure_hook;
+
+FARPROC WINAPI dll_failure_hook(unsigned dliNotify, PDelayLoadInfo pdli) {
+    if (dliNotify == dliFailLoadLib) {
+        /* If the failing request was for qemu.exe, ... */
+        if (strcmp(pdli->szDll, "qemu.exe") == 0) {
+            /* Then pass back a pointer to the top level module. */
+            HMODULE top = GetModuleHandle(NULL);
+            return (FARPROC) top;
+        }
+    }
+    /* Otherwise we can't do anything special. */
+    return 0;
+}
@@ -73,7 +73,7 @@ static int cpu_get_free_index(void)
     return max_cpu_index;
 }
 
-CPUTailQ cpus = QTAILQ_HEAD_INITIALIZER(cpus);
+CPUTailQ cpus_queue = QTAILQ_HEAD_INITIALIZER(cpus_queue);
 static unsigned int cpu_list_generation_id;
 
 unsigned int cpu_list_generation_id_get(void)
@@ -90,7 +90,7 @@ void cpu_list_add(CPUState *cpu)
     } else {
         assert(!cpu_index_auto_assigned);
     }
-    QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
+    QTAILQ_INSERT_TAIL_RCU(&cpus_queue, cpu, node);
     cpu_list_generation_id++;
 }
 
@@ -102,7 +102,7 @@ void cpu_list_remove(CPUState *cpu)
         return;
     }
 
-    QTAILQ_REMOVE_RCU(&cpus, cpu, node);
+    QTAILQ_REMOVE_RCU(&cpus_queue, cpu, node);
     cpu->cpu_index = UNASSIGNED_CPU_INDEX;
     cpu_list_generation_id++;
 }
cpu-target.c | 17
@@ -42,7 +42,6 @@
 #include "hw/core/accel-cpu.h"
 #include "trace/trace-root.h"
 #include "qemu/accel.h"
-#include "qemu/plugin.h"
 
 //// --- Begin LibAFL code ---
 
@@ -430,23 +429,18 @@ const VMStateDescription vmstate_cpu_common = {
 };
 #endif
 
-void cpu_exec_realizefn(CPUState *cpu, Error **errp)
+bool cpu_exec_realizefn(CPUState *cpu, Error **errp)
 {
     /* cache the cpu class for the hotpath */
     cpu->cc = CPU_GET_CLASS(cpu);
 
     if (!accel_cpu_common_realize(cpu, errp)) {
-        return;
+        return false;
     }
 
     /* Wait until cpu initialization complete before exposing cpu. */
     cpu_list_add(cpu);
 
-    /* Plugin initialization must wait until cpu_index assigned. */
-    if (tcg_enabled()) {
-        qemu_plugin_vcpu_init_hook(cpu);
-    }
-
 #ifdef CONFIG_USER_ONLY
     assert(qdev_get_vmsd(DEVICE(cpu)) == NULL ||
            qdev_get_vmsd(DEVICE(cpu))->unmigratable);
@@ -458,6 +452,8 @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
         vmstate_register(NULL, cpu->cpu_index, cpu->cc->sysemu_ops->legacy_vmsd, cpu);
     }
 #endif /* CONFIG_USER_ONLY */
+
+    return true;
 }
 
 void cpu_exec_unrealizefn(CPUState *cpu)
@@ -473,11 +469,6 @@ void cpu_exec_unrealizefn(CPUState *cpu)
     }
 #endif
 
-    /* Call the plugin hook before clearing cpu->cpu_index in cpu_list_remove */
-    if (tcg_enabled()) {
-        qemu_plugin_vcpu_exit_hook(cpu);
-    }
-
     cpu_list_remove(cpu);
     /*
      * Now that the vCPU has been removed from the RCU list, we can call
@@ -88,15 +88,13 @@ static QCryptoAkCipherRSAKey *qcrypto_builtin_rsa_public_key_parse(
         goto error;
     }
     if (seq_length != 0) {
+        error_setg(errp, "Invalid RSA public key");
         goto error;
     }
 
     return rsa;
 
 error:
-    if (errp && !*errp) {
-        error_setg(errp, "Invalid RSA public key");
-    }
     qcrypto_akcipher_rsakey_free(rsa);
     return NULL;
 }
@@ -169,15 +167,13 @@ static QCryptoAkCipherRSAKey *qcrypto_builtin_rsa_private_key_parse(
         return rsa;
     }
     if (seq_length != 0) {
+        error_setg(errp, "Invalid RSA private key");
         goto error;
     }
 
     return rsa;
 
 error:
-    if (errp && !*errp) {
-        error_setg(errp, "Invalid RSA private key");
-    }
     qcrypto_akcipher_rsakey_free(rsa);
     return NULL;
 }
157
disas/riscv.c
157
disas/riscv.c
@ -862,6 +862,47 @@ typedef enum {
|
|||||||
rv_op_fltq_q = 831,
|
rv_op_fltq_q = 831,
|
||||||
rv_op_fleq_h = 832,
|
rv_op_fleq_h = 832,
|
||||||
rv_op_fltq_h = 833,
|
rv_op_fltq_h = 833,
|
||||||
|
rv_op_vaesdf_vv = 834,
|
||||||
|
rv_op_vaesdf_vs = 835,
|
||||||
|
rv_op_vaesdm_vv = 836,
|
||||||
|
rv_op_vaesdm_vs = 837,
|
||||||
|
rv_op_vaesef_vv = 838,
|
||||||
|
rv_op_vaesef_vs = 839,
|
||||||
|
rv_op_vaesem_vv = 840,
|
||||||
|
rv_op_vaesem_vs = 841,
|
||||||
|
rv_op_vaeskf1_vi = 842,
|
||||||
|
rv_op_vaeskf2_vi = 843,
|
||||||
|
rv_op_vaesz_vs = 844,
|
||||||
|
rv_op_vandn_vv = 845,
|
||||||
|
rv_op_vandn_vx = 846,
|
||||||
|
rv_op_vbrev_v = 847,
|
||||||
|
rv_op_vbrev8_v = 848,
|
||||||
|
rv_op_vclmul_vv = 849,
|
||||||
|
rv_op_vclmul_vx = 850,
|
||||||
|
rv_op_vclmulh_vv = 851,
|
||||||
|
rv_op_vclmulh_vx = 852,
|
||||||
|
rv_op_vclz_v = 853,
|
||||||
|
rv_op_vcpop_v = 854,
|
||||||
|
rv_op_vctz_v = 855,
|
||||||
|
rv_op_vghsh_vv = 856,
|
||||||
|
rv_op_vgmul_vv = 857,
|
||||||
|
rv_op_vrev8_v = 858,
|
||||||
|
rv_op_vrol_vv = 859,
|
||||||
|
rv_op_vrol_vx = 860,
|
||||||
|
rv_op_vror_vv = 861,
|
||||||
|
rv_op_vror_vx = 862,
|
||||||
|
rv_op_vror_vi = 863,
|
||||||
|
rv_op_vsha2ch_vv = 864,
|
||||||
|
rv_op_vsha2cl_vv = 865,
|
||||||
|
rv_op_vsha2ms_vv = 866,
|
||||||
|
rv_op_vsm3c_vi = 867,
|
||||||
|
rv_op_vsm3me_vv = 868,
|
||||||
|
rv_op_vsm4k_vi = 869,
|
||||||
|
rv_op_vsm4r_vv = 870,
|
||||||
|
rv_op_vsm4r_vs = 871,
|
||||||
|
rv_op_vwsll_vv = 872,
|
||||||
|
rv_op_vwsll_vx = 873,
|
||||||
|
rv_op_vwsll_vi = 874,
|
||||||
} rv_op;
|
} rv_op;
|
||||||
|
|
||||||
/* register names */
|
/* register names */
|
||||||
@ -2008,6 +2049,47 @@ const rv_opcode_data rvi_opcode_data[] = {
|
|||||||
{ "fltq.q", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
{ "fltq.q", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
||||||
{ "fleq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
{ "fleq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
||||||
{ "fltq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
{ "fltq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesdf.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesdf.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesdm.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesdm.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesef.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesef.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesem.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesem.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vaeskf1.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
|
||||||
|
{ "vaeskf2.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
|
||||||
|
{ "vaesz.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vandn.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vandn.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vbrev.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vbrev8.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vclmul.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vclmul.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vclmulh.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vclmulh.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vclz.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vcpop.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vctz.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vghsh.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
|
||||||
|
{ "vgmul.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vrev8.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vrol.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vrol.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vror.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vror.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vror.vi", rv_codec_vror_vi, rv_fmt_vd_vs2_uimm_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vsha2ch.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
|
||||||
|
{ "vsha2cl.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
|
||||||
|
{ "vsha2ms.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
|
||||||
|
{ "vsm3c.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
|
||||||
|
{ "vsm3me.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1, NULL, 0, 0, 0 },
|
||||||
|
{ "vsm4k.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm, NULL, 0, 0, 0 },
|
||||||
|
{ "vsm4r.vv", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vsm4r.vs", rv_codec_v_r, rv_fmt_vd_vs2, NULL, 0, 0, 0 },
|
||||||
|
{ "vwsll.vv", rv_codec_v_r, rv_fmt_vd_vs2_vs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vwsll.vx", rv_codec_v_r, rv_fmt_vd_vs2_rs1_vm, NULL, 0, 0, 0 },
|
||||||
|
{ "vwsll.vi", rv_codec_v_i, rv_fmt_vd_vs2_uimm_vm, NULL, 0, 0, 0 },
|
||||||
};
|
};
|
||||||
|
|
||||||
/* CSR names */
|
/* CSR names */
|
||||||
@ -3054,12 +3136,12 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case 89:
|
case 89:
|
||||||
switch (((inst >> 12) & 0b111)) {
|
switch (((inst >> 12) & 0b111)) {
|
||||||
case 0: op = rv_op_fmvp_d_x; break;
|
case 0: op = rv_op_fmvp_d_x; break;
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case 91:
|
case 91:
|
||||||
switch (((inst >> 12) & 0b111)) {
|
switch (((inst >> 12) & 0b111)) {
|
||||||
case 0: op = rv_op_fmvp_q_x; break;
|
case 0: op = rv_op_fmvp_q_x; break;
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
@ -3176,6 +3258,7 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
|
|||||||
case 0:
|
case 0:
|
||||||
switch ((inst >> 26) & 0b111111) {
|
switch ((inst >> 26) & 0b111111) {
|
||||||
case 0: op = rv_op_vadd_vv; break;
|
case 0: op = rv_op_vadd_vv; break;
|
||||||
|
case 1: op = rv_op_vandn_vv; break;
|
||||||
case 2: op = rv_op_vsub_vv; break;
|
case 2: op = rv_op_vsub_vv; break;
|
||||||
case 4: op = rv_op_vminu_vv; break;
|
case 4: op = rv_op_vminu_vv; break;
|
||||||
case 5: op = rv_op_vmin_vv; break;
|
case 5: op = rv_op_vmin_vv; break;
|
||||||
@ -3198,6 +3281,8 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
|
|||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case 19: op = rv_op_vmsbc_vvm; break;
|
case 19: op = rv_op_vmsbc_vvm; break;
|
||||||
|
case 20: op = rv_op_vror_vv; break;
|
||||||
|
case 21: op = rv_op_vrol_vv; break;
|
||||||
case 23:
|
case 23:
|
||||||
if (((inst >> 20) & 0b111111) == 32)
|
if (((inst >> 20) & 0b111111) == 32)
|
||||||
op = rv_op_vmv_v_v;
|
op = rv_op_vmv_v_v;
|
||||||
@ -3226,6 +3311,7 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
|
|||||||
case 47: op = rv_op_vnclip_wv; break;
|
case 47: op = rv_op_vnclip_wv; break;
|
||||||
case 48: op = rv_op_vwredsumu_vs; break;
|
case 48: op = rv_op_vwredsumu_vs; break;
|
||||||
case 49: op = rv_op_vwredsum_vs; break;
|
case 49: op = rv_op_vwredsum_vs; break;
|
||||||
|
case 53: op = rv_op_vwsll_vv; break;
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
case 1:
|
case 1:
|
||||||
@ -3323,6 +3409,8 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 9: op = rv_op_vaadd_vv; break;
case 10: op = rv_op_vasubu_vv; break;
case 11: op = rv_op_vasub_vv; break;
case 12: op = rv_op_vclmul_vv; break;
case 13: op = rv_op_vclmulh_vv; break;
case 16:
switch ((inst >> 15) & 0b11111) {
case 0: if ((inst >> 25) & 1) op = rv_op_vmv_x_s; break;

@ -3338,6 +3426,12 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 5: op = rv_op_vsext_vf4; break;
case 6: op = rv_op_vzext_vf2; break;
case 7: op = rv_op_vsext_vf2; break;
case 8: op = rv_op_vbrev8_v; break;
case 9: op = rv_op_vrev8_v; break;
case 10: op = rv_op_vbrev_v; break;
case 12: op = rv_op_vclz_v; break;
case 13: op = rv_op_vctz_v; break;
case 14: op = rv_op_vcpop_v; break;
}
break;
case 20:

@ -3406,6 +3500,7 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
}
break;
case 17: op = rv_op_vmadc_vim; break;
case 20: case 21: op = rv_op_vror_vi; break;
case 23:
if (((inst >> 20) & 0b111111) == 32)
op = rv_op_vmv_v_i;

@ -3437,11 +3532,13 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 45: op = rv_op_vnsra_wi; break;
case 46: op = rv_op_vnclipu_wi; break;
case 47: op = rv_op_vnclip_wi; break;
case 53: op = rv_op_vwsll_vi; break;
}
break;
case 4:
switch ((inst >> 26) & 0b111111) {
case 0: op = rv_op_vadd_vx; break;
case 1: op = rv_op_vandn_vx; break;
case 2: op = rv_op_vsub_vx; break;
case 3: op = rv_op_vrsub_vx; break;
case 4: op = rv_op_vminu_vx; break;

@ -3466,6 +3563,8 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
}
break;
case 19: op = rv_op_vmsbc_vxm; break;
case 20: op = rv_op_vror_vx; break;
case 21: op = rv_op_vrol_vx; break;
case 23:
if (((inst >> 20) & 0b111111) == 32)
op = rv_op_vmv_v_x;

@ -3494,6 +3593,7 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 45: op = rv_op_vnsra_wx; break;
case 46: op = rv_op_vnclipu_wx; break;
case 47: op = rv_op_vnclip_wx; break;
case 53: op = rv_op_vwsll_vx; break;
}
break;
case 5:

@ -3554,6 +3654,8 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 9: op = rv_op_vaadd_vx; break;
case 10: op = rv_op_vasubu_vx; break;
case 11: op = rv_op_vasub_vx; break;
case 12: op = rv_op_vclmul_vx; break;
case 13: op = rv_op_vclmulh_vx; break;
case 14: op = rv_op_vslide1up_vx; break;
case 15: op = rv_op_vslide1down_vx; break;
case 16:
@ -3686,6 +3788,41 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 7: op = rv_op_csrrci; break;
}
break;
case 29:
if (((inst >> 25) & 1) == 1 && ((inst >> 12) & 0b111) == 2) {
switch ((inst >> 26) & 0b111111) {
case 32: op = rv_op_vsm3me_vv; break;
case 33: op = rv_op_vsm4k_vi; break;
case 34: op = rv_op_vaeskf1_vi; break;
case 40:
switch ((inst >> 15) & 0b11111) {
case 0: op = rv_op_vaesdm_vv; break;
case 1: op = rv_op_vaesdf_vv; break;
case 2: op = rv_op_vaesem_vv; break;
case 3: op = rv_op_vaesef_vv; break;
case 16: op = rv_op_vsm4r_vv; break;
case 17: op = rv_op_vgmul_vv; break;
}
break;
case 41:
switch ((inst >> 15) & 0b11111) {
case 0: op = rv_op_vaesdm_vs; break;
case 1: op = rv_op_vaesdf_vs; break;
case 2: op = rv_op_vaesem_vs; break;
case 3: op = rv_op_vaesef_vs; break;
case 7: op = rv_op_vaesz_vs; break;
case 16: op = rv_op_vsm4r_vs; break;
}
break;
case 42: op = rv_op_vaeskf2_vi; break;
case 43: op = rv_op_vsm3c_vi; break;
case 44: op = rv_op_vghsh_vv; break;
case 45: op = rv_op_vsha2ms_vv; break;
case 46: op = rv_op_vsha2ch_vv; break;
case 47: op = rv_op_vsha2cl_vv; break;
}
}
break;
case 30:
switch (((inst >> 22) & 0b1111111000) |
((inst >> 12) & 0b0000000111)) {
@ -4011,6 +4148,12 @@ static uint32_t operand_vzimm10(rv_inst inst)
return (inst << 34) >> 54;
}

static uint32_t operand_vzimm6(rv_inst inst)
{
return ((inst << 37) >> 63) << 5 |
((inst << 44) >> 59);
}

static uint32_t operand_bs(rv_inst inst)
{
return (inst << 32) >> 62;

@ -4393,6 +4536,12 @@ static void decode_inst_operands(rv_decode *dec, rv_isa isa)
dec->imm = operand_vimm(inst);
dec->vm = operand_vm(inst);
break;
case rv_codec_vror_vi:
dec->rd = operand_rd(inst);
dec->rs2 = operand_rs2(inst);
dec->imm = operand_vzimm6(inst);
dec->vm = operand_vm(inst);
break;
case rv_codec_vsetvli:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);

@ -4430,7 +4579,7 @@ static void decode_inst_operands(rv_decode *dec, rv_isa isa)
break;
case rv_codec_zcmt_jt:
dec->imm = operand_tbl_index(inst);
break;
case rv_codec_fli:
dec->rd = operand_rd(inst);
dec->imm = operand_rs1(inst);

@ -4677,7 +4826,7 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
append(buf, tmp, buflen);
break;
case 'u':
- snprintf(tmp, sizeof(tmp), "%u", ((uint32_t)dec->imm & 0b11111));
+ snprintf(tmp, sizeof(tmp), "%u", ((uint32_t)dec->imm & 0b111111));
append(buf, tmp, buflen);
break;
case 'j':
@ -152,6 +152,7 @@ typedef enum {
rv_codec_v_i,
rv_codec_vsetvli,
rv_codec_vsetivli,
rv_codec_vror_vi,
rv_codec_zcb_ext,
rv_codec_zcb_mul,
rv_codec_zcb_lb,

@ -274,6 +275,7 @@ enum {
#define rv_fmt_vd_vs2_fs1_vm "O\tD,F,4m"
#define rv_fmt_vd_vs2_imm_vl "O\tD,F,il"
#define rv_fmt_vd_vs2_imm_vm "O\tD,F,im"
#define rv_fmt_vd_vs2_uimm "O\tD,F,u"
#define rv_fmt_vd_vs2_uimm_vm "O\tD,F,um"
#define rv_fmt_vd_vs1_vs2_vm "O\tD,E,Fm"
#define rv_fmt_vd_rs1_vs2_vm "O\tD,1,Fm"
@ -247,6 +247,14 @@ deprecated; use the new name ``dtb-randomness`` instead. The new name
better reflects the way this property affects all random data within
the device tree blob, not just the ``kaslr-seed`` node.

``pc-i440fx-2.0`` up to ``pc-i440fx-2.3`` (since 8.2)
'''''''''''''''''''''''''''''''''''''''''''''''''''''

These old machine types are quite neglected nowadays and thus might have
various pitfalls with regard to live migration. Use a newer machine type
instead.

Backend options
---------------

@ -405,6 +413,18 @@ Specifying the iSCSI password in plain text on the command line using the
used instead, to refer to a ``--object secret...`` instance that provides
a password via a file, or encrypted.

CPU device properties
'''''''''''''''''''''

``pmu-num=n`` on RISC-V CPUs (since 8.2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to support more flexible counter configurations this has been replaced
by a ``pmu-mask`` property. If the set of counters is contiguous then the mask
can be calculated with ``((2 ^ n) - 1) << 3``. The least significant three bits
must be left clear.
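
As a rough worked example (the value of ``n`` is chosen here purely for
illustration, it is not from the original text): a CPU that used to be
configured with ``pmu-num=6`` would translate to::

    pmu-mask = ((2 ^ 6) - 1) << 3 = 0x3f << 3 = 0x1f8

i.e. bits 3 to 8 set, with the least significant three bits left clear.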

Backwards compatibility
-----------------------

@ -461,3 +481,38 @@ Migration
``skipped`` field in Migration stats has been deprecated. It hasn't
been used for more than 10 years.

``inc`` migrate command option (since 8.2)
''''''''''''''''''''''''''''''''''''''''''

Use blockdev-mirror with NBD instead.

As an intermediate step the ``inc`` functionality can be achieved by
setting the ``block-incremental`` migration parameter to ``true``.
But this parameter is also deprecated.
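
A minimal QMP sketch of that intermediate step (shown only for
illustration; as noted above, the parameter is itself deprecated)::

    -> { "execute": "migrate-set-parameters",
         "arguments": { "block-incremental": true } }
    <- { "return": {} }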

``blk`` migrate command option (since 8.2)
''''''''''''''''''''''''''''''''''''''''''

Use blockdev-mirror with NBD instead.

As an intermediate step the ``blk`` functionality can be achieved by
setting the ``block`` migration capability to ``true``. But this
capability is also deprecated.

block migration (since 8.2)
'''''''''''''''''''''''''''

Block migration is too inflexible. It needs to migrate all block
devices or none.

Please see "QMP invocation for live storage migration with
``blockdev-mirror`` + NBD" in docs/interop/live-block-operations.rst
for a detailed explanation.

old compression method (since 8.2)
''''''''''''''''''''''''''''''''''

The compression method fails too often and has too many races. We are
going to remove it if nobody fixes it. For starters, the migration-test
compression tests are disabled because they fail randomly. If you need
compression, use the multifd compression methods.
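
As an illustration of the suggested replacement (a sketch; the available
methods depend on how QEMU was built, and the ``multifd`` capability must
be enabled on both sides)::

    -> { "execute": "migrate-set-parameters",
         "arguments": { "multifd-compression": "zstd" } }
    <- { "return": {} }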

@ -11,6 +11,7 @@ generated from in-code annotations to function prototypes.
loads-stores
memory
modules
pci
qom-api
qdev-api
ui
@ -28,6 +28,8 @@ the guest to be stopped. Typically the time that the guest is
unresponsive during live migration is the low hundred of milliseconds
(notice that this depends on a lot of things).

.. contents::

Transports
==========

@ -165,13 +167,17 @@ An example (from hw/input/pckbd.c)
    }
};

-We are declaring the state with name "pckbd".
-The ``version_id`` is 3, and the fields are 4 uint8_t in a KBDState structure.
-We registered this with:
+We are declaring the state with name "pckbd". The ``version_id`` is
+3, and there are 4 uint8_t fields in the KBDState structure. We
+registered this ``VMSTATEDescription`` with one of the following
+functions. The first one will generate a device ``instance_id``
+different for each registration. Use the second one if you already
+have an id that is different for each instance of the device:

.. code:: c

-    vmstate_register(NULL, 0, &vmstate_kbd, s);
+    vmstate_register_any(NULL, &vmstate_kbd, s);
+    vmstate_register(NULL, instance_id, &vmstate_kbd, s);

For devices that are ``qdev`` based, we can register the device in the class
init function:
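
A minimal sketch of what that registration usually looks like (illustrative
only; it assumes a qdev class init function for the device above and uses the
standard ``DeviceClass`` field):

.. code:: c

    static void kbd_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        /* attach the VMStateDescription declared above */
        dc->vmsd = &vmstate_kbd;
    }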

@ -588,6 +594,77 @@ path.
Return path - opened by main thread, written by main thread AND postcopy
thread (protected by rp_mutex)

Dirty limit
=====================

The dirty limit, short for dirty page rate upper limit, is a new capability
introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
dirty ring to throttle down the guest during live migration.

The algorithm framework is as follows:

::

  ------------------------------------------------------------------------------
  main   --------------> throttle thread ------------> PREPARE(1) <--------
  thread  \                                                |              |
            \                                              |              |
             \                                             V              |
              -\                                        CALCULATE(2)      |
                \                                          |              |
                 \                                         |              |
                  \                                        V              |
                   \                                    SET PENALTY(3) -----
                    -\                                     |
                      \                                    |
                       \                                   V
                        -> virtual CPU thread -------> ACCEPT PENALTY(4)
  ------------------------------------------------------------------------------

When the qmp command qmp_set_vcpu_dirty_limit is called for the first time,
the QEMU main thread starts the throttle thread. The throttle thread, once
launched, executes the loop, which consists of three steps:

  - PREPARE (1)

     The entire work of PREPARE (1) is preparation for the second stage,
     CALCULATE(2), as the name implies.  It involves preparing the dirty
     page rate value and the corresponding upper limit of the VM:
     the dirty page rate is calculated via the KVM dirty ring mechanism,
     which tells QEMU how many dirty pages a virtual CPU has had since the
     last KVM_EXIT_DIRTY_RING_FULL exception; the dirty page rate upper
     limit is specified by the caller, so it is simply fetched.

  - CALCULATE (2)

     Calculate a suitable sleep period for each virtual CPU, which will be
     used to determine the penalty for the target virtual CPU.  The
     computation must be done carefully in order to reduce the dirty page
     rate progressively down to the upper limit without oscillation.  To
     achieve this, two strategies are provided: the first is to add or
     subtract sleep time based on the ratio of the current dirty page rate
     to the limit, which is used when the current dirty page rate is far
     from the limit; the second is to add or subtract a fixed time when
     the current dirty page rate is close to the limit.

  - SET PENALTY (3)

     Set the sleep time for each virtual CPU that should be penalized based
     on the results of the calculation supplied by step CALCULATE (2).

After completing the three above stages, the throttle thread loops back
to step PREPARE (1) until the dirty limit is reached.

On the other hand, each virtual CPU thread reads the sleep duration and
sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler, that
is ACCEPT PENALTY (4). Virtual CPUs tied to write-heavy processes will
obviously exit to that path and get penalized, whereas virtual CPUs involved
with read processes will not.

In summary, thanks to the KVM dirty ring technology, the dirty limit
algorithm will restrict virtual CPUs as needed to keep their dirty page
rate inside the limit. This leads to more steady reading performance during
live migration and can aid in improving large guest responsiveness.
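
For completeness, a usage sketch (the QMP command names below are the ones
shipped in recent QEMU releases; ``dirty-rate`` is in MB/s, and without a
``cpu-index`` argument the limit applies to all vCPUs)::

    -> { "execute": "set-vcpu-dirty-limit",
         "arguments": { "dirty-rate": 200 } }
    <- { "return": {} }

    -> { "execute": "cancel-vcpu-dirty-limit" }
    <- { "return": {} }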

Postcopy
========
@ -917,3 +994,521 @@ versioned machine types to cut down on the combinations that will need
support. This is also useful when newer versions of firmware outgrow
the padding.

Backwards compatibility
=======================

How backwards compatibility works
---------------------------------

When we do migration, we have two QEMU processes: the source and the
target.  There are two cases: they are the same version, or they are
different versions.  The easy case is when they are the same version.
The difficult one is when they are different versions.

There are two things that are different, but they have very similar
names and sometimes get confused:

- QEMU version
- machine type version

Let's start with a practical example; we have:

- qemu-system-x86_64 (v5.2), from now on qemu-5.2.
- qemu-system-x86_64 (v5.1), from now on qemu-5.1.

Related to this are the "latest" machine types defined on each of
them:

- pc-q35-5.2 (the newest one in qemu-5.2), from now on pc-5.2.
- pc-q35-5.1 (the newest one in qemu-5.1), from now on pc-5.1.

First of all, migration is only supposed to work if you use the same
machine type on both source and destination.  The QEMU hardware
configuration also needs to be the same on source and destination.
Most aspects of the backend configuration can be changed at will,
except for a few cases where the backend features influence frontend
device feature exposure.  But that is not relevant for this section.

I am going to list the combinations that we can have.  Let's start
with the trivial ones, where QEMU is the same on source and
destination:

1 - qemu-5.2 -M pc-5.2  -> migrates to -> qemu-5.2 -M pc-5.2

    This is the latest QEMU with the latest machine type.
    This has to work, and if it doesn't work it is a bug.

2 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    Exactly the same case as the previous one, but for 5.1.
    Nothing to see here either.

These are the easiest ones; we will not talk more about them in this
section.

Now we start with the more interesting cases.  Consider the case where
we have the same QEMU version on both sides (qemu-5.2), but instead of
the latest machine type for that version (pc-5.2) we use one from an
older QEMU version, in this case pc-5.1.

3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    It needs to use the definition of pc-5.1 and the devices as they
    were configured on 5.1, but this should be easy in the sense that
    both sides are the same QEMU and both sides have exactly the same
    idea of what the pc-5.1 machine is.

4 - qemu-5.1 -M pc-5.2  -> migrates to -> qemu-5.1 -M pc-5.2

    This combination is not possible, as qemu-5.1 doesn't understand
    the pc-5.2 machine type.  So nothing to worry about here.

Now come the interesting ones, when the two QEMU processes are
different versions.  Notice also that the machine type needs to be
pc-5.1, because we have the limitation that qemu-5.1 doesn't know
pc-5.2.  So the possible cases are:

5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    This migration is known as newer to older.  While developing 5.2
    we need to take care not to break migration to qemu-5.1.  Notice
    that we can't make updates to qemu-5.1 to understand whatever
    qemu-5.2 decides to change, so it is up to the qemu-5.2 side to
    make the relevant changes.

6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    This migration is known as older to newer.  We need to make sure
    that we are able to receive migrations from qemu-5.1.  The problem
    is similar to the previous one.

If qemu-5.1 and qemu-5.2 were the same, there would not be any
compatibility problems.  But the reason that we create qemu-5.2 is to
get new features, devices, defaults, etc.

If we get a device that has a new feature, or change a default value,
we have a problem when we try to migrate between different QEMU
versions.

So we need a way to tell qemu-5.2 that when we are using machine type
pc-5.1, it needs to **not** use the feature, to be able to migrate to
real qemu-5.1.

The equivalent applies when migrating from qemu-5.1 to qemu-5.2:
qemu-5.2 has to expect that it is not going to get data for the new
feature, because qemu-5.1 doesn't know about it.

How do we tell QEMU about these device feature changes?  In
hw/core/machine.c:hw_compat_X_Y arrays.

If we change a default value, we need to put back the old value in
that array, and the device, during initialization, needs to look at
that array to see what value it should use for that feature.  What we
put in that array is the value of a property.

To create a property for a device, we need to use one of the
DEFINE_PROP_*() macros.  See include/hw/qdev-properties.h to find the
macros that exist.  With it, we set the default value for that
property, and that is what it is going to get in the latest released
version.  But if we want a different value for a previous version, we
can change that in the hw_compat_X_Y arrays.

hw_compat_X_Y is an array of entries with the format:

- name_device
- name_property
- value

Let's see a practical example.

In qemu-5.2 virtio-blk-device got multi queue support.  This is a
change that is not backward compatible.  In qemu-5.1 it has one
queue.  In qemu-5.2 it has the same number of queues as the number of
cpus in the system.

When we are doing migration, if we migrate from a device that has 4
queues to a device that has only one queue, we don't know where to
put the extra information for the other 3 queues, and we fail
migration.

A similar problem happens when we migrate from qemu-5.1, which has only
one queue, to qemu-5.2: we only send information for one queue, but the
destination has 4, and we have 3 queues that are not properly
initialized and anything can happen.

So, how can we address this problem?  Easy: just convince qemu-5.2
that when it is running pc-5.1, it needs to set the number of queues
for virtio-blk devices to 1.

That way we fix cases 5 and 6.

5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    qemu-5.2 -M pc-5.1 sets the number of queues to 1.
    qemu-5.1 -M pc-5.1 expects the number of queues to be 1.

    Correct.  Migration works.

6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    qemu-5.1 -M pc-5.1 sets the number of queues to 1.
    qemu-5.2 -M pc-5.1 expects the number of queues to be 1.

    Correct.  Migration works.

And now the other interesting case, case 3.  In this case we have:

3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    Here we have the same QEMU on both sides.  So it doesn't matter a
    lot if we have set the number of queues to 1 or not, because
    they are the same.

    WRONG!

    Think what happens if we do one of these double migrations:

    A -> migrates -> B -> migrates -> C

    where:

    A: qemu-5.1 -M pc-5.1
    B: qemu-5.2 -M pc-5.1
    C: qemu-5.2 -M pc-5.1

    Migration A -> B is case 6, so the number of queues needs to be 1.

    Migration B -> C is case 3, so we don't care.  But actually we do
    care, because we haven't started the guest in qemu-5.2; it came
    migrated from qemu-5.1.  So to be on the safe side, we need to
    always use a number of queues of 1 when we are using pc-5.1.

Now, how was this done in reality?  The following commit shows how it
was done::

    commit 9445e1e15e66c19e42bea942ba810db28052cd05
    Author: Stefan Hajnoczi <stefanha@redhat.com>
    Date:   Tue Aug 18 15:33:47 2020 +0100

    virtio-blk-pci: default num_queues to -smp N

The relevant parts for migration are::

    @@ -1281,7 +1284,8 @@ static Property virtio_blk_properties[] = {
     #endif
         DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
                         true),
    -    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
    +    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues,
    +                       VIRTIO_BLK_AUTO_NUM_QUEUES),
         DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256),

It changes the default value of num_queues.  But it fixes it for old
machine types to have the right value::

    @@ -31,6 +31,7 @@
     GlobalProperty hw_compat_5_1[] = {
         ...
    +    { "virtio-blk-device", "num-queues", "1"},
         ...
     };

A device with different features on both sides
-----------------------------------------------

Let's assume that we are using the same QEMU binary on both sides,
just to make things easier.  But we have a device that has different
features on both sides of the migration.  That can be because the
devices are different, because the kernel drivers of the two devices
have different features, or whatever.

How can we get this to work with migration?  The way to do it is
"theoretically" easy: you take the features that the device has on the
source of the migration and the features that the device has on the
target of the migration, compute the intersection of the two sets, and
that is the way that you should launch QEMU.

Notice that this is not completely related to QEMU.  The most
important thing here is that this should be handled by the managing
application that launches QEMU.  If QEMU is configured correctly, the
migration will succeed.

That said, actually doing it is complicated.  Almost all devices are
bad at being launched with only some features enabled, with one big
exception: cpus.

You can read the documentation for QEMU x86 cpu models here:

https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html

When they talk about migration they recommend that one chooses the
newest cpu model that is supported by all the hosts involved.

Let's say that we have:

Host A:

Device X has the feature Y

Host B:

Device X does not have the feature Y

If we try to migrate without any care from host A to host B, it will
fail, because when migration tries to load feature Y on the
destination, it will find that the hardware is not there.

Doing this would be the equivalent of doing with cpus:

Host A:

$ qemu-system-x86_64 -cpu host

Host B:

$ qemu-system-x86_64 -cpu host

When both hosts have different cpu features this is guaranteed to
fail.  Especially if Host B has fewer features than host A.  If host A
has fewer features than host B, sometimes it works.  The important
word in the last sentence is "sometimes".

So, forgetting about cpu models and continuing with the -cpu host
example, let's say that the differences between the cpus are that
hosts A and B have the following features:

  Features:   'pcid'  'stibp'  'taa-no'
  Host A:        X       X
  Host B:                         X

And we want to migrate between them; the way to configure both QEMU
cpus will be:

Host A:

$ qemu-system-x86_64 -cpu host,pcid=off,stibp=off

Host B:

$ qemu-system-x86_64 -cpu host,taa-no=off

And you would be able to migrate between them.  It is the
responsibility of the management application or of the user to make
sure that the configuration is correct.  QEMU doesn't know how to look
at this kind of features in general.

Notice that we don't recommend using -cpu host for migration.  It is
used in this example because it makes the example simpler.

Other devices have worse control over individual features.  If they
want to be able to migrate between hosts that show different features,
the device needs a way to configure which ones it is going to use.

In this section we have considered that we are using the same QEMU
binary on both sides of the migration.  If we use different QEMU
versions, then we need to take into account all the other differences
and the examples become even more complicated.

How to mitigate when we have a backward compatibility error
------------------------------------------------------------

We broke migration for old machine types continuously during
development.  But as soon as we find that there is a problem, we fix
it.  The problem is what happens when, after we have done a release, we
detect that something has gone wrong.

Let's see how it worked with one example.

After the release of qemu-8.0 we found a problem when doing migration
of the machine type pc-7.2.

- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

    This migration works

- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

    This migration works

- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

    This migration fails

- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

    This migration fails

So clearly something fails when migrating between qemu-7.2 and
qemu-8.0 with machine type pc-7.2.  The error messages, and git
bisect, pointed to this commit.

In qemu-8.0 we got this commit::

    commit 010746ae1db7f52700cb2e2c46eb94f299cfa0d2
    Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Date:   Thu Mar 2 13:37:02 2023 +0000

    hw/pci/aer: Implement PCI_ERR_UNCOR_MASK register

The relevant bits of the commit for our example are these ones::

    --- a/hw/pci/pcie_aer.c
    +++ b/hw/pci/pcie_aer.c
    @@ -112,6 +112,10 @@ int pcie_aer_init(PCIDevice *dev,

         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
                      PCI_ERR_UNC_SUPPORTED);
    +    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
    +                 PCI_ERR_UNC_MASK_DEFAULT);
    +    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
    +                 PCI_ERR_UNC_SUPPORTED);

         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
                      PCI_ERR_UNC_SEVERITY_DEFAULT);

The patch changes how we configure the PCI space for AER.  But QEMU
fails when the PCI space configuration is different between source and
destination.

The following commit shows how this got fixed::

    commit 5ed3dabe57dd9f4c007404345e5f5bf0e347317f
    Author: Leonardo Bras <leobras@redhat.com>
    Date:   Tue May 2 21:27:02 2023 -0300

    hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0

    [...]

The relevant parts of the fix in QEMU are as follows.

First, we create a new property for the device to be able to configure
the old behaviour or the new behaviour::

    diff --git a/hw/pci/pci.c b/hw/pci/pci.c
    index 8a87ccc8b0..5153ad63d6 100644
    --- a/hw/pci/pci.c
    +++ b/hw/pci/pci.c
    @@ -79,6 +79,8 @@ static Property pci_props[] = {
         DEFINE_PROP_STRING("failover_pair_id", PCIDevice,
                            failover_pair_id),
         DEFINE_PROP_UINT32("acpi-index", PCIDevice, acpi_index, 0),
    +    DEFINE_PROP_BIT("x-pcie-err-unc-mask", PCIDevice, cap_present,
    +                    QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
         DEFINE_PROP_END_OF_LIST()
     };

Notice that we enable the feature for new machine types.

Now we see how the fix is done.  This is going to depend on what kind
of breakage happens, but in this case it is quite simple::

    diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
    index 103667c368..374d593ead 100644
    --- a/hw/pci/pcie_aer.c
    +++ b/hw/pci/pcie_aer.c
    @@ -112,10 +112,13 @@ int pcie_aer_init(PCIDevice *dev, uint8_t cap_ver,
                           uint16_t offset,

         pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
                      PCI_ERR_UNC_SUPPORTED);
    -    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
    -                 PCI_ERR_UNC_MASK_DEFAULT);
    -    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
    -                 PCI_ERR_UNC_SUPPORTED);
    +
    +    if (dev->cap_present & QEMU_PCIE_ERR_UNC_MASK) {
    +        pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
    +                     PCI_ERR_UNC_MASK_DEFAULT);
    +        pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
    +                     PCI_ERR_UNC_SUPPORTED);
    +    }

         pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
                      PCI_ERR_UNC_SEVERITY_DEFAULT);

I.e. if the property bit is enabled, we configure it as we did for
qemu-8.0.  If the property bit is not set, we configure it as it was in
7.2.

All that is missing now is to disable the feature for old machine
types::

    diff --git a/hw/core/machine.c b/hw/core/machine.c
    index 47a34841a5..07f763eb2e 100644
    --- a/hw/core/machine.c
    +++ b/hw/core/machine.c
    @@ -48,6 +48,7 @@ GlobalProperty hw_compat_7_2[] = {
         { "e1000e", "migrate-timadj", "off" },
         { "virtio-mem", "x-early-migration", "false" },
         { "migration", "x-preempt-pre-7-2", "true" },
    +    { TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
     };
     const size_t hw_compat_7_2_len = G_N_ELEMENTS(hw_compat_7_2);

And now, when qemu-8.0.1 is released with this fix, all combinations
are going to work as expected.

- $ qemu-7.2 -M pc-7.2    ->  qemu-7.2 -M pc-7.2    (works)
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2  (works)
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2    (works)
- $ qemu-7.2 -M pc-7.2    ->  qemu-8.0.1 -M pc-7.2  (works)

So normality has been restored and everything is OK, no?

Not really; now our matrix is much bigger.  We started with the easy
cases, migration from the same version to the same version, which
always works:

- $ qemu-7.2 -M pc-7.2    ->  qemu-7.2 -M pc-7.2
- $ qemu-8.0 -M pc-7.2    ->  qemu-8.0 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2

Now the interesting ones, when the QEMU process versions are
different.  The first set fail, and we can do nothing about it: both
versions are released and we can't change anything.

- $ qemu-7.2 -M pc-7.2    ->  qemu-8.0 -M pc-7.2
- $ qemu-8.0 -M pc-7.2    ->  qemu-7.2 -M pc-7.2

These two are the ones that work.  The whole point of making the
change in the qemu-8.0.1 release was to fix this issue:

- $ qemu-7.2 -M pc-7.2    ->  qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

But now we find that qemu-8.0 can migrate neither to qemu-7.2 nor to
qemu-8.0.1.

- $ qemu-8.0 -M pc-7.2    ->  qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

So, if we start a pc-7.2 machine in qemu-8.0 we can't migrate it to
anything except qemu-8.0.

Can we do better?

Yes.  If we know that we are going to do this migration:

- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2

we can launch the appropriate devices with::

    --device...,x-pcie-err-unc-mask=on

And now we can receive a migration from 8.0.  And from now on, we can
do that migration to new machine types if we remember to enable that
property for pc-7.2.  Notice that we need to remember; it is not
enough to know that the source of the migration is qemu-8.0.  Think of
this example:

$ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 -> qemu-8.2 -M pc-7.2

In the second migration, the source is not qemu-8.0, but we still have
that "problem" and have that property enabled.  Notice that we need to
continue having this mark/property until the machine is rebooted.  And
it is not a normal reboot (which doesn't reload QEMU); we need the
machine to be powered off and powered on again with a fixed QEMU.  From
then on we can use the proper real machine.