RT-Task-Generator-QEMU/accel/tcg/translate-all.c
Andrea Fioraldi c6a00ab288
Full system hooks (#8)
* scsi-disk: add new quirks bitmap to SCSIDiskState

Since the MacOS SCSI implementation is quite old (and Apple added some firmware
customisations to their drives for m68k Macs) there is a need to add a mechanism
to correctly handle Apple-specific quirks.

Add a new quirks bitmap to SCSIDiskState that can be used to enable these
features as required.
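
As a rough sketch (the exact field name and layout are assumptions; the quirk
and property names below come from the follow-up patches), the bitmap is just
a set of quirk bits on SCSIDiskState exposed as bit properties:

    /* Sketch only: illustrative field name, not the exact patch contents. */
    #define SCSI_DISK_QUIRK_MODE_PAGE_APPLE_VENDOR  0

    struct SCSIDiskState {
        /* ... existing fields ... */
        uint32_t quirks;    /* bitmap of enabled Apple-specific quirks */
    };

    static Property scsi_disk_properties[] = {
        DEFINE_PROP_BIT("quirk_mode_page_apple_vendor", SCSIDiskState, quirks,
                        SCSI_DISK_QUIRK_MODE_PAGE_APPLE_VENDOR, false),
        DEFINE_PROP_END_OF_LIST(),
    };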

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220622105314.802852-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: add MODE_PAGE_APPLE_VENDOR quirk for Macintosh

One of the mechanisms MacOS uses to identify CDROM drives compatible with MacOS
is to send a custom MODE SELECT command for page 0x30 to the drive. The
response to this is a hard-coded manufacturer string which must match in order
for the CDROM to be usable within MacOS.

Add an implementation of the MODE SELECT page 0x30 response guarded by a newly
defined SCSI_DISK_QUIRK_MODE_PAGE_APPLE_VENDOR quirk bit so that CDROM drives
attached to non-Apple machines function exactly as before.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220622105314.802852-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: implement compat_props to enable quirk_mode_page_apple_vendor for scsi-cd devices

By default quirk_mode_page_apple_vendor should be enabled for all scsi-cd devices
connected to the q800 machine to enable MacOS to detect and use them.
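
Roughly, using the machine compat_props mechanism (a sketch; the actual array
and function layout in the patch may differ):

    /* Sketch: turn the quirk on for every scsi-cd attached to this machine. */
    static GlobalProperty compat[] = {
        { "scsi-cd", "quirk_mode_page_apple_vendor", "on" },
    };

    static void q800_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);
        /* ... */
        compat_props_add(mc->compat_props, compat, G_N_ELEMENTS(compat));
    }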

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: add SCSI_DISK_QUIRK_MODE_SENSE_ROM_USE_DBD quirk for Macintosh

During SCSI bus enumeration A/UX sends a MODE SENSE command to the CDROM with
the DBD bit unset and expects the response to include a block descriptor. As per
the latest SCSI documentation, QEMU currently force-disables the block
descriptor for CDROM devices but the A/UX driver expects the requested block
descriptor to be returned.

If the block descriptor is not returned in the response then A/UX becomes
confused, since the block descriptor returned in the MODE SENSE response is
used to generate a subsequent MODE SELECT command which is then invalid.

Add a new SCSI_DISK_QUIRK_MODE_SENSE_ROM_USE_DBD quirk to allow this behaviour
to be enabled as required. Note that an additional workaround is required for
the previous SCSI_DISK_QUIRK_MODE_PAGE_APPLE_VENDOR quirk which must never
return a block descriptor even though the DBD bit is left unset.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: implement compat_props to enable quirk_mode_sense_rom_use_dbd for scsi-cd devices

By default quirk_mode_sense_rom_use_dbd should be enabled for all scsi-cd devices
connected to the q800 machine to correctly report the CDROM block descriptor back
to A/UX.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220622105314.802852-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: add SCSI_DISK_QUIRK_MODE_PAGE_VENDOR_SPECIFIC_APPLE quirk for Macintosh

Both MacOS and A/UX make use of vendor-specific MODE SELECT commands with PF=0
to identify SCSI devices:

- MacOS sends a MODE SELECT command with PF=0 for the MODE_PAGE_VENDOR_SPECIFIC
  (0x0) mode page containing 2 bytes before initialising a disk

- A/UX (installed on disk) sends a MODE SELECT command with PF=0 during SCSI
  bus enumeration, and gets stuck in an infinite loop if it fails

Add a new SCSI_DISK_QUIRK_MODE_PAGE_VENDOR_SPECIFIC_APPLE quirk to allow both
PF=0 MODE SELECT commands and implement a MODE_PAGE_VENDOR_SPECIFIC (0x0)
mode page which is compatible with MacOS.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: implement compat_props to enable quirk_mode_page_vendor_specific_apple for scsi devices

By default quirk_mode_page_vendor_specific_apple should be enabled for both scsi-hd
and scsi-cd devices to allow MacOS to format SCSI disk devices, and A/UX to
enumerate SCSI CDROM devices successfully without getting stuck in a loop.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: add FORMAT UNIT command

When initialising a drive ready to install MacOS, Apple HD SC Setup first attempts
to format the drive. Add a minimal FORMAT UNIT command which simply returns success
to allow the format to succeed.
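
A sketch of what the no-op handler amounts to, inside the command emulation
switch (variable names are assumptions):

    case FORMAT_UNIT:
        /* Sketch: accept the request without touching any data; Apple HD SC
         * Setup only needs the command to report success. */
        scsi_req_complete(&r->req, GOOD);
        break;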

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220622105314.802852-9-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: add SCSI_DISK_QUIRK_MODE_PAGE_TRUNCATED quirk for Macintosh

When A/UX configures the CDROM device it sends a truncated MODE SELECT request
for page 1 (MODE_PAGE_R_W_ERROR) which is only 6 bytes in length rather than
10. This seems to be due to a bug in Apple's code which calculates the CDB message
length incorrectly.

The work at [1] suggests that this truncated request is accepted on real
hardware whereas in QEMU it generates an INVALID_PARAM_LEN sense code which
causes A/UX to get stuck in a loop retrying the command in an attempt to succeed.

Alter the mode page request length check so that truncated requests are allowed
if the SCSI_DISK_QUIRK_MODE_PAGE_TRUNCATED quirk is enabled, whilst also adding a
trace event to enable the condition to be detected.

[1] https://68kmla.org/bb/index.php?threads/scsi2sd-project-anyone-interested.29040/page-7#post-316444

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-10-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: implement compat_props to enable quirk_mode_page_truncated for scsi-cd devices

By default quirk_mode_page_truncated should be enabled for all scsi-cd devices
connected to the q800 machine to allow A/UX to enumerate SCSI CDROM devices
without hanging.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-11-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: allow the MODE_PAGE_R_W_ERROR AWRE bit to be changeable for CDROM drives

A/UX sends a MODE SELECT command for the MODE_PAGE_R_W_ERROR page with the
AWRE bit set to 0 when enumerating CDROM drives. Since the bit is currently
hardcoded to 1, indicate that the AWRE
bit can be changed (even though we don't care about the value) so that
the MODE_PAGE_R_W_ERROR page can be set successfully.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-12-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* scsi-disk: allow MODE SELECT block descriptor to set the block size

The MODE SELECT command can contain an optional block descriptor that can be used
to set the device block size. If the block descriptor is present then update the
block size on the SCSI device accordingly.

This allows CDROMs to be used with A/UX which requires a CDROM drive which is
capable of switching from a 2048 byte sector size to a 512 byte sector size.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20220622105314.802852-13-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: add default vendor and product information for scsi-hd devices

The Apple HD SC Setup program uses a SCSI INQUIRY command to check that any SCSI
hard disks detected match a whitelist of vendors and products before allowing
the "Initialise" button to prepare an empty disk.

Add known-good default vendor and product information using the existing
compat_prop mechanism so the user doesn't have to use long command lines to set
the qdev properties manually.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220622105314.802852-14-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* q800: add default vendor and product information for scsi-cd devices

The MacOS CDROM driver uses a SCSI INQUIRY command to check that any SCSI CDROMs
detected match a whitelist of vendors and products before adding them to the
list of available devices.

Add known-good default vendor and product information using the existing
compat_prop mechanism so the user doesn't have to use long command lines to set
the qdev properties manually.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20220622105314.802852-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* pc-bios/s390-ccw: add -Wno-array-bounds

-Warray-bounds generates a lot of warnings for integers cast to pointers,
for example:

/home/pbonzini/work/upstream/qemu/pc-bios/s390-ccw/dasd-ipl.c:174:19: warning: array subscript 0 is outside array bounds of ‘CcwSeekData[0]’ [-Warray-bounds]
  174 |     seekData->cyl = 0x00;
      |     ~~~~~~~~~~~~~~^~~~~~

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* aspeed: sbc: Allow per-machine settings

In order to correctly report secure boot running firmware the values
of certain registers must be set.

We don't yet have documentation from ASPEED on what they mean. The
meaning is inferred from u-boot's use of them.

Introduce properties so the settings can be configured per-machine.

Reviewed-by: Peter Delevoryas <pdel@fb.com>
Tested-by: Peter Delevoryas <pdel@fb.com>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Message-Id: <20220628154740.1117349-4-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw/i2c/pmbus: Add idle state to return 0xff's

Signed-off-by: Peter Delevoryas <pdel@fb.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Message-Id: <20220701000626.77395-2-me@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw/sensor: Add IC_DEVICE_ID to ISL voltage regulators

This commit adds a passthrough for PMBUS_IC_DEVICE_ID to allow Renesas
voltage regulators to return the integrated circuit device ID if they
would like to.

The behavior is very device specific, so it hasn't been added to the
general PMBUS model. Additionally, if the device ID hasn't been set,
then the voltage regulator will respond with the error byte value.  The
guest error message will change slightly for IC_DEVICE_ID with this
commit.

Signed-off-by: Peter Delevoryas <pdel@fb.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Message-Id: <20220701000626.77395-3-me@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw/sensor: Add Renesas ISL69259 device model

This adds the ISL69259, using all the same functionality as the existing
ISL69260 but overriding the IC_DEVICE_ID.

Signed-off-by: Peter Delevoryas <pdel@fb.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Message-Id: <20220701000626.77395-4-me@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Create SRAM name from first CPU index

To support multiple SoCs running simultaneously, we need a unique name for
each RAM region. DRAM is created by the machine, but SRAM is created by the
SoC, since in hardware it is part of the SoC's internals.

We need a way to uniquely identify each SRAM region though, for VM
migration. Since each of the SoC's CPUs has an index which identifies it
uniquely from other CPUs in the machine, we can use the index of any of the
CPUs in the SoC to differentiate the SRAM name from other
SoCs' SRAMs. In this change, I just elected to use the index of the first CPU
in each SoC.
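
A sketch of the naming scheme (field names are assumptions):

    /* Sketch: derive a unique SRAM region name from the SoC's first CPU. */
    g_autofree char *sram_name =
        g_strdup_printf("aspeed.sram.%d", CPU(&s->cpu[0])->cpu_index);

    memory_region_init_ram(&s->sram, OBJECT(s), sram_name, sc->sram_size, &err);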

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-3-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Refactor UART init for multi-SoC machines

This change moves the code that connects the SoC UART's to serial_hd's
to the machine.

It makes each UART a proper child member of the SoC, and then allows the
machine to selectively initialize the chardev for each UART with a
serial_hd.

This should preserve backwards compatibility, but also allow multi-SoC
boards to completely change the wiring of serial devices from the
command line to specific SoC UART's.

This also removes the uart-default property from the SoC, since the SoC
doesn't need to know what UART is the "default" on the machine anymore.

I tested this using the images and commands from the previous
refactoring, and another test image for the ast1030:

    wget https://github.com/facebook/openbmc/releases/download/v2021.49.0/fuji.mtd
    wget https://github.com/facebook/openbmc/releases/download/v2021.49.0/wedge100.mtd
    wget https://github.com/peterdelevoryas/OpenBIC/releases/download/oby35-cl-2022.13.01/Y35BCL.elf

Fuji uses UART1:

    qemu-system-arm -machine fuji-bmc \
        -drive file=fuji.mtd,format=raw,if=mtd \
        -nographic

ast2600-evb uses uart-default=UART5:

    qemu-system-arm -machine ast2600-evb \
        -drive file=fuji.mtd,format=raw,if=mtd \
        -serial null -serial mon:stdio -display none

Wedge100 uses UART3:

    qemu-system-arm -machine palmetto-bmc \
        -drive file=wedge100.mtd,format=raw,if=mtd \
        -serial null -serial null -serial null \
        -serial mon:stdio -display none

AST1030 EVB uses UART5:

    qemu-system-arm -machine ast1030-evb \
        -kernel Y35BCL.elf -nographic

Fixes: 6827ff20b2975 ("hw: aspeed: Init all UART's with serial devices")
Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-4-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Make aspeed_board_init_flashes public

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-5-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Add fby35 skeleton

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-6-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Add AST2600 (BMC) to fby35

You can test booting the BMC with both '-device loader' and '-drive
file'. This is necessary because of how the fb-openbmc boot sequence
works (jump to 0x20000000 after U-Boot SPL).

    wget https://github.com/facebook/openbmc/releases/download/openbmc-e2294ff5d31d/fby35.mtd
    qemu-system-arm -machine fby35 -nographic \
        -device loader,file=fby35.mtd,addr=0,cpu-num=0 -drive file=fby35.mtd,format=raw,if=mtd

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-7-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: fby35: Add a bootrom for the BMC

The BMC boots from the first flash device by fetching instructions
from the flash contents. Add an alias region on 0x0 for this
purpose. There are currently performance issues with this method (TBs
being flushed too often), so as a faster alternative, install the
flash contents as a ROM in the BMC memory space.

See commit 1a15311a12fa ("hw/arm/aspeed: add a 'execute-in-place'
property to boot directly from CE0")

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Delevoryas <peter@pjd.dev>
[ clg: blk_pread() fixes ]
Message-Id: <20220705191400.41632-8-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Add AST1030 (BIC) to fby35

With the BIC, the easiest way to run everything is to create two pty's
for each SoC and reserve stdin/stdout for the monitor:

    wget https://github.com/facebook/openbmc/releases/download/openbmc-e2294ff5d31d/fby35.mtd
    wget https://github.com/peterdelevoryas/OpenBIC/releases/download/oby35-cl-2022.13.01/Y35BCL.elf
    qemu-system-arm -machine fby35 \
        -drive file=fby35.mtd,format=raw,if=mtd \
        -device loader,file=fby35.mtd,addr=0,cpu-num=0 \
        -serial pty -serial pty -serial mon:stdio -display none -S

    screen /dev/ttys0
    screen /dev/ttys1
    (qemu) c

This commit only adds the first server board's Bridge IC, but in the
future we'll try to include the other three server boards' Bridge ICs
too.

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220705191400.41632-9-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* docs: aspeed: Add fby35 multi-SoC machine section

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[ clg: - fixed URL links
       - Moved Facebook Yosemite section at the end of the file ]
Message-Id: <20220705191400.41632-10-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* docs: aspeed: Minor updates

Some more controllers have been modeled recently. Reflect that in the
list of supported devices. New machines were also added.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-Id: <20220706172131.809255-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* test/avocado/machine_aspeed.py: Add SDK tests

The Aspeed SDK kernel usually includes support for the latest HW
features. This is interesting to exercise QEMU and discover the gaps
in the models.

Add extra I2C tests for the AST2600 EVB machine to check the new
register interface.

Message-Id: <20220707091239.1029561-1-clg@kaod.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw: m25p80: Add Block Protect and Top Bottom bits for write protect

Signed-off-by: Iris Chen <irischenlj@fb.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Message-Id: <20220708164552.3462620-1-irischenlj@fb.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw: m25p80: add tests for BP and TB bit write protect

Signed-off-by: Iris Chen <irischenlj@fb.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220627185234.1911337-3-irischenlj@fb.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* qtest/aspeed_gpio: Add input pin modification test

Verify the current behavior, which is that input pins can be modified by
guest OS register writes.

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220712023219.41065-2-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw/gpio/aspeed: Don't let guests modify input pins

Up until now, guests could modify input pins by overwriting the data
value register. The guest OS should only be allowed to modify output pin
values, and the QOM property setter should only be permitted to modify
input pins.

This change also updates the gpio input pin test to match this
expectation.

Andrew suggested this particular refactoring here:

    https://lore.kernel.org/qemu-devel/23523aa1-ba81-412b-92cc-8174faba3612@www.fastmail.com/
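
A rough sketch of the idea (register and field names are assumptions): the
incoming value is masked with the direction register, so pins configured as
inputs keep their current state.

    /* Sketch: only pins configured as outputs may be changed by the guest. */
    static void aspeed_gpio_write_data(GPIOSets *set, uint32_t value)
    {
        uint32_t outputs = set->direction;   /* 1 = pin is an output */

        set->data_value = (set->data_value & ~outputs) | (value & outputs);
    }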

Suggested-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Fixes: 4b7f956862dc ("hw/gpio: Add basic Aspeed GPIO model for AST2400 and AST2500")
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220712023219.41065-3-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* aspeed: Add fby35-bmc slot GPIO's

Signed-off-by: Peter Delevoryas <peter@pjd.dev>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20220712023219.41065-4-peter@pjd.dev>
Signed-off-by: Cédric Le Goater <clg@kaod.org>

* hw/nvme: Implement shadow doorbell buffer support

Implement the Doorbell Buffer Config command (Section 5.7 in NVMe Spec 1.3)
and Shadow Doorbell buffer & EventIdx buffer handling logic (Section 7.13
in NVMe Spec 1.3). For queues created before the Doorbell Buffer Config
command, the nvme_dbbuf_config function tries to associate each existing
SQ and CQ with its Shadow Doorbell buffer and EventIdx buffer address.
Queues created after the Doorbell Buffer Config command will have the
doorbell buffers associated with them when they are initialized.

In nvme_process_sq and nvme_post_cqe, proactively check for Shadow
Doorbell buffer changes instead of waiting for doorbell register changes.
This reduces the number of MMIOs.

In nvme_process_db(), update the shadow doorbell buffer value with
the doorbell register value if it is the admin queue. This is a hack
since hosts like Linux NVMe driver and SPDK do not use shadow
doorbell buffer for the admin queue. Copying the doorbell register
value to the shadow doorbell buffer allows us to support these hosts
as well as spec-compliant hosts that use shadow doorbell buffer for
the admin queue.
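
A simplified sketch of the polling side (structure and field names here are
assumptions, loosely following the spec terminology): before processing an SQ
the device re-reads the shadow tail, and afterwards it publishes the value it
has seen via EventIdx so the host knows when the next doorbell MMIO is really
needed.

    /* Sketch: refresh the SQ tail from the shadow doorbell in guest memory. */
    static void nvme_update_sq_tail(NvmeSQueue *sq)
    {
        if (sq->db_addr) {
            uint32_t v;

            pci_dma_read(PCI_DEVICE(sq->ctrl), sq->db_addr, &v, sizeof(v));
            sq->tail = le32_to_cpu(v);
        }
    }

    /* Sketch: publish the last tail value the device has observed, so the
     * host can skip the doorbell write while the device is caught up. */
    static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
    {
        if (sq->ei_addr) {
            uint32_t v = cpu_to_le32(sq->tail);

            pci_dma_write(PCI_DEVICE(sq->ctrl), sq->ei_addr, &v, sizeof(v));
        }
    }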

Signed-off-by: Jinhao Fan <fanjinhao21s@ict.ac.cn>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
[k.jensen: rebased]
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* hw/nvme: Add trace events for shadow doorbell buffer

When shadow doorbell buffer is enabled, doorbell registers are lazily
updated. The actual queue head and tail pointers are stored in Shadow
Doorbell buffers.

Add trace events for updates on the Shadow Doorbell buffers and EventIdx
buffers. Also add trace event for the Doorbell Buffer Config command.

Signed-off-by: Jinhao Fan <fanjinhao21s@ict.ac.cn>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
[k.jensen: rebased]
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* hw/nvme: fix example serial in documentation

The serial prop on the controller is actually describing the nvme
subsystem serial, which has to be identical for all controllers within
the same nvme subsystem.

This is enforced since commit a859eb9f8f64 ("hw/nvme: enforce common
serial per subsystem").

Fix the documentation, so that people copying the qemu command line
example won't get an error on qemu start.

Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* hw/nvme: force nvme-ns param 'shared' to false if no nvme-subsys node

Since commit 916b0f0b5264 ("hw/nvme: change nvme-ns 'shared' default")
the default value of nvme-ns param 'shared' is set to true, regardless
of whether there is a nvme-subsys node or not.

On a system without a nvme-subsys node, a namespace will never be able
to be attached to more than one controller, so for this configuration,
it is counterintuitive for this parameter to be set by default.

Force the nvme-ns param 'shared' to false for configurations where
there is no nvme-subsys node, as the namespace will never be able to
attach to more than one controller anyway.

Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* nvme: Fix misleading macro when mixed with ternary operator

Using the Parfait source code analyser, an issue was found in
hw/nvme/ctrl.c where the macros NVME_CAP_SET_CMBS and NVME_CAP_SET_PMRS
are called with a ternary operator in the second parameter, resulting
in a potentially unexpected expansion of the form:

  x ? a: b & FLAG_TEST

which gives a different result from:

  (x ? a: b) & FLAG_TEST.

The macros should wrap each of the parameters in brackets to ensure the
correct result on expansion.
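
For illustration, a reduced example of the hazard (FLAG_TEST and the macro
names are placeholders):

    /* Unsafe: the parameter is pasted without parentheses, so '&' binds only
     * to the second arm of the caller's ternary expression. */
    #define SET_IF_UNSAFE(reg, val)  ((reg) |= val & FLAG_TEST)

    /* Safe: wrapping the parameter preserves the caller's expression. */
    #define SET_IF_SAFE(reg, val)    ((reg) |= (val) & FLAG_TEST)

    /* SET_IF_UNSAFE(cap, x ? a : b) expands to cap |= x ? a : (b & FLAG_TEST),
     * while SET_IF_SAFE(cap, x ? a : b) expands to cap |= (x ? a : b) & FLAG_TEST. */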

Signed-off-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* hw/nvme: Use ioeventfd to handle doorbell updates

Add property "ioeventfd" which is enabled by default. When this is
enabled, updates on the doorbell registers will cause KVM to signal
an event to the QEMU main loop to handle the doorbell updates.
Therefore, instead of letting the vcpu thread run both guest VM and
IO emulation, we now use the main loop thread to do IO emulation and
thus the vcpu thread has more cycles for the guest VM.

Since ioeventfd does not tell us the exact value that is written, it is
only useful when shadow doorbell buffer is enabled, where we check
for the value in the shadow doorbell buffer when we get the doorbell
update event.
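
Conceptually (helper and field names below are assumptions), each queue's
doorbell offset is bound to an EventNotifier, so KVM completes the guest's
MMIO write in-kernel and simply kicks the main loop:

    /* Sketch: route writes to an SQ tail doorbell to an eventfd handled on
     * the QEMU main loop instead of in the vCPU thread. */
    static int nvme_init_sq_ioeventfd(NvmeSQueue *sq)
    {
        NvmeCtrl *n = sq->ctrl;
        hwaddr offset = nvme_sq_doorbell_offset(n, sq->sqid);  /* assumed helper */
        int ret;

        ret = event_notifier_init(&sq->notifier, 0);
        if (ret < 0) {
            return ret;
        }

        /* match_data=false: any value written to the 4-byte doorbell fires. */
        event_notifier_set_handler(&sq->notifier, nvme_sq_notifier);
        memory_region_add_eventfd(&n->iomem, offset, 4, false, 0, &sq->notifier);
        return 0;
    }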

IOPS comparison on Linux 5.19-rc2: (Unit: KIOPS)

qd           1   4  16  64
qemu        35 121 176 153
ioeventfd   41 133 258 313

Changes since v3:
 - Do not deregister ioeventfd when it was not enabled on a SQ/CQ

Signed-off-by: Jinhao Fan <fanjinhao21s@ict.ac.cn>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>

* MAINTAINERS: Add myself as Guest Agent co-maintainer

Signed-off-by: Konstantin Kostiuk <kkostiuk@redhat.com>
Acked-by: Michael Roth <michael.roth@amd.com>

* hw/intc/armv7m_nvic: ICPRn must not unpend an IRQ that is being held high

In the M-profile Arm ARM, rule R_CVJS defines when an interrupt should
be set to the Pending state:
 A) when the input line is high and the interrupt is not Active
 B) when the input line transitions from low to high and the interrupt
    is Active
(Note that the first of these is an ongoing condition, and the
second is a point-in-time event.)

This can be rephrased as:
 1 when the line goes from low to high, set Pending
 2 when Active goes from 1 to 0, if line is high then set Pending
 3 ignore attempts to clear Pending when the line is high
   and Active is 0

where 1 covers both B and one of the "transition into condition A"
cases, 2 deals with the other "transition into condition A"
possibility, and 3 is "don't drop Pending if we're already in
condition A".  Transitions out of condition A don't affect Pending
state.

We handle case 1 in set_irq_level(). For an interrupt (as opposed
to other kinds of exception) the only place where we clear Active
is in armv7m_nvic_complete_irq(), where we handle case 2 by
checking for whether we need to re-pend the exception. For case 3,
the only places where we clear Pending state on an interrupt are in
armv7m_nvic_acknowledge_irq() (where we are setting Active so it
doesn't count) and for writes to NVIC_ICPRn.

It is the "write to NVIC_ICPRn" case that we missed: we must ignore
this if the input line is high and the interrupt is not Active.
(This required behaviour is differently and perhaps more clearly
stated in the v7M Arm ARM, which has pseudocode in section B3.4.1
that implies it.)
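
In code terms the fix is a small guard in the ICPRn write path (a sketch, not
the exact QEMU code):

    /* Sketch: ignore the pending-clear request while the interrupt sits in
     * condition A (input line high and the interrupt not Active). */
    if (value & (1 << bit)) {
        VecInfo *vec = &s->vectors[startvec + bit];

        if (!(vec->level && !vec->active)) {
            vec->pending = false;
        }
    }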

Reported-by: Igor Kotrasiński <i.kotrasinsk@samsung.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220628154724.3297442-1-peter.maydell@linaro.org

* target/arm: Fill in VL for tbflags when SME enabled and SVE disabled

When PSTATE.SM, VL = SVL even if SVE is disabled.
This is visible in kselftest ssve-test.

Reported-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Fix aarch64_sve_change_el for SME

We were only checking for SVE disabled and not taking into
account PSTATE.SM to check SME disabled, which resulted in
vectors being incorrectly truncated.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220713045848.217364-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* linux-user/aarch64: Do not clear PROT_MTE on mprotect

The documentation for PROT_MTE says that it cannot be cleared
by mprotect.  Further, the implementation of the VM_ARCH_CLEAR bit
contains PROT_BTI, confirming that bit should be cleared.

Introduce PAGE_TARGET_STICKY to allow target/arch/cpu.h to control
which bits may be reset during page_set_flags.  This is sort of the
opposite of VM_ARCH_CLEAR, but works better with qemu's PAGE_* bits
that are separate from PROT_* bits.

Reported-by: Vitaly Buka <vitalybuka@google.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220711031420.17820-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Define and use new regime_tcr_value() function

The regime_tcr() function returns a pointer to a struct TCR
corresponding to the TCR controlling a translation regime.  The
struct TCR has the raw value of the register, plus two fields mask
and base_mask which are used as a small optimization in the case of
32-bit short-descriptor lookups.  Almost all callers of regime_tcr()
only want the raw register value.  Define and use a new
regime_tcr_value() function which returns only the raw 64-bit
register value.

This is a preliminary to removing the 32-bit short descriptor
optimization -- it only saves a handful of bit operations, which is
tiny compared to the overhead of doing a page table walk at all, and
the TCR struct is awkward and makes fixing
https://gitlab.com/qemu-project/qemu/-/issues/1103 unnecessarily
difficult.
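
The new helper is essentially a thin accessor over the existing one; roughly:

    /* Return just the raw 64-bit TCR value for the given translation regime. */
    static inline uint64_t regime_tcr_value(CPUARMState *env, ARMMMUIdx mmu_idx)
    {
        return regime_tcr(env, mmu_idx)->raw_tcr;
    }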

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-2-peter.maydell@linaro.org

* target/arm: Calculate mask/base_mask in get_level1_table_address()

In get_level1_table_address(), instead of using precalculated values
of mask and base_mask from the TCR struct, calculate them directly
(in the same way we currently do in vmsa_ttbcr_raw_write() to
populate the TCR struct fields).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-3-peter.maydell@linaro.org

* target/arm: Fold regime_tcr() and regime_tcr_value() together

The only caller of regime_tcr() is now regime_tcr_value(); fold the
two together, and use the shorter and more natural 'regime_tcr'
name for the new function.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-4-peter.maydell@linaro.org

* target/arm: Fix big-endian host handling of VTCR

We have a bug in our handling of accesses to the AArch32 VTCR
register on big-endian hosts: we were not adjusting the part of the
uint64_t field within TCR that the generated code would access.  That
can be done with offsetoflow32(), by using an ARM_CP_STATE_BOTH cpreg
struct, or by defining a full set of read/write/reset functions --
the various other TCR cpreg structs used one or another of those
strategies, but for VTCR we did not, so on a big-endian host VTCR
accesses would touch the wrong half of the register.

Use offsetoflow32() in the VTCR register struct.  This works even
though the field in the CPU struct is currently a struct TCR, because
the first field in that struct is the uint64_t raw_tcr.

None of the other TCR registers have this bug -- either they are
AArch64 only, or else they define resetfn, writefn, etc, and
expect to be passed the full struct pointer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-5-peter.maydell@linaro.org

* target/arm: Store VTCR_EL2, VSTCR_EL2 registers as uint64_t

Change the representation of the VSTCR_EL2 and VTCR_EL2 registers in
the CPU state struct from struct TCR to uint64_t.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-6-peter.maydell@linaro.org

* target/arm: Store TCR_EL* registers as uint64_t

Change the representation of the TCR_EL* registers in the CPU state
struct from struct TCR to uint64_t.  This allows us to drop the
custom vmsa_ttbcr_raw_write() function, moving the "enforce RES0"
checks to their more usual location in the writefn
vmsa_ttbcr_write().  We also don't need the resetfn any more.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-7-peter.maydell@linaro.org

* target/arm: Honour VTCR_EL2 bits in Secure EL2

In regime_tcr() we return the appropriate TCR register for the
translation regime.  For Secure EL2, we return the VSTCR_EL2 value,
but in this translation regime some fields that control behaviour are
in VTCR_EL2.  When this code was originally written (as the comment
notes), QEMU didn't care about any of those fields, but we have since
added support for features such as LPA2 which do need the values from
those fields.

Synthesize a TCR value by merging in the relevant VTCR_EL2 fields to
the VSTCR_EL2 value.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1103
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220714132303.1287193-8-peter.maydell@linaro.org

* hw/adc: Fix CONV bit in NPCM7XX ADC CON register

The correct bit for the CONV bit in NPCM7XX ADC is bit 13. This patch
fixes that in the module, and also lowers the IRQ when the guest
is done handling an interrupt event from the ADC module.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Patrick Venture <venture@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220714182836.89602-4-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* hw/adc: Make adci[*] R/W in NPCM7XX ADC

Our sensor test requires both reading and writing from a sensor's
QOM property. So we need to make the input of ADC module R/W instead
of write only for that to work.

Signed-off-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Titus Rwantare <titusr@google.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220714182836.89602-5-wuhaotsh@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* target/arm: Don't set syndrome ISS for loads and stores with writeback

The architecture requires that for faults on loads and stores which
do writeback, the syndrome information does not have the ISS
instruction syndrome information (i.e. ISV is 0).  We got this wrong
for the load and store instructions covered by disas_ldst_reg_imm9().
Calculate iss_valid correctly so that if the insn is a writeback one
it is false.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1057
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220715123323.1550983-1-peter.maydell@linaro.org

* Align Raspberry Pi DMA interrupts with Linux DTS

There is nothing in the specs on DMA engine interrupt lines: it should have
been in the "BCM2835 ARM Peripherals" datasheet but the appropriate
"ARM peripherals interrupt table" (p.113) is nearly empty.

All Raspberry Pi models 1-3 (based on bcm2835) have
Linux device tree (arch/arm/boot/dts/bcm2835-common.dtsi +25):

    /* dma channel 11-14 share one irq */

This information is repeated in the driver code
(drivers/dma/bcm2835-dma.c +1344):

    /*
     * in case of channel >= 11
     * use the 11th interrupt and that is shared
     */

In this patch channels 0--10 and 11--14 are handled separately.
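
A sketch of the resulting wiring (the name of the shared interrupt constant
is illustrative):

    /* Sketch: channels 0-10 get dedicated IRQs, channels 11-14 share one. */
    for (n = 0; n <= 10; n++) {
        sysbus_connect_irq(SYS_BUS_DEVICE(&s->dma), n,
                           qdev_get_gpio_in_named(DEVICE(&s->ic),
                                                  BCM2835_IC_GPU_IRQ,
                                                  INTERRUPT_DMA0 + n));
    }
    for (n = 11; n <= 14; n++) {
        sysbus_connect_irq(SYS_BUS_DEVICE(&s->dma), n,
                           qdev_get_gpio_in_named(DEVICE(&s->ic),
                                                  BCM2835_IC_GPU_IRQ,
                                                  INTERRUPT_DMA_SHARED));
    }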

Signed-off-by: Andrey Makarov <andrey.makarov@auriga.com>
Message-id: 20220716113210.349153-1-andrey.makarov@auriga.com
[PMM: fixed checkpatch nits]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* monitor: add support for boolean statistics

The next version of Linux will introduce boolean statistics, which
can only have 0 or 1 values.  Support them in the schema and in
the HMP command.

Suggested-by: Amneesh Singh <natto@weirdnatto.in>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* kvm: add support for boolean statistics

The next version of Linux will introduce boolean statistics, which
can only have 0 or 1 values.  Convert them to the new QAPI fields
added in the previous commit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* ppc64: Allocate IRQ lines with qdev_init_gpio_in()

This replaces the IRQ array 'irq_inputs' with GPIO lines, the goal
being to remove 'irq_inputs' when all CPUs have been converted.
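
For reference, the conversion pattern is roughly the following (shown for one
CPU family; the handler keeps its existing signature):

    /* Before: a heap-allocated qemu_irq array hung off the CPU env. */
    env->irq_inputs = (void **)qemu_allocate_irqs(ppc970_set_irq, cpu,
                                                  PPC970_INPUT_NB);

    /* After: the input lines become GPIOs owned by the CPU device, which
     * board code can fetch with qdev_get_gpio_in(DEVICE(cpu), n). */
    qdev_init_gpio_in(DEVICE(cpu), ppc970_set_irq, PPC970_INPUT_NB);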

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220705145814.461723-2-clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* ppc/40x: Allocate IRQ lines with qdev_init_gpio_in()

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220705145814.461723-3-clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* ppc/6xx: Allocate IRQ lines with qdev_init_gpio_in()

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Acked-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220705145814.461723-4-clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* ppc/e500: Allocate IRQ lines with qdev_init_gpio_in()

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220705145814.461723-5-clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* ppc: Remove unused irq_inputs

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220705145814.461723-6-clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* hw/ppc: pass random seed to fdt

If the FDT contains /chosen/rng-seed, then the Linux RNG will use it to
initialize early. Set this using the usual guest random number
generation function. This is confirmed to successfully initialize the
RNG on Linux 5.19-rc6. The rng-seed node is part of the DT spec. Set
this on the paravirt platforms, spapr and e500, just as is done on other
architectures with paravirt hardware.
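
The mechanism itself is small; roughly:

    /* Sketch: seed the guest kernel's RNG through the device tree. */
    uint8_t rng_seed[32];

    qemu_guest_getrandom_nofail(rng_seed, sizeof(rng_seed));
    qemu_fdt_setprop(fdt, "/chosen", "rng-seed", rng_seed, sizeof(rng_seed));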

Cc: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712135114.289855-1-Jason@zx2c4.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc/kvm: Skip current and parent directories in kvmppc_find_cpu_dt

Some systems have /proc/device-tree/cpus/../clock-frequency. However,
this is not the expected path for a CPU device tree directory.

Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712210810.35514-1-muriloo@linux.ibm.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Fix gen_priv_exception error value in mfspr/mtspr

The code in linux-user/ppc/cpu_loop.c expects POWERPC_EXCP_PRIV
exception with error POWERPC_EXCP_PRIV_OPC or POWERPC_EXCP_PRIV_REG,
while POWERPC_EXCP_INVAL_SPR is expected in POWERPC_EXCP_INVAL
exceptions. This mismatch caused an EXCP_DUMP with the message "Unknown
privilege violation (03)", as seen in [1].

[1] https://gitlab.com/qemu-project/qemu/-/issues/588

Fixes: 9b2fadda3e01 ("ppc: Rework generation of priv and inval interrupts")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/588
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20220627141104.669152-2-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: fix exception error value in slbfee

Testing on a POWER9 DD2.3, we observed that the Linux kernel delivers a
signal with si_code ILL_PRVOPC (5) when a userspace application tries to
use slbfee. To obtain this behavior on linux-user, we should use
POWERPC_EXCP_PRIV with POWERPC_EXCP_PRIV_OPC.

No functional change is intended for softmmu targets as
gen_hvpriv_exception uses the same 'exception' argument
(POWERPC_EXCP_HV_EMU) for raise_exception_*, and the powerpc_excp_*
methods do not use lower bits of the exception error code when handling
POWERPC_EXCP_{INVAL,PRIV}.

Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-3-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: remove mfdcrux and mtdcrux

The only PowerPC implementations with these insns were the 460 and 460F,
which had their definitions removed in [1].

[1] 7ff26aa6c657 ("target/ppc: Remove unused PPC 460 and 460F definitions")

Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220627141104.669152-4-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: fix exception error code in helper_{load, store}_dcr

POWERPC_EXCP_INVAL should only be or-ed with other constants prefixed
with POWERPC_EXCP_INVAL_. Also, take the opportunity to move both
helpers under #if !defined(CONFIG_USER_ONLY) as the instructions that
use them are privileged.

No functional change is intended, the lower 4 bits of the error code are
ignored by all powerpc_excp_* methods on POWERPC_EXCP_INVAL exceptions.

Reported-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-5-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: fix PMU Group A register read/write exceptions

A call to "gen_(hv)priv_exception" should use POWERPC_EXCP_PRIV_* as the
'error' argument instead of POWERPC_EXCP_INVAL_*, and POWERPC_EXCP_FU is
an exception type, not an exception error code. To correctly set
FSCR[IC], we should raise Facility Unavailable with this exception type
and IC value as the error code.

Fixes: 565cb1096733 ("target/ppc: add user read/write functions for MMCR0")
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-6-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: fix exception error code in spr_write_excp_vector

The 'error' argument of gen_inval_exception will be or-ed with
POWERPC_EXCP_INVAL, so it should always be a constant prefixed with
POWERPC_EXCP_INVAL_. No functional change is intended,
spr_write_excp_vector is only used by register_BookE_sprs, and
powerpc_excp_booke ignores the lower 4 bits of the error code on
POWERPC_EXCP_INVAL exceptions.

Also, take the opportunity to replace printf with qemu_log_mask.

Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220627141104.669152-7-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move tlbie[l] to decode tree

Also decode RIC, PRS and R operands.

Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-2-leandro.lupori@eldorado.org.br>
[danielhb: mark bit 31 in @X_tlbie pattern as ignored]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Implement ISA 3.00 tlbie[l]

This initial version supports the invalidation of one or all
TLB entries. Flush by PID/LPID, or based on process/partition
scope is not supported, because it would make using the
generic QEMU TLB implementation hard. In these cases, all
entries are flushed.

Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220712193741.59134-3-leandro.lupori@eldorado.org.br>
[danielhb: moved 'set' declaration to TLBIE_RIC_PWC block]
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: receive DisasContext explicitly in GEN_PRIV

GEN_PRIV and related CHK_* macros just assumed that variable named
"ctx" would be in scope when they are used, and that it would be a
pointer to DisasContext. Change these macros to receive the pointer
explicitly.

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-2-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: add macros to check privilege level

Equivalent to CHK_SV and CHK_HV, but can be used in decodetree methods.

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-3-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbie to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-4-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbieg to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-5-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbia to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-6-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbmte to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-7-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbmfev to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-8-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbmfee to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-9-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbfee to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-10-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Move slbsync to decodetree

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-11-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Implement slbiag

Reviewed-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Signed-off-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Message-Id: <20220701133507.740619-12-lucas.coutinho@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: check tb_env != 0 before printing TBU/TBL/DECR

When using "-machine none", env->tb_env is not allocated, causing the
segmentation fault reported in issue #85 (launchpad bug #811683). To
avoid this problem, check if the pointer != NULL before calling the
methods to print TBU/TBL/DECR.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/85
Signed-off-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220714172343.80539-1-matheus.ferst@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* ppc: Check partition and process table alignment

Check if partition and process tables are properly aligned, in
their size, according to PowerISA 3.1B, Book III 6.7.6 programming
note. Hardware and KVM also raise an exception in these cases.

Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-2-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Improve Radix xlate level validation

Check if the number and size of Radix levels are valid on
POWER9/POWER10 CPUs, according to the supported Radix Tree
Configurations described in their User Manuals.

Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com>
Message-Id: <20220628133959.15131-3-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* target/ppc: Check page dir/table base alignment

According to PowerISA 3.1B, Book III 6.7.6 programming note, the
page directory base addresses are expected to be aligned to their
size. Real hardware seems to rely on that and will access the
wrong address if they are misaligned. This results in a
translation failure even if the page tables seem to be properly
populated.

Signed-off-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20220628133959.15131-4-leandro.lupori@eldorado.org.br>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

* qga: treat get-guest-fsinfo as "best effort"

In some container environments, there may be references to block devices
witnessable from a container through /proc/self/mountinfo that reference
devices we simply don't have access to in the container, and cannot
provide information about.

Instead of failing the entire fsinfo command, return stub information
for these failed lookups.

This allows test-qga to pass under docker tests, which are in turn used
by the CentOS VM tests.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220708153503.18864-2-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: use 'cp' instead of 'ln' for temporary vm images

If the initial setup fails, you've permanently altered the state of the
downloaded image in an unknowable way. Use 'cp' like our other test
setup scripts do.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-3-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: switch CentOS 8 to CentOS 8 Stream

The old CentOS image didn't work anymore because it was already EOL at
the beginning of 2022.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-4-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: switch centos.aarch64 to CentOS 8 Stream

Switch this test over to using a cloud image like the base CentOS8 VM
test, which helps make this script a bit simpler too.

Note: At time of writing, this test seems pretty flaky when run without
KVM support for aarch64. Certain unit tests like migration-test,
virtio-net-failover, test-hmp and qom-test seem quite prone to fail
under TCG. Still, this is an improvement in that at least pure build
tests are functional.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-5-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: upgrade Ubuntu 18.04 VM to 20.04

18.04 has fallen out of our support window, so move ubuntu.aarch64
forward to ubuntu 20.04, which is now our oldest supported Ubuntu
release.

Notes:

This checksum changes periodically; use a fixed point image with a known
checksum so that the image isn't re-downloaded on every single
invocation. (The checksum for the 18.04 image was already incorrect at
the time of writing.)

Just like the centos.aarch64 test, this test currently seems very
flaky when run as a TCG test.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-6-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: remove ubuntu.i386 VM test

Ubuntu 18.04 is out of our support window, and Ubuntu 20.04 does not
support i386 anymore. The debian project does, but they do not provide
any cloud images for it, a new expect-style script would have to be
written.

Since we have i386 cross-compiler tests hosted on GitLab CI, we don't
need to support this VM test anymore.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-7-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: remove duplicate 'centos' VM test

This is listed twice by accident; we require genisoimage to run the
test, so remove the unconditional entry.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-8-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: add 1GB extra memory per core

If you try to run a 16 or 32 threaded test, you're going to run out of
memory very quickly with qom-test and a few others. Bump the memory
limit to try to scale with larger-core machines.

Granted, this means that a 16 core processor is going to ask for 16GB,
but you *probably* meet that requirement if you have such a machine.

512MB per core didn't seem to be enough to avoid ENOMEM and SIGABRTs in
the test cases in practice on a six core machine; so I bumped it up to
1GB which seemed to help.

Add this magic in early to the configuration process so that the
config file, if provided, can still override it.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-9-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/vm: Remove docker cross-compile test from CentOS VM

The fedora container has since been split apart, so there's no suitable
nearby target that would support "test-mingw" as it requires both x32
and x64 support -- so neither fedora-cross-win32 nor fedora-cross-win64
would be truly suitable.

Just remove this test as superfluous with our current CI infrastructure.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220708153503.18864-10-jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* qtest/machine-none: Add LoongArch support

Update the cpu_maps[] to support the LoongArch target.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220713020258.601424-1-gaosong@loongson.cn>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/unit: Replace g_memdup() by g_memdup2()

Per https://discourse.gnome.org/t/port-your-module-from-g-memdup-to-g-memdup2-now/5538

  The old API took the size of the memory to duplicate as a guint,
  whereas most memory functions take memory sizes as a gsize. This
  made it easy to accidentally pass a gsize to g_memdup(). For large
  values, that would lead to a silent truncation of the size from 64
  to 32 bits, and result in a heap area being returned which is
  significantly smaller than what the caller expects. This can likely
  be exploited in various modules to cause a heap buffer overflow.

Replace g_memdup() by the safer g_memdup2() wrapper.
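
A minimal hedged sketch of the safer pattern (the wrapper function and
its name are illustrative, not code from the patch):

    #include <glib.h>

    static void *dup_payload(const void *buf, gsize len)
    {
        /* g_memdup() took a guint and silently truncated a 64-bit len;
         * g_memdup2() takes a gsize, so the full size is preserved. */
        return g_memdup2(buf, len);
    }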

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210903174510.751630-24-philmd@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* Replace 'whitelist' with 'allow'

Let's use more inclusive language here and avoid terms
that are frowned upon nowadays.

Message-Id: <20220711095300.60462-1-thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* util: Fix broken build on Haiku

A recent commit moved some Haiku-specific code parts from oslib-posix.c
to cutils.c, but failed to move the corresponding header #include
statement, too, so "make vm-build-haiku.x86_64" is currently broken.
Fix it by moving the header #include, too.

Fixes: 06680b15b4 ("include: move qemu_*_exec_dir() to cutils")
Message-Id: <20220718172026.139004-1-thuth@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* python/qemu/qmp/legacy: Replace 'returns-whitelist' with the correct type

'returns-whitelist' has been renamed to 'command-returns-exceptions' in
commit b86df3747848 ("qapi: Rename pragma *-whitelist to *-exceptions").

Message-Id: <20220711095721.61280-1-thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* pl050: move PL050State from pl050.c to new pl050.h header file

This allows the QOM types in pl050.c to be used elsewhere by simply including
pl050.h.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-2-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: rename pl050_keyboard_init() to pl050_kbd_init()

This is for consistency with all of the other devices that use the PS2 keyboard
device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-3-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: change PL050State dev pointer from void to PS2State

This allows the compiler to enforce that the PS2 device pointer is always of
type PS2State. Update the name of the pointer from dev to ps2dev to emphasise
this type change.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-4-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: introduce new PL050_KBD_DEVICE QOM type

This will soon be used to hold the underlying PS2_KBD_DEVICE object.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-5-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: introduce new PL050_MOUSE_DEVICE QOM type

This will soon be used to hold the underlying PS2_MOUSE_DEVICE object.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-6-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: move logic from pl050_realize() to pl050_init()

The logic for initialising the register memory region and the sysbus output IRQ
does not depend upon any device properties and so can be moved from
pl050_realize() to pl050_init().

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-7-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: introduce PL050DeviceClass for the PL050 device

This will soon be used to store the reference to the PL050 parent device
for PL050_KBD_DEVICE and PL050_MOUSE_DEVICE.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-8-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: introduce pl050_kbd_class_init() and pl050_kbd_realize()

Introduce a new pl050_kbd_class_init() function containing a call to
device_class_set_parent_realize() which calls a new pl050_kbd_realize()
function to initialise the PS2 keyboard device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-9-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: introduce pl050_mouse_class_init() and pl050_mouse_realize()

Introduce a new pl050_mouse_class_init() function containing a call to
device_class_set_parent_realize() which calls a new pl050_mouse_realize()
function to initialise the PS2 mouse device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-10-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: don't use legacy ps2_kbd_init() function

Instantiate the PS2 keyboard device within PL050KbdState using
object_initialize_child() in pl050_kbd_init() and realize it in
pl050_kbd_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-11-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pl050: don't use legacy ps2_mouse_init() function

Instantiate the PS2 mouse device within PL050MouseState using
object_initialize_child() in pl050_mouse_init() and realize it in
pl050_mouse_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-12-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: don't use vmstate_register() in lasips2_realize()

Since lasips2 is a qdev device, vmstate_ps2_mouse can be registered using
the DeviceClass vmsd field instead.

Note that due to the use of the base parameter in the original vmstate_register()
function call, this is actually a migration break for the HPPA B160L machine.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-13-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: remove the qdev base property and the lasips2_properties array

The base property was only needed for use by vmstate_register() in order to
preserve migration compatibility. Now that the lasips2 migration state is
registered through the DeviceClass vmsd field, the base property and also
the lasips2_properties array can be removed completely as they are no longer
required.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-14-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: remove legacy lasips2_initfn() function

There is only one user of the legacy lasips2_initfn() function which is in
machine_hppa_init(), so inline its functionality into machine_hppa_init() and
then remove it.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-15-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: change LASIPS2State dev pointer from void to PS2State

This allows the compiler to enforce that the PS2 device pointer is always of
type PS2State. Update the name of the pointer from dev to ps2dev to emphasise
this type change.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-16-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: QOMify LASIPS2Port

This becomes an abstract QOM type which will be a parent type for separate
keyboard and mouse port types.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-17-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce new LASIPS2_KBD_PORT QOM type

This will soon be used to hold the underlying PS2_KBD_DEVICE object.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-18-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce new LASIPS2_MOUSE_PORT QOM type

This will soon be used to hold the underlying PS2_MOUSE_DEVICE object.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-19-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: move keyboard port initialisation to new lasips2_kbd_port_init() function

Move the initialisation of the keyboard port from lasips2_init() to
a new lasips2_kbd_port_init() function which will be invoked using
object_initialize_child() during the LASIPS2 device init.

Update LASIPS2State so that it now holds the new LASIPS2KbdPort child object and
ensure that it is realised in lasips2_realize().

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-20-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: move mouse port initialisation to new lasips2_mouse_port_init() function

Move the initialisation of the mouse port from lasips2_init() to
a new lasips2_mouse_port_init() function which will be invoked using
object_initialize_child() during the LASIPS2 device init.

Update LASIPS2State so that it now holds the new LASIPS2MousePort child object and
ensure that it is realised in lasips2_realize().

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-21-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce lasips2_kbd_port_class_init() and lasips2_kbd_port_realize()

Introduce a new lasips2_kbd_port_class_init() function which uses a new
lasips2_kbd_port_realize() function to initialise the PS2 keyboard device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-22-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce lasips2_mouse_port_class_init() and lasips2_mouse_port_realize()

Introduce a new lasips2_mouse_port_class_init() function which uses a new
lasips2_mouse_port_realize() function to initialise the PS2 mouse device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-23-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: rename LASIPS2Port irq field to birq

The existing boolean irq field in LASIPS2Port will soon be replaced by a proper
qemu_irq, so rename the field to birq to allow the upcoming qemu_irq to use the
irq name.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-24-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce port IRQ and new lasips2_port_init() function

Introduce a new lasips2_port_init() QOM init function for the LASIPS2_PORT type
and use it to initialise a new gpio for use as a port IRQ. Add a new qemu_irq
representing the gpio as a new irq field within LASIPS2Port.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-25-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: introduce LASIPS2PortDeviceClass for the LASIPS2_PORT device

This will soon be used to store the reference to the LASIPS2_PORT parent device
for LASIPS2_KBD_PORT and LASIPS2_MOUSE_PORT.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-26-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: add named input gpio to port for downstream PS2 device IRQ

The named input gpio is to be connected to the IRQ output of the downstream
PS2 device and used to drive the port IRQ. Initialise the named input gpio
in lasips2_port_init() and add new lasips2_port_class_init() and
lasips2_port_realize() functions to connect the PS2 device output gpio to
the new named input gpio.

Note that the reference to lasips2_port_realize() is stored in
LASIPS2PortDeviceClass but not yet used.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-27-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: add named input gpio to handle incoming port IRQs

The LASIPS2 device named input gpio is soon to be connected to the port output
IRQs. Add a new int_status field to LASIPS2State, a bitmap representing
the port input IRQ status, which will be enabled in the next patch.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20220712215251.7944-28-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: switch to using port-based IRQs

Now we can implement port-based IRQs by wiring the PS2 device IRQs to the
LASIPS2Port named input gpios rather than directly to the LASIPS2 device, and
generate the LASIPS2 output IRQ from the int_status bitmap representing the
individual port IRQs instead of the birq boolean.

This enables us to remove the separate PS2 keyboard and PS2 mouse named input
gpios from the LASIPS2 device and simplify the register implementation to
drive the port IRQ using qemu_set_irq() rather than accessing the LASIPS2
device IRQs directly. As a consequence the IRQ level logic in lasips2_set_irq()
can also be simplified accordingly.

For now this patch ignores adding the int_status bitmap and simply drops the
birq boolean from the vmstate_lasips2 VMStateDescription. This is because the
migration stream is already missing some required LASIPS2 fields, and as this
series already introduces a migration break for the lasips2 device it is
easiest to fix this in a follow-up patch.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20220712215251.7944-29-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: rename LASIPS2Port parent pointer to lasips2

This makes it clearer that the pointer is a reference to the LASIPS2 container
device rather than an implied part of the QOM hierarchy.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-30-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: standardise on lp name for LASIPS2Port variables

This is shorter to type and keeps the naming convention consistent within the
LASIPS2 device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-31-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: switch register memory region to DEVICE_BIG_ENDIAN

The LASI device (and so also the LASIPS2 device) is only used for the HPPA
B160L machine, which is a big-endian architecture.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-32-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: don't use legacy ps2_kbd_init() function

Instantiate the PS2 keyboard device within LASIPS2KbdPort using
object_initialize_child() in lasips2_kbd_port_init() and realize it in
lasips2_kbd_port_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-33-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: don't use legacy ps2_mouse_init() function

Instantiate the PS2 mouse device within LASIPS2MousePort using
object_initialize_child() in lasips2_mouse_port_init() and realize it in
lasips2_mouse_port_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-34-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* lasips2: update VMStateDescription for LASIPS2 device

Since this series has already introduced a migration break for the HPPA B160L
machine, we can use this opportunity to improve the VMStateDescription for
the LASIPS2 device.

Add the new int_status field to the VMStateDescription and remodel the ports
as separate VMSTATE_STRUCT instances representing each LASIPS2Port. Once this
is done, the migration stream can be updated to include buf and loopback_rbne
for each port (which is necessary since the values are accessed across separate
IO accesses), and drop the port id as this is hardcoded for each port type.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20220712215251.7944-35-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pckbd: introduce new vmstate_kbd_mmio VMStateDescription for the I8042_MMIO device

This enables us to register the VMStateDescription using the DeviceClass vmsd
property rather than having to call vmstate_register() from i8042_mmio_realize().

Note that this is a migration break for the MIPS magnum machine which is the only
user of the I8042_MMIO device.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-36-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pckbd: don't use legacy ps2_kbd_init() function

Instantiate the PS2 keyboard device within KBDState using
object_initialize_child() in i8042_initfn() and i8042_mmio_init() and realize
it in i8042_realizefn() and i8042_mmio_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-37-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* ps2: remove unused legacy ps2_kbd_init() function

Now that the legacy ps2_kbd_init() function is no longer used, it can be completely
removed along with its associated trace-event.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-38-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pckbd: don't use legacy ps2_mouse_init() function

Instantiate the PS2 mouse device within KBDState using
object_initialize_child() in i8042_initfn() and i8042_mmio_init() and realize
it in i8042_realizefn() and i8042_mmio_realize() accordingly.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-39-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* ps2: remove unused legacy ps2_mouse_init() function

Now that the legacy ps2_mouse_init() function is no longer used, it can be completely
removed along with its associated trace-event.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-40-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* pckbd: remove legacy i8042_mm_init() function

This legacy function is only used during the initialisation of the MIPS magnum
machine, so inline its functionality directly into mips_jazz_init() and then
remove it.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Helge Deller <deller@gmx.de>
Acked-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20220712215251.7944-41-mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

* util: Fix broken build on Haiku

A recent commit moved some Haiku-specific code parts from oslib-posix.c
to cutils.c, but failed to move the corresponding header #include
statement, too, so "make vm-build-haiku.x86_64" is currently broken.
Fix it by moving the header #include, too.

Fixes: 06680b15b4 ("include: move qemu_*_exec_dir() to cutils")
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220718172026.139004-1-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* target/s390x: fix handling of zeroes in vfmin/vfmax

vfmin_res() / vfmax_res() are trying to check whether a and b are both
zeroes, but in reality they check that they are the same kind of zero.
This causes incorrect results when comparing positive and negative
zeroes.
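
A hedged sketch of the distinction, using generic softfloat helpers
rather than the actual vfmin_res()/vfmax_res() internals:

    /* buggy: only true when a and b are the *same kind* of zero */
    static bool same_kind_of_zero(float64 a, float64 b)
    {
        return float64_is_zero(a) && float64_is_zero(b) &&
               float64_is_neg(a) == float64_is_neg(b);
    }

    /* intended: true for any combination of +0.0 and -0.0 */
    static bool both_zero(float64 a, float64 b)
    {
        return float64_is_zero(a) && float64_is_zero(b);
    }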

Fixes: da4807527f3b ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)")
Co-developed-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220713182612.3780050-2-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* target/s390x: fix NaN propagation rules

s390x has the same NaN propagation rules as ARM, and not as x86.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20220713182612.3780050-3-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* tests/tcg/s390x: test signed vfmin/vfmax

Add a test to prevent regressions. Try all floating point value sizes
and all combinations of floating point value classes. Verify the results
against PoP tables, which are represented as close to the original as
possible - this produces a lot of checkpatch complaints, but it seems
to be justified in this case.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220713182612.3780050-4-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>

* dbus-display: fix test race when initializing p2p connection

The D-Bus connection starts processing messages before QEMU has had time
to set up the object manager server. This causes dbus-display-test to
fail randomly with:

ERROR:../tests/qtest/dbus-display-test.c:68:test_dbus_display_vm:
assertion failed
(qemu_dbus_display1_vm_get_name(QEMU_DBUS_DISPLAY1_VM(vm)) ==
"dbus-test"): (NULL == "dbus-test") ERROR

Use the delayed message processing flag and method to avoid that
situation.

(the bus connection doesn't need a fix, as the initialization is done
synchronously)
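
A hedged sketch of the GDBus pattern being relied on here (the helper
and its surroundings are illustrative, not the actual ui/dbus.c code):

    static GDBusConnection *p2p_connect(GIOStream *stream, GError **errp)
    {
        GDBusConnection *conn = g_dbus_connection_new_sync(
            stream, NULL,
            G_DBUS_CONNECTION_FLAGS_DELAY_MESSAGE_PROCESSING,
            NULL, NULL, errp);

        /* ... export the object manager server on conn here ... */

        g_dbus_connection_start_message_processing(conn);
        return conn;
    }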

Reported-by: Robinson, Cole <crobinso@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Tested-by: Cole Robinson <crobinso@redhat.com>
Message-Id: <20220609152647.870373-1-marcandre.lureau@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* microvm: turn off io reservations for pcie root ports

The pcie host bridge has no io window on microvm,
so io reservations will not work.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20220701091516.43489-1-kraxel@redhat.com>

* usb/hcd-xhci: check slotid in xhci_wakeup_endpoint()

This prevents an OOB read (followed by an assertion failure in
xhci_kick_ep) when slotid > xhci->numslots.
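
A hedged sketch of the kind of guard being added (exact placement and
error handling inside xhci_wakeup_endpoint() may differ):

    if (slotid > xhci->numslots) {
        return;
    }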

Reported-by: Soul Chen <soulchen8650@gmail.com>
Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com>
Message-Id: <20220705174734.2348829-1-mcascell@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* usb: document guest-reset and guest-reset-all

Suggested-by: Michal Prívozník <mprivozn@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <20220711094437.3995927-2-kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* usb: document pcap (aka usb traffic capture)

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20220711094437.3995927-3-kraxel@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* gtk: Add show_tabs=on|off command line option.

The patch adds a "show_tabs" command line option for the GTK UI, similar
to "grab_on_hover". This option means that the tabbed view mode no longer
has to be enabled by hand at each start of the VM.

Signed-off-by: Felix "xq" Queißner <xq@random-projects.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Hanna Reitz <hreitz@redhat.com>
Message-Id: <20220712133753.18937-1-xq@random-projects.net>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

* tests/docker/dockerfiles: Add debian-loongarch-cross.docker

Use the pre-packaged toolchain provided by Loongson via github.

Tested-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220704070824.965429-1-richard.henderson@linaro.org>

* target/loongarch: Fix loongarch_cpu_class_by_name

The cpu_model argument may already have the '-loongarch-cpu' suffix,
e.g. when using the default for the LS7A1000 machine.  If that fails,
try again with the suffix.  Validate that the object created by the
function is derived from the proper base class.

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-2-yangxiaojuan@loongson.cn>
[rth: Try without and then with the suffix, to avoid testsuite breakage.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/intc/loongarch_pch_pic: Fix bugs for update_irq function

Fix the following errors:
1. We should not use the 'unsigned long' type as an argument when we use
find_first_bit(), so use ctz64() to replace find_first_bit() and fix
this bug.
2. It is not standard to use '1ULL << irq' to generate an IRQ mask,
so replace it with 'MAKE_64BIT_MASK(irq, 1)'.

Fix coverity CID: 1489761 1489764 1489765
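
A hedged sketch of the two replacements (the helper is illustrative):

    static uint64_t first_irq_mask(uint64_t pending)
    {
        int irq = ctz64(pending);            /* instead of find_first_bit() */
        return MAKE_64BIT_MASK(irq, 1);      /* instead of '1ULL << irq' */
    }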

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220715060740.1500628-3-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/loongarch/cpu: Fix coverity errors about excp_names

Fix out-of-bounds errors when accessing the excp_names[] array. The
valid index range of excp_names is 0 to ARRAY_SIZE(excp_names)-1, but
the existing code did not take the upper bound into account.

Fix coverity CID: 1489758
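
A hedged sketch of the kind of bounds check being added (the fallback
string is an assumption for illustration):

    const char *name = "unknown";
    if (exception >= 0 && exception < ARRAY_SIZE(excp_names)) {
        name = excp_names[exception];
    }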

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/loongarch/tlb_helper: Fix coverity integer overflow error

Replace '1 << shift' with 'MAKE_64BIT_MASK(shift, 1)' to fix
unintentional integer overflow errors in tlb_helper file.

Fix coverity CID: 1489759 1489762

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-5-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/loongarch/op_helper: Fix coverity cond_at_most error

The valid index range of the cpucfg array is 0 to ARRAY_SIZE(cpucfg)-1,
so accessing cpucfg[] with an index beyond that upper bound must be
forbidden.

Fix coverity CID: 1489760
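
A hedged sketch of the guard (the helper name and the out-of-range
return value are assumptions for illustration):

    uint64_t read_cpucfg(CPULoongArchState *env, uint64_t index)
    {
        return index >= ARRAY_SIZE(env->cpucfg) ? 0 : env->cpucfg[index];
    }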

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715060740.1500628-6-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/loongarch/cpu: Fix cpucfg default value

We should configure cpucfg[20] to set the values for the scache's ways,
sets, and size arguments when the LoongArch CPU is initialised. However,
the old code wrote the 'sets' argument twice, so change one of them to
the 'size' argument.

Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220715064829.1521482-1-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* fpu/softfloat: Add LoongArch specializations for pickNaN*

The muladd (inf,zero,nan) case sets InvalidOp and returns the
input value 'c', preferring sNaN over qNaN, in c,a,b order.
Binary operations prefer sNaN over qNaN, in a,b order.
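
A hedged sketch of the muladd selection rule described above (the
function name is illustrative; return values 0/1/2 select a/b/c):

    static int pick_nan_muladd_rule(FloatClass a_cls, FloatClass b_cls,
                                    FloatClass c_cls)
    {
        /* prefer sNaN over qNaN, in c, a, b order */
        if (is_snan(c_cls)) { return 2; }
        if (is_snan(a_cls)) { return 0; }
        if (is_snan(b_cls)) { return 1; }
        if (is_qnan(c_cls)) { return 2; }
        if (is_qnan(a_cls)) { return 0; }
        return 1;
    }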

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-3-gaosong@loongson.cn>
[rth: Add specialization for pickNaN]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* target/loongarch: Fix float_convd/float_convs test failing

We should return zero when the invalid exception is raised and the
operand is a NaN.

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-4-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add float reference files

Generated on Loongson-3A5000 (CPU revision 0x0014c011).

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20220104132022.2146857-1-f4bug@amsat.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-2-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add clo related instructions test

This includes:
- CL{O/Z}.{W/D}
- CT{O/Z}.{W/D}

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-5-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add div and mod related instructions test

This includes:
- DIV.{W[U]/D[U]}
- MOD.{W[U]/D[U]}

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-6-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add fclass test

This includes:
- FCLASS.{S/D}

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-7-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add fp comparison instructions test

Choose some instructions to test:
- FCMP.cond.S
- cond: ceq clt cle cne seq slt sle sne

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-8-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* tests/tcg/loongarch64: Add pcadd related instructions test

This includes:
- PCADDI
- PCADDU12I
- PCADDU18I
- PCALAU12I

Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20220716085426.3098060-9-gaosong@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add fw_cfg table support

Add fw_cfg table for loongarch virt machine, including memmap table.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-2-yangxiaojuan@loongson.cn>
[rth: Replace fprintf with assert; drop unused return value;
      initialize reserved slot to zero.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add uefi bios loading support

Add UEFI BIOS loading support; currently only a UEFI BIOS has been
ported to the loongarch virt machine.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-3-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add linux kernel booting support

There are two ways to start the system from a kernel file. If a BIOS
option is given, the system boots from the loaded BIOS file; otherwise it
boots from hardcoded auxiliary code and jumps to the kernel ELF entry.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-4-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add smbios support

Add SMBIOS support for the loongarch virt machine, and put the tables
into the fw_cfg table so that the BIOS can parse them quickly. The SMBIOS
spec is available at https://www.dmtf.org/dsp/DSP0134; the version used
is 3.6.0.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-5-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add acpi ged support

The loongarch virt machine uses the generic hardware-reduced ACPI method
rather than an LS7A ACPI device. Currently only the power management
function is used in the ACPI GED device; memory hotplug will be added
later. ACPI tables such as RSDP/RSDT/FADT etc. are also added.

The ACPI table changes have been submitted to the ACPI spec and will be
released soon.

Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-6-yangxiaojuan@loongson.cn>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* hw/loongarch: Add fdt support

Add a LoongArch flattened device tree, adding a CPU device node, a
firmware cfg node and a PCIe node into it, and create an FDT ROM memory
region. The FDT info is not complete yet, since only the UEFI BIOS uses
the FDT; the Linux kernel does not. The LoongArch Linux kernel uses the
ACPI tables, which are complete in the QEMU virt machine.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Message-Id: <20220712083206.4187715-7-yangxiaojuan@loongson.cn>
[rth: Set TARGET_NEED_FDT, add fdt to meson.build]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

* Hexagon (target/hexagon) fix store w/mem_noshuf & predicated load

Call the CHECK_NOSHUF macro multiple times: once in the
fGEN_TCG_PRED_LOAD() and again in fLOAD().

Before this commit, a packet with a store and a predicated
load with mem_noshuf that gets encoded like this:

    { P0 = cmp.eq(R17,#0x0)
      memw(R18+#0x0) = R2
      if (!P0.new) R3 = memw(R17+#0x4) }

... would end up generating a branch over both the load
and the store like so:

    ...
    brcond_i32 loc17,$0x0,eq,$L1
    mov_i32 loc18,store_addr_1
    qemu_st_i32 store_val32_1,store_addr_1,leul,0
    qemu_ld_i32 loc16,loc7,leul,0
    set_label $L1
    ...

Test cases added to tests/tcg/hexagon/mem_noshuf.c

Co-authored-by: Taylor Simpson <tsimpson@quicinc.com>
Signed-off-by: Brian Cain <bcain@quicinc.com>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-2-tsimpson@quicinc.com>

* Hexagon (target/hexagon) fix bug in mem_noshuf load exception

The semantics of a mem_noshuf packet are that the store effectively
happens before the load.  However, in cases where the load raises an
exception, we cannot simply execute the store first.

This change adds a probe to check that the load will not raise an
exception before executing the store.

If the load is predicated, this requires special handling.  We check
the condition before performing the probe.  Since we need the EA to
perform the check, we move the GET_EA portion inside CHECK_NOSHUF_PRED.

Test case added in tests/tcg/hexagon/mem_noshuf_exception.c

Suggested-by: Alessandro Di Federico <ale@rev.ng>
Suggested-by: Anton Johansson <anjo@rev.ng>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20220707210546.15985-3-tsimpson@quicinc.com>

* vhost: move descriptor translation to vhost_svq_vring_write_descs

It's done for both in and out descriptors so it's better placed here.

Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* virtio-net: Expose MAC_TABLE_ENTRIES

vhost-vdpa control virtqueue needs to know the maximum entries supported
by the virtio-net device, so we know if it is possible to apply the
filter.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* virtio-net: Expose ctrl virtqueue logic

This allows external vhost-net devices to modify the state of the
VirtIO device model once the vhost-vdpa device has acknowledged the
control commands.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Avoid compiler to squash reads to used idx

In the next patch we will allow busypolling of this value. From the
compiler's point of view there is a path where shadow_used_idx,
last_used_idx, and the vring used idx are not modified by the
busypolling thread itself.

This was not an issue before since we always cleared the device event
notifier before checking it, and that could act as a memory barrier.
However, the busypoll needs something similar to the kernel's READ_ONCE.

Let's add it here, separated from the polling.
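
A hedged sketch of a READ_ONCE-style accessor for the used index;
whether the actual patch uses a volatile cast or an atomic helper, the
point is to force a fresh load on every poll iteration:

    static uint16_t read_once_u16(const uint16_t *p)
    {
        /* keep the compiler from caching the device-written index */
        return *(volatile const uint16_t *)p;
    }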

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Reorder vhost_svq_kick

Future code needs to call it from vhost_svq_add.

No functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Move vhost_svq_kick call to vhost_svq_add

The series needs to expose vhost_svq_add with full functionality,
including the kick.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Check for queue full at vhost_svq_add

The series needs to expose vhost_svq_add with full functionality,
including checking for a full queue.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Decouple vhost_svq_add from VirtQueueElement

VirtQueueElement comes from the guest, but we're moving SVQ towards
being able to modify the element presented to the device without the
guest's knowledge.

To do so, make SVQ accept sg buffers directly, instead of using
VirtQueueElement.

Add vhost_svq_add_element to maintain element convenience.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Add SVQDescState

This will allow SVQ to add context to the different queue elements.

This patch only stores the actual element; no functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Track number of descs in SVQDescState

A guest buffer that is contiguous in GPA may need multiple descriptors
in qemu's VA, so SVQ should track its length separately.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: add vhost_svq_push_elem

This function allows external SVQ users to return guest's available
buffers.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Expose vhost_svq_add

This allows external parts of SVQ to forward custom buffers to the
device.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: add vhost_svq_poll

It allows the Shadow Control VirtQueue to wait for the device to use the
available buffers.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost: Add svq avail_handler callback

This allows external handlers to be aware of new buffers that the guest
places in the virtqueue.

When this callback is defined the ownership of the guest's virtqueue
element is transferred to the callback. This means that if the user
wants to forward the descriptor it needs to manually inject it. The
callback is also free to process the command by itself and use the
element with svq_push.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Export vhost_vdpa_dma_map and unmap calls

Shadow CVQ will copy buffers on qemu VA, so we avoid TOCTOU attacks from
the guest that could set a different state in qemu device model and vdpa
device.

To do so, it needs to be able to map these new buffers to the device.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vhost-net-vdpa: add stubs for when no virtio-net device is present

net/vhost-vdpa.c will need functions that are declared in
vhost-shadow-virtqueue.c, which in turn needs functions from virtio-net.c.

Copy the vhost-vdpa-stub.c code so that
only the constructor net_init_vhost_vdpa needs to be defined.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: manual forward CVQ buffers

Do a simple forwarding of CVQ buffers, the same work SVQ could do but
through callbacks. No functional change intended.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Buffer CVQ support on shadow virtqueue

Introduce the control virtqueue support for vDPA shadow virtqueue. This
is needed for advanced networking features like rx filtering.

Virtio-net control VQ copies the descriptors to qemu's VA, so we avoid
TOCTOU with the guest's or device's memory every time there is a device
model change.  Otherwise, the guest could change the memory contents in
the window between qemu reading them and the device reading them.

To demonstrate command handling, VIRTIO_NET_F_CTRL_MACADDR is
implemented.  If the virtio-net driver changes MAC the virtio-net device
model will be updated with the new one, and a rx filtering change event
will be raised.

More cvq commands could be added here straightforwardly but they have
not been tested.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs

Knowing the device features is needed for the CVQ SVQ, so SVQ knows
whether it can handle all commands or not. Extract this from
vhost_vdpa_get_max_queue_pairs so we can reuse it.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Add device migration blocker

Since the vhost-vdpa device is exposing _F_LOG, add a migration blocker
if it uses CVQ.

However, qemu is able to migrate simple devices with no CVQ as long as
they use SVQ. To allow this, add a placeholder error to vhost_vdpa, and
only add it to vhost_dev when used. The vhost_dev machinery places the
migration blocker if needed.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* vdpa: Add x-svq to NetdevVhostVDPAOptions

Finally offering the possibility to enable SVQ from the command line.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* softmmu/runstate.c: add RunStateTransition support from COLO to PRELAUNCH

If the checkpoint occurs when the guest finishes restarting
but has not started running, the runstate_set() may reject
the transition from COLO to PRELAUNCH with the crash log:

{"timestamp": {"seconds": 1593484591, "microseconds": 26605},\
"event": "RESET", "data": {"guest": true, "reason": "guest-reset"}}
qemu-system-x86_64: invalid runstate transition: 'colo' -> 'prelaunch'

Long-term testing says that it's pretty safe.
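
A hedged sketch of the new entry in the runstate transition table in
softmmu/runstate.c:

    { RUN_STATE_COLO, RUN_STATE_PRELAUNCH },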

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* net/colo: Fix a "double free" crash to clear the conn_list

We noticed that QEMU may crash when the guest has too many
incoming network connections, with the following log:

15197@1593578622.668573:colo_proxy_main : colo proxy connection hashtable full, clear it
free(): invalid pointer
[1]    15195 abort (core dumped)  qemu-system-x86_64 ....

This is because we create the s->connection_track_table with
g_hash_table_new_full() which is defined as:

GHashTable * g_hash_table_new_full (GHashFunc hash_func,
                       GEqualFunc key_equal_func,
                       GDestroyNotify key_destroy_func,
                       GDestroyNotify value_destroy_func);

The fourth parameter connection_destroy() will be called to free the
memory allocated for all 'Connection' values in the hashtable when
we call g_hash_table_remove_all() in the connection_hashtable_reset().

But both connection_track_table and conn_list refer to the same conn
instance, which triggers a double free when conn_list is cleared. So this
patch removes the free action on the hash table side to avoid
double-freeing the conn.
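
A hedged sketch of the resulting hash table creation, with ownership of
the Connection values left to conn_list (the key hash/equal/destroy
arguments follow the prototype quoted above and the existing net/colo.c
helpers):

    s->connection_track_table = g_hash_table_new_full(connection_key_hash,
                                                      connection_key_equal,
                                                      g_free,
                                                      NULL);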

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* net/colo.c: No need to track conn_list for filter-rewriter

Filter-rewriter does not need to track connections in conn_list.
This patch fixes the glib g_queue_is_empty assertion when a COLO guest
keeps a lot of network connections.

Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* net/colo.c: fix segmentation fault when packet is not parsed correctly

When COLO uses the vnet_hdr_support parameter on only one side of
filter-redirector and filter-mirror (or colo-compare), COLO will crash
with a segmentation fault. Backtrace as follows:

Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
0x0000555555cb200b in eth_get_l2_hdr_length (p=0x0)
    at /home/tao/project/COLO/colo-qemu/include/net/eth.h:296
296         uint16_t proto = be16_to_cpu(PKT_GET_ETH_HDR(p)->h_proto);
(gdb) bt
0  0x0000555555cb200b in eth_get_l2_hdr_length (p=0x0)
    at /home/tao/project/COLO/colo-qemu/include/net/eth.h:296
1  0x0000555555cb22b4 in parse_packet_early (pkt=0x555556a44840) at
net/colo.c:49
2  0x0000555555cb2b91 in is_tcp_packet (pkt=0x555556a44840) at
net/filter-rewriter.c:63

So a wrong vnet_hdr_len will cause pkt->data to become NULL. Add a check
to raise an error and add trace-events to track vnet_hdr_len.

Signed-off-by: Tao Xu <tao3.xu@intel.com>
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>

* accel/kvm/kvm-all: Refactor per-vcpu dirty ring reaping

Add a non-required argument 'CPUState' to kvm_dirty_ring_reap so
that it can cover single vcpu dirty-ring-reaping scenario.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <c32001242875e83b0d9f78f396fe2dcd380ba9e8.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* cpus: Introduce cpu_list_generation_id

Introduce cpu_list_generation_id to track the cpu list generation so
that cpu hotplug/unplug can be detected during measurement of the
dirty page rate.

cpu_list_generation_id can be used to detect changes to the cpu list,
in preparation for dirty page rate measurement.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <06e1f1362b2501a471dce796abb065b04f320fa5.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration/dirtyrate: Refactor dirty page rate calculation

- Abstract out the dirty log change logic into the function
  global_dirty_log_change.

- Abstract out the dirty page rate calculation logic via dirty-ring
  into the function vcpu_calculate_dirtyrate.

- Abstract out the mathematical dirty page rate calculation into
  do_calculate_dirtyrate, decoupled from DirtyStat.

- Rename set_sample_page_period to dirty_stat_wait, which is better
  understood and will be reused in dirtylimit.

- Handle the cpu hotplug/unplug scenario during measurement of the
  dirty page rate.

- Export util functions outside migration.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <7b6f6f4748d5b3d017b31a0429e630229ae97538.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* softmmu/dirtylimit: Implement vCPU dirtyrate calculation periodically

Introduce a third dirty tracking method, GLOBAL_DIRTY_LIMIT, to
calculate the dirty rate periodically for the dirty page
rate limit.

Add dirtylimit.c to implement the periodic dirty rate calculation,
which will be used for the dirty page rate limit.

Add dirtylimit.h to export util functions for the dirty page rate
limit implementation.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <5d0d641bffcb9b1c4cc3e323b6dfecb36050d948.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* accel/kvm/kvm-all: Introduce kvm_dirty_ring_size function

Introduce the kvm_dirty_ring_size util function to help calculate the
dirty ring full time.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Acked-by: Peter Xu <peterx@redhat.com>
Message-Id: <f9ce1f550bfc0e3a1f711e17b1dbc8f701700e56.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* softmmu/dirtylimit: Implement virtual CPU throttle

Set up a negative feedback system for when the vCPU thread handles a
KVM_EXIT_DIRTY_RING_FULL exit, by introducing a throttle_us_per_full
field in struct CPUState. Sleep throttle_us_per_full microseconds to
throttle the vCPU if dirtylimit is in service.
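
A hedged sketch of the throttle point in the vCPU thread (the helper
name and exact call site are assumptions):

    if (dirtylimit_in_service()) {
        usleep(cpu->throttle_us_per_full);
    }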

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <977e808e03a1cef5151cae75984658b6821be618.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* softmmu/dirtylimit: Implement dirty page rate limit

Implement the dirty rate calculation periodically based on the
dirty-ring and throttle the virtual CPU until it reaches the quota
dirty page rate given by the user.

Introduce the qmp commands "set-vcpu-dirty-limit",
"cancel-vcpu-dirty-limit", "query-vcpu-dirty-limit"
to enable, disable and query the dirty page limit for virtual CPUs.

Meanwhile, introduce the corresponding hmp commands
"set_vcpu_dirty_limit", "cancel_vcpu_dirty_limit",
"info vcpu_dirty_limit" so the feature can be more usable.

"query-vcpu-dirty-limit" success depends on enabling the dirty
page rate limit, so just add it to the list of skipped
commands to ensure qmp-cmd-test runs successfully.

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Acked-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <4143f26706d413dd29db0b672fe58b3d3fbe34bc.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* tests: Add dirty page rate limit test

Add a dirty page rate limit test for when the kernel supports the dirty ring.

The following qmp commands are covered by this test case:
"calc-dirty-rate", "query-dirty-rate", "set-vcpu-dirty-limit",
"cancel-vcpu-dirty-limit" and "query-vcpu-dirty-limit".

Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Acked-by: Peter Xu <peterx@redhat.com>
Message-Id: <eed5b847a6ef0a9c02a36383dbdd7db367dd1e7e.1656177590.git.huangy81@chinatelecom.cn>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* multifd: Copy pages before compressing them with zlib

zlib_send_prepare() compresses pages of a running VM. zlib does not
make any thread-safety guarantees with respect to changing deflate()
input concurrently with deflate() [1].

One can observe problems due to this with the IBM zEnterprise Data
Compression accelerator capable zlib [2]. When the hardware
acceleration is enabled, migration/multifd/tcp/plain/zlib test fails
intermittently [3] due to sliding window corruption. The accelerator's
architecture explicitly discourages concurrent accesses [4]:

    Page 26-57, "Other Conditions":

    As observed by this CPU, other CPUs, and channel
    programs, references to the parameter block, first,
    second, and third operands may be multiple-access
    references, accesses to these storage locations are
    not necessarily block-concurrent, and the sequence
    of these accesses or references is undefined.

Mark Adler pointed out that vanilla zlib performs double fetches under
certain circumstances as well [5], therefore we need to copy data
before passing it to deflate().

[1] https://zlib.net/manual.html
[2] https://github.com/madler/zlib/pull/410
[3] https://lists.nongnu.org/archive/html/qemu-devel/2022-03/msg03988.html
[4] http://publibfp.dhe.ibm.com/epubs/pdf/a227832c.pdf
[5] https://lists.gnu.org/archive/html/qemu-devel/2022-07/msg00889.html
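
As a minimal sketch of the fix (illustrative only; staging, guest_page,
page_size and flush are placeholder names, not the actual variables), the
page is first copied into a private buffer and only the copy is handed to
zlib:

    /* never let deflate() read guest memory that may still be changing */
    memcpy(staging, guest_page, page_size);
    zs->next_in  = (Bytef *)staging;
    zs->avail_in = page_size;
    ret = deflate(zs, flush);   /* flush: Z_NO_FLUSH / Z_SYNC_FLUSH as needed */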

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20220705203559.2960949-1-iii@linux.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Add postcopy-preempt capability

Firstly, postcopy already preempts precopy due to the fact that we do
unqueue_page() first before looking into dirty bits.

However, that's not enough. For example, with host huge pages enabled, while
a precopy huge page is being sent, a postcopy request needs to wait until the
whole huge page has finished sending. That can introduce quite some delay:
the bigger the huge page, the larger the delay it brings.

This patch adds a new capability that allows postcopy requests to preempt the
precopy huge page currently being sent, so that postcopy requests can be
serviced even faster.

Meanwhile to send it even faster, bypass the precopy stream by providing a
standalone postcopy socket for sending requested pages.

Since the new behavior will not be compatible with the old behavior, it will
not be the default; it's enabled only when the new capability is set on both
src/dst QEMUs.

This patch only adds the capability itself, the logic will be added in follow
up patches.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185342.26794-2-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Postcopy preemption preparation on channel creation

Create a new socket for postcopy to be prepared to send postcopy requested
pages via this specific channel, so as to not get blocked by precopy pages.

A new thread is also created on dest qemu to receive data from this new channel
based on the ram_load_postcopy() routine.

The ram_load_postcopy(POSTCOPY) branch and the thread have not started to
function yet; that will be done in follow-up patches.

Clean up the new sockets on both src/dst QEMUs, and also look after the new
thread to make sure it is recycled properly.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185502.27149-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: With Peter's fix to quieten compiler warning on
       start_migration

* migration: Postcopy preemption enablement

This patch enables postcopy-preempt feature.

It contains two major changes to the migration logic:

(1) Postcopy requests are now sent via a different socket from precopy
    background migration stream, so as to be isolated from very high page
    request delays.

(2) For huge page enabled hosts: when there's postcopy requests, they can now
    intercept a partial sending of huge host pages on src QEMU.

After this patch, we'll live migrate a VM with two channels for postcopy: (1)
PRECOPY channel, which is the default channel that transfers background pages;
and (2) POSTCOPY channel, which only transfers requested pages.

There's no strict rule on which channel to use; e.g., if a requested page is
already being transferred on the precopy channel, then we will keep using the
same precopy channel to transfer the page even though it's explicitly
requested.  In 99% of the cases we'll prioritize the channels so that
requested pages are sent via the postcopy channel whenever possible.

On the source QEMU, when we find a postcopy request, we interrupt the
PRECOPY channel sending process and quickly switch to the POSTCOPY channel.
After we have serviced all the high-priority postcopy pages, we switch back
to the PRECOPY channel to continue sending the interrupted huge page.
No new thread is introduced on the src QEMU.

On the destination QEMU, one new thread is introduced to receive page data from
the postcopy specific socket (done in the preparation patch).

This patch has a side effect: previously, after sending postcopy pages we
would assume the guest will access the following pages and keep sending from
there. Now that has changed: instead of carrying on from a postcopy-requested
page, we go back and continue sending the precopy huge page (which may have
been intercepted by a postcopy request and therefore only partially sent
before).

Whether that's a problem is debatable, because "assuming the guest will
continue to access the next page" may not really hold when huge pages are
used, especially large ones (e.g. 1GB pages), so that locality hint is
largely meaningless with huge pages.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185504.27203-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Postcopy recover with preempt enabled

To allow postcopy recovery, the ram fast load (preempt-only) dest QEMU thread
needs similar handling for fault tolerance.  When ram_load_postcopy() fails,
instead of stopping, the thread halts on a semaphore, ready to be kicked
again when recovery is detected.

A mutex is introduced to make sure there's no concurrent operation upon the
socket.  To keep it simple, the fast ram load thread holds the mutex during
its whole procedure and only releases it when it's paused.  The fast-path
socket is then safely released by the main loading thread, with that mutex
held, when there are network failures during postcopy.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185506.27257-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Create the postcopy preempt channel asynchronously

This patch allows the postcopy preempt channel to be created
asynchronously.  The benefit is that when the connection is slow, we won't
take the BQL (and potentially block all things like QMP) for a long time
without releasing.

A function postcopy_preempt_wait_channel() is introduced, allowing the
migration thread to wait on the channel creation.  The channel is always
created by the main thread, which kicks a new semaphore to tell the
migration thread that the channel has been created.

We'll need to wait for the new channel in two places: (1) when there's a
new postcopy migration that is starting, or (2) when there's a postcopy
migration to resume.

For the start of migration, we don't need to wait for this channel until
we want to start postcopy, aka, postcopy_start().  We fail the migration if
we find that the channel creation failed (which should probably not happen
at all in 99% of the cases, because the main channel is using the same
network topology).

For a postcopy recovery, we'll need to wait in postcopy_pause().  In that
case, if the channel creation failed, we can't fail the migration or we'd
crash the VM; instead we stay in the PAUSED state, waiting for yet another
recovery.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185509.27311-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Add property x-postcopy-preempt-break-huge

Add a property field that can conditionally disable the "break sending huge
page" behavior in postcopy preemption.  By default it's enabled.

It should only be used for debugging purposes, and we should never remove
the "x-" prefix.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185511.27366-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Add helpers to detect TLS capability

Add migrate_channel_requires_tls() to detect whether the specific channel
requires TLS, leveraging the recently introduced migrate_use_tls().  No
functional change intended.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185513.27421-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Export tls-[creds|hostname|authz] params to cmdline too

It's useful to be able to specify the TLS credentials all on the cmdline
(along with the -object tls-creds-*), especially for debugging purposes.

The trick here is that we must remember not to free these fields again in
the finalize() function of the migration object, otherwise it'll cause a
double free.

The thing is, when destroying an object, we first destroy the properties
bound to the object, then the object itself.  To be explicit, when
destroying the object in object_finalize() we have this sequence of
operations:

    object_property_del_all(obj);
    object_deinit(obj, ti);

So after this change the two fields are already properly released in
object_property_del_all(), before the finalize() function is even reached;
hence we must not free them again in finalize(), or it's a double free.

This also fixes a trivial memory leak for tls-authz as we forgot to free it
before this patch.

Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185515.27475-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Enable TLS for preempt channel

This patch is based on the async preempt channel creation.  It continues
wiring up the new channel with a TLS handshake to the destination when
enabled.

Note that only the src QEMU needs such operation; the dest QEMU does not
need any change for TLS support due to the fact that all channels are
established synchronously there, so all the TLS magic is already properly
handled by migration_tls_channel_process_incoming().

Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185518.27529-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration: Respect postcopy request order in preemption mode

With preemption mode on, when we see a postcopy request for exactly the page
that we preempted before (so we've already partially sent the page via the
PRECOPY channel and it got preempted by another postcopy request), currently
we drop the request, so that only after all the other postcopy requests are
serviced do we go back to the precopy stream and start to handle it.

We dropped the request because we can't send it via the postcopy channel:
the precopy channel already contains part of the data, and we can only
send a huge page via one channel as a whole.  We can't split a huge page
into two channels.

That's a real corner case and it works, but it changes the order in which we
handle postcopy requests, since we're postponing this (unlucky) postcopy
request until after the other queued postcopy requests.  The problem is that
when the guest is very busy, the postcopy queue may always be non-empty,
which means this dropped request may never be handled until the end of the
postcopy migration.  So there's a chance that one dest QEMU vCPU thread waits
on a page fault for an extremely long time just because it unluckily accessed
the specific page that was preempted before.

The worst case time it needs can be as long as the whole postcopy migration
procedure.  It's extremely unlikely to happen, but when it happens it's not
good.

The root cause of this problem is that we treat the pss->postcopy_requested
variable as carrying two meanings bound together:

  1. Whether this page request is urgent, and,
  2. Which channel we should use for this page request.

With the old code, when we set postcopy_requested it means either both (1)
and (2) are true, or both (1) and (2) are false.  We can never have (1)
and (2) take different values.

However, it doesn't need to be like that.  It's perfectly legal for a
request to have (1) very high urgency while (2) we'd still like to use the
precopy channel for it, just like the corner case we were discussing above.

To differentiate the two meanings better, introduce a new field called
postcopy_target_channel, showing which channel we should use for this page
request, so as to cover the old meaning (2) only.  Then we leave the
postcopy_requested variable to stand only for meaning (1), which is the
urgency of this page request.

With this change, we can easily boost the priority of a preempted precopy
page as long as we know that page is also requested as a postcopy page.  So
with the new approach, in get_queued_page(), instead of dropping that request
we send it right away via the precopy channel, so that we get back the
ordering of the page faults just as they were requested on dest.

Reported-by: Manish Mishra <manish.mishra@nutanix.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Manish Mishra <manish.mishra@nutanix.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185520.27583-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* tests: Move MigrateCommon upper

So that it can soon be used in postcopy tests too.

Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185522.27638-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* tests: Add postcopy tls migration test

We just added TLS tests for precopy but not postcopy.  Add the
corresponding test for vanilla postcopy.

Rename the vanilla postcopy to "postcopy/plain" because all postcopy tests
will only use unix sockets as channel.

Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185525.27692-1-peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: Manual merge

* tests: Add postcopy tls recovery migration test

It's easy to build this upon the postcopy tls test.  Rename the old
postcopy recovery test to postcopy/recovery/plain.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185527.27747-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: Manual merge

* tests: Add postcopy preempt tests

Four tests are added for preempt mode:

  - Postcopy plain
  - Postcopy recovery
  - Postcopy tls
  - Postcopy tls+recovery

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220707185530.27801-1-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: Manual merge

* migration: remove unreachable code after reading data

The code calls qio_channel_read() in a loop while it reports
QIO_CHANNEL_ERR_BLOCK. This error code is reported when errno == EAGAIN.

As such the later block of code will always hit the 'errno != EAGAIN'
condition, making the final 'else' unreachable.

Fixes: Coverity CID 1490203
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220627135318.156121-1-berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* QIOChannelSocket: Fix zero-copy flush returning code 1 when nothing sent

If flush is called when no buffer was sent with MSG_ZEROCOPY, it currently
returns 1. This return code should be used only when Linux fails to use
MSG_ZEROCOPY on a lot of sendmsg() calls.

Fix this by returning early from flush if no sendmsg(...,MSG_ZEROCOPY)
was attempted.

Fixes: 2bc58ffc2926 ("QIOChannelSocket: Implement io_writev zero copy flag & io_flush for CONFIG_LINUX")
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220711211112.18951-2-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* Add dirty-sync-missed-zero-copy migration stat

Signed-off-by: Leonardo Bras <leobras@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220711211112.18951-3-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* migration/multifd: Report to user when zerocopy not working

Some errors, like the lack of Scatter-Gather support by the network
interface (NETIF_F_SG), may cause sendmsg(..., MSG_ZEROCOPY) to fail to use
zero-copy, in which case it falls back to the default copying mechanism.

After each full dirty-bitmap scan there should be a zero-copy flush
happening, which checks for errors in each of the previous calls to
sendmsg(..., MSG_ZEROCOPY). If all of them failed to use zero-copy, then
increment the dirty_sync_missed_zero_copy migration stat to let the user
know about it.

Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Message-Id: <20220711211112.18951-4-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* multifd: Document the locking of MultiFD{Send/Recv}Params

Reorder the structures so we can know if the fields are:
- Read only
- Their own locking (i.e. sems)
- Protected by 'mutex'
- Only for the multifd channel

Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20220531104318.7494-2-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
  dgilbert: Typo fixes from Chen Zhang

* migration: Avoid false-positive on non-supported scenarios for zero-copy-send

Migration with zero-copy-send currently has its limitations, as it can't
be used with TLS nor any kind of compression. In such scenarios, it should
output errors during parameter / capability setting.

But currently there are some ways of setting these unsupported scenarios
without printing the error message:

1) For the 'compression' capability, it works by enabling it together with
zero-copy-send. This happens because the validity test for zero-copy uses
the helper function migrate_use_compression(), which checks for compression
presence in s->enabled_capabilities[MIGRATION_CAPABILITY_COMPRESS].

The point here is: the validity test happens before the capability gets
enabled. If all of them get enabled together, this test will not return
an error.

In order to fix that, replace migrate_use_compression() by directly testing
the cap_list parameter in migrate_caps_check().

2) For features enabled by parameters such as TLS & 'multifd_compression',
there was also a possibility of setting unsupported scenarios: setting
zero-copy-send first, then setting the unsupported parameter.

In order to fix that, also add a check for parameters conflicting with
zero-copy-send on migrate_params_check().

3) XBZRLE is also a compression capability, so it makes sense to also add
it to the list of capabilities which are not supported with zero-copy-send.

Fixes: 1abaec9a1b2c ("migration: Change zero_copy_send from migration parameter to migration capability")
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Message-Id: <20220719122345.253713-1-leobras@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

* Revert "gitlab: disable accelerated zlib for s390x"

This reverts commit 309df6acb29346f89e1ee542b1986f60cab12b87.
With Ilya's 'multifd: Copy pages before compressing them with zlib'
in the latest migration series, this shouldn't be a problem any more.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>

* slow snapshots api

Co-authored-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Co-authored-by: Paolo Bonzini <pbonzini@redhat.com>
Co-authored-by: Peter Maydell <peter.maydell@linaro.org>
Co-authored-by: Joel Stanley <joel@jms.id.au>
Co-authored-by: Peter Delevoryas <pdel@fb.com>
Co-authored-by: Peter Delevoryas <peter@pjd.dev>
Co-authored-by: Cédric Le Goater <clg@kaod.org>
Co-authored-by: Iris Chen <irischenlj@fb.com>
Co-authored-by: Jinhao Fan <fanjinhao21s@ict.ac.cn>
Co-authored-by: Niklas Cassel <niklas.cassel@wdc.com>
Co-authored-by: Darren Kenny <darren.kenny@oracle.com>
Co-authored-by: Konstantin Kostiuk <kkostiuk@redhat.com>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Hao Wu <wuhaotsh@google.com>
Co-authored-by: Andrey Makarov <ph.makarov@gmail.com>
Co-authored-by: Jason A. Donenfeld <Jason@zx2c4.com>
Co-authored-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Co-authored-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Co-authored-by: Leandro Lupori <leandro.lupori@eldorado.org.br>
Co-authored-by: Lucas Coutinho <lucas.coutinho@eldorado.org.br>
Co-authored-by: John Snow <jsnow@redhat.com>
Co-authored-by: Song Gao <gaosong@loongson.cn>
Co-authored-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Co-authored-by: Thomas Huth <thuth@redhat.com>
Co-authored-by: Ilya Leoshkevich <iii@linux.ibm.com>
Co-authored-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Co-authored-by: Gerd Hoffmann <kraxel@redhat.com>
Co-authored-by: Mauro Matteo Cascella <mcascell@redhat.com>
Co-authored-by: Felix xq Queißner <xq@random-projects.net>
Co-authored-by: Xiaojuan Yang <yangxiaojuan@loongson.cn>
Co-authored-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Co-authored-by: Taylor Simpson <tsimpson@quicinc.com>
Co-authored-by: Eugenio Pérez <eperezma@redhat.com>
Co-authored-by: Zhang Chen <chen.zhang@intel.com>
Co-authored-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Co-authored-by: Peter Xu <peterx@redhat.com>
Co-authored-by: Daniel P. Berrangé <berrange@redhat.com>
Co-authored-by: Leonardo Bras <leobras@redhat.com>
Co-authored-by: Juan Quintela <quintela@redhat.com>
Co-authored-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2022-07-22 17:02:58 +02:00

/*
* Host code generation
*
* Copyright (c) 2003 Fabrice Bellard
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#include "qemu/osdep.h"
#define NO_CPU_IO_DEFS
#include "trace.h"
#include "disas/disas.h"
#include "exec/exec-all.h"
#include "tcg/tcg.h"
#if defined(CONFIG_USER_ONLY)
#include "qemu.h"
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <sys/param.h>
#if __FreeBSD_version >= 700104
#define HAVE_KINFO_GETVMMAP
#define sigqueue sigqueue_freebsd /* avoid redefinition */
#include <sys/proc.h>
#include <machine/profile.h>
#define _KERNEL
#include <sys/user.h>
#undef _KERNEL
#undef sigqueue
#include <libutil.h>
#endif
#endif
#else
#include "exec/ram_addr.h"
#endif
#include "exec/cputlb.h"
#include "exec/translate-all.h"
#include "qemu/bitmap.h"
#include "qemu/qemu-print.h"
#include "qemu/timer.h"
#include "qemu/main-loop.h"
#include "qemu/cacheinfo.h"
#include "exec/log.h"
#include "sysemu/cpus.h"
#include "sysemu/cpu-timers.h"
#include "sysemu/tcg.h"
#include "qapi/error.h"
#include "hw/core/tcg-cpu-ops.h"
#include "tb-hash.h"
#include "tb-context.h"
#include "internal.h"
//// --- Begin LibAFL code ---
#include "tcg/tcg-op.h"
#include "tcg/tcg-internal.h"
#include "exec/helper-head.h"
target_ulong libafl_gen_cur_pc;
void libafl_helper_table_add(TCGHelperInfo* info);
TranslationBlock *libafl_gen_edge(CPUState *cpu, target_ulong src_block,
                                  target_ulong dst_block, int exit_n,
                                  target_ulong cs_base, uint32_t flags,
                                  int cflags);
void libafl_gen_read(TCGv addr, MemOp ot);
void libafl_gen_read_N(TCGv addr, size_t size);
void libafl_gen_write(TCGv addr, MemOp ot);
void libafl_gen_write_N(TCGv addr, size_t size);
void libafl_gen_cmp(target_ulong pc, TCGv op0, TCGv op1, MemOp ot);
void libafl_gen_backdoor(target_ulong pc);
static TCGHelperInfo libafl_exec_edge_hook_info = {
    .func = NULL, .name = "libafl_exec_edge_hook",
    .flags = dh_callflag(void),
    .typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
};
struct libafl_edge_hook {
    uint64_t (*gen)(target_ulong src, target_ulong dst, uint64_t data);
    void (*exec)(uint64_t id, uint64_t data);
    uint64_t data;
    uint64_t cur_id;
    TCGHelperInfo helper_info;
    struct libafl_edge_hook* next;
};
struct libafl_edge_hook* libafl_edge_hooks;
void libafl_add_edge_hook(uint64_t (*gen)(target_ulong src, target_ulong dst, uint64_t data),
                          void (*exec)(uint64_t id, uint64_t data),
                          uint64_t data);
void libafl_add_edge_hook(uint64_t (*gen)(target_ulong src, target_ulong dst, uint64_t data),
                          void (*exec)(uint64_t id, uint64_t data),
                          uint64_t data)
{
    CPUState *cpu;
    CPU_FOREACH(cpu) {
        tb_flush(cpu);
    }
    struct libafl_edge_hook* hook = malloc(sizeof(struct libafl_edge_hook));
    hook->gen = gen;
    hook->exec = exec;
    hook->data = data;
    hook->next = libafl_edge_hooks;
    libafl_edge_hooks = hook;
    if (exec) {
        memcpy(&hook->helper_info, &libafl_exec_edge_hook_info, sizeof(TCGHelperInfo));
        hook->helper_info.func = exec;
        libafl_helper_table_add(&hook->helper_info);
    }
}
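/*
 * Illustrative usage only (my_edge_gen, my_edge_exec and map are hypothetical
 * client-side names, not part of QEMU or LibAFL): the gen callback runs at
 * translation time and returns an id for the edge, while the exec callback
 * runs at execution time with that id and the opaque data value registered
 * below.
 *
 *     static uint64_t my_edge_gen(target_ulong src, target_ulong dst, uint64_t data)
 *     {
 *         return (src >> 1) ^ dst;                       // derive a per-edge id
 *     }
 *     static void my_edge_exec(uint64_t id, uint64_t data)
 *     {
 *         ((uint8_t*)(uintptr_t)data)[id & 0xffff]++;    // bump a coverage map
 *     }
 *
 *     libafl_add_edge_hook(my_edge_gen, my_edge_exec, (uint64_t)(uintptr_t)map);
 */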
static TCGHelperInfo libafl_exec_block_hook_info = {
.func = NULL, .name = "libafl_exec_block_hook", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
};
struct libafl_block_hook {
uint64_t (*gen)(target_ulong pc, uint64_t data);
void (*exec)(uint64_t id, uint64_t data);
uint64_t data;
TCGHelperInfo helper_info;
struct libafl_block_hook* next;
};
struct libafl_block_hook* libafl_block_hooks;
void libafl_add_block_hook(uint64_t (*gen)(target_ulong pc, uint64_t data),
void (*exec)(uint64_t id, uint64_t data),
uint64_t data);
void libafl_add_block_hook(uint64_t (*gen)(target_ulong pc, uint64_t data),
void (*exec)(uint64_t id, uint64_t data),
uint64_t data)
{
CPUState *cpu;
CPU_FOREACH(cpu) {
tb_flush(cpu);
}
struct libafl_block_hook* hook = malloc(sizeof(struct libafl_block_hook));
hook->gen = gen;
hook->exec = exec;
hook->data = data;
hook->next = libafl_block_hooks;
libafl_block_hooks = hook;
if (exec) {
memcpy(&hook->helper_info, &libafl_exec_block_hook_info, sizeof(TCGHelperInfo));
hook->helper_info.func = exec;
libafl_helper_table_add(&hook->helper_info);
}
}
static TCGHelperInfo libafl_exec_read_hook1_info = {
.func = NULL, .name = "libafl_exec_read_hook1", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_read_hook2_info = {
.func = NULL, .name = "libafl_exec_read_hook2", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_read_hook4_info = {
.func = NULL, .name = "libafl_exec_read_hook4", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_read_hook8_info = {
.func = NULL, .name = "libafl_exec_read_hook8", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_read_hookN_info = {
.func = NULL, .name = "libafl_exec_read_hookN", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(tl, 2)
| dh_typemask(i64, 3)
};
static TCGHelperInfo libafl_exec_write_hook1_info = {
.func = NULL, .name = "libafl_exec_write_hook1", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_write_hook2_info = {
.func = NULL, .name = "libafl_exec_write_hook2", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_write_hook4_info = {
.func = NULL, .name = "libafl_exec_write_hook4", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_write_hook8_info = {
.func = NULL, .name = "libafl_exec_write_hook8", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(i64, 2)
};
static TCGHelperInfo libafl_exec_write_hookN_info = {
.func = NULL, .name = "libafl_exec_write_hookN", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1) | dh_typemask(tl, 2)
| dh_typemask(i64, 3)
};
struct libafl_rw_hook {
uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data);
void (*exec1)(uint64_t id, target_ulong addr, uint64_t data);
void (*exec2)(uint64_t id, target_ulong addr, uint64_t data);
void (*exec4)(uint64_t id, target_ulong addr, uint64_t data);
void (*exec8)(uint64_t id, target_ulong addr, uint64_t data);
void (*execN)(uint64_t id, target_ulong addr, size_t size, uint64_t data);
uint64_t data;
TCGHelperInfo helper_info1;
TCGHelperInfo helper_info2;
TCGHelperInfo helper_info4;
TCGHelperInfo helper_info8;
TCGHelperInfo helper_infoN;
struct libafl_rw_hook* next;
};
struct libafl_rw_hook* libafl_read_hooks;
void libafl_add_read_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec2)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec4)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec8)(uint64_t id, target_ulong addr, uint64_t data),
void (*execN)(uint64_t id, target_ulong addr, size_t size, uint64_t data),
uint64_t data);
void libafl_add_read_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec2)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec4)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec8)(uint64_t id, target_ulong addr, uint64_t data),
void (*execN)(uint64_t id, target_ulong addr, size_t size, uint64_t data),
uint64_t data)
{
CPUState *cpu;
CPU_FOREACH(cpu) {
tb_flush(cpu);
}
struct libafl_rw_hook* hook = malloc(sizeof(struct libafl_rw_hook));
hook->gen = gen;
hook->exec1 = exec1;
hook->exec2 = exec2;
hook->exec4 = exec4;
hook->exec8 = exec8;
hook->execN = execN;
hook->data = data;
hook->next = libafl_read_hooks;
libafl_read_hooks = hook;
if (exec1) {
memcpy(&hook->helper_info1, &libafl_exec_read_hook1_info, sizeof(TCGHelperInfo));
hook->helper_info1.func = exec1;
libafl_helper_table_add(&hook->helper_info1);
}
if (exec2) {
memcpy(&hook->helper_info2, &libafl_exec_read_hook2_info, sizeof(TCGHelperInfo));
hook->helper_info2.func = exec2;
libafl_helper_table_add(&hook->helper_info2);
}
if (exec4) {
memcpy(&hook->helper_info4, &libafl_exec_read_hook4_info, sizeof(TCGHelperInfo));
hook->helper_info4.func = exec4;
libafl_helper_table_add(&hook->helper_info4);
}
if (exec8) {
memcpy(&hook->helper_info8, &libafl_exec_read_hook8_info, sizeof(TCGHelperInfo));
hook->helper_info8.func = exec8;
libafl_helper_table_add(&hook->helper_info8);
}
if (execN) {
memcpy(&hook->helper_infoN, &libafl_exec_read_hookN_info, sizeof(TCGHelperInfo));
hook->helper_infoN.func = execN;
libafl_helper_table_add(&hook->helper_infoN);
}
}
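/*
 * At translation time, libafl_gen_read() below walks the registered read
 * hooks: each hook's gen callback (if any) is called with the current guest
 * pc and the access size and returns an id; unless that id is (uint64_t)-1,
 * a call to the size-matching exec callback (exec1/2/4/8, or execN for the
 * variable-size case) is emitted into the TB, so at execution time it
 * receives the id, the guest address and the hook's opaque data.
 */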
void libafl_gen_read(TCGv addr, MemOp ot)
{
size_t size = 0;
switch (ot & MO_SIZE) {
case MO_64:
size = 8;
break;
case MO_32:
size = 4;
break;
case MO_16:
size = 2;
break;
case MO_8:
size = 1;
break;
default:
return;
}
struct libafl_rw_hook* hook = libafl_read_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(libafl_gen_cur_pc, size, hook->data);
void* func = NULL;
if (size == 1) func = hook->exec1;
else if (size == 2) func = hook->exec2;
else if (size == 4) func = hook->exec4;
else if (size == 8) func = hook->exec8;
if (cur_id != (uint64_t)-1 && func) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv_i64 tmp1 = tcg_const_i64(hook->data);
TCGTemp *tmp2[3] = { tcgv_i64_temp(tmp0),
#if TARGET_LONG_BITS == 32
tcgv_i32_temp(addr),
#else
tcgv_i64_temp(addr),
#endif
tcgv_i64_temp(tmp1) };
tcg_gen_callN(func, NULL, 3, tmp2);
tcg_temp_free_i64(tmp0);
tcg_temp_free_i64(tmp1);
}
hook = hook->next;
}
}
void libafl_gen_read_N(TCGv addr, size_t size)
{
struct libafl_rw_hook* hook = libafl_read_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(libafl_gen_cur_pc, size, hook->data);
if (cur_id != (uint64_t)-1 && hook->execN) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv tmp1 = tcg_const_tl(size);
TCGv_i64 tmp2 = tcg_const_i64(hook->data);
TCGTemp *tmp3[4] = { tcgv_i64_temp(tmp0),
#if TARGET_LONG_BITS == 32
tcgv_i32_temp(addr),
tcgv_i32_temp(tmp1),
#else
tcgv_i64_temp(addr),
tcgv_i64_temp(tmp1),
#endif
tcgv_i64_temp(tmp2) };
tcg_gen_callN(hook->execN, NULL, 4, tmp3);
tcg_temp_free_i64(tmp0);
#if TARGET_LONG_BITS == 32
tcg_temp_free_i32(tmp1);
#else
tcg_temp_free_i64(tmp1);
#endif
tcg_temp_free_i64(tmp2);
}
hook = hook->next;
}
}
struct libafl_rw_hook* libafl_write_hooks;
void libafl_add_write_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec2)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec4)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec8)(uint64_t id, target_ulong addr, uint64_t data),
void (*execN)(uint64_t id, target_ulong addr, size_t size, uint64_t data),
uint64_t data);
void libafl_add_write_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec2)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec4)(uint64_t id, target_ulong addr, uint64_t data),
void (*exec8)(uint64_t id, target_ulong addr, uint64_t data),
void (*execN)(uint64_t id, target_ulong addr, size_t size, uint64_t data),
uint64_t data)
{
CPUState *cpu;
CPU_FOREACH(cpu) {
tb_flush(cpu);
}
struct libafl_rw_hook* hook = malloc(sizeof(struct libafl_rw_hook));
hook->gen = gen;
hook->exec1 = exec1;
hook->exec2 = exec2;
hook->exec4 = exec4;
hook->exec8 = exec8;
hook->execN = execN;
hook->data = data;
hook->next = libafl_write_hooks;
libafl_write_hooks = hook;
if (exec1) {
memcpy(&hook->helper_info1, &libafl_exec_write_hook1_info, sizeof(TCGHelperInfo));
hook->helper_info1.func = exec1;
libafl_helper_table_add(&hook->helper_info1);
}
if (exec2) {
memcpy(&hook->helper_info2, &libafl_exec_write_hook2_info, sizeof(TCGHelperInfo));
hook->helper_info2.func = exec2;
libafl_helper_table_add(&hook->helper_info2);
}
if (exec4) {
memcpy(&hook->helper_info4, &libafl_exec_write_hook4_info, sizeof(TCGHelperInfo));
hook->helper_info4.func = exec4;
libafl_helper_table_add(&hook->helper_info4);
}
if (exec8) {
memcpy(&hook->helper_info8, &libafl_exec_write_hook8_info, sizeof(TCGHelperInfo));
hook->helper_info8.func = exec8;
libafl_helper_table_add(&hook->helper_info8);
}
if (execN) {
memcpy(&hook->helper_infoN, &libafl_exec_write_hookN_info, sizeof(TCGHelperInfo));
hook->helper_infoN.func = execN;
libafl_helper_table_add(&hook->helper_infoN);
}
}
void libafl_gen_write(TCGv addr, MemOp ot)
{
size_t size = 0;
switch (ot & MO_SIZE) {
case MO_64:
size = 8;
break;
case MO_32:
size = 4;
break;
case MO_16:
size = 2;
break;
case MO_8:
size = 1;
break;
default:
return;
}
struct libafl_rw_hook* hook = libafl_write_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(libafl_gen_cur_pc, size, hook->data);
void* func = NULL;
if (size == 1) func = hook->exec1;
else if (size == 2) func = hook->exec2;
else if (size == 4) func = hook->exec4;
else if (size == 8) func = hook->exec8;
if (cur_id != (uint64_t)-1 && func) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv_i64 tmp1 = tcg_const_i64(hook->data);
TCGTemp *tmp2[3] = { tcgv_i64_temp(tmp0),
#if TARGET_LONG_BITS == 32
tcgv_i32_temp(addr),
#else
tcgv_i64_temp(addr),
#endif
tcgv_i64_temp(tmp1) };
tcg_gen_callN(func, NULL, 3, tmp2);
tcg_temp_free_i64(tmp0);
tcg_temp_free_i64(tmp1);
}
hook = hook->next;
}
}
void libafl_gen_write_N(TCGv addr, size_t size)
{
struct libafl_rw_hook* hook = libafl_write_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(libafl_gen_cur_pc, size, hook->data);
if (cur_id != (uint64_t)-1 && hook->execN) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv tmp1 = tcg_const_tl(size);
TCGv_i64 tmp2 = tcg_const_i64(hook->data);
TCGTemp *tmp3[4] = { tcgv_i64_temp(tmp0),
#if TARGET_LONG_BITS == 32
tcgv_i32_temp(addr),
tcgv_i32_temp(tmp1),
#else
tcgv_i64_temp(addr),
tcgv_i64_temp(tmp1),
#endif
tcgv_i64_temp(tmp2) };
tcg_gen_callN(hook->execN, NULL, 4, tmp3);
tcg_temp_free_i64(tmp0);
#if TARGET_LONG_BITS == 32
tcg_temp_free_i32(tmp1);
#else
tcg_temp_free_i64(tmp1);
#endif
tcg_temp_free_i64(tmp2);
}
hook = hook->next;
}
}
static TCGHelperInfo libafl_exec_cmp_hook1_info = {
.func = NULL, .name = "libafl_exec_cmp_hook1", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
| dh_typemask(tl, 2) | dh_typemask(tl, 3) | dh_typemask(i64, 4)
};
static TCGHelperInfo libafl_exec_cmp_hook2_info = {
.func = NULL, .name = "libafl_exec_cmp_hook2", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
| dh_typemask(tl, 2) | dh_typemask(tl, 3) | dh_typemask(i64, 4)
};
static TCGHelperInfo libafl_exec_cmp_hook4_info = {
.func = NULL, .name = "libafl_exec_cmp_hook4", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
| dh_typemask(tl, 2) | dh_typemask(tl, 3) | dh_typemask(i64, 4)
};
static TCGHelperInfo libafl_exec_cmp_hook8_info = {
.func = NULL, .name = "libafl_exec_cmp_hook8", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(i64, 1)
| dh_typemask(tl, 2) | dh_typemask(tl, 3) | dh_typemask(i64, 4)
};
struct libafl_cmp_hook {
uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data);
void (*exec1)(uint64_t id, uint8_t v0, uint8_t v1, uint64_t data);
void (*exec2)(uint64_t id, uint16_t v0, uint16_t v1, uint64_t data);
void (*exec4)(uint64_t id, uint32_t v0, uint32_t v1, uint64_t data);
void (*exec8)(uint64_t id, uint64_t v0, uint64_t v1, uint64_t data);
uint64_t data;
TCGHelperInfo helper_info1;
TCGHelperInfo helper_info2;
TCGHelperInfo helper_info4;
TCGHelperInfo helper_info8;
struct libafl_cmp_hook* next;
};
struct libafl_cmp_hook* libafl_cmp_hooks;
void libafl_add_cmp_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, uint8_t v0, uint8_t v1, uint64_t data),
void (*exec2)(uint64_t id, uint16_t v0, uint16_t v1, uint64_t data),
void (*exec4)(uint64_t id, uint32_t v0, uint32_t v1, uint64_t data),
void (*exec8)(uint64_t id, uint64_t v0, uint64_t v1, uint64_t data),
uint64_t data);
void libafl_add_cmp_hook(uint64_t (*gen)(target_ulong pc, size_t size, uint64_t data),
void (*exec1)(uint64_t id, uint8_t v0, uint8_t v1, uint64_t data),
void (*exec2)(uint64_t id, uint16_t v0, uint16_t v1, uint64_t data),
void (*exec4)(uint64_t id, uint32_t v0, uint32_t v1, uint64_t data),
void (*exec8)(uint64_t id, uint64_t v0, uint64_t v1, uint64_t data),
uint64_t data)
{
CPUState *cpu;
CPU_FOREACH(cpu) {
tb_flush(cpu);
}
struct libafl_cmp_hook* hook = malloc(sizeof(struct libafl_cmp_hook));
hook->gen = gen;
hook->exec1 = exec1;
hook->exec2 = exec2;
hook->exec4 = exec4;
hook->exec8 = exec8;
hook->data = data;
hook->next = libafl_cmp_hooks;
libafl_cmp_hooks = hook;
if (exec1) {
memcpy(&hook->helper_info1, &libafl_exec_cmp_hook1_info, sizeof(TCGHelperInfo));
hook->helper_info1.func = exec1;
libafl_helper_table_add(&hook->helper_info1);
}
if (exec2) {
memcpy(&hook->helper_info2, &libafl_exec_cmp_hook2_info, sizeof(TCGHelperInfo));
hook->helper_info2.func = exec2;
libafl_helper_table_add(&hook->helper_info2);
}
if (exec4) {
memcpy(&hook->helper_info4, &libafl_exec_cmp_hook4_info, sizeof(TCGHelperInfo));
hook->helper_info4.func = exec4;
libafl_helper_table_add(&hook->helper_info4);
}
if (exec8) {
memcpy(&hook->helper_info8, &libafl_exec_cmp_hook8_info, sizeof(TCGHelperInfo));
hook->helper_info8.func = exec8;
libafl_helper_table_add(&hook->helper_info8);
}
}
void libafl_gen_cmp(target_ulong pc, TCGv op0, TCGv op1, MemOp ot)
{
size_t size = 0;
switch (ot & MO_SIZE) {
case MO_64:
size = 8;
break;
case MO_32:
size = 4;
break;
case MO_16:
size = 2;
break;
case MO_8:
size = 1;
break;
default:
return;
}
struct libafl_cmp_hook* hook = libafl_cmp_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(pc, size, hook->data);
void* func = NULL;
if (size == 1) func = hook->exec1;
else if (size == 2) func = hook->exec2;
else if (size == 4) func = hook->exec4;
else if (size == 8) func = hook->exec8;
if (cur_id != (uint64_t)-1 && func) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv_i64 tmp1 = tcg_const_i64(hook->data);
TCGTemp *tmp2[4] = { tcgv_i64_temp(tmp0),
#if TARGET_LONG_BITS == 32
tcgv_i32_temp(op0), tcgv_i32_temp(op1),
#else
tcgv_i64_temp(op0), tcgv_i64_temp(op1),
#endif
tcgv_i64_temp(tmp1) };
tcg_gen_callN(func, NULL, 4, tmp2);
tcg_temp_free_i64(tmp0);
tcg_temp_free_i64(tmp1);
}
hook = hook->next;
}
}
static TCGHelperInfo libafl_exec_backdoor_hook_info = {
.func = NULL, .name = "libafl_exec_backdoor_hook", \
.flags = dh_callflag(void), \
.typemask = dh_typemask(void, 0) | dh_typemask(tl, 1) | dh_typemask(i64, 2)
};
struct libafl_backdoor_hook {
void (*exec)(target_ulong pc, uint64_t data);
uint64_t data;
TCGHelperInfo helper_info;
struct libafl_backdoor_hook* next;
};
struct libafl_backdoor_hook* libafl_backdoor_hooks;
void libafl_add_backdoor_hook(void (*exec)(target_ulong pc, uint64_t data),
uint64_t data);
void libafl_add_backdoor_hook(void (*exec)(target_ulong pc, uint64_t data),
uint64_t data)
{
struct libafl_backdoor_hook* hook = malloc(sizeof(struct libafl_backdoor_hook));
hook->exec = exec;
hook->data = data;
hook->next = libafl_backdoor_hooks;
libafl_backdoor_hooks = hook;
memcpy(&hook->helper_info, &libafl_exec_backdoor_hook_info, sizeof(TCGHelperInfo));
hook->helper_info.func = exec;
libafl_helper_table_add(&hook->helper_info);
}
//// --- End LibAFL code ---
/* #define DEBUG_TB_INVALIDATE */
/* #define DEBUG_TB_FLUSH */
/* make various TB consistency checks */
/* #define DEBUG_TB_CHECK */
#ifdef DEBUG_TB_INVALIDATE
#define DEBUG_TB_INVALIDATE_GATE 1
#else
#define DEBUG_TB_INVALIDATE_GATE 0
#endif
#ifdef DEBUG_TB_FLUSH
#define DEBUG_TB_FLUSH_GATE 1
#else
#define DEBUG_TB_FLUSH_GATE 0
#endif
#if !defined(CONFIG_USER_ONLY)
/* TB consistency checks only implemented for usermode emulation. */
#undef DEBUG_TB_CHECK
#endif
#ifdef DEBUG_TB_CHECK
#define DEBUG_TB_CHECK_GATE 1
#else
#define DEBUG_TB_CHECK_GATE 0
#endif
/* Access to the various translations structures need to be serialised via locks
* for consistency.
* In user-mode emulation access to the memory related structures are protected
* with mmap_lock.
* In !user-mode we use per-page locks.
*/
#ifdef CONFIG_SOFTMMU
#define assert_memory_lock()
#else
#define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
#endif
#define SMC_BITMAP_USE_THRESHOLD 10
typedef struct PageDesc {
/* list of TBs intersecting this ram page */
uintptr_t first_tb;
#ifdef CONFIG_SOFTMMU
/* in order to optimize self modifying code, we count the number
of lookups we do to a given page to use a bitmap */
unsigned long *code_bitmap;
unsigned int code_write_count;
#else
unsigned long flags;
void *target_data;
#endif
#ifndef CONFIG_USER_ONLY
QemuSpin lock;
#endif
} PageDesc;
/**
* struct page_entry - page descriptor entry
* @pd: pointer to the &struct PageDesc of the page this entry represents
* @index: page index of the page
* @locked: whether the page is locked
*
* This struct helps us keep track of the locked state of a page, without
* bloating &struct PageDesc.
*
* A page lock protects accesses to all fields of &struct PageDesc.
*
* See also: &struct page_collection.
*/
struct page_entry {
PageDesc *pd;
tb_page_addr_t index;
bool locked;
};
/**
* struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
* @tree: Binary search tree (BST) of the pages, with key == page index
* @max: Pointer to the page in @tree with the highest page index
*
* To avoid deadlock we lock pages in ascending order of page index.
* When operating on a set of pages, we need to keep track of them so that
* we can lock them in order and also unlock them later. For this we collect
* pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
* @tree implementation we use does not provide an O(1) operation to obtain the
* highest-ranked element, we use @max to keep track of the inserted page
* with the highest index. This is valuable because if a page is not in
* the tree and its index is higher than @max's, then we can lock it
* without breaking the locking order rule.
*
* Note on naming: 'struct page_set' would be shorter, but we already have a few
* page_set_*() helpers, so page_collection is used instead to avoid confusion.
*
* See also: page_collection_lock().
*/
struct page_collection {
GTree *tree;
struct page_entry *max;
};
/* list iterators for lists of tagged pointers in TranslationBlock */
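/*
 * Each link in these lists is a tagged pointer: bit 0 holds the index n
 * (0 or 1) of the page slot through which the next TB is linked, since a TB
 * crossing a page boundary sits on two per-page lists at once.  The iterator
 * masks the tag off to recover the TranslationBlock pointer and keeps n so
 * it knows which field[n] to follow on the next step.
 */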
#define TB_FOR_EACH_TAGGED(head, tb, n, field) \
for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \
tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
tb = (TranslationBlock *)((uintptr_t)tb & ~1))
#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \
TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
#define TB_FOR_EACH_JMP(head_tb, tb, n) \
TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
/*
* In system mode we want L1_MAP to be based on ram offsets,
* while in user mode we want it to be based on virtual addresses.
*
* TODO: For user mode, see the caveat re host vs guest virtual
* address spaces near GUEST_ADDR_MAX.
*/
#if !defined(CONFIG_USER_ONLY)
#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS
#else
# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS
#endif
#else
# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
#endif
/* Size of the L2 (and L3, etc) page tables. */
#define V_L2_BITS 10
#define V_L2_SIZE (1 << V_L2_BITS)
/* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
sizeof_field(TranslationBlock, trace_vcpu_dstate)
* BITS_PER_BYTE);
/*
* L1 Mapping properties
*/
static int v_l1_size;
static int v_l1_shift;
static int v_l2_levels;
/* The bottom level has pointers to PageDesc, and is indexed by
* anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
*/
#define V_L1_MIN_BITS 4
#define V_L1_MAX_BITS (V_L2_BITS + 3)
#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS)
static void *l1_map[V_L1_MAX_SIZE];
TBContext tb_ctx;
static void page_table_config_init(void)
{
uint32_t v_l1_bits;
assert(TARGET_PAGE_BITS);
/* The bits remaining after N lower levels of page tables. */
v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
if (v_l1_bits < V_L1_MIN_BITS) {
v_l1_bits += V_L2_BITS;
}
v_l1_size = 1 << v_l1_bits;
v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
v_l2_levels = v_l1_shift / V_L2_BITS - 1;
assert(v_l1_bits <= V_L1_MAX_BITS);
assert(v_l1_shift % V_L2_BITS == 0);
assert(v_l2_levels >= 0);
}
/* Encode VAL as a signed leb128 sequence at P.
Return P incremented past the encoded value. */
static uint8_t *encode_sleb128(uint8_t *p, target_long val)
{
int more, byte;
do {
byte = val & 0x7f;
val >>= 7;
more = !((val == 0 && (byte & 0x40) == 0)
|| (val == -1 && (byte & 0x40) != 0));
if (more) {
byte |= 0x80;
}
*p++ = byte;
} while (more);
return p;
}
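/* Examples: 64 (0x40) encodes as 0xc0 0x00, because bit 6 of a final byte
   would otherwise be read as the sign bit; -2 encodes as the single
   byte 0x7e. */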
/* Decode a signed leb128 sequence at *PP; increment *PP past the
decoded value. Return the decoded value. */
static target_long decode_sleb128(const uint8_t **pp)
{
const uint8_t *p = *pp;
target_long val = 0;
int byte, shift = 0;
do {
byte = *p++;
val |= (target_ulong)(byte & 0x7f) << shift;
shift += 7;
} while (byte & 0x80);
if (shift < TARGET_LONG_BITS && (byte & 0x40)) {
val |= -(target_ulong)1 << shift;
}
*pp = p;
return val;
}
/* Encode the data collected about the instructions while compiling TB.
Place the data at BLOCK, and return the number of bytes consumed.
The logical table consists of TARGET_INSN_START_WORDS target_ulong's,
which come from the target's insn_start data, followed by a uintptr_t
which comes from the host pc of the end of the code implementing the insn.
Each line of the table is encoded as sleb128 deltas from the previous
line. The seed for the first line is { tb->pc, 0..., tb->tc.ptr }.
That is, the first column is seeded with the guest pc, the last column
with the host pc, and the middle columns with zeros. */
static int encode_search(TranslationBlock *tb, uint8_t *block)
{
uint8_t *highwater = tcg_ctx->code_gen_highwater;
uint8_t *p = block;
int i, j, n;
for (i = 0, n = tb->icount; i < n; ++i) {
target_ulong prev;
for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
if (i == 0) {
prev = (j == 0 ? tb->pc : 0);
} else {
prev = tcg_ctx->gen_insn_data[i - 1][j];
}
p = encode_sleb128(p, tcg_ctx->gen_insn_data[i][j] - prev);
}
prev = (i == 0 ? 0 : tcg_ctx->gen_insn_end_off[i - 1]);
p = encode_sleb128(p, tcg_ctx->gen_insn_end_off[i] - prev);
/* Test for (pending) buffer overflow. The assumption is that any
one row beginning below the high water mark cannot overrun
the buffer completely. Thus we can test for overflow after
encoding a row without having to check during encoding. */
if (unlikely(p > highwater)) {
return -1;
}
}
return p - block;
}
/* The cpu state corresponding to 'searched_pc' is restored.
* When reset_icount is true, current TB will be interrupted and
* icount should be recalculated.
*/
static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t searched_pc, bool reset_icount)
{
target_ulong data[TARGET_INSN_START_WORDS] = { tb->pc };
uintptr_t host_pc = (uintptr_t)tb->tc.ptr;
CPUArchState *env = cpu->env_ptr;
const uint8_t *p = tb->tc.ptr + tb->tc.size;
int i, j, num_insns = tb->icount;
#ifdef CONFIG_PROFILER
TCGProfile *prof = &tcg_ctx->prof;
int64_t ti = profile_getclock();
#endif
searched_pc -= GETPC_ADJ;
if (searched_pc < host_pc) {
return -1;
}
/* Reconstruct the stored insn data while looking for the point at
which the end of the insn exceeds the searched_pc. */
for (i = 0; i < num_insns; ++i) {
for (j = 0; j < TARGET_INSN_START_WORDS; ++j) {
data[j] += decode_sleb128(&p);
}
host_pc += decode_sleb128(&p);
if (host_pc > searched_pc) {
goto found;
}
}
return -1;
found:
if (reset_icount && (tb_cflags(tb) & CF_USE_ICOUNT)) {
assert(icount_enabled());
/* Reset the cycle counter to the start of the block
and shift it to the number of actually executed instructions */
cpu_neg(cpu)->icount_decr.u16.low += num_insns - i;
}
restore_state_to_opc(env, tb, data);
#ifdef CONFIG_PROFILER
qatomic_set(&prof->restore_time,
prof->restore_time + profile_getclock() - ti);
qatomic_set(&prof->restore_count, prof->restore_count + 1);
#endif
return 0;
}
bool cpu_restore_state(CPUState *cpu, uintptr_t host_pc, bool will_exit)
{
/*
* The host_pc has to be in the rx region of the code buffer.
* If it is not we will not be able to resolve it here.
* The two cases where host_pc will not be correct are:
*
* - fault during translation (instruction fetch)
* - fault from helper (not using GETPC() macro)
*
* Either way we need return early as we can't resolve it here.
*/
if (in_code_gen_buffer((const void *)(host_pc - tcg_splitwx_diff))) {
TranslationBlock *tb = tcg_tb_lookup(host_pc);
if (tb) {
cpu_restore_state_from_tb(cpu, tb, host_pc, will_exit);
return true;
}
}
return false;
}
void page_init(void)
{
page_size_init();
page_table_config_init();
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
{
#ifdef HAVE_KINFO_GETVMMAP
struct kinfo_vmentry *freep;
int i, cnt;
freep = kinfo_getvmmap(getpid(), &cnt);
if (freep) {
mmap_lock();
for (i = 0; i < cnt; i++) {
unsigned long startaddr, endaddr;
startaddr = freep[i].kve_start;
endaddr = freep[i].kve_end;
if (h2g_valid(startaddr)) {
startaddr = h2g(startaddr) & TARGET_PAGE_MASK;
if (h2g_valid(endaddr)) {
endaddr = h2g(endaddr);
page_set_flags(startaddr, endaddr, PAGE_RESERVED);
} else {
#if TARGET_ABI_BITS <= L1_MAP_ADDR_SPACE_BITS
endaddr = ~0ul;
page_set_flags(startaddr, endaddr, PAGE_RESERVED);
#endif
}
}
}
free(freep);
mmap_unlock();
}
#else
FILE *f;
last_brk = (unsigned long)sbrk(0);
f = fopen("/compat/linux/proc/self/maps", "r");
if (f) {
mmap_lock();
do {
unsigned long startaddr, endaddr;
int n;
n = fscanf(f, "%lx-%lx %*[^\n]\n", &startaddr, &endaddr);
if (n == 2 && h2g_valid(startaddr)) {
startaddr = h2g(startaddr) & TARGET_PAGE_MASK;
if (h2g_valid(endaddr)) {
endaddr = h2g(endaddr);
} else {
endaddr = ~0ul;
}
page_set_flags(startaddr, endaddr, PAGE_RESERVED);
}
} while (!feof(f));
fclose(f);
mmap_unlock();
}
#endif
}
#endif
}
static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
{
PageDesc *pd;
void **lp;
int i;
/* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
/* Level 2..N-1. */
for (i = v_l2_levels; i > 0; i--) {
void **p = qatomic_rcu_read(lp);
if (p == NULL) {
void *existing;
if (!alloc) {
return NULL;
}
p = g_new0(void *, V_L2_SIZE);
existing = qatomic_cmpxchg(lp, NULL, p);
if (unlikely(existing)) {
g_free(p);
p = existing;
}
}
lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
}
pd = qatomic_rcu_read(lp);
if (pd == NULL) {
void *existing;
if (!alloc) {
return NULL;
}
pd = g_new0(PageDesc, V_L2_SIZE);
#ifndef CONFIG_USER_ONLY
{
int i;
for (i = 0; i < V_L2_SIZE; i++) {
qemu_spin_init(&pd[i].lock);
}
}
#endif
existing = qatomic_cmpxchg(lp, NULL, pd);
if (unlikely(existing)) {
#ifndef CONFIG_USER_ONLY
{
int i;
for (i = 0; i < V_L2_SIZE; i++) {
qemu_spin_destroy(&pd[i].lock);
}
}
#endif
g_free(pd);
pd = existing;
}
}
return pd + (index & (V_L2_SIZE - 1));
}
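/*
* Sketch of how the radix walk above splits @index (the shift and size
* constants are defined earlier in this file; the concrete values here are
* only illustrative). Assuming V_L2_BITS == 10 and two intermediate levels:
*
*     l1_slot = (index >> v_l1_shift) & (v_l1_size - 1);
*     slot_2  = (index >> (2 * V_L2_BITS)) & (V_L2_SIZE - 1);  // i == 2
*     slot_1  = (index >> (1 * V_L2_BITS)) & (V_L2_SIZE - 1);  // i == 1
*     leaf    = index & (V_L2_SIZE - 1);                       // PageDesc slot
*
* Missing interior nodes are allocated lazily and published with
* qatomic_cmpxchg(), so RCU readers never observe a half-initialized level.
*/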
static inline PageDesc *page_find(tb_page_addr_t index)
{
return page_find_alloc(index, 0);
}
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, int alloc);
/* In user-mode page locks aren't used; mmap_lock is enough */
#ifdef CONFIG_USER_ONLY
#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
static inline void page_lock(PageDesc *pd)
{ }
static inline void page_unlock(PageDesc *pd)
{ }
static inline void page_lock_tb(const TranslationBlock *tb)
{ }
static inline void page_unlock_tb(const TranslationBlock *tb)
{ }
struct page_collection *
page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
{
return NULL;
}
void page_collection_unlock(struct page_collection *set)
{ }
#else /* !CONFIG_USER_ONLY */
#ifdef CONFIG_DEBUG_TCG
static __thread GHashTable *ht_pages_locked_debug;
static void ht_pages_locked_debug_init(void)
{
if (ht_pages_locked_debug) {
return;
}
ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
}
static bool page_is_locked(const PageDesc *pd)
{
PageDesc *found;
ht_pages_locked_debug_init();
found = g_hash_table_lookup(ht_pages_locked_debug, pd);
return !!found;
}
static void page_lock__debug(PageDesc *pd)
{
ht_pages_locked_debug_init();
g_assert(!page_is_locked(pd));
g_hash_table_insert(ht_pages_locked_debug, pd, pd);
}
static void page_unlock__debug(const PageDesc *pd)
{
bool removed;
ht_pages_locked_debug_init();
g_assert(page_is_locked(pd));
removed = g_hash_table_remove(ht_pages_locked_debug, pd);
g_assert(removed);
}
static void
do_assert_page_locked(const PageDesc *pd, const char *file, int line)
{
if (unlikely(!page_is_locked(pd))) {
error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
pd, file, line);
abort();
}
}
#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
void assert_no_pages_locked(void)
{
ht_pages_locked_debug_init();
g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
}
#else /* !CONFIG_DEBUG_TCG */
#define assert_page_locked(pd)
static inline void page_lock__debug(const PageDesc *pd)
{
}
static inline void page_unlock__debug(const PageDesc *pd)
{
}
#endif /* CONFIG_DEBUG_TCG */
static inline void page_lock(PageDesc *pd)
{
page_lock__debug(pd);
qemu_spin_lock(&pd->lock);
}
static inline void page_unlock(PageDesc *pd)
{
qemu_spin_unlock(&pd->lock);
page_unlock__debug(pd);
}
/* lock the page(s) of a TB in the correct acquisition order */
static inline void page_lock_tb(const TranslationBlock *tb)
{
page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], 0);
}
static inline void page_unlock_tb(const TranslationBlock *tb)
{
PageDesc *p1 = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
page_unlock(p1);
if (unlikely(tb->page_addr[1] != -1)) {
PageDesc *p2 = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
if (p2 != p1) {
page_unlock(p2);
}
}
}
static inline struct page_entry *
page_entry_new(PageDesc *pd, tb_page_addr_t index)
{
struct page_entry *pe = g_malloc(sizeof(*pe));
pe->index = index;
pe->pd = pd;
pe->locked = false;
return pe;
}
static void page_entry_destroy(gpointer p)
{
struct page_entry *pe = p;
g_assert(pe->locked);
page_unlock(pe->pd);
g_free(pe);
}
/* returns false on success */
static bool page_entry_trylock(struct page_entry *pe)
{
bool busy;
busy = qemu_spin_trylock(&pe->pd->lock);
if (!busy) {
g_assert(!pe->locked);
pe->locked = true;
page_lock__debug(pe->pd);
}
return busy;
}
static void do_page_entry_lock(struct page_entry *pe)
{
page_lock(pe->pd);
g_assert(!pe->locked);
pe->locked = true;
}
static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
{
struct page_entry *pe = value;
do_page_entry_lock(pe);
return FALSE;
}
static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
{
struct page_entry *pe = value;
if (pe->locked) {
pe->locked = false;
page_unlock(pe->pd);
}
return FALSE;
}
/*
* Trylock a page, and if successful, add the page to a collection.
* Returns true ("busy") if the page could not be locked; false otherwise.
*/
static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
{
tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
struct page_entry *pe;
PageDesc *pd;
pe = g_tree_lookup(set->tree, &index);
if (pe) {
return false;
}
pd = page_find(index);
if (pd == NULL) {
return false;
}
pe = page_entry_new(pd, index);
g_tree_insert(set->tree, &pe->index, pe);
/*
* If this is either (1) the first insertion or (2) a page whose index
* is higher than any other so far, just lock the page and move on.
*/
if (set->max == NULL || pe->index > set->max->index) {
set->max = pe;
do_page_entry_lock(pe);
return false;
}
/*
* Try to acquire out-of-order lock; if busy, return busy so that we acquire
* locks in order.
*/
return page_entry_trylock(pe);
}
static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
{
tb_page_addr_t a = *(const tb_page_addr_t *)ap;
tb_page_addr_t b = *(const tb_page_addr_t *)bp;
if (a == b) {
return 0;
} else if (a < b) {
return -1;
}
return 1;
}
/*
* Lock a range of pages ([@start,@end[) as well as the pages of all
* intersecting TBs.
* Locking order: acquire locks in ascending order of page index.
*/
struct page_collection *
page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
{
struct page_collection *set = g_malloc(sizeof(*set));
tb_page_addr_t index;
PageDesc *pd;
start >>= TARGET_PAGE_BITS;
end >>= TARGET_PAGE_BITS;
g_assert(start <= end);
set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
page_entry_destroy);
set->max = NULL;
assert_no_pages_locked();
retry:
g_tree_foreach(set->tree, page_entry_lock, NULL);
for (index = start; index <= end; index++) {
TranslationBlock *tb;
int n;
pd = page_find(index);
if (pd == NULL) {
continue;
}
if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
g_tree_foreach(set->tree, page_entry_unlock, NULL);
goto retry;
}
assert_page_locked(pd);
PAGE_FOR_EACH_TB(pd, tb, n) {
if (page_trylock_add(set, tb->page_addr[0]) ||
(tb->page_addr[1] != -1 &&
page_trylock_add(set, tb->page_addr[1]))) {
/* drop all locks, and reacquire in order */
g_tree_foreach(set->tree, page_entry_unlock, NULL);
goto retry;
}
}
}
return set;
}
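/*
* Usage sketch, mirroring the callers later in this file: lock the page
* range, do the work while every affected PageDesc is held, then release
* everything at once:
*
*     struct page_collection *pages = page_collection_lock(start, end);
*     ... invalidate or modify TBs intersecting [start, end[ ...
*     page_collection_unlock(pages);
*
* Whenever an out-of-order trylock fails, the loop above drops all locks
* and restarts, so locks are ultimately taken in ascending page-index
* order and deadlock is avoided.
*/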
void page_collection_unlock(struct page_collection *set)
{
/* entries are unlocked and freed via page_entry_destroy */
g_tree_destroy(set->tree);
g_free(set);
}
#endif /* !CONFIG_USER_ONLY */
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, int alloc)
{
PageDesc *p1, *p2;
tb_page_addr_t page1;
tb_page_addr_t page2;
assert_memory_lock();
g_assert(phys1 != -1);
page1 = phys1 >> TARGET_PAGE_BITS;
page2 = phys2 >> TARGET_PAGE_BITS;
p1 = page_find_alloc(page1, alloc);
if (ret_p1) {
*ret_p1 = p1;
}
if (likely(phys2 == -1)) {
page_lock(p1);
return;
} else if (page1 == page2) {
page_lock(p1);
if (ret_p2) {
*ret_p2 = p1;
}
return;
}
p2 = page_find_alloc(page2, alloc);
if (ret_p2) {
*ret_p2 = p2;
}
if (page1 < page2) {
page_lock(p1);
page_lock(p2);
} else {
page_lock(p2);
page_lock(p1);
}
}
static bool tb_cmp(const void *ap, const void *bp)
{
const TranslationBlock *a = ap;
const TranslationBlock *b = bp;
return a->pc == b->pc &&
a->cs_base == b->cs_base &&
a->flags == b->flags &&
(tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
a->trace_vcpu_dstate == b->trace_vcpu_dstate &&
a->page_addr[0] == b->page_addr[0] &&
a->page_addr[1] == b->page_addr[1];
}
void tb_htable_init(void)
{
unsigned int mode = QHT_MODE_AUTO_RESIZE;
qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
}
/* call with @p->lock held */
static inline void invalidate_page_bitmap(PageDesc *p)
{
assert_page_locked(p);
#ifdef CONFIG_SOFTMMU
g_free(p->code_bitmap);
p->code_bitmap = NULL;
p->code_write_count = 0;
#endif
}
/* Set to NULL all the 'first_tb' fields in all PageDescs. */
static void page_flush_tb_1(int level, void **lp)
{
int i;
if (*lp == NULL) {
return;
}
if (level == 0) {
PageDesc *pd = *lp;
for (i = 0; i < V_L2_SIZE; ++i) {
page_lock(&pd[i]);
pd[i].first_tb = (uintptr_t)NULL;
invalidate_page_bitmap(pd + i);
page_unlock(&pd[i]);
}
} else {
void **pp = *lp;
for (i = 0; i < V_L2_SIZE; ++i) {
page_flush_tb_1(level - 1, pp + i);
}
}
}
static void page_flush_tb(void)
{
int i, l1_sz = v_l1_size;
for (i = 0; i < l1_sz; i++) {
page_flush_tb_1(v_l2_levels, l1_map + i);
}
}
static gboolean tb_host_size_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
size_t *size = data;
*size += tb->tc.size;
return false;
}
/* flush all the translation blocks */
static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
{
bool did_flush = false;
mmap_lock();
/* If it has already been done on request of another CPU,
* just retry.
*/
if (tb_ctx.tb_flush_count != tb_flush_count.host_int) {
goto done;
}
did_flush = true;
if (DEBUG_TB_FLUSH_GATE) {
size_t nb_tbs = tcg_nb_tbs();
size_t host_size = 0;
tcg_tb_foreach(tb_host_size_iter, &host_size);
printf("qemu: flush code_size=%zu nb_tbs=%zu avg_tb_size=%zu\n",
tcg_code_size(), nb_tbs, nb_tbs > 0 ? host_size / nb_tbs : 0);
}
CPU_FOREACH(cpu) {
cpu_tb_jmp_cache_clear(cpu);
}
qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
page_flush_tb();
tcg_region_reset_all();
/* XXX: flush processor icache at this point if cache flush is
expensive */
qatomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1);
done:
mmap_unlock();
if (did_flush) {
qemu_plugin_flush_cb();
}
}
void tb_flush(CPUState *cpu)
{
if (tcg_enabled()) {
unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count);
if (cpu_in_exclusive_context(cpu)) {
do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
} else {
async_safe_run_on_cpu(cpu, do_tb_flush,
RUN_ON_CPU_HOST_INT(tb_flush_count));
}
}
}
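/*
* tb_flush() defers the actual flush via async_safe_run_on_cpu() unless the
* caller is already in an exclusive context. tb_flush_count acts as a
* generation counter: if several vCPUs request a flush concurrently,
* do_tb_flush() performs it once and the remaining requests observe that
* the count has already moved on and return without flushing again.
*/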
/*
* Formerly ifdef DEBUG_TB_CHECK. These debug functions are user-mode-only,
* so in order to prevent bit rot we compile them unconditionally in user-mode,
* and let the optimizer get rid of them by wrapping their user-only callers
* with if (DEBUG_TB_CHECK_GATE).
*/
#ifdef CONFIG_USER_ONLY
static void do_tb_invalidate_check(void *p, uint32_t hash, void *userp)
{
TranslationBlock *tb = p;
target_ulong addr = *(target_ulong *)userp;
if (!(addr + TARGET_PAGE_SIZE <= tb->pc || addr >= tb->pc + tb->size)) {
printf("ERROR invalidate: address=" TARGET_FMT_lx
" PC=%08lx size=%04x\n", addr, (long)tb->pc, tb->size);
}
}
/* verify that all the pages have correct rights for code
*
* Called with mmap_lock held.
*/
static void tb_invalidate_check(target_ulong address)
{
address &= TARGET_PAGE_MASK;
qht_iter(&tb_ctx.htable, do_tb_invalidate_check, &address);
}
static void do_tb_page_check(void *p, uint32_t hash, void *userp)
{
TranslationBlock *tb = p;
int flags1, flags2;
flags1 = page_get_flags(tb->pc);
flags2 = page_get_flags(tb->pc + tb->size - 1);
if ((flags1 & PAGE_WRITE) || (flags2 & PAGE_WRITE)) {
printf("ERROR page flags: PC=%08lx size=%04x f1=%x f2=%x\n",
(long)tb->pc, tb->size, flags1, flags2);
}
}
/* verify that all the pages have correct rights for code */
static void tb_page_check(void)
{
qht_iter(&tb_ctx.htable, do_tb_page_check, NULL);
}
#endif /* CONFIG_USER_ONLY */
/*
* user-mode: call with mmap_lock held
* !user-mode: call with @pd->lock held
*/
static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
{
TranslationBlock *tb1;
uintptr_t *pprev;
unsigned int n1;
assert_page_locked(pd);
pprev = &pd->first_tb;
PAGE_FOR_EACH_TB(pd, tb1, n1) {
if (tb1 == tb) {
*pprev = tb1->page_next[n1];
return;
}
pprev = &tb1->page_next[n1];
}
g_assert_not_reached();
}
/* remove @orig from its @n_orig-th jump list */
static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
{
uintptr_t ptr, ptr_locked;
TranslationBlock *dest;
TranslationBlock *tb;
uintptr_t *pprev;
int n;
/* mark the LSB of jmp_dest[] so that no further jumps can be inserted */
ptr = qatomic_or_fetch(&orig->jmp_dest[n_orig], 1);
dest = (TranslationBlock *)(ptr & ~1);
if (dest == NULL) {
return;
}
qemu_spin_lock(&dest->jmp_lock);
/*
* While acquiring the lock, the jump might have been removed if the
* destination TB was invalidated; check again.
*/
ptr_locked = qatomic_read(&orig->jmp_dest[n_orig]);
if (ptr_locked != ptr) {
qemu_spin_unlock(&dest->jmp_lock);
/*
* The only possibility is that the jump was unlinked via
* tb_jump_unlink(dest). Seeing another destination here would be a bug,
* because we set the LSB above.
*/
g_assert(ptr_locked == 1 && dest->cflags & CF_INVALID);
return;
}
/*
* We first acquired the lock, and since the destination pointer matches,
* we know for sure that @orig is in the jmp list.
*/
pprev = &dest->jmp_list_head;
TB_FOR_EACH_JMP(dest, tb, n) {
if (tb == orig && n == n_orig) {
*pprev = tb->jmp_list_next[n];
/* no need to set orig->jmp_dest[n]; setting the LSB was enough */
qemu_spin_unlock(&dest->jmp_lock);
return;
}
pprev = &tb->jmp_list_next[n];
}
g_assert_not_reached();
}
/* reset the jump entry 'n' of a TB so that it is not chained to
another TB */
static inline void tb_reset_jump(TranslationBlock *tb, int n)
{
uintptr_t addr = (uintptr_t)(tb->tc.ptr + tb->jmp_reset_offset[n]);
tb_set_jmp_target(tb, n, addr);
}
/* remove any jumps to the TB */
static inline void tb_jmp_unlink(TranslationBlock *dest)
{
TranslationBlock *tb;
int n;
qemu_spin_lock(&dest->jmp_lock);
TB_FOR_EACH_JMP(dest, tb, n) {
tb_reset_jump(tb, n);
qatomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1);
/* No need to clear the list entry; setting the dest ptr is enough */
}
dest->jmp_list_head = (uintptr_t)NULL;
qemu_spin_unlock(&dest->jmp_lock);
}
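/*
* Together with tb_remove_from_jmp_list() above, this implements the
* jmp_dest[] protocol: the low bit of jmp_dest[n] set means "no further
* chaining allowed". tb_jmp_unlink() leaves that bit set while clearing
* the destination pointer, so a concurrent tb_remove_from_jmp_list() that
* later reads ptr_locked == 1 knows the unlink already happened and
* returns without walking the list.
*/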
/*
* In user-mode, call with mmap_lock held.
* In !user-mode, if @rm_from_page_list is set, call with the TB's pages'
* locks held.
*/
static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
{
CPUState *cpu;
PageDesc *p;
uint32_t h;
tb_page_addr_t phys_pc;
uint32_t orig_cflags = tb_cflags(tb);
assert_memory_lock();
/* make sure no further incoming jumps will be chained to this TB */
qemu_spin_lock(&tb->jmp_lock);
qatomic_set(&tb->cflags, tb->cflags | CF_INVALID);
qemu_spin_unlock(&tb->jmp_lock);
/* remove the TB from the hash list */
phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
h = tb_hash_func(phys_pc, tb->pc, tb->flags, orig_cflags,
tb->trace_vcpu_dstate);
if (!qht_remove(&tb_ctx.htable, tb, h)) {
return;
}
/* remove the TB from the page list */
if (rm_from_page_list) {
p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
tb_page_remove(p, tb);
invalidate_page_bitmap(p);
if (tb->page_addr[1] != -1) {
p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
tb_page_remove(p, tb);
invalidate_page_bitmap(p);
}
}
/* remove the TB from the per-CPU tb_jmp_cache */
h = tb_jmp_cache_hash_func(tb->pc);
CPU_FOREACH(cpu) {
if (qatomic_read(&cpu->tb_jmp_cache[h]) == tb) {
qatomic_set(&cpu->tb_jmp_cache[h], NULL);
}
}
/* suppress this TB from the two jump lists */
tb_remove_from_jmp_list(tb, 0);
tb_remove_from_jmp_list(tb, 1);
/* suppress any remaining jumps to this TB */
tb_jmp_unlink(tb);
qatomic_set(&tb_ctx.tb_phys_invalidate_count,
tb_ctx.tb_phys_invalidate_count + 1);
}
static void tb_phys_invalidate__locked(TranslationBlock *tb)
{
qemu_thread_jit_write();
do_tb_phys_invalidate(tb, true);
qemu_thread_jit_execute();
}
/* invalidate one TB
*
* Called with mmap_lock held in user-mode.
*/
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
{
if (page_addr == -1 && tb->page_addr[0] != -1) {
page_lock_tb(tb);
do_tb_phys_invalidate(tb, true);
page_unlock_tb(tb);
} else {
do_tb_phys_invalidate(tb, false);
}
}
#ifdef CONFIG_SOFTMMU
/* call with @p->lock held */
static void build_page_bitmap(PageDesc *p)
{
int n, tb_start, tb_end;
TranslationBlock *tb;
assert_page_locked(p);
p->code_bitmap = bitmap_new(TARGET_PAGE_SIZE);
PAGE_FOR_EACH_TB(p, tb, n) {
/* NOTE: this is subtle as a TB may span two physical pages */
if (n == 0) {
/* NOTE: tb_end may be after the end of the page, but
it is not a problem */
tb_start = tb->pc & ~TARGET_PAGE_MASK;
tb_end = tb_start + tb->size;
if (tb_end > TARGET_PAGE_SIZE) {
tb_end = TARGET_PAGE_SIZE;
}
} else {
tb_start = 0;
tb_end = ((tb->pc + tb->size) & ~TARGET_PAGE_MASK);
}
bitmap_set(p->code_bitmap, tb_start, tb_end - tb_start);
}
}
#endif
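/*
* The bitmap built above records which bytes of a target page are covered
* by translated code. tb_invalidate_phys_page_fast() below consults it so
* that small writes which hit no TB can skip the invalidation path, e.g.
* for a 4-byte write at page offset @nr:
*
*     b = p->code_bitmap[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1));
*     if (b & ((1 << 4) - 1)) {
*         ... at least one of those bytes belongs to a TB: invalidate ...
*     }
*/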
/* add the tb to the target page and protect it if necessary
*
* Called with mmap_lock held for user-mode emulation.
* Called with @p->lock held in !user-mode.
*/
static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
unsigned int n, tb_page_addr_t page_addr)
{
#ifndef CONFIG_USER_ONLY
bool page_already_protected;
#endif
assert_page_locked(p);
tb->page_addr[n] = page_addr;
tb->page_next[n] = p->first_tb;
#ifndef CONFIG_USER_ONLY
page_already_protected = p->first_tb != (uintptr_t)NULL;
#endif
p->first_tb = (uintptr_t)tb | n;
invalidate_page_bitmap(p);
#if defined(CONFIG_USER_ONLY)
/* translator_loop() must have made all TB pages non-writable */
assert(!(p->flags & PAGE_WRITE));
#else
/* if some code is already present, then the pages are already
protected. So we handle the case where only the first TB is
allocated in a physical page */
if (!page_already_protected) {
tlb_protect_code(page_addr);
}
#endif
}
/*
* Add a new TB and link it to the physical page tables. phys_page2 is
* (-1) to indicate that only one page contains the TB.
*
* Called with mmap_lock held for user-mode emulation.
*
* Returns a pointer @tb, or a pointer to an existing TB that matches @tb.
* Note that in !user-mode, another thread might have already added a TB
* for the same block of guest code that @tb corresponds to. In that case,
* the caller should discard the original @tb, and use instead the returned TB.
*/
static TranslationBlock *
tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2)
{
PageDesc *p;
PageDesc *p2 = NULL;
void *existing_tb = NULL;
uint32_t h;
assert_memory_lock();
tcg_debug_assert(!(tb->cflags & CF_INVALID));
/*
* Add the TB to the page list, acquiring the pages' locks first.
* We keep the locks held until after inserting the TB in the hash table,
* so that if the insertion fails we know for sure that the TBs are still
* in the page descriptors.
* Note that inserting into the hash table first isn't an option, since
* we can only insert TBs that are fully initialized.
*/
page_lock_pair(&p, phys_pc, &p2, phys_page2, 1);
tb_page_add(p, tb, 0, phys_pc & TARGET_PAGE_MASK);
if (p2) {
tb_page_add(p2, tb, 1, phys_page2);
} else {
tb->page_addr[1] = -1;
}
/* add in the hash table */
h = tb_hash_func(phys_pc, tb->pc, tb->flags, tb->cflags,
tb->trace_vcpu_dstate);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);
/* remove TB from the page(s) if we couldn't insert it */
if (unlikely(existing_tb)) {
tb_page_remove(p, tb);
invalidate_page_bitmap(p);
if (p2) {
tb_page_remove(p2, tb);
invalidate_page_bitmap(p2);
}
tb = existing_tb;
}
if (p2 && p2 != p) {
page_unlock(p2);
}
page_unlock(p);
#ifdef CONFIG_USER_ONLY
if (DEBUG_TB_CHECK_GATE) {
tb_page_check();
}
#endif
return tb;
}
//// --- Begin LibAFL code ---
static target_ulong reverse_bits(target_ulong num)
{
unsigned int count = sizeof(num) * 8 - 1;
target_ulong reverse_num = num;
num >>= 1;
while(num)
{
reverse_num <<= 1;
reverse_num |= num & 1;
num >>= 1;
count--;
}
reverse_num <<= count;
return reverse_num;
}
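/*
* reverse_bits() mirrors the bit pattern of a target_ulong; with a 64-bit
* target_ulong, for example:
*
*     reverse_bits(0x1) == 0x8000000000000000
*     reverse_bits(0x3) == 0xC000000000000000
*
* libafl_gen_edge() below XORs this value into the source block address,
* so exit 0 keeps the source address unchanged while each non-zero exit
* number only perturbs the top bits, giving every (block, exit) pair a
* distinct synthetic PC for its edge TB.
*/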
/* Called with mmap_lock held for user mode emulation. */
TranslationBlock *libafl_gen_edge(CPUState *cpu, target_ulong src_block,
target_ulong dst_block, int exit_n,
target_ulong cs_base, uint32_t flags,
int cflags)
{
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_page2;
target_ulong virt_page2;
tcg_insn_unit *gen_code_buf;
int gen_code_size, search_size, max_insns;
#ifdef CONFIG_PROFILER
TCGProfile *prof = &tcg_ctx->prof;
int64_t ti;
#endif
target_ulong pc = src_block ^ reverse_bits((target_ulong)exit_n);
(void)virt_page2;
(void)phys_page2;
(void)phys_pc;
(void)existing_tb;
assert_memory_lock();
struct libafl_edge_hook* hook = libafl_edge_hooks;
int no_exec_hook = 1;
while (hook) {
hook->cur_id = 0;
if (hook->gen)
hook->cur_id = hook->gen(src_block, dst_block, hook->data);
if (hook->cur_id != (uint64_t)-1 && hook->exec)
no_exec_hook = 0;
hook = hook->next;
}
if (no_exec_hook)
return NULL;
qemu_thread_jit_write();
phys_pc = get_page_addr_code(env, src_block);
phys_pc ^= reverse_bits((tb_page_addr_t)exit_n);
/* Generate a one-shot TB with at most 8 insns in it */
cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 8;
max_insns = cflags & CF_COUNT_MASK;
if (max_insns == 0) {
max_insns = TCG_MAX_INSNS;
}
QEMU_BUILD_BUG_ON(CF_COUNT_MASK + 1 != TCG_MAX_INSNS);
buffer_overflow1:
tb = tcg_tb_alloc(tcg_ctx);
if (unlikely(!tb)) {
/* flush must be done */
tb_flush(cpu);
mmap_unlock();
/* Make the execution loop process the flush as soon as possible. */
cpu->exception_index = EXCP_INTERRUPT;
cpu_loop_exit(cpu);
}
gen_code_buf = tcg_ctx->code_gen_ptr;
tb->tc.ptr = tcg_splitwx_to_rx(gen_code_buf);
tb->pc = pc;
tb->cs_base = cs_base;
tb->flags = flags;
tb->cflags = cflags;
tb->trace_vcpu_dstate = *cpu->trace_dstate;
tcg_ctx->tb_cflags = cflags;
#ifdef CONFIG_PROFILER
/* includes aborted translations because of exceptions */
qatomic_set(&prof->tb_count1, prof->tb_count1 + 1);
ti = profile_getclock();
#endif
tcg_func_start(tcg_ctx);
tcg_ctx->cpu = env_cpu(env);
hook = libafl_edge_hooks;
size_t hcount = 0;
while (hook) {
if (hook->cur_id != (uint64_t)-1 && hook->exec) {
hcount++;
TCGv_i64 tmp0 = tcg_const_i64(hook->cur_id);
TCGv_i64 tmp1 = tcg_const_i64(hook->data);
TCGTemp *tmp2[2] = { tcgv_i64_temp(tmp0), tcgv_i64_temp(tmp1) };
tcg_gen_callN(hook->exec, NULL, 2, tmp2);
tcg_temp_free_i64(tmp0);
tcg_temp_free_i64(tmp1);
}
hook = hook->next;
}
tcg_gen_goto_tb(0);
tcg_gen_exit_tb(tb, 0);
tb->size = hcount;
tb->icount = hcount;
assert(tb->size != 0);
tcg_ctx->cpu = NULL;
max_insns = tb->icount;
trace_translate_block(tb, tb->pc, tb->tc.ptr);
/* generate machine code */
tb->jmp_reset_offset[0] = TB_JMP_RESET_OFFSET_INVALID;
tb->jmp_reset_offset[1] = TB_JMP_RESET_OFFSET_INVALID;
tcg_ctx->tb_jmp_reset_offset = tb->jmp_reset_offset;
if (TCG_TARGET_HAS_direct_jump) {
tcg_ctx->tb_jmp_insn_offset = tb->jmp_target_arg;
tcg_ctx->tb_jmp_target_addr = NULL;
} else {
tcg_ctx->tb_jmp_insn_offset = NULL;
tcg_ctx->tb_jmp_target_addr = tb->jmp_target_arg;
}
#ifdef CONFIG_PROFILER
qatomic_set(&prof->tb_count, prof->tb_count + 1);
qatomic_set(&prof->interm_time,
prof->interm_time + profile_getclock() - ti);
ti = profile_getclock();
#endif
gen_code_size = tcg_gen_code(tcg_ctx, tb);
if (unlikely(gen_code_size < 0)) {
goto buffer_overflow1;
}
search_size = encode_search(tb, (void *)gen_code_buf + gen_code_size);
if (unlikely(search_size < 0)) {
goto buffer_overflow1;
}
tb->tc.size = gen_code_size;
#ifdef CONFIG_PROFILER
qatomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti);
qatomic_set(&prof->code_in_len, prof->code_in_len + tb->size);
qatomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size);
qatomic_set(&prof->search_out_len, prof->search_out_len + search_size);
#endif
qatomic_set(&tcg_ctx->code_gen_ptr, (void *)
ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
CODE_GEN_ALIGN));
/* init jump list */
qemu_spin_init(&tb->jmp_lock);
tb->jmp_list_head = (uintptr_t)NULL;
tb->jmp_list_next[0] = (uintptr_t)NULL;
tb->jmp_list_next[1] = (uintptr_t)NULL;
tb->jmp_dest[0] = (uintptr_t)NULL;
tb->jmp_dest[1] = (uintptr_t)NULL;
/* init original jump addresses which have been set during tcg_gen_code() */
if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
tb_reset_jump(tb, 0);
}
if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
tb_reset_jump(tb, 1);
}
tb->page_addr[0] = tb->page_addr[1] = -1;
return tb;
}
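/*
* The edge TB generated above contains no guest instructions: it only
* calls each registered edge hook's exec callback (passing the id produced
* by the hook's gen callback and the hook's data), then chains away via
* goto_tb/exit_tb. Its pc is the synthetic src_block ^ reverse_bits(exit_n)
* value, and page_addr[] stays at -1, so it is never linked into the
* physical page lists or the QHT hash table.
*/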
//// --- End LibAFL code ---
/* Called with mmap_lock held for user mode emulation. */
TranslationBlock *tb_gen_code(CPUState *cpu,
target_ulong pc, target_ulong cs_base,
uint32_t flags, int cflags)
{
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_page2;
target_ulong virt_page2;
tcg_insn_unit *gen_code_buf;
int gen_code_size, search_size, max_insns;
#ifdef CONFIG_PROFILER
TCGProfile *prof = &tcg_ctx->prof;
int64_t ti;
#endif
assert_memory_lock();
qemu_thread_jit_write();
phys_pc = get_page_addr_code(env, pc);
if (phys_pc == -1) {
/* Generate a one-shot TB with 1 insn in it */
cflags = (cflags & ~CF_COUNT_MASK) | CF_LAST_IO | 1;
}
max_insns = cflags & CF_COUNT_MASK;
if (max_insns == 0) {
max_insns = TCG_MAX_INSNS;
}
QEMU_BUILD_BUG_ON(CF_COUNT_MASK + 1 != TCG_MAX_INSNS);
buffer_overflow:
tb = tcg_tb_alloc(tcg_ctx);
if (unlikely(!tb)) {
/* flush must be done */
tb_flush(cpu);
mmap_unlock();
/* Make the execution loop process the flush as soon as possible. */
cpu->exception_index = EXCP_INTERRUPT;
cpu_loop_exit(cpu);
}
gen_code_buf = tcg_ctx->code_gen_ptr;
tb->tc.ptr = tcg_splitwx_to_rx(gen_code_buf);
tb->pc = pc;
tb->cs_base = cs_base;
tb->flags = flags;
tb->cflags = cflags;
tb->trace_vcpu_dstate = *cpu->trace_dstate;
tcg_ctx->tb_cflags = cflags;
tb_overflow:
#ifdef CONFIG_PROFILER
/* includes aborted translations because of exceptions */
qatomic_set(&prof->tb_count1, prof->tb_count1 + 1);
ti = profile_getclock();
#endif
gen_code_size = sigsetjmp(tcg_ctx->jmp_trans, 0);
if (unlikely(gen_code_size != 0)) {
goto error_return;
}
tcg_func_start(tcg_ctx);
tcg_ctx->cpu = env_cpu(env);
//// --- Begin LibAFL code ---
struct libafl_block_hook* hook = libafl_block_hooks;
while (hook) {
uint64_t cur_id = 0;
if (hook->gen)
cur_id = hook->gen(pc, hook->data);
if (cur_id != (uint64_t)-1 && hook->exec) {
TCGv_i64 tmp0 = tcg_const_i64(cur_id);
TCGv_i64 tmp1 = tcg_const_i64(hook->data);
TCGTemp *tmp2[2] = { tcgv_i64_temp(tmp0), tcgv_i64_temp(tmp1) };
tcg_gen_callN(hook->exec, NULL, 2, tmp2);
tcg_temp_free_i64(tmp0);
tcg_temp_free_i64(tmp1);
}
hook = hook->next;
}
//// --- End LibAFL code ---
gen_intermediate_code(cpu, tb, max_insns);
assert(tb->size != 0);
tcg_ctx->cpu = NULL;
max_insns = tb->icount;
trace_translate_block(tb, tb->pc, tb->tc.ptr);
/* generate machine code */
tb->jmp_reset_offset[0] = TB_JMP_RESET_OFFSET_INVALID;
tb->jmp_reset_offset[1] = TB_JMP_RESET_OFFSET_INVALID;
tcg_ctx->tb_jmp_reset_offset = tb->jmp_reset_offset;
if (TCG_TARGET_HAS_direct_jump) {
tcg_ctx->tb_jmp_insn_offset = tb->jmp_target_arg;
tcg_ctx->tb_jmp_target_addr = NULL;
} else {
tcg_ctx->tb_jmp_insn_offset = NULL;
tcg_ctx->tb_jmp_target_addr = tb->jmp_target_arg;
}
#ifdef CONFIG_PROFILER
qatomic_set(&prof->tb_count, prof->tb_count + 1);
qatomic_set(&prof->interm_time,
prof->interm_time + profile_getclock() - ti);
ti = profile_getclock();
#endif
gen_code_size = tcg_gen_code(tcg_ctx, tb);
if (unlikely(gen_code_size < 0)) {
error_return:
switch (gen_code_size) {
case -1:
/*
* Overflow of code_gen_buffer, or the current slice of it.
*
* TODO: We don't need to re-do gen_intermediate_code, nor
* should we re-do the tcg optimization currently hidden
* inside tcg_gen_code. All that should be required is to
* flush the TBs, allocate a new TB, re-initialize it per
* above, and re-do the actual code generation.
*/
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation for "
"code_gen_buffer overflow\n");
goto buffer_overflow;
case -2:
/*
* The code generated for the TranslationBlock is too large.
* The maximum size allowed by the unwind info is 64k.
* There may be stricter constraints from relocations
* in the tcg backend.
*
* Try again with half as many insns as we attempted this time.
* If a single insn overflows, there's a bug somewhere...
*/
assert(max_insns > 1);
max_insns /= 2;
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation with "
"smaller translation block (max %d insns)\n",
max_insns);
goto tb_overflow;
default:
g_assert_not_reached();
}
}
search_size = encode_search(tb, (void *)gen_code_buf + gen_code_size);
if (unlikely(search_size < 0)) {
goto buffer_overflow;
}
tb->tc.size = gen_code_size;
#ifdef CONFIG_PROFILER
qatomic_set(&prof->code_time, prof->code_time + profile_getclock() - ti);
qatomic_set(&prof->code_in_len, prof->code_in_len + tb->size);
qatomic_set(&prof->code_out_len, prof->code_out_len + gen_code_size);
qatomic_set(&prof->search_out_len, prof->search_out_len + search_size);
#endif
#ifdef DEBUG_DISAS
if (qemu_loglevel_mask(CPU_LOG_TB_OUT_ASM) &&
qemu_log_in_addr_range(tb->pc)) {
FILE *logfile = qemu_log_trylock();
if (logfile) {
int code_size, data_size;
const tcg_target_ulong *rx_data_gen_ptr;
size_t chunk_start;
int insn = 0;
if (tcg_ctx->data_gen_ptr) {
rx_data_gen_ptr = tcg_splitwx_to_rx(tcg_ctx->data_gen_ptr);
code_size = (const void *)rx_data_gen_ptr - tb->tc.ptr;
data_size = gen_code_size - code_size;
} else {
rx_data_gen_ptr = 0;
code_size = gen_code_size;
data_size = 0;
}
/* Dump header and the first instruction */
fprintf(logfile, "OUT: [size=%d]\n", gen_code_size);
fprintf(logfile,
" -- guest addr 0x" TARGET_FMT_lx " + tb prologue\n",
tcg_ctx->gen_insn_data[insn][0]);
chunk_start = tcg_ctx->gen_insn_end_off[insn];
disas(logfile, tb->tc.ptr, chunk_start);
/*
* Dump each instruction chunk, wrapping up empty chunks into
* the next instruction. The whole array is offset so the
* first entry is the beginning of the 2nd instruction.
*/
while (insn < tb->icount) {
size_t chunk_end = tcg_ctx->gen_insn_end_off[insn];
if (chunk_end > chunk_start) {
fprintf(logfile, " -- guest addr 0x" TARGET_FMT_lx "\n",
tcg_ctx->gen_insn_data[insn][0]);
disas(logfile, tb->tc.ptr + chunk_start,
chunk_end - chunk_start);
chunk_start = chunk_end;
}
insn++;
}
if (chunk_start < code_size) {
fprintf(logfile, " -- tb slow paths + alignment\n");
disas(logfile, tb->tc.ptr + chunk_start,
code_size - chunk_start);
}
/* Finally dump any data we may have after the block */
if (data_size) {
int i;
fprintf(logfile, " data: [size=%d]\n", data_size);
for (i = 0; i < data_size / sizeof(tcg_target_ulong); i++) {
if (sizeof(tcg_target_ulong) == 8) {
fprintf(logfile,
"0x%08" PRIxPTR ": .quad 0x%016" TCG_PRIlx "\n",
(uintptr_t)&rx_data_gen_ptr[i], rx_data_gen_ptr[i]);
} else if (sizeof(tcg_target_ulong) == 4) {
fprintf(logfile,
"0x%08" PRIxPTR ": .long 0x%08" TCG_PRIlx "\n",
(uintptr_t)&rx_data_gen_ptr[i], rx_data_gen_ptr[i]);
} else {
qemu_build_not_reached();
}
}
}
fprintf(logfile, "\n");
qemu_log_unlock(logfile);
}
}
#endif
qatomic_set(&tcg_ctx->code_gen_ptr, (void *)
ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
CODE_GEN_ALIGN));
/* init jump list */
qemu_spin_init(&tb->jmp_lock);
tb->jmp_list_head = (uintptr_t)NULL;
tb->jmp_list_next[0] = (uintptr_t)NULL;
tb->jmp_list_next[1] = (uintptr_t)NULL;
tb->jmp_dest[0] = (uintptr_t)NULL;
tb->jmp_dest[1] = (uintptr_t)NULL;
/* init original jump addresses which have been set during tcg_gen_code() */
if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
tb_reset_jump(tb, 0);
}
if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
tb_reset_jump(tb, 1);
}
/*
* If the TB is not associated with a physical RAM page then
* it must be a temporary one-insn TB, and we have nothing to do
* except fill in the page_addr[] fields. Return early before
* attempting to link to other TBs or add to the lookup table.
*/
if (phys_pc == -1) {
tb->page_addr[0] = tb->page_addr[1] = -1;
return tb;
}
/*
* Insert TB into the corresponding region tree before publishing it
* through QHT. Otherwise a rewind happening in the TB might fail to
* look itself up using the host PC.
*/
tcg_tb_insert(tb);
/* check next page if needed */
virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
phys_page2 = -1;
if ((pc & TARGET_PAGE_MASK) != virt_page2) {
phys_page2 = get_page_addr_code(env, virt_page2);
}
/*
* No explicit memory barrier is required -- tb_link_page() makes the
* TB visible in a consistent state.
*/
existing_tb = tb_link_page(tb, phys_pc, phys_page2);
/* if the TB already exists, discard what we just translated */
if (unlikely(existing_tb != tb)) {
uintptr_t orig_aligned = (uintptr_t)gen_code_buf;
orig_aligned -= ROUND_UP(sizeof(*tb), qemu_icache_linesize);
qatomic_set(&tcg_ctx->code_gen_ptr, (void *)orig_aligned);
tcg_tb_remove(tb);
return existing_tb;
}
return tb;
}
/*
* @p must be non-NULL.
* user-mode: call with mmap_lock held.
* !user-mode: call with all @pages locked.
*/
static void
tb_invalidate_phys_page_range__locked(struct page_collection *pages,
PageDesc *p, tb_page_addr_t start,
tb_page_addr_t end,
uintptr_t retaddr)
{
TranslationBlock *tb;
tb_page_addr_t tb_start, tb_end;
int n;
#ifdef TARGET_HAS_PRECISE_SMC
CPUState *cpu = current_cpu;
CPUArchState *env = NULL;
bool current_tb_not_found = retaddr != 0;
bool current_tb_modified = false;
TranslationBlock *current_tb = NULL;
target_ulong current_pc = 0;
target_ulong current_cs_base = 0;
uint32_t current_flags = 0;
#endif /* TARGET_HAS_PRECISE_SMC */
assert_page_locked(p);
#if defined(TARGET_HAS_PRECISE_SMC)
if (cpu != NULL) {
env = cpu->env_ptr;
}
#endif
/* we remove all the TBs in the range [start, end[ */
/* XXX: see if in some cases it could be faster to invalidate all
the code */
PAGE_FOR_EACH_TB(p, tb, n) {
assert_page_locked(p);
/* NOTE: this is subtle as a TB may span two physical pages */
if (n == 0) {
/* NOTE: tb_end may be after the end of the page, but
it is not a problem */
tb_start = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
tb_end = tb_start + tb->size;
} else {
tb_start = tb->page_addr[1];
tb_end = tb_start + ((tb->pc + tb->size) & ~TARGET_PAGE_MASK);
}
if (!(tb_end <= start || tb_start >= end)) {
#ifdef TARGET_HAS_PRECISE_SMC
if (current_tb_not_found) {
current_tb_not_found = false;
/* now we have a real cpu fault */
current_tb = tcg_tb_lookup(retaddr);
}
if (current_tb == tb &&
(tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
/*
* If we are modifying the current TB, we must stop
* its execution. We could be more precise by checking
* that the modification is after the current PC, but it
* would require a specialized function to partially
* restore the CPU state.
*/
current_tb_modified = true;
cpu_restore_state_from_tb(cpu, current_tb, retaddr, true);
cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
&current_flags);
}
#endif /* TARGET_HAS_PRECISE_SMC */
tb_phys_invalidate__locked(tb);
}
}
#if !defined(CONFIG_USER_ONLY)
/* if no code remaining, no need to continue to use slow writes */
if (!p->first_tb) {
invalidate_page_bitmap(p);
tlb_unprotect_code(start);
}
#endif
#ifdef TARGET_HAS_PRECISE_SMC
if (current_tb_modified) {
page_collection_unlock(pages);
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
mmap_unlock();
cpu_loop_exit_noexc(cpu);
}
#endif
}
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;end[. NOTE: start and end must refer to the *same* physical page.
*
* Called with mmap_lock held for user-mode emulation
*/
void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end)
{
struct page_collection *pages;
PageDesc *p;
assert_memory_lock();
p = page_find(start >> TARGET_PAGE_BITS);
if (p == NULL) {
return;
}
pages = page_collection_lock(start, end);
tb_invalidate_phys_page_range__locked(pages, p, start, end, 0);
page_collection_unlock(pages);
}
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;end[. NOTE: start and end may refer to *different* physical pages.
*
* Called with mmap_lock held for user-mode emulation.
*/
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end)
#else
void tb_invalidate_phys_range(target_ulong start, target_ulong end)
#endif
{
struct page_collection *pages;
tb_page_addr_t next;
assert_memory_lock();
pages = page_collection_lock(start, end);
for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
start < end;
start = next, next += TARGET_PAGE_SIZE) {
PageDesc *pd = page_find(start >> TARGET_PAGE_BITS);
tb_page_addr_t bound = MIN(next, end);
if (pd == NULL) {
continue;
}
tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
}
page_collection_unlock(pages);
}
#ifdef CONFIG_SOFTMMU
/* len must be <= 8 and start must be a multiple of len.
* Called via softmmu_template.h when code areas are written to with
* iothread mutex not held.
*
* Call with all @pages in the range [@start, @start + len[ locked.
*/
void tb_invalidate_phys_page_fast(struct page_collection *pages,
tb_page_addr_t start, int len,
uintptr_t retaddr)
{
PageDesc *p;
assert_memory_lock();
p = page_find(start >> TARGET_PAGE_BITS);
if (!p) {
return;
}
assert_page_locked(p);
if (!p->code_bitmap &&
++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
build_page_bitmap(p);
}
if (p->code_bitmap) {
unsigned int nr;
unsigned long b;
nr = start & ~TARGET_PAGE_MASK;
b = p->code_bitmap[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1));
if (b & ((1 << len) - 1)) {
goto do_invalidate;
}
} else {
do_invalidate:
tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
retaddr);
}
}
#else
/* Called with mmap_lock held. If pc is not 0 then it indicates the
* host PC of the faulting store instruction that caused this invalidate.
* Returns true if the caller needs to abort execution of the current
* TB (because it was modified by this store and the guest CPU has
* precise-SMC semantics).
*/
static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
{
TranslationBlock *tb;
PageDesc *p;
int n;
#ifdef TARGET_HAS_PRECISE_SMC
TranslationBlock *current_tb = NULL;
CPUState *cpu = current_cpu;
CPUArchState *env = NULL;
int current_tb_modified = 0;
target_ulong current_pc = 0;
target_ulong current_cs_base = 0;
uint32_t current_flags = 0;
#endif
assert_memory_lock();
addr &= TARGET_PAGE_MASK;
p = page_find(addr >> TARGET_PAGE_BITS);
if (!p) {
return false;
}
#ifdef TARGET_HAS_PRECISE_SMC
if (p->first_tb && pc != 0) {
current_tb = tcg_tb_lookup(pc);
}
if (cpu != NULL) {
env = cpu->env_ptr;
}
#endif
assert_page_locked(p);
PAGE_FOR_EACH_TB(p, tb, n) {
#ifdef TARGET_HAS_PRECISE_SMC
if (current_tb == tb &&
(tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
/* If we are modifying the current TB, we must stop
its execution. We could be more precise by checking
that the modification is after the current PC, but it
would require a specialized function to partially
restore the CPU state */
current_tb_modified = 1;
cpu_restore_state_from_tb(cpu, current_tb, pc, true);
cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
&current_flags);
}
#endif /* TARGET_HAS_PRECISE_SMC */
tb_phys_invalidate(tb, addr);
}
p->first_tb = (uintptr_t)NULL;
#ifdef TARGET_HAS_PRECISE_SMC
if (current_tb_modified) {
/* Force execution of one insn next time. */
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
return true;
}
#endif
return false;
}
#endif
/* user-mode: call with mmap_lock held */
void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
{
TranslationBlock *tb;
assert_memory_lock();
tb = tcg_tb_lookup(retaddr);
if (tb) {
/* We can use retranslation to find the PC. */
cpu_restore_state_from_tb(cpu, tb, retaddr, true);
tb_phys_invalidate(tb, -1);
} else {
/* The exception probably happened in a helper. The CPU state should
have been saved before calling it. Fetch the PC from there. */
CPUArchState *env = cpu->env_ptr;
target_ulong pc, cs_base;
tb_page_addr_t addr;
uint32_t flags;
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
addr = get_page_addr_code(env, pc);
if (addr != -1) {
tb_invalidate_phys_range(addr, addr + 1);
}
}
}
#ifndef CONFIG_USER_ONLY
/*
* In deterministic execution mode, instructions doing device I/Os
* must be at the end of the TB.
*
* Called by softmmu_template.h, with iothread mutex not held.
*/
void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
{
TranslationBlock *tb;
CPUClass *cc;
uint32_t n;
tb = tcg_tb_lookup(retaddr);
if (!tb) {
cpu_abort(cpu, "cpu_io_recompile: could not find TB for pc=%p",
(void *)retaddr);
}
cpu_restore_state_from_tb(cpu, tb, retaddr, true);
/*
* Some guests must re-execute the branch when re-executing a delay
* slot instruction. When this is the case, adjust icount and N
* to account for the re-execution of the branch.
*/
n = 1;
cc = CPU_GET_CLASS(cpu);
if (cc->tcg_ops->io_recompile_replay_branch &&
cc->tcg_ops->io_recompile_replay_branch(cpu, tb)) {
cpu_neg(cpu)->icount_decr.u16.low++;
n = 2;
}
/*
* Exit the loop and potentially generate a new TB executing just
* the I/O insns. We also limit instrumentation to memory
* operations only (which execute after completion) so we don't
* double instrument the instruction.
*/
cpu->cflags_next_tb = curr_cflags(cpu) | CF_MEMI_ONLY | CF_LAST_IO | n;
qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
"cpu_io_recompile: rewound execution of TB to "
TARGET_FMT_lx "\n", tb->pc);
cpu_loop_exit_noexc(cpu);
}
static void print_qht_statistics(struct qht_stats hst, GString *buf)
{
uint32_t hgram_opts;
size_t hgram_bins;
char *hgram;
if (!hst.head_buckets) {
return;
}
g_string_append_printf(buf, "TB hash buckets %zu/%zu "
"(%0.2f%% head buckets used)\n",
hst.used_head_buckets, hst.head_buckets,
(double)hst.used_head_buckets /
hst.head_buckets * 100);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_opts |= QDIST_PR_100X | QDIST_PR_PERCENT;
if (qdist_xmax(&hst.occupancy) - qdist_xmin(&hst.occupancy) == 1) {
hgram_opts |= QDIST_PR_NODECIMAL;
}
hgram = qdist_pr(&hst.occupancy, 10, hgram_opts);
g_string_append_printf(buf, "TB hash occupancy %0.2f%% avg chain occ. "
"Histogram: %s\n",
qdist_avg(&hst.occupancy) * 100, hgram);
g_free(hgram);
hgram_opts = QDIST_PR_BORDER | QDIST_PR_LABELS;
hgram_bins = qdist_xmax(&hst.chain) - qdist_xmin(&hst.chain);
if (hgram_bins > 10) {
hgram_bins = 10;
} else {
hgram_bins = 0;
hgram_opts |= QDIST_PR_NODECIMAL | QDIST_PR_NOBINRANGE;
}
hgram = qdist_pr(&hst.chain, hgram_bins, hgram_opts);
g_string_append_printf(buf, "TB hash avg chain %0.3f buckets. "
"Histogram: %s\n",
qdist_avg(&hst.chain), hgram);
g_free(hgram);
}
struct tb_tree_stats {
size_t nb_tbs;
size_t host_size;
size_t target_size;
size_t max_target_size;
size_t direct_jmp_count;
size_t direct_jmp2_count;
size_t cross_page;
};
static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
{
const TranslationBlock *tb = value;
struct tb_tree_stats *tst = data;
tst->nb_tbs++;
tst->host_size += tb->tc.size;
tst->target_size += tb->size;
if (tb->size > tst->max_target_size) {
tst->max_target_size = tb->size;
}
if (tb->page_addr[1] != -1) {
tst->cross_page++;
}
if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
tst->direct_jmp_count++;
if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
tst->direct_jmp2_count++;
}
}
return false;
}
void dump_exec_info(GString *buf)
{
struct tb_tree_stats tst = {};
struct qht_stats hst;
size_t nb_tbs, flush_full, flush_part, flush_elide;
tcg_tb_foreach(tb_tree_stats_iter, &tst);
nb_tbs = tst.nb_tbs;
/* XXX: avoid using doubles ? */
g_string_append_printf(buf, "Translation buffer state:\n");
/*
* Report total code size including the padding and TB structs;
* otherwise users might think "-accel tcg,tb-size" is not honoured.
* For avg host size we use the precise numbers from tb_tree_stats though.
*/
g_string_append_printf(buf, "gen code size %zu/%zu\n",
tcg_code_size(), tcg_code_capacity());
g_string_append_printf(buf, "TB count %zu\n", nb_tbs);
g_string_append_printf(buf, "TB avg target size %zu max=%zu bytes\n",
nb_tbs ? tst.target_size / nb_tbs : 0,
tst.max_target_size);
g_string_append_printf(buf, "TB avg host size %zu bytes "
"(expansion ratio: %0.1f)\n",
nb_tbs ? tst.host_size / nb_tbs : 0,
tst.target_size ?
(double)tst.host_size / tst.target_size : 0);
g_string_append_printf(buf, "cross page TB count %zu (%zu%%)\n",
tst.cross_page,
nb_tbs ? (tst.cross_page * 100) / nb_tbs : 0);
g_string_append_printf(buf, "direct jump count %zu (%zu%%) "
"(2 jumps=%zu %zu%%)\n",
tst.direct_jmp_count,
nb_tbs ? (tst.direct_jmp_count * 100) / nb_tbs : 0,
tst.direct_jmp2_count,
nb_tbs ? (tst.direct_jmp2_count * 100) / nb_tbs : 0);
qht_statistics_init(&tb_ctx.htable, &hst);
print_qht_statistics(hst, buf);
qht_statistics_destroy(&hst);
g_string_append_printf(buf, "\nStatistics:\n");
g_string_append_printf(buf, "TB flush count %u\n",
qatomic_read(&tb_ctx.tb_flush_count));
g_string_append_printf(buf, "TB invalidate count %u\n",
qatomic_read(&tb_ctx.tb_phys_invalidate_count));
tlb_flush_counts(&flush_full, &flush_part, &flush_elide);
g_string_append_printf(buf, "TLB full flushes %zu\n", flush_full);
g_string_append_printf(buf, "TLB partial flushes %zu\n", flush_part);
g_string_append_printf(buf, "TLB elided flushes %zu\n", flush_elide);
tcg_dump_info(buf);
}
#else /* CONFIG_USER_ONLY */
void cpu_interrupt(CPUState *cpu, int mask)
{
g_assert(qemu_mutex_iothread_locked());
cpu->interrupt_request |= mask;
qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1);
}
/*
* Walks guest process memory "regions" one by one
* and calls callback function 'fn' for each region.
*/
struct walk_memory_regions_data {
walk_memory_regions_fn fn;
void *priv;
target_ulong start;
int prot;
};
static int walk_memory_regions_end(struct walk_memory_regions_data *data,
target_ulong end, int new_prot)
{
if (data->start != -1u) {
int rc = data->fn(data->priv, data->start, end, data->prot);
if (rc != 0) {
return rc;
}
}
data->start = (new_prot ? end : -1u);
data->prot = new_prot;
return 0;
}
static int walk_memory_regions_1(struct walk_memory_regions_data *data,
target_ulong base, int level, void **lp)
{
target_ulong pa;
int i, rc;
if (*lp == NULL) {
return walk_memory_regions_end(data, base, 0);
}
if (level == 0) {
PageDesc *pd = *lp;
for (i = 0; i < V_L2_SIZE; ++i) {
int prot = pd[i].flags;
pa = base | (i << TARGET_PAGE_BITS);
if (prot != data->prot) {
rc = walk_memory_regions_end(data, pa, prot);
if (rc != 0) {
return rc;
}
}
}
} else {
void **pp = *lp;
for (i = 0; i < V_L2_SIZE; ++i) {
pa = base | ((target_ulong)i <<
(TARGET_PAGE_BITS + V_L2_BITS * level));
rc = walk_memory_regions_1(data, pa, level - 1, pp + i);
if (rc != 0) {
return rc;
}
}
}
return 0;
}
int walk_memory_regions(void *priv, walk_memory_regions_fn fn)
{
struct walk_memory_regions_data data;
uintptr_t i, l1_sz = v_l1_size;
data.fn = fn;
data.priv = priv;
data.start = -1u;
data.prot = 0;
for (i = 0; i < l1_sz; i++) {
target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS);
int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i);
if (rc != 0) {
return rc;
}
}
return walk_memory_regions_end(&data, 0, 0);
}
static int dump_region(void *priv, target_ulong start,
target_ulong end, unsigned long prot)
{
FILE *f = (FILE *)priv;
(void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx
" "TARGET_FMT_lx" %c%c%c\n",
start, end, end - start,
((prot & PAGE_READ) ? 'r' : '-'),
((prot & PAGE_WRITE) ? 'w' : '-'),
((prot & PAGE_EXEC) ? 'x' : '-'));
return 0;
}
/* dump memory mappings */
void page_dump(FILE *f)
{
const int length = sizeof(target_ulong) * 2;
(void) fprintf(f, "%-*s %-*s %-*s %s\n",
length, "start", length, "end", length, "size", "prot");
walk_memory_regions(f, dump_region);
}
int page_get_flags(target_ulong address)
{
PageDesc *p;
p = page_find(address >> TARGET_PAGE_BITS);
if (!p) {
return 0;
}
return p->flags;
}
/*
* Allow the target to decide if PAGE_TARGET_[12] may be reset.
* By default, they are not kept.
*/
#ifndef PAGE_TARGET_STICKY
#define PAGE_TARGET_STICKY 0
#endif
#define PAGE_STICKY (PAGE_ANON | PAGE_TARGET_STICKY)
/* Modify the flags of a page and invalidate the code if necessary.
The flag PAGE_WRITE_ORG is set automatically depending
on PAGE_WRITE. The mmap_lock should already be held. */
void page_set_flags(target_ulong start, target_ulong end, int flags)
{
target_ulong addr, len;
bool reset_target_data;
/* This function should never be called with addresses outside the
guest address space. If this assert fires, it probably indicates
a missing call to h2g_valid. */
assert(end - 1 <= GUEST_ADDR_MAX);
assert(start < end);
/* Only set PAGE_ANON with new mappings. */
assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
assert_memory_lock();
start = start & TARGET_PAGE_MASK;
end = TARGET_PAGE_ALIGN(end);
if (flags & PAGE_WRITE) {
flags |= PAGE_WRITE_ORG;
}
reset_target_data = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
flags &= ~PAGE_RESET;
for (addr = start, len = end - start;
len != 0;
len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
/* If the write protection bit is set, then we invalidate
the code inside. */
if (!(p->flags & PAGE_WRITE) &&
(flags & PAGE_WRITE) &&
p->first_tb) {
tb_invalidate_phys_page(addr, 0);
}
if (reset_target_data) {
g_free(p->target_data);
p->target_data = NULL;
p->flags = flags;
} else {
/* Using mprotect on a page does not change sticky bits. */
p->flags = (p->flags & PAGE_STICKY) | flags;
}
}
}
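/*
* Illustrative call (the flag combination is an assumption, not taken from
* this file): a user-mode mmap-style path creating a fresh anonymous
* read/write mapping might do
*
*     page_set_flags(start, start + len,
*                    PAGE_VALID | PAGE_READ | PAGE_WRITE |
*                    PAGE_RESET | PAGE_ANON);
*
* PAGE_RESET discards any per-page target_data and PAGE_WRITE_ORG is added
* automatically because PAGE_WRITE is set.
*/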
void *page_get_target_data(target_ulong address)
{
PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
return p ? p->target_data : NULL;
}
void *page_alloc_target_data(target_ulong address, size_t size)
{
PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
void *ret = NULL;
if (p->flags & PAGE_VALID) {
ret = p->target_data;
if (!ret) {
p->target_data = ret = g_malloc0(size);
}
}
return ret;
}
int page_check_range(target_ulong start, target_ulong len, int flags)
{
PageDesc *p;
target_ulong end;
target_ulong addr;
/* This function should never be called with addresses outside the
guest address space. If this assert fires, it probably indicates
a missing call to h2g_valid. */
if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) {
assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
}
if (len == 0) {
return 0;
}
if (start + len - 1 < start) {
/* We've wrapped around. */
return -1;
}
/* must do this before we lose bits in the next step */
end = TARGET_PAGE_ALIGN(start + len);
start = start & TARGET_PAGE_MASK;
for (addr = start, len = end - start;
len != 0;
len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
p = page_find(addr >> TARGET_PAGE_BITS);
if (!p) {
return -1;
}
if (!(p->flags & PAGE_VALID)) {
return -1;
}
if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) {
return -1;
}
if (flags & PAGE_WRITE) {
if (!(p->flags & PAGE_WRITE_ORG)) {
return -1;
}
/* unprotect the page if it was put read-only because it
contains translated code */
if (!(p->flags & PAGE_WRITE)) {
if (!page_unprotect(addr, 0)) {
return -1;
}
}
}
}
return 0;
}
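/*
* Usage sketch with a hypothetical caller: syscall emulation can validate
* a guest buffer before touching it, e.g.
*
*     if (page_check_range(guest_addr, size, PAGE_READ | PAGE_WRITE) != 0) {
*         return -TARGET_EFAULT;   // TARGET_EFAULT is assumed here
*     }
*
* Requesting PAGE_WRITE also transparently unprotects pages that are
* read-only only because they contain translated code.
*/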
void page_protect(tb_page_addr_t page_addr)
{
target_ulong addr;
PageDesc *p;
int prot;
p = page_find(page_addr >> TARGET_PAGE_BITS);
if (p && (p->flags & PAGE_WRITE)) {
/*
* Force the host page as non writable (writes will have a page fault +
* mprotect overhead).
*/
page_addr &= qemu_host_page_mask;
prot = 0;
for (addr = page_addr; addr < page_addr + qemu_host_page_size;
addr += TARGET_PAGE_SIZE) {
p = page_find(addr >> TARGET_PAGE_BITS);
if (!p) {
continue;
}
prot |= p->flags;
p->flags &= ~PAGE_WRITE;
}
mprotect(g2h_untagged(page_addr), qemu_host_page_size,
(prot & PAGE_BITS) & ~PAGE_WRITE);
if (DEBUG_TB_INVALIDATE_GATE) {
printf("protecting code page: 0x" TB_PAGE_ADDR_FMT "\n", page_addr);
}
}
}
/* called from signal handler: invalidate the code and unprotect the
* page. Return 0 if the fault was not handled, 1 if it was handled,
* and 2 if it was handled but the caller must cause the TB to be
* immediately exited. (We can only return 2 if the 'pc' argument is
* non-zero.)
*/
int page_unprotect(target_ulong address, uintptr_t pc)
{
unsigned int prot;
bool current_tb_invalidated;
PageDesc *p;
target_ulong host_start, host_end, addr;
/* Technically this isn't safe inside a signal handler. However we
know this only ever happens in a synchronous SEGV handler, so in
practice it seems to be ok. */
mmap_lock();
p = page_find(address >> TARGET_PAGE_BITS);
if (!p) {
mmap_unlock();
return 0;
}
/* if the page was really writable, then we change its
protection back to writable */
if (p->flags & PAGE_WRITE_ORG) {
current_tb_invalidated = false;
if (p->flags & PAGE_WRITE) {
/* If the page is actually marked WRITE then assume this is because
* this thread raced with another one which got here first and
* set the page to PAGE_WRITE and did the TB invalidate for us.
*/
#ifdef TARGET_HAS_PRECISE_SMC
TranslationBlock *current_tb = tcg_tb_lookup(pc);
if (current_tb) {
current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID;
}
#endif
} else {
host_start = address & qemu_host_page_mask;
host_end = host_start + qemu_host_page_size;
prot = 0;
for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) {
p = page_find(addr >> TARGET_PAGE_BITS);
p->flags |= PAGE_WRITE;
prot |= p->flags;
/* and since the content will be modified, we must invalidate
the corresponding translated code. */
current_tb_invalidated |= tb_invalidate_phys_page(addr, pc);
#ifdef CONFIG_USER_ONLY
if (DEBUG_TB_CHECK_GATE) {
tb_invalidate_check(addr);
}
#endif
}
mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
prot & PAGE_BITS);
}
mmap_unlock();
/* If current TB was invalidated return to main loop */
return current_tb_invalidated ? 2 : 1;
}
mmap_unlock();
return 0;
}
#endif /* CONFIG_USER_ONLY */
/* This is a wrapper for common code that can not use CONFIG_SOFTMMU */
void tcg_flush_softmmu_tlb(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
tlb_flush(cs);
#endif
}