
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

* target/i386: optimize string instructions
* target/i386: new Sierra Forest and Clearwater Forest models
* rust: type-safe vmstate implementation
* rust: use interior mutability for PL011
* rust: clean ups
* memtxattrs: remove usage of bitfields from MEMTXATTRS_UNSPECIFIED
* gitlab-ci: enable Rust backtraces

# -----BEGIN PGP SIGNATURE-----
#
# iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmeZ6VYUHHBib256aW5p
# QHJlZGhhdC5jb20ACgkQv/vSX3jHroMjbQgApuooMOp0z/8Ky4/ux8M8/vrlcNCH
# V1Pm6WzrjEzd9TIMLGr6npOyLOkWI31Aa4o/TuW09SeKE3dpCf/7LYA5VDEtkH79
# F57MgnSj56sMNgu+QZ/SiGvkKJXl+3091jIianrrI0dtX8hPonm6bt55woDvQt3z
# p94+4zzv5G0nc+ncITCDho8sn5itdZWVOjf9n6VCOumMjF4nRSoMkJKYIvjNht6n
# GtjMhYA70tzjkIi4bPyYkhFpMNlAqEDIp2TvPzp6klG5QoUErHIzdzoRTAtE4Dpb
# 7240r6jarQX41TBXGOFq0NrxES1cm5zO/6159D24qZGHGm2hG4nDx+t2jw==
# =ZKFy
# -----END PGP SIGNATURE-----
# gpg: Signature made Wed 29 Jan 2025 03:39:50 EST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (49 commits)
  gitlab-ci: include full Rust backtraces in test runs
  rust: qemu-api: add sub-subclass to the integration tests
  rust/zeroable: Implement Zeroable with const_zero macro
  rust: qdev: make reset take a shared reference
  rust: pl011: drop use of ControlFlow
  rust: pl011: pull device-specific code out of MemoryRegionOps callbacks
  rust: pl011: remove duplicate definitions
  rust: pl011: wrap registers with BqlRefCell
  rust: pl011: extract PL011Registers
  rust: pl011: pull interrupt updates out of read/write ops
  rust: pl011: extract CharBackend receive logic into a separate function
  rust: pl011: extract conversion to RegisterOffset
  rust: pl011: hide unnecessarily "pub" items from outside pl011::device
  rust: pl011: remove unnecessary "extern crate"
  rust: prefer NonNull::new to assertions
  rust: vmstate: make order of parameters consistent in vmstate_clock
  rust: vmstate: remove translation of C vmstate macros
  rust: pl011: switch vmstate to new-style macros
  rust: qemu_api: add vmstate_struct
  rust: vmstate: add public utility macros to implement VMState
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This commit is contained in:
Stefan Hajnoczi 2025-01-29 09:51:03 -05:00
commit 871af84dd5
18 changed files with 1423 additions and 855 deletions

View File

@@ -63,6 +63,7 @@
stage: test
image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
script:
- export RUST_BACKTRACE=1
- source scripts/ci/gitlab-ci-section
- section_start buildenv "Setting up to run tests"
- scripts/git-submodule.sh update roms/SLOF

View File

@@ -71,6 +71,16 @@ mixture of host CPU models between machines, if live migration
compatibility is required, use the newest CPU model that is compatible
across all desired hosts.
``ClearwaterForest``
Intel Xeon Processor (ClearwaterForest, 2025)
``SierraForest``, ``SierraForest-v2``
Intel Xeon Processor (SierraForest, 2024), SierraForest-v2 mitigates
the GDS and RFDS vulnerabilities with stepping 3.
``GraniteRapids``, ``GraniteRapids-v2``
Intel Xeon Processor (GraniteRapids, 2024)
``Cascadelake-Server``, ``Cascadelake-Server-noTSX``
Intel Xeon Processor (Cascade Lake, 2019), with "stepping" levels 6
or 7 only. (The Cascade Lake Xeon processor with *stepping 5 is
@@ -181,7 +191,7 @@ features are included if using "Host passthrough" or "Host model".
CVE-2018-12127, [MSBDS] CVE-2018-12126).
This is an MSR (Model-Specific Register) feature rather than a CPUID feature,
so it will not appear in the Linux ``/proc/cpuinfo`` in the host or
therefore it will not appear in the Linux ``/proc/cpuinfo`` in the host or
guest. Instead, the host kernel uses it to populate the MDS
vulnerability file in ``sysfs``.
@@ -189,10 +199,10 @@ features are included if using "Host passthrough" or "Host model".
affected} in the ``/sys/devices/system/cpu/vulnerabilities/mds`` file.
``taa-no``
Recommended to inform that the guest that the host is ``not``
Recommended to inform the guest that the host is ``not``
vulnerable to CVE-2019-11135, TSX Asynchronous Abort (TAA).
This too is an MSR feature, so it does not show up in the Linux
This is also an MSR feature, therefore it does not show up in the Linux
``/proc/cpuinfo`` in the host or guest.
It should only be enabled for VMs if the host reports ``Not affected``
@@ -214,7 +224,7 @@ features are included if using "Host passthrough" or "Host model".
By disabling TSX, KVM-based guests can avoid paying the price of
mitigating TSX-based attacks.
Note that ``tsx-ctrl`` too is an MSR feature, so it does not show
Note that ``tsx-ctrl`` is also an MSR feature, therefore it does not show
up in the Linux ``/proc/cpuinfo`` in the host or guest.
To validate that Intel TSX is indeed disabled for the guest, there are
@@ -223,6 +233,38 @@ features are included if using "Host passthrough" or "Host model".
``/sys/devices/system/cpu/vulnerabilities/tsx_async_abort`` file in
the guest should report ``Mitigation: TSX disabled``.
``bhi-no``
Recommended to inform the guest that the host is ``not``
vulnerable to CVE-2022-0001, Branch History Injection (BHI).
This is also an MSR feature, therefore it does not show up in the Linux
``/proc/cpuinfo`` in the host or guest.
It should only be enabled for VMs if the host reports
``BHI: Not affected`` in the
``/sys/devices/system/cpu/vulnerabilities/spectre_v2`` file.
``gds-no``
Recommended to inform the guest that the host is ``not``
vulnerable to CVE-2022-40982, Gather Data Sampling (GDS).
This is also an MSR feature, therefore it does not show up in the Linux
``/proc/cpuinfo`` in the host or guest.
It should only be enabled for VMs if the host reports ``Not affected``
in the ``/sys/devices/system/cpu/vulnerabilities/gather_data_sampling``
file.
``rfds-no``
Recommended to inform the guest that the host is ``not``
vulnerable to CVE-2023-28746, Register File Data Sampling (RFDS).
This is also an MSR feature, therefore it does not show up in the Linux
``/proc/cpuinfo`` in the host or guest.
It should only be enabled for VMs if the host reports ``Not affected``
in the ``/sys/devices/system/cpu/vulnerabilities/reg_file_data_sampling``
file.
Preferred CPU models for AMD x86 hosts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -23,12 +23,6 @@
* different semantics.
*/
typedef struct MemTxAttrs {
/* Bus masters which don't specify any attributes will get this
* (via the MEMTXATTRS_UNSPECIFIED constant), so that we can
* distinguish "all attributes deliberately clear" from
* "didn't specify" if necessary.
*/
unsigned int unspecified:1;
/*
* ARM/AMBA: TrustZone Secure access
* x86: System Management Mode access
@@ -57,14 +51,27 @@ typedef struct MemTxAttrs {
* PID (PCI PASID) support: Limited to 8 bits process identifier.
*/
unsigned int pid:8;
/*
* Bus masters which don't specify any attributes will get this
* (via the MEMTXATTRS_UNSPECIFIED constant), so that we can
* distinguish "all attributes deliberately clear" from
* "didn't specify" if necessary.
*/
bool unspecified;
uint8_t _reserved1;
uint16_t _reserved2;
} MemTxAttrs;
QEMU_BUILD_BUG_ON(sizeof(MemTxAttrs) > 8);
/* Bus masters which don't specify any attributes will get this,
* which has all attribute bits clear except the topmost one
* (so that we can distinguish "all attributes deliberately clear"
* from "didn't specify" if necessary).
*/
#define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs) { .unspecified = 1 })
#define MEMTXATTRS_UNSPECIFIED ((MemTxAttrs) { .unspecified = true })
/* New-style MMIO accessors can indicate that the transaction failed.
* A zero (MEMTX_OK) response means success; anything else is a failure

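The reworked struct above moves `unspecified` out of the C bitfields so that the `MEMTXATTRS_UNSPECIFIED` constant touches only plain fields, which bindgen can mirror directly; C bitfields are only exposed through opaque accessors, so a struct-literal constant would not be expressible from Rust. A minimal Rust-side sketch of the non-bitfield tail, assuming the field order shown in the diff (illustrative, not the generated bindings):

```rust
// Hypothetical mirror of the tail of MemTxAttrs after the change: plain
// bool/integer fields map one-to-one across FFI, unlike C bitfields.
#[repr(C)]
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
struct MemTxAttrsTail {
    unspecified: bool,
    _reserved1: u8,
    _reserved2: u16,
}

// Now expressible as an ordinary struct literal, matching the C side's
// ((MemTxAttrs) { .unspecified = true }).
const UNSPECIFIED: MemTxAttrsTail = MemTxAttrsTail {
    unspecified: true,
    _reserved1: 0,
    _reserved2: 0,
};

fn main() {
    // The C header bounds the whole struct with
    // QEMU_BUILD_BUG_ON(sizeof(MemTxAttrs) > 8); this tail alone is 4 bytes.
    assert_eq!(std::mem::size_of::<MemTxAttrsTail>(), 4);
    assert!(UNSPECIFIED.unspecified);
}
```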
View File

@@ -2,19 +2,27 @@
// Author(s): Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
// SPDX-License-Identifier: GPL-2.0-or-later
use core::ptr::{addr_of_mut, NonNull};
use core::ptr::{addr_of, addr_of_mut, NonNull};
use std::{
ffi::CStr,
os::raw::{c_int, c_uint, c_void},
os::raw::{c_int, c_void},
};
use qemu_api::{
bindings::{self, *},
c_str,
bindings::{
error_fatal, hwaddr, memory_region_init_io, qdev_init_clock_in, qdev_new,
qdev_prop_set_chr, qemu_chr_fe_accept_input, qemu_chr_fe_ioctl, qemu_chr_fe_set_handlers,
qemu_chr_fe_write_all, qemu_irq, sysbus_connect_irq, sysbus_mmio_map,
sysbus_realize_and_unref, CharBackend, Chardev, Clock, ClockEvent, MemoryRegion,
QEMUChrEvent, CHR_IOCTL_SERIAL_SET_BREAK,
},
c_str, impl_vmstate_forward,
irq::InterruptSource,
prelude::*,
qdev::DeviceImpl,
qdev::{DeviceImpl, DeviceState, Property},
qom::{ClassInitImpl, ObjectImpl, ParentField},
sysbus::{SysBusDevice, SysBusDeviceClass},
vmstate::VMStateDescription,
};
use crate::{
@@ -54,6 +62,7 @@ impl DeviceId {
#[repr(transparent)]
#[derive(Debug, Default)]
pub struct Fifo([registers::Data; PL011_FIFO_DEPTH as usize]);
impl_vmstate_forward!(Fifo);
impl Fifo {
const fn len(&self) -> u32 {
@@ -76,11 +85,8 @@ impl std::ops::Index<u32> for Fifo {
}
#[repr(C)]
#[derive(Debug, qemu_api_macros::Object, qemu_api_macros::offsets)]
/// PL011 Device Model in QEMU
pub struct PL011State {
pub parent_obj: ParentField<SysBusDevice>,
pub iomem: MemoryRegion,
#[derive(Debug, Default, qemu_api_macros::offsets)]
pub struct PL011Registers {
#[doc(alias = "fr")]
pub flags: registers::Flags,
#[doc(alias = "lcr")]
@@ -100,8 +106,17 @@ pub struct PL011State {
pub read_pos: u32,
pub read_count: u32,
pub read_trigger: u32,
}
#[repr(C)]
#[derive(qemu_api_macros::Object, qemu_api_macros::offsets)]
/// PL011 Device Model in QEMU
pub struct PL011State {
pub parent_obj: ParentField<SysBusDevice>,
pub iomem: MemoryRegion,
#[doc(alias = "chr")]
pub char_backend: CharBackend,
pub regs: BqlRefCell<PL011Registers>,
/// QEMU interrupts
///
/// ```text
@@ -123,6 +138,7 @@ pub struct PL011State {
qom_isa!(PL011State : SysBusDevice, DeviceState, Object);
#[repr(C)]
pub struct PL011Class {
parent_class: <SysBusDevice as ObjectType>::Class,
/// The byte string that identifies the device.
@@ -155,77 +171,17 @@ impl DeviceImpl for PL011State {
fn vmsd() -> Option<&'static VMStateDescription> {
Some(&device_class::VMSTATE_PL011)
}
const REALIZE: Option<fn(&mut Self)> = Some(Self::realize);
const RESET: Option<fn(&mut Self)> = Some(Self::reset);
const REALIZE: Option<fn(&Self)> = Some(Self::realize);
const RESET: Option<fn(&Self)> = Some(Self::reset);
}
impl PL011State {
/// Initializes a pre-allocated, uninitialized instance of `PL011State`.
///
/// # Safety
///
/// `self` must point to a correctly sized and aligned location for the
/// `PL011State` type. It must not be called more than once on the same
/// location/instance. All its fields are expected to hold uninitialized
/// values with the sole exception of `parent_obj`.
unsafe fn init(&mut self) {
const CLK_NAME: &CStr = c_str!("clk");
// SAFETY:
//
// self and self.iomem are guaranteed to be valid at this point since callers
// must make sure the `self` reference is valid.
unsafe {
memory_region_init_io(
addr_of_mut!(self.iomem),
addr_of_mut!(*self).cast::<Object>(),
&PL011_OPS,
addr_of_mut!(*self).cast::<c_void>(),
Self::TYPE_NAME.as_ptr(),
0x1000,
);
}
// SAFETY:
//
// self.clock is not initialized at this point; but since `NonNull<_>` is Copy,
// we can overwrite the undefined value without side effects. This is
// safe since all PL011State instances are created by QOM code which
// calls this function to initialize the fields; therefore no code is
// able to access an invalid self.clock value.
unsafe {
let dev: &mut DeviceState = self.upcast_mut();
self.clock = NonNull::new(qdev_init_clock_in(
dev,
CLK_NAME.as_ptr(),
None, /* pl011_clock_update */
addr_of_mut!(*self).cast::<c_void>(),
ClockEvent::ClockUpdate.0,
))
.unwrap();
}
}
fn post_init(&self) {
self.init_mmio(&self.iomem);
for irq in self.interrupts.iter() {
self.init_irq(irq);
}
}
pub fn read(&mut self, offset: hwaddr, _size: c_uint) -> std::ops::ControlFlow<u64, u64> {
impl PL011Registers {
pub(self) fn read(&mut self, offset: RegisterOffset) -> (bool, u32) {
use RegisterOffset::*;
let value = match RegisterOffset::try_from(offset) {
Err(v) if (0x3f8..0x400).contains(&(v >> 2)) => {
let device_id = self.get_class().device_id;
u32::from(device_id[(offset - 0xfe0) >> 2])
}
Err(_) => {
// qemu_log_mask(LOG_GUEST_ERROR, "pl011_read: Bad offset 0x%x\n", (int)offset);
0
}
Ok(DR) => {
let mut update = false;
let result = match offset {
DR => {
self.flags.set_receive_fifo_full(false);
let c = self.read_fifo[self.read_pos];
if self.read_count > 0 {
@@ -236,117 +192,109 @@ impl PL011State {
self.flags.set_receive_fifo_empty(true);
}
if self.read_count + 1 == self.read_trigger {
self.int_level &= !registers::INT_RX;
self.int_level &= !Interrupt::RX.0;
}
// Update error bits.
self.receive_status_error_clear.set_from_data(c);
self.update();
// Must call qemu_chr_fe_accept_input, so return Continue:
let c = u32::from(c);
return std::ops::ControlFlow::Continue(u64::from(c));
// Must call qemu_chr_fe_accept_input
update = true;
u32::from(c)
}
Ok(RSR) => u32::from(self.receive_status_error_clear),
Ok(FR) => u32::from(self.flags),
Ok(FBRD) => self.fbrd,
Ok(ILPR) => self.ilpr,
Ok(IBRD) => self.ibrd,
Ok(LCR_H) => u32::from(self.line_control),
Ok(CR) => u32::from(self.control),
Ok(FLS) => self.ifl,
Ok(IMSC) => self.int_enabled,
Ok(RIS) => self.int_level,
Ok(MIS) => self.int_level & self.int_enabled,
Ok(ICR) => {
RSR => u32::from(self.receive_status_error_clear),
FR => u32::from(self.flags),
FBRD => self.fbrd,
ILPR => self.ilpr,
IBRD => self.ibrd,
LCR_H => u32::from(self.line_control),
CR => u32::from(self.control),
FLS => self.ifl,
IMSC => self.int_enabled,
RIS => self.int_level,
MIS => self.int_level & self.int_enabled,
ICR => {
// "The UARTICR Register is the interrupt clear register and is write-only"
// Source: ARM DDI 0183G 3.3.13 Interrupt Clear Register, UARTICR
0
}
Ok(DMACR) => self.dmacr,
DMACR => self.dmacr,
};
std::ops::ControlFlow::Break(value.into())
(update, result)
}
pub fn write(&mut self, offset: hwaddr, value: u64) {
pub(self) fn write(
&mut self,
offset: RegisterOffset,
value: u32,
char_backend: *mut CharBackend,
) -> bool {
// eprintln!("write offset {offset} value {value}");
use RegisterOffset::*;
let value: u32 = value as u32;
match RegisterOffset::try_from(offset) {
Err(_bad_offset) => {
eprintln!("write bad offset {offset} value {value}");
match offset {
DR => {
// interrupts always checked
let _ = self.loopback_tx(value);
self.int_level |= Interrupt::TX.0;
return true;
}
Ok(DR) => {
// ??? Check if transmitter is enabled.
let ch: u8 = value as u8;
// XXX this blocks entire thread. Rewrite to use
// qemu_chr_fe_write and background I/O callbacks
// SAFETY: self.char_backend is a valid CharBackend instance after it's been
// initialized in realize().
unsafe {
qemu_chr_fe_write_all(addr_of_mut!(self.char_backend), &ch, 1);
}
self.loopback_tx(value);
self.int_level |= registers::INT_TX;
self.update();
RSR => {
self.receive_status_error_clear = 0.into();
}
Ok(RSR) => {
self.receive_status_error_clear.reset();
}
Ok(FR) => {
FR => {
// flag writes are ignored
}
Ok(ILPR) => {
ILPR => {
self.ilpr = value;
}
Ok(IBRD) => {
IBRD => {
self.ibrd = value;
}
Ok(FBRD) => {
FBRD => {
self.fbrd = value;
}
Ok(LCR_H) => {
LCR_H => {
let new_val: registers::LineControl = value.into();
// Reset the FIFO state on FIFO enable or disable
if self.line_control.fifos_enabled() != new_val.fifos_enabled() {
self.reset_rx_fifo();
self.reset_tx_fifo();
}
if self.line_control.send_break() ^ new_val.send_break() {
let update = (self.line_control.send_break() != new_val.send_break()) && {
let mut break_enable: c_int = new_val.send_break().into();
// SAFETY: self.char_backend is a valid CharBackend instance after it's been
// initialized in realize().
unsafe {
qemu_chr_fe_ioctl(
addr_of_mut!(self.char_backend),
char_backend,
CHR_IOCTL_SERIAL_SET_BREAK as i32,
addr_of_mut!(break_enable).cast::<c_void>(),
);
}
self.loopback_break(break_enable > 0);
}
self.loopback_break(break_enable > 0)
};
self.line_control = new_val;
self.set_read_trigger();
return update;
}
Ok(CR) => {
CR => {
// ??? Need to implement the enable bit.
self.control = value.into();
self.loopback_mdmctrl();
return self.loopback_mdmctrl();
}
Ok(FLS) => {
FLS => {
self.ifl = value;
self.set_read_trigger();
}
Ok(IMSC) => {
IMSC => {
self.int_enabled = value;
self.update();
return true;
}
Ok(RIS) => {}
Ok(MIS) => {}
Ok(ICR) => {
RIS => {}
MIS => {}
ICR => {
self.int_level &= !value;
self.update();
return true;
}
Ok(DMACR) => {
DMACR => {
self.dmacr = value;
if value & 3 > 0 {
// qemu_log_mask(LOG_UNIMP, "pl011: DMA not implemented\n");
@@ -354,14 +302,12 @@ impl PL011State {
}
}
}
false
}
#[inline]
fn loopback_tx(&mut self, value: u32) {
if !self.loopback_enabled() {
return;
}
#[must_use]
fn loopback_tx(&mut self, value: u32) -> bool {
// Caveat:
//
// In real hardware, TX loopback happens at the serial-bit level
@@ -379,12 +325,13 @@ impl PL011State {
// hardware flow-control is enabled.
//
// For simplicity, the above described is not emulated.
self.put_fifo(value);
self.loopback_enabled() && self.put_fifo(value)
}
fn loopback_mdmctrl(&mut self) {
#[must_use]
fn loopback_mdmctrl(&mut self) -> bool {
if !self.loopback_enabled() {
return;
return false;
}
/*
@@ -410,51 +357,32 @@ impl PL011State {
// Change interrupts based on updated FR
let mut il = self.int_level;
il &= !Interrupt::MS;
il &= !Interrupt::MS.0;
if self.flags.data_set_ready() {
il |= Interrupt::DSR as u32;
il |= Interrupt::DSR.0;
}
if self.flags.data_carrier_detect() {
il |= Interrupt::DCD as u32;
il |= Interrupt::DCD.0;
}
if self.flags.clear_to_send() {
il |= Interrupt::CTS as u32;
il |= Interrupt::CTS.0;
}
if self.flags.ring_indicator() {
il |= Interrupt::RI as u32;
il |= Interrupt::RI.0;
}
self.int_level = il;
self.update();
true
}
fn loopback_break(&mut self, enable: bool) {
if enable {
self.loopback_tx(registers::Data::BREAK.into());
}
fn loopback_break(&mut self, enable: bool) -> bool {
enable && self.loopback_tx(registers::Data::BREAK.into())
}
fn set_read_trigger(&mut self) {
self.read_trigger = 1;
}
pub fn realize(&mut self) {
// SAFETY: self.char_backend has the correct size and alignment for a
// CharBackend object, and its callbacks are of the correct types.
unsafe {
qemu_chr_fe_set_handlers(
addr_of_mut!(self.char_backend),
Some(pl011_can_receive),
Some(pl011_receive),
Some(pl011_event),
None,
addr_of_mut!(*self).cast::<c_void>(),
core::ptr::null_mut(),
true,
);
}
}
pub fn reset(&mut self) {
self.line_control.reset();
self.receive_status_error_clear.reset();
@@ -487,17 +415,6 @@ impl PL011State {
self.flags.set_transmit_fifo_empty(true);
}
pub fn can_receive(&self) -> bool {
// trace_pl011_can_receive(s->lcr, s->read_count, r);
self.read_count < self.fifo_depth()
}
pub fn event(&mut self, event: QEMUChrEvent) {
if event == bindings::QEMUChrEvent::CHR_EVENT_BREAK && !self.loopback_enabled() {
self.put_fifo(registers::Data::BREAK.into());
}
}
#[inline]
pub fn fifo_enabled(&self) -> bool {
self.line_control.fifos_enabled() == registers::Mode::FIFO
@@ -517,7 +434,8 @@ impl PL011State {
1
}
pub fn put_fifo(&mut self, value: c_uint) {
#[must_use]
pub fn put_fifo(&mut self, value: u32) -> bool {
let depth = self.fifo_depth();
assert!(depth > 0);
let slot = (self.read_pos + self.read_count) & (depth - 1);
@@ -529,19 +447,13 @@ impl PL011State {
}
if self.read_count == self.read_trigger {
self.int_level |= registers::INT_RX;
self.update();
self.int_level |= Interrupt::RX.0;
return true;
}
false
}
pub fn update(&self) {
let flags = self.int_level & self.int_enabled;
for (irq, i) in self.interrupts.iter().zip(IRQMASK) {
irq.set(flags & i != 0);
}
}
pub fn post_load(&mut self, _version_id: u32) -> Result<(), ()> {
pub fn post_load(&mut self) -> Result<(), ()> {
/* Sanity-check input state */
if self.read_pos >= self.read_fifo.len() || self.read_count > self.read_fifo.len() {
return Err(());
@@ -563,19 +475,188 @@ impl PL011State {
}
}
impl PL011State {
/// Initializes a pre-allocated, uninitialized instance of `PL011State`.
///
/// # Safety
///
/// `self` must point to a correctly sized and aligned location for the
/// `PL011State` type. It must not be called more than once on the same
/// location/instance. All its fields are expected to hold uninitialized
/// location/instance. All its fields are expected to hold unitialized
/// values with the sole exception of `parent_obj`.
unsafe fn init(&mut self) {
const CLK_NAME: &CStr = c_str!("clk");
// SAFETY:
//
// self and self.iomem are guaranteed to be valid at this point since callers
// must make sure the `self` reference is valid.
unsafe {
memory_region_init_io(
addr_of_mut!(self.iomem),
addr_of_mut!(*self).cast::<Object>(),
&PL011_OPS,
addr_of_mut!(*self).cast::<c_void>(),
Self::TYPE_NAME.as_ptr(),
0x1000,
);
}
self.regs = Default::default();
// SAFETY:
//
// self.clock is not initialized at this point; but since `NonNull<_>` is Copy,
// we can overwrite the undefined value without side effects. This is
// safe since all PL011State instances are created by QOM code which
// calls this function to initialize the fields; therefore no code is
// able to access an invalid self.clock value.
unsafe {
let dev: &mut DeviceState = self.upcast_mut();
self.clock = NonNull::new(qdev_init_clock_in(
dev,
CLK_NAME.as_ptr(),
None, /* pl011_clock_update */
addr_of_mut!(*self).cast::<c_void>(),
ClockEvent::ClockUpdate.0,
))
.unwrap();
}
}
fn post_init(&self) {
self.init_mmio(&self.iomem);
for irq in self.interrupts.iter() {
self.init_irq(irq);
}
}
pub fn read(&mut self, offset: hwaddr, _size: u32) -> u64 {
match RegisterOffset::try_from(offset) {
Err(v) if (0x3f8..0x400).contains(&(v >> 2)) => {
let device_id = self.get_class().device_id;
u64::from(device_id[(offset - 0xfe0) >> 2])
}
Err(_) => {
// qemu_log_mask(LOG_GUEST_ERROR, "pl011_read: Bad offset 0x%x\n", (int)offset);
0
}
Ok(field) => {
let (update_irq, result) = self.regs.borrow_mut().read(field);
if update_irq {
self.update();
unsafe {
qemu_chr_fe_accept_input(&mut self.char_backend);
}
}
result.into()
}
}
}
pub fn write(&mut self, offset: hwaddr, value: u64) {
let mut update_irq = false;
if let Ok(field) = RegisterOffset::try_from(offset) {
// qemu_chr_fe_write_all() calls into the can_receive
// callback, so handle writes before entering PL011Registers.
if field == RegisterOffset::DR {
// ??? Check if transmitter is enabled.
let ch: u8 = value as u8;
// SAFETY: char_backend is a valid CharBackend instance after it's been
// initialized in realize().
// XXX this blocks entire thread. Rewrite to use
// qemu_chr_fe_write and background I/O callbacks
unsafe {
qemu_chr_fe_write_all(&mut self.char_backend, &ch, 1);
}
}
update_irq = self
.regs
.borrow_mut()
.write(field, value as u32, &mut self.char_backend);
} else {
eprintln!("write bad offset {offset} value {value}");
}
if update_irq {
self.update();
}
}
pub fn can_receive(&self) -> bool {
// trace_pl011_can_receive(s->lcr, s->read_count, r);
let regs = self.regs.borrow();
regs.read_count < regs.fifo_depth()
}
pub fn receive(&self, ch: u32) {
let mut regs = self.regs.borrow_mut();
let update_irq = !regs.loopback_enabled() && regs.put_fifo(ch);
// Release the BqlRefCell before calling self.update()
drop(regs);
if update_irq {
self.update();
}
}
pub fn event(&self, event: QEMUChrEvent) {
let mut update_irq = false;
let mut regs = self.regs.borrow_mut();
if event == QEMUChrEvent::CHR_EVENT_BREAK && !regs.loopback_enabled() {
update_irq = regs.put_fifo(registers::Data::BREAK.into());
}
// Release the BqlRefCell before calling self.update()
drop(regs);
if update_irq {
self.update()
}
}
pub fn realize(&self) {
// SAFETY: self.char_backend has the correct size and alignment for a
// CharBackend object, and its callbacks are of the correct types.
unsafe {
qemu_chr_fe_set_handlers(
addr_of!(self.char_backend) as *mut CharBackend,
Some(pl011_can_receive),
Some(pl011_receive),
Some(pl011_event),
None,
addr_of!(*self).cast::<c_void>() as *mut c_void,
core::ptr::null_mut(),
true,
);
}
}
pub fn reset(&self) {
self.regs.borrow_mut().reset();
}
pub fn update(&self) {
let regs = self.regs.borrow();
let flags = regs.int_level & regs.int_enabled;
for (irq, i) in self.interrupts.iter().zip(IRQMASK) {
irq.set(flags & i != 0);
}
}
pub fn post_load(&self, _version_id: u32) -> Result<(), ()> {
self.regs.borrow_mut().post_load()
}
}
/// Which bits in the interrupt status matter for each outbound IRQ line ?
pub const IRQMASK: [u32; 6] = [
const IRQMASK: [u32; 6] = [
/* combined IRQ */
Interrupt::E
| Interrupt::MS
| Interrupt::RT as u32
| Interrupt::TX as u32
| Interrupt::RX as u32,
Interrupt::RX as u32,
Interrupt::TX as u32,
Interrupt::RT as u32,
Interrupt::MS,
Interrupt::E,
Interrupt::E.0 | Interrupt::MS.0 | Interrupt::RT.0 | Interrupt::TX.0 | Interrupt::RX.0,
Interrupt::RX.0,
Interrupt::TX.0,
Interrupt::RT.0,
Interrupt::MS.0,
Interrupt::E.0,
];
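The `Interrupt::RX.0` spelling in the rewritten IRQMASK comes from switching `Interrupt` from an enum to a `u32` newtype, so masks combine with ordinary bitwise operators instead of `as u32` casts. A small sketch of the pattern (bit values illustrative, not the exact PL011 definitions):

```rust
// Newtype over the raw interrupt bits; `.0` reaches the underlying u32.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Interrupt(u32);

impl Interrupt {
    const RX: Interrupt = Interrupt(1 << 4);
    const TX: Interrupt = Interrupt(1 << 5);
    // A multi-bit group mask, as Interrupt::MS is in the device code.
    const MS: Interrupt = Interrupt(0x0f);
}

fn main() {
    // Combining and clearing masks needs no casts, and works in const
    // contexts too (which an enum-with-`as` arrangement made awkward).
    let combined = Interrupt::RX.0 | Interrupt::TX.0 | Interrupt::MS.0;
    let mut int_level = combined;
    int_level &= !Interrupt::MS.0; // clear a group, as loopback_mdmctrl does
    assert_eq!(int_level, Interrupt::RX.0 | Interrupt::TX.0);
}
```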
/// # Safety
@@ -584,11 +665,8 @@ pub const IRQMASK: [u32; 6] = [
/// the same size as [`PL011State`]. We also expect the device is
/// readable/writeable from one thread at any time.
pub unsafe extern "C" fn pl011_can_receive(opaque: *mut c_void) -> c_int {
unsafe {
debug_assert!(!opaque.is_null());
let state = NonNull::new_unchecked(opaque.cast::<PL011State>());
state.as_ref().can_receive().into()
}
let state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe { state.as_ref().can_receive().into() }
}
/// # Safety
@@ -599,15 +677,11 @@ pub unsafe extern "C" fn pl011_can_receive(opaque: *mut c_void) -> c_int {
///
/// The buffer and size arguments must also be valid.
pub unsafe extern "C" fn pl011_receive(opaque: *mut c_void, buf: *const u8, size: c_int) {
let state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe {
debug_assert!(!opaque.is_null());
let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>());
if state.as_ref().loopback_enabled() {
return;
}
if size > 0 {
debug_assert!(!buf.is_null());
state.as_mut().put_fifo(c_uint::from(buf.read_volatile()))
state.as_ref().receive(u32::from(buf.read_volatile()));
}
}
}
@@ -618,11 +692,8 @@ pub unsafe extern "C" fn pl011_receive(opaque: *mut c_void, buf: *const u8, size
/// the same size as [`PL011State`]. We also expect the device is
/// readable/writeable from one thread at any time.
pub unsafe extern "C" fn pl011_event(opaque: *mut c_void, event: QEMUChrEvent) {
unsafe {
debug_assert!(!opaque.is_null());
let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>());
state.as_mut().event(event)
}
let state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe { state.as_ref().event(event) }
}
/// # Safety
@@ -647,7 +718,7 @@ pub unsafe extern "C" fn pl011_create(
}
#[repr(C)]
#[derive(Debug, qemu_api_macros::Object)]
#[derive(qemu_api_macros::Object)]
/// PL011 Luminary device model.
pub struct PL011Luminary {
parent_obj: ParentField<PL011State>,

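The interior-mutability discipline used throughout this file reduces to one rule: mutate register state through the cell, then drop the borrow before triggering side effects that may re-enter the device. A standalone sketch with `std::cell::RefCell` standing in for `BqlRefCell` (types and fields illustrative):

```rust
use std::cell::RefCell;

#[derive(Default)]
struct Regs {
    read_count: u32,
    read_trigger: u32,
    int_level: u32,
}

#[derive(Default)]
struct Device {
    regs: RefCell<Regs>,
}

impl Device {
    // Mirrors PL011State::receive: compute whether an IRQ update is needed
    // while holding the mutable borrow, release it, then act.
    fn receive(&self, _ch: u32) {
        let mut regs = self.regs.borrow_mut();
        regs.read_count += 1;
        let update_irq = regs.read_count == regs.read_trigger;
        drop(regs); // release the cell before calling update()
        if update_irq {
            self.update();
        }
    }

    fn update(&self) {
        // A fresh shared borrow is fine: the mutable borrow is already gone.
        // Forgetting the drop() above would panic here at runtime.
        let regs = self.regs.borrow();
        let _flags = regs.int_level;
    }
}

fn main() {
    let dev = Device::default();
    dev.regs.borrow_mut().read_trigger = 1;
    dev.receive(b'a'.into());
    assert_eq!(dev.regs.borrow().read_count, 1);
}
```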
View File

@@ -6,45 +6,64 @@ use core::ptr::NonNull;
use std::os::raw::{c_int, c_void};
use qemu_api::{
bindings::*, c_str, vmstate_clock, vmstate_fields, vmstate_subsections, vmstate_uint32,
vmstate_uint32_array, vmstate_unused, zeroable::Zeroable,
bindings::*, c_str, prelude::*, vmstate_clock, vmstate_fields, vmstate_of, vmstate_struct,
vmstate_subsections, vmstate_unused, zeroable::Zeroable,
};
use crate::device::{PL011State, PL011_FIFO_DEPTH};
use crate::device::{PL011Registers, PL011State};
#[allow(clippy::missing_const_for_fn)]
extern "C" fn pl011_clock_needed(opaque: *mut c_void) -> bool {
unsafe {
debug_assert!(!opaque.is_null());
let state = NonNull::new_unchecked(opaque.cast::<PL011State>());
state.as_ref().migrate_clock
}
let state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe { state.as_ref().migrate_clock }
}
/// Migration subsection for [`PL011State`] clock.
pub static VMSTATE_PL011_CLOCK: VMStateDescription = VMStateDescription {
static VMSTATE_PL011_CLOCK: VMStateDescription = VMStateDescription {
name: c_str!("pl011/clock").as_ptr(),
version_id: 1,
minimum_version_id: 1,
needed: Some(pl011_clock_needed),
fields: vmstate_fields! {
vmstate_clock!(clock, PL011State),
vmstate_clock!(PL011State, clock),
},
..Zeroable::ZERO
};
extern "C" fn pl011_post_load(opaque: *mut c_void, version_id: c_int) -> c_int {
unsafe {
debug_assert!(!opaque.is_null());
let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>());
let result = state.as_mut().post_load(version_id as u32);
if result.is_err() {
-1
} else {
0
}
let state = NonNull::new(opaque).unwrap().cast::<PL011State>();
let result = unsafe { state.as_ref().post_load(version_id as u32) };
if result.is_err() {
-1
} else {
0
}
}
static VMSTATE_PL011_REGS: VMStateDescription = VMStateDescription {
name: c_str!("pl011/regs").as_ptr(),
version_id: 2,
minimum_version_id: 2,
fields: vmstate_fields! {
vmstate_of!(PL011Registers, flags),
vmstate_of!(PL011Registers, line_control),
vmstate_of!(PL011Registers, receive_status_error_clear),
vmstate_of!(PL011Registers, control),
vmstate_of!(PL011Registers, dmacr),
vmstate_of!(PL011Registers, int_enabled),
vmstate_of!(PL011Registers, int_level),
vmstate_of!(PL011Registers, read_fifo),
vmstate_of!(PL011Registers, ilpr),
vmstate_of!(PL011Registers, ibrd),
vmstate_of!(PL011Registers, fbrd),
vmstate_of!(PL011Registers, ifl),
vmstate_of!(PL011Registers, read_pos),
vmstate_of!(PL011Registers, read_count),
vmstate_of!(PL011Registers, read_trigger),
},
..Zeroable::ZERO
};
pub static VMSTATE_PL011: VMStateDescription = VMStateDescription {
name: c_str!("pl011").as_ptr(),
version_id: 2,
@@ -52,21 +71,7 @@ pub static VMSTATE_PL011: VMStateDescription = VMStateDescription {
post_load: Some(pl011_post_load),
fields: vmstate_fields! {
vmstate_unused!(core::mem::size_of::<u32>()),
vmstate_uint32!(flags, PL011State),
vmstate_uint32!(line_control, PL011State),
vmstate_uint32!(receive_status_error_clear, PL011State),
vmstate_uint32!(control, PL011State),
vmstate_uint32!(dmacr, PL011State),
vmstate_uint32!(int_enabled, PL011State),
vmstate_uint32!(int_level, PL011State),
vmstate_uint32_array!(read_fifo, PL011State, PL011_FIFO_DEPTH),
vmstate_uint32!(ilpr, PL011State),
vmstate_uint32!(ibrd, PL011State),
vmstate_uint32!(fbrd, PL011State),
vmstate_uint32!(ifl, PL011State),
vmstate_uint32!(read_pos, PL011State),
vmstate_uint32!(read_count, PL011State),
vmstate_uint32!(read_trigger, PL011State),
vmstate_struct!(PL011State, regs, &VMSTATE_PL011_REGS, BqlRefCell<PL011Registers>),
},
subsections: vmstate_subsections! {
VMSTATE_PL011_CLOCK

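The `vmstate_of!`/`vmstate_struct!` macros above ultimately describe each field as a (name, offset, size) triple for the C migration core. A hedged sketch of that idea using `std::mem::offset_of!` (field names illustrative; the real macros also derive the size and `VMStateInfo` from the field's type rather than fixing it to `u32`):

```rust
use std::mem::{offset_of, size_of};

#[repr(C)]
struct PL011Registers {
    ibrd: u32,
    fbrd: u32,
    read_pos: u32,
}

// Toy stand-in for a vmstate_of!-style macro: expand a (struct, field) pair
// into the name/offset/size triple that a VMStateField would carry.
macro_rules! vmstate_of {
    ($t:ty, $f:ident) => {
        (stringify!($f), offset_of!($t, $f), size_of::<u32>())
    };
}

fn main() {
    let fields = [
        vmstate_of!(PL011Registers, ibrd),
        vmstate_of!(PL011Registers, fbrd),
        vmstate_of!(PL011Registers, read_pos),
    ];
    // With #[repr(C)] the u32 fields sit at offsets 0, 4, 8.
    assert_eq!(fields[1].1, 4);
    assert_eq!(fields[2].1, 8);
    for (name, offset, size) in fields {
        println!("{name}: offset {offset}, size {size}");
    }
}
```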
View File

@@ -25,15 +25,13 @@
#![allow(clippy::upper_case_acronyms)]
#![allow(clippy::result_unit_err)]
extern crate bilge;
extern crate bilge_impl;
extern crate qemu_api;
use qemu_api::c_str;
pub mod device;
pub mod device_class;
pub mod memory_ops;
mod device;
mod device_class;
mod memory_ops;
pub use device::pl011_create;
pub const TYPE_PL011: &::std::ffi::CStr = c_str!("pl011");
pub const TYPE_PL011_LUMINARY: &::std::ffi::CStr = c_str!("pl011_luminary");
@@ -45,8 +43,8 @@ pub const TYPE_PL011_LUMINARY: &::std::ffi::CStr = c_str!("pl011_luminary");
#[doc(alias = "offset")]
#[allow(non_camel_case_types)]
#[repr(u64)]
#[derive(Debug, qemu_api_macros::TryInto)]
pub enum RegisterOffset {
#[derive(Debug, Eq, PartialEq, qemu_api_macros::TryInto)]
enum RegisterOffset {
/// Data Register
///
/// A write to this register initiates the actual data transmission
@@ -102,10 +100,11 @@ pub enum RegisterOffset {
//Reserved = 0x04C,
}
pub mod registers {
mod registers {
//! Device registers exposed as typed structs which are backed by arbitrary
//! integer bitmaps. [`Data`], [`Control`], [`LineControl`], etc.
use bilge::prelude::*;
use qemu_api::impl_vmstate_bitsized;
/// Receive Status Register / Data Register common error bits
///
@@ -172,6 +171,7 @@ pub mod registers {
pub errors: Errors,
_reserved: u16,
}
impl_vmstate_bitsized!(Data);
impl Data {
// bilge is not very const-friendly, unfortunately
@@ -208,6 +208,7 @@ pub mod registers {
pub errors: Errors,
_reserved_unpredictable: u24,
}
impl_vmstate_bitsized!(ReceiveStatusErrorClear);
impl ReceiveStatusErrorClear {
pub fn set_from_data(&mut self, data: Data) {
@@ -280,6 +281,7 @@ pub mod registers {
pub ring_indicator: bool,
_reserved_zero_no_modify: u23,
}
impl_vmstate_bitsized!(Flags);
impl Flags {
pub fn reset(&mut self) {
@@ -354,6 +356,7 @@ pub mod registers {
/// 31:8 - Reserved, do not modify, read as zero.
_reserved_zero_no_modify: u24,
}
impl_vmstate_bitsized!(LineControl);
impl LineControl {
pub fn reset(&mut self) {
@@ -498,6 +501,7 @@ pub mod registers {
/// 31:16 - Reserved, do not modify, read as zero.
_reserved_zero_no_modify2: u16,
}
impl_vmstate_bitsized!(Control);
impl Control {
pub fn reset(&mut self) {
@@ -516,38 +520,23 @@ pub mod registers {
}
/// Interrupt status bits in UARTRIS, UARTMIS, UARTIMSC
pub const INT_OE: u32 = 1 << 10;
pub const INT_BE: u32 = 1 << 9;
pub const INT_PE: u32 = 1 << 8;
pub const INT_FE: u32 = 1 << 7;
pub const INT_RT: u32 = 1 << 6;
pub const INT_TX: u32 = 1 << 5;
pub const INT_RX: u32 = 1 << 4;
pub const INT_DSR: u32 = 1 << 3;
pub const INT_DCD: u32 = 1 << 2;
pub const INT_CTS: u32 = 1 << 1;
pub const INT_RI: u32 = 1 << 0;
pub const INT_E: u32 = INT_OE | INT_BE | INT_PE | INT_FE;
pub const INT_MS: u32 = INT_RI | INT_DSR | INT_DCD | INT_CTS;
#[repr(u32)]
pub enum Interrupt {
OE = 1 << 10,
BE = 1 << 9,
PE = 1 << 8,
FE = 1 << 7,
RT = 1 << 6,
TX = 1 << 5,
RX = 1 << 4,
DSR = 1 << 3,
DCD = 1 << 2,
CTS = 1 << 1,
RI = 1 << 0,
}
pub struct Interrupt(pub u32);
impl Interrupt {
pub const E: u32 = INT_OE | INT_BE | INT_PE | INT_FE;
pub const MS: u32 = INT_RI | INT_DSR | INT_DCD | INT_CTS;
pub const OE: Self = Self(1 << 10);
pub const BE: Self = Self(1 << 9);
pub const PE: Self = Self(1 << 8);
pub const FE: Self = Self(1 << 7);
pub const RT: Self = Self(1 << 6);
pub const TX: Self = Self(1 << 5);
pub const RX: Self = Self(1 << 4);
pub const DSR: Self = Self(1 << 3);
pub const DCD: Self = Self(1 << 2);
pub const CTS: Self = Self(1 << 1);
pub const RI: Self = Self(1 << 0);
pub const E: Self = Self(Self::OE.0 | Self::BE.0 | Self::PE.0 | Self::FE.0);
pub const MS: Self = Self(Self::RI.0 | Self::DSR.0 | Self::DCD.0 | Self::CTS.0);
}
}
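The move from loose `INT_*` constants to associated consts on a newtype can be exercised standalone. In this sketch the bit positions and the `E` mask come from the diff above; the derives and the `main` harness are added only to make the example testable, and are not claimed to match the PL011 device code:

```rust
// Standalone sketch of the Interrupt newtype introduced above: associated
// consts replace the loose INT_* constants, so the masks and their unions
// live in one namespace. Bit positions are taken from the diff; the derives
// are an assumption for this example.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Interrupt(pub u32);

impl Interrupt {
    pub const OE: Self = Self(1 << 10);
    pub const BE: Self = Self(1 << 9);
    pub const PE: Self = Self(1 << 8);
    pub const FE: Self = Self(1 << 7);
    // Error summary mask: the union of the four error interrupts.
    pub const E: Self = Self(Self::OE.0 | Self::BE.0 | Self::PE.0 | Self::FE.0);
}

fn main() {
    assert_eq!(Interrupt::E.0, 0x780);
    println!("E = {:#x}", Interrupt::E.0);
}
```

Compared with a `#[repr(u32)]` enum, the newtype allows composite masks such as `E` to be values of the same type as the individual interrupt bits.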


@@ -24,28 +24,11 @@ pub static PL011_OPS: MemoryRegionOps = MemoryRegionOps {
};
unsafe extern "C" fn pl011_read(opaque: *mut c_void, addr: hwaddr, size: c_uint) -> u64 {
assert!(!opaque.is_null());
let mut state = unsafe { NonNull::new_unchecked(opaque.cast::<PL011State>()) };
let val = unsafe { state.as_mut().read(addr, size) };
match val {
std::ops::ControlFlow::Break(val) => val,
std::ops::ControlFlow::Continue(val) => {
// SAFETY: self.char_backend is a valid CharBackend instance after it's been
// initialized in realize().
let cb_ptr = unsafe { core::ptr::addr_of_mut!(state.as_mut().char_backend) };
unsafe {
qemu_chr_fe_accept_input(cb_ptr);
}
val
}
}
let mut state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe { state.as_mut() }.read(addr, size)
}
unsafe extern "C" fn pl011_write(opaque: *mut c_void, addr: hwaddr, data: u64, _size: c_uint) {
unsafe {
assert!(!opaque.is_null());
let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>());
state.as_mut().write(addr, data)
}
let mut state = NonNull::new(opaque).unwrap().cast::<PL011State>();
unsafe { state.as_mut() }.write(addr, data);
}


@@ -18,3 +18,5 @@ pub use crate::qom::ObjectType;
pub use crate::qom_isa;
pub use crate::sysbus::SysBusDeviceMethods;
pub use crate::vmstate::VMState;


@@ -4,7 +4,7 @@
//! Bindings to create devices and access device functionality from Rust.
use std::ffi::CStr;
use std::{ffi::CStr, ptr::NonNull};
pub use bindings::{DeviceClass, DeviceState, Property};
@@ -23,14 +23,14 @@ pub trait DeviceImpl {
///
/// If not `None`, the parent class's `realize` method is overridden
/// with the function pointed to by `REALIZE`.
const REALIZE: Option<fn(&mut Self)> = None;
const REALIZE: Option<fn(&Self)> = None;
/// If not `None`, the parent class's `reset` method is overridden
/// with the function pointed to by `RESET`.
///
/// Rust does not yet support the three-phase reset protocol; this is
/// usually okay for leaf classes.
const RESET: Option<fn(&mut Self)> = None;
const RESET: Option<fn(&Self)> = None;
/// An array providing the properties that the user can set on the
/// device. Not a `const` because referencing statics in constants
@@ -55,9 +55,8 @@ pub trait DeviceImpl {
/// can be downcasted to type `T`. We also expect the device is
/// readable/writeable from one thread at any time.
unsafe extern "C" fn rust_realize_fn<T: DeviceImpl>(dev: *mut DeviceState, _errp: *mut *mut Error) {
assert!(!dev.is_null());
let state = dev.cast::<T>();
T::REALIZE.unwrap()(unsafe { &mut *state });
let state = NonNull::new(dev).unwrap().cast::<T>();
T::REALIZE.unwrap()(unsafe { state.as_ref() });
}
/// # Safety
@@ -66,9 +65,8 @@ unsafe extern "C" fn rust_realize_fn<T: DeviceImpl>(dev: *mut DeviceState, _errp
/// can be downcasted to type `T`. We also expect the device is
/// readable/writeable from one thread at any time.
unsafe extern "C" fn rust_reset_fn<T: DeviceImpl>(dev: *mut DeviceState) {
assert!(!dev.is_null());
let state = dev.cast::<T>();
T::RESET.unwrap()(unsafe { &mut *state });
let mut state = NonNull::new(dev).unwrap().cast::<T>();
T::RESET.unwrap()(unsafe { state.as_mut() });
}
impl<T> ClassInitImpl<DeviceClass> for T


@@ -58,6 +58,7 @@ use std::{
fmt,
ops::{Deref, DerefMut},
os::raw::c_void,
ptr::NonNull,
};
pub use bindings::{Object, ObjectClass};
@@ -153,27 +154,34 @@ impl<T: fmt::Display + ObjectType> fmt::Display for ParentField<T> {
}
unsafe extern "C" fn rust_instance_init<T: ObjectImpl>(obj: *mut Object) {
let mut state = NonNull::new(obj).unwrap().cast::<T>();
// SAFETY: obj is an instance of T, since rust_instance_init<T>
// is called from QOM core as the instance_init function
// for class T
unsafe { T::INSTANCE_INIT.unwrap()(&mut *obj.cast::<T>()) }
unsafe {
T::INSTANCE_INIT.unwrap()(state.as_mut());
}
}
unsafe extern "C" fn rust_instance_post_init<T: ObjectImpl>(obj: *mut Object) {
let state = NonNull::new(obj).unwrap().cast::<T>();
// SAFETY: obj is an instance of T, since rust_instance_post_init<T>
// is called from QOM core as the instance_post_init function
// for class T
T::INSTANCE_POST_INIT.unwrap()(unsafe { &*obj.cast::<T>() })
T::INSTANCE_POST_INIT.unwrap()(unsafe { state.as_ref() });
}
unsafe extern "C" fn rust_class_init<T: ObjectType + ClassInitImpl<T::Class>>(
klass: *mut ObjectClass,
_data: *mut c_void,
) {
let mut klass = NonNull::new(klass)
.unwrap()
.cast::<<T as ObjectType>::Class>();
// SAFETY: klass is a T::Class, since rust_class_init<T>
// is called from QOM core as the class_init function
// for class T
T::class_init(unsafe { &mut *klass.cast::<T::Class>() })
T::class_init(unsafe { klass.as_mut() })
}
unsafe extern "C" fn drop_object<T: ObjectImpl>(obj: *mut Object) {
@@ -581,11 +589,8 @@ pub trait ClassInitImpl<T> {
/// can be downcasted to type `T`. We also expect the device is
/// readable/writeable from one thread at any time.
unsafe extern "C" fn rust_unparent_fn<T: ObjectImpl>(dev: *mut Object) {
unsafe {
assert!(!dev.is_null());
let state = core::ptr::NonNull::new_unchecked(dev.cast::<T>());
T::UNPARENT.unwrap()(state.as_ref());
}
let state = NonNull::new(dev).unwrap().cast::<T>();
T::UNPARENT.unwrap()(unsafe { state.as_ref() });
}
impl<T> ClassInitImpl<ObjectClass> for T


@@ -4,277 +4,477 @@
//! Helper macros to declare migration state for device models.
//!
//! Some macros are direct equivalents to the C macros declared in
//! `include/migration/vmstate.h` while
//! [`vmstate_subsections`](crate::vmstate_subsections) and
//! [`vmstate_fields`](crate::vmstate_fields) are meant to be used when
//! declaring a device model state struct.
//! This module includes four families of macros:
//!
//! * [`vmstate_unused!`](crate::vmstate_unused) and
//! [`vmstate_of!`](crate::vmstate_of), which are used to express the
//! migration format for a struct. This is based on the [`VMState`] trait,
//! which is defined by all migrateable types.
//!
//! * [`impl_vmstate_forward`](crate::impl_vmstate_forward) and
//! [`impl_vmstate_bitsized`](crate::impl_vmstate_bitsized), which help with
//! the definition of the [`VMState`] trait (respectively for transparent
//! structs and for `bilge`-defined types)
//!
//! * helper macros to declare a device model state struct, in particular
//! [`vmstate_subsections`](crate::vmstate_subsections) and
//! [`vmstate_fields`](crate::vmstate_fields).
//!
//! * direct equivalents to the C macros declared in
//! `include/migration/vmstate.h`. These are not type-safe and only provide
//! functionality that is missing from `vmstate_of!`.
pub use crate::bindings::VMStateDescription;
use core::{marker::PhantomData, mem, ptr::NonNull};
#[doc(alias = "VMSTATE_UNUSED_BUFFER")]
pub use crate::bindings::{VMStateDescription, VMStateField};
use crate::{
bindings::{self, VMStateFlags},
zeroable::Zeroable,
};
/// This macro is used to call a function with a generic argument bound
/// to the type of a field. The function must take a
/// [`PhantomData`]`<T>` argument; `T` is the type of
/// field `$field` in the `$typ` type.
///
/// # Examples
///
/// ```
/// # use qemu_api::call_func_with_field;
/// # use core::marker::PhantomData;
/// const fn size_of_field<T>(_: PhantomData<T>) -> usize {
/// std::mem::size_of::<T>()
/// }
///
/// struct Foo {
/// x: u16,
/// };
/// // calls size_of_field::<u16>()
/// assert_eq!(call_func_with_field!(size_of_field, Foo, x), 2);
/// ```
#[macro_export]
macro_rules! vmstate_unused_buffer {
($field_exists_fn:expr, $version_id:expr, $size:expr) => {{
$crate::bindings::VMStateField {
name: c_str!("unused").as_ptr(),
err_hint: ::core::ptr::null(),
offset: 0,
size: $size,
start: 0,
num: 0,
num_offset: 0,
size_offset: 0,
info: unsafe { ::core::ptr::addr_of!($crate::bindings::vmstate_info_unused_buffer) },
flags: VMStateFlags::VMS_BUFFER,
vmsd: ::core::ptr::null(),
version_id: $version_id,
struct_version_id: 0,
field_exists: $field_exists_fn,
}
}};
macro_rules! call_func_with_field {
// Based on the answer by user steffahn (Frank Steffahn) at
// https://users.rust-lang.org/t/inferring-type-of-field/122857
// and used under MIT license
($func:expr, $typ:ty, $($field:tt).+) => {
$func(loop {
#![allow(unreachable_code)]
const fn phantom__<T>(_: &T) -> ::core::marker::PhantomData<T> { ::core::marker::PhantomData }
// Unreachable code is exempt from checks on uninitialized values.
// Use that trick to infer the type of this PhantomData.
break ::core::marker::PhantomData;
break phantom__(&{ let value__: $typ; value__.$($field).+ });
})
};
}
#[doc(alias = "VMSTATE_UNUSED_V")]
/// Workaround for lack of `const_refs_static`: references to global variables
/// can be included in a `static`, but not in a `const`; unfortunately, this
/// is exactly what would go in the `VMStateField`'s `info` member.
///
/// This enum contains the contents of the `VMStateField`'s `info` member,
/// but as an `enum` instead of a pointer.
#[allow(non_camel_case_types)]
pub enum VMStateFieldType {
null,
vmstate_info_bool,
vmstate_info_int8,
vmstate_info_int16,
vmstate_info_int32,
vmstate_info_int64,
vmstate_info_uint8,
vmstate_info_uint16,
vmstate_info_uint32,
vmstate_info_uint64,
vmstate_info_timer,
}
/// Workaround for lack of `const_refs_static`. Converts a `VMStateFieldType`
/// to a `*const VMStateInfo`, for inclusion in a `VMStateField`.
#[macro_export]
macro_rules! vmstate_unused_v {
($version_id:expr, $size:expr) => {{
$crate::vmstate_unused_buffer!(None, $version_id, $size)
}};
macro_rules! info_enum_to_ref {
($e:expr) => {
unsafe {
match $e {
$crate::vmstate::VMStateFieldType::null => ::core::ptr::null(),
$crate::vmstate::VMStateFieldType::vmstate_info_bool => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_bool)
}
$crate::vmstate::VMStateFieldType::vmstate_info_int8 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_int8)
}
$crate::vmstate::VMStateFieldType::vmstate_info_int16 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_int16)
}
$crate::vmstate::VMStateFieldType::vmstate_info_int32 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_int32)
}
$crate::vmstate::VMStateFieldType::vmstate_info_int64 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_int64)
}
$crate::vmstate::VMStateFieldType::vmstate_info_uint8 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint8)
}
$crate::vmstate::VMStateFieldType::vmstate_info_uint16 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint16)
}
$crate::vmstate::VMStateFieldType::vmstate_info_uint32 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint32)
}
$crate::vmstate::VMStateFieldType::vmstate_info_uint64 => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint64)
}
$crate::vmstate::VMStateFieldType::vmstate_info_timer => {
::core::ptr::addr_of!($crate::bindings::vmstate_info_timer)
}
}
}
};
}
/// A trait for types that can be included in a device's migration stream. It
/// provides the base contents of a `VMStateField` (minus the name and offset).
///
/// # Safety
///
/// The contents of this trait go straight into structs that are parsed by C
/// code and used to introspect into other structs. Generally, you don't need
/// to implement it except via macros that do it for you, such as
/// `impl_vmstate_bitsized!`.
pub unsafe trait VMState {
/// The `info` member of a `VMStateField` is a pointer and as such cannot
/// yet be included in the [`BASE`](VMState::BASE) associated constant;
/// this is only allowed by Rust 1.83.0 and newer. For now, include the
/// member as an enum which is stored in a separate constant.
const SCALAR_TYPE: VMStateFieldType = VMStateFieldType::null;
/// The base contents of a `VMStateField` (minus the name and offset) for
/// the type that is implementing the trait.
const BASE: VMStateField;
/// A flag that is added to another field's `VMStateField` to specify the
/// length's type in a variable-sized array. If this is not a supported
/// type for the length (i.e. if it is not `u8`, `u16`, `u32`), using it
/// in a call to [`vmstate_of!`](crate::vmstate_of) will cause a
/// compile-time error.
const VARRAY_FLAG: VMStateFlags = {
panic!("invalid type for variable-sized array");
};
}
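As a rough, self-contained illustration of what such a trait buys: an associated const describes how a type is marshalled, and blanket impls derive descriptors for compound types from their element type (mirroring the `[T; N]` impl of `VMState` later in this file). The names `Migrate` and `FieldDesc` are invented for this sketch and are not qemu_api items:

```rust
// Illustrative stand-in for the VMState pattern: an unsafe trait whose
// associated const describes a field's wire format, plus a blanket impl
// that derives an array descriptor from the element type. `Migrate` and
// `FieldDesc` are invented names, not qemu_api types.
#[derive(Clone, Copy, Debug, PartialEq)]
struct FieldDesc {
    size: usize, // size of one element
    num: usize,  // element count; 0 means "not an array"
}

unsafe trait Migrate {
    const BASE: FieldDesc;
}

unsafe impl Migrate for u32 {
    const BASE: FieldDesc = FieldDesc { size: 4, num: 0 };
}

// Arrays forward to the element type and record the length,
// entirely at compile time.
unsafe impl<T: Migrate, const N: usize> Migrate for [T; N] {
    const BASE: FieldDesc = FieldDesc { size: T::BASE.size, num: N };
}

fn main() {
    assert_eq!(<[u32; 16] as Migrate>::BASE, FieldDesc { size: 4, num: 16 });
    println!("ok");
}
```

The trait is `unsafe` for the same reason as `VMState`: an incorrect descriptor would be consumed by C code that trusts it.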
/// Internal utility function to retrieve a type's `VMStateFieldType`;
/// used by [`vmstate_of!`](crate::vmstate_of).
pub const fn vmstate_scalar_type<T: VMState>(_: PhantomData<T>) -> VMStateFieldType {
T::SCALAR_TYPE
}
/// Internal utility function to retrieve a type's `VMStateField`;
/// used by [`vmstate_of!`](crate::vmstate_of).
pub const fn vmstate_base<T: VMState>(_: PhantomData<T>) -> VMStateField {
T::BASE
}
/// Internal utility function to retrieve a type's `VMStateFlags` when it
/// is used as the element count of a `VMSTATE_VARRAY`; used by
/// [`vmstate_of!`](crate::vmstate_of).
pub const fn vmstate_varray_flag<T: VMState>(_: PhantomData<T>) -> VMStateFlags {
T::VARRAY_FLAG
}
/// Return the `VMStateField` for a field of a struct. The field must be
/// visible in the current scope.
///
/// Only a limited set of types is supported out of the box:
/// * scalar types (integer and `bool`)
/// * the C struct `QEMUTimer`
/// * a transparent wrapper for any of the above (`Cell`, `UnsafeCell`,
/// [`BqlCell`](crate::cell::BqlCell), [`BqlRefCell`](crate::cell::BqlRefCell))
/// * a raw pointer to any of the above
/// * a `NonNull` pointer or a `Box` for any of the above
/// * an array of any of the above
///
/// In order to support other types, the trait `VMState` must be implemented
/// for them. The macros
/// [`impl_vmstate_bitsized!`](crate::impl_vmstate_bitsized)
/// and [`impl_vmstate_forward!`](crate::impl_vmstate_forward) help with this.
#[macro_export]
macro_rules! vmstate_of {
($struct_name:ty, $field_name:ident $([0 .. $num:ident $(* $factor:expr)?])? $(,)?) => {
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), "\0")
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
offset: $crate::offset_of!($struct_name, $field_name),
$(.num_offset: $crate::offset_of!($struct_name, $num),)?
// The calls to `call_func_with_field!` are the magic that
// computes most of the VMStateField from the type of the field.
info: $crate::info_enum_to_ref!($crate::call_func_with_field!(
$crate::vmstate::vmstate_scalar_type,
$struct_name,
$field_name
)),
..$crate::call_func_with_field!(
$crate::vmstate::vmstate_base,
$struct_name,
$field_name
)$(.with_varray_flag($crate::call_func_with_field!(
$crate::vmstate::vmstate_varray_flag,
$struct_name,
$num))
$(.with_varray_multiply($factor))?)?
}
};
}
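A much-simplified, hypothetical analogue of the macro may help: derive a field descriptor (NUL-terminated name, offset, size) from a struct type and a field name. `desc_of!`, `Desc`, and `size_of_field` are invented for this sketch; the real `vmstate_of!` additionally consults the `VMState` trait through `call_func_with_field!`:

```rust
// Hypothetical, simplified analogue of vmstate_of!: a declarative macro
// deriving a descriptor from a struct type and field name. All names here
// are invented for illustration.
use std::mem::offset_of;

#[derive(Debug, PartialEq)]
struct Desc {
    name: &'static str,
    offset: usize,
    size: usize,
}

// Infers the field's type from a fn pointer, much like
// call_func_with_field! infers it from the field expression.
fn size_of_field<S, T>(_: fn(&S) -> &T) -> usize {
    std::mem::size_of::<T>()
}

macro_rules! desc_of {
    ($struct_name:ty, $field_name:ident) => {
        Desc {
            name: concat!(stringify!($field_name), "\0"),
            offset: offset_of!($struct_name, $field_name),
            size: size_of_field(|s: &$struct_name| &s.$field_name),
        }
    };
}

struct Foo {
    _a: u32,
    b: u16,
}

fn main() {
    let d = desc_of!(Foo, b);
    assert_eq!(d.name, "b\0");
    assert_eq!(d.size, 2);
    assert_eq!(d.offset, offset_of!(Foo, b));
    println!("ok");
}
```

The key idea carried over from the real macro is that the caller only names the struct and the field; everything else is computed from the field's type.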
impl VMStateFlags {
const VMS_VARRAY_FLAGS: VMStateFlags = VMStateFlags(
VMStateFlags::VMS_VARRAY_INT32.0
| VMStateFlags::VMS_VARRAY_UINT8.0
| VMStateFlags::VMS_VARRAY_UINT16.0
| VMStateFlags::VMS_VARRAY_UINT32.0,
);
}
// Add a couple builder-style methods to VMStateField, allowing
// easy derivation of VMStateField constants from other types.
impl VMStateField {
#[must_use]
pub const fn with_version_id(mut self, version_id: i32) -> Self {
assert!(version_id >= 0);
self.version_id = version_id;
self
}
#[must_use]
pub const fn with_array_flag(mut self, num: usize) -> Self {
assert!(num <= 0x7FFF_FFFFusize);
assert!((self.flags.0 & VMStateFlags::VMS_ARRAY.0) == 0);
assert!((self.flags.0 & VMStateFlags::VMS_VARRAY_FLAGS.0) == 0);
if (self.flags.0 & VMStateFlags::VMS_POINTER.0) != 0 {
self.flags = VMStateFlags(self.flags.0 & !VMStateFlags::VMS_POINTER.0);
self.flags = VMStateFlags(self.flags.0 | VMStateFlags::VMS_ARRAY_OF_POINTER.0);
}
self.flags = VMStateFlags(self.flags.0 & !VMStateFlags::VMS_SINGLE.0);
self.flags = VMStateFlags(self.flags.0 | VMStateFlags::VMS_ARRAY.0);
self.num = num as i32;
self
}
#[must_use]
pub const fn with_pointer_flag(mut self) -> Self {
assert!((self.flags.0 & VMStateFlags::VMS_POINTER.0) == 0);
self.flags = VMStateFlags(self.flags.0 | VMStateFlags::VMS_POINTER.0);
self
}
#[must_use]
pub const fn with_varray_flag<T: VMState>(mut self, flag: VMStateFlags) -> VMStateField {
assert!((self.flags.0 & VMStateFlags::VMS_ARRAY.0) != 0);
self.flags = VMStateFlags(self.flags.0 & !VMStateFlags::VMS_ARRAY.0);
self.flags = VMStateFlags(self.flags.0 | flag.0);
self
}
#[must_use]
pub const fn with_varray_multiply(mut self, num: u32) -> VMStateField {
assert!(num <= 0x7FFF_FFFFu32);
self.flags = VMStateFlags(self.flags.0 | VMStateFlags::VMS_MULTIPLY_ELEMENTS.0);
self.num = num as i32;
self
}
}
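The builder style above works because `const fn` methods can take `mut self` by value and return it, so a whole chain evaluates at compile time. A standalone sketch (flag bit values here are illustrative, not QEMU's):

```rust
// Standalone sketch of the builder-style const methods above. `Field` and
// the flag values are invented for this example.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Field {
    flags: u32,
    num: i32,
}

const VMS_SINGLE: u32 = 1 << 0;
const VMS_ARRAY: u32 = 1 << 1;

impl Field {
    #[must_use]
    const fn with_array_flag(mut self, num: usize) -> Self {
        // Mirrors with_array_flag: clear the scalar flag, set the array
        // flag, and record the element count.
        assert!(num <= 0x7FFF_FFFF);
        self.flags &= !VMS_SINGLE;
        self.flags |= VMS_ARRAY;
        self.num = num as i32;
        self
    }
}

// The whole derivation happens in const evaluation.
const BASE: Field = Field { flags: VMS_SINGLE, num: 0 };
const ARR: Field = BASE.with_array_flag(16);

fn main() {
    assert_eq!(ARR, Field { flags: VMS_ARRAY, num: 16 });
    println!("ok");
}
```

Because the methods are `const`, a bad combination (for example an oversized `num`) fails the `assert!` at compile time rather than at migration time.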
/// This macro can be used (by just passing it a type) to forward the `VMState`
/// trait to the first field of a tuple. This is a workaround for lack of
/// support of nested [`offset_of`](core::mem::offset_of) until Rust 1.82.0.
///
/// # Examples
///
/// ```
/// # use qemu_api::vmstate::impl_vmstate_forward;
/// pub struct Fifo([u8; 16]);
/// impl_vmstate_forward!(Fifo);
/// ```
#[macro_export]
macro_rules! impl_vmstate_forward {
// This is similar to impl_vmstate_transparent below, but it
// uses the same trick as vmstate_of! to obtain the type of
// the first field of the tuple
($tuple:ty) => {
unsafe impl $crate::vmstate::VMState for $tuple {
const SCALAR_TYPE: $crate::vmstate::VMStateFieldType =
$crate::call_func_with_field!($crate::vmstate::vmstate_scalar_type, $tuple, 0);
const BASE: $crate::bindings::VMStateField =
$crate::call_func_with_field!($crate::vmstate::vmstate_base, $tuple, 0);
}
};
}
// Transparent wrappers: just use the internal type
macro_rules! impl_vmstate_transparent {
($type:ty where $base:tt: VMState $($where:tt)*) => {
unsafe impl<$base> VMState for $type where $base: VMState $($where)* {
const SCALAR_TYPE: VMStateFieldType = <$base as VMState>::SCALAR_TYPE;
const BASE: VMStateField = VMStateField {
size: mem::size_of::<$type>(),
..<$base as VMState>::BASE
};
const VARRAY_FLAG: VMStateFlags = <$base as VMState>::VARRAY_FLAG;
}
};
}
impl_vmstate_transparent!(std::cell::Cell<T> where T: VMState);
impl_vmstate_transparent!(std::cell::UnsafeCell<T> where T: VMState);
impl_vmstate_transparent!(crate::cell::BqlCell<T> where T: VMState);
impl_vmstate_transparent!(crate::cell::BqlRefCell<T> where T: VMState);
#[macro_export]
macro_rules! impl_vmstate_bitsized {
($type:ty) => {
unsafe impl $crate::vmstate::VMState for $type {
const SCALAR_TYPE: $crate::vmstate::VMStateFieldType =
<<<$type as ::bilge::prelude::Bitsized>::ArbitraryInt
as ::bilge::prelude::Number>::UnderlyingType
as $crate::vmstate::VMState>::SCALAR_TYPE;
const BASE: $crate::bindings::VMStateField =
<<<$type as ::bilge::prelude::Bitsized>::ArbitraryInt
as ::bilge::prelude::Number>::UnderlyingType
as $crate::vmstate::VMState>::BASE;
const VARRAY_FLAG: $crate::bindings::VMStateFlags =
<<<$type as ::bilge::prelude::Bitsized>::ArbitraryInt
as ::bilge::prelude::Number>::UnderlyingType
as $crate::vmstate::VMState>::VARRAY_FLAG;
}
};
}
// Scalar types using predefined VMStateInfos
macro_rules! impl_vmstate_scalar {
($info:ident, $type:ty$(, $varray_flag:ident)?) => {
unsafe impl VMState for $type {
const SCALAR_TYPE: VMStateFieldType = VMStateFieldType::$info;
const BASE: VMStateField = VMStateField {
size: mem::size_of::<$type>(),
flags: VMStateFlags::VMS_SINGLE,
..Zeroable::ZERO
};
$(const VARRAY_FLAG: VMStateFlags = VMStateFlags::$varray_flag;)?
}
};
}
impl_vmstate_scalar!(vmstate_info_bool, bool);
impl_vmstate_scalar!(vmstate_info_int8, i8);
impl_vmstate_scalar!(vmstate_info_int16, i16);
impl_vmstate_scalar!(vmstate_info_int32, i32);
impl_vmstate_scalar!(vmstate_info_int64, i64);
impl_vmstate_scalar!(vmstate_info_uint8, u8, VMS_VARRAY_UINT8);
impl_vmstate_scalar!(vmstate_info_uint16, u16, VMS_VARRAY_UINT16);
impl_vmstate_scalar!(vmstate_info_uint32, u32, VMS_VARRAY_UINT32);
impl_vmstate_scalar!(vmstate_info_uint64, u64);
impl_vmstate_scalar!(vmstate_info_timer, bindings::QEMUTimer);
// Pointer types using the underlying type's VMState plus VMS_POINTER
// Note that references are not supported, though references to cells
// could be allowed.
macro_rules! impl_vmstate_pointer {
($type:ty where $base:tt: VMState $($where:tt)*) => {
unsafe impl<$base> VMState for $type where $base: VMState $($where)* {
const SCALAR_TYPE: VMStateFieldType = <T as VMState>::SCALAR_TYPE;
const BASE: VMStateField = <$base as VMState>::BASE.with_pointer_flag();
}
};
}
impl_vmstate_pointer!(*const T where T: VMState);
impl_vmstate_pointer!(*mut T where T: VMState);
impl_vmstate_pointer!(NonNull<T> where T: VMState);
// Unlike C pointers, Box is always non-null therefore there is no need
// to specify VMS_ALLOC.
impl_vmstate_pointer!(Box<T> where T: VMState);
// Arrays using the underlying type's VMState plus
// VMS_ARRAY/VMS_ARRAY_OF_POINTER
unsafe impl<T: VMState, const N: usize> VMState for [T; N] {
const SCALAR_TYPE: VMStateFieldType = <T as VMState>::SCALAR_TYPE;
const BASE: VMStateField = <T as VMState>::BASE.with_array_flag(N);
}
#[doc(alias = "VMSTATE_UNUSED")]
#[macro_export]
macro_rules! vmstate_unused {
($size:expr) => {{
$crate::vmstate_unused_v!(0, $size)
}};
}
#[doc(alias = "VMSTATE_SINGLE_TEST")]
#[macro_export]
macro_rules! vmstate_single_test {
($field_name:ident, $struct_name:ty, $field_exists_fn:expr, $version_id:expr, $info:expr, $size:expr) => {{
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), 0)
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
err_hint: ::core::ptr::null(),
offset: $crate::offset_of!($struct_name, $field_name),
name: $crate::c_str!("unused").as_ptr(),
size: $size,
start: 0,
num: 0,
num_offset: 0,
size_offset: 0,
info: unsafe { $info },
flags: VMStateFlags::VMS_SINGLE,
vmsd: ::core::ptr::null(),
version_id: $version_id,
struct_version_id: 0,
field_exists: $field_exists_fn,
info: unsafe { ::core::ptr::addr_of!($crate::bindings::vmstate_info_unused_buffer) },
flags: $crate::bindings::VMStateFlags::VMS_BUFFER,
..$crate::zeroable::Zeroable::ZERO
}
}};
}
#[doc(alias = "VMSTATE_SINGLE")]
// FIXME: including the `vmsd` field in a `const` is not possible without
// the const_refs_static feature (stabilized in Rust 1.83.0). Without it,
// it is not possible to use VMS_STRUCT in a transparent manner using
// `vmstate_of!`. While VMSTATE_CLOCK can at least try to be type-safe,
// VMSTATE_STRUCT includes $type only for documentation purposes; it
// is checked against $field_name and $struct_name, but not against $vmsd
// which is what really would matter.
#[doc(alias = "VMSTATE_STRUCT")]
#[macro_export]
macro_rules! vmstate_single {
($field_name:ident, $struct_name:ty, $version_id:expr, $info:expr, $size:expr) => {{
$crate::vmstate_single_test!($field_name, $struct_name, None, $version_id, $info, $size)
}};
}
#[doc(alias = "VMSTATE_UINT32_V")]
#[macro_export]
macro_rules! vmstate_uint32_v {
($field_name:ident, $struct_name:ty, $version_id:expr) => {{
$crate::vmstate_single!(
$field_name,
$struct_name,
$version_id,
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint32),
::core::mem::size_of::<u32>()
)
}};
}
#[doc(alias = "VMSTATE_UINT32")]
#[macro_export]
macro_rules! vmstate_uint32 {
($field_name:ident, $struct_name:ty) => {{
$crate::vmstate_uint32_v!($field_name, $struct_name, 0)
}};
}
#[doc(alias = "VMSTATE_ARRAY")]
#[macro_export]
macro_rules! vmstate_array {
($field_name:ident, $struct_name:ty, $length:expr, $version_id:expr, $info:expr, $size:expr) => {{
macro_rules! vmstate_struct {
($struct_name:ty, $field_name:ident $([0 .. $num:ident $(* $factor:expr)?])?, $vmsd:expr, $type:ty $(,)?) => {
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), 0)
name: ::core::concat!(::core::stringify!($field_name), "\0")
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
err_hint: ::core::ptr::null(),
offset: $crate::offset_of!($struct_name, $field_name),
size: $size,
start: 0,
num: $length as _,
num_offset: 0,
size_offset: 0,
info: unsafe { $info },
flags: VMStateFlags::VMS_ARRAY,
vmsd: ::core::ptr::null(),
version_id: $version_id,
struct_version_id: 0,
field_exists: None,
}
}};
}
#[doc(alias = "VMSTATE_UINT32_ARRAY_V")]
#[macro_export]
macro_rules! vmstate_uint32_array_v {
($field_name:ident, $struct_name:ty, $length:expr, $version_id:expr) => {{
$crate::vmstate_array!(
$field_name,
$struct_name,
$length,
$version_id,
::core::ptr::addr_of!($crate::bindings::vmstate_info_uint32),
::core::mem::size_of::<u32>()
)
}};
}
#[doc(alias = "VMSTATE_UINT32_ARRAY")]
#[macro_export]
macro_rules! vmstate_uint32_array {
($field_name:ident, $struct_name:ty, $length:expr) => {{
$crate::vmstate_uint32_array_v!($field_name, $struct_name, $length, 0)
}};
}
#[doc(alias = "VMSTATE_STRUCT_POINTER_V")]
#[macro_export]
macro_rules! vmstate_struct_pointer_v {
($field_name:ident, $struct_name:ty, $version_id:expr, $vmsd:expr, $type:ty) => {{
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), 0)
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
err_hint: ::core::ptr::null(),
offset: $crate::offset_of!($struct_name, $field_name),
size: ::core::mem::size_of::<*const $type>(),
start: 0,
num: 0,
num_offset: 0,
size_offset: 0,
info: ::core::ptr::null(),
flags: VMStateFlags(VMStateFlags::VMS_STRUCT.0 | VMStateFlags::VMS_POINTER.0),
$(.num_offset: $crate::offset_of!($struct_name, $num),)?
offset: {
$crate::assert_field_type!($struct_name, $field_name, $type);
$crate::offset_of!($struct_name, $field_name)
},
size: ::core::mem::size_of::<$type>(),
flags: $crate::bindings::VMStateFlags::VMS_STRUCT,
vmsd: unsafe { $vmsd },
version_id: $version_id,
struct_version_id: 0,
field_exists: None,
..$crate::zeroable::Zeroable::ZERO $(
.with_varray_flag($crate::call_func_with_field!(
$crate::vmstate::vmstate_varray_flag,
$struct_name,
$num))
$(.with_varray_multiply($factor))?)?
}
}};
}
#[doc(alias = "VMSTATE_ARRAY_OF_POINTER")]
#[macro_export]
macro_rules! vmstate_array_of_pointer {
($field_name:ident, $struct_name:ty, $num:expr, $version_id:expr, $info:expr, $type:ty) => {{
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), 0)
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
version_id: $version_id,
num: $num as _,
info: unsafe { $info },
size: ::core::mem::size_of::<*const $type>(),
flags: VMStateFlags(VMStateFlags::VMS_ARRAY.0 | VMStateFlags::VMS_ARRAY_OF_POINTER.0),
offset: $crate::offset_of!($struct_name, $field_name),
err_hint: ::core::ptr::null(),
start: 0,
num_offset: 0,
size_offset: 0,
vmsd: ::core::ptr::null(),
struct_version_id: 0,
field_exists: None,
}
}};
}
#[doc(alias = "VMSTATE_ARRAY_OF_POINTER_TO_STRUCT")]
#[macro_export]
macro_rules! vmstate_array_of_pointer_to_struct {
($field_name:ident, $struct_name:ty, $num:expr, $version_id:expr, $vmsd:expr, $type:ty) => {{
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), 0)
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
version_id: $version_id,
num: $num as _,
vmsd: unsafe { $vmsd },
size: ::core::mem::size_of::<*const $type>(),
flags: VMStateFlags(
VMStateFlags::VMS_ARRAY.0
| VMStateFlags::VMS_STRUCT.0
| VMStateFlags::VMS_ARRAY_OF_POINTER.0,
),
offset: $crate::offset_of!($struct_name, $field_name),
err_hint: ::core::ptr::null(),
start: 0,
num_offset: 0,
size_offset: 0,
vmsd: ::core::ptr::null(),
struct_version_id: 0,
field_exists: None,
}
}};
}
#[doc(alias = "VMSTATE_CLOCK_V")]
#[macro_export]
macro_rules! vmstate_clock_v {
($field_name:ident, $struct_name:ty, $version_id:expr) => {{
$crate::vmstate_struct_pointer_v!(
$field_name,
$struct_name,
$version_id,
::core::ptr::addr_of!($crate::bindings::vmstate_clock),
$crate::bindings::Clock
)
}};
};
}
#[doc(alias = "VMSTATE_CLOCK")]
#[macro_export]
macro_rules! vmstate_clock {
($field_name:ident, $struct_name:ty) => {{
$crate::vmstate_clock_v!($field_name, $struct_name, 0)
}};
}
#[doc(alias = "VMSTATE_ARRAY_CLOCK_V")]
#[macro_export]
macro_rules! vmstate_array_clock_v {
($field_name:ident, $struct_name:ty, $num:expr, $version_id:expr) => {{
$crate::vmstate_array_of_pointer_to_struct!(
$field_name,
$struct_name,
$num,
$version_id,
::core::ptr::addr_of!($crate::bindings::vmstate_clock),
$crate::bindings::Clock
)
}};
}
#[doc(alias = "VMSTATE_ARRAY_CLOCK")]
#[macro_export]
macro_rules! vmstate_array_clock {
($field_name:ident, $struct_name:ty, $num:expr) => {{
$crate::vmstate_array_clock_v!($field_name, $struct_name, $name, 0)
($struct_name:ty, $field_name:ident) => {{
$crate::bindings::VMStateField {
name: ::core::concat!(::core::stringify!($field_name), "\0")
.as_bytes()
.as_ptr() as *const ::std::os::raw::c_char,
offset: {
$crate::assert_field_type!(
$struct_name,
$field_name,
core::ptr::NonNull<$crate::bindings::Clock>
);
$crate::offset_of!($struct_name, $field_name)
},
size: ::core::mem::size_of::<*const $crate::bindings::Clock>(),
flags: VMStateFlags(VMStateFlags::VMS_STRUCT.0 | VMStateFlags::VMS_POINTER.0),
vmsd: unsafe { ::core::ptr::addr_of!($crate::bindings::vmstate_clock) },
..$crate::zeroable::Zeroable::ZERO
}
}};
}
@@ -287,20 +487,8 @@ macro_rules! vmstate_fields {
static _FIELDS: &[$crate::bindings::VMStateField] = &[
$($field),*,
$crate::bindings::VMStateField {
flags: $crate::bindings::VMStateFlags::VMS_END,
..$crate::zeroable::Zeroable::ZERO
}
];
_FIELDS.as_ptr()


@@ -1,13 +1,11 @@
// SPDX-License-Identifier: GPL-2.0-or-later
use std::ptr;
/// Encapsulates the requirement that
/// `MaybeUninit::<Self>::zeroed().assume_init()` does not cause undefined
/// behavior. This trait in principle could be implemented as just:
///
/// ```
/// pub unsafe trait Zeroable {
/// const ZERO: Self = unsafe { ::core::mem::MaybeUninit::<Self>::zeroed().assume_init() };
/// }
/// ```
@@ -29,58 +27,76 @@ pub unsafe trait Zeroable: Default {
const ZERO: Self;
}
/// A macro that acts similarly to [`core::mem::zeroed()`], except that it is const
///
/// ## Safety
///
/// Similar to `core::mem::zeroed()`, except this zeroes padding bits. Zeroed
/// padding usually isn't relevant to safety, but might be if a C union is used.
///
/// Just like for `core::mem::zeroed()`, an all zero byte pattern might not
/// be a valid value for a type, as is the case for references `&T` and `&mut
/// T`. Reference types trigger a (denied by default) lint and cause immediate
/// undefined behavior if the lint is ignored
///
/// ```rust compile_fail
/// use const_zero::const_zero;
/// // error: any use of this value will cause an error
/// // note: `#[deny(const_err)]` on by default
/// const STR: &str = unsafe{const_zero!(&'static str)};
/// ```
///
/// `const_zero` does not work on unsized types:
///
/// ```rust compile_fail
/// use const_zero::const_zero;
/// // error[E0277]: the size for values of type `[u8]` cannot be known at compilation time
/// const BYTES: [u8] = unsafe{const_zero!([u8])};
/// ```
/// ## Differences with `core::mem::zeroed`
///
/// `const_zero` zeroes padding bits, while `core::mem::zeroed` doesn't
macro_rules! const_zero {
// This macro to produce a type-generic zero constant is taken from the
// const_zero crate (v0.1.1):
//
// https://docs.rs/const-zero/latest/src/const_zero/lib.rs.html
//
// and used under MIT license
($type_:ty) => {{
const TYPE_SIZE: ::core::primitive::usize = ::core::mem::size_of::<$type_>();
union TypeAsBytes {
bytes: [::core::primitive::u8; TYPE_SIZE],
inner: ::core::mem::ManuallyDrop<$type_>,
}
const ZERO_BYTES: TypeAsBytes = TypeAsBytes {
bytes: [0; TYPE_SIZE],
};
::core::mem::ManuallyDrop::<$type_>::into_inner(ZERO_BYTES.inner)
}};
}
/// A wrapper to implement the `Zeroable` trait through the `const_zero` macro.
macro_rules! impl_zeroable {
($type:ty) => {
unsafe impl Zeroable for $type {
const ZERO: Self = unsafe { const_zero!($type) };
}
};
}
// bindgen does not derive Default here
#[allow(clippy::derivable_impls)]
impl Default for crate::bindings::VMStateFlags {
fn default() -> Self {
Self(0)
}
}
impl_zeroable!(crate::bindings::Property__bindgen_ty_1);
impl_zeroable!(crate::bindings::Property);
impl_zeroable!(crate::bindings::VMStateFlags);
impl_zeroable!(crate::bindings::VMStateField);
impl_zeroable!(crate::bindings::VMStateDescription);
impl_zeroable!(crate::bindings::MemoryRegionOps__bindgen_ty_1);
impl_zeroable!(crate::bindings::MemoryRegionOps__bindgen_ty_2);
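The `impl_zeroable!` pattern above can be exercised outside QEMU; the following standalone sketch (with an arbitrary example type, not the qemu_api bindings) shows the same union-based const-zeroing trick:

```rust
// Standalone sketch of the union trick used by const_zero above:
// reinterpret an all-zero byte array as the target type at const time.
// `Point` is an arbitrary example type, not a QEMU binding.
macro_rules! const_zero {
    ($type_:ty) => {{
        const TYPE_SIZE: usize = ::core::mem::size_of::<$type_>();
        union TypeAsBytes {
            bytes: [u8; TYPE_SIZE],
            inner: ::core::mem::ManuallyDrop<$type_>,
        }
        const ZERO_BYTES: TypeAsBytes = TypeAsBytes {
            bytes: [0; TYPE_SIZE],
        };
        ::core::mem::ManuallyDrop::<$type_>::into_inner(ZERO_BYTES.inner)
    }};
}

#[repr(C)]
#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i64,
}

// The union field read is unsafe, so the caller supplies the unsafe
// block, just as the impl_zeroable! wrapper does.
const ORIGIN: Point = unsafe { const_zero!(Point) };

fn main() {
    assert_eq!(ORIGIN, Point { x: 0, y: 0 });
}
```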


@@ -14,8 +14,8 @@ use qemu_api::{
cell::{self, BqlCell},
declare_properties, define_property,
prelude::*,
qdev::{DeviceClass, DeviceImpl, DeviceState, Property},
qom::{ClassInitImpl, ObjectImpl, ParentField},
vmstate::VMStateDescription,
zeroable::Zeroable,
};
@@ -37,6 +37,10 @@ pub struct DummyState {
qom_isa!(DummyState: Object, DeviceState);
pub struct DummyClass {
parent_class: <DeviceState as ObjectType>::Class,
}
declare_properties! {
DUMMY_PROPERTIES,
define_property!(
@@ -49,7 +53,7 @@ declare_properties! {
}
unsafe impl ObjectType for DummyState {
type Class = DummyClass;
const TYPE_NAME: &'static CStr = c_str!("dummy");
}
@@ -67,6 +71,51 @@ impl DeviceImpl for DummyState {
}
}
// `impl<T> ClassInitImpl<DummyClass> for T` doesn't work since it violates
// the orphan rule.
impl ClassInitImpl<DummyClass> for DummyState {
fn class_init(klass: &mut DummyClass) {
<Self as ClassInitImpl<DeviceClass>>::class_init(&mut klass.parent_class);
}
}
#[derive(qemu_api_macros::offsets)]
#[repr(C)]
#[derive(qemu_api_macros::Object)]
pub struct DummyChildState {
parent: ParentField<DummyState>,
}
qom_isa!(DummyChildState: Object, DeviceState, DummyState);
pub struct DummyChildClass {
parent_class: <DummyState as ObjectType>::Class,
}
unsafe impl ObjectType for DummyChildState {
type Class = DummyChildClass;
const TYPE_NAME: &'static CStr = c_str!("dummy_child");
}
impl ObjectImpl for DummyChildState {
type ParentType = DummyState;
const ABSTRACT: bool = false;
}
impl DeviceImpl for DummyChildState {}
impl ClassInitImpl<DummyClass> for DummyChildState {
fn class_init(klass: &mut DummyClass) {
<Self as ClassInitImpl<DeviceClass>>::class_init(&mut klass.parent_class);
}
}
impl ClassInitImpl<DummyChildClass> for DummyChildState {
fn class_init(klass: &mut DummyChildClass) {
<Self as ClassInitImpl<DummyClass>>::class_init(&mut klass.parent_class);
}
}
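The `ClassInitImpl` impls above chain each class's initialization through its parent's portion first. A minimal standalone model of that delegation (hypothetical types and trait, not the actual qemu_api API):

```rust
// Standalone model of the class_init delegation chain: each level fills
// in its parent's portion of the class struct before its own fields.
// All names here are illustrative; this is not the qemu_api crate's API.
trait ClassInit<K> {
    fn class_init(klass: &mut K);
}

#[derive(Default)]
struct DeviceClass {
    realized: bool,
}

#[derive(Default)]
struct DummyClass {
    parent_class: DeviceClass,
    extra: u32,
}

struct DummyState;

impl ClassInit<DeviceClass> for DummyState {
    fn class_init(klass: &mut DeviceClass) {
        klass.realized = true;
    }
}

impl ClassInit<DummyClass> for DummyState {
    fn class_init(klass: &mut DummyClass) {
        // Initialize the parent part first, as the impls above do.
        <Self as ClassInit<DeviceClass>>::class_init(&mut klass.parent_class);
        klass.extra = 42;
    }
}

fn main() {
    let mut k = DummyClass::default();
    <DummyState as ClassInit<DummyClass>>::class_init(&mut k);
    assert!(k.parent_class.realized);
    assert_eq!(k.extra, 42);
}
```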
fn init_qom() {
static ONCE: BqlCell<bool> = BqlCell::new(false);
@@ -85,6 +134,7 @@ fn test_object_new() {
init_qom();
unsafe {
object_unref(object_new(DummyState::TYPE_NAME.as_ptr()).cast());
object_unref(object_new(DummyChildState::TYPE_NAME.as_ptr()).cast());
}
}


@@ -57,8 +57,8 @@ if have_user
stub_ss.add(files('cpu-synchronize-state.c'))
# Stubs for QAPI events. Those can always be included in the build, but
# they are not built at all for --disable-system builds.
if not have_system
stub_ss.add(files('qdev.c'))
endif
endif


@@ -1364,7 +1364,7 @@ FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
"taa-no", NULL, NULL, NULL,
NULL, "sbdr-ssdp-no", "fbsdp-no", "psdp-no",
NULL, "fb-clear", NULL, NULL,
"bhi-no", NULL, NULL, NULL,
"pbrsb-no", NULL, "gds-no", "rfds-no",
"rfds-clear", NULL, NULL, NULL,
},
@@ -4547,6 +4547,160 @@ static const X86CPUDefinition builtin_x86_defs[] = {
MSR_VMX_VMFUNC_EPT_SWITCHING,
.xlevel = 0x80000008,
.model_id = "Intel Xeon Processor (SierraForest)",
.versions = (X86CPUVersionDefinition[]) {
{ .version = 1 },
{
.version = 2,
.props = (PropValue[]) {
{ "ss", "on" },
{ "tsc-adjust", "on" },
{ "cldemote", "on" },
{ "movdiri", "on" },
{ "movdir64b", "on" },
{ "gds-no", "on" },
{ "rfds-no", "on" },
{ "lam", "on" },
{ "intel-psfd", "on"},
{ "ipred-ctrl", "on"},
{ "rrsba-ctrl", "on"},
{ "bhi-ctrl", "on"},
{ "stepping", "3" },
{ /* end of list */ }
}
},
{ /* end of list */ },
},
},
{
.name = "ClearwaterForest",
.level = 0x23,
.xlevel = 0x80000008,
.vendor = CPUID_VENDOR_INTEL,
.family = 6,
.model = 221,
.stepping = 0,
/*
* please keep the ascending order so that we can have a clear view of
* bit position of each feature.
*/
.features[FEAT_1_EDX] =
CPUID_FP87 | CPUID_VME | CPUID_DE | CPUID_PSE | CPUID_TSC |
CPUID_MSR | CPUID_PAE | CPUID_MCE | CPUID_CX8 | CPUID_APIC |
CPUID_SEP | CPUID_MTRR | CPUID_PGE | CPUID_MCA | CPUID_CMOV |
CPUID_PAT | CPUID_PSE36 | CPUID_CLFLUSH | CPUID_MMX | CPUID_FXSR |
CPUID_SSE | CPUID_SSE2 | CPUID_SS,
.features[FEAT_1_ECX] =
CPUID_EXT_SSE3 | CPUID_EXT_PCLMULQDQ | CPUID_EXT_SSSE3 |
CPUID_EXT_FMA | CPUID_EXT_CX16 | CPUID_EXT_PCID | CPUID_EXT_SSE41 |
CPUID_EXT_SSE42 | CPUID_EXT_X2APIC | CPUID_EXT_MOVBE |
CPUID_EXT_POPCNT | CPUID_EXT_TSC_DEADLINE_TIMER | CPUID_EXT_AES |
CPUID_EXT_XSAVE | CPUID_EXT_AVX | CPUID_EXT_F16C | CPUID_EXT_RDRAND,
.features[FEAT_8000_0001_EDX] =
CPUID_EXT2_SYSCALL | CPUID_EXT2_NX | CPUID_EXT2_PDPE1GB |
CPUID_EXT2_RDTSCP | CPUID_EXT2_LM,
.features[FEAT_8000_0001_ECX] =
CPUID_EXT3_LAHF_LM | CPUID_EXT3_ABM | CPUID_EXT3_3DNOWPREFETCH,
.features[FEAT_8000_0008_EBX] =
CPUID_8000_0008_EBX_WBNOINVD,
.features[FEAT_7_0_EBX] =
CPUID_7_0_EBX_FSGSBASE | CPUID_7_0_EBX_TSC_ADJUST |
CPUID_7_0_EBX_BMI1 | CPUID_7_0_EBX_AVX2 | CPUID_7_0_EBX_SMEP |
CPUID_7_0_EBX_BMI2 | CPUID_7_0_EBX_ERMS | CPUID_7_0_EBX_INVPCID |
CPUID_7_0_EBX_RDSEED | CPUID_7_0_EBX_ADX | CPUID_7_0_EBX_SMAP |
CPUID_7_0_EBX_CLFLUSHOPT | CPUID_7_0_EBX_CLWB |
CPUID_7_0_EBX_SHA_NI,
.features[FEAT_7_0_ECX] =
CPUID_7_0_ECX_UMIP | CPUID_7_0_ECX_PKU | CPUID_7_0_ECX_GFNI |
CPUID_7_0_ECX_VAES | CPUID_7_0_ECX_VPCLMULQDQ |
CPUID_7_0_ECX_RDPID | CPUID_7_0_ECX_BUS_LOCK_DETECT |
CPUID_7_0_ECX_CLDEMOTE | CPUID_7_0_ECX_MOVDIRI |
CPUID_7_0_ECX_MOVDIR64B,
.features[FEAT_7_0_EDX] =
CPUID_7_0_EDX_FSRM | CPUID_7_0_EDX_SERIALIZE |
CPUID_7_0_EDX_SPEC_CTRL | CPUID_7_0_EDX_ARCH_CAPABILITIES |
CPUID_7_0_EDX_SPEC_CTRL_SSBD,
.features[FEAT_ARCH_CAPABILITIES] =
MSR_ARCH_CAP_RDCL_NO | MSR_ARCH_CAP_IBRS_ALL |
MSR_ARCH_CAP_SKIP_L1DFL_VMENTRY | MSR_ARCH_CAP_MDS_NO |
MSR_ARCH_CAP_PSCHANGE_MC_NO | MSR_ARCH_CAP_SBDR_SSDP_NO |
MSR_ARCH_CAP_FBSDP_NO | MSR_ARCH_CAP_PSDP_NO |
MSR_ARCH_CAP_BHI_NO | MSR_ARCH_CAP_PBRSB_NO |
MSR_ARCH_CAP_GDS_NO | MSR_ARCH_CAP_RFDS_NO,
.features[FEAT_XSAVE] =
CPUID_XSAVE_XSAVEOPT | CPUID_XSAVE_XSAVEC |
CPUID_XSAVE_XGETBV1 | CPUID_XSAVE_XSAVES,
.features[FEAT_6_EAX] =
CPUID_6_EAX_ARAT,
.features[FEAT_7_1_EAX] =
CPUID_7_1_EAX_SHA512 | CPUID_7_1_EAX_SM3 | CPUID_7_1_EAX_SM4 |
CPUID_7_1_EAX_AVX_VNNI | CPUID_7_1_EAX_CMPCCXADD |
CPUID_7_1_EAX_FSRS | CPUID_7_1_EAX_AVX_IFMA |
CPUID_7_1_EAX_LAM,
.features[FEAT_7_1_EDX] =
CPUID_7_1_EDX_AVX_VNNI_INT8 | CPUID_7_1_EDX_AVX_NE_CONVERT |
CPUID_7_1_EDX_AVX_VNNI_INT16 | CPUID_7_1_EDX_PREFETCHITI,
.features[FEAT_7_2_EDX] =
CPUID_7_2_EDX_PSFD | CPUID_7_2_EDX_IPRED_CTRL |
CPUID_7_2_EDX_RRSBA_CTRL | CPUID_7_2_EDX_DDPD_U |
CPUID_7_2_EDX_BHI_CTRL | CPUID_7_2_EDX_MCDT_NO,
.features[FEAT_VMX_BASIC] =
MSR_VMX_BASIC_INS_OUTS | MSR_VMX_BASIC_TRUE_CTLS,
.features[FEAT_VMX_ENTRY_CTLS] =
VMX_VM_ENTRY_LOAD_DEBUG_CONTROLS | VMX_VM_ENTRY_IA32E_MODE |
VMX_VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL |
VMX_VM_ENTRY_LOAD_IA32_PAT | VMX_VM_ENTRY_LOAD_IA32_EFER,
.features[FEAT_VMX_EPT_VPID_CAPS] =
MSR_VMX_EPT_EXECONLY | MSR_VMX_EPT_PAGE_WALK_LENGTH_4 |
MSR_VMX_EPT_WB | MSR_VMX_EPT_2MB | MSR_VMX_EPT_1GB |
MSR_VMX_EPT_INVEPT | MSR_VMX_EPT_AD_BITS |
MSR_VMX_EPT_INVEPT_SINGLE_CONTEXT | MSR_VMX_EPT_INVEPT_ALL_CONTEXT |
MSR_VMX_EPT_INVVPID | MSR_VMX_EPT_INVVPID_SINGLE_ADDR |
MSR_VMX_EPT_INVVPID_SINGLE_CONTEXT |
MSR_VMX_EPT_INVVPID_ALL_CONTEXT |
MSR_VMX_EPT_INVVPID_SINGLE_CONTEXT_NOGLOBALS,
.features[FEAT_VMX_EXIT_CTLS] =
VMX_VM_EXIT_SAVE_DEBUG_CONTROLS |
VMX_VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL |
VMX_VM_EXIT_ACK_INTR_ON_EXIT | VMX_VM_EXIT_SAVE_IA32_PAT |
VMX_VM_EXIT_LOAD_IA32_PAT | VMX_VM_EXIT_SAVE_IA32_EFER |
VMX_VM_EXIT_LOAD_IA32_EFER | VMX_VM_EXIT_SAVE_VMX_PREEMPTION_TIMER,
.features[FEAT_VMX_MISC] =
MSR_VMX_MISC_STORE_LMA | MSR_VMX_MISC_ACTIVITY_HLT |
MSR_VMX_MISC_VMWRITE_VMEXIT,
.features[FEAT_VMX_PINBASED_CTLS] =
VMX_PIN_BASED_EXT_INTR_MASK | VMX_PIN_BASED_NMI_EXITING |
VMX_PIN_BASED_VIRTUAL_NMIS | VMX_PIN_BASED_VMX_PREEMPTION_TIMER |
VMX_PIN_BASED_POSTED_INTR,
.features[FEAT_VMX_PROCBASED_CTLS] =
VMX_CPU_BASED_VIRTUAL_INTR_PENDING |
VMX_CPU_BASED_USE_TSC_OFFSETING | VMX_CPU_BASED_HLT_EXITING |
VMX_CPU_BASED_INVLPG_EXITING | VMX_CPU_BASED_MWAIT_EXITING |
VMX_CPU_BASED_RDPMC_EXITING | VMX_CPU_BASED_RDTSC_EXITING |
VMX_CPU_BASED_CR3_LOAD_EXITING | VMX_CPU_BASED_CR3_STORE_EXITING |
VMX_CPU_BASED_CR8_LOAD_EXITING | VMX_CPU_BASED_CR8_STORE_EXITING |
VMX_CPU_BASED_TPR_SHADOW | VMX_CPU_BASED_VIRTUAL_NMI_PENDING |
VMX_CPU_BASED_MOV_DR_EXITING | VMX_CPU_BASED_UNCOND_IO_EXITING |
VMX_CPU_BASED_USE_IO_BITMAPS | VMX_CPU_BASED_MONITOR_TRAP_FLAG |
VMX_CPU_BASED_USE_MSR_BITMAPS | VMX_CPU_BASED_MONITOR_EXITING |
VMX_CPU_BASED_PAUSE_EXITING |
VMX_CPU_BASED_ACTIVATE_SECONDARY_CONTROLS,
.features[FEAT_VMX_SECONDARY_CTLS] =
VMX_SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
VMX_SECONDARY_EXEC_ENABLE_EPT | VMX_SECONDARY_EXEC_DESC |
VMX_SECONDARY_EXEC_RDTSCP |
VMX_SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
VMX_SECONDARY_EXEC_ENABLE_VPID | VMX_SECONDARY_EXEC_WBINVD_EXITING |
VMX_SECONDARY_EXEC_UNRESTRICTED_GUEST |
VMX_SECONDARY_EXEC_APIC_REGISTER_VIRT |
VMX_SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
VMX_SECONDARY_EXEC_RDRAND_EXITING |
VMX_SECONDARY_EXEC_ENABLE_INVPCID |
VMX_SECONDARY_EXEC_ENABLE_VMFUNC | VMX_SECONDARY_EXEC_SHADOW_VMCS |
VMX_SECONDARY_EXEC_RDSEED_EXITING | VMX_SECONDARY_EXEC_ENABLE_PML |
VMX_SECONDARY_EXEC_XSAVES,
.features[FEAT_VMX_VMFUNC] =
MSR_VMX_VMFUNC_EPT_SWITCHING,
.model_id = "Intel Xeon Processor (ClearwaterForest)",
.versions = (X86CPUVersionDefinition[]) {
{ .version = 1 },
{ /* end of list */ },


@@ -951,6 +951,12 @@ uint64_t x86_cpu_get_supported_feature_word(X86CPU *cpu, FeatureWord w);
/* Speculative Store Bypass Disable */
#define CPUID_7_0_EDX_SPEC_CTRL_SSBD (1U << 31)
/* SHA512 Instruction */
#define CPUID_7_1_EAX_SHA512 (1U << 0)
/* SM3 Instruction */
#define CPUID_7_1_EAX_SM3 (1U << 1)
/* SM4 Instruction */
#define CPUID_7_1_EAX_SM4 (1U << 2)
/* AVX VNNI Instruction */
#define CPUID_7_1_EAX_AVX_VNNI (1U << 4)
/* AVX512 BFloat16 Instruction */
@@ -963,6 +969,12 @@ uint64_t x86_cpu_get_supported_feature_word(X86CPU *cpu, FeatureWord w);
#define CPUID_7_1_EAX_FSRS (1U << 11)
/* Fast Short REP CMPS/SCAS */
#define CPUID_7_1_EAX_FSRC (1U << 12)
/* Flexible return and event delivery (FRED) */
#define CPUID_7_1_EAX_FRED (1U << 17)
/* Load into IA32_KERNEL_GS_BASE (LKGS) */
#define CPUID_7_1_EAX_LKGS (1U << 18)
/* Non-Serializing Write to Model Specific Register (WRMSRNS) */
#define CPUID_7_1_EAX_WRMSRNS (1U << 19)
/* Support Tile Computational Operations on FP16 Numbers */
#define CPUID_7_1_EAX_AMX_FP16 (1U << 21)
/* Support for VPMADD52[H,L]UQ */
@@ -976,17 +988,23 @@ uint64_t x86_cpu_get_supported_feature_word(X86CPU *cpu, FeatureWord w);
#define CPUID_7_1_EDX_AVX_NE_CONVERT (1U << 5)
/* AMX COMPLEX Instructions */
#define CPUID_7_1_EDX_AMX_COMPLEX (1U << 8)
/* AVX-VNNI-INT16 Instructions */
#define CPUID_7_1_EDX_AVX_VNNI_INT16 (1U << 10)
/* PREFETCHIT0/1 Instructions */
#define CPUID_7_1_EDX_PREFETCHITI (1U << 14)
/* Support for Advanced Vector Extensions 10 */
#define CPUID_7_1_EDX_AVX10 (1U << 19)
/* Indicate bit 7 of the IA32_SPEC_CTRL MSR is supported */
#define CPUID_7_2_EDX_PSFD (1U << 0)
/* Indicate bits 3 and 4 of the IA32_SPEC_CTRL MSR are supported */
#define CPUID_7_2_EDX_IPRED_CTRL (1U << 1)
/* Indicate bits 5 and 6 of the IA32_SPEC_CTRL MSR are supported */
#define CPUID_7_2_EDX_RRSBA_CTRL (1U << 2)
/* Indicate bit 8 of the IA32_SPEC_CTRL MSR is supported */
#define CPUID_7_2_EDX_DDPD_U (1U << 3)
/* Indicate bit 10 of the IA32_SPEC_CTRL MSR is supported */
#define CPUID_7_2_EDX_BHI_CTRL (1U << 4)
/* Do not exhibit MXCSR Configuration Dependent Timing (MCDT) behavior */
#define CPUID_7_2_EDX_MCDT_NO (1U << 5)
@@ -1144,7 +1162,10 @@ uint64_t x86_cpu_get_supported_feature_word(X86CPU *cpu, FeatureWord w);
#define MSR_ARCH_CAP_FBSDP_NO (1U << 14)
#define MSR_ARCH_CAP_PSDP_NO (1U << 15)
#define MSR_ARCH_CAP_FB_CLEAR (1U << 17)
#define MSR_ARCH_CAP_BHI_NO (1U << 20)
#define MSR_ARCH_CAP_PBRSB_NO (1U << 24)
#define MSR_ARCH_CAP_GDS_NO (1U << 26)
#define MSR_ARCH_CAP_RFDS_NO (1U << 27)
#define MSR_CORE_CAP_SPLIT_LOCK_DETECT (1U << 5)


@@ -1630,7 +1630,7 @@ static void gen_CMC(DisasContext *s, X86DecodedInsn *decode)
static void gen_CMOVcc(DisasContext *s, X86DecodedInsn *decode)
{
gen_cmovcc(s, decode->b & 0xf, s->T0, s->T1);
}
static void gen_CMPccXADD(DisasContext *s, X86DecodedInsn *decode)
@@ -1739,11 +1739,7 @@ static void gen_CMPccXADD(DisasContext *s, X86DecodedInsn *decode)
static void gen_CMPS(DisasContext *s, X86DecodedInsn *decode)
{
MemOp ot = decode->op[2].ot;
gen_repz_nz(s, ot, gen_cmps);
}
static void gen_CMPXCHG(DisasContext *s, X86DecodedInsn *decode)
@@ -2234,11 +2230,7 @@ static void gen_INS(DisasContext *s, X86DecodedInsn *decode)
}
translator_io_start(&s->base);
gen_repz(s, ot, gen_ins);
}
static void gen_INSERTQ_i(DisasContext *s, X86DecodedInsn *decode)
@@ -2293,8 +2285,11 @@ static void gen_IRET(DisasContext *s, X86DecodedInsn *decode)
static void gen_Jcc(DisasContext *s, X86DecodedInsn *decode)
{
TCGLabel *taken = gen_new_label();
gen_bnd_jmp(s);
gen_jcc(s, decode->b & 0xf, taken);
gen_conditional_jump_labels(s, decode->immediate, NULL, taken);
}
static void gen_JCXZ(DisasContext *s, X86DecodedInsn *decode)
@@ -2419,11 +2414,7 @@ static void gen_LGS(DisasContext *s, X86DecodedInsn *decode)
static void gen_LODS(DisasContext *s, X86DecodedInsn *decode)
{
MemOp ot = decode->op[1].ot;
gen_repz(s, ot, gen_lods);
}
static void gen_LOOP(DisasContext *s, X86DecodedInsn *decode)
@@ -2444,7 +2435,7 @@ static void gen_LOOPE(DisasContext *s, X86DecodedInsn *decode)
gen_update_cc_op(s);
gen_op_add_reg_im(s, s->aflag, R_ECX, -1);
gen_op_jz_ecx(s, not_taken);
gen_jcc(s, (JCC_Z << 1), taken); /* jz taken */
gen_conditional_jump_labels(s, decode->immediate, not_taken, taken);
}
@@ -2456,7 +2447,7 @@ static void gen_LOOPNE(DisasContext *s, X86DecodedInsn *decode)
gen_update_cc_op(s);
gen_op_add_reg_im(s, s->aflag, R_ECX, -1);
gen_op_jz_ecx(s, not_taken);
gen_jcc(s, (JCC_Z << 1) | 1, taken); /* jnz taken */
gen_conditional_jump_labels(s, decode->immediate, not_taken, taken);
}
@@ -2621,11 +2612,7 @@ static void gen_MOVq_dq(DisasContext *s, X86DecodedInsn *decode)
static void gen_MOVS(DisasContext *s, X86DecodedInsn *decode)
{
MemOp ot = decode->op[2].ot;
gen_repz(s, ot, gen_movs);
}
static void gen_MUL(DisasContext *s, X86DecodedInsn *decode)
@@ -2787,11 +2774,7 @@ static void gen_OUTS(DisasContext *s, X86DecodedInsn *decode)
}
translator_io_start(&s->base);
gen_repz(s, ot, gen_outs);
}
static void gen_PALIGNR(DisasContext *s, X86DecodedInsn *decode)
@@ -3872,16 +3855,12 @@ static void gen_SBB(DisasContext *s, X86DecodedInsn *decode)
static void gen_SCAS(DisasContext *s, X86DecodedInsn *decode)
{
MemOp ot = decode->op[2].ot;
gen_repz_nz(s, ot, gen_scas);
}
static void gen_SETcc(DisasContext *s, X86DecodedInsn *decode)
{
gen_setcc(s, decode->b & 0xf, s->T0);
}
static void gen_SFENCE(DisasContext *s, X86DecodedInsn *decode)
@@ -4082,11 +4061,7 @@ static void gen_STMXCSR(DisasContext *s, X86DecodedInsn *decode)
static void gen_STOS(DisasContext *s, X86DecodedInsn *decode)
{
MemOp ot = decode->op[1].ot;
gen_repz(s, ot, gen_stos);
}
static void gen_SUB(DisasContext *s, X86DecodedInsn *decode)


@@ -113,7 +113,6 @@ typedef struct DisasContext {
#endif
bool vex_w; /* used by AVX even on 32-bit processors */
bool jmp_opt; /* use direct block chaining for direct jumps */
bool cc_op_dirty;
CCOp cc_op; /* current CC operation */
@@ -506,17 +505,24 @@ static inline void gen_op_jmp_v(DisasContext *s, TCGv dest)
s->pc_save = -1;
}
static inline void gen_op_add_reg(DisasContext *s, MemOp size, int reg, TCGv val)
{
/* Using cpu_regs[reg] does not work for xH registers. */
assert(size >= MO_16);
if (size == MO_16) {
TCGv temp = tcg_temp_new();
tcg_gen_add_tl(temp, cpu_regs[reg], val);
gen_op_mov_reg_v(s, size, reg, temp);
} else {
tcg_gen_add_tl(cpu_regs[reg], cpu_regs[reg], val);
tcg_gen_ext_tl(cpu_regs[reg], cpu_regs[reg], size);
}
}
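The `MO_16` branch above exists because a 16-bit register write must merge into the low bits of the full register, while wider writes extend. A plain Rust model of those write semantics (illustrative only, not QEMU code):

```rust
// Model of x86 register writes by operand size (MemOp log2 values):
// MO_16 merges into the low 16 bits and must preserve the rest, which
// is why gen_op_add_reg above needs a temporary; MO_32 zero-extends;
// MO_64 replaces the whole register. Illustrative only, not QEMU code.
fn write_reg(reg: u64, val: u64, size_log2: u32) -> u64 {
    match size_log2 {
        1 => (reg & !0xffff) | (val & 0xffff), // MO_16
        2 => val & 0xffff_ffff,                // MO_32
        3 => val,                              // MO_64
        _ => panic!("unsupported operand size"),
    }
}

fn add_reg(reg: u64, delta: u64, size_log2: u32) -> u64 {
    write_reg(reg, reg.wrapping_add(delta), size_log2)
}

fn main() {
    // 16-bit: CX wraps within 16 bits; the upper bits of RCX survive.
    assert_eq!(add_reg(0x1234_ffff, 1, 1), 0x1234_0000);
    // 32-bit: a write to ECX zero-extends into RCX.
    assert_eq!(add_reg(0xdead_beef_ffff_ffff, 1, 2), 0);
}
```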
static inline
void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
{
gen_op_add_reg(s, size, reg, tcg_constant_tl(val));
}
static inline void gen_op_ld_v(DisasContext *s, int idx, TCGv t0, TCGv a0)
@@ -689,14 +695,6 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
gen_lea_v_seg(s, cpu_regs[R_EDI], R_ES, -1);
}
static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
{
if (size == MO_TL) {
@@ -726,6 +724,46 @@ static inline void gen_op_jnz_ecx(DisasContext *s, TCGLabel *label1)
gen_op_j_ecx(s, TCG_COND_NE, label1);
}
static void gen_set_hflag(DisasContext *s, uint32_t mask)
{
if ((s->flags & mask) == 0) {
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_ld_i32(t, tcg_env, offsetof(CPUX86State, hflags));
tcg_gen_ori_i32(t, t, mask);
tcg_gen_st_i32(t, tcg_env, offsetof(CPUX86State, hflags));
s->flags |= mask;
}
}
static void gen_reset_hflag(DisasContext *s, uint32_t mask)
{
if (s->flags & mask) {
TCGv_i32 t = tcg_temp_new_i32();
tcg_gen_ld_i32(t, tcg_env, offsetof(CPUX86State, hflags));
tcg_gen_andi_i32(t, t, ~mask);
tcg_gen_st_i32(t, tcg_env, offsetof(CPUX86State, hflags));
s->flags &= ~mask;
}
}
static void gen_set_eflags(DisasContext *s, target_ulong mask)
{
TCGv t = tcg_temp_new();
tcg_gen_ld_tl(t, tcg_env, offsetof(CPUX86State, eflags));
tcg_gen_ori_tl(t, t, mask);
tcg_gen_st_tl(t, tcg_env, offsetof(CPUX86State, eflags));
}
static void gen_reset_eflags(DisasContext *s, target_ulong mask)
{
TCGv t = tcg_temp_new();
tcg_gen_ld_tl(t, tcg_env, offsetof(CPUX86State, eflags));
tcg_gen_andi_tl(t, t, ~mask);
tcg_gen_st_tl(t, tcg_env, offsetof(CPUX86State, eflags));
}
static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
{
switch (ot) {
@@ -793,16 +831,13 @@ static bool gen_check_io(DisasContext *s, MemOp ot, TCGv_i32 port,
#endif
}
static void gen_movs(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_ESI(s);
gen_op_ld_v(s, ot, s->T0, s->A0);
gen_string_movl_A0_EDI(s);
gen_op_st_v(s, ot, s->T0, s->A0);
gen_op_add_reg(s, s->aflag, R_ESI, dshift);
gen_op_add_reg(s, s->aflag, R_EDI, dshift);
}
@@ -1148,7 +1183,7 @@ static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
return cc;
}
static void gen_setcc(DisasContext *s, int b, TCGv reg)
{
CCPrepare cc = gen_prepare_cc(s, b, reg);
@@ -1170,12 +1205,12 @@ static inline void gen_compute_eflags_c(DisasContext *s, TCGv reg)
static inline void gen_compute_eflags_c(DisasContext *s, TCGv reg)
{
gen_setcc(s, JCC_B << 1, reg);
}
/* generate a conditional jump to label 'l1' according to jump opcode
value 'b'. In the fast case, T0 is guaranteed not to be used. */
static inline void gen_jcc_noeob(DisasContext *s, int b, TCGLabel *l1)
{
CCPrepare cc = gen_prepare_cc(s, b, NULL);
@@ -1190,13 +1225,14 @@ static inline void gen_jcc1_noeob(DisasContext *s, int b, TCGLabel *l1)
value 'b'. In the fast case, T0 is guaranteed not to be used.
One or both of the branches will call gen_jmp_rel, so ensure
cc_op is clean. */
static inline void gen_jcc(DisasContext *s, int b, TCGLabel *l1)
{
CCPrepare cc = gen_prepare_cc(s, b, NULL);
/*
* Note that this must be _after_ gen_prepare_cc, because it can change
* the cc_op to CC_OP_EFLAGS (because it's CC_OP_DYNAMIC or because
* it's cheaper to just compute the flags)!
*/
gen_update_cc_op(s);
if (cc.use_reg2) {
@@ -1206,39 +1242,22 @@ static inline void gen_jcc1(DisasContext *s, int b, TCGLabel *l1)
}
}
static void gen_stos(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_EDI(s);
gen_op_st_v(s, ot, s->T0, s->A0);
gen_op_add_reg(s, s->aflag, R_EDI, dshift);
}
static void gen_lods(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_ESI(s);
gen_op_ld_v(s, ot, s->T0, s->A0);
gen_op_mov_reg_v(s, ot, R_EAX, s->T0);
gen_op_add_reg(s, s->aflag, R_ESI, dshift);
}
static void gen_scas(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_EDI(s);
gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1247,13 +1266,11 @@ static void gen_scas(DisasContext *s, MemOp ot)
tcg_gen_sub_tl(cpu_cc_dst, s->T0, s->T1);
set_cc_op(s, CC_OP_SUBB + ot);
gen_op_add_reg(s, s->aflag, R_EDI, dshift);
}
static void gen_cmps(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_EDI(s);
gen_op_ld_v(s, ot, s->T1, s->A0);
gen_string_movl_A0_ESI(s);
@@ -1263,7 +1280,6 @@ static void gen_cmps(DisasContext *s, MemOp ot)
tcg_gen_sub_tl(cpu_cc_dst, s->T0, s->T1);
set_cc_op(s, CC_OP_SUBB + ot);
gen_op_add_reg(s, s->aflag, R_ESI, dshift);
gen_op_add_reg(s, s->aflag, R_EDI, dshift);
}
@@ -1282,7 +1298,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
}
}
static void gen_ins(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_EDI(s);
/* Note: we must do this dummy write first to be restartable in
@@ -1293,11 +1309,11 @@ static void gen_ins(DisasContext *s, MemOp ot)
tcg_gen_andi_i32(s->tmp2_i32, s->tmp2_i32, 0xffff);
gen_helper_in_func(ot, s->T0, s->tmp2_i32);
gen_op_st_v(s, ot, s->T0, s->A0);
gen_op_add_reg(s, s->aflag, R_EDI, dshift);
gen_bpt_io(s, s->tmp2_i32, ot);
}
static void gen_outs(DisasContext *s, MemOp ot, TCGv dshift)
{
gen_string_movl_A0_ESI(s);
gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1306,47 +1322,154 @@ static void gen_outs(DisasContext *s, MemOp ot)
tcg_gen_andi_i32(s->tmp2_i32, s->tmp2_i32, 0xffff);
tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T0);
gen_helper_out_func(ot, s->tmp2_i32, s->tmp3_i32);
gen_op_add_reg(s, s->aflag, R_ESI, dshift);
gen_bpt_io(s, s->tmp2_i32, ot);
}
#define REP_MAX 65535
static void do_gen_rep(DisasContext *s, MemOp ot, TCGv dshift,
void (*fn)(DisasContext *s, MemOp ot, TCGv dshift),
bool is_repz_nz)
{
TCGLabel *last = gen_new_label();
TCGLabel *loop = gen_new_label();
TCGLabel *done = gen_new_label();
target_ulong cx_mask = MAKE_64BIT_MASK(0, 8 << s->aflag);
TCGv cx_next = tcg_temp_new();
/*
* Check if we must translate a single iteration only. Normally, HF_RF_MASK
* would also limit translation blocks to one instruction, so that gen_eob
* can reset the flag; here however RF is set throughout the repetition, so
* we can plow through until CX/ECX/RCX is zero.
*/
bool can_loop =
(!(tb_cflags(s->base.tb) & (CF_USE_ICOUNT | CF_SINGLE_STEP))
&& !(s->flags & (HF_TF_MASK | HF_INHIBIT_IRQ_MASK)));
bool had_rf = s->flags & HF_RF_MASK;
/*
* Even if EFLAGS.RF was set on entry (such as if we're on the second or
* later iteration and an exception or interrupt happened), force gen_eob()
* not to clear the flag. We do that ourselves after the last iteration.
*/
s->flags &= ~HF_RF_MASK;
/*
* For CMPS/SCAS, the CC_OP after a memory fault could come from either
* the previous instruction or the string instruction; but because we
* arrange to keep CC_OP up to date all the time, just mark the whole
* insn as CC_OP_DYNAMIC.
*
* It's not a problem to do this even for instructions that do not
* modify the flags, so do it unconditionally.
*/
gen_update_cc_op(s);
tcg_set_insn_start_param(s->base.insn_start, 1, CC_OP_DYNAMIC);
/* Any iteration at all? */
tcg_gen_brcondi_tl(TCG_COND_TSTEQ, cpu_regs[R_ECX], cx_mask, done);
/*
* From now on we operate on the value of CX/ECX/RCX that will be written
* back, which is stored in cx_next. There can be no carry, so we can zero
* extend here if needed and not do any expensive deposit operations later.
*/
tcg_gen_subi_tl(cx_next, cpu_regs[R_ECX], 1);
#ifdef TARGET_X86_64
if (s->aflag == MO_32) {
tcg_gen_ext32u_tl(cx_next, cx_next);
cx_mask = ~0;
}
#endif
    /*
     * The last iteration is handled outside the loop, so that cx_next
     * can never underflow.
     */
    if (can_loop) {
        tcg_gen_brcondi_tl(TCG_COND_TSTEQ, cx_next, cx_mask, last);
    }

    gen_set_label(loop);
    fn(s, ot, dshift);
    tcg_gen_mov_tl(cpu_regs[R_ECX], cx_next);
    gen_update_cc_op(s);

    /* Leave if REP condition fails. */
    if (is_repz_nz) {
        int nz = (s->prefix & PREFIX_REPNZ) ? 1 : 0;
        gen_jcc_noeob(s, (JCC_Z << 1) | (nz ^ 1), done);
        /* gen_prepare_eflags_z never changes cc_op. */
        assert(!s->cc_op_dirty);
    }

    if (can_loop) {
        tcg_gen_subi_tl(cx_next, cx_next, 1);
        tcg_gen_brcondi_tl(TCG_COND_TSTNE, cx_next, REP_MAX, loop);
        tcg_gen_brcondi_tl(TCG_COND_TSTEQ, cx_next, cx_mask, last);
    }

    /*
     * Traps or interrupts set RF_MASK if they happen after any iteration
     * but the last. Set it here before giving the main loop a chance to
     * execute. (For faults, seg_helper.c sets the flag as usual).
     */
    if (!had_rf) {
        gen_set_eflags(s, RF_MASK);
    }

    /* Go to the main loop but reenter the same instruction. */
    gen_jmp_rel_csize(s, -cur_insn_len(s), 0);

    if (can_loop) {
        /*
         * The last iteration needs no conditional jump, even if is_repz_nz,
         * because the repeats are ending anyway.
         */
        gen_set_label(last);
        set_cc_op(s, CC_OP_DYNAMIC);
        fn(s, ot, dshift);
        tcg_gen_mov_tl(cpu_regs[R_ECX], cx_next);
        gen_update_cc_op(s);
    }

    /* CX/ECX/RCX is zero, or REPZ/REPNZ broke the repetition. */
    gen_set_label(done);
    set_cc_op(s, CC_OP_DYNAMIC);
    if (had_rf) {
        gen_reset_eflags(s, RF_MASK);
    }
    gen_jmp_rel_csize(s, 0, 1);
}

static void do_gen_string(DisasContext *s, MemOp ot,
                          void (*fn)(DisasContext *s, MemOp ot, TCGv dshift),
                          bool is_repz_nz)
{
    TCGv dshift = tcg_temp_new();

    tcg_gen_ld32s_tl(dshift, tcg_env, offsetof(CPUX86State, df));
    tcg_gen_shli_tl(dshift, dshift, ot);

    if (s->prefix & (PREFIX_REPZ | PREFIX_REPNZ)) {
        do_gen_rep(s, ot, dshift, fn, is_repz_nz);
    } else {
        fn(s, ot, dshift);
    }
}

static void gen_repz(DisasContext *s, MemOp ot,
                     void (*fn)(DisasContext *s, MemOp ot, TCGv dshift))
{
    do_gen_string(s, ot, fn, false);
}

static void gen_repz_nz(DisasContext *s, MemOp ot,
                        void (*fn)(DisasContext *s, MemOp ot, TCGv dshift))
{
    do_gen_string(s, ot, fn, true);
}
static void gen_helper_fp_arith_ST0_FT0(int op)
@@ -1847,15 +1970,7 @@ static void gen_conditional_jump_labels(DisasContext *s, target_long diff,
    gen_jmp_rel(s, s->dflag, diff, 0);
}

static void gen_jcc(DisasContext *s, int b, int diff)
{
    TCGLabel *l1 = gen_new_label();

    gen_jcc1(s, b, l1);
    gen_conditional_jump_labels(s, diff, NULL, l1);
}

static void gen_cmovcc(DisasContext *s, int b, TCGv dest, TCGv src)
{
    CCPrepare cc = gen_prepare_cc(s, b, NULL);
@@ -2103,46 +2218,6 @@ static void gen_interrupt(DisasContext *s, uint8_t intno)
    s->base.is_jmp = DISAS_NORETURN;
}

static void gen_set_hflag(DisasContext *s, uint32_t mask)
{
    if ((s->flags & mask) == 0) {
        TCGv_i32 t = tcg_temp_new_i32();
        tcg_gen_ld_i32(t, tcg_env, offsetof(CPUX86State, hflags));
        tcg_gen_ori_i32(t, t, mask);
        tcg_gen_st_i32(t, tcg_env, offsetof(CPUX86State, hflags));
        s->flags |= mask;
    }
}

static void gen_reset_hflag(DisasContext *s, uint32_t mask)
{
    if (s->flags & mask) {
        TCGv_i32 t = tcg_temp_new_i32();
        tcg_gen_ld_i32(t, tcg_env, offsetof(CPUX86State, hflags));
        tcg_gen_andi_i32(t, t, ~mask);
        tcg_gen_st_i32(t, tcg_env, offsetof(CPUX86State, hflags));
        s->flags &= ~mask;
    }
}

static void gen_set_eflags(DisasContext *s, target_ulong mask)
{
    TCGv t = tcg_temp_new();

    tcg_gen_ld_tl(t, tcg_env, offsetof(CPUX86State, eflags));
    tcg_gen_ori_tl(t, t, mask);
    tcg_gen_st_tl(t, tcg_env, offsetof(CPUX86State, eflags));
}

static void gen_reset_eflags(DisasContext *s, target_ulong mask)
{
    TCGv t = tcg_temp_new();

    tcg_gen_ld_tl(t, tcg_env, offsetof(CPUX86State, eflags));
    tcg_gen_andi_tl(t, t, ~mask);
    tcg_gen_st_tl(t, tcg_env, offsetof(CPUX86State, eflags));
}
/* Clear BND registers during legacy branches. */
static void gen_bnd_jmp(DisasContext *s)
{
@@ -2177,7 +2252,7 @@ gen_eob(DisasContext *s, int mode)
        gen_set_hflag(s, HF_INHIBIT_IRQ_MASK);
    }

    if (s->flags & HF_RF_MASK) {
        gen_reset_eflags(s, RF_MASK);
    }
    if (mode == DISAS_EOB_RECHECK_TF) {
@@ -2864,7 +2939,7 @@ static void gen_x87(DisasContext *s, X86DecodedInsn *decode)
                }
                op1 = fcmov_cc[op & 3] | (((op >> 3) & 1) ^ 1);
                l1 = gen_new_label();
                gen_jcc_noeob(s, op1, l1);
                gen_helper_fmov_ST0_STN(tcg_env,
                                        tcg_constant_i32(opreg));
                gen_set_label(l1);
@@ -3664,20 +3739,6 @@ static void i386_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cpu)
    dc->cpuid_xsave_features = env->features[FEAT_XSAVE];
    dc->jmp_opt = !((cflags & CF_NO_GOTO_TB) ||
                    (flags & (HF_RF_MASK | HF_TF_MASK | HF_INHIBIT_IRQ_MASK)));

    /*
     * If jmp_opt, we want to handle each string instruction individually.
     * For icount also disable repz optimization so that each iteration
     * is accounted separately.
     *
     * FIXME: this is messy; it makes REP string instructions a lot less
     * efficient than they should be and it gets in the way of correct
     * handling of RF (interrupts or traps arriving after any iteration
     * of a repeated string instruction but the last should set RF to 1).
     * Perhaps it would be more efficient if REP string instructions were
     * always at the beginning of the TB, or even their own TB? That
     * would even allow accounting up to 64k iterations at once for icount.
     */
    dc->repz_opt = !dc->jmp_opt && !(cflags & CF_USE_ICOUNT);

    dc->T0 = tcg_temp_new();
    dc->T1 = tcg_temp_new();