Merge tag 'hw-misc-20241231' of https://github.com/philmd/qemu into staging

Misc HW patches queue

- Allow more than 4 legacy IRQs on Generic PCI Express Bridge (Alexander)
- Add MMIO-based Inter-VM shared memory device 'ivshmem-flat' (Gustavo)
- Use UHCI register definitions (Guenter)
- Propagate CPU endianness to microblaze_load_kernel (Philippe)
- Mark x86/TriCore devices as little-endian, OpenRISC/SPARC as big (Philippe)
- Don't set callback_opaque NULL in fw_cfg_modify_bytes_read (Shameer)
- Simplify non-KVM checks on AMD IOMMU XTSup feature (Philippe)
- Trivial cleanups on xilinx_ethlite, vmcoreinfo, qxl (Philippe, Hyman)
- Move USB-HCD-XHCI msi/msix properties from NEC to superclass (Phil)
- Redesign main thread event handling to accommodate macOS Cocoa (Phil)
- Introduce ParavirtualizedGraphics.Framework support 'apple-gfx' (Phil)
- Pad short Ethernet frames on macOS vmnet (William)

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmd0Ul0ACgkQ4+MsLN6t
# wN7sCA/9HFWahKYW+6Y+gHfLPvJzkIqC5mwfQAUY7GsrNVFdIpUjK9ln9xUEqCQz
# DkVxoZQcP++d8cnnl17wXHsRcavyDDadGU5/161eNC7fbKbLRAslObz/dtExxDn2
# sctx9HMcbLl1UMFPqi/Pbt8NEZr0iOLzDDl+nRuOK8QRFnd2zGm1lF1oHeyja3t1
# flnQKI9YD0U/+0RVNR2FOpUam2Fu1EuQEPp0jMwkmcoyoNLwCXrP9XyRybVZnzgM
# cFm9fYbVlwjsVia+Bsk3CmHX5Gna/1bS3CL8Y9gUScYYwYU5VDAA8Fvv4gPsa4+u
# WSyttL2qCFdgF75S5FoAvEQzYFBcw25eFf8jJhbEn4I6MuQew8lww5OZEyvE8rag
# 2hg3nc4W0x76mLunqrNm+h+Z3vqd/amFcd9YNZjpzxQK//TwvOAQTWi31VtWa4OF
# F1qdv78tQKkRY7noq8WkcL/io6D7iE/BMx/XIOF8uPf8BLIBMvPDnDABjaB/yLkS
# Q/e+/monxkhknDY6K9xkVei7rn6c0LkuLzKxVzEzVKPVzM8N0JAl/1KaNVO8fxjJ
# kLvfGP/RdYOZqG4dNi8W3PhV/+UZz1FS3L1MpI4NXQ59br57BbVQP9ARGO6WpPWn
# O9zIJOAqdzcWU0aULIsvQA3nC1iJnFHEovq0bl8qBbY51k26Lg0=
# =AL3L
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 31 Dec 2024 15:21:49 EST
# gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE

* tag 'hw-misc-20241231' of https://github.com/philmd/qemu: (29 commits)
  hw/display/qxl: Do not use C99 // comments
  net/vmnet: Pad short Ethernet frames
  MAINTAINERS: Add myself as maintainer for apple-gfx, reviewer for HVF
  hw/display/apple-gfx: Adds configurable mode list
  hw/display/apple-gfx: Adds PCI implementation
  hw/display/apple-gfx: Introduce ParavirtualizedGraphics.Framework support
  ui & main loop: Redesign of system-specific main thread event handling
  hw/usb/hcd-xhci: Unimplemented/guest error logging for port MMIO
  hw/usb/hcd-xhci-pci: Move msi/msix properties from NEC to superclass
  hw/block/virtio-blk: Replaces request free function with g_free
  hw/i386/amd_iommu: Simplify non-KVM checks on XTSup feature
  hw/misc/vmcoreinfo: Rename opaque pointer as 'opaque'
  hw/misc/vmcoreinfo: Declare QOM type using DEFINE_TYPES macro
  fw_cfg: Don't set callback_opaque NULL in fw_cfg_modify_bytes_read()
  hw/net/xilinx_ethlite: Rename rxbuf -> port_index
  hw/net/xilinx_ethlite: Correct maximum RX buffer size
  hw/net/xilinx_ethlite: Update QOM style
  hw/net/xilinx_ethlite: Remove unuseful debug logs
  hw/net/xilinx_ethlite: Convert some debug logs to trace events
  hw/sparc: Mark devices as big-endian
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
commit 8b70d7f207 by Stefan Hajnoczi <stefanha@redhat.com>, 2025-01-01 15:17:07 -05:00
61 changed files with 2413 additions and 256 deletions

MAINTAINERS

@@ -509,6 +509,7 @@ F: target/arm/hvf/
 X86 HVF CPUs
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <rbolshakov@ddn.com>
+R: Phil Dennis-Jordan <phil@philjordan.eu>
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: target/i386/hvf/
@@ -516,6 +517,7 @@ F: target/i386/hvf/
 HVF
 M: Cameron Esfahani <dirty@apple.com>
 M: Roman Bolshakov <rbolshakov@ddn.com>
+R: Phil Dennis-Jordan <phil@philjordan.eu>
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: accel/hvf/
@@ -2631,6 +2633,11 @@ F: hw/display/edid*
 F: include/hw/display/edid.h
 F: qemu-edid.c
 
+macOS PV Graphics (apple-gfx)
+M: Phil Dennis-Jordan <phil@philjordan.eu>
+S: Maintained
+F: hw/display/apple-gfx*
+
 PIIX4 South Bridge (i82371AB)
 M: Hervé Poussineau <hpoussin@reactos.org>
 M: Philippe Mathieu-Daudé <philmd@linaro.org>

docs/system/device-emulation.rst

@@ -86,6 +86,7 @@ Emulated Devices
    devices/ccid.rst
    devices/cxl.rst
    devices/ivshmem.rst
+   devices/ivshmem-flat.rst
    devices/keyboard.rst
    devices/net.rst
    devices/nvme.rst

docs/system/devices/ivshmem-flat.rst (new file)

@@ -0,0 +1,33 @@
Inter-VM Shared Memory Flat Device
----------------------------------

The ivshmem-flat device is meant for machines that lack a PCI bus, which makes
them unsuitable for the traditional ivshmem device modeled as a PCI device.
Machines built around a Cortex-M MCU are good candidates for the ivshmem-flat
device. Because the flat version maps the control and status registers directly
into memory, it requires only a tiny "device driver" to interact with other
VMs, which is useful for RTOSes like Zephyr that usually run on
resource-constrained targets.

Like the ivshmem device, the ivshmem-flat device supports both peer
notification via HW interrupts and Inter-VM shared memory. This allows the
device to be used together with the traditional ivshmem, enabling communication
between, for instance, an aarch64 VM (using the traditional ivshmem device and
running Linux) and an arm VM (using the ivshmem-flat device and running
Zephyr).

The ivshmem-flat device does not support the ``memdev`` option (see
ivshmem.rst for details). It relies on the ivshmem server to create and
distribute the shared memory file descriptor and the eventfd(s) used to notify
(interrupt) the peers. An ivshmem server must therefore be up and running
before the device can be created.

Although ivshmem-flat supports both peer notification (interrupts) and shared
memory, the interrupt mechanism is optional. If no input IRQ is specified for
the device, it is disabled, preventing the VM from notifying or being notified
by other VMs (a warning is printed to inform the user that the IRQ mechanism
is disabled). The shared memory region is always present.

The offsets of the MMRs (the INTRMASK, INTRSTATUS, IVPOSITION, and DOORBELL
registers) within the MMR region, and their functions, follow the ivshmem
spec, so they work exactly as in the ivshmem PCI device (see
./specs/ivshmem-spec.txt).
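For illustration, a typical invocation might look like the sketch below. The
exact property names (``x-irq-qompath``, ``x-bus-address-iomem``,
``x-bus-address-shmem``) and the example addresses/QOM path are assumptions
for a hypothetical Cortex-M setup and must be checked against the device's
actual properties and the target machine's memory map:

```shell
# Start the ivshmem server: 64 KiB of shared memory, 1 eventfd per peer
# (the socket defaults to /tmp/ivshmem_socket).
ivshmem-server -F -l 64K -n 1

# Attach an ivshmem-flat device to a PCI-less arm machine. Addresses and
# the IRQ QOM path below are illustrative only.
qemu-system-arm -M lm3s6965evb \
  -chardev socket,path=/tmp/ivshmem_socket,id=ivshm_flat \
  -device ivshmem-flat,chardev=ivshm_flat,\
x-irq-qompath='/machine/unattached/device[1]/nvic/unnamed-gpio-in[0]',\
x-bus-address-iomem=0x400FF000,x-bus-address-shmem=0x40100000
```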

hw/arm/sbsa-ref.c

@@ -673,7 +673,7 @@ static void create_pcie(SBSAMachineState *sms)
     /* Map IO port space */
     sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_pio);
-    for (i = 0; i < GPEX_NUM_IRQS; i++) {
+    for (i = 0; i < PCI_NUM_PINS; i++) {
         sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
                            qdev_get_gpio_in(sms->gic, irq + i));
         gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);

hw/arm/virt.c

@@ -1547,7 +1547,7 @@ static void create_pcie(VirtMachineState *vms)
     /* Map IO port space */
     sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_pio);
-    for (i = 0; i < GPEX_NUM_IRQS; i++) {
+    for (i = 0; i < PCI_NUM_PINS; i++) {
         sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
                            qdev_get_gpio_in(vms->gic, irq + i));
         gpex_set_irq_num(GPEX_HOST(dev), i, irq + i);

hw/block/virtio-blk.c

@@ -50,11 +50,6 @@ static void virtio_blk_init_request(VirtIOBlock *s, VirtQueue *vq,
     req->mr_next = NULL;
 }
 
-static void virtio_blk_free_request(VirtIOBlockReq *req)
-{
-    g_free(req);
-}
-
 static void virtio_blk_req_complete(VirtIOBlockReq *req, unsigned char status)
 {
     VirtIOBlock *s = req->dev;
@@ -93,7 +88,7 @@ static int virtio_blk_handle_rw_error(VirtIOBlockReq *req, int error,
         if (acct_failed) {
             block_acct_failed(blk_get_stats(s->blk), &req->acct);
         }
-        virtio_blk_free_request(req);
+        g_free(req);
     }
 
     blk_error_action(s->blk, action, is_read, error);
@@ -136,7 +131,7 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
         virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
         block_acct_done(blk_get_stats(s->blk), &req->acct);
-        virtio_blk_free_request(req);
+        g_free(req);
     }
 }
@@ -151,7 +146,7 @@ static void virtio_blk_flush_complete(void *opaque, int ret)
     virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
     block_acct_done(blk_get_stats(s->blk), &req->acct);
-    virtio_blk_free_request(req);
+    g_free(req);
 }
 
 static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
@@ -169,7 +164,7 @@ static void virtio_blk_discard_write_zeroes_complete(void *opaque, int ret)
     if (is_write_zeroes) {
         block_acct_done(blk_get_stats(s->blk), &req->acct);
     }
-    virtio_blk_free_request(req);
+    g_free(req);
 }
 
 static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq)
@@ -214,7 +209,7 @@ static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
 fail:
     virtio_blk_req_complete(req, status);
-    virtio_blk_free_request(req);
+    g_free(req);
 }
 
 static inline void submit_requests(VirtIOBlock *s, MultiReqBuffer *mrb,
@@ -612,7 +607,7 @@ static void virtio_blk_zone_report_complete(void *opaque, int ret)
 out:
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
     g_free(data->zone_report_data.zones);
     g_free(data);
 }
@@ -661,7 +656,7 @@ static void virtio_blk_handle_zone_report(VirtIOBlockReq *req,
     return;
 out:
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
 }
 
 static void virtio_blk_zone_mgmt_complete(void *opaque, int ret)
@@ -677,7 +672,7 @@ static void virtio_blk_zone_mgmt_complete(void *opaque, int ret)
     }
 
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
 }
 
 static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op)
@@ -719,7 +714,7 @@ static int virtio_blk_handle_zone_mgmt(VirtIOBlockReq *req, BlockZoneOp op)
     return 0;
 out:
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
     return err_status;
 }
@@ -750,7 +745,7 @@ static void virtio_blk_zone_append_complete(void *opaque, int ret)
 out:
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
     g_free(data);
 }
@@ -788,7 +783,7 @@ static int virtio_blk_handle_zone_append(VirtIOBlockReq *req,
 out:
     virtio_blk_req_complete(req, err_status);
-    virtio_blk_free_request(req);
+    g_free(req);
     return err_status;
 }
@@ -855,7 +850,7 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
         virtio_blk_req_complete(req, VIRTIO_BLK_S_IOERR);
         block_acct_invalid(blk_get_stats(s->blk),
                            is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ);
-        virtio_blk_free_request(req);
+        g_free(req);
         return 0;
     }
@@ -911,7 +906,7 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
                                 VIRTIO_BLK_ID_BYTES));
         iov_from_buf(in_iov, in_num, 0, serial, size);
         virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
-        virtio_blk_free_request(req);
+        g_free(req);
         break;
     }
     case VIRTIO_BLK_T_ZONE_APPEND & ~VIRTIO_BLK_T_OUT:
@@ -943,7 +938,7 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
         if (unlikely(!(type & VIRTIO_BLK_T_OUT) ||
                      out_len > sizeof(dwz_hdr))) {
             virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
-            virtio_blk_free_request(req);
+            g_free(req);
             return 0;
         }
@@ -960,14 +955,14 @@ static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
                                                     is_write_zeroes);
         if (err_status != VIRTIO_BLK_S_OK) {
             virtio_blk_req_complete(req, err_status);
-            virtio_blk_free_request(req);
+            g_free(req);
         }
 
         break;
     }
     default:
         virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
-        virtio_blk_free_request(req);
+        g_free(req);
     }
 
     return 0;
 }
@@ -988,7 +983,7 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     while ((req = virtio_blk_get_request(s, vq))) {
         if (virtio_blk_handle_request(req, &mrb)) {
             virtqueue_detach_element(req->vq, &req->elem, 0);
-            virtio_blk_free_request(req);
+            g_free(req);
             break;
         }
     }
@@ -1038,7 +1033,7 @@ static void virtio_blk_dma_restart_bh(void *opaque)
         while (req) {
             next = req->next;
             virtqueue_detach_element(req->vq, &req->elem, 0);
-            virtio_blk_free_request(req);
+            g_free(req);
             req = next;
         }
         break;
@@ -1121,7 +1116,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
         /* No other threads can access req->vq here */
         virtqueue_detach_element(req->vq, &req->elem, 0);
-        virtio_blk_free_request(req);
+        g_free(req);
     }
 }

hw/display/Kconfig

@@ -140,3 +140,16 @@ config XLNX_DISPLAYPORT
 
 config DM163
     bool
+
+config MAC_PVG
+    bool
+    default y
+
+config MAC_PVG_MMIO
+    bool
+    depends on MAC_PVG && AARCH64
+
+config MAC_PVG_PCI
+    bool
+    depends on MAC_PVG && PCI
+    default y if PCI_DEVICES

hw/display/apple-gfx-mmio.m (new file, 285 lines)

@@ -0,0 +1,285 @@
/*
* QEMU Apple ParavirtualizedGraphics.framework device, MMIO (arm64) variant
*
* Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*
* ParavirtualizedGraphics.framework is a set of libraries that macOS provides
* which implements 3d graphics passthrough to the host as well as a
* proprietary guest communication channel to drive it. This device model
* implements support to drive that library from within QEMU as an MMIO-based
* system device for macOS on arm64 VMs.
*/
#include "qemu/osdep.h"
#include "qemu/log.h"
#include "block/aio-wait.h"
#include "hw/sysbus.h"
#include "hw/irq.h"
#include "apple-gfx.h"
#include "trace.h"
#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
OBJECT_DECLARE_SIMPLE_TYPE(AppleGFXMMIOState, APPLE_GFX_MMIO)
/*
* ParavirtualizedGraphics.Framework only ships header files for the PCI
* variant which does not include IOSFC descriptors and host devices. We add
* their definitions here so that we can also work with the ARM version.
*/
typedef bool(^IOSFCRaiseInterrupt)(uint32_t vector);
typedef bool(^IOSFCUnmapMemory)(void *, void *, void *, void *, void *, void *);
typedef bool(^IOSFCMapMemory)(uint64_t phys, uint64_t len, bool ro, void **va,
void *, void *);
@interface PGDeviceDescriptor (IOSurfaceMapper)
@property (readwrite, nonatomic) bool usingIOSurfaceMapper;
@end
@interface PGIOSurfaceHostDeviceDescriptor : NSObject
-(PGIOSurfaceHostDeviceDescriptor *)init;
@property (readwrite, nonatomic, copy, nullable) IOSFCMapMemory mapMemory;
@property (readwrite, nonatomic, copy, nullable) IOSFCUnmapMemory unmapMemory;
@property (readwrite, nonatomic, copy, nullable) IOSFCRaiseInterrupt raiseInterrupt;
@end
@interface PGIOSurfaceHostDevice : NSObject
-(instancetype)initWithDescriptor:(PGIOSurfaceHostDeviceDescriptor *)desc;
-(uint32_t)mmioReadAtOffset:(size_t)offset;
-(void)mmioWriteAtOffset:(size_t)offset value:(uint32_t)value;
@end
struct AppleGFXMapSurfaceMemoryJob;
struct AppleGFXMMIOState {
SysBusDevice parent_obj;
AppleGFXState common;
qemu_irq irq_gfx;
qemu_irq irq_iosfc;
MemoryRegion iomem_iosfc;
PGIOSurfaceHostDevice *pgiosfc;
};
typedef struct AppleGFXMMIOJob {
AppleGFXMMIOState *state;
uint64_t offset;
uint64_t value;
bool completed;
} AppleGFXMMIOJob;
static void iosfc_do_read(void *opaque)
{
AppleGFXMMIOJob *job = opaque;
job->value = [job->state->pgiosfc mmioReadAtOffset:job->offset];
qatomic_set(&job->completed, true);
aio_wait_kick();
}
static uint64_t iosfc_read(void *opaque, hwaddr offset, unsigned size)
{
AppleGFXMMIOJob job = {
.state = opaque,
.offset = offset,
.completed = false,
};
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async_f(queue, &job, iosfc_do_read);
AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
trace_apple_gfx_mmio_iosfc_read(offset, job.value);
return job.value;
}
static void iosfc_do_write(void *opaque)
{
AppleGFXMMIOJob *job = opaque;
[job->state->pgiosfc mmioWriteAtOffset:job->offset value:job->value];
qatomic_set(&job->completed, true);
aio_wait_kick();
}
static void iosfc_write(void *opaque, hwaddr offset, uint64_t val,
unsigned size)
{
AppleGFXMMIOJob job = {
.state = opaque,
.offset = offset,
.value = val,
.completed = false,
};
dispatch_queue_t queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async_f(queue, &job, iosfc_do_write);
AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
trace_apple_gfx_mmio_iosfc_write(offset, val);
}
static const MemoryRegionOps apple_iosfc_ops = {
.read = iosfc_read,
.write = iosfc_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.valid = {
.min_access_size = 4,
.max_access_size = 8,
},
.impl = {
.min_access_size = 4,
.max_access_size = 8,
},
};
static void raise_irq_bh(void *opaque)
{
qemu_irq *irq = opaque;
qemu_irq_pulse(*irq);
}
static void *apple_gfx_mmio_map_surface_memory(uint64_t guest_physical_address,
uint64_t length, bool read_only)
{
void *mem;
MemoryRegion *region = NULL;
RCU_READ_LOCK_GUARD();
mem = apple_gfx_host_ptr_for_gpa_range(guest_physical_address,
length, read_only, &region);
if (mem) {
memory_region_ref(region);
}
return mem;
}
static bool apple_gfx_mmio_unmap_surface_memory(void *ptr)
{
MemoryRegion *region;
ram_addr_t offset = 0;
RCU_READ_LOCK_GUARD();
region = memory_region_from_host(ptr, &offset);
if (!region) {
qemu_log_mask(LOG_GUEST_ERROR,
"%s: memory at %p to be unmapped not found.\n",
__func__, ptr);
return false;
}
trace_apple_gfx_iosfc_unmap_memory_region(ptr, region);
memory_region_unref(region);
return true;
}
static PGIOSurfaceHostDevice *apple_gfx_prepare_iosurface_host_device(
AppleGFXMMIOState *s)
{
PGIOSurfaceHostDeviceDescriptor *iosfc_desc =
[PGIOSurfaceHostDeviceDescriptor new];
PGIOSurfaceHostDevice *iosfc_host_dev;
iosfc_desc.mapMemory =
^bool(uint64_t phys, uint64_t len, bool ro, void **va, void *e, void *f) {
*va = apple_gfx_mmio_map_surface_memory(phys, len, ro);
trace_apple_gfx_iosfc_map_memory(phys, len, ro, va, e, f, *va);
return *va != NULL;
};
iosfc_desc.unmapMemory =
^bool(void *va, void *b, void *c, void *d, void *e, void *f) {
return apple_gfx_mmio_unmap_surface_memory(va);
};
iosfc_desc.raiseInterrupt = ^bool(uint32_t vector) {
trace_apple_gfx_iosfc_raise_irq(vector);
aio_bh_schedule_oneshot(qemu_get_aio_context(),
raise_irq_bh, &s->irq_iosfc);
return true;
};
iosfc_host_dev =
[[PGIOSurfaceHostDevice alloc] initWithDescriptor:iosfc_desc];
[iosfc_desc release];
return iosfc_host_dev;
}
static void apple_gfx_mmio_realize(DeviceState *dev, Error **errp)
{
@autoreleasepool {
AppleGFXMMIOState *s = APPLE_GFX_MMIO(dev);
PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
desc.raiseInterrupt = ^(uint32_t vector) {
trace_apple_gfx_raise_irq(vector);
aio_bh_schedule_oneshot(qemu_get_aio_context(),
raise_irq_bh, &s->irq_gfx);
};
desc.usingIOSurfaceMapper = true;
s->pgiosfc = apple_gfx_prepare_iosurface_host_device(s);
if (!apple_gfx_common_realize(&s->common, dev, desc, errp)) {
[s->pgiosfc release];
s->pgiosfc = nil;
}
[desc release];
desc = nil;
}
}
static void apple_gfx_mmio_init(Object *obj)
{
AppleGFXMMIOState *s = APPLE_GFX_MMIO(obj);
apple_gfx_common_init(obj, &s->common, TYPE_APPLE_GFX_MMIO);
sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->common.iomem_gfx);
memory_region_init_io(&s->iomem_iosfc, obj, &apple_iosfc_ops, s,
TYPE_APPLE_GFX_MMIO, 0x10000);
sysbus_init_mmio(SYS_BUS_DEVICE(s), &s->iomem_iosfc);
sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq_gfx);
sysbus_init_irq(SYS_BUS_DEVICE(s), &s->irq_iosfc);
}
static void apple_gfx_mmio_reset(Object *obj, ResetType type)
{
AppleGFXMMIOState *s = APPLE_GFX_MMIO(obj);
[s->common.pgdev reset];
}
static const Property apple_gfx_mmio_properties[] = {
DEFINE_PROP_ARRAY("display-modes", AppleGFXMMIOState,
common.num_display_modes, common.display_modes,
qdev_prop_apple_gfx_display_mode, AppleGFXDisplayMode),
};
static void apple_gfx_mmio_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
ResettableClass *rc = RESETTABLE_CLASS(klass);
rc->phases.hold = apple_gfx_mmio_reset;
dc->hotpluggable = false;
dc->realize = apple_gfx_mmio_realize;
device_class_set_props(dc, apple_gfx_mmio_properties);
}
static const TypeInfo apple_gfx_mmio_types[] = {
{
.name = TYPE_APPLE_GFX_MMIO,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(AppleGFXMMIOState),
.class_init = apple_gfx_mmio_class_init,
.instance_init = apple_gfx_mmio_init,
}
};
DEFINE_TYPES(apple_gfx_mmio_types)

hw/display/apple-gfx-pci.m (new file, 157 lines)

@@ -0,0 +1,157 @@
/*
* QEMU Apple ParavirtualizedGraphics.framework device, PCI variant
*
* Copyright © 2023-2024 Phil Dennis-Jordan
*
* SPDX-License-Identifier: GPL-2.0-or-later
*
* ParavirtualizedGraphics.framework is a set of libraries that macOS provides
* which implements 3d graphics passthrough to the host as well as a
* proprietary guest communication channel to drive it. This device model
* implements support to drive that library from within QEMU as a PCI device
* aimed primarily at x86-64 macOS VMs.
*/
#include "qemu/osdep.h"
#include "hw/pci/pci_device.h"
#include "hw/pci/msi.h"
#include "apple-gfx.h"
#include "trace.h"
#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
OBJECT_DECLARE_SIMPLE_TYPE(AppleGFXPCIState, APPLE_GFX_PCI)
struct AppleGFXPCIState {
PCIDevice parent_obj;
AppleGFXState common;
};
static const char *apple_gfx_pci_option_rom_path = NULL;
static void apple_gfx_init_option_rom_path(void)
{
NSURL *option_rom_url = PGCopyOptionROMURL();
const char *option_rom_path = option_rom_url.fileSystemRepresentation;
apple_gfx_pci_option_rom_path = g_strdup(option_rom_path);
[option_rom_url release];
}
static void apple_gfx_pci_init(Object *obj)
{
AppleGFXPCIState *s = APPLE_GFX_PCI(obj);
if (!apple_gfx_pci_option_rom_path) {
/*
* The following is done on device not class init to avoid running
* ObjC code before fork() in -daemonize mode.
*/
PCIDeviceClass *pci = PCI_DEVICE_CLASS(object_get_class(obj));
apple_gfx_init_option_rom_path();
pci->romfile = apple_gfx_pci_option_rom_path;
}
apple_gfx_common_init(obj, &s->common, TYPE_APPLE_GFX_PCI);
}
typedef struct AppleGFXPCIInterruptJob {
PCIDevice *device;
uint32_t vector;
} AppleGFXPCIInterruptJob;
static void apple_gfx_pci_raise_interrupt(void *opaque)
{
AppleGFXPCIInterruptJob *job = opaque;
if (msi_enabled(job->device)) {
msi_notify(job->device, job->vector);
}
g_free(job);
}
static void apple_gfx_pci_interrupt(PCIDevice *dev, uint32_t vector)
{
AppleGFXPCIInterruptJob *job;
trace_apple_gfx_raise_irq(vector);
job = g_malloc0(sizeof(*job));
job->device = dev;
job->vector = vector;
aio_bh_schedule_oneshot(qemu_get_aio_context(),
apple_gfx_pci_raise_interrupt, job);
}
static void apple_gfx_pci_realize(PCIDevice *dev, Error **errp)
{
AppleGFXPCIState *s = APPLE_GFX_PCI(dev);
int ret;
pci_register_bar(dev, PG_PCI_BAR_MMIO,
PCI_BASE_ADDRESS_SPACE_MEMORY, &s->common.iomem_gfx);
ret = msi_init(dev, 0x0 /* config offset; 0 = find space */,
PG_PCI_MAX_MSI_VECTORS, true /* msi64bit */,
false /* msi_per_vector_mask */, errp);
if (ret != 0) {
return;
}
@autoreleasepool {
PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
desc.raiseInterrupt = ^(uint32_t vector) {
apple_gfx_pci_interrupt(dev, vector);
};
apple_gfx_common_realize(&s->common, DEVICE(dev), desc, errp);
[desc release];
desc = nil;
}
}
static void apple_gfx_pci_reset(Object *obj, ResetType type)
{
AppleGFXPCIState *s = APPLE_GFX_PCI(obj);
[s->common.pgdev reset];
}
static const Property apple_gfx_pci_properties[] = {
DEFINE_PROP_ARRAY("display-modes", AppleGFXPCIState,
common.num_display_modes, common.display_modes,
qdev_prop_apple_gfx_display_mode, AppleGFXDisplayMode),
};
static void apple_gfx_pci_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
PCIDeviceClass *pci = PCI_DEVICE_CLASS(klass);
ResettableClass *rc = RESETTABLE_CLASS(klass);
rc->phases.hold = apple_gfx_pci_reset;
dc->desc = "macOS Paravirtualized Graphics PCI Display Controller";
dc->hotpluggable = false;
set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
pci->vendor_id = PG_PCI_VENDOR_ID;
pci->device_id = PG_PCI_DEVICE_ID;
pci->class_id = PCI_CLASS_DISPLAY_OTHER;
pci->realize = apple_gfx_pci_realize;
device_class_set_props(dc, apple_gfx_pci_properties);
}
static const TypeInfo apple_gfx_pci_types[] = {
{
.name = TYPE_APPLE_GFX_PCI,
.parent = TYPE_PCI_DEVICE,
.instance_size = sizeof(AppleGFXPCIState),
.class_init = apple_gfx_pci_class_init,
.instance_init = apple_gfx_pci_init,
.interfaces = (InterfaceInfo[]) {
{ INTERFACE_PCIE_DEVICE },
{ },
},
}
};
DEFINE_TYPES(apple_gfx_pci_types)

hw/display/apple-gfx.h (new file, 74 lines)

@@ -0,0 +1,74 @@
/*
* Data structures and functions shared between variants of the macOS
* ParavirtualizedGraphics.framework based apple-gfx display adapter.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef QEMU_APPLE_GFX_H
#define QEMU_APPLE_GFX_H
#include "qemu/queue.h"
#include "exec/memory.h"
#include "hw/qdev-properties.h"
#include "ui/surface.h"
#define TYPE_APPLE_GFX_MMIO "apple-gfx-mmio"
#define TYPE_APPLE_GFX_PCI "apple-gfx-pci"
@class PGDeviceDescriptor;
@protocol PGDevice;
@protocol PGDisplay;
@protocol MTLDevice;
@protocol MTLTexture;
@protocol MTLCommandQueue;
typedef QTAILQ_HEAD(, PGTask_s) PGTaskList;
typedef struct AppleGFXDisplayMode {
uint16_t width_px;
uint16_t height_px;
uint16_t refresh_rate_hz;
} AppleGFXDisplayMode;
typedef struct AppleGFXState {
/* Initialised on init/realize() */
MemoryRegion iomem_gfx;
id<PGDevice> pgdev;
id<PGDisplay> pgdisp;
QemuConsole *con;
id<MTLDevice> mtl;
id<MTLCommandQueue> mtl_queue;
AppleGFXDisplayMode *display_modes;
uint32_t num_display_modes;
/* List `tasks` is protected by task_mutex */
QemuMutex task_mutex;
PGTaskList tasks;
/* Mutable state (BQL protected) */
QEMUCursor *cursor;
DisplaySurface *surface;
id<MTLTexture> texture;
int8_t pending_frames; /* # guest frames in the rendering pipeline */
bool gfx_update_requested; /* QEMU display system wants a new frame */
bool new_frame_ready; /* Guest has rendered a frame, ready to be used */
bool using_managed_texture_storage;
uint32_t rendering_frame_width;
uint32_t rendering_frame_height;
/* Mutable state (atomic) */
bool cursor_show;
} AppleGFXState;
void apple_gfx_common_init(Object *obj, AppleGFXState *s, const char* obj_name);
bool apple_gfx_common_realize(AppleGFXState *s, DeviceState *dev,
PGDeviceDescriptor *desc, Error **errp);
void *apple_gfx_host_ptr_for_gpa_range(uint64_t guest_physical,
uint64_t length, bool read_only,
MemoryRegion **mapping_in_region);
extern const PropertyInfo qdev_prop_apple_gfx_display_mode;
#endif

hw/display/apple-gfx.m (new file, 879 lines)

@@ -0,0 +1,879 @@
/*
* QEMU Apple ParavirtualizedGraphics.framework device
*
* Copyright © 2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*
* ParavirtualizedGraphics.framework is a set of libraries that macOS provides
* which implements 3d graphics passthrough to the host as well as a
* proprietary guest communication channel to drive it. This device model
* implements support to drive that library from within QEMU.
*/
#include "qemu/osdep.h"
#include "qemu/lockable.h"
#include "qemu/cutils.h"
#include "qemu/log.h"
#include "qapi/visitor.h"
#include "qapi/error.h"
#include "block/aio-wait.h"
#include "exec/address-spaces.h"
#include "system/dma.h"
#include "migration/blocker.h"
#include "ui/console.h"
#include "apple-gfx.h"
#include "trace.h"
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <dispatch/dispatch.h>
#import <ParavirtualizedGraphics/ParavirtualizedGraphics.h>
static const AppleGFXDisplayMode apple_gfx_default_modes[] = {
{ 1920, 1080, 60 },
{ 1440, 1080, 60 },
{ 1280, 1024, 60 },
};
static Error *apple_gfx_mig_blocker;
static uint32_t next_pgdisplay_serial_num = 1;
static dispatch_queue_t get_background_queue(void)
{
return dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
}
/* ------ PGTask and task operations: new/destroy/map/unmap ------ */
/*
* This implements the type declared in <ParavirtualizedGraphics/PGDevice.h>
* which is opaque from the framework's point of view. It is used in callbacks
* in the form of its typedef PGTask_t, which also already exists in the
* framework headers.
*
* A "task" in PVG terminology represents a host-virtual contiguous address
* range which is reserved in a large chunk on task creation. The mapMemory
* callback then requests ranges of guest system memory (identified by their
* GPA) to be mapped into subranges of this reserved address space.
* This type of operation isn't well-supported by QEMU's memory subsystem,
* but it is fortunately trivial to achieve with Darwin's mach_vm_remap() call,
* which allows us to refer to the same backing memory via multiple virtual
* address ranges. The Mach VM APIs are therefore used throughout for managing
* task memory.
*/
struct PGTask_s {
QTAILQ_ENTRY(PGTask_s) node;
AppleGFXState *s;
mach_vm_address_t address;
uint64_t len;
/*
* All unique MemoryRegions for which a mapping has been created in this
* task, and on which we have thus called memory_region_ref(). There are
* usually very few regions of system RAM in total, so we expect this array
* to be very short. Therefore, no need for sorting or fancy search
* algorithms, linear search will do.
* Protected by AppleGFXState's task_mutex.
*/
GPtrArray *mapped_regions;
};
static PGTask_t *apple_gfx_new_task(AppleGFXState *s, uint64_t len)
{
mach_vm_address_t task_mem;
PGTask_t *task;
kern_return_t r;
r = mach_vm_allocate(mach_task_self(), &task_mem, len, VM_FLAGS_ANYWHERE);
if (r != KERN_SUCCESS) {
return NULL;
}
task = g_new0(PGTask_t, 1);
task->s = s;
task->address = task_mem;
task->len = len;
task->mapped_regions = g_ptr_array_sized_new(2 /* Usually enough */);
QEMU_LOCK_GUARD(&s->task_mutex);
QTAILQ_INSERT_TAIL(&s->tasks, task, node);
return task;
}
static void apple_gfx_destroy_task(AppleGFXState *s, PGTask_t *task)
{
GPtrArray *regions = task->mapped_regions;
MemoryRegion *region;
size_t i;
for (i = 0; i < regions->len; ++i) {
region = g_ptr_array_index(regions, i);
memory_region_unref(region);
}
g_ptr_array_unref(regions);
mach_vm_deallocate(mach_task_self(), task->address, task->len);
QEMU_LOCK_GUARD(&s->task_mutex);
QTAILQ_REMOVE(&s->tasks, task, node);
g_free(task);
}
void *apple_gfx_host_ptr_for_gpa_range(uint64_t guest_physical,
uint64_t length, bool read_only,
MemoryRegion **mapping_in_region)
{
MemoryRegion *ram_region;
char *host_ptr;
hwaddr ram_region_offset = 0;
hwaddr ram_region_length = length;
ram_region = address_space_translate(&address_space_memory,
guest_physical,
&ram_region_offset,
&ram_region_length, !read_only,
MEMTXATTRS_UNSPECIFIED);
if (!ram_region || ram_region_length < length ||
!memory_access_is_direct(ram_region, !read_only)) {
return NULL;
}
host_ptr = memory_region_get_ram_ptr(ram_region);
if (!host_ptr) {
return NULL;
}
host_ptr += ram_region_offset;
*mapping_in_region = ram_region;
return host_ptr;
}
static bool apple_gfx_task_map_memory(AppleGFXState *s, PGTask_t *task,
uint64_t virtual_offset,
PGPhysicalMemoryRange_t *ranges,
uint32_t range_count, bool read_only)
{
kern_return_t r;
void *source_ptr;
mach_vm_address_t target;
vm_prot_t cur_protection, max_protection;
bool success = true;
MemoryRegion *region;
RCU_READ_LOCK_GUARD();
QEMU_LOCK_GUARD(&s->task_mutex);
trace_apple_gfx_map_memory(task, range_count, virtual_offset, read_only);
for (int i = 0; i < range_count; i++) {
PGPhysicalMemoryRange_t *range = &ranges[i];
target = task->address + virtual_offset;
virtual_offset += range->physicalLength;
trace_apple_gfx_map_memory_range(i, range->physicalAddress,
range->physicalLength);
region = NULL;
source_ptr = apple_gfx_host_ptr_for_gpa_range(range->physicalAddress,
range->physicalLength,
read_only, &region);
if (!source_ptr) {
success = false;
continue;
}
if (!g_ptr_array_find(task->mapped_regions, region, NULL)) {
g_ptr_array_add(task->mapped_regions, region);
memory_region_ref(region);
}
cur_protection = 0;
max_protection = 0;
/* Map guest RAM at range->physicalAddress into PG task memory range */
r = mach_vm_remap(mach_task_self(),
&target, range->physicalLength, vm_page_size - 1,
VM_FLAGS_FIXED | VM_FLAGS_OVERWRITE,
mach_task_self(), (mach_vm_address_t)source_ptr,
false /* shared mapping, no copy */,
&cur_protection, &max_protection,
VM_INHERIT_COPY);
trace_apple_gfx_remap(r, source_ptr, target);
g_assert(r == KERN_SUCCESS);
}
return success;
}
static void apple_gfx_task_unmap_memory(AppleGFXState *s, PGTask_t *task,
uint64_t virtual_offset, uint64_t length)
{
kern_return_t r;
mach_vm_address_t range_address;
trace_apple_gfx_unmap_memory(task, virtual_offset, length);
/*
* Replace task memory range with fresh 0 pages, undoing the mapping
* from guest RAM.
*/
range_address = task->address + virtual_offset;
r = mach_vm_allocate(mach_task_self(), &range_address, length,
VM_FLAGS_FIXED | VM_FLAGS_OVERWRITE);
g_assert(r == KERN_SUCCESS);
}
/* ------ Rendering and frame management ------ */
static void apple_gfx_render_frame_completed_bh(void *opaque);
static void apple_gfx_render_new_frame(AppleGFXState *s)
{
bool managed_texture = s->using_managed_texture_storage;
uint32_t width = surface_width(s->surface);
uint32_t height = surface_height(s->surface);
MTLRegion region = MTLRegionMake2D(0, 0, width, height);
id<MTLCommandBuffer> command_buffer = [s->mtl_queue commandBuffer];
id<MTLTexture> texture = s->texture;
assert(bql_locked());
[texture retain];
[command_buffer retain];
s->rendering_frame_width = width;
s->rendering_frame_height = height;
dispatch_async(get_background_queue(), ^{
/*
* This is not safe to call from the BQL/BH due to PVG-internal locks
* causing deadlocks.
*/
bool r = [s->pgdisp encodeCurrentFrameToCommandBuffer:command_buffer
texture:texture
region:region];
if (!r) {
[texture release];
[command_buffer release];
qemu_log_mask(LOG_GUEST_ERROR,
"%s: encodeCurrentFrameToCommandBuffer:texture:region: "
"failed\n", __func__);
bql_lock();
--s->pending_frames;
if (s->pending_frames > 0) {
apple_gfx_render_new_frame(s);
}
bql_unlock();
return;
}
if (managed_texture) {
/* "Managed" textures exist in both VRAM and RAM and must be synced. */
id<MTLBlitCommandEncoder> blit = [command_buffer blitCommandEncoder];
[blit synchronizeResource:texture];
[blit endEncoding];
}
[texture release];
[command_buffer addCompletedHandler:
^(id<MTLCommandBuffer> cb)
{
aio_bh_schedule_oneshot(qemu_get_aio_context(),
apple_gfx_render_frame_completed_bh, s);
}];
[command_buffer commit];
[command_buffer release];
});
}
static void copy_mtl_texture_to_surface_mem(id<MTLTexture> texture, void *vram)
{
/*
* TODO: Skip this entirely on a pure Metal or headless/guest-only
* rendering path, else use a blit command encoder? Needs careful
* (double?) buffering design.
*/
size_t width = texture.width, height = texture.height;
MTLRegion region = MTLRegionMake2D(0, 0, width, height);
[texture getBytes:vram
bytesPerRow:(width * 4)
bytesPerImage:(width * height * 4)
fromRegion:region
mipmapLevel:0
slice:0];
}
static void apple_gfx_render_frame_completed_bh(void *opaque)
{
AppleGFXState *s = opaque;
@autoreleasepool {
--s->pending_frames;
assert(s->pending_frames >= 0);
/* Only update display if mode hasn't changed since we started rendering. */
if (s->rendering_frame_width == surface_width(s->surface) &&
s->rendering_frame_height == surface_height(s->surface)) {
copy_mtl_texture_to_surface_mem(s->texture, surface_data(s->surface));
if (s->gfx_update_requested) {
s->gfx_update_requested = false;
dpy_gfx_update_full(s->con);
graphic_hw_update_done(s->con);
s->new_frame_ready = false;
} else {
s->new_frame_ready = true;
}
}
if (s->pending_frames > 0) {
apple_gfx_render_new_frame(s);
}
}
}
static void apple_gfx_fb_update_display(void *opaque)
{
AppleGFXState *s = opaque;
assert(bql_locked());
if (s->new_frame_ready) {
dpy_gfx_update_full(s->con);
s->new_frame_ready = false;
graphic_hw_update_done(s->con);
} else if (s->pending_frames > 0) {
s->gfx_update_requested = true;
} else {
graphic_hw_update_done(s->con);
}
}
static const GraphicHwOps apple_gfx_fb_ops = {
.gfx_update = apple_gfx_fb_update_display,
.gfx_update_async = true,
};
/* ------ Mouse cursor and display mode setting ------ */
static void set_mode(AppleGFXState *s, uint32_t width, uint32_t height)
{
MTLTextureDescriptor *textureDescriptor;
if (s->surface &&
width == surface_width(s->surface) &&
height == surface_height(s->surface)) {
return;
}
[s->texture release];
s->surface = qemu_create_displaysurface(width, height);
@autoreleasepool {
textureDescriptor =
[MTLTextureDescriptor
texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
width:width
height:height
mipmapped:NO];
textureDescriptor.usage = s->pgdisp.minimumTextureUsage;
s->texture = [s->mtl newTextureWithDescriptor:textureDescriptor];
s->using_managed_texture_storage =
(s->texture.storageMode == MTLStorageModeManaged);
}
dpy_gfx_replace_surface(s->con, s->surface);
}
static void update_cursor(AppleGFXState *s)
{
assert(bql_locked());
dpy_mouse_set(s->con, s->pgdisp.cursorPosition.x,
s->pgdisp.cursorPosition.y, qatomic_read(&s->cursor_show));
}
static void update_cursor_bh(void *opaque)
{
AppleGFXState *s = opaque;
update_cursor(s);
}
typedef struct AppleGFXSetCursorGlyphJob {
AppleGFXState *s;
NSBitmapImageRep *glyph;
PGDisplayCoord_t hotspot;
} AppleGFXSetCursorGlyphJob;
static void set_cursor_glyph(void *opaque)
{
AppleGFXSetCursorGlyphJob *job = opaque;
AppleGFXState *s = job->s;
NSBitmapImageRep *glyph = job->glyph;
uint32_t bpp = glyph.bitsPerPixel;
size_t width = glyph.pixelsWide;
size_t height = glyph.pixelsHigh;
size_t padding_bytes_per_row = glyph.bytesPerRow - width * 4;
const uint8_t* px_data = glyph.bitmapData;
trace_apple_gfx_cursor_set(bpp, width, height);
if (s->cursor) {
cursor_unref(s->cursor);
s->cursor = NULL;
}
if (bpp == 32) { /* Shouldn't be anything else, but just to be safe... */
s->cursor = cursor_alloc(width, height);
s->cursor->hot_x = job->hotspot.x;
s->cursor->hot_y = job->hotspot.y;
uint32_t *dest_px = s->cursor->data;
for (size_t y = 0; y < height; ++y) {
for (size_t x = 0; x < width; ++x) {
/*
* NSBitmapImageRep's red & blue channels are swapped
* compared to QEMUCursor's.
*/
*dest_px =
(px_data[0] << 16u) |
(px_data[1] << 8u) |
(px_data[2] << 0u) |
(px_data[3] << 24u);
++dest_px;
px_data += 4;
}
px_data += padding_bytes_per_row;
}
dpy_cursor_define(s->con, s->cursor);
update_cursor(s);
}
[glyph release];
g_free(job);
}
/* ------ DMA (device reading system memory) ------ */
typedef struct AppleGFXReadMemoryJob {
QemuSemaphore sem;
hwaddr physical_address;
uint64_t length;
void *dst;
bool success;
} AppleGFXReadMemoryJob;
static void apple_gfx_do_read_memory(void *opaque)
{
AppleGFXReadMemoryJob *job = opaque;
MemTxResult r;
r = dma_memory_read(&address_space_memory, job->physical_address,
job->dst, job->length, MEMTXATTRS_UNSPECIFIED);
job->success = (r == MEMTX_OK);
qemu_sem_post(&job->sem);
}
static bool apple_gfx_read_memory(AppleGFXState *s, hwaddr physical_address,
uint64_t length, void *dst)
{
AppleGFXReadMemoryJob job = {
.physical_address = physical_address, .length = length, .dst = dst
};
trace_apple_gfx_read_memory(physical_address, length, dst);
/* Performing DMA requires BQL, so do it in a BH. */
qemu_sem_init(&job.sem, 0);
aio_bh_schedule_oneshot(qemu_get_aio_context(),
apple_gfx_do_read_memory, &job);
qemu_sem_wait(&job.sem);
qemu_sem_destroy(&job.sem);
return job.success;
}
/* ------ Memory-mapped device I/O operations ------ */
typedef struct AppleGFXIOJob {
AppleGFXState *state;
uint64_t offset;
uint64_t value;
bool completed;
} AppleGFXIOJob;
static void apple_gfx_do_read(void *opaque)
{
AppleGFXIOJob *job = opaque;
job->value = [job->state->pgdev mmioReadAtOffset:job->offset];
qatomic_set(&job->completed, true);
aio_wait_kick();
}
static uint64_t apple_gfx_read(void *opaque, hwaddr offset, unsigned size)
{
AppleGFXIOJob job = {
.state = opaque,
.offset = offset,
.completed = false,
};
dispatch_queue_t queue = get_background_queue();
dispatch_async_f(queue, &job, apple_gfx_do_read);
AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
trace_apple_gfx_read(offset, job.value);
return job.value;
}
static void apple_gfx_do_write(void *opaque)
{
AppleGFXIOJob *job = opaque;
[job->state->pgdev mmioWriteAtOffset:job->offset value:job->value];
qatomic_set(&job->completed, true);
aio_wait_kick();
}
static void apple_gfx_write(void *opaque, hwaddr offset, uint64_t val,
unsigned size)
{
/*
* The methods mmioReadAtOffset: and especially mmioWriteAtOffset: can
* trigger synchronous operations on other dispatch queues, which in turn
* may call back out on one or more of the callback blocks. For this reason,
* and as we are holding the BQL, we invoke the I/O methods on a pool
* thread and handle AIO tasks while we wait. Any work in the callbacks
* requiring the BQL will in turn schedule BHs which this thread will
* process while waiting.
*/
AppleGFXIOJob job = {
.state = opaque,
.offset = offset,
.value = val,
.completed = false,
};
dispatch_queue_t queue = get_background_queue();
dispatch_async_f(queue, &job, apple_gfx_do_write);
AIO_WAIT_WHILE(NULL, !qatomic_read(&job.completed));
trace_apple_gfx_write(offset, val);
}
static const MemoryRegionOps apple_gfx_ops = {
.read = apple_gfx_read,
.write = apple_gfx_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.valid = {
.min_access_size = 4,
.max_access_size = 8,
},
.impl = {
.min_access_size = 4,
.max_access_size = 4,
},
};
static size_t apple_gfx_get_default_mmio_range_size(void)
{
size_t mmio_range_size;
@autoreleasepool {
PGDeviceDescriptor *desc = [PGDeviceDescriptor new];
mmio_range_size = desc.mmioLength;
[desc release];
}
return mmio_range_size;
}
/* ------ Initialisation and startup ------ */
void apple_gfx_common_init(Object *obj, AppleGFXState *s, const char* obj_name)
{
size_t mmio_range_size = apple_gfx_get_default_mmio_range_size();
trace_apple_gfx_common_init(obj_name, mmio_range_size);
memory_region_init_io(&s->iomem_gfx, obj, &apple_gfx_ops, s, obj_name,
mmio_range_size);
/* TODO: PVG framework supports serialising device state: integrate it! */
}
static void apple_gfx_register_task_mapping_handlers(AppleGFXState *s,
PGDeviceDescriptor *desc)
{
desc.createTask = ^(uint64_t vmSize, void * _Nullable * _Nonnull baseAddress) {
PGTask_t *task = apple_gfx_new_task(s, vmSize);
*baseAddress = (void *)task->address;
trace_apple_gfx_create_task(vmSize, *baseAddress);
return task;
};
desc.destroyTask = ^(PGTask_t * _Nonnull task) {
trace_apple_gfx_destroy_task(task, task->mapped_regions->len);
apple_gfx_destroy_task(s, task);
};
desc.mapMemory = ^bool(PGTask_t * _Nonnull task, uint32_t range_count,
uint64_t virtual_offset, bool read_only,
PGPhysicalMemoryRange_t * _Nonnull ranges) {
return apple_gfx_task_map_memory(s, task, virtual_offset,
ranges, range_count, read_only);
};
desc.unmapMemory = ^bool(PGTask_t * _Nonnull task, uint64_t virtual_offset,
uint64_t length) {
apple_gfx_task_unmap_memory(s, task, virtual_offset, length);
return true;
};
desc.readMemory = ^bool(uint64_t physical_address, uint64_t length,
void * _Nonnull dst) {
return apple_gfx_read_memory(s, physical_address, length, dst);
};
}
static void new_frame_handler_bh(void *opaque)
{
AppleGFXState *s = opaque;
/* Drop frames if guest gets too far ahead. */
if (s->pending_frames >= 2) {
return;
}
++s->pending_frames;
if (s->pending_frames > 1) {
return;
}
@autoreleasepool {
apple_gfx_render_new_frame(s);
}
}
static PGDisplayDescriptor *apple_gfx_prepare_display_descriptor(AppleGFXState *s)
{
PGDisplayDescriptor *disp_desc = [PGDisplayDescriptor new];
disp_desc.name = @"QEMU display";
disp_desc.sizeInMillimeters = NSMakeSize(400., 300.); /* A 20" display */
disp_desc.queue = dispatch_get_main_queue();
disp_desc.newFrameEventHandler = ^(void) {
trace_apple_gfx_new_frame();
aio_bh_schedule_oneshot(qemu_get_aio_context(), new_frame_handler_bh, s);
};
disp_desc.modeChangeHandler = ^(PGDisplayCoord_t sizeInPixels,
OSType pixelFormat) {
trace_apple_gfx_mode_change(sizeInPixels.x, sizeInPixels.y);
BQL_LOCK_GUARD();
set_mode(s, sizeInPixels.x, sizeInPixels.y);
};
disp_desc.cursorGlyphHandler = ^(NSBitmapImageRep *glyph,
PGDisplayCoord_t hotspot) {
AppleGFXSetCursorGlyphJob *job = g_malloc0(sizeof(*job));
job->s = s;
job->glyph = glyph;
job->hotspot = hotspot;
[glyph retain];
aio_bh_schedule_oneshot(qemu_get_aio_context(),
set_cursor_glyph, job);
};
disp_desc.cursorShowHandler = ^(BOOL show) {
trace_apple_gfx_cursor_show(show);
qatomic_set(&s->cursor_show, show);
aio_bh_schedule_oneshot(qemu_get_aio_context(),
update_cursor_bh, s);
};
disp_desc.cursorMoveHandler = ^(void) {
trace_apple_gfx_cursor_move();
aio_bh_schedule_oneshot(qemu_get_aio_context(),
update_cursor_bh, s);
};
return disp_desc;
}
static NSArray<PGDisplayMode *> *apple_gfx_create_display_mode_array(
const AppleGFXDisplayMode display_modes[], uint32_t display_mode_count)
{
PGDisplayMode *mode_obj;
NSMutableArray<PGDisplayMode *> *mode_array =
[[NSMutableArray alloc] initWithCapacity:display_mode_count];
for (unsigned i = 0; i < display_mode_count; i++) {
const AppleGFXDisplayMode *mode = &display_modes[i];
trace_apple_gfx_display_mode(i, mode->width_px, mode->height_px);
PGDisplayCoord_t mode_size = { mode->width_px, mode->height_px };
mode_obj =
[[PGDisplayMode alloc] initWithSizeInPixels:mode_size
refreshRateInHz:mode->refresh_rate_hz];
[mode_array addObject:mode_obj];
[mode_obj release];
}
return mode_array;
}
static id<MTLDevice> copy_suitable_metal_device(void)
{
id<MTLDevice> dev = nil;
NSArray<id<MTLDevice>> *devs = MTLCopyAllDevices();
/* Prefer a unified memory GPU. Failing that, pick a non-removable GPU. */
for (size_t i = 0; i < devs.count; ++i) {
if (devs[i].hasUnifiedMemory) {
dev = devs[i];
break;
}
if (!devs[i].removable) {
dev = devs[i];
}
}
if (dev != nil) {
[dev retain];
} else {
dev = MTLCreateSystemDefaultDevice();
}
[devs release];
return dev;
}
bool apple_gfx_common_realize(AppleGFXState *s, DeviceState *dev,
PGDeviceDescriptor *desc, Error **errp)
{
PGDisplayDescriptor *disp_desc;
const AppleGFXDisplayMode *display_modes = apple_gfx_default_modes;
uint32_t num_display_modes = ARRAY_SIZE(apple_gfx_default_modes);
NSArray<PGDisplayMode *> *mode_array;
if (apple_gfx_mig_blocker == NULL) {
error_setg(&apple_gfx_mig_blocker,
"Migration state blocked by apple-gfx display device");
if (migrate_add_blocker(&apple_gfx_mig_blocker, errp) < 0) {
return false;
}
}
qemu_mutex_init(&s->task_mutex);
QTAILQ_INIT(&s->tasks);
s->mtl = copy_suitable_metal_device();
s->mtl_queue = [s->mtl newCommandQueue];
desc.device = s->mtl;
apple_gfx_register_task_mapping_handlers(s, desc);
s->cursor_show = true;
s->pgdev = PGNewDeviceWithDescriptor(desc);
disp_desc = apple_gfx_prepare_display_descriptor(s);
/*
* Although the framework supports it, this integration currently does not
* support multiple virtual displays connected to a single PV graphics device.
* It is however possible to create more than one instance of the device, each
* with one display. The macOS guest will ignore these displays if they share
* the same serial number, so ensure each instance gets a unique one.
*/
s->pgdisp = [s->pgdev newDisplayWithDescriptor:disp_desc
port:0
serialNum:next_pgdisplay_serial_num++];
[disp_desc release];
if (s->display_modes != NULL && s->num_display_modes > 0) {
trace_apple_gfx_common_realize_modes_property(s->num_display_modes);
display_modes = s->display_modes;
num_display_modes = s->num_display_modes;
}
s->pgdisp.modeList = mode_array =
apple_gfx_create_display_mode_array(display_modes, num_display_modes);
[mode_array release];
s->con = graphic_console_init(dev, 0, &apple_gfx_fb_ops, s);
return true;
}
/* ------ Display mode list device property ------ */
static void apple_gfx_get_display_mode(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
Property *prop = opaque;
AppleGFXDisplayMode *mode = object_field_prop_ptr(obj, prop);
/* 3 uint16s (max 5 digits) + 2 separator characters + nul. */
char buffer[5 * 3 + 2 + 1];
char *pos = buffer;
int rc = snprintf(buffer, sizeof(buffer),
"%"PRIu16"x%"PRIu16"@%"PRIu16,
mode->width_px, mode->height_px,
mode->refresh_rate_hz);
assert(rc < sizeof(buffer));
visit_type_str(v, name, &pos, errp);
}
static void apple_gfx_set_display_mode(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
Property *prop = opaque;
AppleGFXDisplayMode *mode = object_field_prop_ptr(obj, prop);
const char *endptr;
g_autofree char *str = NULL;
int ret;
int val;
if (!visit_type_str(v, name, &str, errp)) {
return;
}
endptr = str;
ret = qemu_strtoi(endptr, &endptr, 10, &val);
if (ret || val > UINT16_MAX || val <= 0) {
error_setg(errp, "width in '%s' must be a decimal integer number"
" of pixels in the range 1..65535", name);
return;
}
mode->width_px = val;
if (*endptr != 'x') {
goto separator_error;
}
ret = qemu_strtoi(endptr + 1, &endptr, 10, &val);
if (ret || val > UINT16_MAX || val <= 0) {
error_setg(errp, "height in '%s' must be a decimal integer number"
" of pixels in the range 1..65535", name);
return;
}
mode->height_px = val;
if (*endptr != '@') {
goto separator_error;
}
ret = qemu_strtoi(endptr + 1, &endptr, 10, &val);
if (ret || val > UINT16_MAX || val <= 0) {
error_setg(errp, "refresh rate in '%s'"
" must be a positive decimal integer (Hertz)", name);
return;
}
mode->refresh_rate_hz = val;
return;
separator_error:
error_setg(errp,
"Each display mode takes the format '<width>x<height>@<rate>'");
}
const PropertyInfo qdev_prop_apple_gfx_display_mode = {
.name = "display_mode",
.description =
"Display mode in pixels and Hertz, as <width>x<height>@<refresh-rate> "
"Example: 3840x2160@60",
.get = apple_gfx_get_display_mode,
.set = apple_gfx_set_display_mode,
};
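The `<width>x<height>@<rate>` format accepted by apple_gfx_set_display_mode() above can be exercised standalone. A minimal sketch (not part of the patch) with plain strtol() standing in for QEMU's qemu_strtoi() helper:

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

/* Parse "<width>x<height>@<refresh-rate>", each field in 1..65535. */
static bool parse_display_mode(const char *str, uint16_t *w, uint16_t *h,
                               uint16_t *hz)
{
    char *end;
    long v;

    v = strtol(str, &end, 10);
    if (end == str || *end != 'x' || v <= 0 || v > UINT16_MAX) {
        return false;
    }
    *w = (uint16_t)v;

    v = strtol(end + 1, &end, 10);
    if (*end != '@' || v <= 0 || v > UINT16_MAX) {
        return false;
    }
    *h = (uint16_t)v;

    v = strtol(end + 1, &end, 10);
    if (*end != '\0' || v <= 0 || v > UINT16_MAX) {
        return false;
    }
    *hz = (uint16_t)v;
    return true;
}
```

Note that, as in the property setter above, a leading "0x" is rejected: base-10 parsing stops at the 'x' and the zero width fails the range check.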

@@ -61,6 +61,13 @@ system_ss.add(when: 'CONFIG_ARTIST', if_true: files('artist.c'))
system_ss.add(when: 'CONFIG_ATI_VGA', if_true: [files('ati.c', 'ati_2d.c', 'ati_dbg.c'), pixman])
+if host_os == 'darwin'
+system_ss.add(when: 'CONFIG_MAC_PVG', if_true: [files('apple-gfx.m'), pvg, metal])
+system_ss.add(when: 'CONFIG_MAC_PVG_PCI', if_true: [files('apple-gfx-pci.m'), pvg, metal])
+if cpu == 'aarch64'
+system_ss.add(when: 'CONFIG_MAC_PVG_MMIO', if_true: [files('apple-gfx-mmio.m'), pvg, metal])
+endif
+endif
if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
virtio_gpu_ss = ss.source_set()

@@ -50,7 +50,7 @@
#undef ALIGN
#define ALIGN(a, b) (((a) + ((b) - 1)) & ~((b) - 1))
-#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9"
+#define PIXEL_SIZE 0.2936875 /* 1280x1024 is 14.8" x 11.9" */
#define QXL_MODE(_x, _y, _b, _o) \
{ .x_res = _x, \

@@ -194,3 +194,33 @@ dm163_bits_ppi(unsigned dest_width) "dest_width : %u"
dm163_leds(int led, uint32_t value) "led %d: 0x%x"
dm163_channels(int channel, uint8_t value) "channel %d: 0x%x"
dm163_refresh_rate(uint32_t rr) "refresh rate %d"
+# apple-gfx.m
+apple_gfx_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
+apple_gfx_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
+apple_gfx_create_task(uint32_t vm_size, void *va) "vm_size=0x%x base_addr=%p"
+apple_gfx_destroy_task(void *task, unsigned int num_mapped_regions) "task=%p, task->mapped_regions->len=%u"
+apple_gfx_map_memory(void *task, uint32_t range_count, uint64_t virtual_offset, uint32_t read_only) "task=%p range_count=0x%x virtual_offset=0x%"PRIx64" read_only=%d"
+apple_gfx_map_memory_range(uint32_t i, uint64_t phys_addr, uint64_t phys_len) "[%d] phys_addr=0x%"PRIx64" phys_len=0x%"PRIx64
+apple_gfx_remap(uint64_t retval, void *source_ptr, uint64_t target) "retval=%"PRId64" source=%p target=0x%"PRIx64
+apple_gfx_unmap_memory(void *task, uint64_t virtual_offset, uint64_t length) "task=%p virtual_offset=0x%"PRIx64" length=0x%"PRIx64
+apple_gfx_read_memory(uint64_t phys_address, uint64_t length, void *dst) "phys_addr=0x%"PRIx64" length=0x%"PRIx64" dest=%p"
+apple_gfx_raise_irq(uint32_t vector) "vector=0x%x"
+apple_gfx_new_frame(void) ""
+apple_gfx_mode_change(uint64_t x, uint64_t y) "x=%"PRId64" y=%"PRId64
+apple_gfx_cursor_set(uint32_t bpp, uint64_t width, uint64_t height) "bpp=%d width=%"PRId64" height=0x%"PRId64
+apple_gfx_cursor_show(uint32_t show) "show=%d"
+apple_gfx_cursor_move(void) ""
+apple_gfx_common_init(const char *device_name, size_t mmio_size) "device: %s; MMIO size: %zu bytes"
+apple_gfx_common_realize_modes_property(uint32_t num_modes) "using %u modes supplied by 'display-modes' device property"
+apple_gfx_display_mode(uint32_t mode_idx, uint16_t width_px, uint16_t height_px) "mode %2"PRIu32": %4"PRIu16"x%4"PRIu16
+# apple-gfx-mmio.m
+apple_gfx_mmio_iosfc_read(uint64_t offset, uint64_t res) "offset=0x%"PRIx64" res=0x%"PRIx64
+apple_gfx_mmio_iosfc_write(uint64_t offset, uint64_t val) "offset=0x%"PRIx64" val=0x%"PRIx64
+apple_gfx_iosfc_map_memory(uint64_t phys, uint64_t len, uint32_t ro, void *va, void *e, void *f, void* va_result) "phys=0x%"PRIx64" len=0x%"PRIx64" ro=%d va=%p e=%p f=%p -> *va=%p"
+apple_gfx_iosfc_map_memory_new_region(size_t i, void *region, uint64_t start, uint64_t end) "index=%zu, region=%p, 0x%"PRIx64"-0x%"PRIx64
+apple_gfx_iosfc_unmap_memory(void *a, void *b, void *c, void *d, void *e, void *f) "a=%p b=%p c=%p d=%p e=%p f=%p"
+apple_gfx_iosfc_unmap_memory_region(void* mem, void *region) "unmapping @ %p from memory region %p"
+apple_gfx_iosfc_raise_irq(uint32_t vector) "vector=0x%x"

@@ -1652,17 +1652,10 @@ static void amdvi_sysbus_realize(DeviceState *dev, Error **errp)
memory_region_add_subregion_overlap(&s->mr_sys, AMDVI_INT_ADDR_FIRST,
&s->mr_ir, 1);
/* AMD IOMMU with x2APIC mode requires xtsup=on */
-if (x86ms->apic_id_limit > 255 && !s->xtsup) {
-error_report("AMD IOMMU with x2APIC confguration requires xtsup=on");
+if (kvm_enabled() && x86ms->apic_id_limit > 255 && !s->xtsup) {
+error_report("AMD IOMMU with x2APIC configuration requires xtsup=on");
exit(EXIT_FAILURE);
}
-if (s->xtsup) {
-if (kvm_irqchip_is_split() && !kvm_enable_x2apic()) {
-error_report("AMD IOMMU xtsup=on requires support on the KVM side");
-exit(EXIT_FAILURE);
-}
-}
pci_setup_iommu(bus, &amdvi_iommu_ops, s);
amdvi_init(s);

@@ -214,7 +214,7 @@ static void kvm_apic_mem_write(void *opaque, hwaddr addr,
static const MemoryRegionOps kvm_apic_io_ops = {
.read = kvm_apic_mem_read,
.write = kvm_apic_mem_write,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
};
static void kvm_apic_reset(APICCommonState *s)

@@ -139,7 +139,7 @@ static void create_gpex(MicrovmMachineState *mms)
mms->gpex.mmio64.base, mmio64_alias);
}
-for (i = 0; i < GPEX_NUM_IRQS; i++) {
+for (i = 0; i < PCI_NUM_PINS; i++) {
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i,
x86ms->gsi[mms->gpex.irq + i]);
}

@@ -1068,7 +1068,7 @@ DeviceState *pc_vga_init(ISABus *isa_bus, PCIBus *pci_bus)
static const MemoryRegionOps ioport80_io_ops = {
.write = ioport80_write,
.read = ioport80_read,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
.impl = {
.min_access_size = 1,
.max_access_size = 1,
@@ -1078,7 +1078,7 @@ static const MemoryRegionOps ioport80_io_ops = {
static const MemoryRegionOps ioportF0_io_ops = {
.write = ioportF0_write,
.read = ioportF0_read,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
.impl = {
.min_access_size = 1,
.max_access_size = 1,

@@ -718,7 +718,7 @@ static uint64_t vapic_read(void *opaque, hwaddr addr, unsigned size)
static const MemoryRegionOps vapic_ops = {
.write = vapic_write,
.read = vapic_read,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
};
static void vapic_realize(DeviceState *dev, Error **errp)

@@ -36,7 +36,7 @@ static void xen_apic_mem_write(void *opaque, hwaddr addr,
static const MemoryRegionOps xen_apic_io_ops = {
.read = xen_apic_mem_read,
.write = xen_apic_mem_write,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
};
static void xen_apic_realize(DeviceState *dev, Error **errp)

@@ -514,7 +514,7 @@ static void platform_mmio_write(void *opaque, hwaddr addr,
static const MemoryRegionOps platform_mmio_handler = {
.read = &platform_mmio_read,
.write = &platform_mmio_write,
-.endianness = DEVICE_NATIVE_ENDIAN,
+.endianness = DEVICE_LITTLE_ENDIAN,
};
static void platform_mmio_setup(PCIXenPlatformState *d)

@@ -452,7 +452,7 @@ static void fdt_add_pcie_irq_map_node(const LoongArchVirtMachineState *lvms,
{
int pin, dev;
uint32_t irq_map_stride = 0;
-uint32_t full_irq_map[GPEX_NUM_IRQS *GPEX_NUM_IRQS * 10] = {};
+uint32_t full_irq_map[PCI_NUM_PINS * PCI_NUM_PINS * 10] = {};
uint32_t *irq_map = full_irq_map;
const MachineState *ms = MACHINE(lvms);
@@ -465,11 +465,11 @@ static void fdt_add_pcie_irq_map_node(const LoongArchVirtMachineState *lvms,
* to wrap to any number of devices.
*/
-for (dev = 0; dev < GPEX_NUM_IRQS; dev++) {
+for (dev = 0; dev < PCI_NUM_PINS; dev++) {
int devfn = dev * 0x8;
-for (pin = 0; pin < GPEX_NUM_IRQS; pin++) {
-int irq_nr = 16 + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
+for (pin = 0; pin < PCI_NUM_PINS; pin++) {
+int irq_nr = 16 + ((pin + PCI_SLOT(devfn)) % PCI_NUM_PINS);
int i = 0;
/* Fill PCI address cells */
@@ -493,7 +493,7 @@ static void fdt_add_pcie_irq_map_node(const LoongArchVirtMachineState *lvms,
qemu_fdt_setprop(ms->fdt, nodename, "interrupt-map", full_irq_map,
-GPEX_NUM_IRQS * GPEX_NUM_IRQS *
+PCI_NUM_PINS * PCI_NUM_PINS *
irq_map_stride * sizeof(uint32_t));
qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupt-map-mask",
0x1800, 0, 0, 0x7);
@@ -805,7 +805,7 @@ static void virt_devices_init(DeviceState *pch_pic,
memory_region_add_subregion(get_system_memory(), VIRT_PCI_IO_BASE,
pio_alias);
-for (i = 0; i < GPEX_NUM_IRQS; i++) {
+for (i = 0; i < PCI_NUM_PINS; i++) {
sysbus_connect_irq(d, i,
qdev_get_gpio_in(pch_pic, 16 + i));
gpex_set_irq_num(GPEX_HOST(gpex_dev), i, 16 + i);

@@ -114,8 +114,8 @@ static uint64_t translate_kernel_address(void *opaque, uint64_t addr)
return addr - 0x30000000LL;
}
-void microblaze_load_kernel(MicroBlazeCPU *cpu, hwaddr ddr_base,
-uint32_t ramsize,
+void microblaze_load_kernel(MicroBlazeCPU *cpu, bool is_little_endian,
+hwaddr ddr_base, uint32_t ramsize,
const char *initrd_filename,
const char *dtb_filename,
void (*machine_cpu_reset)(MicroBlazeCPU *))
@@ -144,13 +144,13 @@ void microblaze_load_kernel(MicroBlazeCPU *cpu, hwaddr ddr_base,
/* Boots a kernel elf binary. */
kernel_size = load_elf(kernel_filename, NULL, NULL, NULL,
&entry, NULL, &high, NULL,
-TARGET_BIG_ENDIAN, EM_MICROBLAZE, 0, 0);
+!is_little_endian, EM_MICROBLAZE, 0, 0);
base32 = entry;
if (base32 == 0xc0000000) {
kernel_size = load_elf(kernel_filename, NULL,
translate_kernel_address, NULL,
&entry, NULL, NULL, NULL,
-TARGET_BIG_ENDIAN, EM_MICROBLAZE, 0, 0);
+!is_little_endian, EM_MICROBLAZE, 0, 0);
}
/* Always boot into physical ram. */
boot_info.bootstrap_pc = (uint32_t)entry;

@@ -2,8 +2,8 @@
#define MICROBLAZE_BOOT_H
-void microblaze_load_kernel(MicroBlazeCPU *cpu, hwaddr ddr_base,
-uint32_t ramsize,
+void microblaze_load_kernel(MicroBlazeCPU *cpu, bool is_little_endian,
+hwaddr ddr_base, uint32_t ramsize,
const char *initrd_filename,
const char *dtb_filename,
void (*machine_cpu_reset)(MicroBlazeCPU *));

@@ -204,7 +204,7 @@ petalogix_ml605_init(MachineState *machine)
cpu->cfg.pvr_regs[5] = 0xc56be000;
cpu->cfg.pvr_regs[10] = 0x0e000000; /* virtex 6 */
-microblaze_load_kernel(cpu, MEMORY_BASEADDR, ram_size,
+microblaze_load_kernel(cpu, true, MEMORY_BASEADDR, ram_size,
machine->initrd_filename,
BINARY_DEVICE_TREE_FILE,
NULL);

@@ -129,7 +129,7 @@ petalogix_s3adsp1800_init(MachineState *machine)
create_unimplemented_device("xps_gpio", GPIO_BASEADDR, 0x10000);
-microblaze_load_kernel(cpu, ddr_base, ram_size,
+microblaze_load_kernel(cpu, !TARGET_BIG_ENDIAN, ddr_base, ram_size,
machine->initrd_filename,
BINARY_DEVICE_TREE_FILE,
NULL);

@@ -172,7 +172,7 @@ static void xlnx_zynqmp_pmu_init(MachineState *machine)
qdev_realize(DEVICE(pmu), NULL, &error_fatal);
/* Load the kernel */
-microblaze_load_kernel(&pmu->cpu, XLNX_ZYNQMP_PMU_RAM_ADDR,
+microblaze_load_kernel(&pmu->cpu, true, XLNX_ZYNQMP_PMU_RAM_ADDR,
machine->ram_size,
machine->initrd_filename,
machine->dtb,

@@ -458,7 +458,7 @@ static inline void loongson3_virt_devices_init(MachineState *machine,
virt_memmap[VIRT_PCIE_PIO].base, s->pio_alias);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, virt_memmap[VIRT_PCIE_PIO].base);
-for (i = 0; i < GPEX_NUM_IRQS; i++) {
+for (i = 0; i < PCI_NUM_PINS; i++) {
irq = qdev_get_gpio_in(pic, PCIE_IRQ_BASE + i);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);
gpex_set_irq_num(GPEX_HOST(dev), i, PCIE_IRQ_BASE + i);

@@ -72,6 +72,11 @@ config IVSHMEM_DEVICE
default y if PCI_DEVICES
depends on PCI && LINUX && IVSHMEM && MSI_NONBROKEN
+config IVSHMEM_FLAT_DEVICE
+bool
+default y
+depends on LINUX && IVSHMEM
config ECCMEMCTL
bool

hw/misc/ivshmem-flat.c (new file, 459 lines)
@@ -0,0 +1,459 @@
/*
* Inter-VM Shared Memory Flat Device
*
* SPDX-FileCopyrightText: 2023 Linaro Ltd.
* SPDX-FileContributor: Gustavo Romero <gustavo.romero@linaro.org>
* SPDX-License-Identifier: GPL-2.0-or-later
*
*/
#include "qemu/osdep.h"
#include "qemu/units.h"
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "qapi/error.h"
#include "hw/irq.h"
#include "hw/qdev-properties-system.h"
#include "hw/sysbus.h"
#include "chardev/char-fe.h"
#include "exec/address-spaces.h"
#include "trace.h"
#include "hw/misc/ivshmem-flat.h"
static int64_t ivshmem_flat_recv_msg(IvshmemFTState *s, int *pfd)
{
int64_t msg;
int n, ret;
n = 0;
do {
ret = qemu_chr_fe_read_all(&s->server_chr, (uint8_t *)&msg + n,
sizeof(msg) - n);
if (ret < 0) {
if (ret == -EINTR) {
continue;
}
exit(1);
}
n += ret;
} while (n < sizeof(msg));
if (pfd) {
*pfd = qemu_chr_fe_get_msgfd(&s->server_chr);
}
return le64_to_cpu(msg);
}
static void ivshmem_flat_irq_handler(void *opaque)
{
VectorInfo *vi = opaque;
EventNotifier *e = &vi->event_notifier;
uint16_t vector_id;
const VectorInfo (*v)[64];
assert(e->initialized);
vector_id = vi->id;
/*
* The vector info struct is passed to the handler via the 'opaque' pointer.
* This struct pointer allows the retrieval of the vector ID and its
* associated event notifier. However, for triggering an interrupt using
* qemu_set_irq, it's necessary to also have a pointer to the device state,
* i.e., a pointer to the IvshmemFTState struct. Since the vector info
* struct is contained within the IvshmemFTState struct, its pointer can be
* used to obtain the pointer to IvshmemFTState through simple pointer math.
*/
v = (void *)(vi - vector_id); /* v = &IvshmemPeer->vector[0] */
IvshmemPeer *own_peer = container_of(v, IvshmemPeer, vector);
IvshmemFTState *s = container_of(own_peer, IvshmemFTState, own);
/* Clear event */
if (!event_notifier_test_and_clear(e)) {
return;
}
trace_ivshmem_flat_irq_handler(vector_id);
/*
* Toggle device's output line, which is connected to interrupt controller,
* generating an interrupt request to the CPU.
*/
qemu_irq_pulse(s->irq);
}
static IvshmemPeer *ivshmem_flat_find_peer(IvshmemFTState *s, uint16_t peer_id)
{
IvshmemPeer *peer;
/* Own ID */
if (s->own.id == peer_id) {
return &s->own;
}
/* Peer ID */
QTAILQ_FOREACH(peer, &s->peer, next) {
if (peer->id == peer_id) {
return peer;
}
}
return NULL;
}
static IvshmemPeer *ivshmem_flat_add_peer(IvshmemFTState *s, uint16_t peer_id)
{
IvshmemPeer *new_peer;
new_peer = g_malloc0(sizeof(*new_peer));
new_peer->id = peer_id;
new_peer->vector_counter = 0;
QTAILQ_INSERT_TAIL(&s->peer, new_peer, next);
trace_ivshmem_flat_new_peer(peer_id);
return new_peer;
}
static void ivshmem_flat_remove_peer(IvshmemFTState *s, uint16_t peer_id)
{
IvshmemPeer *peer;
peer = ivshmem_flat_find_peer(s, peer_id);
assert(peer);
QTAILQ_REMOVE(&s->peer, peer, next);
for (int n = 0; n < peer->vector_counter; n++) {
int efd;
efd = event_notifier_get_fd(&(peer->vector[n].event_notifier));
close(efd);
}
g_free(peer);
}
static void ivshmem_flat_add_vector(IvshmemFTState *s, IvshmemPeer *peer,
int vector_fd)
{
if (peer->vector_counter >= IVSHMEM_MAX_VECTOR_NUM) {
trace_ivshmem_flat_add_vector_failure(peer->vector_counter,
vector_fd, peer->id);
close(vector_fd);
return;
}
trace_ivshmem_flat_add_vector_success(peer->vector_counter,
vector_fd, peer->id);
/*
* Set vector ID and its associated eventfd notifier and add them to the
* peer.
*/
peer->vector[peer->vector_counter].id = peer->vector_counter;
g_unix_set_fd_nonblocking(vector_fd, true, NULL);
event_notifier_init_fd(&peer->vector[peer->vector_counter].event_notifier,
vector_fd);
/*
* If it's the device's own ID, register also the handler for the eventfd
* so the device can be notified by the other peers.
*/
if (peer == &s->own) {
qemu_set_fd_handler(vector_fd, ivshmem_flat_irq_handler, NULL,
&peer->vector);
}
peer->vector_counter++;
}
static void ivshmem_flat_process_msg(IvshmemFTState *s, uint64_t msg, int fd)
{
uint16_t peer_id;
IvshmemPeer *peer;
peer_id = msg & 0xFFFF;
peer = ivshmem_flat_find_peer(s, peer_id);
if (!peer) {
peer = ivshmem_flat_add_peer(s, peer_id);
}
if (fd >= 0) {
ivshmem_flat_add_vector(s, peer, fd);
} else { /* fd == -1, which is received when peers disconnect. */
ivshmem_flat_remove_peer(s, peer_id);
}
}
static int ivshmem_flat_can_receive_data(void *opaque)
{
IvshmemFTState *s = opaque;
assert(s->msg_buffered_bytes < sizeof(s->msg_buf));
return sizeof(s->msg_buf) - s->msg_buffered_bytes;
}
static void ivshmem_flat_read_msg(void *opaque, const uint8_t *buf, int size)
{
IvshmemFTState *s = opaque;
int fd;
int64_t msg;
assert(size >= 0 && s->msg_buffered_bytes + size <= sizeof(s->msg_buf));
memcpy((unsigned char *)&s->msg_buf + s->msg_buffered_bytes, buf, size);
s->msg_buffered_bytes += size;
if (s->msg_buffered_bytes < sizeof(s->msg_buf)) {
return;
}
msg = le64_to_cpu(s->msg_buf);
s->msg_buffered_bytes = 0;
fd = qemu_chr_fe_get_msgfd(&s->server_chr);
ivshmem_flat_process_msg(s, msg, fd);
}
static uint64_t ivshmem_flat_iomem_read(void *opaque,
hwaddr offset, unsigned size)
{
IvshmemFTState *s = opaque;
uint32_t ret;
trace_ivshmem_flat_read_mmr(offset);
switch (offset) {
case INTMASK:
ret = 0; /* Ignore read since all bits are reserved in rev 1. */
break;
case INTSTATUS:
ret = 0; /* Ignore read since all bits are reserved in rev 1. */
break;
case IVPOSITION:
ret = s->own.id;
break;
case DOORBELL:
trace_ivshmem_flat_read_mmr_doorbell(); /* DOORBELL is write-only */
ret = 0;
break;
default:
/* Should never reach here: the iomem map range is exact */
trace_ivshmem_flat_read_write_mmr_invalid(offset);
ret = 0;
}
return ret;
}
static int ivshmem_flat_interrupt_peer(IvshmemFTState *s,
uint16_t peer_id, uint16_t vector_id)
{
IvshmemPeer *peer;
peer = ivshmem_flat_find_peer(s, peer_id);
if (!peer) {
trace_ivshmem_flat_interrupt_invalid_peer(peer_id);
return 1;
}
event_notifier_set(&(peer->vector[vector_id].event_notifier));
return 0;
}
static void ivshmem_flat_iomem_write(void *opaque, hwaddr offset,
uint64_t value, unsigned size)
{
IvshmemFTState *s = opaque;
uint16_t peer_id = (value >> 16) & 0xFFFF;
uint16_t vector_id = value & 0xFFFF;
trace_ivshmem_flat_write_mmr(offset);
switch (offset) {
case INTMASK:
break;
case INTSTATUS:
break;
case IVPOSITION:
break;
case DOORBELL:
trace_ivshmem_flat_interrupt_peer(peer_id, vector_id);
ivshmem_flat_interrupt_peer(s, peer_id, vector_id);
break;
default:
/* Should never reach here: the iomem map range is exact. */
trace_ivshmem_flat_read_write_mmr_invalid(offset);
break;
}
return;
}
static const MemoryRegionOps ivshmem_flat_ops = {
.read = ivshmem_flat_iomem_read,
.write = ivshmem_flat_iomem_write,
.endianness = DEVICE_LITTLE_ENDIAN,
.impl = { /* Read/write aligned at 32 bits. */
.min_access_size = 4,
.max_access_size = 4,
},
};
static void ivshmem_flat_instance_init(Object *obj)
{
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
IvshmemFTState *s = IVSHMEM_FLAT(obj);
/*
* Init mem region for 4 MMRs (ivshmem_registers),
* 32 bits each => 16 bytes (0x10).
*/
memory_region_init_io(&s->iomem, obj, &ivshmem_flat_ops, s,
"ivshmem-mmio", 0x10);
sysbus_init_mmio(sbd, &s->iomem);
/*
* Create one output IRQ that will be connected to the
* machine's interrupt controller.
*/
sysbus_init_irq(sbd, &s->irq);
QTAILQ_INIT(&s->peer);
}
static bool ivshmem_flat_connect_server(DeviceState *dev, Error **errp)
{
IvshmemFTState *s = IVSHMEM_FLAT(dev);
SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
int64_t protocol_version, msg;
int shmem_fd;
uint16_t peer_id;
struct stat fdstat;
/* Check ivshmem server connection. */
if (!qemu_chr_fe_backend_connected(&s->server_chr)) {
error_setg(errp, "ivshmem server socket not specified or incorrect."
" Can't create device.");
return false;
}
/*
* Message sequence from server on new connection:
* _____________________________________
* |STEP| uint64_t msg | int fd |
* -------------------------------------
*
* 0 PROTOCOL -1 \
* 1 OWN PEER ID -1 |-- Header/Greeting
* 2 -1 shmem fd /
*
* 3 PEER IDx Other peer's Vector 0 eventfd
* 4 PEER IDx Other peer's Vector 1 eventfd
* . .
* . .
* . .
* N PEER IDy Other peer's Vector 0 eventfd
* N+1 PEER IDy Other peer's Vector 1 eventfd
* . .
* . .
* . .
*
* ivshmem_flat_recv_msg() calls return 'msg' and 'fd'.
*
* See docs/specs/ivshmem-spec.rst for details on the protocol.
*/
/* Step 0 */
protocol_version = ivshmem_flat_recv_msg(s, NULL);
/* Step 1 */
msg = ivshmem_flat_recv_msg(s, NULL);
peer_id = 0xFFFF & msg;
s->own.id = peer_id;
s->own.vector_counter = 0;
trace_ivshmem_flat_proto_ver_own_id(protocol_version, s->own.id);
/* Step 2 */
msg = ivshmem_flat_recv_msg(s, &shmem_fd);
/* Map shmem fd and MMRs into memory regions. */
if (msg != -1 || shmem_fd < 0) {
error_setg(errp, "Could not receive valid shmem fd."
" Can't create device!");
return false;
}
if (fstat(shmem_fd, &fdstat) != 0) {
error_setg(errp, "Could not determine shmem fd size."
" Can't create device!");
return false;
}
trace_ivshmem_flat_shmem_size(shmem_fd, fdstat.st_size);
/*
* Shmem size provided by the ivshmem server must be equal to
* device's shmem size.
*/
if (fdstat.st_size != s->shmem_size) {
error_setg(errp, "Can't map shmem fd: shmem size different"
" from device size!");
return false;
}
/*
* Beyond step 2, ivshmem_flat_process_msg, called by the ivshmem_flat_read_msg
* handler -- when data is available on the server socket -- will handle
* the additional messages that will be generated by the server as peers
* connect or disconnect.
*/
qemu_chr_fe_set_handlers(&s->server_chr, ivshmem_flat_can_receive_data,
ivshmem_flat_read_msg, NULL, NULL, s, NULL, true);
memory_region_init_ram_from_fd(&s->shmem, OBJECT(s),
"ivshmem-shmem", s->shmem_size,
RAM_SHARED, shmem_fd, 0, NULL);
sysbus_init_mmio(sbd, &s->shmem);
return true;
}
static void ivshmem_flat_realize(DeviceState *dev, Error **errp)
{
if (!ivshmem_flat_connect_server(dev, errp)) {
return;
}
}
static const Property ivshmem_flat_props[] = {
DEFINE_PROP_CHR("chardev", IvshmemFTState, server_chr),
DEFINE_PROP_UINT32("shmem-size", IvshmemFTState, shmem_size, 4 * MiB),
};
static void ivshmem_flat_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
dc->hotpluggable = true;
dc->realize = ivshmem_flat_realize;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
device_class_set_props(dc, ivshmem_flat_props);
/* Reason: Must be wired up in code (sysbus MRs and IRQ) */
dc->user_creatable = false;
}
static const TypeInfo ivshmem_flat_types[] = {
{
.name = TYPE_IVSHMEM_FLAT,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(IvshmemFTState),
.instance_init = ivshmem_flat_instance_init,
.class_init = ivshmem_flat_class_init,
},
};
DEFINE_TYPES(ivshmem_flat_types)


@ -37,7 +37,9 @@ system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c'))
subdir('macio')
system_ss.add(when: 'CONFIG_IVSHMEM_DEVICE', if_true: files('ivshmem.c'))
# ivshmem devices
system_ss.add(when: 'CONFIG_IVSHMEM_DEVICE', if_true: files('ivshmem-pci.c'))
system_ss.add(when: 'CONFIG_IVSHMEM_FLAT_DEVICE', if_true: files('ivshmem-flat.c'))
system_ss.add(when: 'CONFIG_ALLWINNER_SRAMC', if_true: files('allwinner-sramc.c'))
system_ss.add(when: 'CONFIG_ALLWINNER_A10_CCM', if_true: files('allwinner-a10-ccm.c'))


@ -368,3 +368,19 @@ aspeed_sli_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx
aspeed_sliio_write(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
aspeed_sliio_read(uint64_t offset, unsigned int size, uint32_t data) "To 0x%" PRIx64 " of size %u: 0x%" PRIx32
# ivshmem-flat.c
ivshmem_flat_irq_handler(uint16_t vector_id) "Caught interrupt request: vector %u"
ivshmem_flat_new_peer(uint16_t peer_id) "New peer ID: %u"
ivshmem_flat_add_vector_failure(uint16_t vector_id, uint32_t vector_fd, uint16_t peer_id) "Failed to add vector %u (fd = %u) to peer ID %u, maximum number of vectors reached"
ivshmem_flat_add_vector_success(uint16_t vector_id, uint32_t vector_fd, uint16_t peer_id) "Successful addition of vector %u (fd = %u) to peer ID %u"
ivshmem_flat_irq_resolved(const char *irq_qompath) "IRQ QOM path '%s' correctly resolved"
ivshmem_flat_proto_ver_own_id(uint64_t proto_ver, uint16_t peer_id) "Protocol Version = 0x%"PRIx64", Own Peer ID = %u"
ivshmem_flat_shmem_size(int fd, uint64_t size) "Shmem fd (%d) total size is %"PRIu64" byte(s)"
ivshmem_flat_shmem_map(uint64_t addr) "Mapping shmem @ 0x%"PRIx64
ivshmem_flat_mmio_map(uint64_t addr) "Mapping MMRs @ 0x%"PRIx64
ivshmem_flat_read_mmr(uint64_t addr_offset) "Read access at offset %"PRIu64
ivshmem_flat_read_mmr_doorbell(void) "DOORBELL register is write-only!"
ivshmem_flat_read_write_mmr_invalid(uint64_t addr_offset) "No ivshmem register mapped at offset %"PRIu64
ivshmem_flat_interrupt_invalid_peer(uint16_t peer_id) "Can't interrupt non-existing peer %u"
ivshmem_flat_write_mmr(uint64_t addr_offset) "Write access at offset %"PRIu64
ivshmem_flat_interrupt_peer(uint16_t peer_id, uint16_t vector_id) "Interrupting peer ID %u, vector %u..."


@ -18,17 +18,17 @@
#include "migration/vmstate.h"
#include "hw/misc/vmcoreinfo.h"
static void fw_cfg_vmci_write(void *dev, off_t offset, size_t len)
static void fw_cfg_vmci_write(void *opaque, off_t offset, size_t len)
{
VMCoreInfoState *s = VMCOREINFO(dev);
VMCoreInfoState *s = opaque;
s->has_vmcoreinfo = offset == 0 && len == sizeof(s->vmcoreinfo)
&& s->vmcoreinfo.guest_format != FW_CFG_VMCOREINFO_FORMAT_NONE;
}
static void vmcoreinfo_reset(void *dev)
static void vmcoreinfo_reset(void *opaque)
{
VMCoreInfoState *s = VMCOREINFO(dev);
VMCoreInfoState *s = opaque;
s->has_vmcoreinfo = false;
memset(&s->vmcoreinfo, 0, sizeof(s->vmcoreinfo));
@ -65,7 +65,7 @@ static void vmcoreinfo_realize(DeviceState *dev, Error **errp)
* This device requires to register a global reset because it is
* not plugged to a bus (which, as its QOM parent, would reset it).
*/
qemu_register_reset(vmcoreinfo_reset, dev);
qemu_register_reset(vmcoreinfo_reset, s);
vmcoreinfo_state = s;
}
@ -93,16 +93,13 @@ static void vmcoreinfo_device_class_init(ObjectClass *klass, void *data)
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
}
static const TypeInfo vmcoreinfo_device_info = {
.name = VMCOREINFO_DEVICE,
.parent = TYPE_DEVICE,
.instance_size = sizeof(VMCoreInfoState),
.class_init = vmcoreinfo_device_class_init,
static const TypeInfo vmcoreinfo_types[] = {
{
.name = VMCOREINFO_DEVICE,
.parent = TYPE_DEVICE,
.instance_size = sizeof(VMCoreInfoState),
.class_init = vmcoreinfo_device_class_init,
}
};
static void vmcoreinfo_register_types(void)
{
type_register_static(&vmcoreinfo_device_info);
}
type_init(vmcoreinfo_register_types)
DEFINE_TYPES(vmcoreinfo_types)


@ -513,3 +513,7 @@ xen_netdev_connect(int dev, unsigned int tx, unsigned int rx, int port) "vif%u t
xen_netdev_frontend_changed(const char *dev, int state) "vif%s state %d"
xen_netdev_tx(int dev, int ref, int off, int len, unsigned int flags, const char *c, const char *d, const char *m, const char *e) "vif%u ref %u off %u len %u flags 0x%x%s%s%s%s"
xen_netdev_rx(int dev, int idx, int status, int flags) "vif%u idx %d status %d flags 0x%x"
# xilinx_ethlite.c
ethlite_pkt_lost(uint32_t rx_ctrl) "rx_ctrl:0x%" PRIx32
ethlite_pkt_size_too_big(uint64_t size) "size:0x%" PRIx64


@ -3,6 +3,9 @@
*
* Copyright (c) 2009 Edgar E. Iglesias.
*
* DS580: https://docs.amd.com/v/u/en-US/xps_ethernetlite
* LogiCORE IP XPS Ethernet Lite Media Access Controller
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
@ -30,9 +33,10 @@
#include "hw/irq.h"
#include "hw/qdev-properties.h"
#include "net/net.h"
#include "trace.h"
#define D(x)
#define R_TX_BUF0 0
#define BUFSZ_MAX 0x07e4
#define R_TX_LEN0 (0x07f4 / 4)
#define R_TX_GIE0 (0x07f8 / 4)
#define R_TX_CTRL0 (0x07fc / 4)
@ -53,10 +57,9 @@
#define CTRL_S 0x1
#define TYPE_XILINX_ETHLITE "xlnx.xps-ethernetlite"
DECLARE_INSTANCE_CHECKER(struct xlx_ethlite, XILINX_ETHLITE,
TYPE_XILINX_ETHLITE)
OBJECT_DECLARE_SIMPLE_TYPE(XlnxXpsEthLite, XILINX_ETHLITE)
struct xlx_ethlite
struct XlnxXpsEthLite
{
SysBusDevice parent_obj;
@ -67,13 +70,12 @@ struct xlx_ethlite
uint32_t c_tx_pingpong;
uint32_t c_rx_pingpong;
unsigned int txbuf;
unsigned int rxbuf;
unsigned int port_index; /* dual port RAM index */
uint32_t regs[R_MAX];
};
static inline void eth_pulse_irq(struct xlx_ethlite *s)
static inline void eth_pulse_irq(XlnxXpsEthLite *s)
{
/* Only the first gie reg is active. */
if (s->regs[R_TX_GIE0] & GIE_GIE) {
@ -84,7 +86,7 @@ static inline void eth_pulse_irq(struct xlx_ethlite *s)
static uint64_t
eth_read(void *opaque, hwaddr addr, unsigned int size)
{
struct xlx_ethlite *s = opaque;
XlnxXpsEthLite *s = opaque;
uint32_t r = 0;
addr >>= 2;
@ -99,7 +101,6 @@ eth_read(void *opaque, hwaddr addr, unsigned int size)
case R_RX_CTRL1:
case R_RX_CTRL0:
r = s->regs[addr];
D(qemu_log("%s " HWADDR_FMT_plx "=%x\n", __func__, addr * 4, r));
break;
default:
@ -113,7 +114,7 @@ static void
eth_write(void *opaque, hwaddr addr,
uint64_t val64, unsigned int size)
{
struct xlx_ethlite *s = opaque;
XlnxXpsEthLite *s = opaque;
unsigned int base = 0;
uint32_t value = val64;
@ -125,13 +126,10 @@ eth_write(void *opaque, hwaddr addr,
if (addr == R_TX_CTRL1)
base = 0x800 / 4;
D(qemu_log("%s addr=" HWADDR_FMT_plx " val=%x\n",
__func__, addr * 4, value));
if ((value & (CTRL_P | CTRL_S)) == CTRL_S) {
qemu_send_packet(qemu_get_queue(s->nic),
(void *) &s->regs[base],
s->regs[base + R_TX_LEN0]);
D(qemu_log("eth_tx %d\n", s->regs[base + R_TX_LEN0]));
if (s->regs[base + R_TX_CTRL0] & CTRL_I)
eth_pulse_irq(s);
} else if ((value & (CTRL_P | CTRL_S)) == (CTRL_P | CTRL_S)) {
@ -155,8 +153,6 @@ eth_write(void *opaque, hwaddr addr,
case R_TX_LEN0:
case R_TX_LEN1:
case R_TX_GIE0:
D(qemu_log("%s addr=" HWADDR_FMT_plx " val=%x\n",
__func__, addr * 4, value));
s->regs[addr] = value;
break;
@ -178,29 +174,28 @@ static const MemoryRegionOps eth_ops = {
static bool eth_can_rx(NetClientState *nc)
{
struct xlx_ethlite *s = qemu_get_nic_opaque(nc);
unsigned int rxbase = s->rxbuf * (0x800 / 4);
XlnxXpsEthLite *s = qemu_get_nic_opaque(nc);
unsigned int rxbase = s->port_index * (0x800 / 4);
return !(s->regs[rxbase + R_RX_CTRL0] & CTRL_S);
}
static ssize_t eth_rx(NetClientState *nc, const uint8_t *buf, size_t size)
{
struct xlx_ethlite *s = qemu_get_nic_opaque(nc);
unsigned int rxbase = s->rxbuf * (0x800 / 4);
XlnxXpsEthLite *s = qemu_get_nic_opaque(nc);
unsigned int rxbase = s->port_index * (0x800 / 4);
/* DA filter. */
if (!(buf[0] & 0x80) && memcmp(&s->conf.macaddr.a[0], buf, 6))
return size;
if (s->regs[rxbase + R_RX_CTRL0] & CTRL_S) {
D(qemu_log("ethlite lost packet %x\n", s->regs[R_RX_CTRL0]));
trace_ethlite_pkt_lost(s->regs[R_RX_CTRL0]);
return -1;
}
D(qemu_log("%s %zd rxbase=%x\n", __func__, size, rxbase));
if (size > (R_MAX - R_RX_BUF0 - rxbase) * 4) {
D(qemu_log("ethlite packet is too big, size=%x\n", size));
if (size >= BUFSZ_MAX) {
trace_ethlite_pkt_size_too_big(size);
return -1;
}
memcpy(&s->regs[rxbase + R_RX_BUF0], buf, size);
@ -211,15 +206,15 @@ static ssize_t eth_rx(NetClientState *nc, const uint8_t *buf, size_t size)
}
/* If c_rx_pingpong was set flip buffers. */
s->rxbuf ^= s->c_rx_pingpong;
s->port_index ^= s->c_rx_pingpong;
return size;
}
static void xilinx_ethlite_reset(DeviceState *dev)
{
struct xlx_ethlite *s = XILINX_ETHLITE(dev);
XlnxXpsEthLite *s = XILINX_ETHLITE(dev);
s->rxbuf = 0;
s->port_index = 0;
}
static NetClientInfo net_xilinx_ethlite_info = {
@ -231,7 +226,7 @@ static NetClientInfo net_xilinx_ethlite_info = {
static void xilinx_ethlite_realize(DeviceState *dev, Error **errp)
{
struct xlx_ethlite *s = XILINX_ETHLITE(dev);
XlnxXpsEthLite *s = XILINX_ETHLITE(dev);
qemu_macaddr_default_if_unset(&s->conf.macaddr);
s->nic = qemu_new_nic(&net_xilinx_ethlite_info, &s->conf,
@ -242,7 +237,7 @@ static void xilinx_ethlite_realize(DeviceState *dev, Error **errp)
static void xilinx_ethlite_init(Object *obj)
{
struct xlx_ethlite *s = XILINX_ETHLITE(obj);
XlnxXpsEthLite *s = XILINX_ETHLITE(obj);
sysbus_init_irq(SYS_BUS_DEVICE(obj), &s->irq);
@ -252,9 +247,9 @@ static void xilinx_ethlite_init(Object *obj)
}
static const Property xilinx_ethlite_properties[] = {
DEFINE_PROP_UINT32("tx-ping-pong", struct xlx_ethlite, c_tx_pingpong, 1),
DEFINE_PROP_UINT32("rx-ping-pong", struct xlx_ethlite, c_rx_pingpong, 1),
DEFINE_NIC_PROPERTIES(struct xlx_ethlite, conf),
DEFINE_PROP_UINT32("tx-ping-pong", XlnxXpsEthLite, c_tx_pingpong, 1),
DEFINE_PROP_UINT32("rx-ping-pong", XlnxXpsEthLite, c_rx_pingpong, 1),
DEFINE_NIC_PROPERTIES(XlnxXpsEthLite, conf),
};
static void xilinx_ethlite_class_init(ObjectClass *klass, void *data)
@ -266,17 +261,14 @@ static void xilinx_ethlite_class_init(ObjectClass *klass, void *data)
device_class_set_props(dc, xilinx_ethlite_properties);
}
static const TypeInfo xilinx_ethlite_info = {
.name = TYPE_XILINX_ETHLITE,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(struct xlx_ethlite),
.instance_init = xilinx_ethlite_init,
.class_init = xilinx_ethlite_class_init,
static const TypeInfo xilinx_ethlite_types[] = {
{
.name = TYPE_XILINX_ETHLITE,
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(XlnxXpsEthLite),
.instance_init = xilinx_ethlite_init,
.class_init = xilinx_ethlite_class_init,
},
};
static void xilinx_ethlite_register_types(void)
{
type_register_static(&xilinx_ethlite_info);
}
type_init(xilinx_ethlite_register_types)
DEFINE_TYPES(xilinx_ethlite_types)


@ -729,7 +729,6 @@ static void *fw_cfg_modify_bytes_read(FWCfgState *s, uint16_t key,
ptr = s->entries[arch][key].data;
s->entries[arch][key].data = data;
s->entries[arch][key].len = len;
s->entries[arch][key].callback_opaque = NULL;
s->entries[arch][key].allow_write = false;
return ptr;


@ -266,7 +266,7 @@ static void openrisc_sim_serial_init(Or1ksimState *state, hwaddr base,
}
serial_mm_init(get_system_memory(), base, 0, serial_irq, 115200,
serial_hd(uart_idx),
DEVICE_NATIVE_ENDIAN);
DEVICE_BIG_ENDIAN);
/* Add device tree node for serial. */
nodename = g_strdup_printf("/serial@%" HWADDR_PRIx, base);


@ -236,7 +236,7 @@ static void openrisc_virt_serial_init(OR1KVirtState *state, hwaddr base,
qemu_irq serial_irq = get_per_cpu_irq(cpus, num_cpus, irq_pin);
serial_mm_init(get_system_memory(), base, 0, serial_irq, 115200,
serial_hd(0), DEVICE_NATIVE_ENDIAN);
serial_hd(0), DEVICE_BIG_ENDIAN);
/* Add device tree node for serial. */
nodename = g_strdup_printf("/serial@%" HWADDR_PRIx, base);
@ -318,7 +318,7 @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
{
int pin, dev;
uint32_t irq_map_stride = 0;
uint32_t full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS * 6] = {};
uint32_t full_irq_map[PCI_NUM_PINS * PCI_NUM_PINS * 6] = {};
uint32_t *irq_map = full_irq_map;
/*
@ -330,11 +330,11 @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
* possible slot) seeing the interrupt-map-mask will allow the table
* to wrap to any number of devices.
*/
for (dev = 0; dev < GPEX_NUM_IRQS; dev++) {
for (dev = 0; dev < PCI_NUM_PINS; dev++) {
int devfn = dev << 3;
for (pin = 0; pin < GPEX_NUM_IRQS; pin++) {
int irq_nr = irq_base + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
for (pin = 0; pin < PCI_NUM_PINS; pin++) {
int irq_nr = irq_base + ((pin + PCI_SLOT(devfn)) % PCI_NUM_PINS);
int i = 0;
/* Fill PCI address cells */
@ -357,7 +357,7 @@ static void create_pcie_irq_map(void *fdt, char *nodename, int irq_base,
}
qemu_fdt_setprop(fdt, nodename, "interrupt-map", full_irq_map,
GPEX_NUM_IRQS * GPEX_NUM_IRQS *
PCI_NUM_PINS * PCI_NUM_PINS *
irq_map_stride * sizeof(uint32_t));
qemu_fdt_setprop_cells(fdt, nodename, "interrupt-map-mask",
@ -409,7 +409,7 @@ static void openrisc_virt_pcie_init(OR1KVirtState *state,
memory_region_add_subregion(get_system_memory(), pio_base, alias);
/* Connect IRQ lines. */
for (i = 0; i < GPEX_NUM_IRQS; i++) {
for (i = 0; i < PCI_NUM_PINS; i++) {
pcie_irq = get_per_cpu_irq(cpus, num_cpus, irq_base + i);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, pcie_irq);


@ -32,6 +32,7 @@
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/irq.h"
#include "hw/pci/pci_bus.h"
#include "hw/pci-host/gpex.h"
#include "hw/qdev-properties.h"
#include "migration/vmstate.h"
@ -41,20 +42,25 @@
* GPEX host
*/
struct GPEXIrq {
qemu_irq irq;
int irq_num;
};
static void gpex_set_irq(void *opaque, int irq_num, int level)
{
GPEXHost *s = opaque;
qemu_set_irq(s->irq[irq_num], level);
qemu_set_irq(s->irq[irq_num].irq, level);
}
int gpex_set_irq_num(GPEXHost *s, int index, int gsi)
{
if (index >= GPEX_NUM_IRQS) {
if (index >= s->num_irqs) {
return -EINVAL;
}
s->irq_num[index] = gsi;
s->irq[index].irq_num = gsi;
return 0;
}
@ -62,7 +68,7 @@ static PCIINTxRoute gpex_route_intx_pin_to_irq(void *opaque, int pin)
{
PCIINTxRoute route;
GPEXHost *s = opaque;
int gsi = s->irq_num[pin];
int gsi = s->irq[pin].irq_num;
route.irq = gsi;
if (gsi < 0) {
@ -74,6 +80,13 @@ static PCIINTxRoute gpex_route_intx_pin_to_irq(void *opaque, int pin)
return route;
}
static int gpex_swizzle_map_irq_fn(PCIDevice *pci_dev, int pin)
{
PCIBus *bus = pci_device_root_bus(pci_dev);
return (PCI_SLOT(pci_dev->devfn) + pin) % bus->nirq;
}
static void gpex_host_realize(DeviceState *dev, Error **errp)
{
PCIHostState *pci = PCI_HOST_BRIDGE(dev);
@ -82,6 +95,8 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
PCIExpressHost *pex = PCIE_HOST_BRIDGE(dev);
int i;
s->irq = g_malloc0_n(s->num_irqs, sizeof(*s->irq));
pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
sysbus_init_mmio(sbd, &pex->mmio);
@ -128,19 +143,27 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
sysbus_init_mmio(sbd, &s->io_ioport);
}
for (i = 0; i < GPEX_NUM_IRQS; i++) {
sysbus_init_irq(sbd, &s->irq[i]);
s->irq_num[i] = -1;
for (i = 0; i < s->num_irqs; i++) {
sysbus_init_irq(sbd, &s->irq[i].irq);
s->irq[i].irq_num = -1;
}
pci->bus = pci_register_root_bus(dev, "pcie.0", gpex_set_irq,
pci_swizzle_map_irq_fn, s, &s->io_mmio,
&s->io_ioport, 0, 4, TYPE_PCIE_BUS);
gpex_swizzle_map_irq_fn,
s, &s->io_mmio, &s->io_ioport, 0,
s->num_irqs, TYPE_PCIE_BUS);
pci_bus_set_route_irq_fn(pci->bus, gpex_route_intx_pin_to_irq);
qdev_realize(DEVICE(&s->gpex_root), BUS(pci->bus), &error_fatal);
}
static void gpex_host_unrealize(DeviceState *dev)
{
GPEXHost *s = GPEX_HOST(dev);
g_free(s->irq);
}
static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
PCIBus *rootbus)
{
@ -166,6 +189,7 @@ static const Property gpex_host_properties[] = {
gpex_cfg.mmio64.base, 0),
DEFINE_PROP_SIZE(PCI_HOST_ABOVE_4G_MMIO_SIZE, GPEXHost,
gpex_cfg.mmio64.size, 0),
DEFINE_PROP_UINT8("num-irqs", GPEXHost, num_irqs, PCI_NUM_PINS),
};
static void gpex_host_class_init(ObjectClass *klass, void *data)
@ -175,6 +199,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
hc->root_bus_path = gpex_host_root_bus_path;
dc->realize = gpex_host_realize;
dc->unrealize = gpex_host_unrealize;
set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
dc->fw_name = "pci";
device_class_set_props(dc, gpex_host_properties);


@ -179,7 +179,7 @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
{
int pin, dev;
uint32_t irq_map_stride = 0;
uint32_t full_irq_map[GPEX_NUM_IRQS * GPEX_NUM_IRQS *
uint32_t full_irq_map[PCI_NUM_PINS * PCI_NUM_PINS *
FDT_MAX_INT_MAP_WIDTH] = {};
uint32_t *irq_map = full_irq_map;
@ -191,11 +191,11 @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
* possible slot) seeing the interrupt-map-mask will allow the table
* to wrap to any number of devices.
*/
for (dev = 0; dev < GPEX_NUM_IRQS; dev++) {
for (dev = 0; dev < PCI_NUM_PINS; dev++) {
int devfn = dev * 0x8;
for (pin = 0; pin < GPEX_NUM_IRQS; pin++) {
int irq_nr = PCIE_IRQ + ((pin + PCI_SLOT(devfn)) % GPEX_NUM_IRQS);
for (pin = 0; pin < PCI_NUM_PINS; pin++) {
int irq_nr = PCIE_IRQ + ((pin + PCI_SLOT(devfn)) % PCI_NUM_PINS);
int i = 0;
/* Fill PCI address cells */
@ -221,7 +221,7 @@ static void create_pcie_irq_map(RISCVVirtState *s, void *fdt, char *nodename,
}
qemu_fdt_setprop(fdt, nodename, "interrupt-map", full_irq_map,
GPEX_NUM_IRQS * GPEX_NUM_IRQS *
PCI_NUM_PINS * PCI_NUM_PINS *
irq_map_stride * sizeof(uint32_t));
qemu_fdt_setprop_cells(fdt, nodename, "interrupt-map-mask",
@ -1246,7 +1246,7 @@ static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, pio_base);
for (i = 0; i < GPEX_NUM_IRQS; i++) {
for (i = 0; i < PCI_NUM_PINS; i++) {
irq = qdev_get_gpio_in(irqchip, PCIE_IRQ + i);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);


@ -238,7 +238,7 @@ static void iommu_mem_write(void *opaque, hwaddr addr,
static const MemoryRegionOps iommu_mem_ops = {
.read = iommu_mem_read,
.write = iommu_mem_write,
.endianness = DEVICE_NATIVE_ENDIAN,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
.min_access_size = 4,
.max_access_size = 4,


@ -254,7 +254,7 @@ static void power_mem_write(void *opaque, hwaddr addr,
static const MemoryRegionOps power_mem_ops = {
.read = power_mem_read,
.write = power_mem_write,
.endianness = DEVICE_NATIVE_ENDIAN,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
.min_access_size = 4,
.max_access_size = 4,


@ -47,7 +47,7 @@ static const MemoryRegionOps tricore_testdevice_ops = {
.min_access_size = 4,
.max_access_size = 4,
},
.endianness = DEVICE_NATIVE_ENDIAN,
.endianness = DEVICE_LITTLE_ENDIAN,
};
static void tricore_testdevice_init(Object *obj)


@ -220,8 +220,9 @@ static void uhci_async_cancel(UHCIAsync *async)
uhci_async_unlink(async);
trace_usb_uhci_packet_cancel(async->queue->token, async->td_addr,
async->done);
if (!async->done)
if (!async->done) {
usb_cancel_packet(&async->packet);
}
uhci_async_free(async);
}
@ -322,7 +323,7 @@ static void uhci_reset(DeviceState *dev)
s->fl_base_addr = 0;
s->sof_timing = 64;
for(i = 0; i < UHCI_PORTS; i++) {
for (i = 0; i < UHCI_PORTS; i++) {
port = &s->ports[i];
port->ctrl = 0x0080;
if (port->port.dev && port->port.dev->attached) {
@ -387,8 +388,8 @@ static void uhci_port_write(void *opaque, hwaddr addr,
trace_usb_uhci_mmio_writew(addr, val);
switch(addr) {
case 0x00:
switch (addr) {
case UHCI_USBCMD:
if ((val & UHCI_CMD_RS) && !(s->cmd & UHCI_CMD_RS)) {
/* start frame processing */
trace_usb_uhci_schedule_start();
@ -404,7 +405,7 @@ static void uhci_port_write(void *opaque, hwaddr addr,
int i;
/* send reset on the USB bus */
for(i = 0; i < UHCI_PORTS; i++) {
for (i = 0; i < UHCI_PORTS; i++) {
port = &s->ports[i];
usb_device_reset(port->port.dev);
}
@ -423,34 +424,38 @@ static void uhci_port_write(void *opaque, hwaddr addr,
}
}
break;
case 0x02:
case UHCI_USBSTS:
s->status &= ~val;
/* XXX: the chip spec is not coherent, so we add a hidden
register to distinguish between IOC and SPD */
if (val & UHCI_STS_USBINT)
/*
* XXX: the chip spec is not coherent, so we add a hidden
* register to distinguish between IOC and SPD
*/
if (val & UHCI_STS_USBINT) {
s->status2 = 0;
}
uhci_update_irq(s);
break;
case 0x04:
case UHCI_USBINTR:
s->intr = val;
uhci_update_irq(s);
break;
case 0x06:
if (s->status & UHCI_STS_HCHALTED)
case UHCI_USBFRNUM:
if (s->status & UHCI_STS_HCHALTED) {
s->frnum = val & 0x7ff;
}
break;
case 0x08:
case UHCI_USBFLBASEADD:
s->fl_base_addr &= 0xffff0000;
s->fl_base_addr |= val & ~0xfff;
break;
case 0x0a:
case UHCI_USBFLBASEADD + 2:
s->fl_base_addr &= 0x0000ffff;
s->fl_base_addr |= (val << 16);
break;
case 0x0c:
case UHCI_USBSOF:
s->sof_timing = val & 0xff;
break;
case 0x10 ... 0x1f:
case UHCI_USBPORTSC1 ... UHCI_USBPORTSC4:
{
UHCIPort *port;
USBDevice *dev;
@ -464,8 +469,8 @@ static void uhci_port_write(void *opaque, hwaddr addr,
dev = port->port.dev;
if (dev && dev->attached) {
/* port reset */
if ( (val & UHCI_PORT_RESET) &&
!(port->ctrl & UHCI_PORT_RESET) ) {
if ((val & UHCI_PORT_RESET) &&
!(port->ctrl & UHCI_PORT_RESET)) {
usb_device_reset(dev);
}
}
@ -487,29 +492,29 @@ static uint64_t uhci_port_read(void *opaque, hwaddr addr, unsigned size)
UHCIState *s = opaque;
uint32_t val;
switch(addr) {
case 0x00:
switch (addr) {
case UHCI_USBCMD:
val = s->cmd;
break;
case 0x02:
case UHCI_USBSTS:
val = s->status;
break;
case 0x04:
case UHCI_USBINTR:
val = s->intr;
break;
case 0x06:
case UHCI_USBFRNUM:
val = s->frnum;
break;
case 0x08:
case UHCI_USBFLBASEADD:
val = s->fl_base_addr & 0xffff;
break;
case 0x0a:
case UHCI_USBFLBASEADD + 2:
val = (s->fl_base_addr >> 16) & 0xffff;
break;
case 0x0c:
case UHCI_USBSOF:
val = s->sof_timing;
break;
case 0x10 ... 0x1f:
case UHCI_USBPORTSC1 ... UHCI_USBPORTSC4:
{
UHCIPort *port;
int n;
@ -533,12 +538,13 @@ static uint64_t uhci_port_read(void *opaque, hwaddr addr, unsigned size)
}
/* signal resume if controller suspended */
static void uhci_resume (void *opaque)
static void uhci_resume(void *opaque)
{
UHCIState *s = (UHCIState *)opaque;
if (!s)
if (!s) {
return;
}
if (s->cmd & UHCI_CMD_EGSM) {
s->cmd |= UHCI_CMD_FGR;
@ -674,7 +680,8 @@ static int uhci_handle_td_error(UHCIState *s, UHCI_TD *td, uint32_t td_addr,
return ret;
}
static int uhci_complete_td(UHCIState *s, UHCI_TD *td, UHCIAsync *async, uint32_t *int_mask)
static int uhci_complete_td(UHCIState *s, UHCI_TD *td, UHCIAsync *async,
uint32_t *int_mask)
{
int len = 0, max_len;
uint8_t pid;
@ -682,8 +689,9 @@ static int uhci_complete_td(UHCIState *s, UHCI_TD *td, UHCIAsync *async, uint32_
max_len = ((td->token >> 21) + 1) & 0x7ff;
pid = td->token & 0xff;
if (td->ctrl & TD_CTRL_IOS)
if (td->ctrl & TD_CTRL_IOS) {
td->ctrl &= ~TD_CTRL_ACTIVE;
}
if (async->packet.status != USB_RET_SUCCESS) {
return uhci_handle_td_error(s, td, async->td_addr,
@ -693,12 +701,15 @@ static int uhci_complete_td(UHCIState *s, UHCI_TD *td, UHCIAsync *async, uint32_
len = async->packet.actual_length;
td->ctrl = (td->ctrl & ~0x7ff) | ((len - 1) & 0x7ff);
/* The NAK bit may have been set by a previous frame, so clear it
here. The docs are somewhat unclear, but win2k relies on this
behavior. */
/*
* The NAK bit may have been set by a previous frame, so clear it
* here. The docs are somewhat unclear, but win2k relies on this
* behavior.
*/
td->ctrl &= ~(TD_CTRL_ACTIVE | TD_CTRL_NAK);
if (td->ctrl & TD_CTRL_IOC)
if (td->ctrl & TD_CTRL_IOC) {
*int_mask |= 0x01;
}
if (pid == USB_TOKEN_IN) {
pci_dma_write(&s->dev, td->buffer, async->buf, len);
@@ -780,9 +791,11 @@ static int uhci_handle_td(UHCIState *s, UHCIQueue *q, uint32_t qh_addr,
if (async) {
if (queuing) {
/* we are busy filling the queue, we are not prepared
to consume completed packages then, just leave them
in async state */
/*
* we are busy filling the queue, we are not prepared
* to consume completed packages then, just leave them
* in async state
*/
return TD_RESULT_ASYNC_CONT;
}
if (!async->done) {
@@ -832,7 +845,7 @@ static int uhci_handle_td(UHCIState *s, UHCIQueue *q, uint32_t qh_addr,
}
usb_packet_addbuf(&async->packet, async->buf, max_len);
switch(pid) {
switch (pid) {
case USB_TOKEN_OUT:
case USB_TOKEN_SETUP:
pci_dma_read(&s->dev, td->buffer, async->buf, max_len);
@@ -911,12 +924,15 @@ static void qhdb_reset(QhDb *db)
static int qhdb_insert(QhDb *db, uint32_t addr)
{
int i;
for (i = 0; i < db->count; i++)
if (db->addr[i] == addr)
for (i = 0; i < db->count; i++) {
if (db->addr[i] == addr) {
return 1;
}
}
if (db->count >= UHCI_MAX_QUEUES)
if (db->count >= UHCI_MAX_QUEUES) {
return 1;
}
db->addr[db->count++] = addr;
return 0;
@@ -970,8 +986,10 @@ static void uhci_process_frame(UHCIState *s)
for (cnt = FRAME_MAX_LOOPS; is_valid(link) && cnt; cnt--) {
if (!s->completions_only && s->frame_bytes >= s->frame_bandwidth) {
/* We've reached the usb 1.1 bandwidth, which is
1280 bytes/frame, stop processing */
/*
* We've reached the usb 1.1 bandwidth, which is
* 1280 bytes/frame, stop processing
*/
trace_usb_uhci_frame_stop_bandwidth();
break;
}
@@ -1120,8 +1138,10 @@ static void uhci_frame_timer(void *opaque)
uhci_async_validate_begin(s);
uhci_process_frame(s);
uhci_async_validate_end(s);
/* The spec says frnum is the frame currently being processed, and
* the guest must look at frnum - 1 on interrupt, so inc frnum now */
/*
* The spec says frnum is the frame currently being processed, and
* the guest must look at frnum - 1 on interrupt, so inc frnum now
*/
s->frnum = (s->frnum + 1) & 0x7ff;
s->expire_time += frame_t;
}
@@ -1174,7 +1194,7 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
if (s->masterbus) {
USBPort *ports[UHCI_PORTS];
for(i = 0; i < UHCI_PORTS; i++) {
for (i = 0; i < UHCI_PORTS; i++) {
ports[i] = &s->ports[i].port;
}
usb_register_companion(s->masterbus, ports, UHCI_PORTS,
@@ -1200,8 +1220,10 @@ void usb_uhci_common_realize(PCIDevice *dev, Error **errp)
memory_region_init_io(&s->io_bar, OBJECT(s), &uhci_ioport_ops, s,
"uhci", 0x20);
/* Use region 4 for consistency with real hardware. BSD guests seem
to rely on this. */
/*
* Use region 4 for consistency with real hardware. BSD guests seem
* to rely on this.
*/
pci_register_bar(&s->dev, 4, PCI_BASE_ADDRESS_SPACE_IO, &s->io_bar);
}


@@ -37,8 +37,6 @@ struct XHCINecState {
};
static const Property nec_xhci_properties[] = {
DEFINE_PROP_ON_OFF_AUTO("msi", XHCIPciState, msi, ON_OFF_AUTO_AUTO),
DEFINE_PROP_ON_OFF_AUTO("msix", XHCIPciState, msix, ON_OFF_AUTO_AUTO),
DEFINE_PROP_UINT32("intrs", XHCINecState, intrs, XHCI_MAXINTRS),
DEFINE_PROP_UINT32("slots", XHCINecState, slots, XHCI_MAXSLOTS),
};


@@ -197,6 +197,11 @@ static void xhci_instance_init(Object *obj)
qdev_alias_all_properties(DEVICE(&s->xhci), obj);
}
static const Property xhci_pci_properties[] = {
DEFINE_PROP_ON_OFF_AUTO("msi", XHCIPciState, msi, ON_OFF_AUTO_AUTO),
DEFINE_PROP_ON_OFF_AUTO("msix", XHCIPciState, msix, ON_OFF_AUTO_AUTO),
};
static void xhci_class_init(ObjectClass *klass, void *data)
{
PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
@@ -208,6 +213,7 @@ static void xhci_class_init(ObjectClass *klass, void *data)
k->realize = usb_xhci_pci_realize;
k->exit = usb_xhci_pci_exit;
k->class_id = PCI_CLASS_SERIAL_USB;
device_class_set_props(dc, xhci_pci_properties);
}
static const TypeInfo xhci_pci_info = {


@@ -2810,9 +2810,15 @@ static uint64_t xhci_port_read(void *ptr, hwaddr reg, unsigned size)
case 0x08: /* PORTLI */
ret = 0;
break;
case 0x0c: /* reserved */
case 0x0c: /* PORTHLPMC */
ret = 0;
qemu_log_mask(LOG_UNIMP, "%s: read from port register PORTHLPMC",
__func__);
break;
default:
trace_usb_xhci_unimplemented("port read", reg);
qemu_log_mask(LOG_GUEST_ERROR,
"%s: read from port offset 0x%" HWADDR_PRIx,
__func__, reg);
ret = 0;
}
@@ -2881,9 +2887,22 @@ static void xhci_port_write(void *ptr, hwaddr reg,
}
break;
case 0x04: /* PORTPMSC */
case 0x0c: /* PORTHLPMC */
qemu_log_mask(LOG_UNIMP,
"%s: write 0x%" PRIx64
" (%u bytes) to port register at offset 0x%" HWADDR_PRIx,
__func__, val, size, reg);
break;
case 0x08: /* PORTLI */
qemu_log_mask(LOG_GUEST_ERROR, "%s: Write to read-only PORTLI register",
__func__);
break;
default:
trace_usb_xhci_unimplemented("port write", reg);
qemu_log_mask(LOG_GUEST_ERROR,
"%s: write 0x%" PRIx64 " (%u bytes) to unknown port "
"register at offset 0x%" HWADDR_PRIx,
__func__, val, size, reg);
break;
}
}


@@ -169,7 +169,7 @@ static inline void xenpvh_gpex_init(XenPVHMachineState *s,
*/
assert(xpc->set_pci_intx_irq);
for (i = 0; i < GPEX_NUM_IRQS; i++) {
for (i = 0; i < PCI_NUM_PINS; i++) {
qemu_irq irq = qemu_allocate_irq(xpc->set_pci_intx_irq, s, i);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, irq);


@@ -93,7 +93,7 @@ static void create_pcie(MachineState *ms, CPUXtensaState *env, int irq_base,
/* Connect IRQ lines. */
extints = xtensa_get_extints(env);
for (i = 0; i < GPEX_NUM_IRQS; i++) {
for (i = 0; i < PCI_NUM_PINS; i++) {
void *q = extints[irq_base + i];
sysbus_connect_irq(SYS_BUS_DEVICE(dev), i, q);


@@ -0,0 +1,85 @@
/*
* Inter-VM Shared Memory Flat Device
*
* SPDX-FileCopyrightText: 2023 Linaro Ltd.
* SPDX-FileContributor: Gustavo Romero <gustavo.romero@linaro.org>
* SPDX-License-Identifier: GPL-2.0-or-later
*
*/
#ifndef IVSHMEM_FLAT_H
#define IVSHMEM_FLAT_H
#include "qemu/queue.h"
#include "qemu/event_notifier.h"
#include "chardev/char-fe.h"
#include "exec/memory.h"
#include "qom/object.h"
#include "hw/sysbus.h"
#define IVSHMEM_MAX_VECTOR_NUM 64
/*
* QEMU interface:
* + QOM property "chardev" is the character device id of the ivshmem server
* socket
* + QOM property "shmem-size" sets the size of the RAM region shared between
* the device and the ivshmem server
* + sysbus MMIO region 0: device I/O mapped registers
* + sysbus MMIO region 1: shared memory with ivshmem server
* + sysbus IRQ 0: single output interrupt
*/
#define TYPE_IVSHMEM_FLAT "ivshmem-flat"
typedef struct IvshmemFTState IvshmemFTState;
DECLARE_INSTANCE_CHECKER(IvshmemFTState, IVSHMEM_FLAT, TYPE_IVSHMEM_FLAT)
/* Ivshmem registers. See ./docs/specs/ivshmem-spec.txt for details. */
enum ivshmem_registers {
INTMASK = 0,
INTSTATUS = 4,
IVPOSITION = 8,
DOORBELL = 12,
};
typedef struct VectorInfo {
EventNotifier event_notifier;
uint16_t id;
} VectorInfo;
typedef struct IvshmemPeer {
QTAILQ_ENTRY(IvshmemPeer) next;
VectorInfo vector[IVSHMEM_MAX_VECTOR_NUM];
int vector_counter;
uint16_t id;
} IvshmemPeer;
struct IvshmemFTState {
SysBusDevice parent_obj;
uint64_t msg_buf;
int msg_buffered_bytes;
QTAILQ_HEAD(, IvshmemPeer) peer;
IvshmemPeer own;
CharBackend server_chr;
/* IRQ */
qemu_irq irq;
/* I/O registers */
MemoryRegion iomem;
uint32_t intmask;
uint32_t intstatus;
uint32_t ivposition;
uint32_t doorbell;
/* Shared memory */
MemoryRegion shmem;
int shmem_fd;
uint32_t shmem_size;
};
#endif /* IVSHMEM_FLAT_H */


@@ -32,8 +32,6 @@ OBJECT_DECLARE_SIMPLE_TYPE(GPEXHost, GPEX_HOST)
#define TYPE_GPEX_ROOT_DEVICE "gpex-root"
OBJECT_DECLARE_SIMPLE_TYPE(GPEXRootState, GPEX_ROOT_DEVICE)
#define GPEX_NUM_IRQS 4
struct GPEXRootState {
/*< private >*/
PCIDevice parent_obj;
@@ -49,6 +47,7 @@ struct GPEXConfig {
PCIBus *bus;
};
typedef struct GPEXIrq GPEXIrq;
struct GPEXHost {
/*< private >*/
PCIExpressHost parent_obj;
@@ -60,8 +59,8 @@ struct GPEXHost {
MemoryRegion io_mmio;
MemoryRegion io_ioport_window;
MemoryRegion io_mmio_window;
qemu_irq irq[GPEX_NUM_IRQS];
int irq_num[GPEX_NUM_IRQS];
GPEXIrq *irq;
uint8_t num_irqs;
bool allow_unmapped_accesses;


@@ -1,6 +1,17 @@
#ifndef HW_USB_UHCI_REGS_H
#define HW_USB_UHCI_REGS_H
#define UHCI_USBCMD 0
#define UHCI_USBSTS 2
#define UHCI_USBINTR 4
#define UHCI_USBFRNUM 6
#define UHCI_USBFLBASEADD 8
#define UHCI_USBSOF 0x0c
#define UHCI_USBPORTSC1 0x10
#define UHCI_USBPORTSC2 0x12
#define UHCI_USBPORTSC3 0x14
#define UHCI_USBPORTSC4 0x16
#define UHCI_CMD_FGR (1 << 4)
#define UHCI_CMD_EGSM (1 << 3)
#define UHCI_CMD_GRESET (1 << 2)


@@ -5,7 +5,19 @@
#ifndef QEMU_MAIN_H
#define QEMU_MAIN_H
int qemu_default_main(void);
/*
* The function to run on the main (initial) thread of the process.
* NULL means QEMU's main event loop.
* When non-NULL, QEMU's main event loop will run on a purposely created
* thread, after which the provided function pointer will be invoked on
* the initial thread.
* This is useful on platforms which treat the main thread as special
* (macOS/Darwin) and/or require all UI API calls to occur from the main
* thread. Those platforms can initialise it to a specific function,
* while UI implementations may reset it to NULL during their init if they
* will handle system and UI events on the main thread via QEMU's own main
* event loop.
*/
extern int (*qemu_main)(void);
#endif /* QEMU_MAIN_H */


@@ -817,6 +817,8 @@ socket = []
version_res = []
coref = []
iokit = []
pvg = not_found
metal = []
emulator_link_args = []
midl = not_found
widl = not_found
@@ -838,6 +840,8 @@ elif host_os == 'darwin'
coref = dependency('appleframeworks', modules: 'CoreFoundation')
iokit = dependency('appleframeworks', modules: 'IOKit', required: false)
host_dsosuf = '.dylib'
pvg = dependency('appleframeworks', modules: 'ParavirtualizedGraphics')
metal = dependency('appleframeworks', modules: 'Metal')
elif host_os == 'sunos'
socket = [cc.find_library('socket'),
cc.find_library('nsl'),


@@ -18,6 +18,7 @@
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "system/runstate.h"
#include "net/eth.h"
#include <vmnet/vmnet.h>
#include <dispatch/dispatch.h>
@@ -147,10 +148,26 @@ static int vmnet_read_packets(VmnetState *s)
*/
static void vmnet_write_packets_to_qemu(VmnetState *s)
{
uint8_t *pkt;
size_t pktsz;
uint8_t min_pkt[ETH_ZLEN];
size_t min_pktsz;
ssize_t size;
while (s->packets_send_current_pos < s->packets_send_end_pos) {
ssize_t size = qemu_send_packet_async(&s->nc,
s->iov_buf[s->packets_send_current_pos].iov_base,
s->packets_buf[s->packets_send_current_pos].vm_pkt_size,
pkt = s->iov_buf[s->packets_send_current_pos].iov_base;
pktsz = s->packets_buf[s->packets_send_current_pos].vm_pkt_size;
if (net_peer_needs_padding(&s->nc)) {
min_pktsz = sizeof(min_pkt);
if (eth_pad_short_frame(min_pkt, &min_pktsz, pkt, pktsz)) {
pkt = min_pkt;
pktsz = min_pktsz;
}
}
size = qemu_send_packet_async(&s->nc, pkt, pktsz,
vmnet_send_completed);
if (size == 0) {


@@ -24,26 +24,56 @@
#include "qemu/osdep.h"
#include "qemu-main.h"
#include "qemu/main-loop.h"
#include "system/system.h"
#ifdef CONFIG_SDL
/*
* SDL insists on wrapping the main() function with its own implementation on
* some platforms; it does so via a macro that renames our main function, so
* <SDL.h> must be #included here even with no SDL code called from this file.
*/
#include <SDL.h>
#endif
int qemu_default_main(void)
#ifdef CONFIG_DARWIN
#include <CoreFoundation/CoreFoundation.h>
#endif
static void *qemu_default_main(void *opaque)
{
int status;
bql_lock();
status = qemu_main_loop();
qemu_cleanup(status);
bql_unlock();
return status;
exit(status);
}
int (*qemu_main)(void) = qemu_default_main;
int (*qemu_main)(void);
#ifdef CONFIG_DARWIN
static int os_darwin_cfrunloop_main(void)
{
CFRunLoopRun();
g_assert_not_reached();
}
int (*qemu_main)(void) = os_darwin_cfrunloop_main;
#endif
int main(int argc, char **argv)
{
qemu_init(argc, argv);
return qemu_main();
bql_unlock();
if (qemu_main) {
QemuThread main_loop_thread;
qemu_thread_create(&main_loop_thread, "qemu_main",
qemu_default_main, NULL, QEMU_THREAD_DETACHED);
return qemu_main();
} else {
qemu_default_main(NULL);
g_assert_not_reached();
}
}


@@ -41,6 +41,7 @@ static FuzzTargetList *fuzz_target_list;
static FuzzTarget *fuzz_target;
static QTestState *fuzz_qts;
int (*qemu_main)(void);
void flush_events(QTestState *s)


@@ -73,6 +73,8 @@ typedef struct {
int height;
} QEMUScreen;
@class QemuCocoaPasteboardTypeOwner;
static void cocoa_update(DisplayChangeListener *dcl,
int x, int y, int w, int h);
@@ -107,6 +109,7 @@ static bool allow_events;
static NSInteger cbchangecount = -1;
static QemuClipboardInfo *cbinfo;
static QemuEvent cbevent;
static QemuCocoaPasteboardTypeOwner *cbowner;
// Utility functions to run specified code block with the BQL held
typedef void (^CodeBlock)(void);
@@ -1326,8 +1329,10 @@ static CGEventRef handleTapEvent(CGEventTapProxy proxy, CGEventType type, CGEven
{
COCOA_DEBUG("QemuCocoaAppController: dealloc\n");
if (cocoaView)
[cocoaView release];
[cocoaView release];
[cbowner release];
cbowner = nil;
[super dealloc];
}
@@ -1943,8 +1948,6 @@ static Notifier mouse_mode_change_notifier = {
@end
static QemuCocoaPasteboardTypeOwner *cbowner;
static void cocoa_clipboard_notify(Notifier *notifier, void *data);
static void cocoa_clipboard_request(QemuClipboardInfo *info,
QemuClipboardType type);
@@ -2007,43 +2010,8 @@ static void cocoa_clipboard_request(QemuClipboardInfo *info,
}
}
/*
* The startup process for the OSX/Cocoa UI is complicated, because
* OSX insists that the UI runs on the initial main thread, and so we
* need to start a second thread which runs the qemu_default_main():
* in main():
* in cocoa_display_init():
* assign cocoa_main to qemu_main
* create application, menus, etc
* in cocoa_main():
* create qemu-main thread
* enter OSX run loop
*/
static void *call_qemu_main(void *opaque)
{
int status;
COCOA_DEBUG("Second thread: calling qemu_default_main()\n");
bql_lock();
status = qemu_default_main();
bql_unlock();
COCOA_DEBUG("Second thread: qemu_default_main() returned, exiting\n");
[cbowner release];
exit(status);
}
static int cocoa_main(void)
{
QemuThread thread;
COCOA_DEBUG("Entered %s()\n", __func__);
bql_unlock();
qemu_thread_create(&thread, "qemu_main", call_qemu_main,
NULL, QEMU_THREAD_DETACHED);
// Start the main event loop
COCOA_DEBUG("Main thread: entering OSX run loop\n");
[NSApp run];
COCOA_DEBUG("Main thread: left OSX run loop, which should never happen\n");
@@ -2125,8 +2093,6 @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)
COCOA_DEBUG("qemu_cocoa: cocoa_display_init\n");
qemu_main = cocoa_main;
// Pull this console process up to being a fully-fledged graphical
// app with a menubar and Dock icon
ProcessSerialNumber psn = { 0, kCurrentProcess };
@@ -2190,6 +2156,12 @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)
qemu_clipboard_peer_register(&cbpeer);
[pool release];
/*
* The Cocoa UI will run the NSApplication runloop on the main thread
* rather than the default Core Foundation one.
*/
qemu_main = cocoa_main;
}
static QemuDisplay qemu_display_cocoa = {


@@ -38,6 +38,7 @@
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "qemu-main.h"
#include "ui/console.h"
#include "ui/gtk.h"
@@ -2485,6 +2486,9 @@ static void gtk_display_init(DisplayState *ds, DisplayOptions *opts)
#ifdef CONFIG_GTK_CLIPBOARD
gd_clipboard_init(s);
#endif /* CONFIG_GTK_CLIPBOARD */
/* GTK's event polling must happen on the main thread. */
qemu_main = NULL;
}
static void early_gtk_display_init(DisplayOptions *opts)


@@ -34,6 +34,7 @@
#include "system/system.h"
#include "ui/win32-kbd-hook.h"
#include "qemu/log.h"
#include "qemu-main.h"
static int sdl2_num_outputs;
static struct sdl2_console *sdl2_console;
@@ -965,6 +966,9 @@ static void sdl2_display_init(DisplayState *ds, DisplayOptions *o)
}
atexit(sdl_cleanup);
/* SDL's event polling (in dpy_refresh) must happen on the main thread. */
qemu_main = NULL;
}
static QemuDisplay qemu_display_sdl2 = {